dsp's People

Contributors: gaolii

dsp's Issues

How to run your method on a different setup?

Hi,

Thanks for your contribution and for publishing your code. I was wondering whether it is possible to run your method on a different setup, for example Cityscapes ===> BDD.

I have tried to run this setup with some minor modifications, but your method seems to depend on the class-wise id2json files, and I do not know how to generate those files for the Cityscapes ===> BDD setup.

Could you please help me out on this?
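For what it's worth, such class-wise index files are typically just a mapping from class id to the images whose labels contain that class. A minimal sketch of how one might be generated for BDD follows; the file loading, the 19-class trainId encoding, and the exact JSON schema the repository expects are all assumptions here:

```python
from collections import defaultdict

import numpy as np

def build_class_index(labels, num_classes=19):
    """Map each train id to the sample names whose label map contains it.

    `labels` is an iterable of (name, HxW integer array) pairs; loading
    the actual BDD label PNGs into arrays is left out."""
    index = defaultdict(list)
    for name, label in labels:
        for c in np.unique(label):
            if 0 <= c < num_classes:
                index[int(c)].append(name)
    return dict(index)

# Two tiny synthetic label maps as a stand-in for real BDD labels:
labels = [
    ("a.png", np.array([[0, 0], [5, 5]])),
    ("b.png", np.array([[5, 12], [12, 12]])),
]
index = build_class_index(labels)
# json.dump(index, f) would then write the class-wise file, assuming
# the training code accepts this id -> file-list schema.
```

The exact schema should be checked against the GTA5/SYNTHIA id2json files shipped with the repository before using such an index.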

Long tail class mixing

Hello, first of all thanks for the contribution and for sharing the code. I have been using this implementation for a while, and when using the "rand_mixer" class to mix the long tail classes into the source/target inputs, I notice that the long tail classes tend not to show up in the mixed result. To illustrate the issue, I have added screenshots of the plots for each variable.

Considering the code, I believe the long tail class mixing happens at line 325 in train.py:

   data, target = transformsgpu.oneMix(MixMask, data=mixdata, target=mixtarget)
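For context, a minimal sketch of what a `oneMix`-style function does (a mask-selected copy-paste between two samples) is shown below; this is not the repository's exact `transformsgpu.oneMix` implementation, and the (2, C, H, W) / (2, H, W) shapes are assumptions:

```python
import torch

def one_mix_sketch(mask, data, target):
    """Paste the mask==1 region of sample 0 onto sample 1.

    Simplified stand-in for a DACS/DSP-style oneMix."""
    mixed_data = mask.unsqueeze(0) * data[0] + (1 - mask.unsqueeze(0)) * data[1]
    mixed_target = mask * target[0] + (1 - mask) * target[1]
    return mixed_data.unsqueeze(0), mixed_target.unsqueeze(0)

data = torch.zeros(2, 3, 4, 4)
data[0] = 1.0                          # "source" sample is all ones
mask = torch.zeros(4, 4)
mask[:2, :2] = 1                       # top-left patch comes from sample 0
target = torch.stack([torch.full((4, 4), 7.0), torch.zeros(4, 4)])
mixed_data, mixed_target = one_mix_sketch(mask, data, target)
```

If the mask is empty or misaligned with the long-tail object, the mixed output is simply the second sample, which would match the behavior described in this issue.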

I added plotting of the data and targets before and after this function to see the long tail mixing in action, using the following code:

      plot_input(MixMask, 'mask_lt', 'Mask over the long tail class image', save=True)
      plot_input(img, 'image_lt', "Image where the long tail class is in", save=True)
      plot_input(lbl, 'label_lt', 'Label of the longtail class image', save=True)

      data, target = transformsgpu.oneMix(MixMask, data=mixdata, target=mixtarget)

      plot_input(mixdata[1], 'image_lt', 'Image where the long tail class comes in', save=True)
      plot_input(data[0], 'image_lt', 'Mixed image', save=True)

      plot_input(mixtarget[1], 'label_lt', 'Label where the long tail class comes in', save=True)
      plot_input(target[0], 'label_lt', 'Mixed label', save=True)

The plot_input function used above is shown here:

    import datetime
    import os

    import matplotlib.pyplot as plt
    import numpy as np
    import torch

    def plot_input(input, input_type, name=None, save=False):
        # IMG_MEAN and cm_eval are defined elsewhere in train.py
        with torch.no_grad():
            if input_type == 'image_lt':  # image while in the long tail function
                input = input.cpu().detach().numpy()
                input = np.transpose(input, (1, 2, 0))
                # undo mean subtraction and convert BGR -> RGB for display
                image = input + IMG_MEAN
                image = image[:, :, ::-1]
                plt.imshow(image.astype(int))

            elif input_type == 'mask_lt':  # mask while in the long tail function
                input = input.cpu().detach().numpy()
                plt.imshow(input, cmap='binary')

            elif input_type == 'label_lt':  # label while in the long tail function
                input = input.cpu().detach().numpy()
                color_mask = cm_eval(input)
                plt.imshow(color_mask)

            else:
                print("Incorrect input for creating an image")

            if name is not None:
                plt.title(name)

            if save:
                start_writeable = datetime.datetime.now().strftime('%m-%d_%H-%M')
                plt.savefig(os.path.join("./saved/images", start_writeable + input_type + name),
                            bbox_inches='tight')
            plt.show()

When I plot these, the following figures are displayed:

  • The mask over the long tail image: (figure 12-14_10-56, mask_lt)

  • The image with the long tail class object in it: (figure 12-14_10-56, image_lt)

  • The label with the long tail class object in it: (figure 12-14_10-56, label_lt)

  • The image that the long tail class object comes into: (figure 12-14_10-57, image_lt)

  • The label that the long tail class object comes into: (figure 12-14_10-57, label_lt)

  • The mixed image: (figure 12-14_10-57, image_lt)

  • The mixed label: (figure 12-14_10-57, label_lt)

As you can see, the long tail class is not mixed into the image. This happens every time; in many cases the long tail class object is very small and therefore hard to see in the image, but the example above is a clear one.

Contrary to the above, on two occasions part of the long tail class image did overwrite the image and label. However, this happened only in a narrow vertical strip, and only when the long tail class object was at an (x, 0) position. I managed to save one of those examples, shown below:

  • The mask over the long tail image: (figure 12-14_10-52, mask_lt)

  • The image with the long tail class object in it: (figure 12-14_10-52, image_lt)

  • The label with the long tail class object in it: (figure 12-14_10-52, label_lt)

  • The image that the long tail class object comes into: (figure 12-14_10-52, image_lt)

  • The label that the long tail class object comes into: (figure 12-14_10-52, label_lt)

  • The mixed image: (figure 12-14_10-52, image_lt)

  • The mixed label: (figure 12-14_10-52, label_lt)

Could someone explain this behavior to me and, if possible, suggest a solution?
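One hedged way to narrow this down (the helper below is hypothetical, not repository code) is to measure how much of the long-tail class actually falls under MixMask after all cropping and resizing; a coverage near zero would mean the pasted region misses the object entirely:

```python
import torch

def mask_covers_class(mix_mask, label, class_id):
    """Fraction of `class_id` pixels in `label` that fall inside the
    mix mask; ~0 means the pasted region misses the long-tail object
    entirely (e.g. because crops were misaligned)."""
    cls = (label == class_id)
    if cls.sum() == 0:
        return 0.0
    return float(((mix_mask > 0) & cls).sum()) / float(cls.sum())

label = torch.zeros(4, 4, dtype=torch.long)
label[1:3, 1:3] = 17                   # hypothetical long-tail class id
mask = torch.zeros(4, 4)
mask[0:2, 0:2] = 1                     # paste region only partly overlaps
coverage = mask_covers_class(mask, label, 17)
```

Logging this value for each long-tail paste during training would show whether the mask and the object are systematically misaligned.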

Can a 3090 GPU run your code?

RuntimeError: CUDA out of memory. Tried to allocate 18.00 MiB (GPU 0; 23.70 GiB total capacity; 1.38 GiB already allocated; 7.00 MiB free; 1.42 GiB reserved in total by PyTorch)

I have two 3090 GPUs, but I still get CUDA out of memory.
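A 3090 should have enough memory in principle; if it still runs out, two generic PyTorch techniques can cut usage. This is a sketch under the assumption of a standard training loop, not code taken from this repository's train.py:

```python
import torch

accum_steps = 2  # halve per-step memory while keeping the effective batch

def accumulate(losses, optimizer):
    """Back-propagate several small-batch losses, stepping the
    optimizer once per `accum_steps` of them (gradient accumulation)."""
    optimizer.zero_grad()
    for i, loss in enumerate(losses):
        (loss / accum_steps).backward()
        if (i + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()

# Tiny runnable illustration with a single scalar parameter:
param = torch.nn.Parameter(torch.tensor(1.0))
opt = torch.optim.SGD([param], lr=0.1)
accumulate([param * 2, param * 2], opt)  # param: 1.0 -> 1.0 - 0.1 * 2

# Automatic mixed precision (PyTorch >= 1.6) also reduces activation memory:
# scaler = torch.cuda.amp.GradScaler()
# with torch.cuda.amp.autocast():
#     loss = model(images)
```

Reducing the crop size or the batch size in the config are the other obvious levers.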

Error when running training with batch size = 4

If I try to double the current batch size of 2 to 4, I get the following error:

    torch.Size([2, 19, 512, 512]) torch.Size([4, 512, 512])
    Traceback (most recent call last):
      File "train.py", line 819, in <module>
        main()
      File "train.py", line 660, in main
        L_l2 = loss_calc(logits_t_s, labels) * (1-lam) + lam * loss_calc(logits_t_s, targets_t)
      File "train.py", line 112, in loss_calc
        return criterion(pred, label)
      File "~/miniconda3/envs/dsp_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "~/projects/DSP/utils/loss.py", line 30, in forward
        predict = predict[target_mask.view(n, h, w, 1).repeat(1, 1, 1, c)].view(-1, c)
I printed the shapes, and it seems that logits_t_s still has a batch size of 2. Is there a particular reason for this?
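Until the underlying cause is found, a small hypothetical guard (not repository code) makes the mismatch fail early with a readable message instead of the indexing error above:

```python
import torch

def check_batch_sizes(logits, labels):
    """Fail fast when predictions and labels disagree on batch size."""
    if logits.shape[0] != labels.shape[0]:
        raise ValueError(
            f"batch mismatch: logits {tuple(logits.shape)} vs "
            f"labels {tuple(labels.shape)}")

# Reproduces the mismatch from the traceback at a tiny resolution:
logits = torch.zeros(2, 19, 8, 8)
labels = torch.zeros(4, 8, 8)
try:
    check_batch_sizes(logits, labels)
    raised = False
except ValueError:
    raised = True
```

Placing such a check right after each tensor is produced would localize which loader or mixing step is still hard-coded to a batch size of 2.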

Some questions

Thanks for sharing your code, it's really awesome work! I have two questions:

  1. Is there a warm-up stage for the adversarial training?
  2. I notice that it is only used in DSP; considering that the process is done in a soft manner, why is it not also used as a weight when computing the loss?

Training time, and the DACS training curve reported in your paper

Hi,
I tried to reproduce your code (GTA5 -> Cityscapes) and found that it trains very slowly, nearly 4x slower than DACS. I wonder why. Also, may I ask the total training time it took you to reproduce the reported result (mIoU = 55)?
Another issue is that the DACS training curve in your paper does not match my experiment, shown below. I wonder how you obtained the result in your paper?

My experiment on DACS:
(screenshot: Screen Shot 2021-08-15 at 11 54 49 PM)

Yours:
(screenshot: Screen Shot 2021-08-15 at 11 56 18 PM)

The mistake in Table 2 of the paper

Thanks for sharing the code, it's really awesome work!

But there seems to be a mistake in "Table 2: Results of different domain adaptation methods for the SYNTHIA → Cityscapes task".

According to the per-class IoU, the mIoU* over the 13 classes should be 59.9, not 63.8; could you check this?
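For reference, mIoU* is just the unweighted mean of the 13 per-class IoU values, so the check is a one-liner (the numbers below are hypothetical, not the paper's):

```python
def mean_iou(per_class_iou):
    """Unweighted mean of per-class IoU values (in percent), as used
    for the mIoU / mIoU* columns in segmentation result tables."""
    return sum(per_class_iou) / len(per_class_iou)

# Hypothetical 13-class IoU list; the actual Table 2 numbers are not
# reproduced here.
ious = [90.0, 50.0, 70.0, 40.0, 60.0, 55.0, 45.0, 65.0, 35.0, 75.0,
        80.0, 30.0, 25.0]
miou = mean_iou(ious)
```

Plugging the 13 SYNTHIA → Cityscapes per-class numbers from Table 2 into this would confirm whether 59.9 or 63.8 is correct.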
