
lcfcn's Introduction

ServiceNow completed its acquisition of Element AI on January 8, 2021. All references to Element AI in the materials that are part of this project should refer to ServiceNow.

LCFCN - ECCV 2018 (Try in a Colab)

Where are the Blobs: Counting by Localization with Point Supervision

[Paper][Video]

Make the segmentation model learn to count and localize objects by adding a single line of code. Instead of applying the cross-entropy loss on dense per-pixel labels, apply the lcfcn loss on point-level annotations.

Usage

pip install git+https://github.com/ElementAI/LCFCN
from lcfcn import lcfcn_loss

# compute a CxHxW logits mask using any segmentation model
logits = seg_model.forward(images)

# compute the loss given 'points' as an HxW mask (one labeled pixel per object)
loss = lcfcn_loss.compute_loss(points=points, probs=logits.sigmoid())

loss.backward()
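
For context, the HxW points mask assigns each object a single labeled pixel. A minimal sketch of how such a mask could be built from per-object coordinates (the make_points_mask helper and the {"x", "y", "cls"} annotation format are illustrative, not part of the lcfcn package):

import numpy as np
import torch

def make_points_mask(coords, height, width):
    # one labeled pixel per object, zero everywhere else
    points = np.zeros((height, width), dtype="int64")
    for p in coords:  # e.g. [{"x": 84, "y": 120, "cls": 1}, ...]
        points[int(p["y"]), int(p["x"])] = p["cls"]
    return torch.from_numpy(points)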

Predicted Object Locations

Experiments

1. Install dependencies

pip install -r requirements.txt

This command installs pydicom and the Haven library, which helps manage the experiments.

2. Download Datasets

3. Train and Validate

python trainval.py -e trancos -d <datadir> -sb <savedir_base> -r 1
  • <datadir> is where the dataset is located.
  • <savedir_base> is where the experiment weights and results will be saved.
  • -e trancos specifies the trancos training hyper-parameters defined in exp_configs.py.

4. View Results

4.1 Launch Jupyter from the terminal

> jupyter nbextension enable --py widgetsnbextension --sys-prefix
> jupyter notebook

4.2 Run the following from a Jupyter cell

from haven import haven_jupyter as hj
from haven import haven_results as hr

try:
    %load_ext google.colab.data_table
except:
    pass

# path to where the experiments were saved
savedir_base = '<savedir_base>'

# filter exps
filterby_list = None
# get experiments
rm = hr.ResultManager(savedir_base=savedir_base, 
                      filterby_list=filterby_list, 
                      verbose=0)
# dashboard variables
title_list = ['dataset', 'model']
y_metrics = ['val_mae']

# launch dashboard
hj.get_dashboard(rm, vars(), wide_display=True)

This cell launches a dashboard summarizing the saved results.

Citation

If you find the code useful for your research, please cite:

@inproceedings{laradji2018blobs,
  title={Where are the blobs: Counting by localization with point supervision},
  author={Laradji, Issam H and Rostamzadeh, Negar and Pinheiro, Pedro O and Vazquez, David and Schmidt, Mark},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={547--562},
  year={2018}
}

lcfcn's People

Contributors

issamlaradji, mlaradji, servicenowresearch


lcfcn's Issues

Inference problem

After I train the network with multiple classes (11), the inference results are confusing.
In applyOnImage.py, the first problem is in this line:
imsave(save_path, ut.combine_image_blobs(image_raw, pred_blobs))
ValueError: Invalid shape for image array: (10, w, h, c); counts[None] was: [[7. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]

So I added pred_blobs_max = np.argmax(pred_blobs, axis=0) and changed the call to
imsave(save_path, ut.combine_image_blobs(image_raw, pred_blobs_max))
and it works.

However, a second problem comes up: the visualized predictions are very confusing. The counts seem correct, but the predicted pixel locations are not on the objects.

The dataset follows the same format as pascal.py.

Thank you very much!
@IssamLaradji

How to plot the loss during training?

Dear Sir,
First, I want to know how to plot the loss curve during training. Second, if I trained the model for 50 epochs, is it possible to extract the loss and plot it against the epoch or iteration? Thank you very much.
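
One repo-agnostic way to do this is to record the per-epoch loss yourself during training and plot it afterwards; a minimal sketch (loss_history and its values are hypothetical):

import matplotlib.pyplot as plt

# loss_history: average training loss per epoch, recorded in the training loop
loss_history = [0.91, 0.52, 0.33, 0.21]
plt.plot(range(1, len(loss_history) + 1), loss_history)
plt.xlabel("epoch")
plt.ylabel("training loss")
plt.show()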

How to create the files in the images folder?

Hello,
How can I create the image-x-xxxxx.txt, image-x-xxxxxdots.png, and image-x-xxxxxmask.mat files?
How can I create these files for my own dataset?
Are these files necessary to run the code?
Thanks

Batch-aware loss function?

As far as my understanding goes, the lcfcn loss works with one sample per batch. Is that really the case?

If so, how hard would it be to adapt it to more than one sample per batch?

I have tried this code:

loss = torch.mean(
    torch.Tensor(
        [
            lcfcn_loss.compute_loss(points=points[i], probs=logits[i].sigmoid())
            for i in range(points.shape[0])
        ]
    )
)

Could that work? 🤔

Thanks for any help!
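
One caveat worth noting: torch.Tensor([...]) built from Python-level scalars is detached from the autograd graph, so no gradient would flow through that mean. A gradient-preserving sketch, assuming compute_loss returns a scalar tensor per sample (continuing the snippet above):

# stack the per-sample scalar losses so autograd can backpropagate through the mean
per_sample_losses = [
    lcfcn_loss.compute_loss(points=points[i], probs=logits[i].sigmoid())
    for i in range(points.shape[0])
]
loss = torch.stack(per_sample_losses).mean()
loss.backward()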

Error in losses.py

I get the following error when I try to train from scratch:

Traceback (most recent call last):
  File "D:/Projects/LCFCN/main.py", line 49, in <module>
    main()
  File "D:/Projects/LCFCN/main.py", line 40, in main
    train.train(dataset_name, model_name, metric_name, path_history, path_model, path_opt, path_best_model, args.reset)
  File "D:\Projects\LCFCN\train.py", line 76, in train
    epoch=epoch)
  File "D:\Projects\LCFCN\utils.py", line 30, in fit
    loss = loss_function(model, batch)
  File "D:\Projects\LCFCN\losses.py", line 31, in lc_loss
    loss = compute_image_loss(S, counts)
  File "D:\Projects\LCFCN\losses.py", line 75, in compute_image_loss
    Target = (BgFgCounts.view(n*k) > 0).view(-1).float()
RuntimeError: shape '[1]' is invalid for input of size 2

Does this have something to do with the network output size? Or the image size?
How may I fix this?

ERROR: Command errored out with exit status 128

I got this error in my Anaconda prompt when running pip install git+https://github.com/ElementAI/LCFCN:

Collecting git+https://github.com/ElementAI/LCFCN
  Cloning https://github.com/ElementAI/LCFCN to c:\users\****\appdata\local\temp\pip-req-build-52v1c34l
  Running command git clone -q https://github.com/ElementAI/LCFCN 'C:\Users\***\AppData\Local\Temp\pip-req-build-52v1c34l'
  error: invalid path 'results/test.png_blobs_count:32.png'
  fatal: unable to checkout working tree
  warning: Clone succeeded, but checkout failed.
  You can inspect what was checked out with 'git status'
  and retry with 'git restore --source=HEAD :/'

Incorrect predictions when using best_model_trancos_ResFCN.pth

When I tested it with pictures from the Internet, the results were completely incorrect. What could be the reason?
python main.py -image_path /home/quh/pythonwork/C-3-Framework/datasets/test/a1/a1.png -model_path checkpoints/best_model_trancos_ResFCN.pth -model_name ResFCN
python main.py -image_path /home/quh/pythonwork/C-3-Framework/datasets/test/a1/t1.png -model_path checkpoints/best_model_trancos_ResFCN.pth -model_name ResFCN
t1 png_blobs_count:87
a1 png_blobs_count:61

About batch_size and GPU

Thanks for your code! I have two questions.
1. Can the batch size only be set to 1? Training is slow, so I want to increase it, but doing so raises an error. What is the problem?
2. Training on GPU seems no faster than on CPU. Why, and what should I modify?

Thank you!

Error when running bash checkpoints/download.sh

When I run bash checkpoints/download.sh, I get this error:

curl: (7) Failed to connect to www.dropbox.com port 443: Connection refused
Archive: pascal_ResFCN.zip
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of pascal_ResFCN.zip or
pascal_ResFCN.zip.zip, and cannot find pascal_ResFCN.zip.ZIP, period.
mv: cannot stat 'pascal_ResFCN/State_Dicts/best_model.pth': No such file or directory
mv: cannot stat 'pascal_ResFCN/history.json': No such file or directory

Inference script

Hi

Thanks for this wonderful work.

  1. Is there an inference script for the Shanghai Tech B dataset that can iterate through each of the test images and return the metrics?

  2. The inference script test_on_image.py seems to have an error: it uses a name ut that is never defined or imported. Can you correct it?

  3. Can you also provide an example to invoke the inference script test_on_image.py for a single image from Shanghai Tech B?

Thanks a lot.

How to train on my own dataset?

I want to try training LCFCN on my own dataset. What should I be looking at (images, annotations, etc.) to train the model on my own data?

Where are the .txt files referenced in trancos.py?

When I run the command python main.py -m test -e trancos, I can't find TRANCOS_v3/image_sets/test.txt. I have followed all your instructions; why is it missing?

Does the following code in losses.py contain a bug?

The following code is from losses.py, lines 11 to 19.

    model.train()
    N =  batch["images"].size(0)
    assert N == 1

    blob_dict = get_blob_dict(model, batch)
    # put variables in cuda
    images = batch["images"].cuda()
    points = batch["points"].cuda()
    counts = batch["counts"].cuda()

get_blob_dict() sets the model to eval(), and the model remains in eval mode afterwards, including while the loss is computed and backpropagated.
Is this correct?
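
If the loss is indeed meant to be computed in training mode, a minimal fix under that assumption (a sketch, not a confirmed patch) would be to restore the mode right after the blob extraction, continuing the quoted code:

blob_dict = get_blob_dict(model, batch)  # reportedly leaves the model in eval mode
model.train()  # restore training mode before computing the loss and calling backward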

ValueError: exp_list is empty...

Dear author, I am using the code on my own dataset to count pigs. I am just starting now, and the training seems fine. But when I try to visualize the dashboard in a Jupyter notebook, it always reports the error "exp_list is empty" (screenshot omitted).

Can you help me with this? The command I use to train is python trainval.py -e piggy, and I left the other parameters at their default values.

But if I set parser.add_argument('-e', '--exp_group_list', default=piggy, nargs='+'), it always reports that the name 'piggy' is not defined, even though the command line python trainval.py -e piggy works fine, which is pretty weird. Can you help me with this? Thanks very much.

Can you kindly provide the scripts for testing and visualization of blobs?

Dear author, thanks for this great repo. However, I find that there is only the trainval.py script; there is no code to test the trained model on the test sets. Can you kindly provide this? Also, can you show me how to visualize the blobs as in ..\results\test.png_blobs_count_32.png? Thanks very much!

Problem when using best_model_pascal_ResFCN.pth

Hello,
After I ran the command

python main.py -image_path figures/test.png \
  -model_path checkpoints/best_model_pascal_ResFCN.pth \
  -model_name ResFCN

I got this error:

Model: ResFCN - Dataset: pascal - Metric: mRMSE
Traceback (most recent call last):
  File "main.py", line 45, in <module>
    main()
  File "main.py", line 33, in main
    applyOnImage.apply(args.image_path, args.model_name, args.model_path)
  File "/content/LCFCN/applyOnImage.py", line 34, in apply
    imsave(save_path, ut.combine_image_blobs(image_raw, pred_blobs))
  File "/usr/local/lib/python3.6/dist-packages/skimage/io/_io.py", line 144, in imsave
    return call_plugin('imsave', fname, arr, plugin=plugin, **plugin_args)
  File "/usr/local/lib/python3.6/dist-packages/skimage/io/manage_plugins.py", line 210, in call_plugin
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/imageio/core/functions.py", line 253, in imwrite
    raise ValueError("Image must be 2D (grayscale, RGB, or RGBA).")
ValueError: Image must be 2D (grayscale, RGB, or RGBA).

I have tried many images, but I can't fix this error.
Did I miss something?
Thank you very much.

About the false-positive loss

I have a question about the false-positive loss.
I know you treat blobs that do not contain a point (marked label) as false positives.
But if a blob does not contain a point yet still lies on a real object, perhaps right beside the point, treating it as a false positive is not strictly accurate.
Do you have any suggestions?
Thanks!

Reproducing results in paper

I'm trying to reproduce the results of Table 2 in the paper. I ran the code in the repo on the TRANCOS dataset and got the following results after 1000 epochs:

validation_mae = 3.80
test_mae = 3.55

The paper reports a test MAE of 3.32. Is the difference of 0.2 MAE reasonable given different seeds, different initialization, etc?

And/or is there some way to seed the code so it gets 3.32 precisely?

Thanks!

Edit: Looking at the output some more, I see that the best valid MAE was 3.36 after 902 epochs. Are the numbers in Table 2 reporting validation or test performance?
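
On the seeding question: exact reproducibility across hardware and library versions is not guaranteed, but run-to-run variance on a single machine can usually be reduced with standard PyTorch seeding (a general sketch, not code from this repo):

import random
import numpy as np
import torch

def set_seed(seed=42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # trade some speed for determinism in cuDNN convolutions
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False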

About p["cls"]

I don't understand what the role of p["cls"] is. I used Pascal VOC 2007, but the generated points JSON doesn't have p["cls"]. Can you give an example of the points JSON? I want to train the model on my own dataset; how should I handle this?
Thank you!

Issue with testing a tif image

Hi Issam,

I trained LCFCN for a cell-counting use case. When I test a TIF image, the model gives the following error:
RuntimeError: output with shape [1, 2048, 2048] doesn't match the broadcast shape [3, 2048, 2048]
But when I convert the same image to a JPG, the model predicts the count. On the downside, since I convert to JPG, there is some information loss and the cell count is not accurate.
May I know why this is happening and how to solve it?

Thanks,
Srikanth
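
The mismatch between [1, 2048, 2048] and [3, 2048, 2048] suggests the TIF loads as a single-channel image while the preprocessing expects 3 channels. A lossless workaround, assuming the model consumes RGB input, is to replicate the channel instead of re-encoding to JPG (the file name below is hypothetical):

from PIL import Image
import numpy as np

img = np.asarray(Image.open("cells.tif"))
if img.ndim == 2:
    # replicate the single channel to 3 channels without lossy re-encoding
    img = np.stack([img] * 3, axis=-1)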

How to prepare the points as in trancos.py for training

Now I want to train on my own dataset. I have read trancos.py, pascal.py, and shanghai.py. My dataset consists of images and their center-point coordinates. In trancos.py and shanghai.py (line 46), the loaders read .mat files. What should I do to prepare the points without ROI and .mat files? I am really confused.

Wrong output with other backbone networks

Hi there, thanks very much for this nice work. However, since the original fcn8_resnet is large, I tried the LCFCN loss with other backbone networks, such as the UNet and LinkNet from https://github.com/ternaus/robot-surgery-segmentation/blob/master/models.py and a lightweight MobileNet. The loss seems to decrease normally, but the predicted count is always 1 (see the attached figure). I debugged the code but can't find out what's wrong. Can you give me any tips? Thanks very much!

Finetuning

Hi, do you have any tips on which layers to freeze (or not) in order to finetune the Shanghai FCN8 model on my own dataset?

Thank you in advance.

Display results

Hello,
I'm wondering whether the images displayed in the results dashboard come from the training, validation, or test set.
How do I select which subset to use? Also, how can I print the names of those images?
In general, where can I find documentation about using the dashboard (beyond your snippet)?
Thanks

Error in loading .pth

Dear author, thanks for your nice repo. However, I encountered a problem loading saved model files: the line model.load_state_dict(hu.torch_load(model_path)) reports AttributeError: 'LCFCN' object has no attribute 'model'. I tried several solutions, but all failed. Can you help me with this issue? Thanks very much! P.S. My Python version is 3.7.7.

The ShanghaiTech model

checkpoints/download.sh doesn't include a ShanghaiTech checkpoint; can you update it? Thanks very much!

What is p["cls"]??

Hey,

I am not able to understand this statement and what p["cls"] is:

points[int(p["y"]), int(p["x"])] = p["cls"]
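
For what it's worth, that statement writes the class id of each annotated point into an otherwise-zero HxW mask, so a single annotation would look something like this (an illustrative example, not an official schema):

# an object of class 1 whose point label sits at (y=120, x=84)
p = {"y": 120, "x": 84, "cls": 1}
points[int(p["y"]), int(p["x"])] = p["cls"]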

Dear author, can you give me a test.py

Dear author, thanks for this great repo. However, I find that there is only the trainval.py script; there is no code to test the trained model on the test sets. Can you kindly provide this?

How to use best_model_pascal_ResFCN.pth?

Hi,
After I run the command

python3 main.py -image_path figures/test.png \
  -model_path checkpoints/best_model_pascal_ResFCN.pth \
  -model_name ResFCN

I get this:


RuntimeError: Error(s) in loading state_dict for ResFCN:
size mismatch for score_32s.bias: copying a param with shape torch.Size([21]) from checkpoint, the shape in current model is torch.Size([2]).
size mismatch for score_32s.weight: copying a param with shape torch.Size([21, 2048, 1, 1]) from checkpoint, the shape in current model is torch.Size([2, 2048, 1, 1]).
size mismatch for score_16s.bias: copying a param with shape torch.Size([21]) from checkpoint, the shape in current model is torch.Size([2]).
size mismatch for score_16s.weight: copying a param with shape torch.Size([21, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([2, 1024, 1, 1]).
size mismatch for score_8s.bias: copying a param with shape torch.Size([21]) from checkpoint, the shape in current model is torch.Size([2]).
size mismatch for score_8s.weight: copying a param with shape torch.Size([21, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([2, 512, 1, 1]).


It seems best_model_pascal_ResFCN.pth does not match ResFCN.
Did I miss something? Can best_model_pascal_ResFCN.pth detect pedestrians like the picture shown in the README?
Thanks

Convergence Properties

Under what conditions can we expect this model to converge? When I try to train the model on a different dataset with two classes (background and foreground), the model never predicts blobs. Instead, it incorrectly makes point predictions that mostly get pushed down by the false-positive loss.

How to annotate for custom data training?

I have a dataset of images with point annotations in a JSON file. What annotation format does LCFCN expect? Can I directly use the (x, y) coordinates of the points, or do I need to preprocess them into a different format?

There was a problem when training on the ShanghaiTech dataset

Training Epoch 1 .... 0 batches
Traceback (most recent call last):
  File "D:/zgy/LCFCN-master/main.py", line 45, in <module>
    main()
  File "D:/zgy/LCFCN-master/main.py", line 36, in main
    train.train(dataset_name, model_name, metric_name, path_history, path_model, path_opt, path_best_model, args.reset)
  File "D:\zgy\LCFCN-master\train.py", line 76, in train
    epoch=epoch)
  File "D:\zgy\LCFCN-master\utils.py", line 28, in fit
    for i, batch in enumerate(dataloader):
  File "D:\Anaconda\envs\py-zhu\lib\site-packages\torch\utils\data\dataloader.py", line 614, in __next__
    indices = next(self.sample_iter)  # may raise StopIteration
  File "D:\Anaconda\envs\py-zhu\lib\site-packages\torch\utils\data\sampler.py", line 160, in __iter__
    for idx in self.sampler:
  File "D:\zgy\LCFCN-master\utils.py", line 234, in __iter__
    indices = np.random.randint(0, self.n_samples, self.size)
  File "mtrand.pyx", line 993, in mtrand.RandomState.randint
ValueError: low >= high
