
lim-anggun / FgSegNet_v2


FgSegNet_v2: "Learning Multi-scale Features for Foreground Segmentation" by Long Ang LIM and Hacer YALIM KELES

Home Page: https://arxiv.org/abs/1808.01477

License: Other

Languages: Python 70.43%, Jupyter Notebook 20.65%, C++ 8.68%, Makefile 0.24%
Topics: foreground-detection, foreground-segmentation-network, fgsegnet, background-subtraction, feature-pooling-module, fpm-module, video-surveillance

fgsegnet_v2's Issues

How can I evaluate the segmentation results quantitatively?

Hello, thank you again for sharing your work. You have provided a notebook to view example predictions, and I would like to know how you evaluate the results quantitatively. Do you export the resulting segmentation masks from the notebook and then compute the F-measure and the other metrics somewhere else? Thank you.
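
A minimal sketch of computing the metrics directly in Python (not the official CDnet utilities; the ground-truth encoding in the comments is the standard CDnet2014 convention):

    # Sketch only.  Assumed CDnet2014 ground-truth encoding: 255 = motion,
    # 0 = static, 50 = hard shadow (counted as background), 85 = outside ROI
    # and 170 = unknown (both ignored in the evaluation).
    import glob
    import numpy as np
    from PIL import Image

    def f_measure(pred_dir, gt_dir, threshold=127):
        TP = FP = FN = 0
        preds = sorted(glob.glob(pred_dir + '/*.png'))
        gts = sorted(glob.glob(gt_dir + '/*.png'))
        for pred_path, gt_path in zip(preds, gts):
            pred = np.array(Image.open(pred_path).convert('L')) > threshold
            gt = np.array(Image.open(gt_path).convert('L'))
            valid = (gt != 85) & (gt != 170)           # drop non-ROI / unknown pixels
            pos = (gt == 255) & valid                  # ground-truth foreground
            neg = ((gt == 0) | (gt == 50)) & valid     # background, incl. hard shadow
            TP += np.sum(pred & pos)
            FP += np.sum(pred & neg)
            FN += np.sum(~pred & pos)
        precision = TP / float(TP + FP + 1e-10)
        recall = TP / float(TP + FN + 1e-10)
        return 2 * precision * recall / (precision + recall + 1e-10)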

How can I train with my own video sequence?

As far as I know, if I want to train on my own video sequence, I should manually configure FgSegNetModule.py.
But I'm a newbie at Keras, and at deep learning in general.
I found that I should modify the code below to fit my input video:

    if dataset_name=='CDnet':
        if(self.scene=='tramCrossroad_1fps'):
            x = MyUpSampling2D(size=(1,1), num_pixels=(2,0), method_name=self.method_name)(x)
        elif(self.scene=='bridgeEntry'):
            x = MyUpSampling2D(size=(1,1), num_pixels=(2,2), method_name=self.method_name)(x)
        elif(self.scene=='fluidHighway'):
            x = MyUpSampling2D(size=(1,1), num_pixels=(2,0), method_name=self.method_name)(x)
        elif(self.scene=='streetCornerAtNight'):
            x = MyUpSampling2D(size=(1,1), num_pixels=(1,0), method_name=self.method_name)(x)
            x = Cropping2D(cropping=((0, 0),(0, 1)))(x)
        elif(self.scene=='tramStation'):
            x = Cropping2D(cropping=((1, 0),(0, 0)))(x)
        elif(self.scene=='twoPositionPTZCam'):
            x = MyUpSampling2D(size=(1,1), num_pixels=(0,2), method_name=self.method_name)(x)
        elif(self.scene=='turbulence2'):
            x = Cropping2D(cropping=((1, 0),(0, 0)))(x)
            x = MyUpSampling2D(size=(1,1), num_pixels=(0,1), method_name=self.method_name)(x)
        elif(self.scene=='turbulence3'):
            x = MyUpSampling2D(size=(1,1), num_pixels=(2,0), method_name=self.method_name)(x)

But I don't know what num_pixels I should pass to it.
How can I tell which num_pixels corresponds to my video sequence, and in which situations should I use Cropping2D()?
Is there anything else I should modify?
Thank you very much for replying.
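
For reference, these scene-specific branches exist because the decoder only reproduces the input size exactly when the frame dimensions fit the network's down/upsampling; otherwise its output is a few pixels off, and num_pixels (padding) or Cropping2D (trimming) makes the output match the ground-truth size again. One way to find the right values for a new sequence is to compare the model's output shape with the frame size, roughly as in the sketch below; it assumes `model` was built the way FgSegNet_v2_CDnet.py builds it, with no scene-specific branch added for your scene.

    # Sketch: find the padding/cropping needed for a new scene empirically.
    # Assumption: `model` was built as in FgSegNet_v2_CDnet.py
    # (FgSegNet_v2_module(...).initModel('CDnet')) without any scene-specific
    # padding/cropping branch; check that script for the exact constructor arguments.
    frame_h, frame_w = 350, 640                 # your frame height and width

    out_h, out_w = model.output_shape[1], model.output_shape[2]
    dh, dw = frame_h - out_h, frame_w - out_w   # positive -> pad, negative -> crop

    if dh > 0 or dw > 0:                        # decoder output smaller than the frames
        print('MyUpSampling2D(size=(1,1), num_pixels=(%d, %d))' % (max(dh, 0), max(dw, 0)))
    if dh < 0 or dw < 0:                        # decoder output larger than the frames
        print('Cropping2D(cropping=((%d, 0), (%d, 0)))' % (max(-dh, 0), max(-dw, 0)))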

unable to load the model after adding instance_normalization

@lim-anggun I have read your previous comment about adding instance_normalization.py, but I am still getting the error below. I added the lines you mentioned:

    from FgSegNet.instance_normalization import InstanceNormalization
    model = load_model(mdl_path, custom_objects={'MyUpSampling2D': MyUpSampling2D, 'InstanceNormalization': InstanceNormalization})

Error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-26-69f03b7b76fd> in <module>
      4 mdl_path = 'FgSegNet_M/CDnet/models50/baseline/mdl_pedestrians.h5'
      5 from FgSegNet.instance_normalization import InstanceNormalization
----> 6 model = load_model(mdl_path, custom_objects={'MyUpSampling2D': MyUpSampling2D, 'InstanceNormalization': InstanceNormalization})
      7 #from FgSegNet_v2_module.py import loss2, acc2
      8 #model = load_model(mdl_path, custom_objects={'MyUpSampling2D': MyUpSampling2D, 'InstanceNormalization': InstanceNormalization})

~/anaconda3/envs/p3/lib/python3.6/site-packages/keras/models.py in load_model(filepath, custom_objects, compile)
    262                       metrics=metrics,
    263                       loss_weights=loss_weights,
--> 264                       sample_weight_mode=sample_weight_mode)
    265 
    266         # Set optimizer weights.

~/anaconda3/envs/p3/lib/python3.6/site-packages/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, **kwargs)
    679             loss_functions = [losses.get(l) for l in loss]
    680         else:
--> 681             loss_function = losses.get(loss)
    682             loss_functions = [loss_function for _ in range(len(self.outputs))]
    683         self.loss_functions = loss_functions

~/anaconda3/envs/p3/lib/python3.6/site-packages/keras/losses.py in get(identifier)
    100     if isinstance(identifier, six.string_types):
    101         identifier = str(identifier)
--> 102         return deserialize(identifier)
    103     elif callable(identifier):
    104         return identifier

~/anaconda3/envs/p3/lib/python3.6/site-packages/keras/losses.py in deserialize(name, custom_objects)
     92                                     module_objects=globals(),
     93                                     custom_objects=custom_objects,
---> 94                                     printable_module_name='loss function')
     95 
     96 

~/anaconda3/envs/p3/lib/python3.6/site-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    157             if fn is None:
    158                 raise ValueError('Unknown ' + printable_module_name +
--> 159                                  ':' + function_name)
    160         return fn
    161     else:

ValueError: Unknown loss function:loss

I would appreciate your advice on this. Thank you.
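
The trace shows load_model failing while re-compiling the model: the custom training loss was saved under the name `loss`, which Keras cannot find among its built-in losses. A common workaround is sketched below; the import paths and the loss/metric names in the second variant are assumptions to be adapted to your copy of the code.

    from keras.models import load_model
    from FgSegNet.instance_normalization import InstanceNormalization
    from FgSegNet.my_upsampling_2d import MyUpSampling2D   # adjust to your layout

    mdl_path = 'FgSegNet_M/CDnet/models50/baseline/mdl_pedestrians.h5'
    custom = {'MyUpSampling2D': MyUpSampling2D,
              'InstanceNormalization': InstanceNormalization}

    # Variant 1: inference only -- skip restoring the optimizer and loss entirely.
    model = load_model(mdl_path, custom_objects=custom, compile=False)

    # Variant 2: also register the custom loss/metric so compilation succeeds.
    # (names are assumptions -- use whatever your training module defines)
    # from FgSegNet_v2_module import loss, acc
    # custom.update({'loss': loss, 'acc': acc})
    # model = load_model(mdl_path, custom_objects=custom)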

compilation error

The log is as follows:

Using TensorFlow backend.

->>> intermittentObjectMotion / abandonedBox
Traceback (most recent call last):
  File "extract_mask.py", line 205, in <module>
    img = kImage.load_img(ROI_file, grayscale=True)
  File "/usr/local/lib/python3.5/dist-packages/keras/preprocessing/image.py", line 322, in load_img
    img = pil_image.open(path)
  File "/usr/local/lib/python3.5/dist-packages/PIL/Image.py", line 2878, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'CDnet2014_dataset/intermittentObjectMotion/abandonedBox/ROI.bmp'

How to handle real video

Thank you for your work.
I have a question: I see that the code trains a separate model for each test sequence. Now I have a real video (not in the dataset) with no ground truth. How should I extract the foreground from it?

How can I load the model?

Hi, I have finished training and I got some *.h5 files. But I'm not familiar with Keras, so I used code like this to load the model:

    from keras.models import load_model
    path = "/path/to/mdl_skating.h5"
    model = load_model(path)

But it raises "ValueError: Unknown layer: InstanceNormalization".

I haven't used Keras before, so could you give me a demo of loading "mdl_skating.h5" and predicting on the "/CDNet2014_dataset/badWeather/skating" video sequence?
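
A rough end-to-end sketch, with the import paths and preprocessing as assumptions (check how the training script prepares its input arrays and adjust to match):

    # Sketch: load a trained FgSegNet_v2 model and predict a foreground mask for
    # one frame.  Import paths assume the repository's scripts folder is on
    # sys.path; whether the frames need extra preprocessing depends on how the
    # training script prepared X, so verify that before trusting the masks.
    import numpy as np
    from keras.models import load_model
    from keras.preprocessing import image as kImage
    from my_upsampling_2d import MyUpSampling2D
    from instance_normalization import InstanceNormalization

    model = load_model('mdl_skating.h5', compile=False,
                       custom_objects={'MyUpSampling2D': MyUpSampling2D,
                                       'InstanceNormalization': InstanceNormalization})

    img = kImage.load_img('CDnet2014_dataset/badWeather/skating/input/in000001.jpg')
    x = kImage.img_to_array(img)[np.newaxis, ...]       # shape (1, H, W, 3)
    prob = model.predict(x)[0, :, :, 0]                 # per-pixel foreground probability
    mask = (prob > 0.5).astype(np.uint8) * 255          # binary mask, 255 = foreground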

Can it work well on other datasets?

I want to use it in some real scenes. Can this deep network work well on other datasets, such as VOC? Can it be trained well on non-video datasets?

ModuleNotFoundError 'keras.engine.base_layer'

Hi, thanks for the work!
I followed the instructions to run the project. After installing the environment I ran "python3 FgSegNet_v2_CDnet.py", but I got the error below:

    Traceback (most recent call last):
      File "FgSegNet_v2_CDnet.py", line 37, in <module>
        from FgSegNet_v2_module import FgSegNet_v2_module
      File "/home/jhd/face_recognition/softwares/FgSegNet_v2/scripts/FgSegNet_v2_module.py", line 15, in <module>
        from my_upsampling_2d import MyUpSampling2D
      File "/home/jhd/face_recognition/softwares/FgSegNet_v2/scripts/my_upsampling_2d.py", line 13, in <module>
        from keras.engine.base_layer import InputSpec
    ModuleNotFoundError: No module named 'keras.engine.base_layer'

Is there something I missed?
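
This looks like a Keras version mismatch: keras.engine.base_layer only exists in newer Keras releases, while older versions (such as the 2.0.6 the repository recommends) expose InputSpec under keras.engine. A version-tolerant import in my_upsampling_2d.py could look like this sketch:

    # Sketch: InputSpec moved between Keras releases (keras.engine.base_layer
    # only exists in newer ones), so fall back to the older location if needed.
    try:
        from keras.engine.base_layer import InputSpec
    except ImportError:
        from keras.engine import InputSpec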

How can I solve this error?

->>> baseline / highway
Traceback (most recent call last):
  File "", line 1, in <module>
    runfile('D:/ShiYanchnegxu/FgSegNet_v2-master/testing_scripts/extract_mask.py', wdir='D:/ShiYanchnegxu/FgSegNet_v2-master/testing_scripts')
  File "D:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
    execfile(filename, namespace)
  File "D:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "D:/ShiYanchnegxu/FgSegNet_v2-master/testing_scripts/extract_mask.py", line 239, in <module>
    model = load_model(mdl_path)
  File "D:\ProgramData\Anaconda3\envs\TF1.1.0\lib\site-packages\keras\models.py", line 227, in load_model
    with h5py.File(filepath, mode='r') as f:
  File "D:\ProgramData\Anaconda3\lib\site-packages\h5py\_hl\files.py", line 269, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "D:\ProgramData\Anaconda3\lib\site-packages\h5py\_hl\files.py", line 99, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py\h5f.pyx", line 78, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = 'FgSegNet_v2\models25\baseline\mdl_highway.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
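
The OSError just means the relative model path does not resolve from the directory the script is run in. A quick check (sketch):

    import os

    # The path used in extract_mask.py is relative to the current working
    # directory; print where it resolves to and whether the file actually exists.
    mdl_path = r'FgSegNet_v2\models25\baseline\mdl_highway.h5'
    print(os.path.abspath(mdl_path), os.path.isfile(mdl_path))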

License File

Can you please add a license file to clarify the usage limits of your code?

Evaluation Code

Hello, how do I evaluate the results of the method on the CDnet2014 dataset? I followed your links and tried to follow everything, but I got this error:

The file C:\dataset\badWeather\blizzard\stats.txt doesn't exist.
It means there was an error calling the comparator.
Traceback (most recent call last):
  File "modified_process_folder.py", line 129, in <module>
    main()
  File "modified_process_folder.py", line 46, in main
    processFolder(datasetPath, binaryRootPath)
  File "modified_process_folder.py", line 62, in processFolder
    confusionMatrix = compareWithGroungtruth(videoPath, binaryPath)  #STATS
  File "modified_process_folder.py", line 84, in compareWithGroungtruth
    return readCMFile(statFilePath)
  File "modified_process_folder.py", line 90, in readCMFile
    raise Exception('error')
Exception: error

Should we recompile the comparator? The comparator.exe from the original package is already there.

multispectral use

Hello, I am writing to ask whether this model is for 3-band (RGB) input only. If multispectral images are given as input, will the model automatically process all the channels, or only the first 3 bands? Thank you.

Memory leak, and how to train using a GPU?

How can I enable GPU training? I'm trying to train on Colab, but memory use always goes beyond the 12 GB RAM limit. Is that even normal?
I'm using Keras 2.0.6 and TensorFlow 1.1.0 as advised, but training is extremely slow and only works with batch_size=1; otherwise there is a RAM problem.

Thanks for your help.
@lim-anggun
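
For the GPU side, with TensorFlow 1.x and Keras you can at least make TensorFlow allocate GPU memory on demand before building the model; a sketch follows. Note this only affects GPU memory, not Colab's 12 GB of host RAM, which is often the real limit while the frames are being loaded.

    # Sketch: enable on-demand GPU memory allocation (TensorFlow 1.x + Keras).
    # If host RAM is exhausted while loading the frames, reduce the batch size or
    # the number of training frames instead.
    import tensorflow as tf
    from keras import backend as K

    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    K.set_session(tf.Session(config=config))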

Training on real video

Thanks for your work! I want to apply this model to real video, but from the code it seems that even for a real video I would still need to label foreground masks for training. So do I have to label masks for my own video?

Design of decoder

Dear author, thanks for your paper on change detection. I'm very interested in the design of the decoder in your architecture. However, I am curious why you combine information in that way: alpha*f + f, where alpha is an average pooling of the information from the upper layer. Many papers in the semantic segmentation field tend to implement U-Net-like networks instead.
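
Reading "average pooling" as a global average pooling of the deeper feature map, the combination described above can be written as a small Keras snippet; this is an illustration of the idea only, not a copy of FgSegNet_v2_module.py. The pooled vector acts as per-channel attention on the finer feature map f, and the residual "+ f" keeps the original features when the attention is weak.

    # Sketch of "alpha * f + f", with alpha a globally average-pooled vector
    # taken from the deeper/upper feature map.  Names and wiring are illustrative;
    # see FgSegNet_v2_module.py for the real decoder.
    from keras.layers import GlobalAveragePooling2D, Reshape, Lambda

    def coarse_to_fine_merge(upper, f, channels):
        alpha = GlobalAveragePooling2D()(upper)                    # (batch, channels)
        alpha = Reshape((1, 1, channels))(alpha)                   # broadcastable over H and W
        return Lambda(lambda t: t[0] * t[1] + t[1])([alpha, f])    # alpha*f + f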

About the meaning of void_label

I see in the scripts, for instance in the loss function, that void_label = -1.
I am confused: does -1 mean background, or something else?
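
As far as I can tell from the scripts, -1 is neither background nor foreground: ground-truth pixels that are outside the ROI or marked unknown in CDnet are mapped to void_label = -1 and then simply excluded from the loss. A sketch of that idea (an illustration, not necessarily line-for-line the repository's loss function):

    # Void-masked binary cross-entropy: pixels labelled void_label (-1) are
    # dropped before the loss is computed, so they contribute nothing to training.
    import tensorflow as tf
    import keras.backend as K

    void_label = -1.

    def masked_binary_crossentropy(y_true, y_pred):
        y_true = K.reshape(y_true, [-1])
        y_pred = K.reshape(y_pred, [-1])
        keep = K.not_equal(y_true, void_label)               # True for labelled pixels
        y_true = tf.boolean_mask(y_true, keep)
        y_pred = tf.boolean_mask(y_pred, keep)
        y_pred = K.clip(y_pred, K.epsilon(), 1. - K.epsilon())
        return -K.mean(y_true * K.log(y_pred) + (1. - y_true) * K.log(1. - y_pred))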

How to train on my own data?

Thanks a lot for your great work, but how do I prepare my own data in the same format as CDnet2014, and how do I train on it?
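
As a starting point, the data the training scripts read follows the public CDnet2014 layout. The sketch below shows the expected paths; the folder and file names follow CDnet2014, but check how FgSegNet_v2_CDnet.py builds its paths and which training frames it expects before copying this exactly.

    # Sketch of a CDnet2014-style layout for a custom scene.  You still need
    # hand-labelled ground-truth masks for the frames you train on (the paper
    # trains with 25/50/200 labelled frames per scene).
    import os

    dataset_root = 'CDnet2014_dataset'
    category, scene = 'myCategory', 'myScene'

    frame_dir = os.path.join(dataset_root, category, scene, 'input')        # in000001.jpg, in000002.jpg, ...
    gt_dir    = os.path.join(dataset_root, category, scene, 'groundtruth')  # gt000001.png, gt000002.png, ...
    roi_file  = os.path.join(dataset_root, category, scene, 'ROI.bmp')      # region-of-interest mask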
