camera-trap-classifier's Issues

ctc.train error

I used Option 1 to prepare data for a small dataset of 3 species. When running ctc.train, I get the following error:

c:\users\mallevx\appdata\local\continuum\anaconda3\envs\ctc\lib\site-packages\camera_trap_classifier\config\config.py:19: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  self.cfg = yaml.load(fp)
Traceback (most recent call last):
  File "c:\users\mallevx\appdata\local\continuum\anaconda3\envs\ctc\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\mallevx\appdata\local\continuum\anaconda3\envs\ctc\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\mallevx\AppData\Local\Continuum\anaconda3\envs\ctc\Scripts\ctc.train.exe\__main__.py", line 9, in <module>
  File "c:\users\mallevx\appdata\local\continuum\anaconda3\envs\ctc\lib\site-packages\camera_trap_classifier\train.py", line 330, in main
    zip(output_labels, output_labels_clean)}
  File "c:\users\mallevx\appdata\local\continuum\anaconda3\envs\ctc\lib\site-packages\camera_trap_classifier\train.py", line 329, in <dictcomp>
    for o, c in
KeyError: 'species'

My ctc.train command:

ctc.train -train_tfr_path C:/Users/mallevx/ctc/tfr/ -val_tfr_path C:/Users/mallevx/ctc/tfr/ -test_tfr_path C:/Users/mallevx/ctc/tfr/ -class_mapping_json C:/Users/mallevx/ctc/tfr/label_mapping.json -run_outputs_dir C:/Users/mallevx/ctc/my_model/run1/ -model_save_dir C:/Users/mallevx/ctc/my_model/save1/ -model ResNet18 -labels species count -labels_loss_weights 1 0.5 -batch_size 128 -n_cpus 4 -n_gpus 1 -buffer_size 512 -max_epochs 70

Contents of my label_mapping.json:
{"class": {"BADGER_1": 0, "BEAVER_1": 1, "MINK_1": 2}}

Any thoughts on what's wrong? Thanks in advance!
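
A quick check outside the package makes the mismatch behind the KeyError visible: the names passed via -labels have to exist as top-level keys in label_mapping.json, and the mapping above only defines "class" while the command requests "species" and "count". A minimal sketch:

import json

with open("C:/Users/mallevx/ctc/tfr/label_mapping.json") as fp:
    mapping = json.load(fp)

requested = ["species", "count"]   # the values passed to -labels
print("keys in the mapping:", list(mapping))
print("missing labels:", [lab for lab in requested if lab not in mapping])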

predict with pre-trained models

I am trying to use the pre-trained models that were published with your paper to predict on our own data from Kenya. For now I am using the ss_blank_vs_non_blank_small_201711150811.hdf5 model. I created the following .json files:

class_mapping_blanks.json
{"class": {"0": "blank", "1": "non_blank"}}

pre_processing_blank.json
{"output_height": 224,"output_width": 224,"image_means": [0.16718366742134094, 0.16718366742134094, 0.16718366742134094],"image_stdevs": [0.2730404734611511, 0.2891225218772888, 0.2891225218772888],"is_training": 0,"image_choice_for_sets": "random"}

and use the following command:
ctc.predict -image_dir T:\Kenya\Test_Photos_Small -results_file T:\KenyaPred.csv -model_path "T:\Willi Code\models\ss\ss_blank_vs_non_blank_small_201711150811.hdf5" -class_mapping_json "T:\Willi Code\models\ss\class_mapping_blanks.json" -pre_processing_json "T:\Willi Code\models\ss\pre_processing_blank.json" -batch_size 32

Things initially look good:

Arg: csv_path, Value:None
Arg: csv_images_root_path, Value:
Arg: csv_id_col, Value:
Arg: csv_images_cols, Value:['']
Arg: image_dir, Value:T:\Kenya\Test_Photos_Small
Arg: results_file, Value:T:\KenyaPred.csv
Arg: export_file_type, Value:csv
Arg: model_path, Value:T:\Willi Code\models\ss\ss_blank_vs_non_blank_small_201711150811.hdf5
Arg: class_mapping_json, Value:T:\Willi Code\models\ss\class_mapping_blanks.json
Arg: pre_processing_json, Value:T:\Willi Code\models\ss\pre_processing_blank.json
Arg: batch_size, Value:32
Arg: aggregation_mode, Value:mean
2019-04-10 12:08:44.183003: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-04-10 12:08:44.882380: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: Quadro M2000M major: 5 minor: 0 memoryClockRate(GHz): 1.137
pciBusID: 0000:01:00.0
totalMemory: 4.00GiB freeMemory: 3.34GiB
2019-04-10 12:08:44.891074: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-04-10 12:08:45.955152: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-04-10 12:08:45.961057: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0
2019-04-10 12:08:45.966280: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
2019-04-10 12:08:45.969242: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3055 MB memory) -> physical GPU (device: 0, name: Quadro M2000M, pci bus id: 0000:01:00.0, compute capability: 5.0)
Read following class mappings:
  Reading values for entry: class
    Key: 0 - Value: blank
    Key: 1 - Value: non_blank
Read following pre processing options:
  Key: output_height - Value: 224
  Key: output_width - Value: 224
  Key: image_means - Value: [0.16718366742134094, 0.16718366742134094, 0.16718366742134094]
  Key: image_stdevs - Value: [0.2730404734611511, 0.2891225218772888, 0.2891225218772888]
  Key: is_training - Value: 0
  Key: image_choice_for_sets - Value: random

but then I get the following error:

Traceback (most recent call last):
  File "c:\program files\python36\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\program files\python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Program Files\Python36\Scripts\ctc.predict.exe\__main__.py", line 9, in <module>
  File "c:\program files\python36\lib\site-packages\camera_trap_classifier\predict.py", line 92, in main
    batch_size=args['batch_size'])
  File "c:\program files\python36\lib\site-packages\camera_trap_classifier\predicting\predictor.py", line 103, in predict_from_image_dir
    batch_size, export_type)
  File "c:\program files\python36\lib\site-packages\camera_trap_classifier\predicting\predictor.py", line 294, in _predict_inventory
    sub_preds = self._iterate_inventory_dataset(dataset, sub_inventory)
  File "c:\program files\python36\lib\site-packages\camera_trap_classifier\predicting\predictor.py", line 378, in _iterate_inventory_dataset
    self.processor.map_and_extract_model_prediction(_id_preds)
  File "c:\program files\python36\lib\site-packages\camera_trap_classifier\predicting\processor.py", line 24, in map_and_extract_model_prediction
    list(self.id_to_class_mapping_clean[output].keys())
KeyError: 'dense_1'

The error is raised in the code that maps the model output to the classes. I can load the model into my own Keras code and make predictions that way, so the model itself is fine. Any suggestions would be welcome.
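
Since the model loads fine in plain Keras, one hedged way to see which key the predictor is looking for is to print the model's output-layer names; the KeyError suggests the keys in class_mapping_blanks.json need to match those names rather than "class" (an assumption based on the traceback, not on the package documentation):

from tensorflow.keras.models import load_model

model = load_model(
    "T:/Willi Code/models/ss/ss_blank_vs_non_blank_small_201711150811.hdf5",
    compile=False)
print([layer.name for layer in model.layers[-3:]])   # expect something like 'dense_1'
print([tensor.name for tensor in model.outputs])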

Getting UnboundLocalError with train command

Hi Marco,

I am getting the following error with the train command:

/usr/local/lib/python3.5/dist-packages/camera_trap_classifier/config/config.py:19: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  self.cfg = yaml.load(fp)
Traceback (most recent call last):
  File "/usr/local/bin/ctc.train", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.5/dist-packages/camera_trap_classifier/train.py", line 376, in main
    n_parallel_file_reads=args['n_parallel_file_reads'])
  File "/usr/local/lib/python3.5/dist-packages/camera_trap_classifier/data/utils.py", line 368, in n_records_in_tfr_dataset
    logger.debug("Finished -- Counted {} records".format(counter[-1]))
UnboundLocalError: local variable 'counter' referenced before assignment

My train command looks like this:

ctc.train -train_tfr_path $PWD/tfr_files -val_tfr_path $PWD/tfr_files -test_tfr_path $PWD/tfr_files -class_mapping_json $PWD/tfr_files/label_mapping.json -run_outputs_dir $PWD/my_model1_run1/ -model_save_dir $PWD/my_model1_save1/ -model ResNet18 -labels class -batch_size 128 -n_cpus 4 -buffer_size 512 -max_epochs 70

Any thoughts as to what I might be doing wrong? Thanks as always!
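
The UnboundLocalError is raised because the record-counting loop in n_records_in_tfr_dataset apparently never ran, which usually means no TFRecord files were found at the given path. A quick independent check (TF 1.x API; the file pattern is hypothetical):

import glob
import tensorflow as tf

files = glob.glob("tfr_files/train*")   # hypothetical pattern, adjust to your layout
print("matched files:", files)
n_records = sum(1 for f in files for _ in tf.python_io.tf_record_iterator(f))
print("records:", n_records)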

ctc.train error

Hi Marco,

Thanks a lot for this super cool package. I'm an R user, but the documentation helped a lot. I previously used another package written in R (MLWIC), and I want to see whether I get better results with your code, and whether it's faster, since the other one is very slow.

As background: I'm using the Docker version (CPU only) with labelled images. I created my CSV file, and the image column has the paths to my images (/data/train/images/*.JPG). I have around 38,000 images with labels. The JSON inventory file was created, and when I looked at it everything made sense. Then I ran the code to create the TFRecord files (test and validation are around 1,800, and the rest are training). It created 7 train *.tfrecord files, plus one each for validation and test. When I opened label_mapping.json, everything looked legit:
{"count": {"1": 0}, "species": {"buffalo": 1, "waterbuck": 17, "darkness": 2, "jackal": 11, "giraffe": 5, "hartebeest": 8, "hare": 7, "impala": 10, "vegetation": 16, "eland": 3, "grantgazelle": 6, "humanactivities": 9, "lion": 12, "spottedhyena": 14, "baboon": 0, "elephant": 4, "zebra": 18, "servalcat": 13, "unknown": 15}}.
Then, to train the model, I followed your documentation and just changed -labels and added the -labels_loss_weights parameter:
sudo docker exec ctc ctc.train -train_tfr_path /data/tfr_files/ -val_tfr_path /data/tfr_files/ -test_tfr_path /data/tfr_files/ -class_mapping_json /data/tfr_files/label_mapping.json -run_outputs_dir /data/run1/ -model_save_dir /data/save1/ -model small_cnn -labels species count -labels_loss_weights 1 0.5 -batch_size 16 -n_cpus 4 -n_gpus 0 -buffer_size 16 -max_epochs 70 -color_augmentation full_randomized
but I get a bunch of errors:

Traceback (most recent call last):
  File "/usr/local/bin/ctc.train", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.5/dist-packages/camera_trap_classifier/train.py", line 489, in main
    output_loss_weights=args['labels_loss_weights'])
  File "/usr/local/lib/python3.5/dist-packages/camera_trap_classifier/training/prepare_model.py", line 322, in create_model
    metrics=[accuracy, top_k_accuracy])
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/checkpointable/base.py", line 474, in _method_wrapper
    method(self, *args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/training.py", line 617, in compile
    output_loss = weighted_loss(y_true, y_pred, sample_weight, mask)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/training_utils.py", line 598, in weighted
    score_array = fn(y_true, y_pred)
  File "/usr/local/lib/python3.5/dist-packages/camera_trap_classifier/training/utils.py", line 93, in masked_loss_function
    return loss_function(y_true * mask, y_pred * mask)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/backend.py", line 3630, in sparse_categorical_crossentropy
    logits = array_ops.reshape(output, [-1, int(output_shape[-1])])
TypeError: __int__ returned non-int (type NoneType)

I would be grateful if you could let me know what the issue is and how I can resolve it. I don't know much about Python, so sorry in advance if I said something that didn't make sense.
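
For reference, the final TypeError is what Python raises when int() is applied to a TensorFlow dimension of unknown size, i.e. one of the model outputs being compiled has an undefined last (class) dimension. A minimal TF 1.x illustration, not the package code:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, None])
print(x.get_shape().as_list())    # [None, None]
# int(x.get_shape()[-1])          # raises: __int__ returned non-int (type NoneType)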

Loading best_model.hdf5 for Xception Tensorflow

When I try to load the weights into TensorFlow's Xception model, I'm getting the following error:
ValueError: You are trying to load a weight file containing 88 layers into a model with 80 layers.
Have any changes been made to the Xception model being used in this repo?
Regards,
praneet195
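
A hedged way to compare the two architectures (assuming h5py is installed) is to list the layer names stored in the weight file and set them against the tf.keras Xception being instantiated; the file name below is the one mentioned above:

import h5py
from tensorflow.keras.applications import Xception

with h5py.File("best_model.hdf5", "r") as f:
    group = f["model_weights"] if "model_weights" in f else f
    saved = [n.decode("utf8") if isinstance(n, bytes) else n
             for n in group.attrs["layer_names"]]
print(len(saved), "layers stored in the weight file")
print(len(Xception(weights=None).layers), "layers in tf.keras Xception")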

Test code fails with ValueError when running on Google Colab

Thanks for the great software!
I am trying to test it out on Google Colab, but there is an error and here is how to reproduce it:
Runtime: Python3 GPU

  • Install via pip:
    pip install git+git://github.com/marco-willi/camera-trap-classifier.git#egg=camera_trap_classifier[tf-gpu]

  • Try the training test from bash:
    %cd /usr/local/lib/python3.6/dist-packages/camera_trap_classifier/
    !python -m unittest discover test/training

  • This produces an InvalidArgumentError with the message:
    ERROR: testModelRuns (test_create_model.CreateModelTests)
    testModelRuns (test_create_model.CreateModelTests)
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 1607, in _create_c_op
        c_op = c_api.TF_FinishOperation(op_desc)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape must be rank 0 but is rank 1 for 'metrics/label/label/species_top_k_accuracy/cond/Switch' (op: 'Switch') with input shapes: [?], [?].

  • There is also another exception, with a much longer traceback, but the problem occurs in
    /training/test_create_model.py, line 39, in testModelRuns: output_loss_weights=None
    /training/prepare_model.py, line 322, in create_model: metrics=[accuracy, top_k_accuracy]
    /training/tracking/base.py, line 457, in _method_wrapper: result = method(self, *args, **kwargs)
    before the trace goes into TensorFlow and produces a ValueError with the message:
    File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 1610, in _create_c_op
      raise ValueError(str(e))
    ValueError: Shape must be rank 0 but is rank 1 for 'metrics/label/label/species_top_k_accuracy/cond/Switch' (op: 'Switch') with input shapes: [?], [?].

This error also occurs when using ctc.train. The whole pipeline before training works as expected, as do the two other test suites.
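
It is worth confirming which TensorFlow the package was installed against in Colab; the tensorflow_core paths in the traceback indicate TF 1.15 or 2.x, while the failing metric code appears to target the older 1.x API (an assumption based on the traceback, not a confirmed fix):

import tensorflow as tf
print(tf.__version__)
# a possible workaround is pinning an older release, e.g.
#   pip install "tensorflow-gpu==1.13.1"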

Thanks!

Python 3.6.13 Error

Hi @marco-willi. We've recently had to upgrade Python within our Docker image to run the latest version of PantheraIDS. We're still using the same version of TensorFlow, but are now getting this error when trying to run the Southern Africa model:

Traceback (most recent call last):
  File "/srv/shiny-server/custom/machine_classifier/main_prediction/predict.py", line 105, in <module>
    main()
  File "/srv/shiny-server/custom/machine_classifier/main_prediction/predict.py", line 85, in main
    aggregation_mode=args['aggregation_mode'])
  File "/srv/shiny-server/custom/machine_classifier/main_prediction/camera_trap_classifier/predicting/predictor.py", line 73, in __init__
    self.model = load_model_from_disk(self.model_path, compile=False)
  File "/srv/shiny-server/custom/machine_classifier/main_prediction/camera_trap_classifier/training/prepare_model.py", line 31, in load_model_from_disk
    build_masked_loss(K.sparse_categorical_crossentropy)})
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/keras/engine/saving.py", line 229, in load_model
    model_config = json.loads(model_config.decode('utf-8'))
AttributeError: 'str' object has no attribute 'decode'
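
The 'str' object has no attribute 'decode' failure is the well-known incompatibility between h5py >= 3.0, which returns str attributes, and older tf.keras HDF5-loading code that still calls .decode(). Checking and, if needed, pinning h5py is a common workaround (an assumption about this setup, not something specific to the repository):

import h5py
print(h5py.__version__)   # a 3.x version here would explain the AttributeError
# common workaround: pip install "h5py<3.0.0"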

Code for Windows

I am working on camera trap images. I have just started and am trying to understand the code. It would be great if I could get some pointers or resources for running it on Windows.

transfer learning error

Hi Marco,

I am trying to do transfer learning with an existing model.
model path: 'C:/Users/meiyip/Desktop/wildlife/my_model/save1/best_model.hdf5'

Below is the command I tried:
ctc.train -train_tfr_path C:/Users/meiyip/Desktop/transfer_learning/tfr_files/
-val_tfr_path C:/Users/meiyip/Desktop/transfer_learning/tfr_files/
-test_tfr_path C:/Users/meiyip/Desktop/transfer_learning/tfr_files/
-class_mapping_json C:/Users/meiyip/Desktop/transfer_learning/tfr_files/label_mapping.json
-run_outputs_dir C:/Users/meiyip/Desktop/transfer_learning/my_model/run1/
-model_save_dir C:/Users/meiyip/Desktop/transfer_learning/my_model/save1/
-model small_cnn
-labels species count
-batch_size 30
-n_cpus 4
-n_gpus 1
-buffer_size 512
-max_epochs 5
-early_stopping_patience 10
-transfer_learning
-model_to_load C:/Users/meiyip/Desktop/wildlife/my_model/save1/best_model.hdf5
-output_width 224
-output_height 224

However, I am getting the error below:

[screenshot of the error message]

Do you have an example on how transfer learning can be done using this repo?

Thank you.
Mei Yee
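
Independent of the repository's -transfer_learning flag used above, a minimal tf.keras sketch of the underlying idea, with a placeholder class count and the assumption that the penultimate layer is the feature layer:

import tensorflow as tf

base = tf.keras.models.load_model(
    "C:/Users/meiyip/Desktop/wildlife/my_model/save1/best_model.hdf5",
    compile=False)
for layer in base.layers[:-1]:
    layer.trainable = False            # freeze the pre-trained layers

features = base.layers[-2].output      # assumed feature layer
new_head = tf.keras.layers.Dense(3, activation="softmax",
                                 name="species")(features)
model = tf.keras.Model(inputs=base.input, outputs=new_head)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])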

$'\r': command not found

I'm running the Docker image for camera-trap-classifier from https://hub.docker.com/r/will5448/camera-trap-classifier on a computing cluster. The ctc.create_dataset_inventory command gives the following error:

./condor_exec.exe: line 2: $'\r': command not found
./condor_exec.exe: line 7: $'\r': command not found
./condor_exec.exe: line 9: -export_path: command not found
./condor_exec.exe: line 10: $'\r': command not found
Traceback (most recent call last):
  File "/usr/lib/python3.5/logging/config.py", line 558, in configure
    handler = self.configure_handler(handlers[name])
  File "/usr/lib/python3.5/logging/config.py", line 731, in configure_handler
    result = factory(**kwargs)
  File "/usr/local/lib/python3.5/dist-packages/camera_trap_classifier/config/logging.py", line 16, in logmaker
    return logging.FileHandler(log_path, mode, encoding)
  File "/usr/lib/python3.5/logging/__init__.py", line 1008, in __init__
    StreamHandler.__init__(self, self._open())
  File "/usr/lib/python3.5/logging/__init__.py", line 1037, in _open
    return open(self.baseFilename, self.mode, encoding=self.encoding)
FileNotFoundError: [Errno 2] No such file or directory: '/home/malleshappa/dataset_inventory/run_debug.log'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/ctc.create_dataset_inventory", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.5/dist-packages/camera_trap_classifier/create_dataset_inventory.py", line 136, in main
    setup_logging(log_output_path=args['log_outdir'])
  File "/usr/local/lib/python3.5/dist-packages/camera_trap_classifier/config/logging.py", line 35, in setup_logging
    logging.config.dictConfig(config)
  File "/usr/lib/python3.5/logging/config.py", line 795, in dictConfig
    dictConfigClass(config).configure()
  File "/usr/lib/python3.5/logging/config.py", line 566, in configure
    '%r: %s' % (name, e))
ValueError: Unable to configure handler 'debug_file_handler': [Errno 2] No such file or directory: '/home/malleshappa/dataset_inventory/run_debug.log'

condor_exec.exe is the name the cluster's scheduler (HTCondor) gives to the submitted script.
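
The $'\r' messages mean the submitted shell script has Windows (CRLF) line endings, which the Linux shell on the cluster does not strip; the later FileNotFoundError then comes from the log output directory not existing (or carrying a stray carriage return). A quick check-and-fix in Python (dos2unix or sed -i 's/\r$//' script.sh would do the same; the script name is hypothetical):

script = "create_inventory.sh"                  # hypothetical script name
data = open(script, "rb").read()
print("contains carriage returns:", b"\r" in data)
open(script, "wb").write(data.replace(b"\r\n", b"\n"))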
