
deep_learning_for_camera_trap_images's People

Contributors

arashno


deep_learning_for_camera_trap_images's Issues

Is the code in phase2/eval.py wrong?

The code at line 162 in phase2/eval.py:

```python
true_top5_predictions_count += top5_accuracy
true_top3_predictions_count += top3_accuracy
one_bin_off_val += obol_val

print(batch_format_str % (step,
      '[' + ', '.join('%.3f' % (item / (step + 1.0)) for item in true_predictions_count) + ']',
      true_top5_predictions_count / (step + 1.0),
      true_top3_predictions_count / (step + 1.0),
      obol_val / (step + 1.0),
      accv_all / total_examples,
      prcv_all / total_examples,
      recv_all / total_examples))
```

I think `obol_val` in the print call is not right; it should be `one_bin_off_val`. Am I right?
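For reference, the fix I have in mind (an assumption on my part, pending confirmation) would use the accumulator, matching the neighboring top-5 and top-3 counters:

```python
# Suggested fix: report the accumulated one-bin-off value rather than the
# current batch's obol_val (my assumption, based on the other counters).
print(batch_format_str % (step,
      '[' + ', '.join('%.3f' % (item / (step + 1.0)) for item in true_predictions_count) + ']',
      true_top5_predictions_count / (step + 1.0),
      true_top3_predictions_count / (step + 1.0),
      one_bin_off_val / (step + 1.0),  # was: obol_val / (step + 1.0)
      accv_all / total_examples,
      prcv_all / total_examples,
      recv_all / total_examples))
```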

EF_val.csv - how to get?

I was trying to run your code with the pre-trained model.

Where can I find EF_val.csv?

Please let me know.

Converting model to Pytorch

Hi @arashno,

I need to convert your res152 recognition model to PyTorch. I am using MMdnn for the conversion, but I cannot find the output node that needs to be given to the MMdnn conversion command. I have visualized the graph using the meta file, and there is no name that looks like output, softmax, or predictions.

Can you or anyone else help with the conversion?
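For anyone attempting the same thing, this is the kind of scan I have been using to hunt for the node, as a minimal sketch assuming TensorFlow 1.x; the 'model.ckpt.meta' path is illustrative, not the repository's actual file name:

```python
# List operation names from the checkpoint's meta graph; the final
# classifier op is often near the end of the list even when nothing
# is named 'softmax'. The meta-file path below is an assumption.
import tensorflow as tf

tf.train.import_meta_graph('model.ckpt.meta')
ops = tf.get_default_graph().get_operations()
for op in ops[-20:]:  # the last ops usually include the output node
    print(op.name, op.type)
```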

Can you please provide a pb file?

If possible, can you please provide a .pb file (a TF SavedModel) for the pre-trained models instead of checkpoint files?

Please let me know.
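In the meantime, here is a minimal sketch of freezing the checkpoint into a .pb oneself, assuming TensorFlow 1.x; the checkpoint prefix 'model.ckpt' and the output node name are hypothetical and would need to match the actual graph (see the previous issue about locating that node):

```python
# Freeze a TF 1.x checkpoint into a single .pb; the checkpoint prefix and
# the output node name below are assumptions, not the repository's values.
import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('model.ckpt.meta')
    saver.restore(sess, 'model.ckpt')
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['output_node_name'])
    with tf.gfile.GFile('model.pb', 'wb') as f:
        f.write(frozen.SerializeToString())
```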

Pre-processing test images: normalizing a dataset

Hello there,

I am attempting to train and test a new image classification algorithm using the R package MLWIC provided with your recent paper, Tabak et al. 2018. I am an experienced R user but have no experience with Python.

I'm trying to process images to ready them for training and testing. The Tabak et al. 2018 paper mentions that the authors followed the methods in your recent publication (Norouzzadeh et al. 2018, appendix).

I have a few questions.

1. Am I correct in assuming that the train command in MLWIC performs "random cropping, horizontal flipping, brightness modification, and contrast modification" to each training image, as is recommended in both papers?

There is code provided within an "L1" folder that does seem to do this, but it's not clear (to me) whether this code is leveraged during MLWIC::train. I did ask Mikey Tabak about this (here), and I wondered if you could provide any further clarification.
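For concreteness, this is my rough understanding of those four operations, as a minimal sketch assuming TensorFlow 1.x; the crop size and the brightness/contrast parameters are illustrative guesses, not the values the authors used:

```python
# The four augmentations named in the papers; every numeric setting here
# is an illustrative assumption, not the repository's actual configuration.
import tensorflow as tf

def augment(image):
    image = tf.random_crop(image, [224, 224, 3])             # random cropping
    image = tf.image.random_flip_left_right(image)           # horizontal flipping
    image = tf.image.random_brightness(image, max_delta=63)  # brightness modification
    image = tf.image.random_contrast(image, 0.2, 1.8)        # contrast modification
    return image
```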

2. How is image normalization carried out for the test dataset?

When I read through the Norouzzadeh et al. 2018 appendix (linked above), there is a section on the second page, second paragraph, that states:

After scaling down the images, we computed the mean and standard deviation of pixel intensities for each color channel separately and then we normalized the images by subtracting the average and dividing by the standard deviation.

I did some reading and found that some authors use the mean and standard deviation of the entire dataset, while others use the mean and standard deviation of each image. Forgive me if this question is naive (I am, after all, the intended "ecologist-not-data-scientist" audience for the program 😀): did you use the mean and SD of each image, or of the whole dataset?
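To make the two conventions concrete, here is a minimal NumPy sketch of my understanding of each; `img` is a float32 HxWx3 array, and `dataset_mean` / `dataset_std` are per-channel statistics computed once over the whole training set (all names illustrative):

```python
import numpy as np

def normalize_per_image(img):
    # statistics from this image alone, one value per color channel
    mean = img.mean(axis=(0, 1))
    std = img.std(axis=(0, 1))
    return (img - mean) / std

def normalize_per_dataset(img, dataset_mean, dataset_std):
    # statistics computed once over the entire training set
    return (img - dataset_mean) / dataset_std
```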

I have one final question, but I feel that it is more general, so I've posted it on Stack Overflow. If you have time to check it out there, I would be very grateful for your two cents.

https://stackoverflow.com/questions/55306443/normalizing-an-image-in-r-using-mean-and-standard-deviation

Is it a best practice to remove camera software labels from input photos?

My end goal is to train a model with input from a number of different datasets across my study area.

I noticed that some cameras, e.g., ScoutGuard, label images just at the bottom left and right - e.g., "ScoutGuard 09.15.2010 17:05:01". The label has a grey background and black text.

Other cameras, e.g., Reconyx, will label images at the top and bottom of the image with a band of black background and white text.

I'm concerned that these areas of the photo will influence the decisions made by the machine learning algorithm. For one, the algorithm might associate one style of label with certain species that are more common in that dataset. Second, the dark black or bright white in the labels will undoubtedly shift the pixel statistics of the entire image when it is normalized before entering the machine learning pipeline.

Did you remove these labels from the images before processing them in the machine learning algorithm you've used here?

I ask because I had figured I would have to do this, but I wasn't sure if I was overthinking it.
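For what it's worth, what I had in mind is simply cropping the bands off before any preprocessing. A minimal Pillow sketch, where the 30-pixel band height is an illustrative guess that depends on the camera model:

```python
# Strip fixed-height info bands from the top and bottom of a photo;
# band_px is an assumption and varies by camera manufacturer.
from PIL import Image

def strip_bands(path, band_px=30):
    img = Image.open(path)
    w, h = img.size
    return img.crop((0, band_px, w, h - band_px))
```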

Why is there no animal behavior output? Doesn't the paper say you can predict what an animal is doing?

Result:

```
1,b'/home/lf/\xe6\xa1\x8c\xe9\x9d\xa2/jpg/100.jpg',1,[1, 0],[0.9994, 0.0006]
2,b'/home/lf/\xe6\xa1\x8c\xe9\x9d\xa2/jpg/23.jpg',1,[1, 0],[0.9996, 0.0004]
```

But I didn't find any animal-behavior results. Looking at [0.9996, 0.0004], I think there are only top-1 and top-5 results; there is no output for the top-2 through top-4 results.

I also got this output on the command line, which I don't understand:

```
Batch Number: 0, Top-1 Hit: 2, Top-5 Hit: 2, Top-1 Accuracy: 1.000, Top-5 Accuracy: 1.000
```

Unclear how to use the code

Despite looking at the recommended repo, it's still a little unclear how to use this.

An example, including an appropriately formatted input file and a couple of example images, would go a long way toward making this useful to others.
