Fruits-360: A dataset of images containing fruits and vegetables

Version: 2020.05.18.0

A high-quality dataset of images containing fruits and vegetables. The following fruits and vegetables are included: Apples (different varieties: Crimson Snow, Golden, Golden-Red, Granny Smith, Pink Lady, Red, Red Delicious), Apricot, Avocado, Avocado ripe, Banana (Yellow, Red, Lady Finger), Beetroot Red, Blueberry, Cactus fruit, Cantaloupe (2 varieties), Carambula, Cauliflower, Cherry (different varieties, Rainier), Cherry Wax (Yellow, Red, Black), Chestnut, Clementine, Cocos, Corn (with husk), Cucumber (ripened), Dates, Eggplant, Fig, Ginger Root, Granadilla, Grape (Blue, Pink, White (different varieties)), Grapefruit (Pink, White), Guava, Hazelnut, Huckleberry, Kiwi, Kaki, Kohlrabi, Kumquats, Lemon (normal, Meyer), Lime, Lychee, Mandarine, Mango (Green, Red), Mangostan, Maracuja, Melon Piel de Sapo, Mulberry, Nectarine (Regular, Flat), Nut (Forest, Pecan), Onion (Red, White), Orange, Papaya, Passion fruit, Peach (different varieties), Pepino, Pear (different varieties, Abate, Forelle, Kaiser, Monster, Red, Stone, Williams), Pepper (Red, Green, Orange, Yellow), Physalis (normal, with Husk), Pineapple (normal, Mini), Pitahaya Red, Plum (different varieties), Pomegranate, Pomelo Sweetie, Potato (Red, Sweet, White), Quince, Rambutan, Raspberry, Redcurrant, Salak, Strawberry (normal, Wedge), Tamarillo, Tangelo, Tomato (different varieties, Maroon, Cherry Red, Yellow, not ripened, Heart), Walnut, Watermelon.

Dataset properties

Total number of images: 90483.

Training set size: 67692 images (one fruit or vegetable per image).

Test set size: 22688 images (one fruit or vegetable per image).

Multi-fruits set size: 103 images (more than one fruit (or fruit class) per image).

Number of classes: 131 (fruits and vegetables).

Image size: 100x100 pixels.

Filename format: image_index_100.jpg (e.g. 32_100.jpg) or r_image_index_100.jpg (e.g. r_32_100.jpg) or r2_image_index_100.jpg or r3_image_index_100.jpg. "r" stands for rotated fruit. "r2" means that the fruit was rotated around the 3rd axis. "100" comes from image size (100x100 pixels).
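The naming scheme can be parsed mechanically. Here is a small sketch; the helper `parse_name` and its regular expression are ours, not part of the dataset tooling:

```python
import re

# Matches "32_100.jpg", "r_32_100.jpg", "r2_32_100.jpg", "r3_32_100.jpg".
FILENAME_RE = re.compile(r"^(?:(r\d?)_)?(\d+)_100\.jpg$")

def parse_name(filename):
    """Return (rotation_tag, image_index) or None if the name doesn't match.

    rotation_tag is "" for unrotated images, else "r", "r2" or "r3".
    """
    m = FILENAME_RE.match(filename)
    if m is None:
        return None
    return m.group(1) or "", int(m.group(2))

print(parse_name("r2_32_100.jpg"))  # ('r2', 32)
```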

Different varieties of the same fruit (apple for instance) are stored as belonging to different classes.

Repository structure

Folders Training and Test contain images for training and testing purposes.
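Each class sits in its own sub-folder of Training or Test, with the folder name as the label (e.g. Training/Apple Golden 1/0_100.jpg). A hypothetical loader, using only the standard library, could enumerate the samples like this:

```python
from pathlib import Path

def list_samples(split_dir):
    """Yield (image_path, label) pairs for one split (Training or Test).

    Assumes the Fruits-360 layout: one sub-folder per class, with the
    folder name serving as the class label.
    """
    for class_dir in sorted(Path(split_dir).iterdir()):
        if class_dir.is_dir():
            for image_path in sorted(class_dir.glob("*.jpg")):
                yield image_path, class_dir.name
```

Framework loaders such as `tf.keras.utils.image_dataset_from_directory` infer labels from this same one-folder-per-class layout.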

Folder test-multiple_fruits contains images with multiple fruits. Some of them are partially covered by other fruits. This is an excellent test for real-world detection.

Folder src/image_classification contains the Python code for training the neural network. It uses the TensorFlow 2.0 library.

Folder src/image_classification_tf_1.8.0 contains the old version of the Python code for training the neural network. It uses the TensorFlow 1.8.0 library.

Folder src/utils contains the C++ code used for extracting the fruits or vegetables from background.

Folder papers contains the research papers related to this dataset.

Alternate download

The dataset can also be downloaded from Kaggle.

How to cite

Horea Muresan, Mihai Oltean, Fruit recognition from images using deep learning, Acta Univ. Sapientiae, Informatica Vol. 10, Issue 1, pp. 26-42, 2018.

How we created the dataset

Fruits and vegetables were mounted on the shaft of a low-speed motor (3 rpm) and a short, 20-second movie was recorded.

A Logitech C920 camera was used for filming the fruits. This is one of the best webcams available.

Behind the fruits we placed a white sheet of paper as background.

However, due to variations in the lighting conditions, the background was not uniform, so we wrote a dedicated algorithm that extracts the fruit from the background. The algorithm is of flood-fill type: we start from each edge of the image and mark all pixels there, then we mark all pixels found in the neighborhood of already-marked pixels for which the distance between colors is less than a prescribed value. We repeat this step until no more pixels can be marked.

All marked pixels are considered as being background (which is then filled with white) and the rest of pixels are considered as belonging to the object.

The maximum value for the distance between two neighboring pixels is a parameter of the algorithm and is set (by trial and error) for each movie.
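The flood-fill step described above can be sketched in Python. This is a minimal greyscale sketch under assumed conventions, not the authors' C++ implementation: pixels are integers, the color distance is the absolute difference, and `max_dist` plays the role of the per-movie threshold:

```python
from collections import deque

def background_mask(image, max_dist):
    """Return a boolean grid: True = background, False = object.

    image is a list of rows of greyscale int pixels. The fill is seeded
    from every border pixel and grows into neighbors whose color differs
    by at most max_dist, exactly as in the description above.
    """
    h, w = len(image), len(image[0])
    marked = [[False] * w for _ in range(h)]
    queue = deque()
    # Seed the fill from all four edges of the image.
    for y in range(h):
        for x in range(w):
            if y in (0, h - 1) or x in (0, w - 1):
                marked[y][x] = True
                queue.append((y, x))
    # Grow into similar-colored neighbors until no more pixels qualify.
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not marked[ny][nx]
                    and abs(image[ny][nx] - image[y][x]) <= max_dist):
                marked[ny][nx] = True
                queue.append((ny, nx))
    return marked
```

Everything marked True is then filled with white; the remaining pixels belong to the fruit.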

Pictures from the test-multiple_fruits folder were made with a Nexus 5X phone.

Results

We have run TensorFlow on this data; the results are presented in the research paper referenced in the "How to cite" section above.

History

Fruits were filmed at the dates given below (YYYY.MM.DD):

2017.02.25 - Apple (golden).

2017.02.28 - Apple (Red Yellow 1, red, golden2), Kiwi, Pear, Grapefruit, Lemon, Orange, Strawberry.

2017.03.05 - Apple (golden3, Braeburn, Granny Smith, red2).

2017.03.07 - Apple (red3).

2017.05.10 - Plum, Peach, Peach flat, Apricot, Nectarine, Pomegranate.

2017.05.27 - Avocado, Papaya, Grape, Cherry.

2017.12.25 - Carambula, Cactus fruit, Granadilla, Kaki, Kumquats, Passion fruit, Avocado ripe, Quince.

2017.12.28 - Clementine, Cocos, Mango, Lime, Lychee.

2017.12.31 - Apple Red Delicious, Pear Monster, Grape White.

2018.01.14 - Banana, Grapefruit Pink, Mandarine, Pineapple, Tangelo.

2018.01.19 - Huckleberry, Raspberry.

2018.01.26 - Dates, Maracuja, Plum 2, Salak, Tamarillo.

2018.02.05 - Guava, Grape White 2, Lemon Meyer

2018.02.07 - Banana Red, Pepino, Pitahaya Red.

2018.02.08 - Pear Abate, Pear Williams.

2018.05.22 - Lemon rotated, Pomegranate rotated.

2018.05.24 - Cherry Rainier, Cherry 2, Strawberry Wedge.

2018.05.26 - Cantaloupe (2 varieties).

2018.05.31 - Melon Piel de Sapo.

2018.06.05 - Pineapple Mini, Physalis, Physalis with Husk, Rambutan.

2018.06.08 - Mulberry, Redcurrant.

2018.06.16 - Cherry Red, Hazelnut, Walnut, Tomato.

2018.06.17 - Cherry Wax (Yellow, Red, Black).

2018.08.19 - Apple Red Yellow 2, Grape Blue, Grape White 2, Grape White 3, Peach 2, Plum 3, Tomato Maroon, Tomato 1-4.

2018.12.20 - Nut Pecan, Pear Kaiser, Tomato Yellow.

2018.12.21 - Banana Lady Finger, Chestnut, Mangostan.

2018.12.22 - Pomelo Sweetie.

2019.04.21 - Apple Crimson Snow, Apple Pink Lady, Blueberry, Kohlrabi, Mango Red, Pear Red, Pepper (Red, Yellow, Green).

2019.06.18 - Beetroot Red, Corn, Ginger Root, Nectarine Flat, Nut Forest, Onion Red, Onion Red Peeled, Onion White, Potato Red, Potato Red Washed, Potato Sweet, Potato White.

2019.07.07 - Cauliflower, Eggplant, Pear Forelle, Pepper Orange, Tomato Heart.

2019.09.22 - Corn Husk, Cucumber Ripe, Fig, Pear 2, Pear Stone, Tomato not Ripened, Watermelon.

License

MIT License

Copyright (c) 2017-2020 Mihai Oltean, Horea Muresan

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Contributors

horea94, imwildcat, mihaioltean


Issues

problem in test

Hi, I successfully trained the network for 40,000 steps:
...
time: 57 step: 39500 loss: 0.0116 accuracy: 1.0000
time: 60 step: 39750 loss: 0.0094 accuracy: 1.0000
time: 61 step: 40000 loss: 0.0010 accuracy: 1.0000

But at the test and predict stage, I get this result:

Predicted 367 out of 400; partial accuracy 0.8950
Predicted 465 out of 500; partial accuracy 0.9009
Predicted 560 out of 600; partial accuracy 0.9019
Predicted 648 out of 700; partial accuracy 0.8961
final accuracy on test data : 0.8942356
{'Apple Braeburn': 65, 'Apple Golden 1': 9, 'Apple Golden 2': 10, ... 'Banana': 30, 'Banana Lady Finger': 8, ...
What does this mean?

And at the prediction stage I get this result:
python detect_fruits.py --image_path=images/redapple.jpg
label index: 18 - label: cactus fruit - probability: 0.7114

Checkpoint error when freezing

When I run freeze_graph.py with the following flags:

python freeze_graph.py --input_checkpoint=C:\Tensorflow1\object_detection\fruit_models\checkpoint --output_graph=C:\Tensorflow1\object_detection\fruit_models\frozen_graph.pb --output_node_names = C:\Tensorflow1\object_detection\fruit_models\out\out --input_meta_graph_def = C:\Tensorflow1\object_detection\fruit_models\model.ckpt.meta

I encounter the following error:
File "C:\Users...\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 519, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.DataLossError: Unable to open table file C:\Tensorflow1\object_detection\fruit_models\checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?

Do you have any idea how to fix this?
Thanks in advance.

Error

Hi,
I tried running your network on my raspberry pi 3 and in the code build_image_data.py, I received the following error message:
Traceback (most recent call last):
File "build_image_data.py", line 405, in
tf.app.run()

Looking through the rest of the code, I didn't find the definition of TensorFlow's run function. Can you help me, please?

Thank you very much in advance.

Bad image

Please fix Test/Apple Golden 1/56_100.jpg

training file

Hi, good evening.
I do not have a powerful system for training. Would it be possible for you to give me the trained files?
Thanks so much.
M.Hamidi
[email protected]

Issue with labels_file

While defining the flag --labels_file, I get this error:

ArgumentError: argument --labels_file: conflicting option string: --labels_file

same problem in run

Hi. When I run and test the program using this command (my OS is Windows 7):

C:\Users\ali\Downloads\Compressed\frt\src\image_classification>python fruit_detection.py --image_path=images/cherryWaxYellow.jpg

this error occurs:

[[Node: ReaderReadV2_1 = ReaderReadV2[_device="/job:localhost/replica:0 /task:0/device:CPU:0"](WholeFileReaderV2, input_producer_1)]]

Training issue

Hey,

I don't know what I'm doing wrong, but every time I run the training after the first step the loss is already 0, and the accuracy is 1. (time: 18.3030 step: 1 loss: 0.0000 accuracy: 1.0000) I'm running the training with only the Banana dataset.

Partial accuracy 0.000???

Hello @Horea94
I used useCkpt = False during training and it gave me an error during testing, as below.

File "network/fruit_test_net.py", line 82, in <module>
    saver.restore(sess, ckpt.model_checkpoint_path)
AttributeError: 'NoneType' object has no attribute 'model_checkpoint_path'

To resolve the above issue, I commented out saver.restore(sess, ckpt.model_checkpoint_path) in the fruit_test_net.py test script. After that it runs, but it gives me partial accuracy 0.000.

Predicted 5 out of 100; partial accuracy 0.0000
Predicted 8 out of 200; partial accuracy 0.0000
Predicted 9 out of 300; partial accuracy 0.0000
Predicted 10 out of 400; partial accuracy 0.0000
Predicted 12 out of 500; partial accuracy 0.0000
Predicted 13 out of 600; partial accuracy 0.0000
Predicted 16 out of 700; partial accuracy 0.0000
Predicted 19 out of 800; partial accuracy 0.0000
Predicted 20 out of 900; partial accuracy 0.0000
Predicted 23 out of 1000; partial accuracy 0.0000
Predicted 27 out of 1100; partial accuracy 0.0000
Predicted 28 out of 1200; partial accuracy 0.0000
Predicted 32 out of 1300; partial accuracy 0.0000
Predicted 34 out of 1400; partial accuracy 0.0000
Predicted 38 out of 1500; partial accuracy 0.0000
Any idea?

how to get the label

Hello!

I see that there are only images in the train folder, but no labels. How can I get the label.txt corresponding to the images in the train folder? I would greatly appreciate it if you could help me solve this problem.

extract_images documentation

I am currently trying to use the extract_images.cpp file for a different application (figurine model dogs) and it is fairly unclear what the arguments are doing in the r_box struct and how the code works as a whole.

Can there be some more documentation on how this file operates? In my opinion it could be extremely useful for generic industry use, but right now it is very unclear.

macOS clone and run, this error occurs: OSError: Unable to open file (name = 'output_files/fruit-360 model/model.h5')

Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
data (InputLayer)            (None, 100, 100, 3)       0         
_________________________________________________________________
lambda_1 (Lambda)            (None, 100, 100, 4)       0         
_________________________________________________________________
conv1 (Conv2D)               (None, 100, 100, 16)      1616      
_________________________________________________________________
conv1_relu (Activation)      (None, 100, 100, 16)      0         
_________________________________________________________________
pool1 (MaxPooling2D)         (None, 50, 50, 16)        0         
_________________________________________________________________
conv2 (Conv2D)               (None, 50, 50, 32)        12832     
_________________________________________________________________
conv2_relu (Activation)      (None, 50, 50, 32)        0         
_________________________________________________________________
pool2 (MaxPooling2D)         (None, 25, 25, 32)        0         
_________________________________________________________________
conv3 (Conv2D)               (None, 25, 25, 64)        51264     
_________________________________________________________________
conv3_relu (Activation)      (None, 25, 25, 64)        0         
_________________________________________________________________
pool3 (MaxPooling2D)         (None, 12, 12, 64)        0         
_________________________________________________________________
conv4 (Conv2D)               (None, 12, 12, 128)       204928    
_________________________________________________________________
conv4_relu (Activation)      (None, 12, 12, 128)       0         
_________________________________________________________________
pool4 (MaxPooling2D)         (None, 6, 6, 128)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 4608)              0         
_________________________________________________________________
fcl1 (Dense)                 (None, 1024)              4719616   
_________________________________________________________________
dropout_1 (Dropout)          (None, 1024)              0         
_________________________________________________________________
fcl2 (Dense)                 (None, 128)               131200    
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0         
_________________________________________________________________
predictions (Dense)          (None, 121)               15609     
=================================================================
Total params: 5,137,065
Trainable params: 5,137,065
Non-trainable params: 0
_________________________________________________________________
None
Found 55357 images belonging to 121 classes.
Found 6119 images belonging to 121 classes.
Found 20618 images belonging to 121 classes.
Epoch 1/25
1108/1108 [==============================] - 278s 251ms/step - loss: 3.7775 - accuracy: 0.1765 - val_loss: 2.2806 - val_accuracy: 0.7799
Epoch 2/25
/Users/mingh/.pyenv/versions/3.8.1/lib/python3.8/site-packages/keras/callbacks/callbacks.py:706: RuntimeWarning: Can save best model only with val_acc available, skipping.
  warnings.warn('Can save best model only with %s available, '
1108/1108 [==============================] - 276s 249ms/step - loss: 0.7650 - accuracy: 0.7800 - val_loss: 0.3695 - val_accuracy: 0.9317
Epoch 3/25
1108/1108 [==============================] - 270s 244ms/step - loss: 0.2649 - accuracy: 0.9177 - val_loss: 0.4914 - val_accuracy: 0.9593
Epoch 4/25
1108/1108 [==============================] - 271s 245ms/step - loss: 0.1443 - accuracy: 0.9546 - val_loss: 0.0690 - val_accuracy: 0.9747
Epoch 5/25
1108/1108 [==============================] - 275s 248ms/step - loss: 0.0923 - accuracy: 0.9704 - val_loss: 0.2997 - val_accuracy: 0.9745
Epoch 6/25
1108/1108 [==============================] - 279s 252ms/step - loss: 0.0671 - accuracy: 0.9788 - val_loss: 0.0598 - val_accuracy: 0.9799
Epoch 7/25
1108/1108 [==============================] - 270s 243ms/step - loss: 0.0487 - accuracy: 0.9843 - val_loss: 0.0040 - val_accuracy: 0.9822
Epoch 8/25
1108/1108 [==============================] - 266s 240ms/step - loss: 0.0446 - accuracy: 0.9862 - val_loss: 0.0636 - val_accuracy: 0.9820
Epoch 9/25
1108/1108 [==============================] - 265s 239ms/step - loss: 0.0370 - accuracy: 0.9887 - val_loss: 0.0085 - val_accuracy: 0.9806
Epoch 10/25
1108/1108 [==============================] - 266s 240ms/step - loss: 0.0304 - accuracy: 0.9901 - val_loss: 0.0721 - val_accuracy: 0.9835

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.05000000074505806.
Epoch 11/25
1108/1108 [==============================] - 268s 242ms/step - loss: 0.0136 - accuracy: 0.9958 - val_loss: 0.0014 - val_accuracy: 0.9874
Epoch 12/25
1108/1108 [==============================] - 266s 240ms/step - loss: 0.0101 - accuracy: 0.9968 - val_loss: 0.1642 - val_accuracy: 0.9873
Epoch 13/25
1108/1108 [==============================] - 276s 249ms/step - loss: 0.0085 - accuracy: 0.9972 - val_loss: 2.9743e-04 - val_accuracy: 0.9887
Epoch 14/25
1108/1108 [==============================] - 267s 241ms/step - loss: 0.0088 - accuracy: 0.9971 - val_loss: 4.8541e-04 - val_accuracy: 0.9887
Epoch 15/25
1108/1108 [==============================] - 270s 244ms/step - loss: 0.0082 - accuracy: 0.9976 - val_loss: 0.0287 - val_accuracy: 0.9882
Epoch 16/25
1108/1108 [==============================] - 267s 241ms/step - loss: 0.0077 - accuracy: 0.9976 - val_loss: 1.9202e-04 - val_accuracy: 0.9877
Epoch 17/25
1108/1108 [==============================] - 262s 237ms/step - loss: 0.0076 - accuracy: 0.9978 - val_loss: 0.0138 - val_accuracy: 0.9891
Epoch 18/25
1108/1108 [==============================] - 268s 242ms/step - loss: 0.0073 - accuracy: 0.9977 - val_loss: 0.1534 - val_accuracy: 0.9899
Epoch 19/25
1108/1108 [==============================] - 268s 242ms/step - loss: 0.0063 - accuracy: 0.9978 - val_loss: 0.0231 - val_accuracy: 0.9894

Epoch 00019: ReduceLROnPlateau reducing learning rate to 0.02500000037252903.
Epoch 20/25
1108/1108 [==============================] - 272s 245ms/step - loss: 0.0045 - accuracy: 0.9986 - val_loss: 0.0017 - val_accuracy: 0.9918
Epoch 21/25
1108/1108 [==============================] - 279s 252ms/step - loss: 0.0034 - accuracy: 0.9991 - val_loss: 0.0782 - val_accuracy: 0.9889
Epoch 22/25
1108/1108 [==============================] - 270s 244ms/step - loss: 0.0041 - accuracy: 0.9987 - val_loss: 0.0175 - val_accuracy: 0.9895

Epoch 00022: ReduceLROnPlateau reducing learning rate to 0.012500000186264515.
Epoch 23/25
1108/1108 [==============================] - 269s 242ms/step - loss: 0.0033 - accuracy: 0.9991 - val_loss: 0.0074 - val_accuracy: 0.9918
Epoch 24/25
1108/1108 [==============================] - 268s 242ms/step - loss: 0.0030 - accuracy: 0.9991 - val_loss: 0.0096 - val_accuracy: 0.9907
Epoch 25/25
1108/1108 [==============================] - 272s 246ms/step - loss: 0.0024 - accuracy: 0.9994 - val_loss: 8.0244e-06 - val_accuracy: 0.9920
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-2-0f6530fe8848> in <module>
     40 
     41 model = network(input_shape=input_shape, num_classes=num_classes)
---> 42 train_and_evaluate_model(model, name="fruit-360-model")

<ipython-input-1-061a40b784e9> in train_and_evaluate_model(model, name, epochs, batch_size, verbose, useCkpt)
    130                                   callbacks=[learning_rate_reduction, save_model])
    131 
--> 132     model.load_weights(model_out_dir + "/model.h5")
    133 
    134     validationGen.reset()

~/.pyenv/versions/3.8.1/lib/python3.8/site-packages/keras/engine/saving.py in load_wrapper(*args, **kwargs)
    490                 os.remove(tmp_filepath)
    491             return res
--> 492         return load_function(*args, **kwargs)
    493 
    494     return load_wrapper

~/.pyenv/versions/3.8.1/lib/python3.8/site-packages/keras/engine/network.py in load_weights(self, filepath, by_name, skip_mismatch, reshape)
   1219         if h5py is None:
   1220             raise ImportError('`load_weights` requires h5py.')
-> 1221         with h5py.File(filepath, mode='r') as f:
   1222             if 'layer_names' not in f.attrs and 'model_weights' in f:
   1223                 f = f['model_weights']

~/.pyenv/versions/3.8.1/lib/python3.8/site-packages/h5py/_hl/files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, rdcc_nslots, rdcc_nbytes, rdcc_w0, track_order, **kwds)
    404             with phil:
    405                 fapl = make_fapl(driver, libver, rdcc_nslots, rdcc_nbytes, rdcc_w0, **kwds)
--> 406                 fid = make_fid(name, mode, userblock_size,
    407                                fapl, fcpl=make_fcpl(track_order=track_order),
    408                                swmr=swmr)

~/.pyenv/versions/3.8.1/lib/python3.8/site-packages/h5py/_hl/files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
    171         if swmr and swmr_support:
    172             flags |= h5f.ACC_SWMR_READ
--> 173         fid = h5f.open(name, flags, fapl=fapl)
    174     elif mode == 'r+':
    175         fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/h5f.pyx in h5py.h5f.open()

OSError: Unable to open file (unable to open file: name = 'output_files/fruit-360 model/model.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

Problems to run freeze_graph

After training, I tried to generate a graph with the freeze_graph.py script and received this error:

(machine-learning) D:\Carine\Fruit-Images-Dataset\src\image_classification\utils>python freeze_graph.py
Input checkpoint '' doesn't exist!
