
realtimeobjectdetection's People

Contributors

nicknochnack


realtimeobjectdetection's Issues

Is the 'Creating TF Records' part necessary?

I'm a newbie trying a tutorial I found on YouTube. Everything was perfect until I came to the 'Creating TF Records' part. I see many errors; the first one was a missing pandas module, which I solved. Then I encountered an error which says:
File "Tensorflow/scripts/generate_tfrecord.py", line 29, in
from object_detection.utils import dataset_util, label_map_util
ModuleNotFoundError: No module named 'object_detection'

I searched this error and ran 'pip install tensorflow-object-detection-api' in cmd. Then I encountered another error, which says:

File "Tensorflow/scripts/generate_tfrecord.py", line 61, in label_map = label_map_util.load_labelmap(args.labels_path) File "C:\Python38\lib\site-packages\object_detection\utils\label_map_util.py", line 132, in load_labelmap with tf.gfile.GFile(path, 'r') as fid:
AttributeError: module 'tensorflow' has no attribute 'gfile'

Do you have any idea how to fix this? Or should I skip this part?

gfile

I have tried changing tf.gfile.GFile to tf.io.gfile and I get the same error:

2023-09-09 21:56:44.780502: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2023-09-09 21:56:44.780726: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
  File "c:\Users\ASUS\Desktop\RealTimeObjectDetection\Tensorflow\scripts\generate_tfrecord.py", line 61, in <module>
    label_map = label_map_util.load_labelmap(args.labels_path)
  File "c:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\object_detection\utils\label_map_util.py", line 132, in load_labelmap
    with tf.gfile.GFile(path, 'r') as fid:
AttributeError: module 'tensorflow' has no attribute 'gfile'
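A possible workaround (not confirmed in this thread): the pip package tensorflow-object-detection-api is an old TF1-era build whose label_map_util still calls tf.gfile, which TF2 moved to tf.io.gfile. A minimal sketch, assuming the alias is added near the top of generate_tfrecord.py before label_map_util is imported:

```python
# Hypothetical patch (an assumption, not the author's fix): restore the tf.gfile
# alias that TF2 removed, then import the utils that still expect it.
import tensorflow as tf

if not hasattr(tf, "gfile"):
    tf.gfile = tf.io.gfile  # tf.io.gfile.GFile is the TF2 home of the old GFile class

from object_detection.utils import dataset_util, label_map_util
```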

Image augmentation with labelled datasets

I am using the Object Detection API ssd_mobnet to detect my objects. I want to apply data augmentation to my dataset, but I am not sure whether I should label my images first or apply data augmentation first. If I label my dataset first and then apply data augmentation, will my augmented data be labelled automatically, or should I label it again?
Also, the augmented data can't be labelled properly because the images are rotated by different angles.
So is there any solution to this issue?
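For what it's worth, two common approaches (neither is specific to this repo): label first and let training augment on the fly via data_augmentation_options in pipeline.config, which keeps the boxes in sync automatically; or, for offline augmentation, use a bbox-aware library so boxes are transformed together with the pixels. A minimal sketch with albumentations, assuming Pascal VOC style boxes; file and class names below are placeholders:

```python
# Sketch only: albumentations transforms the image and its bounding boxes together,
# so rotated/flipped copies keep correct labels.
import cv2
import albumentations as A

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.Rotate(limit=15, p=0.5),
        A.RandomBrightnessContrast(p=0.3),
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)

image = cv2.imread("thumbsup.01.jpg")              # placeholder file name
bboxes = [[48, 60, 320, 400]]                      # [xmin, ymin, xmax, ymax] from the XML
labels = ["thumbsup"]                              # placeholder class name

out = transform(image=image, bboxes=bboxes, class_labels=labels)
aug_image, aug_bboxes = out["image"], out["bboxes"]  # write these back out as a new sample
```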

Detecting wrong objects

Hi.
I am trying to detect gestures, at the moment only 2 gestures, but I get a lot of false positive detections.
The train folder has 67 pics (including 7 background pictures); the test folder has 6 pics.
Model: ssd_mobilenet_v2_fpnlite 320x320.
TF2 version: 2.4.1.
Usually I go with 50000 steps and a batch size of 8 or lower; my GPU can't handle a bigger batch size.
I changed the background of some of my images to different colors, keeping only my hand on some color.

I tried to add background images as described here #tensorflow/models#3365 (comment)
But I am not sure if it helped because I still have a lot of false positive detections.
Should I have added backgrounds to the test folder as well?

I also tried lowering the learning rate (0.07 -> 0.007).
Usually I get a loss around 0.058 or 0.150.

I started training with a learning rate of 0.07 for the first 50000 steps, then lowered it to 0.007 for the second 50000 steps, simply to understand whether changing it after 50000 steps gives a different result. If that doesn't work, I want to try increasing the number of pictures.

What should I do to get rid of the false positive detections?
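Not an answer from the maintainer, but one inference-side mitigation is to raise the score threshold so low-confidence boxes are never drawn. The 0.8 value below is arbitrary, and the dict layout assumes the tutorial's post-processing (tensors already converted to NumPy arrays):

```python
# Keep only confident detections before visualising (threshold is a placeholder).
import numpy as np

min_score = 0.8
keep = detections["detection_scores"] >= min_score

boxes = detections["detection_boxes"][keep]
classes = detections["detection_classes"][keep]
scores = detections["detection_scores"][keep]
print(f"kept {int(np.sum(keep))} of {len(keep)} boxes above {min_score}")
```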

issue while generating tf records

Traceback (most recent call last):
File "G:\hand\RealTimeObjectDetection\Tensorflow\scripts\generate_tfrecord.py", line 168, in
tf.app.run()
File "C:\Users\new\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\platform\app.py", line 36, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\new\AppData\Local\Programs\Python\Python310\lib\site-packages\absl\app.py", line 312, in run
_run_main(main, args)
File "C:\Users\new\AppData\Local\Programs\Python\Python310\lib\site-packages\absl\app.py", line 258, in _run_main
sys.exit(main(argv))
File "G:\hand\RealTimeObjectDetection\Tensorflow\scripts\generate_tfrecord.py", line 158, in main
tf_example = create_tf_example(group, path)
File "G:\hand\RealTimeObjectDetection\Tensorflow\scripts\generate_tfrecord.py", line 132, in create_tf_example
classes.append(class_text_to_int(row['class']))
File "G:\hand\RealTimeObjectDetection\Tensorflow\scripts\generate_tfrecord.py", line 101, in class_text_to_int
return label_map_dict[row_label]
KeyError: 'hello'
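The KeyError means the class name written in the XML annotations ('hello') is missing from label_map.pbtxt, so class_text_to_int cannot map it to an id. A quick diagnostic sketch; the paths follow the tutorial's layout and are assumptions:

```python
# Compare every class name used in the annotations against the label map.
import glob
import xml.etree.ElementTree as ET
from object_detection.utils import label_map_util

LABEL_MAP = "Tensorflow/workspace/annotations/label_map.pbtxt"   # assumed path
ANNOTATIONS = "Tensorflow/workspace/images/train/*.xml"          # assumed path

label_map = label_map_util.get_label_map_dict(LABEL_MAP)
annotated = {
    obj.find("name").text
    for xml_file in glob.glob(ANNOTATIONS)
    for obj in ET.parse(xml_file).getroot().findall("object")
}
print("missing from label map:", annotated - set(label_map))     # e.g. {'hello'}
```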

TensorSliceReader constructor:

I keep getting the following error when I try to train the model. The checkpoint files are saved in the pre-trained-models folder, but they aren't being found.

raise errors_impl.NotFoundError(None, None, error_message)
tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for Tensorflow/workspace/pre-trained-models/faster_rcnn_resnet101_v1_640x640_coco17_tpu-8.tar/faster_rcnn_resnet101_v1_640x640_coco17_tpu-8/faster_rcnn_resnet101_v1_640x640_coco17_tpu-8/checkpoint/ckpt-0
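The failing path still contains the ".tar" archive name and a doubled model folder, which suggests fine_tune_checkpoint in pipeline.config points into the downloaded archive rather than the extracted model directory. A sketch for rewriting the path after extracting the archive; all paths below are assumptions:

```python
# Point fine_tune_checkpoint at the extracted checkpoint and save the config back out.
import tensorflow as tf
from object_detection.protos import pipeline_pb2
from google.protobuf import text_format

CONFIG_PATH = "Tensorflow/workspace/models/my_model/pipeline.config"   # assumed path

pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(CONFIG_PATH, "r") as f:
    text_format.Merge(f.read(), pipeline_config)

pipeline_config.train_config.fine_tune_checkpoint = (
    "Tensorflow/workspace/pre-trained-models/"
    "faster_rcnn_resnet101_v1_640x640_coco17_tpu-8/checkpoint/ckpt-0"
)

with tf.io.gfile.GFile(CONFIG_PATH, "w") as f:
    f.write(text_format.MessageToString(pipeline_config))
```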

ValueError: 'images' must have either 3 or 4 dimensions.

Any solution for this error? @nicknochnack


ValueError Traceback (most recent call last)
in ()
4
5 input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
----> 6 detections = detect_fn(input_tensor)
7
8 num_detections = int(detections.pop('num_detections'))

8 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
975 except Exception as e: # pylint:disable=broad-except
976 if hasattr(e, "ag_error_metadata"):
--> 977 raise e.ag_error_metadata.to_exception(e)
978 else:
979 raise

ValueError: in user code:

<ipython-input-88-7dabc3bc48bd>:11 detect_fn  *
    image, shapes = detection_model.preprocess(image)
/usr/local/lib/python3.7/dist-packages/object_detection/meta_architectures/ssd_meta_arch.py:484 preprocess  *
    normalized_inputs, self._image_resizer_fn)
/usr/local/lib/python3.7/dist-packages/object_detection/utils/shape_utils.py:492 resize_images_and_return_shapes  *
    outputs = static_or_dynamic_map_fn(
/usr/local/lib/python3.7/dist-packages/object_detection/utils/shape_utils.py:246 static_or_dynamic_map_fn  *
    outputs = [fn(arg) for arg in tf.unstack(elems)]
/usr/local/lib/python3.7/dist-packages/object_detection/core/preprocessor.py:3259 resize_image  *
    new_image = tf.image.resize_images(
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper  **
    return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/image_ops_impl.py:1468 resize_images
    skip_resize_if_same=True)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/image_ops_impl.py:1320 _resize_images_common
    raise ValueError('\'images\' must have either 3 or 4 dimensions.')

ValueError: 'images' must have either 3 or 4 dimensions.
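In the real-time loop this error is very often caused by an empty frame: cv2.VideoCapture returns None when the camera index is wrong or the device is busy, and expanding an empty array does not yield an image tensor. A small guard, offered as a sketch only:

```python
# Skip frames the camera failed to deliver instead of feeding them to the model.
import cv2
import numpy as np
import tensorflow as tf

cap = cv2.VideoCapture(0)          # try other indices (1, 2, ...) if this stays empty
ret, frame = cap.read()
if not ret or frame is None:
    raise RuntimeError("No frame from camera; check the VideoCapture index/permissions.")

image_np = np.array(frame)
print(image_np.shape)              # expect (height, width, 3)
input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
```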

ImportError: cannot import name 'string_int_label_map_pb2' from 'object_detection.protos'

env: Win10 + Anaconda
Traceback (most recent call last):
File "F:\SSD-MobileNet\Python\sign_language\RealTimeObjectDetection\Tensorflow\scripts\generate_tfrecord.py", line 29, in
from object_detection.utils import dataset_util, label_map_util
File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection\utils\label_map_util.py", line 21, in
from object_detection.protos import string_int_label_map_pb2
ImportError: cannot import name 'string_int_label_map_pb2' from 'object_detection.protos' (C:\ProgramData\Anaconda3\lib\site-packages\object_detection\protos\__init__.py)
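string_int_label_map_pb2 is generated by protoc, so when it cannot be imported the protos were usually never compiled. A sketch that shells out to protoc from the research folder; it assumes protoc is on PATH and that the TensorFlow models repo is cloned at the path below:

```python
# Compile the Object Detection API protos into *_pb2.py modules.
import glob
import os
import subprocess

RESEARCH_DIR = "Tensorflow/models/research"   # assumed clone location of tensorflow/models

protos = [
    os.path.relpath(p, RESEARCH_DIR)
    for p in glob.glob(os.path.join(RESEARCH_DIR, "object_detection", "protos", "*.proto"))
]
subprocess.run(["protoc", *protos, "--python_out=."], cwd=RESEARCH_DIR, check=True)
```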

error when creating TF records

I have an error when running

!python {SCRIPTS_PATH + '/generate_tfrecord.py'} -x {IMAGE_PATH + '/train'} -l {ANNOTATION_PATH + '/label_map.pbtxt'} -o {ANNOTATION_PATH + '/train.record'}
!python {SCRIPTS_PATH + '/generate_tfrecord.py'} -x{IMAGE_PATH + '/test'} -l {ANNOTATION_PATH + '/label_map.pbtxt'} -o {ANNOTATION_PATH + '/test.record'}

error:

Traceback (most recent call last):
  File "F:\Robo\Sign detector\Tensorflow\scripts\generate_tfrecord.py", line 168, in <module>
    tf.app.run()
  File "C:\Users\abdul\.conda\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 36, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "C:\Users\abdul\.conda\envs\tensorflow\lib\site-packages\absl\app.py", line 312, in run
    _run_main(main, args)
  File "C:\Users\abdul\.conda\envs\tensorflow\lib\site-packages\absl\app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "F:\Robo\Sign detector\Tensorflow\scripts\generate_tfrecord.py", line 158, in main
    tf_example = create_tf_example(group, path)
  File "F:\Robo\Sign detector\Tensorflow\scripts\generate_tfrecord.py", line 132, in create_tf_example
    classes.append(class_text_to_int(row['class']))
  File "F:\Robo\Sign detector\Tensorflow\scripts\generate_tfrecord.py", line 101, in class_text_to_int
    return label_map_dict[row_label]
KeyError: 'yes'

I hope anyone can help me.
Thanks in advance

Error when creating TF records file

https://www.youtube.com/watch?v=V0Pk_dPU2lY

At minute 52:00, when I run the python command in cmd, I get this error:

Traceback (most recent call last):
File "Tensorflow/scripts/generate_tfrecord.py", line 27, in
import tensorflow.compat.v1 as tf
ModuleNotFoundError: No module named 'tensorflow'

I followed all your steps from "how to install tensorflow in 5 steps"...

error in creating tf record

Traceback (most recent call last):
File "C:\Users\Pictures\sign recognition\RealTimeObjectDetection\Tensorflow\scripts\generate_tfrecord.py", line 29, in
from object_detection.utils import dataset_util, label_map_util
File "C:\Users\AppData\Roaming\Python\Python311\site-packages\object_detection\utils\label_map_util.py", line 21, in
from object_detection.protos import string_int_label_map_pb2
File "C:\Users\AppData\Roaming\Python\Python311\site-packages\object_detection\protos\string_int_label_map_pb2.py", line 36, in
_descriptor.FieldDescriptor(
File "C:\Users\AppData\Roaming\Python\Python311\site-packages\google\protobuf\descriptor.py", line 561, in new
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:

  1. Downgrade the protobuf package to 3.20.x or lower.
  2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
Please tell me how to resolve this error.
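The error message itself lists the two workarounds. As a concrete, hedged example of each: pin protobuf from the terminal with `python -m pip install "protobuf<=3.20.3"`, or force the pure-Python implementation before anything imports protobuf:

```python
# Must run before tensorflow / object_detection are imported in the same process.
import os
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"  # slower pure-Python parsing
```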

ModuleNotFoundError: No module named 'tensorflow'

Hi Nick, while trying to get past step 5, Update Config for Transfer Learning, in your tutorial notebook, I got this error:

ModuleNotFoundError Traceback (most recent call last)
in
----> 1 import tensorflow as tf
2 from object_detection.utils import config_util
3 from object_detection.protos import pipeline_pb2
4 from google.protobuf import text_format

ModuleNotFoundError: No module named 'tensorflow'

I already followed your steps for installing Microsoft C++ and protoc, and also the setup requirements as in this video: https://www.youtube.com/watch?v=dZh_ps8gKgs&feature=youtu.be

Can you help me with this issue?

The only things I did not install are CUDA and cuDNN, as I don't have an NVIDIA GPU.

I'm waiting for your response

Thank you
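This usually means the Jupyter kernel is running a different Python environment from the one where TensorFlow was installed. A quick diagnostic, not from the thread:

```python
# Show which interpreter the notebook kernel is actually using.
import sys
print(sys.executable)

# Then, from a notebook cell, install into exactly that interpreter:
# !{sys.executable} -m pip install tensorflow
```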

ModuleNotFoundError: No module named 'tensorflow.contrib'

Hey, I have a problem with TensorFlow versions, and every time I run this:

import os
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.builders import model_builder

print(CONFIG_PATH)

I get this error:

ModuleNotFoundError Traceback (most recent call last)
in
2 from object_detection.utils import label_map_util
3 from object_detection.utils import visualization_utils as viz_utils
----> 4 from object_detection.builders import model_builder
5
6 print(CONFIG_PATH)

~/.local/lib/python3.6/site-packages/object_detection/builders/model_builder.py in
27 from object_detection.builders import image_resizer_builder
28 from object_detection.builders import losses_builder
---> 29 from object_detection.builders import matcher_builder
30 from object_detection.builders import post_processing_builder
31 from object_detection.builders import region_similarity_calculator_builder as sim_calc

~/.local/lib/python3.6/site-packages/object_detection/builders/matcher_builder.py in
21
22 if tf_version.is_tf1():
---> 23 from object_detection.matchers import bipartite_matcher # pylint: disable=g-import-not-at-top
24
25

~/.local/lib/python3.6/site-packages/object_detection/matchers/bipartite_matcher.py in
18 import tensorflow.compat.v1 as tf
19
---> 20 from tensorflow.contrib.image.python.ops import image_ops
21 from object_detection.core import matcher
22

ModuleNotFoundError: No module named 'tensorflow.contrib'

Please Help
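tensorflow.contrib only exists in TF1; the traceback shows matcher_builder taking the is_tf1() branch, which points to a TF1-flavoured install of the Object Detection API (or an old pip build) running against TF2. A diagnostic sketch; whether reinstalling the API from its TF2 setup.py fixes it is an assumption:

```python
# Check which TF flavour the installed Object Detection API thinks it is running under.
import tensorflow as tf
from object_detection.utils import tf_version

print("tensorflow:", tf.__version__)        # should be 2.x for this tutorial
print("API sees TF2:", tf_version.is_tf2()) # False -> the API was installed for TF1
```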

Unable to resolve an error

We are facing this issue:

from object_detection import model_lib_v2
ImportError: cannot import name 'model_lib_v2' from 'object_detection' (C:\Users\darsh\AppData\Local\Programs\Python\Python37\lib\site-packages\object_detection\__init__.py)

To resolve this issue we manually moved model_lib_v2 into the object_detection folder.

That resolved the issue, but another error popped up: "no module named pycocotools". To rectify this error we tried cloning the pycocotools git repo, but it throws "failed building wheel for pycocotools" during the build.

We have tried approximately every solution available on the internet and are still not able to resolve this issue. Please help us out.

tf_slim error in Step 7 of loading trained model from checkpoint

Hi, I am trying to run step 7 in the tutorial.
This line is giving me an error:
from object_detection.builders import model_builder

It throws an error:

ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-1-067428168487> in <module>
      2 from object_detection.utils import label_map_util
      3 from object_detection.utils import visualization_utils as viz_utils
----> 4 from object_detection.builders import model_builder

~/RealTimeObjectDetection/object_detection/builders/model_builder.py in <module>
     21 from absl import logging
     22 
---> 23 from object_detection.builders import anchor_generator_builder
     24 from object_detection.builders import box_coder_builder
     25 from object_detection.builders import box_predictor_builder

~/RealTimeObjectDetection/object_detection/builders/anchor_generator_builder.py in <module>
     21 from __future__ import print_function
     22 from six.moves import zip
---> 23 from object_detection.anchor_generators import flexible_grid_anchor_generator
     24 from object_detection.anchor_generators import grid_anchor_generator
     25 from object_detection.anchor_generators import multiple_grid_anchor_generator

~/RealTimeObjectDetection/object_detection/anchor_generators/flexible_grid_anchor_generator.py in <module>
     17 import tensorflow.compat.v1 as tf
     18 
---> 19 from object_detection.anchor_generators import grid_anchor_generator
     20 from object_detection.core import anchor_generator
     21 from object_detection.core import box_list_ops

~/RealTimeObjectDetection/object_detection/anchor_generators/grid_anchor_generator.py in <module>
     25 from object_detection.core import anchor_generator
     26 from object_detection.core import box_list
---> 27 from object_detection.utils import ops
     28 
     29 

~/RealTimeObjectDetection/object_detection/utils/ops.py in <module>
     26 from six.moves import zip
     27 import tensorflow.compat.v1 as tf
---> 28 import tf_slim
     29 from object_detection.core import standard_fields as fields
     30 from object_detection.utils import shape_utils

**ModuleNotFoundError: No module named 'tf_slim'**

Upon inspection, I found that tf_slim has to be pip installed, which I did, but it still throws an error. I am using TensorFlow 2.3 as required in this tutorial. Please help, as I am stuck at the inference step!

Dimension issue

Can someone help me resolve this issue? It first detected the test image in Colab, but when I ran it again it gave me this error. Also, when it does draw the bounding box on the test image, the font size is too small. I have tried different methods but it's not working. I am doing all of this training and testing in Colab.

Here is the code. The Object Detection API setup runs successfully, and the model is trained and tested.
img = cv2.imread(IMAGE_PATH)
image_np = np.array(img)

input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
detections = detect_fn(input_tensor)

num_detections = int(detections.pop('num_detections'))
detections = {key: value[0, :num_detections].numpy()
              for key, value in detections.items()}
detections['num_detections'] = num_detections

# detection_classes should be ints.
detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
    image_np_with_detections,
    detections['detection_boxes'],
    detections['detection_classes'] + label_id_offset,
    detections['detection_scores'],
    category_index,
    use_normalized_coordinates=True,
    line_thickness=9,
    max_boxes_to_draw=5,
    min_score_thresh=.3,
    agnostic_mode=False)

image = cv2.resize(image_np_with_detections, (1500, 800), interpolation=cv2.INTER_NEAREST)
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.show()

Here is the error

File "/content/drive/MyDrive/Tensorflow/Research/models/research/object_detection/core/preprocessor.py", line 3327, in resize_image *
new_image = tf.image.resize_images(

ValueError: 'images' must have either 3 or 4 dimension
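A likely cause, offered as an assumption rather than a confirmed fix: in Colab, cv2.imread returns None when IMAGE_PATH is wrong or the drive isn't mounted, and np.array(None) has zero dimensions, which produces exactly this ValueError inside preprocess(). Checking the read before building the tensor narrows it down:

```python
# Verify the image actually loaded before passing it to the detector.
import os
import cv2

assert os.path.exists(IMAGE_PATH), f"missing file: {IMAGE_PATH}"
img = cv2.imread(IMAGE_PATH)
assert img is not None, "cv2.imread could not decode the file"
print(img.shape)   # expect (height, width, 3)
```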

Defining object recognition boundaries

Hey!
I'm working on a project whose main idea is tracking a bee. The bee is inside a maze. How can I restrict the image recognition to only inside the maze?
And how can I get the coordinates of the bee, so that every X period of time I can print the bee's location to a file?
Thank you.
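One way to do both, sketched under assumptions (the ROI numbers, output file name, score threshold, and the frame/detect_fn variables from the tutorial's real-time loop are all placeholders): crop each frame to the maze region before inference, then convert the best box from normalized to pixel coordinates and append its centre to a file on a timer.

```python
# Meant to sit inside the existing while-loop of the real-time notebook.
import time
import numpy as np
import tensorflow as tf

ROI = (100, 50, 900, 700)        # x_min, y_min, x_max, y_max of the maze in pixels (placeholder)
LOG_EVERY = 2.0                  # seconds between logged positions
last_log = 0.0                   # initialise once, outside the loop

x1, y1, x2, y2 = ROI
maze = frame[y1:y2, x1:x2]       # 'frame' comes from cap.read() in the tutorial loop
input_tensor = tf.convert_to_tensor(np.expand_dims(maze, 0), dtype=tf.float32)
detections = detect_fn(input_tensor)

boxes = detections["detection_boxes"][0].numpy()
scores = detections["detection_scores"][0].numpy()
best = int(np.argmax(scores))
ymin, xmin, ymax, xmax = boxes[best]

h, w = maze.shape[:2]
cx = x1 + (xmin + xmax) / 2 * w  # bee centre, back in full-frame pixel coordinates
cy = y1 + (ymin + ymax) / 2 * h

if scores[best] > 0.5 and time.time() - last_log >= LOG_EVERY:
    with open("bee_positions.csv", "a") as f:
        f.write(f"{time.time():.1f},{cx:.1f},{cy:.1f}\n")
    last_log = time.time()
```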

AttributeError: No attribute AttentionPosition

Traceback (most recent call last):
File "Tensorflow/models/research/object_detection/model_main_tf2.py", line 32, in
from object_detection import model_lib_v2
File "/Users/samitkapadia/opt/anaconda3/envs/sign/lib/python3.7/site-packages/object_detection/model_lib_v2.py", line 30, in
from object_detection import inputs
File "/Users/samitkapadia/opt/anaconda3/envs/sign/lib/python3.7/site-packages/object_detection/inputs.py", line 26, in
from object_detection.builders import model_builder
File "/Users/samitkapadia/opt/anaconda3/envs/sign/lib/python3.7/site-packages/object_detection/builders/model_builder.py", line 36, in
from object_detection.meta_architectures import context_rcnn_meta_arch
File "/Users/samitkapadia/opt/anaconda3/envs/sign/lib/python3.7/site-packages/object_detection/meta_architectures/context_rcnn_meta_arch.py", line 42, in
class ContextRCNNMetaArch(faster_rcnn_meta_arch.FasterRCNNMetaArch):
File "/Users/samitkapadia/opt/anaconda3/envs/sign/lib/python3.7/site-packages/object_detection/meta_architectures/context_rcnn_meta_arch.py", line 95, in ContextRCNNMetaArch
faster_rcnn_pb2.AttentionPosition.POST_BOX_CLASSIFIER)
AttributeError: module 'object_detection.protos.faster_rcnn_pb2' has no attribute 'AttentionPosition'

Real time object detection Step 8

Showing error

ValueError: Creating variables on a non-first call to a function decorated with tf.function.

Probable Solution

In tensorflow 2.0, if you use pure eager execution, you can declare and re-use the same variable more than once since a tf.Variable - in eager mode - is just a plain Python object that gets destroyed as soon as the function ends and the variable, thus, goes out of scope.

I don't know how to change it. Please show me how to get rid of this error.

Thank you
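A sketch of one workaround, under the assumption that the notebook's detect_fn cell was re-run or called before the checkpoint restore finished (a common trigger for this error): rebuild the model, restore the checkpoint, and only then define the @tf.function once. Dropping the decorator (pure eager) also avoids it, at the cost of speed. The checkpoint name and the CONFIG_PATH/CHECKPOINT_PATH variables are assumptions taken from the tutorial layout.

```python
import os
import tensorflow as tf
from object_detection.utils import config_util
from object_detection.builders import model_builder

configs = config_util.get_configs_from_pipeline_file(CONFIG_PATH)
detection_model = model_builder.build(model_config=configs["model"], is_training=False)

ckpt = tf.train.Checkpoint(model=detection_model)
ckpt.restore(os.path.join(CHECKPOINT_PATH, "ckpt-6")).expect_partial()  # pick your latest ckpt-N

@tf.function
def detect_fn(image):
    image, shapes = detection_model.preprocess(image)
    prediction_dict = detection_model.predict(image, shapes)
    return detection_model.postprocess(prediction_dict, shapes)
```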

NotImplementedError

When I run the training command that was issued, it gives me this error:
Traceback (most recent call last):
File "C:/models/research/object_detection/model_main_tf2.py", line 113, in
tf.compat.v1.app.run()
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\absl\app.py", line 303, in run
_run_main(main, args)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "C:/models/research/object_detection/model_main_tf2.py", line 110, in main
record_summaries=FLAGS.record_summaries)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\object_detection\model_lib_v2.py", line 546, in train_loop
train_dataset_fn)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\util\deprecation.py", line 340, in new_func
return func(*args, **kwargs)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\distribute\distribute_lib.py", line 1143, in experimental_distribute_datasets_from_function
return self.distribute_datasets_from_function(dataset_fn, options)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\distribute\distribute_lib.py", line 1135, in distribute_datasets_from_function
dataset_fn, options)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\distribute\mirrored_strategy.py", line 547, in _distribute_datasets_from_function
options)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 162, in get_distributed_datasets_from_function
input_contexts, strategy, options)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 1273, in init
self._input_contexts, self._input_workers, dataset_fn))
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 1936, in _create_datasets_from_function_with_input_context
dataset = dataset_fn(ctx)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\object_detection\model_lib_v2.py", line 541, in train_dataset_fn
input_context=input_context)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\object_detection\inputs.py", line 898, in train_input
reduce_to_frame_fn=reduce_to_frame_fn)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\object_detection\builders\dataset_builder.py", line 252, in build
input_reader_config)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\object_detection\builders\dataset_builder.py", line 237, in dataset_map_fn
fn_to_map, num_parallel_calls=num_parallel_calls)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\util\deprecation.py", line 340, in new_func
return func(*args, **kwargs)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 2685, in map_with_legacy_function
use_legacy_function=True))
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 4246, in init
use_legacy_function=use_legacy_function)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 3493, in init
self._function.add_to_graph(ops.get_default_graph())
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\framework\function.py", line 546, in add_to_graph
self._create_definition_if_needed()
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\framework\function.py", line 378, in _create_definition_if_needed
self._create_definition_if_needed_impl()
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\framework\function.py", line 409, in _create_definition_if_needed_impl
capture_resource_var_by_value=self._capture_resource_var_by_value)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\framework\function.py", line 971, in func_graph_from_py_func
outputs = func(*func_graph.inputs)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 3485, in wrapper_fn
ret = _wrapper_helper(*args)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 3453, in _wrapper_helper
ret = autograph.tf_convert(func, ag_ctx)(*nested_args)
File "C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 670, in wrapper
raise e.ag_error_metadata.to_exception(e)
NotImplementedError: in user code:

C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\object_detection\data_decoders\tf_example_decoder.py:524 default_groundtruth_weights  *
    [tf.shape(tensor_dict[fields.InputDataFields.groundtruth_boxes])[0]],
C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\util\dispatch.py:201 wrapper  **
    return target(*args, **kwargs)
C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\ops\array_ops.py:3120 ones
    output = _constant_if_small(one, shape, dtype, name)
C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\ops\array_ops.py:2804 _constant_if_small
    if np.prod(shape) < 1000:
<__array_function__ internals>:6 prod

C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\numpy\core\fromnumeric.py:3031 prod
    keepdims=keepdims, initial=initial, where=where)
C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\numpy\core\fromnumeric.py:87 _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
C:\Users\kirit\anaconda3\envs\cvt\lib\site-packages\tensorflow\python\framework\ops.py:855 __array__
    " a NumPy call, which is not supported".format(self.name))

NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

I even tried to redo the project from scratch: it didn't work.

TensorFlow version: 2.4.1
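This particular failure (np.prod called on a symbolic tensor inside array_ops) is the well-known incompatibility between TF 2.4.x and NumPy >= 1.20; pinning NumPy below 1.20 in that environment usually clears it. A hedged sketch:

```python
# Check the installed NumPy; if it is 1.20 or newer alongside TF 2.4, pin it back.
import sys
import subprocess
import numpy as np

print("numpy:", np.__version__)
if tuple(int(x) for x in np.__version__.split(".")[:2]) >= (1, 20):
    subprocess.run([sys.executable, "-m", "pip", "install", "numpy==1.19.5"], check=True)
    # restart the Python process / kernel afterwards so the older NumPy is picked up
```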

Regarding Epochs

Hi,
We are specifying num_train_steps. Can we manually specify the number of epochs instead of num_train_steps, or are they the same thing?
I am trying to run 100 epochs for my training. Can we do that with SSD object detection, and if so, please tell me how to do it.
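The Object Detection API is step-based rather than epoch-based: one step processes one batch, so an epoch count can be converted to num_train_steps with steps = epochs * ceil(train_images / batch_size). A small worked example; the image count and batch size are placeholders:

```python
import math

num_train_images = 1000    # size of your train split (placeholder)
batch_size = 8             # must match train_config.batch_size in pipeline.config
epochs = 100

num_train_steps = epochs * math.ceil(num_train_images / batch_size)
print(num_train_steps)     # pass this value as --num_train_steps to model_main_tf2.py
```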

detect in real time error

AttributeError Traceback (most recent call last)
in
1 while True:
2 # detection_classes should be ints.
----> 3 detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
4
5 label_id_offset = 1

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in getattr(self, name)
398 If you are looking for numpy-related methods, please run the following:
399 from tensorflow.python.ops.numpy_ops import np_config
--> 400 np_config.enable_numpy_behavior()""".format(type(self).name, name))
401 self.getattribute(name)
402

AttributeError:
'EagerTensor' object has no attribute 'astype'.
If you are looking for numpy-related methods, please run the following:
from tensorflow.python.ops.numpy_ops import np_config
np_config.enable_numpy_behavior()

detections = detect_fn(input_tensor)
from matplotlib import pyplot as plt
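The AttributeError happens because the values in detections are still EagerTensors, and .astype only exists on NumPy arrays. Converting them with .numpy() inside the loop, before the astype call (as the notebook's single-image cell does), avoids it. A sketch, assuming detect_fn and input_tensor from the tutorial:

```python
import numpy as np

detections = detect_fn(input_tensor)
num_detections = int(detections.pop("num_detections"))
detections = {key: value[0, :num_detections].numpy()      # EagerTensor -> np.ndarray
              for key, value in detections.items()}
detections["num_detections"] = num_detections
detections["detection_classes"] = detections["detection_classes"].astype(np.int64)
```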

Couldn't satisfy the requirements

Hey @nicknochnack, when I tried to load all the dependencies to install TensorFlow Object Detection, I'm facing this issue:
ERROR: Could not find a version that satisfies the requirement tensorflow_io (from object-detection==0.1) (from versions: none)
ERROR: No matching distribution found for tensorflow_io (from object-detection==0.1)

Please help me out

Generate_tfrecord.py parse error

I changed nothing in your original file, and I ran the cell for "generating tf records". It showed error messages like this:
Traceback (most recent call last):
File "Tensorflow/scripts/generate_tfrecord.py", line 170, in
tf.app.run()
File "C:\Users\Lenovo\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\Lenovo\AppData\Local\Programs\Python\Python37\lib\site-packages\absl\app.py", line 312, in run
_run_main(main, args)
File "C:\Users\Lenovo\AppData\Local\Programs\Python\Python37\lib\site-packages\absl\app.py", line 258, in _run_main
sys.exit(main(argv))
File "Tensorflow/scripts/generate_tfrecord.py", line 157, in main
examples = xml_to_csv(args.xml_dir)
File "Tensorflow/scripts/generate_tfrecord.py", line 81, in xml_to_csv
tree = ET.parse(xml_file)
File "C:\Users\Lenovo\AppData\Local\Programs\Python\Python37\lib\xml\etree\ElementTree.py", line 1197, in parse
tree.parse(source, parser)
File "C:\Users\Lenovo\AppData\Local\Programs\Python\Python37\lib\xml\etree\ElementTree.py", line 598, in parse
self._root = parser._parse_whole(source)
xml.etree.ElementTree.ParseError: no element found: line 1, column 0

Can you help me? It has created null files inside the annotations folder.
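'no element found: line 1, column 0' means at least one annotation XML is empty or truncated, typically an image that was opened in labelImg but never saved. A diagnostic sketch; the folder path is an assumption based on the tutorial layout:

```python
# List annotation files that are empty or fail to parse.
import glob
import os
import xml.etree.ElementTree as ET

for xml_file in glob.glob("Tensorflow/workspace/images/train/*.xml"):   # also check .../test
    if os.path.getsize(xml_file) == 0:
        print("empty:", xml_file)
        continue
    try:
        ET.parse(xml_file)
    except ET.ParseError as err:
        print("broken:", xml_file, err)
```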

error running the script?

root@blade:/home/a/RealTimeObjectDetection# python3 Tutorial.ipynb
Traceback (most recent call last):
File "Tutorial.ipynb", line 195, in
"collapsed": true
NameError: name 'true' is not defined

Hello, I did all the steps but got stuck when running the script...
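A .ipynb file is JSON, not a Python script, so python3 parses the notebook metadata literally and trips over "collapsed": true. Either open it with Jupyter (jupyter notebook Tutorial.ipynb) or convert it to a script first; a sketch using nbconvert's Python API, assuming nbconvert is installed:

```python
# Convert the notebook to a plain Python script, then run Tutorial.py as usual.
import nbformat
from nbconvert import PythonExporter

nb = nbformat.read("Tutorial.ipynb", as_version=4)
source, _ = PythonExporter().from_notebook_node(nb)
with open("Tutorial.py", "w", encoding="utf-8") as f:
    f.write(source)
```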

```TypeError: expected string or bytes-like object``` in "8. Detect in Real-Time"


TypeError Traceback (most recent call last)
in
23 image_np_with_detections = image_np.copy()
24
---> 25 viz_utils.visualize_boxes_and_labels_on_image_array(
26 image_np_with_detections,
27 detections['detection_boxes'],

~\anaconda3\lib\site-packages\object_detection\utils\visualization_utils.py in visualize_boxes_and_labels_on_image_array(image, boxes, classes, scores, category_index, instance_masks, instance_boundaries, keypoints, keypoint_scores, keypoint_edges, track_ids, use_normalized_coordinates, max_boxes_to_draw, min_score_thresh, agnostic_mode, line_thickness, mask_alpha, groundtruth_box_visualization_color, skip_boxes, skip_scores, skip_labels, skip_track_ids)
1245 alpha=1.0
1246 )
-> 1247 draw_bounding_box_on_image_array(
1248 image,
1249 ymin,

~\anaconda3\lib\site-packages\object_detection\utils\visualization_utils.py in draw_bounding_box_on_image_array(image, ymin, xmin, ymax, xmax, color, thickness, display_str_list, use_normalized_coordinates)
158 """
159 image_pil = Image.fromarray(np.uint8(image)).convert('RGB')
--> 160 draw_bounding_box_on_image(image_pil, ymin, xmin, ymax, xmax, color,
161 thickness, display_str_list,
162 use_normalized_coordinates)

~\anaconda3\lib\site-packages\object_detection\utils\visualization_utils.py in draw_bounding_box_on_image(image, ymin, xmin, ymax, xmax, color, thickness, display_str_list, use_normalized_coordinates)
210 fill=color)
211 try:
--> 212 font = ImageFont.truetype('arial.ttf', 24)
213 except IOError:
214 font = ImageFont.load_default()

~\anaconda3\lib\site-packages\PIL\ImageFont.py in truetype(font, size, index, encoding, layout_engine)
850
851 try:
--> 852 return freetype(font)
853 except OSError:
854 if not isPath(font):

~\anaconda3\lib\site-packages\PIL\ImageFont.py in freetype(font)
847
848 def freetype(font):
--> 849 return FreeTypeFont(font, size, index, encoding, layout_engine)
850
851 try:

~\anaconda3\lib\site-packages\PIL\ImageFont.py in init(self, font, size, index, encoding, layout_engine)
171 pass
172 else:
--> 173 freetype_version = parse_version(features.version_module("freetype2"))
174 if freetype_version < parse_version("2.8"):
175 warnings.warn(

~\anaconda3\lib\site-packages\packaging\version.py in parse(version)
54 """
55 try:
---> 56 return Version(version)
57 except InvalidVersion:
58 return LegacyVersion(version)

~\anaconda3\lib\site-packages\packaging\version.py in init(self, version)
273
274 # Validate the version and parse it into pieces
--> 275 match = self._regex.search(version)
276 if not match:
277 raise InvalidVersion("Invalid version: '{0}'".format(version))

TypeError: expected string or bytes-like object

ImportError: No module named object_detection.utils or ModuleNotFoundError: No module named 'object_detection'

Traceback (most recent call last):
File "/content/generate_tfrecord.py", line 29, in
from object_detection.utils import dataset_util, label_map_util
ModuleNotFoundError: No module named 'object_detection'

I've already installed TensorFlow and the TensorFlow Object Detection API through pip, and changed line 111 to GFile..., but the issue still persists.

MacOS Catalina 10.15.3, python3.8, tensorflow (latest official update)

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd1 in position 221: invalid continuation byte

Traceback (most recent call last):
File "Tensorflow/models/research/object_detection/model_main_tf2.py", line 114, in
tf.compat.v1.app.run()
File "D:\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 36, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "D:\anaconda3\envs\tensorflow\lib\site-packages\absl\app.py", line 308, in run
_run_main(main, args)
File "D:\anaconda3\envs\tensorflow\lib\site-packages\absl\app.py", line 254, in _run_main
sys.exit(main(argv))
File "Tensorflow/models/research/object_detection/model_main_tf2.py", line 111, in main
record_summaries=FLAGS.record_summaries)
File "D:\anaconda3\envs\tensorflow\lib\site-packages\object_detection\model_lib_v2.py", line 594, in train_loop
summary_writer_filepath)
File "D:\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py", line 560, in create_file_writer_v2
create_fn=create_fn, init_op_fn=init_op_fn)
File "D:\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py", line 311, in init
self._init_op = init_op_fn(self._resource)
File "D:\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_summary_ops.py", line 142, in create_summary_file_writer
flush_millis, filename_suffix)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd1 in position 221: invalid continuation byte

Has anyone encountered this problem before? Can you provide detailed steps to solve it?
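One plausible cause, offered as a guess rather than a confirmed fix: create_summary_file_writer decodes the model_dir path as UTF-8, and non-ASCII characters somewhere in that path (the 0xd1 byte is typical of Cyrillic text in a different encoding) can trigger exactly this failure. A quick check; the path below is whatever you pass as --model_dir:

```python
import os

model_dir = r"Tensorflow/workspace/models/my_ssd_mobnet"          # your --model_dir value
non_ascii = [c for c in os.path.abspath(model_dir) if ord(c) > 127]
print(non_ascii)   # non-empty -> consider moving the project to an ASCII-only path
```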

ParseError: 21:11 : Message type "object_detection.protos.Initializer" has no field named "random_normal_initializer".


ParseError Traceback (most recent call last)
Input In [11], in <cell line: 1>()
----> 1 config = config_util.get_configs_from_pipeline_file(CONFIG_PATH)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\object_detection\utils\config_util.py:139, in get_configs_from_pipeline_file(pipeline_config_path, config_override)
137 with tf.io.gfile.GFile(pipeline_config_path, "r") as f:
138 proto_str = f.read()
--> 139 text_format.Merge(proto_str, pipeline_config)
140 if config_override:
141 text_format.Merge(config_override, pipeline_config)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\text_format.py:719, in Merge(text, message, allow_unknown_extension, allow_field_number, descriptor_pool, allow_unknown_field)
690 def Merge(text,
691 message,
692 allow_unknown_extension=False,
693 allow_field_number=False,
694 descriptor_pool=None,
695 allow_unknown_field=False):
696 """Parses a text representation of a protocol message into a message.
697
698 Like Parse(), but allows repeated values for a non-repeated field, and uses
(...)
717 ParseError: On text parsing problems.
718 """
--> 719 return MergeLines(
720 text.split(b'\n' if isinstance(text, bytes) else u'\n'),
721 message,
722 allow_unknown_extension,
723 allow_field_number,
724 descriptor_pool=descriptor_pool,
725 allow_unknown_field=allow_unknown_field)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\text_format.py:793, in MergeLines(lines, message, allow_unknown_extension, allow_field_number, descriptor_pool, allow_unknown_field)
768 """Parses a text representation of a protocol message into a message.
769
770 See Merge() for more details.
(...)
787 ParseError: On text parsing problems.
788 """
789 parser = _Parser(allow_unknown_extension,
790 allow_field_number,
791 descriptor_pool=descriptor_pool,
792 allow_unknown_field=allow_unknown_field)
--> 793 return parser.MergeLines(lines, message)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\text_format.py:818, in _Parser.MergeLines(self, lines, message)
816 """Merges a text representation of a protocol message into a message."""
817 self._allow_multiple_scalars = True
--> 818 self._ParseOrMerge(lines, message)
819 return message

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\text_format.py:837, in _Parser._ParseOrMerge(self, lines, message)
835 tokenizer = Tokenizer(str_lines)
836 while not tokenizer.AtEnd():
--> 837 self._MergeField(tokenizer, message)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\text_format.py:967, in _Parser._MergeField(self, tokenizer, message)
964 tokenizer.Consume(',')
966 else:
--> 967 merger(tokenizer, message, field)
969 else: # Proto field is unknown.
970 assert (self.allow_unknown_extension or self.allow_unknown_field)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\text_format.py:1042, in _Parser._MergeMessageField(self, tokenizer, message, field)
1040 if tokenizer.AtEnd():
1041 raise tokenizer.ParseErrorPreviousToken('Expected "%s".' % (end_token,))
-> 1042 self._MergeField(tokenizer, sub_message)
1044 if is_map_entry:
1045 value_cpptype = field.message_type.fields_by_name['value'].cpp_type

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\text_format.py:967, in _Parser._MergeField(self, tokenizer, message)
964 tokenizer.Consume(',')
966 else:
--> 967 merger(tokenizer, message, field)
969 else: # Proto field is unknown.
970 assert (self.allow_unknown_extension or self.allow_unknown_field)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\text_format.py:1042, in _Parser._MergeMessageField(self, tokenizer, message, field)
1040 if tokenizer.AtEnd():
1041 raise tokenizer.ParseErrorPreviousToken('Expected "%s".' % (end_token,))
-> 1042 self._MergeField(tokenizer, sub_message)
1044 if is_map_entry:
1045 value_cpptype = field.message_type.fields_by_name['value'].cpp_type

[... skipping similar frames: _Parser._MergeField at line 967 (2 times), _Parser._MergeMessageField at line 1042 (2 times)]

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\text_format.py:967, in _Parser._MergeField(self, tokenizer, message)
964 tokenizer.Consume(',')
966 else:
--> 967 merger(tokenizer, message, field)
969 else: # Proto field is unknown.
970 assert (self.allow_unknown_extension or self.allow_unknown_field)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\text_format.py:1042, in _Parser._MergeMessageField(self, tokenizer, message, field)
1040 if tokenizer.AtEnd():
1041 raise tokenizer.ParseErrorPreviousToken('Expected "%s".' % (end_token,))
-> 1042 self._MergeField(tokenizer, sub_message)
1044 if is_map_entry:
1045 value_cpptype = field.message_type.fields_by_name['value'].cpp_type

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\text_format.py:932, in _Parser._MergeField(self, tokenizer, message)
929 field = None
931 if not field and not self.allow_unknown_field:
--> 932 raise tokenizer.ParseErrorPreviousToken(
933 'Message type "%s" has no field named "%s".' %
934 (message_descriptor.full_name, name))
936 if field:
937 if not self._allow_multiple_scalars and field.containing_oneof:
938 # Check if there's a different field set in this oneof.
939 # Note that we ignore the case if the same field was set before, and we
940 # apply _allow_multiple_scalars to non-scalar fields as well.

ParseError: 21:11 : Message type "object_detection.protos.Initializer" has no field named "random_normal_initializer".

config

After running this command, "python Tensorflow/models/research/object_detection/model_main_tf2.py --model_dir=Tensorflow/workspace/models/my_ssd_mobnet --pipeline_config_path=Tensorflow/workspace/models/my_ssd_mobnet/pipeline.config --num_train_steps=5000", it doesn't show anything.

D:\study\python\AI\Deep Learning\convolutional neural network(CNN)\projects\Real Time Face Mask Detection with Tensorflow and Python>python Tensorflow/models/research/object_detection/model_main_tf2.py --model_dir=Tensorflow/workspace/models/my_ssd_mobnet --pipeline_config_path=Tensorflow/workspace/models/my_ssd_mobnet/pipeline.config --num_train_steps=5000
2021-03-12 20:14:31.022628: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
C:\Users\damit\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_addons\utils\ensure_tf_install.py:43: UserWarning: You are currently using a nightly version of TensorFlow (2.5.0-dev20210311).
TensorFlow Addons offers no support for the nightly versions of TensorFlow. Some things might work, some other might not.
If you encounter a bug, do not file an issue on GitHub.
UserWarning,
2021-03-12 20:14:34.805533: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2021-03-12 20:14:34.838992: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1779] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1650 computeCapability: 7.5
coreClock: 1.56GHz coreCount: 16 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 119.24GiB/s
2021-03-12 20:14:34.839139: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-03-12 20:14:34.863061: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-03-12 20:14:34.863191: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-03-12 20:14:34.880273: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2021-03-12 20:14:34.884287: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2021-03-12 20:14:34.930953: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2021-03-12 20:14:34.945747: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2021-03-12 20:14:34.946548: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2021-03-12 20:14:34.946722: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1917] Adding visible gpu devices: 0
2021-03-12 20:14:34.947126: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-03-12 20:14:34.948525: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1779] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1650 computeCapability: 7.5
coreClock: 1.56GHz coreCount: 16 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 119.24GiB/s
2021-03-12 20:14:34.949145: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1917] Adding visible gpu devices: 0
2021-03-12 20:14:35.473383: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-03-12 20:14:35.473677: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1310] 0
2021-03-12 20:14:35.474115: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1323] 0: N
2021-03-12 20:14:35.474626: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1464] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2167 MB memory) -> physical GPU (device:
0, name: GeForce GTX 1650, pci bus id: 0000:01:00.0, compute capability: 7.5)
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
I0312 20:14:35.478294 16912 mirrored_strategy.py:363] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Maybe overwriting train_steps: 5000
I0312 20:14:35.486292 16912 config_util.py:552] Maybe overwriting train_steps: 5000
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I0312 20:14:35.487290 16912 config_util.py:552] Maybe overwriting use_bfloat16: False
Windows fatal exception: stack overflow

Current thread 0x00004210 (most recent call first):
File "C:\Users\damit\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 6240 in mul
File "C:\Users\damit\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\math_ops.py", line 530 in multiply
File "C:\Users\damit\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\util\dispatch.py", line 206 in wrapper
File "C:\Users\damit\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1537 in _mul_dispatch
File "C:\Users\damit\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1236 in r_binary_op_wrapper
File "C:\Users\damit\AppData\Roaming\Python\Python37\site-packages\object_detection\models\ssd_mobilenet_v2_fpn_keras_feature_extractor.py", line 203 in preprocess
File "C:\Users\damit\AppData\Roaming\Python\Python37\site-packages\object_detection\meta_architectures\ssd_meta_arch.py", line 482 in preprocess
File "C:\Users\damit\AppData\Roaming\Python\Python37\site-packages\object_detection\model_lib_v2.py", line 524 in train_loop
File "Tensorflow/models/research/object_detection/model_main_tf2.py", line 110 in main
File "C:\Users\damit\AppData\Local\Programs\Python\Python37\lib\site-packages\absl\app.py", line 251 in _run_main
File "C:\Users\damit\AppData\Local\Programs\Python\Python37\lib\site-packages\absl\app.py", line 300 in run
File "C:\Users\damit\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\platform\app.py", line 40 in run
File "Tensorflow/models/research/object_detection/model_main_tf2.py", line 113 in

D:\study\python\AI\Deep Learning\convolutional neural network(CNN)\projects\Real Time Face Mask Detection with Tensorflow and Python>

ImportError: DLL load failed while importing QtCore: %1 is not a valid Win32 application.

When I run this code I get this:

E:\RealTimeObjectDetection\Tensorflow\labelImg>pyrcc5 -o resources.py
Traceback (most recent call last):
File "c:\programdata\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\programdata\anaconda3\lib\runpy.py", line 87, in run_code
exec(code, run_globals)
File "C:\ProgramData\Anaconda3\Scripts\pyrcc5.exe_main
.py", line 4, in
File "c:\programdata\anaconda3\lib\site-packages\PyQt5\pyrcc_main.py", line 21, in
from PyQt5.QtCore import PYQT_VERSION_STR, QDir, QFile
ImportError: DLL load failed while importing QtCore: %1 is not a valid Win32 application.
