adeel-intizar / xtreme-vision

A high-level Python library that empowers students and developers to build applications and systems with computer vision capabilities.

License: MIT License

Languages: Python 5.26%, Jupyter Notebook 94.74%

Topics: artificial-intelligence artificial-neural-networks centernet computer-vision deep-learning image-processing instance-segmentation keras keras-tensorflow machine-learning object-detection python resnet retinanet segmentation semantic-segmentation tensorflow tf-keras xtreme-vision yolov4

xtreme-vision's People

Contributors

adeel-intizar


xtreme-vision's Issues

Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized.

I installed xtreme-vision with
pip install --no-dependencies xtreme-vision
and it reports version xtreme_vision 1.5.1.

On Colab it runs with just pip install Xtreme-Vision, under
Python 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] on linux

but on my Linux server it does not. The OS version is:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.3 LTS
Release: 18.04
Codename: bionic

I installed all of these packages on the server separately (a quick version check is sketched after the list):

tensorflow - 2.3.0-dev20200522
keras - 2.4.3
PIL - 8.0.1
cv2 - 4.5.1-pre
matplotlib - 3.3.3
numpy - 1.18.4
pandas - 1.1.5
sklearn - 0.23.1
imgaug - 0.4.0
labelme2coco - 0.1.2
progressbar - 2.3dev
scipy - 1.4.1
h5py - 2.10.0
skimage - 0.17.2
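
For reference, a quick way to compare the two environments is to print the versions that matter from the same interpreter that will run the training script. This is a minimal, standalone sketch (report_versions is a hypothetical helper, not part of xtreme-vision):

# Hypothetical helper, not part of xtreme-vision: print the versions of the
# packages that matter here so the Colab and server environments can be diffed.
import importlib

def report_versions(names=("tensorflow", "keras", "cv2", "PIL", "numpy", "xtreme_vision")):
    for name in names:
        try:
            module = importlib.import_module(name)
            print(name, getattr(module, "__version__", "unknown"))
        except ImportError as exc:
            print(name, "NOT IMPORTABLE:", exc)

if __name__ == "__main__":
    report_versions()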
After running train_yolov4.py for about 45 minutes, it shows this error:

Errors may have originated from an input operation.
Input Source operations connected to node gradient_tape/YOLOv4/CSPDarknet53/csp_res_net_1/yolo_conv2d_16/sequential_18/mish_16/Sigmoid:
YOLOv4/CSPDarknet53/csp_res_net_1/yolo_conv2d_16/sequential_18/batch_normalization_16/FusedBatchNormV3 (defined at /Xtreme-Vision/xtreme_vision/Detection/yolov4/model/common.py:92)

Function call stack:
train_function

2020-12-11 00:32:53.589305: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.
[[{{node PyFunc}}]]

Kindly guide please...

Error while training YOLOv4-tiny

/.local/lib/python3.8/site-packages/xtreme_vision/Detection/yolov4/tf/dataset.py", line 198, in bboxes_to_ground_truth
ground_truth[i][0, _y, _x, j, 0:4] = xywh

IndexError: index 20 is out of bounds for axis 1 with size 20

 [[{{node PyFunc}}]]
 [[IteratorGetNext]] [Op:__inference_train_function_14598]

Function call stack:
train_function
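
For context, the failing line maps a normalized box center to a grid cell, and an index equal to the axis size (20 on an axis of size 20) typically means a center coordinate reached exactly 1.0. The snippet below is only an illustrative sketch of that failure mode with made-up names, not the library's actual code:

# Illustrative sketch (hypothetical names): a normalized center of exactly 1.0
# maps to a grid index equal to the grid size, reproducing the IndexError above;
# clamping the index keeps edge boxes in range.
import numpy as np

grid_size = 20                     # matches the failing axis size in the traceback
cx, cy = 1.0, 0.5                  # normalized box center, cx sits on the image edge

_x = int(cx * grid_size)           # 20 -> out of bounds for an axis of size 20
_y = int(cy * grid_size)           # 10 -> fine

_x = min(_x, grid_size - 1)        # guard: clamp edge boxes into the last cell
_y = min(_y, grid_size - 1)

ground_truth = np.zeros((1, grid_size, grid_size, 3, 5), dtype=np.float32)
ground_truth[0, _y, _x, 0, 0:4] = [cx, cy, 0.1, 0.1]   # analogous to the failing assignment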

tensorflow::Var does not exist error in Object_Detection.Use_RetinaNet()

Hi,
I'm getting errors from Object_Detection.Use_RetinaNet().
Environment: conda, CUDA 10.0, tensorflow 2.3, GPU 3080, Python 3.8.
Below are the installed packages and the full error trace.
Any clue to this issue? Thanks.

-------------------------------------------------------------------

(tfgpu10-2.30) C:\Users\3080\Desktop\Captures\Photos-T-shirts>pip freeze
absl-py @ file:///tmp/build/80754af9/absl-py_1607439979954/work
aiohttp @ file:///C:/ci/aiohttp_1607109697839/work
appdirs @ file:///home/conda/feedstock_root/build_artifacts/appdirs_1603108395799/work
astunparse==1.6.3
async-timeout==3.0.1
attrs @ file:///tmp/build/80754af9/attrs_1604765588209/work
backcall==0.2.0
blinker==1.4
brotlipy==0.7.0
cachetools @ file:///tmp/build/80754af9/cachetools_1607706694405/work
certifi==2020.12.5
cffi @ file:///C:/ci/cffi_1606255208697/work
chardet @ file:///C:/ci/chardet_1605303225733/work
click==7.1.2
cloudpickle @ file:///home/conda/feedstock_root/build_artifacts/cloudpickle_1598400192773/work
colorama @ file:///tmp/build/80754af9/colorama_1607707115595/work
configobj==5.0.6
cryptography==2.9.2
cycler==0.10.0
cytoolz==0.11.0
dask @ file:///home/conda/feedstock_root/build_artifacts/dask-core_1607657054678/work
decorator==4.4.2
gast==0.3.3
google-auth @ file:///tmp/build/80754af9/google-auth_1607969906642/work
google-auth-oauthlib @ file:///tmp/build/80754af9/google-auth-oauthlib_1603929124518/work
google-pasta==0.2.0
grpcio @ file:///C:/ci/grpcio_1597406462198/work
h5py==2.10.0
idna @ file:///tmp/build/80754af9/idna_1593446292537/work
imagecodecs @ file:///D:/bld/imagecodecs_1607400615884/work
imageio @ file:///home/conda/feedstock_root/build_artifacts/imageio_1594044661732/work
imgaug==0.4.0
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1602276842396/work
ipython @ file:///C:/ci/ipython_1604083276484/work
ipython-genutils @ file:///tmp/build/80754af9/ipython_genutils_1606773439826/work
jedi==0.17.0
joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1607956439537/work
jsonschema==3.2.0
Keras @ file:///home/conda/feedstock_root/build_artifacts/keras_1593112171383/work
Keras-Applications @ file:///tmp/build/80754af9/keras-applications_1594366238411/work
Keras-Preprocessing==1.1.2
kiwisolver @ file:///D:/bld/kiwisolver_1604322452227/work
labelme2coco==0.1.2
Markdown @ file:///C:/ci/markdown_1605111189761/work
matplotlib @ file:///D:/bld/matplotlib-suite_1605180506412/work
mkl-fft==1.2.0
mkl-random==1.1.1
mkl-service==2.3.0
multidict @ file:///C:/ci/multidict_1600456481656/work
networkx @ file:///home/conda/feedstock_root/build_artifacts/networkx_1598210780226/work
numpy==1.18.5
oauthlib==3.1.0
olefile @ file:///home/conda/feedstock_root/build_artifacts/olefile_1602866521163/work
opencv-python==4.4.0.46
opencv-python-headless==4.4.0.46
opt-einsum==3.1.0
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1607785313469/work
pandas @ file:///D:/bld/pandas_1609079442959/work
parso @ file:///tmp/build/80754af9/parso_1607623074025/work
pickleshare @ file:///tmp/build/80754af9/pickleshare_1606932040724/work
Pillow @ file:///D:/bld/pillow_1604748918199/work
pooch @ file:///home/conda/feedstock_root/build_artifacts/pooch_1606467285986/work
progressbar2 @ file:///home/conda/feedstock_root/build_artifacts/progressbar2_1599661727525/work
prompt-toolkit @ file:///tmp/build/80754af9/prompt-toolkit_1602688806899/work
protobuf==3.13.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work
Pygments @ file:///tmp/build/80754af9/pygments_1607368905949/work
PyJWT @ file:///C:/ci/pyjwt_1608658192037/work
pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1608057966937/work
pyparsing==2.4.7
PyQt5==5.12.3
PyQt5-sip==4.19.18
PyQtChart==5.12
PyQtWebEngine==5.12.1
pyreadline==2.1
pyrsistent==0.17.3
PySocks @ file:///C:/ci/pysocks_1605287845585/work
python-dateutil==2.8.1
python-utils==2.4.0
pytz @ file:///home/conda/feedstock_root/build_artifacts/pytz_1608904108784/work
PyWavelets @ file:///D:/bld/pywavelets_1607290958158/work
PyYAML==5.3.1
requests @ file:///tmp/build/80754af9/requests_1608241421344/work
requests-oauthlib==1.3.0
rsa @ file:///tmp/build/80754af9/rsa_1596998415516/work
scikit-image==0.18.1
scikit-learn @ file:///D:/bld/scikit-learn_1608656402924/work
scipy==1.4.1
Shapely==1.7.1
six @ file:///C:/ci/six_1605187374963/work
tensorboard @ file:///home/builder/ktietz/conda/conda-bld/tensorboard_1604313476433/work/tmp_pip_dir
tensorboard-plugin-wit==1.6.0
tensorflow==2.3.0
tensorflow-estimator @ file:///tmp/build/80754af9/tensorflow-estimator_1599136169057/work/whl_temp/tensorflow_estimator-2.3.0-py2.py3-none-any.whl
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp79xdzxkt/threadpoolctl-2.1.0-py3-none-any.whl
tifffile @ file:///home/conda/feedstock_root/build_artifacts/tifffile_1607552325200/work
toolz @ file:///home/conda/feedstock_root/build_artifacts/toolz_1600973991856/work
tornado @ file:///D:/bld/tornado_1604105353952/work
tqdm @ file:///home/conda/feedstock_root/build_artifacts/tqdm_1608900042843/work
traitlets @ file:///tmp/build/80754af9/traitlets_1602787416690/work
typing-extensions @ file:///tmp/build/80754af9/typing_extensions_1598376058250/work
urllib3 @ file:///tmp/build/80754af9/urllib3_1606938623459/work
wcwidth @ file:///tmp/build/80754af9/wcwidth_1593447189090/work
Werkzeug==1.0.1
win-inet-pton @ file:///C:/ci/win_inet_pton_1605306167264/work
wincertstore==0.2
wrapt==1.12.1
xtreme-vision==1.5.1
yarl @ file:///C:/ci/yarl_1598045274898/work
zipp @ file:///tmp/build/80754af9/zipp_1604001098328/work

-----------------------------------------------------------------

from xtreme_vision.Detection import Object_Detection

modeld = Object_Detection()
modeld.Use_RetinaNet()


Downloading Weights File...
Please Wait...

Downloading data from https://github.com/fizyr/keras-retinanet/releases/download/0.5.1/resnet50_coco_best_v2.1.0.h5
152666112/152662144 [==============================] - 55s 0us/step
Traceback (most recent call last):
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\client\session.py", line 1365, in _do_call
return fn(*args)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\client\session.py", line 1349, in _run_fn
return self._call_tf_sessionrun(options, feed_dict, fetch_list,
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\client\session.py", line 1441, in _call_tf_sessionrun
return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable pyramid_classification/bias/Initializer/Variable from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/pyramid_classification/bias/Initializer/Variable/class tensorflow::Var does not exist.
[[{{node pyramid_classification/bias/Initializer/ReadVariableOp}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "", line 1, in
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\xtreme_vision\Detection_init_.py", line 143, in Use_RetinaNet
self.model.load_model(self.weights_path, self.classes, backbone)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\xtreme_vision\Detection\retinanet_init_.py", line 65, in load_model
self.model = models.load_model(self.weights_path, backbone_name = backbone)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\xtreme_vision\Detection\retinanet\models_init_.py", line 87, in load_model
return keras.models.load_model(filepath, custom_objects=backbone(backbone_name).custom_objects)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\saving\save.py", line 182, in load_model
return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\saving\hdf5_format.py", line 181, in load_model_from_hdf5
load_weights_from_hdf5_group(f['model_weights'], model.layers)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\saving\hdf5_format.py", line 708, in load_weights_from_hdf5_group
K.batch_set_value(weight_value_tuples)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\backend.py", line 3601, in batch_set_value
get_session().run(assign_ops, feed_dict=feed_dict)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\backend.py", line 630, in get_session
_initialize_variables(session)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\backend.py", line 1065, in _initialize_variables
session.run(variables_module.variables_initializer(uninitialized_vars))
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\client\session.py", line 957, in run
result = self._run(None, fetches, feed_dict, options_ptr,
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\client\session.py", line 1180, in _run
results = self._do_run(handle, final_targets, final_fetches,
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\client\session.py", line 1358, in _do_run
return self._do_call(_run_fn, feeds, fetches, targets, options,
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\client\session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable pyramid_classification/bias/Initializer/Variable from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/pyramid_classification/bias/Initializer/Variable/class tensorflow::Var does not exist.
[[node pyramid_classification/bias/Initializer/ReadVariableOp (defined at C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\xtreme_vision\Detection\retinanet\initializers.py:36) ]]

Original stack trace for 'pyramid_classification/bias/Initializer/ReadVariableOp':
File "", line 1, in
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\xtreme_vision\Detection_init_.py", line 143, in Use_RetinaNet
self.model.load_model(self.weights_path, self.classes, backbone)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\xtreme_vision\Detection\retinanet_init_.py", line 65, in load_model
self.model = models.load_model(self.weights_path, backbone_name = backbone)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\xtreme_vision\Detection\retinanet\models_init_.py", line 87, in load_model
return keras.models.load_model(filepath, custom_objects=backbone(backbone_name).custom_objects)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\saving\save.py", line 182, in load_model
return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\saving\hdf5_format.py", line 177, in load_model_from_hdf5
model = model_config_lib.model_from_config(model_config,
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\saving\model_config.py", line 55, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\layers\serialization.py", line 171, in deserialize
return generic_utils.deserialize_keras_object(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\utils\generic_utils.py", line 354, in deserialize_keras_object
return cls.from_config(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\engine\training.py", line 2238, in from_config
return functional.Functional.from_config(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 616, in from_config
input_tensors, output_tensors, created_layers = reconstruct_from_config(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 1204, in reconstruct_from_config
process_layer(layer_data)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 1186, in process_layer
layer = deserialize_layer(layer_data, custom_objects=custom_objects)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\layers\serialization.py", line 171, in deserialize
return generic_utils.deserialize_keras_object(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\utils\generic_utils.py", line 354, in deserialize_keras_object
return cls.from_config(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\engine\training.py", line 2238, in from_config
return functional.Functional.from_config(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 616, in from_config
input_tensors, output_tensors, created_layers = reconstruct_from_config(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 1214, in reconstruct_from_config
process_node(layer, node_data)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 1162, in process_node
output_tensors = layer(input_tensors, **kwargs)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\engine\base_layer_v1.py", line 757, in call
self._maybe_build(inputs)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\engine\base_layer_v1.py", line 2098, in _maybe_build
self.build(input_shapes)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\layers\convolutional.py", line 206, in build
self.bias = self.add_weight(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\engine\base_layer_v1.py", line 431, in add_weight
variable = self._add_variable_with_custom_getter(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\training\tracking\base.py", line 745, in _add_variable_with_custom_getter
new_variable = getter(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\keras\engine\base_layer_utils.py", line 133, in make_variable
return tf_variables.VariableV1(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\ops\variables.py", line 260, in call
return cls._variable_v1_call(*args, **kwargs)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\ops\variables.py", line 206, in _variable_v1_call
return previous_getter(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\ops\variables.py", line 199, in
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2583, in default_variable_creator
return resource_variable_ops.ResourceVariable(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\ops\variables.py", line 264, in call
return super(VariableMetaclass, cls).call(*args, **kwargs)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 1507, in init
self._init_from_args(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 1651, in _init_from_args
initial_value() if init_from_fn else initial_value,
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\xtreme_vision\Detection\retinanet\initializers.py", line 36, in call
result = keras.backend.ones(shape, dtype=dtype) * -math.log((1 - self.probability) / self.probability)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\ops\variables.py", line 1074, in _run_op
return tensor_oper(a.value(), *args, **kwargs)
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 555, in value
return self._read_variable_op()
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 657, in _read_variable_op
result = read_and_set_handle()
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 647, in read_and_set_handle
result = gen_resource_variable_ops.read_variable_op(self._handle,
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\ops\gen_resource_variable_ops.py", line 489, in read_variable_op
_, _, _op, _outputs = _op_def_library._apply_op_helper(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 742, in _apply_op_helper
op = g._create_op_internal(op_type_name, inputs, dtypes=None,
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\framework\ops.py", line 3477, in _create_op_internal
ret = Operation(
File "C:\Users\3080\anaconda3\envs\tfgpu10-2.30\lib\site-packages\tensorflow\python\framework\ops.py", line 1949, in init
self._traceback = tf_stack.extract_stack()
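
The trace shows the weights being loaded through the TF1-style session backend (keras.backend.get_session), so the variable container appears to be in a stale or mixed state. As an experiment, not a confirmed fix, one thing worth trying is forcing a clean graph/session before loading:

# Experiment only, not a confirmed fix: reset Keras/TF graph state before the
# RetinaNet weights are loaded through the v1 session path shown in the trace.
import tensorflow as tf
from xtreme_vision.Detection import Object_Detection

tf.keras.backend.clear_session()           # drop any previously built graph/session state
# tf.compat.v1.disable_eager_execution()   # optional: run the whole load in graph mode

modeld = Object_Detection()
modeld.Use_RetinaNet()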

FileNotFoundError: Failed to find images

While training YOLOv4 on a custom dataset I got this error.

classes.names
train_images_file.txt
train_img_dir/
are all present in the same directory.

The Python code is below:

from xtreme_vision.Detection.Custom import Train_YOLOv4

CLASS_PATH = 'classes.names'
TRAIN_IMAGES_FILE = 'train_images_file.txt'
TRAIN_IMAGE_DIR = 'train_img_dir/'

print("class path", CLASS_PATH)
print("train images file", TRAIN_IMAGES_FILE)
print("train image dir", TRAIN_IMAGE_DIR)

model = Train_YOLOv4()
model.create_model(classes_path=CLASS_PATH, input_size=640, batch_size=2)

model.load_data(train_images_file=TRAIN_IMAGES_FILE, train_img_dir=TRAIN_IMAGE_DIR)

model.train(epochs=20, lr=1e-4, steps_per_epoch=400)

  File "train_yolov4.py", line 16, in <module>
    model.train(epochs=20, lr=1e-4, steps_per_epoch=400)
  File "/usr/local/lib/python3.6/dist-packages/xtreme_vision/Detection/Custom/yolo.py", line 204, in train
    steps_per_epoch = steps_per_epoch)
  File "/usr/local/lib/python3.6/dist-packages/xtreme_vision/Detection/yolov4/tf/__init__.py", line 278, in fit
    use_multiprocessing=False,
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1098, in fit
    tmp_logs = train_function(iterator)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
    result = self._call(*args, **kwds)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 807, in _call
    return self._stateless_fn(*args, **kwds)  # pylint: disable=not-callable
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 2829, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1848, in _filtered_call
    cancellation_manager=cancellation_manager)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1924, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 550, in call
    ctx=ctx)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
  (0) Unknown:  FileNotFoundError: Failed to find images
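
Since the root error is a plain FileNotFoundError raised from the data pipeline, a useful first check is whether every entry in train_images_file.txt actually resolves to a file under train_img_dir. A minimal standard-library sketch (it assumes the first whitespace-separated token on each line is the image name; adjust if your annotation format differs):

# Sanity check: list entries from train_images_file.txt that do not resolve to
# an existing file under train_img_dir.
import os

TRAIN_IMAGES_FILE = 'train_images_file.txt'
TRAIN_IMAGE_DIR = 'train_img_dir/'

missing = []
with open(TRAIN_IMAGES_FILE) as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        name = line.split()[0]                     # assumed: first token is the image file name
        path = os.path.join(TRAIN_IMAGE_DIR, name)
        if not os.path.isfile(path):
            missing.append(path)

print(len(missing), "missing image(s)")
for p in missing[:10]:
    print("missing:", p)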

Error from tensorflow

Hello,

I get the following error when running the examples: ImportError: cannot import name 'BatchNormalization' from 'tensorflow.python.keras.layers' (/Users/Projects/CompVis/Xtreme-Vision/extvis/lib/python3.8/site-packages/tensorflow/python/keras/layers/__init__.py)
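
For comparison, the public tf.keras namespace still exposes this layer across TF 2.x, whereas the private tensorflow.python.keras package the error points at does not guarantee it; a minimal check:

# Quick check: BatchNormalization is importable from the public Keras namespace
# even when the private tensorflow.python.keras.layers path fails.
from tensorflow.keras.layers import BatchNormalization

bn = BatchNormalization()
print(type(bn))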

Thanks

'Tensor' object has no attribute 'numpy' when I run pose_estimation.py

Hello, I want to run the pose_estimation.py file, but when the code enters the predict() method of the PoseEstimation class:
def predict(self, img:np.ndarray, output_path:str, debug=True):
(more code here)

    joint_heatmap = np.squeeze(joint_heatmap.numpy())
    joint_locations = np.squeeze(joint_locations.numpy())
    joint_offsets = np.squeeze(joint_offsets.numpy())
    heatmap = np.squeeze(heatmap.numpy())
    offsets = np.squeeze(offsets.numpy())
    whs = np.squeeze(whs.numpy())

this message appears: 'Tensor' object has no attribute 'numpy'. I suppose this happens because numpy() only works with eager mode enabled. I've tried to enable eager mode with the following commands, but it hasn't worked:
tf.compat.v1.enable_eager_execution() and self.model.run_eagerly = True, yet tf.executing_eagerly() still prints False.
Finally, I'm using these dependencies:

  • TensorFlow 2.4.1
  • keras 2.4.3
  • opencv-python 4.5.1
  • numpy 1.19.5
  • Pillow 8.1.0
  • matplotlib 3.3.4
  • pandas 1.2.2
  • scikit-learn 0.24.1
  • scikit-image 0.18.1
  • imgaug 0.4.0

I would very much appreciate help solving this issue. Thanks.
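
For what it's worth, .numpy() only exists on eager tensors, and tf.compat.v1.enable_eager_execution() only takes effect if it runs before any other TensorFlow work in the process; if the library itself switches to graph mode afterwards, tf.executing_eagerly() will still report False inside predict(). A minimal standalone sketch of the ordering constraint (an assumption about the cause, not a confirmed fix):

# Sketch of the ordering constraint, not a confirmed fix: enable eager execution
# before anything else builds a graph, otherwise tensors stay symbolic and have
# no .numpy() method.
import tensorflow as tf

tf.compat.v1.enable_eager_execution()   # must run before any other TF call
print(tf.executing_eagerly())           # expected: True

t = tf.constant([[1.0, 2.0]])
print(t.numpy())                        # only works while the line above prints True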

Problem with CDCL model

Hello,

Pose estimation with the CDCL model gives the following error:
Traceback (most recent call last):
File "PoseEstimation.py", line 11, in
model.Detect_From_Image(input_path = infile,
File "/Users/Projects/CompVis/Xtreme-Vision/extvis/lib/python3.8/site-packages/xtreme_vision/Estimation/init.py", line 96, in Detect_From_Image
_ = run_image(input_path, output_path)
File "/Users/Projects/CompVis/Xtreme-Vision/extvis/lib/python3.8/site-packages/xtreme_vision/Segmentation/cdcl/inference_15parts_skeletons.py", line 425, in run_image
canvas, heatmap, paf, people, seg = process(oriImg, flipImg, params, model_params, model, Img)
File "/Users/Projects/CompVis/Xtreme-Vision/extvis/lib/python3.8/site-packages/xtreme_vision/Segmentation/cdcl/inference_15parts_skeletons.py", line 138, in process
output_blobs = model.predict(input_img)
File "/Users/Projects/CompVis/Xtreme-Vision/extvis/lib/python3.8/site-packages/keras/engine/training_v1.py", line 969, in predict
return func.predict(
File "/Users/Projects/CompVis/Xtreme-Vision/extvis/lib/python3.8/site-packages/keras/engine/training_generator_v1.py", line 817, in predict
return predict_generator(
File "/Users/Projects/CompVis/Xtreme-Vision/extvis/lib/python3.8/site-packages/keras/engine/training_generator_v1.py", line 257, in model_iteration
aggregator.create(batch_outs)
File "/Users/Projects/CompVis/Xtreme-Vision/extvis/lib/python3.8/site-packages/keras/engine/training_utils_v1.py", line 429, in create
raise RuntimeError('Attempted to aggregate unsupported object {}.'
RuntimeError: Attempted to aggregate unsupported object Tensor("model/Mconv8_stage1_L1/BiasAdd:0", shape=(1, 46, 69, 38), dtype=float32).
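
For context, that aggregator error means predict() handed back a symbolic graph tensor instead of a NumPy array, i.e. the model is being run under TF1-style graph semantics. A small standalone illustration of the difference (not the library's code):

# Standalone illustration, not the library's code: the v1 output aggregator can
# handle concrete (eager) values, but a symbolic graph tensor like the one in
# the error above has no value to aggregate.
import numpy as np
import tensorflow as tf

eager_out = tf.constant(np.zeros((1, 46, 69, 38), np.float32))
print(type(eager_out.numpy()))          # <class 'numpy.ndarray'> -- what predict() should yield

with tf.Graph().as_default():           # ops created here are symbolic, graph-mode tensors
    graph_out = tf.zeros((1, 46, 69, 38))
    print(graph_out)                    # Tensor("zeros:0", ...) -- the kind of object in the error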
