
distracted-driver-detection's Introduction

๐‡๐ž๐ฅ๐ฅ๐จ ๐ญ๐ก๐ž๐ซ๐ž, ๐Ÿ๐ž๐ฅ๐ฅ๐จ๐ฐ <๐š๐šŽ๐šŸ๐šŽ๐š•๐š˜๐š™๐šŽ๐š›๐šœ/>!

You have finally discovered my GitHub profile.

Languages

Python Java JavaScript C C++ SQL

Technologies

Node.js React AWS Docker Jira

Let's Connect ☕

LinkedIn Facebook Instagram

distracted-driver-detection's People

Contributors

abhinav1004, dependabot[bot]


distracted-driver-detection's Issues

With an input video it works, but with the webcam it does not

While training, the kernel died. What is the reason behind this?
My board (NVIDIA Xavier AGX) has 8 GB of RAM; is that the reason or not?
On the bench it works with the input video, but in a real scenario it gives wrong predictions. What is the reason behind this? Kindly give me a solution.
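For reference, a minimal OpenCV sketch (not this repository's code) of the only difference between reading a video file and reading a webcam: the argument passed to cv2.VideoCapture. The device index 0 and the window name are assumptions.

import cv2

source = 0  # 0 = default webcam; use a path such as "input_video.mp4" to read a file instead
cap = cv2.VideoCapture(source)
if not cap.isOpened():
    raise RuntimeError("Could not open video source: %r" % source)

while True:
    ok, frame = cap.read()
    if not ok:  # end of file, or the webcam stopped delivering frames
        break
    cv2.imshow("preview", frame)  # run the prediction on `frame` here
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

If cap.isOpened() is already False for source 0, the problem usually lies in camera access or permissions on the board rather than in the model code.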

Log loss

Can you show us or provide the code for how you calculated the log loss for all those models? Please.
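For reference, multi-class log loss is commonly computed from the predicted class probabilities with scikit-learn; a minimal generic sketch (not the author's code), where the labels and probabilities are placeholders:

import numpy as np
from sklearn.metrics import log_loss

# y_true: integer class labels (c0-c9 mapped to 0-9); y_prob: softmax outputs
# with shape (n_samples, 10), e.g. from model.predict() on the validation set.
y_true = np.array([0, 3, 9, 1])                # placeholder labels
y_prob = np.random.dirichlet(np.ones(10), 4)   # placeholder probabilities
print(log_loss(y_true, y_prob, labels=list(range(10))))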

design module error

Hi,

I tried to run your code with some images, but I got the error below. Can you help me solve this issue?


ResourceExhaustedError Traceback (most recent call last)
<ipython-input-...> in <module>
11 model.add(Dropout(0.5))
12 model.add(Flatten())
---> 13 model.add(Dense(500, activation='relu', kernel_initializer='glorot_normal'))
14 model.add(Dropout(0.5))
15 model.add(Dense(10, activation='softmax', kernel_initializer='glorot_normal'))

/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
515 self._self_setattr_tracking = False # pylint: disable=protected-access
516 try:
--> 517 result = method(self, *args, **kwargs)
518 finally:
519 self._self_setattr_tracking = previous_value # pylint: disable=protected-access

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/sequential.py in add(self, layer)
221 # If the model is being built continuously on top of an input layer:
222 # refresh its output.
--> 223 output_tensor = layer(self.outputs[0])
224 if len(nest.flatten(output_tensor)) != 1:
225 raise ValueError(SINGLE_LAYER_OUTPUT_ERROR_MSG)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
950 if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
951 return self._functional_construction_call(inputs, args, kwargs,
--> 952 input_list)
953
954 # Maintains info about the Layer.call stack.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
1089 # Check input assumptions set after layer building, e.g. input shape.
1090 outputs = self._keras_tensor_symbolic_call(
-> 1091 inputs, input_masks, args, kwargs)
1092
1093 if outputs is None:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _keras_tensor_symbolic_call(self, inputs, input_masks, args, kwargs)
820 return nest.map_structure(keras_tensor.KerasTensor, output_signature)
821 else:
--> 822 return self._infer_output_signature(inputs, args, kwargs, input_masks)
823
824 def _infer_output_signature(self, inputs, args, kwargs, input_masks):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _infer_output_signature(self, inputs, args, kwargs, input_masks)
860 # overridden).
861 # TODO(kaftan): do we maybe_build here, or have we already done it?
--> 862 self._maybe_build(inputs)
863 outputs = call_fn(inputs, *args, **kwargs)
864

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _maybe_build(self, inputs)
2708 # operations.
2709 with tf_utils.maybe_init_scope(self):
-> 2710 self.build(input_shapes) # pylint:disable=not-callable
2711 # We must set also ensure that the layer is marked as built, and the build
2712 # shape is stored since user defined build functions may not be calling

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py in build(self, input_shape)
1190 constraint=self.kernel_constraint,
1191 dtype=self.dtype,
-> 1192 trainable=True)
1193 if self.use_bias:
1194 self.bias = self.add_weight(

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in add_weight(self, name, shape, dtype, initializer, regularizer, trainable, constraint, use_resource, synchronization, aggregation, **kwargs)
637 synchronization=synchronization,
638 aggregation=aggregation,
--> 639 caching_device=caching_device)
640 if regularizer is not None:
641 # TODO(fchollet): in the future, this should be handled at the

/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py in _add_variable_with_custom_getter(self, name, shape, dtype, initializer, getter, overwrite, **kwargs_for_getter)
808 dtype=dtype,
809 initializer=initializer,
--> 810 **kwargs_for_getter)
811
812 # If we set an initializer and the variable processed it, tracking will not

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py in make_variable(name, shape, dtype, initializer, trainable, caching_device, validate_shape, constraint, use_resource, collections, synchronization, aggregation, partitioner)
140 synchronization=synchronization,
141 aggregation=aggregation,
--> 142 shape=variable_shape if variable_shape else None)
143
144

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs)
258 def __call__(cls, *args, **kwargs):
259 if cls is VariableV1:
--> 260 return cls._variable_v1_call(*args, **kwargs)
261 elif cls is Variable:
262 return cls._variable_v2_call(*args, **kwargs)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py in _variable_v1_call(cls, initial_value, trainable, collections, validate_shape, caching_device, name, variable_def, dtype, expected_shape, import_scope, constraint, use_resource, synchronization, aggregation, shape)
219 synchronization=synchronization,
220 aggregation=aggregation,
--> 221 shape=shape)
222
223 def _variable_v2_call(cls,

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py in <lambda>(**kwargs)
197 shape=None):
198 """Call on Variable class. Useful to force the signature."""
--> 199 previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
200 for _, getter in ops.get_default_graph()._variable_creator_stack: # pylint: disable=protected-access
201 previous_getter = _make_getter(getter, previous_getter)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variable_scope.py in default_variable_creator(next_creator, **kwargs)
2616 synchronization=synchronization,
2617 aggregation=aggregation,
-> 2618 shape=shape)
2619 else:
2620 return variables.RefVariable(

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs)
262 return cls._variable_v2_call(*args, **kwargs)
263 else:
--> 264 return super(VariableMetaclass, cls).__call__(*args, **kwargs)
265
266

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py in __init__(self, initial_value, trainable, collections, validate_shape, caching_device, name, dtype, variable_def, import_scope, constraint, distribute_strategy, synchronization, aggregation, shape)
1583 aggregation=aggregation,
1584 shape=shape,
-> 1585 distribute_strategy=distribute_strategy)
1586
1587 def _init_from_args(self,

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py in _init_from_args(self, initial_value, trainable, collections, caching_device, name, dtype, constraint, synchronization, aggregation, distribute_strategy, shape)
1710 with ops.name_scope("Initializer"), device_context_manager(None):
1711 if init_from_fn:
-> 1712 initial_value = initial_value()
1713 if isinstance(initial_value, trackable.CheckpointInitialValue):
1714 self._maybe_initialize_trackable()

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/initializers/initializers_v2.py in __call__(self, shape, dtype, **kwargs)
408 """
409 return super(VarianceScaling, self).__call__(
--> 410 shape, dtype=_get_dtype(dtype), **kwargs)
411
412

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops_v2.py in __call__(self, shape, dtype, **kwargs)
592 # constant from scipy.stats.truncnorm.std(a=-2, b=2, loc=0., scale=1.)
593 stddev = math.sqrt(scale) / .87962566103423978
--> 594 return self._random_generator.truncated_normal(shape, 0.0, stddev, dtype)
595 elif self.distribution == "untruncated_normal":
596 stddev = math.sqrt(scale)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops_v2.py in truncated_normal(self, shape, mean, stddev, dtype)
1089 op = random_ops.truncated_normal
1090 return op(
-> 1091 shape=shape, mean=mean, stddev=stddev, dtype=dtype, seed=self.seed)
1092
1093 # Compatibility aliases

/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
199 """Call target, and fall back on dispatchers if there is a TypeError."""
200 try:
--> 201 return target(*args, **kwargs)
202 except (TypeError, ValueError):
203 # Note: convert_to_eager_tensor currently raises a ValueError, not a

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/random_ops.py in truncated_normal(shape, mean, stddev, dtype, seed, name)
196 rnd = gen_random_ops.truncated_normal(
197 shape_tensor, dtype, seed=seed1, seed2=seed2)
--> 198 mul = rnd * stddev_tensor
199 value = math_ops.add(mul, mean_tensor, name=name)
200 tensor_util.maybe_set_static_shape(value, shape)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py in binary_op_wrapper(x, y)
1162 with ops.name_scope(None, op_name, [x, y]) as name:
1163 try:
-> 1164 return func(x, y, name=name)
1165 except (TypeError, ValueError) as e:
1166 # Even if dispatching the op failed, the RHS may be a tensor aware

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py in _mul_dispatch(x, y, name)
1494 return sparse_tensor.SparseTensor(y.indices, new_vals, y.dense_shape)
1495 else:
-> 1496 return multiply(x, y, name=name)
1497
1498

/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
199 """Call target, and fall back on dispatchers if there is a TypeError."""
200 try:
--> 201 return target(*args, **kwargs)
202 except (TypeError, ValueError):
203 # Note: convert_to_eager_tensor currently raises a ValueError, not a

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py in multiply(x, y, name)
516 """
517
--> 518 return gen_math_ops.mul(x, y, name)
519
520

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_math_ops.py in mul(x, y, name)
6066 return _result
6067 except _core._NotOkStatusException as e:
-> 6068 _ops.raise_from_not_ok_status(e, name)
6069 except _core._FallbackException:
6070 pass

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
6860 message = e.message + (" name: " + name if name is not None else "")
6861 # pylint: disable=protected-access
-> 6862 six.raise_from(core._status_to_exception(e.code, message), None)
6863 # pylint: enable=protected-access
6864

/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

ResourceExhaustedError: OOM when allocating tensor with shape[32768,500] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:Mul]
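The OOM occurs while initialising the 32768x500 kernel of the Dense(500) layer, so the GPU runs out of memory already at model-build time. A generic TensorFlow 2.x mitigation sketch (not part of this repository); the batch size mentioned is only illustrative:

import tensorflow as tf

# Let TF allocate GPU memory on demand instead of grabbing it all up front.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

# Other common mitigations: shrink the Dense layer (e.g. Dense(128) instead of
# Dense(500)), add pooling before Flatten() to reduce the 32768 input features,
# or train with a smaller batch size, e.g. model.fit(..., batch_size=16).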

Error in Distracted-Driver-Detection-vgg16

Hi,

I got an error like:

InternalError: Could not synchronize CUDA stream: CUDA_ERROR_LAUNCH_TIMEOUT: the launch timed out and was terminated

Please suggest how to solve it.

csv_files folder

Can you please explain how you created the train.csv and test.csv files?
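For context, and not the author's actual method: such CSVs are typically generated by walking the Kaggle State Farm directory layout (imgs/train/c0 ... imgs/train/c9) and writing one row per image. A hedged sketch; the column names image_path and label are assumptions:

import glob
import os
import pandas as pd

rows = []
for class_name in sorted(os.listdir(os.path.join("imgs", "train"))):   # c0 ... c9
    for path in glob.glob(os.path.join("imgs", "train", class_name, "*.jpg")):
        rows.append({"image_path": path, "label": class_name})

pd.DataFrame(rows).to_csv(os.path.join("csv_files", "train.csv"), index=False)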

wrong prediction on video

I tried to run the video prediction using the model I trained, but it seems to give the wrong label on both input_video.mp4 and my own video recordings. I have also tried to train a model using your code, but the activity label still does not match what the driver is doing. Can you help me with this? I'm new to ML and I don't know where the problem is.
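A general observation, not a diagnosis of this repository: wrong labels on video frames are very often caused by inference preprocessing that differs from the training preprocessing (image size, colour order, scaling). A minimal sketch, where the 224x224 size, BGR-to-RGB conversion and 1/255 scaling are assumptions that must be replaced by whatever the training pipeline actually used:

import cv2
import numpy as np

def preprocess_frame(frame, size=(224, 224)):
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # OpenCV delivers BGR frames
    frame = cv2.resize(frame, size)
    frame = frame.astype("float32") / 255.0
    return np.expand_dims(frame, axis=0)             # add the batch dimension

# probs = model.predict(preprocess_frame(frame))
# predicted_class = "c%d" % int(np.argmax(probs))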

key error occurred while running model prediction result

Creating the prediction results for the image classification and moving the predicted images to another folder:

# with renamed filename having the class name predicted for that image using model
with open(os.path.join(JSON_DIR, 'class_name_map.json')) as secret_input:
    info = json.load(secret_input)

for i in range(data_test.shape[0]):
    new_name = data_test.iloc[i, 0].split("/")[-1].split(".")[0] + "_" + info[data_test.iloc[i, 1]] + ".jpg"
    shutil.copy(data_test.iloc[i, 0], os.path.join(PREDICT_DIR, new_name))

# saving the model predicted results into a csv file
data_test.to_csv(os.path.join(os.getcwd(), "csv_files", "short_test_result.csv"), index=False)

ERROR


KeyError Traceback (most recent call last)
<ipython-input-...> in <module>
6
7 for i in range(data_test.shape[0]):
----> 8 new_name = data_test.iloc[i,0].split("/")[-1].split(".")[0]+"_"+info[data_test.iloc[i,1]]+".jpg"
9 shutil.copy(data_test.iloc[i,0],os.path.join(PREDICT_DIR,new_name))
10

KeyError: 'c2 '

@Abhinav1004, could you please give me a solution for this key error?
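A hedged workaround sketch, not the author's fix: KeyError: 'c2 ' (note the trailing space) means the predicted label does not exactly match a key in class_name_map.json; case mismatches such as 'C0' versus 'c0' cause the same error. Normalising both the JSON keys and the predicted label before the lookup avoids this. The sketch reuses JSON_DIR, PREDICT_DIR and data_test from the snippet above.

import json
import os
import shutil

with open(os.path.join(JSON_DIR, 'class_name_map.json')) as f:
    info = json.load(f)

# Normalise the mapping keys once: strip whitespace and lower-case them.
info = {str(k).strip().lower(): v for k, v in info.items()}

for i in range(data_test.shape[0]):
    label = str(data_test.iloc[i, 1]).strip().lower()   # 'c2 ' -> 'c2', 'C0' -> 'c0'
    new_name = data_test.iloc[i, 0].split("/")[-1].split(".")[0] + "_" + info[label] + ".jpg"
    shutil.copy(data_test.iloc[i, 0], os.path.join(PREDICT_DIR, new_name))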

Prediction Error

I have an error like:

KeyError Traceback (most recent call last)
<ipython-input-...> in <module>
6
7 for i in range(data_test.shape[0]):
----> 8 new_name = data_test.iloc[i,0].split("/")[-1].split(".")[0]+"_"+info[data_test.iloc[i,1]]+".jpg"
9 shutil.copy(data_test.iloc[i,0],os.path.join(PREDICT_DIR,new_name))
10

KeyError: 'C0'

Thanks,
Kalai
