SDCND : Sensor Fusion and Tracking

This is the project for the second course in the Udacity Self-Driving Car Engineer Nanodegree Program: Sensor Fusion and Tracking.

In this project, you will fuse LiDAR and camera measurements to track vehicles over time. Using real-world data from the Waymo Open Dataset, you will detect objects in 3D point clouds and apply an extended Kalman filter for sensor fusion and tracking.
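At the heart of the tracking part is the classic Kalman predict/update cycle. Below is a minimal linear sketch in numpy, assuming a simple constant-velocity model with a position-only measurement; the project's actual filter in student/filter.py uses a 6D state and sensor-specific measurement models.

    import numpy as np

    # Minimal constant-velocity Kalman cycle (illustrative sketch only).
    dt = 0.1
    F = np.array([[1, dt], [0, 1]])   # state transition (position, velocity)
    Q = np.diag([0.01, 0.1])          # process noise covariance (assumed values)
    H = np.array([[1, 0]])            # we measure position only
    R = np.array([[0.2]])             # measurement noise covariance (assumed)

    x = np.array([[0.0], [1.0]])      # initial state estimate
    P = np.eye(2)                     # initial estimate covariance

    def predict(x, P):
        x = F @ x
        P = F @ P @ F.T + Q
        return x, P

    def update(x, P, z):
        gamma = z - H @ x                   # measurement residual
        S = H @ P @ H.T + R                 # residual covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ gamma
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = predict(x, P)
    x, P = update(x, P, np.array([[0.12]]))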

The project consists of two major parts:

  1. Object detection: A deep-learning approach is used to detect vehicles in LiDAR data based on a bird's-eye view of the 3D point cloud (see the sketch after this list). A series of performance measures is then used to evaluate the detection approach.
  2. Object tracking: An extended Kalman filter is used to track vehicles over time, based on the lidar detections fused with camera detections. Data association and track management are implemented as well.
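To give a flavor of the bird's-eye-view step from part 1, a lidar point cloud can be discretized into a fixed-size BEV height map with a few numpy operations. This is an illustrative sketch only; the grid extents and resolution below are assumed, while the real values come from the project configuration used by student/objdet_pcl.py.

    import numpy as np

    # Illustrative BEV discretization (assumed 50 m x 50 m area, 608 x 608 grid).
    def pcl_to_bev_height_map(pcl, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), bev_size=608):
        """pcl: (N, 3) array of lidar points (x, y, z) in vehicle coordinates."""
        # keep only points inside the detection area
        mask = (pcl[:, 0] >= x_range[0]) & (pcl[:, 0] < x_range[1]) & \
               (pcl[:, 1] >= y_range[0]) & (pcl[:, 1] < y_range[1])
        pts = pcl[mask]

        # map metric coordinates to integer grid cells
        res_x = (x_range[1] - x_range[0]) / bev_size
        res_y = (y_range[1] - y_range[0]) / bev_size
        ix = ((pts[:, 0] - x_range[0]) / res_x).astype(np.int_)
        iy = ((pts[:, 1] - y_range[0]) / res_y).astype(np.int_)

        # keep the maximum height per grid cell
        bev = np.zeros((bev_size, bev_size))
        np.maximum.at(bev, (ix, iy), pts[:, 2])
        return bev

    bev_map = pcl_to_bev_height_map(np.random.rand(1000, 3) * [50, 50, 2] - [0, 25, 1])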

The following diagram contains an outline of the data flow and of the individual steps that make up the algorithm.

Also, the project code contains various tasks, which are detailed step-by-step in the code. More information on the algorithm and on the tasks can be found in the Udacity classroom.

Project File Structure

📦project
┣ 📂dataset --> contains the Waymo Open Dataset sequences

┣ 📂misc
┃ ┣ evaluation.py --> plot functions for tracking visualization and RMSE calculation
┃ ┣ helpers.py --> misc. helper functions, e.g. for loading / saving binary files
┃ ┣ objdet_tools.py --> object detection functions without student tasks
┃ ┗ params.py --> parameter file for the tracking part

┣ 📂results --> binary files with pre-computed intermediate results

┣ 📂student
┃ ┣ association.py --> data association logic for assigning measurements to tracks incl. student tasks
┃ ┣ filter.py --> extended Kalman filter implementation incl. student tasks
┃ ┣ measurements.py --> sensor and measurement classes for camera and lidar incl. student tasks
┃ ┣ objdet_detect.py --> model-based object detection incl. student tasks
┃ ┣ objdet_eval.py --> performance assessment for object detection incl. student tasks
┃ ┣ objdet_pcl.py --> point-cloud functions, e.g. for bird's-eye view incl. student tasks
┃ ┗ trackmanagement.py --> track and track management classes incl. student tasks

┣ 📂tools --> external tools
┃ ┣ 📂objdet_models --> models for object detection
┃ ┃ ┣ 📂darknet
┃ ┃ ┃ ┣ 📂config
┃ ┃ ┃ ┣ 📂models --> darknet / yolo model class and tools
┃ ┃ ┃ ┣ 📂pretrained --> copy pre-trained model file here
┃ ┃ ┃ ┃ ┗ complex_yolov4_mse_loss.pth
┃ ┃ ┃ ┗ 📂utils --> various helper functions
┃ ┃ ┗ 📂resnet
┃ ┃ ┃ ┣ 📂models --> fpn_resnet model class and tools
┃ ┃ ┃ ┣ 📂pretrained --> copy pre-trained model file here
┃ ┃ ┃ ┃ ┗ fpn_resnet_18_epoch_300.pth
┃ ┃ ┃ ┗ 📂utils --> various helper functions
┃ ┗ 📂waymo_reader --> functions for light-weight loading of Waymo sequences

┣ basic_loop.py
┗ loop_over_dataset.py

Installation Instructions for Running Locally

Cloning the Project

In order to create a local copy of the project, please click on "Code" and then "Download ZIP". Alternatively, you may of course use GitHub Desktop or Git Bash for this purpose.

Python

The project has been written using Python 3.7. Please make sure that your local installation is at or above this version.
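If in doubt, you can verify your interpreter from within Python (a generic check, not part of the project code):

    import sys

    # abort early if the interpreter is older than Python 3.7
    assert sys.version_info >= (3, 7), f"Python 3.7+ required, found {sys.version}"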

Package Requirements

All dependencies required for the project are listed in the file requirements.txt. You may either install them one by one using pip or install them all at once with: pip3 install -r requirements.txt

Waymo Open Dataset Reader

The Waymo Open Dataset Reader is a very convenient toolbox that allows you to access sequences from the Waymo Open Dataset without having to install the heavy-weight dependencies that come with the official toolbox. The installation instructions can be found in tools/waymo_reader/README.md.
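Once installed, iterating over a downloaded sequence might look like the sketch below; it assumes the reader exposes an iterable WaymoDataFileReader yielding dataset_pb2.Frame messages, as it is used by loop_over_dataset.py.

    from tools.waymo_reader.simple_waymo_open_dataset_reader import WaymoDataFileReader

    # Open one sequence and walk over its frames (sketch; the frame fields
    # below come from the Waymo Open Dataset protobuf definitions).
    datafile = WaymoDataFileReader('dataset/training_segment-1005081002024129653_5313_150_5333_150_with_camera_labels.tfrecord')
    for frame in datafile:
        print(frame.context.name, len(frame.lasers), len(frame.images))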

Waymo Open Dataset Files

This project makes use of three different sequences to illustrate the concepts of object detection and tracking. These are:

  • Sequence 1 : training_segment-1005081002024129653_5313_150_5333_150_with_camera_labels.tfrecord
  • Sequence 2 : training_segment-10072231702153043603_5725_000_5745_000_with_camera_labels.tfrecord
  • Sequence 3 : training_segment-10963653239323173269_1924_000_1944_000_with_camera_labels.tfrecord

To download these files, you will first have to register with the Waymo Open Dataset (Open Dataset – Waymo), if you have not done so already, making sure to note "Udacity" as your institution.

Once you have done so, please click here to access the Google Cloud Container that holds all the sequences. Once you have been cleared for access by Waymo (which might take up to 48 hours), you can download the individual sequences.

The sequences listed above can be found in the folder "training". Please download them and put the tfrecord-files into the dataset folder of this project.

Pre-Trained Models

The object detection methods used in this project rely on pre-trained models provided by the original authors. They can be downloaded here (darknet) and here (fpn_resnet). Once downloaded, please copy the model files into the paths /tools/objdet_models/darknet/pretrained and /tools/objdet_models/fpn_resnet/pretrained, respectively.
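As a quick sanity check after copying, a checkpoint file can be opened with torch.load. Whether it holds a plain state_dict or a fuller checkpoint object is an assumption here; in the project, the actual loading is handled by create_model() in student/objdet_detect.py.

    import torch

    # Inspect one of the downloaded checkpoints (sketch only).
    ckpt_path = 'tools/objdet_models/fpn_resnet/pretrained/fpn_resnet_18_epoch_300.pth'
    checkpoint = torch.load(ckpt_path, map_location='cpu')
    print(type(checkpoint))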

Using Pre-Computed Results

In the main file loop_over_dataset.py, you can choose which steps of the algorithm should be executed. If you want to call a specific function, simply add the corresponding string literal to one of the following lists (a configuration sketch follows the lists below):

  • exec_data: controls the execution of steps related to sensor data.

    • pcl_from_rangeimage transforms the Waymo Open Dataset range image into a 3D point cloud
    • load_image returns the image of the front camera
  • exec_detection: controls which steps of model-based 3D object detection are performed

    • bev_from_pcl transforms the point cloud into a fixed-size bird's-eye view perspective
    • detect_objects executes the actual detection and returns a set of objects (only vehicles)
    • validate_object_labels decides which ground-truth labels should be considered (e.g. based on difficulty or visibility)
    • measure_detection_performance contains methods to evaluate detection performance for a single frame

If you do not include a specific step in the list, pre-computed binary files will be loaded instead. This enables you to run the algorithm and inspect the results even before having implemented anything. The pre-computed results for the mid-term project need to be loaded using this link. Please use the folder darknet first. Unzip the file within and put its contents into the folder results.

  • exec_tracking: controls the execution of the object tracking algorithm

  • exec_visualization: controls the visualization of results

    • show_range_image displays two LiDAR range image channels (range and intensity)
    • show_labels_in_image projects ground-truth boxes into the front camera image
    • show_objects_and_labels_in_bev projects detected objects and label boxes into the bird's-eye view
    • show_objects_in_bev_labels_in_camera displays a stacked view with labels inside the camera image on top and the bird's-eye view with detected objects on the bottom
    • show_tracks displays the tracking results
    • show_detection_performance displays the performance evaluation based on all detected objects
    • make_tracking_movie renders an output movie of the object tracking results
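Taken together, a typical configuration near the top of loop_over_dataset.py might look like the following sketch; the list names and string literals are those described above, though the exact defaults in the file may differ.

    # select which steps run; any step not listed falls back to the
    # pre-computed binaries in the results folder
    exec_data = ['pcl_from_rangeimage', 'load_image']
    exec_detection = ['bev_from_pcl', 'detect_objects', 'validate_object_labels', 'measure_detection_performance']
    exec_tracking = []                  # leave empty to skip tracking
    exec_visualization = ['show_objects_in_bev_labels_in_camera']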

Even without solving any of the tasks, the project code can be executed.

The final project uses pre-computed lidar detections so that all students have the same input data. If you are using the workspace, the data is already prepared there. Otherwise, download the pre-computed lidar detections (~1 GB), unzip them and put them into the folder results.
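Conceptually, each pre-computed result is a pickled binary, so loading one by hand reduces to a generic pickle read like the sketch below; the file name here is hypothetical, and in the project this is wrapped by the load/save helpers in misc/helpers.py.

    import pickle

    # Read one pre-computed intermediate result (hypothetical file name).
    with open('results/detections_frame_50.pickle', 'rb') as f:
        detections = pickle.load(f)
    print(detections)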

External Dependencies

Parts of this project are based on the following repositories:

  • Simple Waymo Open Dataset Reader (used in tools/waymo_reader)
  • Complex-YOLO (used in tools/objdet_models/darknet)
  • SFA3D / fpn_resnet (used in tools/objdet_models/resnet)

License

Contributors

mvirgo, sudkul, uanjali

Issues

Project dependencies may have API risk issues

Hi, in nd013-c2-fusion-starter, inappropriate dependency versioning constraints can cause risks.

Below are the dependencies and version constraints that the project is using:

numpy
opencv-python
protobuf
easydict
pytorch
pillow
matplotlib
wxpython
shapely
tqdm
open3d

The version constraint == introduces a risk of dependency conflicts because the scope of the dependency is too strict.
The version constraints "no upper bound" and * introduce a risk of missing-API errors, because the latest version of a dependency may remove some APIs.

After further analysis, in this project:

  • The version constraint of dependency numpy can be changed to >=1.8.0,<=1.23.0rc3.
  • The version constraint of dependency protobuf can be changed to >=2.6.0,<=3.0.0a3.
  • The version constraint of dependency pillow can be changed to ==9.2.0.
  • The version constraint of dependency pillow can be changed to >=2.0.0,<=9.1.1.
  • The version constraint of dependency matplotlib can be changed to >=1.4.0,<=3.0.3.
  • The version constraint of dependency shapely can be changed to >=1.3.0,<=1.8.2.
  • The version constraint of dependency tqdm can be changed to >=4.36.0,<=4.64.0.

The above modification suggestions can reduce dependency conflicts as much as possible, and introduce the latest versions as much as possible without causing call errors in the project.

The project invokes the following methods from these dependencies:

From numpy:
numpy.linalg.inv
From protobuf:
google.protobuf.descriptor.FileDescriptor
google.protobuf.reflection.GeneratedProtocolMessageType
google.protobuf.symbol_database.Default
From pillow:
PIL.Image.open
From matplotlib:
matplotlib.use
matplotlib.transforms.Affine2D.translate
matplotlib.path.Path
matplotlib.ticker.FuncFormatter
matplotlib.transforms.Affine2D.rotate_around
matplotlib.transforms.Affine2D
matplotlib.patches.PathPatch
From shapely:
shapely.geometry.Polygon
From tqdm:
tqdm.tqdm

@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.

attrdict missing from requirements.txt

When installing requirements.txt using pip, I get the following error:

    ERROR: Command errored out with exit status 1:
     command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-mjt2owsq/wxpython/setup.py'"'"'; __file__='"'"'/tmp/pip-install-mjt2owsq/wxpython/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-mjt2owsq/wxpython/pip-egg-info
         cwd: /tmp/pip-install-mjt2owsq/wxpython/
    Complete output (7 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-mjt2owsq/wxpython/setup.py", line 27, in <module>
        from buildtools.config import Config, msg, opj, runcmd, canGetSOName, getSOName
      File "/tmp/pip-install-mjt2owsq/wxpython/buildtools/config.py", line 30, in <module>
        from attrdict import AttrDict
    ModuleNotFoundError: No module named 'attrdict'
    ----------------------------------------
    ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

This is solved by installing attrdict first: pip3 install attrdict.

Invalid requirements.txt

  • The package for PyTorch on PyPI is not named pytorch but torch.
  • scipy is required (imported here) but not listed in requirements.txt.

Also, it may be helpful to pin the required package versions in requirements.txt; by default, pip will always install the latest version, which may break the code in the future if major updates of the dependencies are released.
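Putting both issues together, a corrected requirements.txt might look like the sketch below; the package names follow the fixes above, while concrete version pins would still need to be chosen and tested.

    numpy
    opencv-python
    protobuf
    easydict
    torch          # was listed as "pytorch", which does not exist on PyPI
    pillow
    matplotlib
    wxpython
    shapely
    tqdm
    open3d
    scipy          # imported by the project but previously missing
    attrdict       # needed to build wxpython (see issue above)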
