This repository contains the ROS workspace with the packages and program nodes needed to simulate a simple TurtleBot3 and to perform SLAM on it while adding noise to the wheel-odometry motion model and the IMU sensor. It also includes a results folder with images and videos of the TurtleBot3 simulation under different conditions, as well as documentation produced as part of my internship with Arrow Electronics (eInfochips).
- Setting up NVIDIA Jetson
- Installing and Verifying relevant packages
- Testing CSI-Camera
- Installing pre-trained models and running inference
- Performing Facial recognition and gesture recognition
- Pose Tracking
- Setting up GPIO for communication protocols
- Setting up SPI for Jetson-IO
- Integrating an STM32 Micro-controller with NVIDIA Jetson Nano
- Reading sensor data
- Fusing sensor data from multiple sources
sudo apt-get install python3
sudo apt-get install gedit
Verify that the packages have been installed correctly
which python3
#output should be
/usr/bin/python3
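The same `which` check can be scripted if you need to verify several tools at once. A minimal sketch using only Python's standard library (`shutil.which` mirrors the shell's `which`; the function name is illustrative):

```python
import shutil

def find_tool(name):
    """Return the absolute path of an executable found on PATH, or None."""
    return shutil.which(name)

print(find_tool("python3"))  # e.g. /usr/bin/python3 on the Jetson
```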
In a new terminal,
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get install -y python3-pip
#To install any specific package in the future
pip3 install package_name
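After installing a package with pip3, you can confirm the interpreter can find it without actually importing it. A small sketch, standard library only (`is_installed` is an illustrative helper, not a pip command):

```python
import importlib.util

def is_installed(package_name):
    """Return True if the interpreter can locate the package."""
    return importlib.util.find_spec(package_name) is not None

print(is_installed("json"))  # True: json ships with Python
```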
The following commands confirm that your camera is successfully connected to the NVIDIA Jetson
ls /dev/video0
nvgstcapture-1.0 --orientation=2
Clone the CSI-Camera GitHub repository
git clone https://github.com/JetsonHacksNano/CSI-Camera.git
cd CSI-Camera
gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! 'video/x-raw(memory:NVMM),width=3280, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=2 ! 'video/x-raw, width=816, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
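The long pipeline above is easier to reuse from Python if it is assembled by a helper, similar to what the repository's `simple_camera.py` does. A sketch under that assumption — the defaults mirror the command above, and the sink is swapped to `appsink` so the string can be handed to `cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)`:

```python
def gstreamer_pipeline(sensor_id=0, capture_width=3280, capture_height=2464,
                       display_width=816, display_height=616,
                       framerate=21, flip_method=2):
    """Build a GStreamer pipeline string for the CSI camera (nvarguscamerasrc)."""
    return (
        f"nvarguscamerasrc sensor_id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={capture_width}, height={capture_height}, "
        f"framerate={framerate}/1, format=NV12 ! "
        f"nvvidconv flip-method={flip_method} ! "
        f"video/x-raw, width={display_width}, height={display_height}, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

print(gstreamer_pipeline())
```

For an on-screen preview only, keep the original `nvegltransform ! nveglglessink` sink instead of `appsink`.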
In a new terminal, install the NumPy package
sudo apt-get update
sudo apt install python3-numpy
sudo apt install libcanberra-gtk-module
Run the facial detection and eye tracking program
python3 face_detect.py
You should see a similar output -
To install the pre-trained models
cd jetson-inference/tools
./download-models.sh
After downloading and installing the pre-trained models and building the project from source, make sure the terminal is in the binaries directory
cd jetson-inference/build/aarch64/bin
Next, after navigating to that directory, run the following command -
./imagenet.py images/orange_0.jpg images/test/output_0.jpg
After running the command, you should see a similar output (the first run will take TensorRT a few minutes to optimize the network)
The imagenet program handles video stream processing as well. Running a video from the disk:
wget https://nvidia.box.com/shared/static/tlswont1jnyu3ix2tbf7utaekpzcx4rc.mkv -O jellyfish.mkv
./imagenet.py --network=resnet-18 jellyfish.mkv images/test/jellyfish_resnet18.mkv
The classified video then plays in a new window
The following commands install the OpenCV4 library and Tkinter package
pip3 install opencv-contrib-python
sudo apt-get install python3-tk
Install Tensorflow
sudo apt update
sudo apt install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'
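The quoted `'tensorflow<2'` specifier pins pip to a 1.x release, which the TF1-style frozen graph used later expects. A toy illustration of what the pin means (the helper is hypothetical, not part of pip):

```python
def satisfies_pin(version, pin="<2"):
    """Check a dotted version string against a simple '<N' major-version pin."""
    major = int(version.split(".")[0])
    limit = int(pin.lstrip("<"))
    return major < limit

print(satisfies_pin("1.15.2"))  # True: a 1.x build satisfies the pin
print(satisfies_pin("2.4.0"))   # False: TF2 is excluded
```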
Install CUDA, cuDNN, TensorRT, and TensorFlow for Python
sudo apt install cmake libopenblas-dev
Download the frozen model from https://github.com/apollo-time/facenet/raw/master/model/resnet/facenet.pb
Then convert the .pb model for use from Python -