autorope / donkeycar
Open source hardware and software platform to build a small scale self driving car.
Home Page: http://www.donkeycar.com
License: MIT License
It's currently impossible to train a model on several datasets. You can work around this by building one dataset out of the sessions you need and training the model on that bigger dataset, but training on datasets too big for memory creates new problems.
Users should only need to list the datasets they want to use in the train.py or explore.py script.
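Until multi-dataset training is supported natively, one workaround for the memory problem is to stream batches instead of concatenating datasets up front. A rough sketch (the function name and record format are my assumptions, not donkey's API):

```python
def batch_generator(datasets, batch_size=4):
    """Yield training batches drawn round-robin across several datasets,
    so no single combined dataset has to fit in memory.
    Each dataset is assumed to be an iterable of records."""
    iters = [iter(d) for d in datasets]
    batch = []
    while iters:
        # Iterate over a copy so exhausted datasets can be dropped.
        for it in list(iters):
            try:
                batch.append(next(it))
            except StopIteration:
                iters.remove(it)
                continue
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:
        yield batch  # final partial batch
```

A generator like this could be handed to Keras' fit_generator-style training so only one batch is ever resident in memory.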
As I understand it, RemoteClient already sends angle and throttle data when making the request to the Tornado server. I guess it would be feasible to get these values from an RC receiver (whose driver would need implementing) and pass them to the server.
This image would provide the default folder structure, settings and could optionally include keras/tensorflow and opencv.
Currently a vehicle is controlled by loading a webpage served by the vehicle's Raspberry Pi. This makes it impossible to control the car from far away because the Pi does not have a static IP address.
To fix this, a remote server can act as a proxy between the user and the vehicle. The remote server serves the page for the user controls and the vehicle constantly sends and receives updates from the server.
Code changes:
Here is a great example: https://chatbotslife.com/using-augmentation-to-mimic-human-driving-496b569760a9#.b6jyrypqu
I would like to study the source code.
I have a question about the DifferentialDriveMixer class.
I am confused by the code below:
l_speed = ((self.left_motor.speed + throttle)/3 - angle/5)
r_speed = ((self.right_motor.speed + throttle)/3 + angle/5)
I want to know why '3' and '5' are used.
What do they stand for? Are they empirically chosen values?
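The 3 and 5 in the snippet above appear to be empirically tuned gains (blending toward the previous motor speed and scaling steering authority). For comparison, a minimal textbook differential mix without any smoothing might look like this (the function name is mine, not part of donkey):

```python
def mix(throttle, angle):
    """Textbook differential-drive mixing: throttle sets the common
    wheel speed, angle adds an opposite offset to each side.
    Inputs are assumed to be normalized to [-1, 1]."""
    left = throttle - angle
    right = throttle + angle
    # Clamp each side to the valid motor range.
    return (max(-1.0, min(1.0, left)),
            max(-1.0, min(1.0, right)))
```

Any divisors on top of this are tuning choices, so the values in the donkey code are most likely experience values rather than derived constants.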
Instead of using models to predict a single steering value, use a model that predicts the steering angle category. This will also give us the probability the angle is correct and will let us more easily combine models.
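A sketch of what the binning could look like (the bin count and helper names are my assumptions, not an existing donkey API):

```python
def angle_to_bin(angle, num_bins=15):
    """Map a steering angle in [-1, 1] to a one-hot category list.
    num_bins=15 is an arbitrary choice for this sketch."""
    idx = int(round((angle + 1) / 2 * (num_bins - 1)))
    one_hot = [0.0] * num_bins
    one_hot[idx] = 1.0
    return one_hot

def bin_to_angle(probs):
    """Recover a steering angle from the most probable bin; the max
    probability doubles as a confidence score for the prediction."""
    idx = max(range(len(probs)), key=lambda i: probs[i])
    return idx / (len(probs) - 1) * 2 - 1
```

A categorical model would then output a softmax over the bins, and ensembling becomes straightforward: average the probability vectors of several models before taking the argmax.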
I've tested my motors, using the Adafruit example scripts, and everything is working.
When I get to the last step of starting the remote control, this is the response:
Traceback (most recent call last):
  File "scripts/drive.py", line 32, in <module>
    mythrottlecontroller = dk.actuators.PCA9685_Controller(cfg['throttle_actuator_channel'])
  File "/home/bob/donkey/donkey/actuators.py", line 34, in __init__
    self.pwm = Adafruit_PCA9685.PCA9685()
  File "/home/bob/donkey/env/lib/python3.4/site-packages/Adafruit_PCA9685/PCA9685.py", line 75, in __init__
    self.set_all_pwm(0, 0)
  File "/home/bob/donkey/env/lib/python3.4/site-packages/Adafruit_PCA9685/PCA9685.py", line 111, in set_all_pwm
    self._device.write8(ALL_LED_ON_L, on & 0xFF)
  File "/home/bob/donkey/env/lib/python3.4/site-packages/Adafruit_GPIO/I2C.py", line 114, in write8
    self._bus.write_byte_data(self._address, register, value)
  File "/home/bob/donkey/env/lib/python3.4/site-packages/Adafruit_PureIO/smbus.py", line 236, in write_byte_data
    self._device.write(data)
OSError: [Errno 5] Input/output error
I ported the line detection autopilot from the Compound Eye car into a python notebook. This should be added as an available autopilot.
https://wroscoe.github.io/compound-eye-autopilot.html#compound-eye-autopilot
A single "mixer" class that handles command distribution to all actuators is more flexible than separate "throttle" and "steering" classes.
For instance, with differential or skid steering, one needs to know both the throttle and the steering value to assign correct (PWM) values to the actuators.
--datasets - comma separated list of datasets
--sessions - comma separated list of sessions
--loops - how many times to try
--name - name of created model, dataset, or results
Currently the throttle output by the autopilot has no awareness of the client-side brake button or max throttle settings. It would be nice to update the server to respect the client-side settings and brake control, which will make it easier to prevent and stop runaways.
should be something like
python manage.py makevideo --session basic --outputfile ~/test.mp4
One of the most common and laborious tasks of building a self driving car is saving and accessing data for different trial sessions. Currently this is handled through the FileRecorder, which is not a clear way to represent data access.
A much cleaner way would be to use Session objects, with use cases like:
sfactory = SessionFactory('~/sessions')
session = sfactory.new('port')
for r in records:
    session.record(r['img'], r['angle'])
X, Y = session.array()
standard_init_linux.go:178: exec user process caused "exec format error"
At the next race (June 16th), several of us plan to implement lidar + odometry on Donkey2 cars. However, the current donkey software doesn't support adding additional inputs, since the record/decide functions are hardcoded to (angle, steering, img_arr). A generalized state class would allow the cars to support data from additional sensors and allow flexible data retrieval (e.g. images from the last 10 frames).
This VehicleState class would act like a ROSBag and could save data like this:
state = VehicleState(save_to='/path/to/session/')
state.put('image', img_arr)
state.put('throttle', throttle)
state.put('angle', angle)
VehicleState saves data to disk depending on the type of data. 1- or 3-channel arrays (images) would be saved as jpg files. Single-value data (speed, throttle) would be saved to a csv file that contains a list of all values in the format:
key, value, time
throttle, .23, 12:32:32
angle, -.1, 12:32:33
image, /path/to/image, 12:32:33
throttle, .23, 12:32:35
angle, -.1, 12:32:36
image, /path/to/image, 12:32:37
VehicleState also saves recent data in memory in a first-in-first-out queue (ring queue). This will be used by pilots using recurrent networks that need several frames of data.
This is how you'd create a state that saves the last 4 values of each variable.
state = VehicleState(memory=4)
You could then retrieve the last variable values like this:
img_arr, throttle, angle = state.get(['image', 'throttle', 'angle'])
Since the variables are not being recorded at the same time the state class would need to interpolate the different data sets to create a tabular output that Keras / Tensorflow needs.
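A minimal sketch of the proposed class under the assumptions above, using a deque as the ring queue and a CSV file for single-value data (interpolation and jpg handling are left out; the method names follow the issue text, not an existing API):

```python
import csv
import time
from collections import deque

class VehicleState:
    """Sketch of the proposed VehicleState: keeps the last `memory`
    values of each variable in a fixed-size ring buffer and optionally
    appends each record to a CSV log in key, value, time format."""

    def __init__(self, save_to=None, memory=4):
        self.save_to = save_to      # path to a CSV log, or None
        self.memory = memory        # ring buffer depth per variable
        self.buffers = {}

    def put(self, key, value):
        # Create a fixed-size ring buffer per variable on first use;
        # deque(maxlen=...) drops the oldest entry automatically.
        buf = self.buffers.setdefault(key, deque(maxlen=self.memory))
        buf.append(value)
        if self.save_to:
            with open(self.save_to, 'a', newline='') as f:
                csv.writer(f).writerow([key, value, time.time()])

    def get(self, keys):
        # Return the most recent value of each requested variable.
        return [self.buffers[k][-1] for k in keys]
```

The deque gives recurrent-network pilots cheap access to the last N frames, while the CSV log keeps a full on-disk history that can later be interpolated into the tabular form Keras / Tensorflow needs.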
Include
Currently, if a vehicle is turned on while a PWM throttle signal is being pulsed, the vehicle calibrates that signal as zero, so when the PWM throttle value later drops to 0 the car goes in reverse.
The demo script should show how the vehicle should be initialized to ensure that it's calibrated.
I noticed that the installation procedure stops because the LAPACK and Fortran packages are missing when running:
git clone https://github.com/wroscoe/donkey donkeycar
pip install -e donkeycar
You can get them with:
sudo apt-get install libblas-dev liblapack-dev
sudo apt-get install gfortran
When I run drive.py --remote, the connection to the server never completes. When I Ctrl-C, I see this traceback showing it's getting stuck while requests tries to make a connection.
If I open a jupyter notebook and use the same drive script, it works.
Maybe this has something to do with cached IP addresses in bash?
^CTraceback (most recent call last):
File "scripts/drive.py", line 57, in <module>
car.start()
File "/home/pi/donkey/donkey/vehicles.py", line 33, in start
milliseconds)
File "/home/pi/donkey/donkey/remotes.py", line 60, in decide
'json': json.dumps(data)}) #hack to put json in file
File "/usr/lib/python3/dist-packages/requests/api.py", line 94, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/lib/python3/dist-packages/requests/api.py", line 49, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 457, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 569, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 362, in send
timeout=timeout
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 516, in urlopen
body=body, headers=headers)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 308, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.4/http/client.py", line 1090, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python3.4/http/client.py", line 1128, in _send_request
self.endheaders(body)
File "/usr/lib/python3.4/http/client.py", line 1086, in endheaders
self._send_output(message_body)
File "/usr/lib/python3.4/http/client.py", line 924, in _send_output
self.send(msg)
File "/usr/lib/python3.4/http/client.py", line 859, in send
self.connect()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 154, in connect
conn = self._new_conn()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 133, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 78, in create_connection
sock.connect(sa)
KeyboardInterrupt
Occasionally when starting drive.py on the Pi, I would see the following error:
pi@raspberrypi:~/donkey $ python scripts/drive.py --remote http://172.20.10.5:8887
Detected running on rasberrypi. Only importing select modules.
Using TensorFlow backend.
center: 410
PiVideoStream loaded.. .warming camera
/usr/lib/python3/dist-packages/picamera/encoders.py:544: PiCameraResolutionRounded: frame size rounded up from 160x120 to 160x128
width, height, fwidth, fheight)))
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python3.4/threading.py", line 920, in _bootstrap_inner
self.run()
File "/usr/lib/python3.4/threading.py", line 868, in run
self._target(*self._args, **self._kwargs)
File "/home/pi/donkey/donkey/remotes.py", line 79, in update
self.state['milliseconds'],)
File "/home/pi/donkey/donkey/remotes.py", line 140, in decide
data = json.loads(r.text)
File "/usr/lib/python3.4/json/__init__.py", line 318, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.4/json/decoder.py", line 343, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.4/json/decoder.py", line 361, in raw_decode
raise ValueError(errmsg("Expecting value", s, err.value)) from None
ValueError: Expecting value: line 1 column 1 (char 0)
I've not managed to get repro steps; I found that restarting the Docker container and then retrying the drive.py script with the same arguments worked.
Donkey is importing all modules in the donkey/ directory, which leads to a lot of unneeded module imports and breaks in many cases (such as running drive_pi.py, which requires envoy and Keras even though they are not used). Can this be architected differently so that only the modules needed are imported?
Speed and angle should be set to 0.
Currently sessions are passed to the remote server on creation. This limits the server to handling only one session at a time and prevents switching.
The plan is to attempt an odometer setup inspired by this post.
Here's my initial take on how this might work:
Reading the sensor:
Calculating distance and speed:
Sending distance and velocity back to server:
@wroscoe @adammconway does the above approach sound reasonable to you? Do you have preferences on the units used for distance and velocity?
This is a request for the 2.1 version that uses parts.
For many of the computer vision approaches an undistorted image is needed to determine the real angle of the line. Given that there are so many cars using the same camera, it would be helpful to have a part that undistorts the image from the default camera.
Most logic can be copied from here: https://github.com/wroscoe/udacity_projects/blob/master/P4_advanced_lanelines/Code%20and%20Writeup.ipynb
Currently the user can push the angle and speed values well beyond the range the car can actually implement, which makes it hard to correct once they are far out of range. This could be fixed by only increasing/decreasing values while they are below the max / above the min.
Currently the only way to change auto pilots is by restarting the server using different CLI variables. This is slow and doesn't facilitate quick iteration.
A faster way to test pilots would be to switch them from the control webpage. To do this, the following would need to change.
This simulator already has a course that looks like the warehouse:
https://github.com/tawnkramer/sdsandbox
Or the udacity one:
https://github.com/udacity/self-driving-car-sim
Image variants would be helpful to train more generalized driving models from a limited image set. The variants should include:
People who use the docker image to start the server don't use an updated version of the git repo. The repo can be updated manually by running
bash start-server.sh -d
and then running the following inside the docker instance.
git pull origin master
but it would be better if this updated automatically, assuming we can keep the master branch free of conflicts. @yconst do you know how to do this?
This attempts to separate the core donkey library from users' vehicle configurations.
When a user runs pip install donkeycar,
the donkey-admin.py script will be added to the PATH. This will let the user run commands like donkey-admin makecar mydonkey
to create a folder ~/mydonkey
that contains all the config files needed to run the car.
A command can be added during the pip install process like this: http://python-packaging.readthedocs.io/en/latest/command-line-scripts.html
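Using setuptools console-script entry points, the registration could look like the sketch below (the module path and version shown are guesses for illustration, not the actual package layout):

```python
# setup.py (sketch -- the "donkey-admin" command name and the
# donkeycar.management.base:main module path are assumptions
# based on the proposal above, not the shipped package)
from setuptools import setup, find_packages

setup(
    name='donkeycar',
    version='2.1',
    packages=find_packages(),
    entry_points={
        # Installs a `donkey-admin` executable on the user's PATH,
        # so `donkey-admin makecar mydonkey` works after pip install.
        'console_scripts': [
            'donkey-admin = donkeycar.management.base:main',
        ],
    },
)
```

pip generates the wrapper script at install time, so no manual PATH changes are needed.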
This was the main issue I experienced at the Feb 18th track day that prevented me from reliably driving the car around the track for training. It always manifests as an intermittent problem but I am also able to observe it at home, although less frequently than I saw it at the track day.
I'm running a local donkey server over wifi, so 4G latency is not a factor here. On wifi, I'm frequently seeing lag times spike above 1s, sometimes as long as 30 or more seconds. The clues that I've seen so far are:
Here's a sample console log from the pi that shows the spikes. Lag time of ~0.06 is about normal on my home network.
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06897997856140137
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.07510542869567871
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06453394889831543
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.759141206741333
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.05977487564086914
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.0692141056060791
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06003284454345703
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.16736602783203125
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06820440292358398
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.0678567886352539
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.12179088592529297
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.05697226524353027
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.0699162483215332
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06665158271789551
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.17603182792663574
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 1.0047976970672607
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.0619354248046875
throttle update: 0.0
pulse: 370
Vehicles have common maneuvers they can use without input from the pilot. For example:
I think it might help to normalize the steering angle. Maybe just divide by 90 before training and multiply by 90 on the output of predict. Or pass -1 to 1 through the steering control.
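The suggested normalization could be as simple as the pair of helpers below (assuming a ±90 degree range as in the comment above; the function names are mine):

```python
def normalize_angle(angle_deg, max_deg=90):
    """Scale a steering angle in degrees to [-1, 1] for training,
    clamping anything outside the assumed hardware range."""
    return max(-1.0, min(1.0, angle_deg / max_deg))

def denormalize_angle(angle_norm, max_deg=90):
    """Convert a model prediction in [-1, 1] back to degrees
    for the steering actuator."""
    return angle_norm * max_deg
```

Keeping the model's inputs and outputs in [-1, 1] also makes trained models portable across cars with different steering geometries, since the degree range lives only at the actuator boundary.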
After driving 10 minutes, the car stopped responding and updating images. This was the error on the pi.
/usr/lib/python3/dist-packages/picamera/encoders.py:545: PiCameraResolutionRoun$
width, height, fwidth, fheight)))
123
angle: -6 throttle: 46
remote client: {"angle": "-6", "throttle": "46"}
Traceback (most recent call last):
File "demos/drive_pi.py", line 55, in <module>
car.start()
File "/home/pi/code/donkey/donkey/vehicles.py", line 39, in start
self.steering_actuator.update(angle)
File "/home/pi/code/donkey/donkey/actuators.py", line 73, in update
self.pwm.set_pwm(self.channel, 0, pulse)
File "/home/pi/code/donkey/env/lib/python3.4/site-packages/Adafruit_PCA9685/P$
self._device.write8(LED0_ON_L+4*channel, on & 0xFF)
File "/home/pi/code/donkey/env/lib/python3.4/site-packages/Adafruit_GPIO/I2C.$
self._bus.write_byte_data(self._address, register, value)
File "/home/pi/code/donkey/env/lib/python3.4/site-packages/Adafruit_PureIO/sm$
self._device.write(data)
OSError: [Errno 5] Input/output error
"""
Proposed Refactor:
The current platform design does not leave room to change/innovate
the drive loop. This is an alternative way to define the drive loop
using modular components and shared variables. It borrows from
the design of Keras and ROS.
"""
# Local Car
V = Vehicle()
V.data = ['img',           # image from camera
          'c_angle',       # control angle (from user)
          'c_throttle',
          'c_drive_mode',
          'p_angle',
          'p_throttle',
          'a_angle',
          'a_throttle']

V.add(WebMonitor(),
      output=['c_angle', 'c_throttle', 'c_drive_mode'])
V.add(PiCamera(),
      output=['img'])
V.add(CNN(),
      input=['img', 'a_angle', 'a_throttle'],
      output=['p_angle', 'p_throttle'])
V.add(DriveLogic(),
      input=['c_angle', 'c_throttle', 'c_drive_mode',
             'p_angle', 'p_throttle'],
      output=['a_angle', 'a_throttle'])
V.add(SteeringActuator(),
      input=['a_angle'])
V.add(ThrottleActuator(),
      input=['a_throttle'])
V.add(Recorder(), input='*')
# Remote Car
V = Vehicle()
V.data = ['img',           # image from camera
          'c_angle',       # control angle (from user)
          'c_throttle',
          'c_drive_mode',
          'p_angle',
          'p_throttle',
          'a_angle',
          'a_throttle']

V.add(RemoteMonitor(),
      output=['c_angle', 'c_throttle', 'c_drive_mode'])
V.add(PiCamera(),
      output=['img'])
V.add(RemoteLogic(),
      input=['img',
             'c_angle', 'c_throttle', 'c_drive_mode',
             'a_angle', 'a_throttle'],
      output=['a_angle', 'a_throttle'])
V.add(SteeringActuator(),
      input=['a_angle'])
V.add(ThrottleActuator(),
      input=['a_throttle'])
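Under the assumption that each part exposes a run(*inputs) -> outputs method, the shared-variable drive loop itself could be sketched like this (a minimal illustration of the proposal, not the eventual implementation; the '*' wildcard for the Recorder is omitted):

```python
import time

class Vehicle:
    """Minimal sketch of the proposed modular drive loop: parts read
    named shared variables, run, and write their outputs back."""

    def __init__(self):
        self.mem = {}      # shared variables, keyed by name
        self.parts = []    # (part, input_keys, output_keys)

    def add(self, part, input=(), output=()):
        self.parts.append((part, list(input), list(output)))

    def start(self, loops=10, hz=20):
        for _ in range(loops):
            for part, inputs, outputs in self.parts:
                args = [self.mem.get(k) for k in inputs]
                results = part.run(*args)
                if not outputs:
                    continue  # actuator-style parts publish nothing
                if len(outputs) == 1:
                    results = [results]
                for key, val in zip(outputs, results):
                    self.mem[key] = val
            time.sleep(1.0 / hz)  # hold the loop at roughly `hz` Hz
```

Because parts only interact through named variables, a WebMonitor can be swapped for a RemoteMonitor (or a CNN for any other pilot) without touching the loop, which is the flexibility the refactor is after.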
Parts can now be created in separate code bases and still be used in Donkey. The odometer could be a perfect example: the repo would contain the part class code as well as the docs to install the odometer.
There have been some comments about swapping the stock RC controller in/out, and since there are only 2 servo controls used (steering/throttle), an Arduino could do this. But since the rPi has only 1 hardware PWM, it would be more efficient to stream the servo control via a single PPM stream. That way an Arduino could decode the PPM and drive the servos. And with a single digital I/O flag/pin the Arduino could read and use the stock RC receiver signals for training.
Hi everyone, I just cloned on Ubuntu 14, then installed and ran docker. When I run bash start-server.sh I get:
.........
start-server: Running Donkey server container...
Loading modules for server.
Starting Donkey Server...
Using TensorFlow backend.
Traceback (most recent call last):
File "/donkey/scripts/serve.py", line 12, in <module>
w = dk.remotes.DonkeyPilotApplication()
File "/donkey/donkey/remotes.py", line 175, in __init__
self.pilots = ph.default_pilots()
File "/donkey/donkey/pilots.py", line 84, in default_pilots
pilot_list = self.pilots_from_models()
File "/donkey/donkey/pilots.py", line 71, in pilots_from_models
models_list = [f for f in os.scandir(self.models_path)]
FileNotFoundError: [Errno 2] No such file or directory: '/root/mydonkey/models'
I'm getting the following from command line:
(env)pi@raspberrypi:~/donkey $ sudo bash start-server.sh
start-server: Building Donkey server image...
Sending build context to Docker daemon 113.8 MB
Step 1/18 : FROM python:3
---> b6cc5d70bc28
Step 2/18 : RUN apt-get -y update
---> Running in 78d46290b93b
standard_init_linux.go:178: exec user process caused "exec format error"
The command '/bin/sh -c apt-get -y update' returned a non-zero code: 1
start-server: Running Donkey server container...
Unable to find image 'donkey:latest' locally
docker: Error response from daemon: repository donkey not found: does not exist or no pull access.
See 'docker run --help'.
Does anyone have an idea to fix this?
I'm running this on a Raspberry Pi 3, using the image from https://s3.amazonaws.com/donkey_resources/donkey.img.zip.
Docker was installed with: curl -sSL https://get.docker.com | sh
Here are some initial thoughts on improvements that I think could be made here:
The DeviceOrientationEvent API seems to have sufficient cross-platform browser support that it could be used reliably on most iOS and Android phones. More info here: https://developer.mozilla.org/en-US/docs/Web/API/DeviceOrientationEvent
The vehicle could be passed as an optional parameter when running the script files; otherwise the default (vehicle.ini) would be used.
I am following the instructions from the google doc, and using the following instructions:
git clone http://github.com/wroscoe/donkey.git
cd donkey
sudo bash start-server.sh
The process fails with the following error.
~/donkey$ sudo bash start-server.sh
start-server: Running Donkey server container...
Loading modules for server.
Starting Donkey Server...
Using TensorFlow backend.
Traceback (most recent call last):
File "/donkey/scripts/serve.py", line 12, in <module>
w = dk.remotes.DonkeyPilotApplication()
File "/donkey/donkey/remotes.py", line 175, in __init__
self.pilots = ph.default_pilots()
File "/donkey/donkey/pilots.py", line 84, in default_pilots
pilot_list = self.pilots_from_models()
File "/donkey/donkey/pilots.py", line 71, in pilots_from_models
models_list = [f for f in os.scandir(self.models_path)]
FileNotFoundError: [Errno 2] No such file or directory: '/root/mydonkey/models'
I have had a search and could not find any clues, so I am wondering whether I am missing something really simple or whether there is a genuine issue.
Many thanks.