Comments (9)
You need to specify the names of the maps without the .yaml
extension, like the following:
map_names=["zigzag_dists", "4way", "loop_empty", "small_loop"]
as shown in /tutorials/get_tile_coordinates.py
If you would like to collect tile coordinates for all the maps, you can do so with:
import os
# drop the .yaml extension so only the bare map names are passed to the script
map_names = [os.path.splitext(f)[0] for f in os.listdir("maps") if f.endswith(".yaml")]
Running the script with:
python get_tile_coordinates.py
will then generate the file /challenge-aido_RL-IL/tutorials/tile_coordinates.csv,
where the tile coordinates for each of the maps are stored.
Remember that you need to copy the coordinate information (whatever is inside the .csv
file) and put it accordingly inside the get_tiles function in duckietown_rl/gym_duckietown/simulator.py,
as described here.
from challenge-aido_rl-il.
Thanks for your reply! Yes, I tried this, but I don't know why my new map gives this error.
Here is my new map:
I want to place an obstacle in the middle of the road.
from challenge-aido_rl-il.
Oh, I see. I haven't created a map on my own, so please check out the official repository of gym_duckietown, especially the Design part, where they explain the structure of the .yaml file. They say the array of tiles should be a 2-D array, and as far as I can see your array is 1-D. Perhaps that causes the issue.
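For a quick sanity check of the tiles structure, something like this might help (a minimal sketch, assuming PyYAML is installed and that the map file has a top-level tiles key as in the gym_duckietown map format; the file path is just an example):

import yaml

# In a well-formed map, "tiles" is a list of rows, and each row is itself a
# list of tile strings - i.e. a 2-D array, not a flat 1-D list.
with open("maps/loop_obstacles.yaml") as f:
    map_data = yaml.safe_load(f)

tiles = map_data["tiles"]
assert isinstance(tiles, list) and all(isinstance(row, list) for row in tiles), \
    "tiles should be a 2-D array (a list of rows)"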
from challenge-aido_rl-il.
Thanks for your reply! I followed the above instructions and put the map "loop_obstacles" into the simulator, but I get this error when I run python -m scripts.train_ddpg.
from challenge-aido_rl-il.
I edited the simulator so that every time a map is loaded/reset, the agent starts at a 'good' position. I improve the 'goodness' of the starting point with 2 things (see the sketch after this list):
- By default the simulator chooses a random point in a tile to launch the agent. One thing I do is narrow down that interval so that the agent does not start at the edges of the tile (the disadvantage of starting at the edge is that the RL agent might go off the tile quickly).
- I want the robot to start near the centre of the tile (not further away than a threshold I defined: 0.18 metres) AND the angle it needs to turn to align itself with the centre line of the tile should not be larger than 100 degrees, again a threshold defined by me.
The simulator tries to satisfy all of the conditions above for a fixed number of attempts (MAX_SPAWN_ATTEMPTS, which is by default 5000, as you see in the error). So what happened with your map is that the simulator could not find a proper point that satisfies all the conditions above, and after 5000 attempts it stopped.
See the two simulator.py files to see the differences: my code - original code. You might relax the constraints and perhaps the simulator will find a 'good' spot to launch the agent.
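In pseudocode, that spawn logic looks roughly like the following (a minimal sketch, not the actual simulator code; MAX_SPAWN_ATTEMPTS mirrors the constant from the error message, the 0.18 m and 100 degree thresholds are the ones mentioned above, and sample_pose, dist_to_center and angle_to_centerline are hypothetical helpers):

MAX_SPAWN_ATTEMPTS = 5000
MAX_DIST_FROM_CENTER = 0.18      # metres from the lane centre
MAX_ANGLE_TO_CENTERLINE = 100.0  # degrees needed to align with the centre line

def find_spawn_pose(sample_pose, dist_to_center, angle_to_centerline):
    for _ in range(MAX_SPAWN_ATTEMPTS):
        pos, angle = sample_pose()  # random point inside a (shrunken) tile, away from edges
        if (dist_to_center(pos) <= MAX_DIST_FROM_CENTER and
                abs(angle_to_centerline(pos, angle)) <= MAX_ANGLE_TO_CENTERLINE):
            return pos, angle       # 'good' starting pose found
    raise RuntimeError("No valid starting pose found after %d attempts" % MAX_SPAWN_ATTEMPTS)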
from challenge-aido_rl-il.
One very weird thing is that I can run the "loop_obstacles" map successfully with python PIDcontroller.py, but it fails with python -m scripts.train_ddpg, even though it is the same map configuration.
from challenge-aido_rl-il.
Yes, that looks strange.
Are you sure the file tutorials/maps/loop_obstacles.yaml is exactly the same as duckietown_rl/maps/loop_obstacles.yaml?
If yes, then I would guess it's luck! Remember that the simulator selects the point to launch the agent randomly. Even though the probability is low, it might happen that in one run the simulator finds a suitable point within the 5k trials.
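One quick way to check that the two copies really are identical (a minimal sketch using only the standard library; the paths are the ones mentioned above):

import filecmp

# Compare the two map files byte-for-byte; prints True only if they are identical.
print(filecmp.cmp("tutorials/maps/loop_obstacles.yaml",
                  "duckietown_rl/maps/loop_obstacles.yaml",
                  shallow=False))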
from challenge-aido_rl-il.
Thanks! 😁 I tried changing the x,z initial position to +1 rather than +0.8, and now it easily finds a good starting pose.
One thing I don't understand is whether the robot can learn to avoid the obstacles in your code. For example, in the DDPG scripts I haven't seen where we could get the other objects' information.
from challenge-aido_rl-il.
Avoiding obstacles was not in my project's goals, so I did not build an algorithm for that.
One might do the following:
- Use the function proximity_penalty2 (see link) to determine whether the agent collided or not.
- Edit the reward function: subtract reward, in other words add a penalty, if the agent collides with static/dynamic objects (see an example). A rough sketch of such a penalty term is shown below.
Try out the reward function to check that it behaves the way you want: e.g., is the reward lower when the agent collides, etc. Then you can train the RL agent, and hopefully it will learn to avoid obstacles.
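A minimal sketch of that reward tweak, assuming a gym-duckietown-style environment in which proximity_penalty2(pos, angle) returns a non-positive value whose magnitude grows as the agent gets close to objects (the exact name, signature and weighting may differ in your simulator.py):

def compute_reward(env, base_reward, collision_weight=10.0):
    # penalty is <= 0 and becomes more negative the closer the agent is to an object
    penalty = env.proximity_penalty2(env.cur_pos, env.cur_angle)
    # subtract from the lane-following reward so near-misses and collisions are discouraged
    return base_reward + collision_weight * penalty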
from challenge-aido_rl-il.