gym-collision-avoidance's People

Contributors

btlopez, dependabot[bot], indirected, jaredbmit, mfe7, rosewang2008

gym-collision-avoidance's Issues

Agents' abnormal behaviors

Hi! While exploring the project, I followed the documentation, started from gym_collision_avoidance/experiments/src/example.py, and edited it to change some of the agents' behaviors. I then noticed some abnormal agent behavior, and I wonder whether it comes from flaws in the policies or from a mistake in my own code.

First, I had one agent run the CADRL policy (in orange) while another ran the non-cooperative policy (as a reference obstacle); when I move the goal out to (25, 25), the CADRL agent doesn't move.
[screenshot]
But with the same setup, switching to the GA3C_CADRL policy works.
[screenshot]

However, after moving the goal out to 35, the GA3C_CADRL policy fails too.
[screenshot]

Could you please take a look at this? Thank you very much!
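For reference, a minimal sketch of roughly how such a setup might look (not the exact edited code): the start positions, radius 0.5, and pref_speed 1.0 are made up, and the Agent constructor argument order (start x/y, goal x/y, radius, pref_speed, initial heading, policy, dynamics, sensors, id) is assumed to match the one used in test_cases.py.

import numpy as np
from gym_collision_avoidance.envs.agent import Agent
from gym_collision_avoidance.envs.policies.CADRLPolicy import CADRLPolicy
from gym_collision_avoidance.envs.policies.NonCooperativePolicy import NonCooperativePolicy
from gym_collision_avoidance.envs.dynamics.UnicycleDynamics import UnicycleDynamics
from gym_collision_avoidance.envs.sensors.OtherAgentsStatesSensor import OtherAgentsStatesSensor

# Two agents: a CADRL agent heading to a distant goal at (25, 25),
# and a non-cooperative agent acting as a moving reference obstacle.
agents = [
    Agent(0, 0, 25, 25, 0.5, 1.0, np.pi / 4, CADRLPolicy, UnicycleDynamics,
          [OtherAgentsStatesSensor], 0),
    Agent(5, 5, -5, -5, 0.5, 1.0, -3 * np.pi / 4, NonCooperativePolicy, UnicycleDynamics,
          [OtherAgentsStatesSensor], 1),
]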

Is the function world_coordinates_to_map_indices in Map correct?

def world_coordinates_to_map_indices(self, pos):
    # for a single [px, py] -> [gx, gy]
    gx = int(np.floor(self.origin_coords[0]-pos[1]/self.grid_cell_size))
    gy = int(np.floor(self.origin_coords[1]+pos[0]/self.grid_cell_size))
    grid_coords = np.array([gx, gy])
    in_map = gx >= 0 and gy >= 0 and gx < self.map.shape[0] and gy < self.map.shape[1]
    return grid_coords, in_map
I think it should instead be:

gx = int(np.floor(pos[0]/self.grid_cell_size - self.origin_coords[0]))
gy = int(np.floor(pos[1]/self.grid_cell_size - self.origin_coords[1]))

Is that right, or am I misunderstanding world_coordinates_to_map_indices?
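To make the question concrete, here is a small sketch that evaluates both conventions on one sample point; the origin_coords, grid_cell_size, and map shape below are made-up values, and which convention is "right" depends on whether the grid stores rows image-style (y growing downward) and on where origin_coords is measured.

import numpy as np

# Hypothetical values just for illustration: a 10 m x 10 m map, 0.1 m cells,
# with the world origin at the center of the grid.
grid_cell_size = 0.1
map_shape = (100, 100)
origin_coords = np.array([map_shape[0] / 2, map_shape[1] / 2])

pos = np.array([1.0, 2.0])  # world coordinates [px, py] in meters

# Convention in Map.world_coordinates_to_map_indices:
# row index from -py (image-style), column index from +px.
gx_map = int(np.floor(origin_coords[0] - pos[1] / grid_cell_size))
gy_map = int(np.floor(origin_coords[1] + pos[0] / grid_cell_size))

# Convention proposed in this issue: indices directly proportional to px, py.
gx_alt = int(np.floor(pos[0] / grid_cell_size - origin_coords[0]))
gy_alt = int(np.floor(pos[1] / grid_cell_size - origin_coords[1]))

print((gx_map, gy_map), (gx_alt, gy_alt))  # (30, 60) vs (-40, -30)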

TypeError: 'NoneType' object is not subscriptable

Hello, when I run ./gym_collision_avoidance/experiments/run_trajectory_dataset_creator.sh following https://gym-collision-avoidance.readthedocs.io/en/latest/pages/use_cases.html#collect-a-dataset-of-trajectories, I get the following error:

Traceback (most recent call last):
  File "src/run_trajectory_dataset_creator.py", line 157, in <module>
    main()
  File "src/run_trajectory_dataset_creator.py", line 130, in main
    agents = test_case_fn(**test_case_args)
TypeError: get_testcase_random() got an unexpected keyword argument 'agents_policy'

So I tried replacing agents = test_case_fn(**test_case_args) with agents = test_case_fn(), but then there is another error:

Traceback (most recent call last):
  File "src/run_trajectory_dataset_creator.py", line 158, in <module>
    main()
  File "src/run_trajectory_dataset_creator.py", line 140, in main
    times_to_goal, extra_times_to_goal, collision, all_at_goal, any_stuck, agents = run_episode(env, one_env)
  File "/home/hanbin/catkin_ws/src/CADRL/rl_collision_avoidance/gym-collision-avoidance/gym_collision_avoidance/experiments/src/env_utils.py", line 50, in run_episode
    obs, rew, done, info = env.step([None])
  File "/home/hanbin/catkin_ws/src/CADRL/rl_collision_avoidance/gym-collision-avoidance/baselines/baselines/common/vec_env/vec_env.py", line 108, in step
    return self.step_wait()
  File "/home/hanbin/catkin_ws/src/CADRL/rl_collision_avoidance/gym-collision-avoidance/baselines/baselines/common/vec_env/dummy_vec_env.py", line 51, in step_wait
    obs, self.buf_rews[e], self.buf_dones[e], self.buf_infos[e] = self.envs[e].step(action)
  File "/home/hanbin/catkin_ws/src/CADRL/rl_collision_avoidance/gym-collision-avoidance/venv/lib/python3.6/site-packages/gym/core.py", line 263, in step
    observation, reward, done, info = self.env.step(action)
  File "/home/hanbin/catkin_ws/src/CADRL/rl_collision_avoidance/gym-collision-avoidance/gym_collision_avoidance/envs/collision_avoidance_env.py", line 161, in step
    self._take_action(actions, dt)
  File "/home/hanbin/catkin_ws/src/CADRL/rl_collision_avoidance/gym-collision-avoidance/gym_collision_avoidance/envs/collision_avoidance_env.py", line 245, in _take_action
    all_actions[agent_index, :] = agent.policy.external_action_to_action(agent, actions[agent_index])
TypeError: 'NoneType' object is not subscriptable

How can I solve this? Thank you very much!

AssertionError while running the Minimum Working Example

Hello, I'm having trouble executing example.py:
AssertionError: The obs returned by the step() method observation keys is not same as the observation space keys, obs keys: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18], space keys: ['is_learning', 'num_other_agents', 'dist_to_goal', 'heading_ego_frame', 'pref_speed', 'radius', 'other_agents_states']
Looking forward to your reply, thanks!

Error during execution of the initial example

Hello, I'm having trouble executing src/example.py (using Python 3.6.15 and macOS Ventura 13.3.1):

Traceback (most recent call last):
  File "src/example.py", line 58, in <module>
    main()
  File "src/example.py", line 48, in main
    obs, rewards, game_over, which_agents_done = env.step(actions)
  File "/omittedownpath/gym-collision-avoidance/venv/lib/python3.6/site-packages/gym/wrappers/order_enforcing.py", line 37, in step
    return self.env.step(action)
  File "/omittedownpath/gym-collision-avoidance/venv/lib/python3.6/site-packages/gym/wrappers/env_checker.py", line 37, in step
    return env_step_passive_checker(self.env, action)
  File "/omittedownpath/gym-collision-avoidance/venv/lib/python3.6/site-packages/gym/utils/passive_env_checker.py", line 246, in env_step_passive_checker
    check_obs(obs, env.observation_space, "step")
  File "/omittedownpath/gym-collision-avoidance/venv/lib/python3.6/site-packages/gym/utils/passive_env_checker.py", line 159, in check_obs
    ), f"{pre} observation keys is not same as the observation space keys, obs keys: {list(obs.keys())}, space keys: {list(observation_space.spaces.keys())}"
AssertionError: The obs returned by the `step()` method observation keys is not same as the observation space keys, obs keys: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18], space keys: ['is_learning', 'num_other_agents', 'dist_to_goal', 'heading_ego_frame', 'pref_speed', 'radius', 'other_agents_states']

Could anyone help? I ran the example inside the venv created by running install.sh.
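One possible workaround (a sketch, not an official fix): since the traceback shows gym's OrderEnforcing/PassiveEnvChecker wrappers added by gym.make, the passive checker can be bypassed when creating the environment. The env id "CollisionAvoidance-v0" is the one this repo registers; that importing the package triggers the registration is an assumption, and skipping the checker does not change how the env builds its observations.

import gym
import gym_collision_avoidance.envs  # assumed to register "CollisionAvoidance-v0"

# disable_env_checker exists in gym >= 0.24, the versions that add the passive
# checker raising this assertion in the first place.
env = gym.make("CollisionAvoidance-v0", disable_env_checker=True)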

ModuleNotFoundError: No module named 'scipy' when running example.sh

Hi, when trying to run the Minimum Working Example, I got the error: No module named 'scipy'. However, I have already installed scipy/scipy3-python using both pip and apt-get. Could this be because the code isn't fully compatible with newer versions of Python?

The problem showed up while I was using Python 3.10 on Ubuntu; I tried downgrading to Python 3.9, but the issue persists.

Could you please help? Thank you so much!

[screenshot: Scipy_bug]
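One quick way to narrow this down (just a sketch): check which interpreter the example actually runs under, since scipy installed via the system pip/apt-get may live outside the venv that example.sh activates. Run this with the project's venv active:

import sys

# Shows which interpreter is running; if this is not the project's venv,
# the scipy installed with pip/apt-get lives in a different environment.
print(sys.executable)

import scipy  # raises ModuleNotFoundError if scipy is missing from *this* environment
print(scipy.__version__)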

How Can I Import a Customized Map?

Hi! Sorry for the continuous questions! While exploring the project, I wanted to import customized maps. I see that the world_maps folder contains three maps, as well as Map.py, which sets up a map. But then I realized that the Map class is only imported by collision_avoidance_env.py, and the CollisionAvoidanceEnv class isn't subsequently imported by any of the experiment Python files such as example.py and run_full_test_suite.py. So where can I edit the map, say to import 001.png? Thank you very much for your time!

ModuleNotFoundError: No module named 'rvo2'

Hello, author. I initialized the submodules and installed the dependencies and src code according to https://gym-collision-avoidance.readthedocs.io/en/latest/pages/install.html, then ran ./example.sh and got this error:
Entered virtualenv.

Running example python script

Traceback (most recent call last):
  File "src/example.py", line 6, in <module>
    from gym_collision_avoidance.envs import test_cases as tc
  File "/home/hb/catkin_ws/src/rl_ca/gym-collision-avoidance/gym_collision_avoidance/envs/test_cases.py", line 21, in <module>
    from gym_collision_avoidance.envs.policies.RVOPolicy import RVOPolicy
  File "/home/hb/catkin_ws/src/rl_ca/gym-collision-avoidance/gym_collision_avoidance/envs/policies/RVOPolicy.py", line 5, in <module>
    import rvo2
ModuleNotFoundError: No module named 'rvo2'
How can I solve this?
Thanks!

Loading weight problem

Dear Mr. Everett,
I have set up your collision avoidance system on Windows 10 and can run example.py, but when running nn_navigation_value_multi.py the file 4_agents_dataset_value_train.p is not found. How do I generate this .p file? I hope you can reply or give me some advice despite your busy schedule. Thank you again.
Yours sincerely

Visualization settings for the color and size of the agent, etc.

Hello, Mr. Everett,
I would like to know where the visualization script is in the current project so that I can change colors, etc.
Also, is there no Python file in the current project that I can use directly for training? I saw the instructions for training a new policy in your description, but I have just started college and am not yet able to program it myself; perhaps you could give me some advice on how to train a new policy.
Looking forward to your reply, thanks.

Code request

Hello, Mr. Michael Everett!
I'm a graduate student interested in your articles on multi-agent robot navigation, particularly the study of Socially Aware Motion Planning with Deep Reinforcement Learning. However, I cannot download the data you used from my school. I hope you can provide the code for this article and send it to me. I wish you pleasant studies!
Thanks

Training source

Thank you for your contributions to DRL for mobile robots.

I'm interested in DRL and mobile robots; I have already read your work and want to train a new policy. But when I checked your open-source code, the training source didn't seem to exist, perhaps for licensing reasons. If it is possible to add example training code, such as for the default CADRL, it would be really helpful. Just reading the docs is not enough to implement the training code myself.

Regards.

  • I want to get the 'agents_dataset_value_train.p' dataset in order to train the model. Could I receive the files or learn how they are generated?

How to run 'run_trajectory_dataset_creator.sh' faster

Hello, Mr. Everett.
I have two questions.
1. How can I run run_trajectory_dataset_creator.sh faster? I tried running it with 4 agents and the pretrained CADRL policy, but it creates only about 55 trajectories per minute, and I need at least three million.
I tried commenting out line 164, rewards = self._compute_rewards(), in collision_avoidance_env.py to speed it up, but that only got me to about 60 trajectories per minute, which is still very slow. (See the sketch after these questions.)

2. How can I train rl_collision_avoidance (GA3C-CADRL) on a graphics card?
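On question 1, one option (purely a sketch, not something the repo provides) is to run several independent copies of the dataset-creation loop in parallel worker processes and merge their outputs afterwards; run_dataset_chunk below is a hypothetical stand-in for whatever per-worker entry point gets factored out of run_trajectory_dataset_creator.py.

import multiprocessing as mp

def run_dataset_chunk(worker_id):
    # Hypothetical per-worker entry point: give each worker its own random seed
    # and output file, then run the same episode loop as
    # run_trajectory_dataset_creator.py and save its share of the trajectories.
    pass

if __name__ == "__main__":
    num_workers = 8  # trajectories/minute should scale roughly with the core count
    with mp.Pool(num_workers) as pool:
        pool.map(run_dataset_chunk, range(num_workers))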

Problem with pip during installation

Hello,

I am a student at the University of Michigan currently conducting research on multi-agent collision avoidance. I am trying to install the package through Anaconda; however, I receive the following error:

pip install gym_collision_avoidance

ERROR: Could not find a version that satisfies the requirement gym_collision_avoidance (from versions: none)
ERROR: No matching distribution found for gym_collision_avoidance

Thank you!

The PPOCADRLPolicy could not load the mfe_network?

Hello,
I want to use the PPOCADRLPolicy, but the baselines package does not have mfe_network. Is there another baselines package, or where can I find the training and test code? Thanks.

    from gym_collision_avoidance.envs.policies.PPOCADRLPolicy import PPOCADRLPolicy
  File "/home/cjzhang/workspace/robot/YaoGuang/rl_collision_avoidance_syou/gym_collision_avoidance/envs/policies/PPOCADRLPolicy.py", line 6, in <module>
    from baselines.ppo2.mfe_network import mfe_network
ModuleNotFoundError: No module named 'baselines.ppo2.mfe_network'

env.step(action) list size

I'm attempting to train a new policy in the "CollisionAvoidance-v0" env using the A2C model from stable-baselines.

Following documentation instructions, I created the following files:

  • LearningPolicy file, LearningPolicyA2C.py:
# imports added for completeness (paths assumed to match the repo's policy modules)
import numpy as np
from gym_collision_avoidance.envs.policies.LearningPolicy import LearningPolicy

class LearningPolicyA2C(LearningPolicy):
    """ The A2C-CADRL policy while it's still being trained
    """
    def __init__(self):
        LearningPolicy.__init__(self)
        # self.possible_actions = network.Actions()

    def external_action_to_action(self, agent, external_action):
        """ Convert the discrete external_action into an action for this environment using properties about the agent.

        Args:
            agent (:class:`~gym_collision_avoidance.envs.agent.Agent`): the agent who has this policy
            external_action (int): discrete action between 0-11 directly from the network output # NOT A DISCRETE ACTION

        Returns:
            [speed, heading_change] command

        """
        heading_change = agent.max_heading_change*(2.*external_action[1] - 1.)
        speed = agent.pref_speed * external_action[0]
        actions = np.array([speed, heading_change])
        # raw_action = self.possible_actions.actions[int(external_action)]
        # action = np.array([agent.pref_speed*raw_action[0], raw_action[1]])
        return actions
  • a sub-class Train added to the Config.py file (with the new policy imported into the policy_dict at the top of test_cases.py):
class Train (Config):
    def __init__(self):
        self.MAX_NUM_AGENTS_IN_ENVIRONMENT = 2
        self.MAX_NUM_AGENTS_TO_SIM = 2
        Config.__init__(self)
        self.TRAIN_SINGLE_AGENT = True
        self.TEST_CASE_ARGS['policy_to_ensure'] = 'learning_A2C'
        self.TEST_CASE_ARGS['policies'] = ['learning_A2C', 'RVO']
        self.TEST_CASE_ARGS['policy_distr'] = [0.5, 0.5] # Added to prevent error 'assert(len(policies)==len(policy_distr))' in test_cases.py, occurs during env.reset()

  • a training script, trainA2C.py

I've had trouble implementing model.learn() from stable-baselines, so in the meantime I am simply implementing a random agent where the agent samples a possible action:

import os
os.environ['GYM_CONFIG_CLASS'] = 'Train'

import gym
from gym_collision_avoidance.envs import Config
import gym_collision_avoidance.envs.test_cases as tc
from gym_collision_avoidance.experiments.src.env_utils import create_env
from stable_baselines.common.policies import MlpPolicy
#from stable_baselines.common import make_vec_env
from stable_baselines import A2C
from stable_baselines.common.env_checker import check_env


# env: a VecEnv wrapper around the CollisionAvoidanceEnv
# one_env: an actual CollisionAvoidanceEnv class (the unwrapped version of the first env in the VecEnv)
env, one_env = create_env()

# check_env(env, warn=True)

model = A2C(MlpPolicy, env, verbose=1)
# model.learn(total_timesteps=1000)

# The reset method is called at the beginning of an episode
obs = env.reset()

num_episodes = 1000

for i in range(num_episodes):
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()

When I attempt to debug the code, the following error occurs:

action = env.action_space.sample() produces a (2,) array of actions as expected. However, during env.step(action), the function self._take_action(actions, dt) is called in collision_avoidance_env.py, where I run into an error:

for agent_index, agent in enumerate(self.agents):
    if agent.is_done:
        continue
    elif agent.policy.is_external:
        all_actions[agent_index, :] = agent.policy.external_action_to_action(agent, actions[agent_index])
    else:
        dict_obs = self.observation[agent_index]
        all_actions[agent_index, :] = agent.policy.find_next_action(dict_obs, self.agents, agent_index)

The agent.policy.external_action_to_action() method receives actions[agent_index] as its action argument. However, my actions variable is a (2,) array containing the heading and speed for my single agent, so only the delta heading angle is passed as an argument (and there would be an index error if more than 2 agents were defined in class Train() of config.py).

Am I implementing part of the code incorrectly, or should I change the _take_action(actions, dt) method in collision_avoidance_env.py so that it passes the full (2,) array of actions to the external_action_to_action() method? (See below for the change.)

            elif agent.policy.is_external:
                all_actions[agent_index, :] = agent.policy.external_action_to_action(agent, actions)
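For what it's worth, a hedged sketch of the first option (leaving _take_action unchanged and reshaping the sampled action instead), assuming agent 0 is the learning/external agent and that the VecEnv returned by create_env() expects one action entry per sub-environment:

import numpy as np

num_agents = 2  # matches MAX_NUM_AGENTS_IN_ENVIRONMENT in the Train config above

single_action = env.action_space.sample()  # (2,) array: [speed multiplier, heading change]
all_actions = np.zeros((num_agents, 2))
all_actions[0, :] = single_action          # row 0: the external/learning agent
# Rows for internal-policy agents (e.g. RVO) should be ignored by _take_action,
# since those agents compute their own actions via find_next_action().

obs, reward, done, info = env.step([all_actions])  # one entry per sub-env in the VecEnv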

Train new policy from scratch

Hello @mfe7,
These days I am trying to train a new RL policy by following the documentation. I am not able to run training because I don't know what to use to execute the following line:

rl_action = model.sample(obs)

How should I replace model? Is there an example file that I can follow to achieve this goal? Please, I need help because I am working on a course project at my university.
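A hypothetical stand-in while there is no trained network yet: anything exposing a sample(obs) method that returns an action of the form your LearningPolicy expects will let the training loop from the docs run end to end; env and obs below are assumed to come from that loop.

# RandomModel is made up for illustration, not part of the repo.
class RandomModel:
    def __init__(self, action_space):
        self.action_space = action_space

    def sample(self, obs):
        # Ignore the observation and return a random action from the env's action space.
        return self.action_space.sample()

model = RandomModel(env.action_space)  # assumes `env` from the documentation's loop
rl_action = model.sample(obs)          # drop-in for the line quoted above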
