
Comments (12)

reso1 commented on August 22, 2024

Cool, thanks for your kind help!

BTW, if you have any plan to formally integrate MARL into IsaacGym using rl_games, I'd be very glad to help :)


Denys88 commented on August 22, 2024

rl_games supports MARL.
You can use independent PPO or PPO with a central value.
The StarCraft multi-agent env is a good example: https://github.com/Denys88/rl_games/blob/master/rl_games/envs/smac_env.py
In your env you just need to implement this function:

    def get_number_of_agents(self):
        return self.n_agents 

If you want to use a central value you need to set
use_central_value = True
and return a dict:

{
'obs' : obs,
'state' : state
}

for observations.
In this case the obs shape will be (envs_count * agents_count, *) and the state shape will be (envs_count, *).
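
As a minimal sketch (array names and sizes here are illustrative, not taken from rl_games), the observations returned by such an env would be shaped like this:

    import numpy as np

    num_envs, n_agents, obs_dim, state_dim = 8, 4, 24, 60  # illustrative sizes

    # one row per (env, agent) pair for the actor, one row per env for the central value
    per_agent_obs = np.zeros((num_envs * n_agents, obs_dim), dtype=np.float32)
    global_state = np.zeros((num_envs, state_dim), dtype=np.float32)

    obs_dict = {'obs': per_agent_obs, 'state': global_state}
    assert obs_dict['obs'].shape == (num_envs * n_agents, obs_dim)
    assert obs_dict['state'].shape == (num_envs, state_dim)
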
You can take a look at a config example for this use case:
https://github.com/Denys88/rl_games/blob/master/rl_games/configs/smac/3m.yaml

I think you could try to create a simple MA env from Ant, for example where every leg is an agent :) I can help to adapt and test it with rl_games.
Thanks,
Denys


reso1 commented on August 22, 2024

Thanks for your reply and kind advice! I will try to implement this idea in IsaacGym and share further results.

I've read the rl_games MARL code for the SMAC env, and it does support MARL. But I have two further questions about the MARL algorithm implementation in rl_games, based on my reading and understanding of the code.

  1. Is the multi-agent PPO implementation in rl_games identical to the MAPPO algorithm in this paper?
  2. Can the current rl_games MARL algorithm support heterogeneous agents with different action/observation spaces (i.e. different actor/critic nets)?

Best,
Jingtao


Denys88 commented on August 22, 2024
  1. Almost, yes. The value normalization code is a little different, and I didn't apply death masking. It should be easy to make it exactly the same.

  2. My goal was to achieve maximum performance on GPU, so I didn't add direct support for cases like this. But if you are using a discrete action space you can provide different action masks to disable some actions per agent (see the sketch below). Right now only a different obs space for the agent and the critic is supported.
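
Below is a minimal sketch (class, names, and sizes are illustrative, not from this thread) of per-agent action masking for heterogeneous agents that share one padded discrete action space. The get_action_mask method name follows the SMAC example env; whether rl_games actually queries it depends on enabling action masks in the training config.

    import numpy as np

    class MaskedMultiAgentEnv:
        """Toy skeleton showing only the MARL / action-mask hooks."""

        def __init__(self, n_agents=2, n_actions=6):
            self.n_agents = n_agents
            self.n_actions = n_actions  # union of every agent's actions

        def get_number_of_agents(self):
            return self.n_agents

        def get_action_mask(self):
            # One row per agent; True means the action is available to that agent.
            mask = np.ones((self.n_agents, self.n_actions), dtype=bool)
            mask[0, 4:] = False  # agent 0 only uses the first 4 actions
            mask[1, :2] = False  # agent 1 cannot use the first 2 actions
            return mask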


reso1 commented on August 22, 2024

Thanks a lot for the explanations! I've implemented the MA Ant env (each leg corresponds to an agent); however, the training result is quite bad, so maybe MAPPO is not that suitable for multi-joint robot control? Anyway, thanks again!


Denys88 commented on August 22, 2024

Could you try to use a central value with the whole observation?
But I believe this case should work fine even with independent PPO. Can you share your env?


reso1 commented on August 22, 2024

Hi Denys, I actually used a central value for the task but still did not get good results. Maybe there are some problems in my code; I'd be very happy if you could help check it.

I've forked IsaacGymEnvs and committed my MA_Ant env; you can check this link and test it. The registered env task name is MA_Ant.

Apart from the new env class ma_ant, there are also two changes to the original IsaacGymEnvs repo:

  1. The ma_ant class inherits from the ma_vec_task class, where only the allocate_buffers function is modified to adapt to the buffer shape changes.
  2. In the get_env_info function of the RLGPUEnv class, the number of agents and use_central_value are added to the info dict (roughly as sketched below).
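
A rough sketch of that second change (the attribute and key names here are assumptions based on the description above, not verified against the fork):

    def get_env_info(self):
        info = {
            'action_space': self.env.action_space,
            'observation_space': self.env.observation_space,
        }
        # Expose the agent count so rl_games runs its multi-agent code path.
        if hasattr(self.env, 'get_number_of_agents'):
            info['agents'] = self.env.get_number_of_agents()
        # Flag central-value usage and expose the global state space (assumed attribute).
        if getattr(self.env, 'use_central_value', False):
            info['use_central_value'] = True
            info['state_space'] = self.env.state_space
        return info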


Denys88 commented on August 22, 2024

Thanks, I'll play with it tomorrow evening.
Just wondering, what best score did you achieve?


reso1 commented on August 22, 2024

The best reward of MA_Ant is below 10 (the Ant env reaches around 4k), but I did not fine-tune the training parameters used in MA_Ant, so I don't know whether there are problems in my code or it's just not suitable to treat each leg as an agent in MAPPO.


Denys88 commented on August 22, 2024

@reso1 I've found a bug in your code:
expand_env_ids was buggy, so you actually always returned done=True for 3 of 4 legs.
I rewrote it this way to make sure it works:

@torch.jit.script
def expand_env_ids(env_ids, n_agents):
    # type: (Tensor, int) -> Tensor
    # Expand per-env indices into per-agent indices; the strides below are
    # hard-coded for the 4 ant legs (n_agents == 4).
    device = env_ids.device
    agent_env_ids = torch.zeros((n_agents * len(env_ids)), device=device, dtype=torch.long)
    agent_env_ids[0::4] = env_ids * n_agents + 0
    agent_env_ids[1::4] = env_ids * n_agents + 1
    agent_env_ids[2::4] = env_ids * n_agents + 2
    agent_env_ids[3::4] = env_ids * n_agents + 3
    return agent_env_ids
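
As a quick sanity check (inputs assumed for illustration), expanding two env indices with four agents per env yields one index per leg:

    env_ids = torch.tensor([0, 2], dtype=torch.long)
    expand_env_ids(env_ids, 4)  # tensor([ 0,  1,  2,  3,  8,  9, 10, 11])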

And I added one more change to the compute_ant_observations:

    # Append a one-hot leg id to each agent's observation so the legs do not
    # see exactly the same input.
    obs_idxs = torch.eye(4, dtype=torch.float32, device=self.device)
    obs_idxs = obs_idxs.repeat(state.size()[0], 1)   # (num_envs * 4, 4)
    obs = state.repeat_interleave(4, dim=0)          # repeat each env state once per leg
    obs = torch.cat([obs, obs_idxs], dim=-1)         # (num_envs * 4, state_dim + 4)

So the legs see slightly different observations; I think it is not a must for your case, but without it the maximum reward was 2000.
And I achieved a 4k+ reward in less than 2 minutes without using a central value (anyway, all legs see the same observations).
As you can see, the movement is not as good and we need some tuning to make it perfect, but it works somehow.
[GIF of the trained multi-agent ant walking]


reso1 commented on August 22, 2024


Frank-Dz commented on August 22, 2024

Hi Bryan, the following is the pointer to the MA ant repo; you can check the closed issue in IsaacGymEnvs where I mentioned details about this environment: https://github.com/reso1/IsaacGymEnvs Enjoy!

Bryan Chen wrote on Sat, Dec 18, 2021:

    Hi @reso1 (https://github.com/reso1), could you please give me some pointers if I wanted to fork your MA ant repo to add a simple mat to the environment? In particular I would like to recreate the RoboSumo environment (https://github.com/openai/robosumo). Thank you!

Hi ~ Are you going to share the repo? I am excited to play with it:D
