xinjinghao / deep-reinforcement-learning-algorithms-with-pytorch

Clean, Robust, and Unified PyTorch implementation of popular Deep Reinforcement Learning (DRL) algorithms (Q-learning, Duel DDQN, PER, C51, Noisy DQN, PPO, DDPG, TD3, SAC, ASL)


deep-reinforcement-learning-algorithms-with-pytorch's Introduction

Clean, Robust, and Unified PyTorch implementation of popular DRL Algorithms


0. Star History


1. Dependencies

This repository uses the following Python dependencies unless explicitly stated otherwise:

gymnasium==0.29.1
numpy==1.26.1
pytorch==2.1.0

python==3.11.5

2. How to use my code

Enter the folder of the algorithm you want to use, and run main.py to train from scratch:

python main.py

For more details, please check the README.md file in the corresponding algorithm folder.
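All of the algorithms here follow the standard gymnasium-style interaction loop. Below is a minimal sketch of that loop using a toy stand-in environment, not the repo's actual code; ToyEnv, the 10-step episode cap, and the random policy are all illustrative assumptions:

```python
import random

class ToyEnv:
    """Tiny stand-in for a gymnasium-style environment (illustrative only)."""
    def reset(self):
        self.t = 0
        return 0.0, {}                      # observation, info

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 1 else 0.0
        terminated = self.t >= 10           # toy episode ends after 10 steps
        return float(self.t), reward, terminated, False, {}

env = ToyEnv()
obs, info = env.reset()
total_reward = 0.0
done = False
while not done:
    action = random.choice([0, 1])          # placeholder for the agent's policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print(f"episode return: {total_reward}")
```

In the real repo, env would be created via gymnasium and the random choice would be replaced by the agent's action selection.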


3. Separate links to the code


4. Recommended Resources for DRL

4.1 Simulation Environments:

  • gym and gymnasium (lightweight, standard environments for DRL; easy to start with; slow)

  • Isaac Gym (NVIDIA's physics simulation environment; GPU-accelerated; super fast)

  • Sparrow (lightweight simulator for mobile robots; DRL-friendly)

  • ROS (popular and comprehensive robotics framework with physics simulation; heavy and slow)

  • Webots (popular physics simulator for robots; faster than ROS; less realistic)

4.2 Books:

4.3 Online Courses:

4.4 Blogs:


5. Important Papers

DQN: Mnih V, Kavukcuoglu K, Silver D, et al. Human-level control through deep reinforcement learning[J]. Nature, 2015, 518(7540): 529-533.

Double DQN: Van Hasselt H, Guez A, Silver D. Deep reinforcement learning with double q-learning[C]//Proceedings of the AAAI conference on artificial intelligence. 2016, 30(1).

Duel DQN: Wang Z, Schaul T, Hessel M, et al. Dueling network architectures for deep reinforcement learning[C]//International conference on machine learning. PMLR, 2016.

PER: Schaul T, Quan J, Antonoglou I, et al. Prioritized experience replay[J]. arXiv preprint arXiv:1511.05952, 2015.

C51: Bellemare M G, Dabney W, Munos R. A distributional perspective on reinforcement learning[C]//International conference on machine learning. PMLR, 2017: 449-458.

NoisyNet DQN: Fortunato M, Azar M G, Piot B, et al. Noisy networks for exploration[J]. arXiv preprint arXiv:1706.10295, 2017.

PPO: Schulman J, Wolski F, Dhariwal P, et al. Proximal policy optimization algorithms[J]. arXiv preprint arXiv:1707.06347, 2017.

DDPG: Lillicrap T P, Hunt J J, Pritzel A, et al. Continuous control with deep reinforcement learning[J]. arXiv preprint arXiv:1509.02971, 2015.

TD3: Fujimoto S, Hoof H, Meger D. Addressing function approximation error in actor-critic methods[C]//International conference on machine learning. PMLR, 2018: 1587-1596.

SAC: Haarnoja T, Zhou A, Abbeel P, et al. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor[C]//International conference on machine learning. PMLR, 2018: 1861-1870.

ASL: Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity
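The value-based methods above (DQN through C51) all build on the same temporal-difference target from tabular Q-learning. A minimal self-contained sketch of that update; the two-state toy problem, learning rate, and action set are illustrative assumptions, not taken from any of the papers or from this repo:

```python
import random
from collections import defaultdict

def q_learning_update(Q, s, a, r, s_next, done, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: Q(s,a) += alpha * (TD target - Q(s,a))."""
    target = r if done else r + gamma * max(Q[(s_next, b)] for b in (0, 1))
    Q[(s, a)] += alpha * (target - Q[(s, a)])

Q = defaultdict(float)
random.seed(0)
# Toy problem: in state 0, action 1 yields reward 1 and ends the episode.
for _ in range(500):
    a = random.choice([0, 1])
    r, done = (1.0, True) if a == 1 else (0.0, False)
    q_learning_update(Q, 0, a, r, 0, done)

print(Q[(0, 1)])   # converges toward 1.0, the true value of the rewarding action
```

DQN replaces the table with a neural network and a replay buffer; Double DQN changes only the target, selecting the argmax action with the online network but evaluating it with the target network.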


6. Training Curves of my Code:

(Training-curve figures: CartPole and LunarLander; Pong and Enduro; Pendulum and LunarLanderContinuous.)

deep-reinforcement-learning-algorithms-with-pytorch's People

Contributors: xinjinghao
deep-reinforcement-learning-algorithms-with-pytorch's Issues

Issue of 2.1

Hello, this issue is about folder 2.1. Since I ran the program on the CPU rather than CUDA, I changed the default device to CPU in main.py, and the model trained successfully.
But when loading the saved model, I get: "RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU." I don't know how to fix it.
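For reference, the error message itself points at the fix: pass map_location to torch.load so tensors saved on a CUDA device are remapped to the CPU. A minimal self-contained sketch; the in-memory checkpoint is only for illustration, and in practice you would pass the path of the saved .pth file instead:

```python
import io
import torch

# Simulate a saved checkpoint (stands in for the repo's saved .pth file).
buffer = io.BytesIO()
torch.save({"weights": torch.ones(3)}, buffer)
buffer.seek(0)

# The fix: remap all storages to CPU when CUDA is unavailable.
state = torch.load(buffer, map_location=torch.device("cpu"))
print(state["weights"].device)  # cpu
```

Applying the same map_location argument at the point where the repo loads its model file resolves the RuntimeError quoted above.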

Some basic issues

Hello, I am a beginner in reinforcement learning. Thank you very much for providing such an accessible library. Since the number of steps in each of my episodes is not fixed, I would like to know how to train and record on a per-episode basis. If that is not easy to implement, would training on a per-step basis cause any problems, and what is being recorded in that case? Is it the average over several episodes, or something else? Thank you!
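One common pattern for the question above: keep a fixed total-step training budget, but accumulate the current episode's reward and record it whenever the episode ends, so the logged curve is per-episode return even though training ticks per step. A hedged sketch with a toy random environment; nothing here is taken from the repo's actual logging code:

```python
import random

random.seed(0)
episode_returns = []        # one entry per completed episode
ep_return = 0.0
done = True                 # forces a "reset" on the first step

for step in range(1000):    # fixed total-step budget
    if done:
        ep_return = 0.0     # env.reset() would go here
    reward = random.random()             # toy reward in [0, 1)
    done = random.random() < 0.05        # toy termination: ~20-step episodes
    ep_return += reward
    if done:
        episode_returns.append(ep_return)
# Note: the final partial episode is simply not logged.

print(len(episode_returns), sum(episode_returns) / len(episode_returns))
```

Logging per step instead is also fine; in that case what is usually recorded is a running or windowed average of recent episode returns, evaluated at fixed step intervals.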

Related references

Hello,

Thank you very much for sharing. If possible, could you also attach the articles or books you referenced?

Thanks

Looking for cooperation

Dear XinJingHao,

I hope this message finds you well. I am a PhD student from SJTU. A friend of mine recommended this repository to me. I noticed similarities between the tutorial I am currently developing and yours.

I am in the process of creating a Reinforcement Learning (RL) tutorial that aims to provide a comprehensive resource with both code examples and in-depth mechanism explanations. You can find the initial codebase for my tutorial at this repository: https://github.com/SCP-CN-001/RL101. At present, it appears that both of us have completed the coding segment of our respective tutorials.

I am reaching out to gauge your interest in collaborating on the documentation aspect of these tutorials. If you find merit in the idea of combining our efforts to enhance the educational value of our materials, I believe we can create a more comprehensive and impactful resource.

If this proposal intrigues you, please feel free to reach out to me via the email address provided on my GitHub profile: https://github.com/SCP-CN-001.

Looking forward to the possibility of collaborating with you on this endeavor.
