Open-source simulation engine for robotic general intelligence (RGI)

Home Page: https://nervosys.github.io/AutonomySim/

License: Apache License 2.0




AutonomySim logo

The simulation engine for autonomous systems

Announcements

  • Windows now has separate Batch/Command and PowerShell build systems.
  • The autonomysim Python package has undergone a complete overhaul! AutonomyLib is next.
  • A new documentation system has been rolled out that covers the Python and C++ APIs.
  • Windows: all build scripts have been translated from Command/Batch to PowerShell. Unreal Engine still generates Batch files and we are still ironing out the bugs.
  • Unreal Engine version 5.0 brought powerful new features including Nanite and Lumen, while deprecating support for the PhysX backend.
  • AutonomySim supports Unreal Engine version 5.0.3 and above; the master branch currently supports versions up to 5.3. For version 4.27, you can use the ue4.27 branch.
  • Support for Unity Engine, Gazebo, and ROS1 has been deprecated to focus on Unreal Engine, ROS2, ArduPilot/PX4, qGroundControl, PyTorch, and real-time applications of AutonomyLib via software- and hardware-in-the-loop.
  • We are researching ways to seamlessly interoperate between AutonomySim and Omniverse/IsaacSim. The Omniverse Unreal Engine Connector makes it possible to sync Unreal Engine data with an Omniverse Nucleus server, which can then sync with any Omniverse Connect application, including IsaacSim.
  • Linux: added ROS2 support for Ubuntu 22.04 LTS (Jammy Jellyfish).
  • macOS: Unreal Engine version 5.2 brought native support for Apple/ARM M-series silicon.

For a complete list of changes, view the change log.

Vision

"A central challenge in the branch of artificial intelligence (AI) known as machine learning (ML) is the massive amount of labeled data needed to train supervised models. Datasets for real-world systems are either hand-crafted or automatically labeled using other ML models, introducing biases and errors into data and models, and limiting learning to the offline case. While game engines have long used hardware-accelerated physics engines based on Newtonian dynamics to simulate motion, physics-based rendering (PBR) and related accelerators have made real-time ray-tracing a reality, extending physical realism to the visual domain. Realism is only increasing with the growing use of Earth observation data. For the first time in history, the average user can generate high-fidelity labeled datasets for offline learning or learn physics-based models online. This has revolutioned AI for robotics, where the data and safety requirements are often otherwise intractable." -Dr. Adam Erickson, 2024

Introduction

AutonomySim is a high-fidelity, photorealistic simulator for multi-agent and multi-domain autonomous systems: intelligent robotic systems, or embodied AI, as the field is known in the artificial intelligence (AI) community. AutonomySim is built on Unreal Engine and based on Microsoft AirSim. It is an open-source, cross-platform, modular simulator for AI in robotics that supports software-in-the-loop (SITL) and hardware-in-the-loop (HITL) operational modes for popular flight controllers (e.g., Pixhawk/PX4, APM/ArduPilot). Future support is planned for ground control software (GCS), including qGroundControl. AutonomySim is developed as an Unreal Engine plugin that can be dropped into any Unreal environment or downloaded from the Epic Marketplace. The goal of AutonomySim is to provide physically realistic multi-modal simulations with popular built-in libraries and application programming interfaces (APIs) for the development of new sensing, communication, actuation, and AI systems.

AutonomySim provides a foundation for building high-fidelity simulations of a wide variety of autonomous systems. After an extensive analysis of existing solutions, Nervosys created AutonomySim for its internal product development. We encourage others to use it too! Unlike AirSim and other projects based upon it, we intend to make public any and all improvements to the core software framework. We merely ask that you share your improvements in return, although you are not obligated to in any way (AutonomySim uses a permissive license that supports commercialization). Together, we can create the ideal open-source simulation platform for intelligent robotic systems.

Join us in developing the most advanced simulator for intelligent robotic systems!

Supported Operating Systems

Below is a list of officially supported operating systems. We recommend using Windows until Linux support improves.

Windows

  • Windows 10
  • Windows 11
  • Windows Server 2019 (untested)
  • Windows Server 2022 (untested)

Linux

  • Ubuntu 20.04 LTS (Focal Fossa)
  • Ubuntu 22.04 LTS (Jammy Jellyfish) - Vulkan incompatibility, Docker recommended
  • Ubuntu Server 22.04 LTS (untested)
  • Ubuntu Core 22 (untested)
  • Botnix 1.0 (in development)

macOS

Note

Unreal Engine versions 5.2 and above provide native support for Apple/ARM M-series silicon.

  • macOS 11 (Big Sur)
  • macOS 12 (Monterey)
  • macOS 13 (Ventura)
  • macOS 14 (Sonoma)

Getting Started

Coming soon.

Documentation

For details on all aspects of AutonomySim, view the documentation.

For an overview of the simulation architecture, see the below figure.

architecture
Overview of the simulation architecture from Shah et al. (2017).

Demonstrations

Coming soon.

Operational Modes

Mirroring real-world robotic systems, AutonomySim supports three different operational modes:

  1. Human operation
  2. Machine operation
  3. Hybrid human-machine operation

Human Operation

If you have a wired or remote controller, you can manually control vehicles in the simulator, as shown below. For ground vehicles, you can use the arrow keys for control inputs (i.e., steering, accelerating, decelerating). See more details here.

aerial vehicle ground vehicle

Machine Operation

AutonomySim exposes APIs for programmatic interaction with the simulated vehicles and environment. These APIs can be used to control vehicles and the environment (e.g., weather), generate imagery, audio, or video, record control inputs along with vehicle and environment state, et cetera. The APIs are exposed through a remote procedure call (RPC) interface and are accessible from a variety of languages, including C++, Python, C#, and Java.
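Below is a minimal Python sketch of machine operation over the RPC interface. It assumes an AirSim-style client API (e.g., a MultirotorClient class with enableApiControl, takeoffAsync, and moveToPositionAsync methods); the exact names in the overhauled autonomysim package may differ.

```python
# Minimal sketch of machine (API) operation over the RPC interface.
# Assumes an AirSim-style client; class and method names may differ in the
# overhauled autonomysim Python package.
import autonomysim as asim

client = asim.MultirotorClient()   # connect to the simulator over RPC (localhost by default)
client.confirmConnection()
client.enableApiControl(True)      # hand control from the human operator to the program
client.armDisarm(True)

client.takeoffAsync().join()       # async calls return futures; join() blocks until done
client.moveToPositionAsync(10, 0, -5, 3).join()  # x, y, z (NED, meters), velocity (m/s)

state = client.getMultirotorState()  # query vehicle state for logging or feedback control
print(state.kinematics_estimated.position)
```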

The APIs are also available as part of a separate, independent, cross-platform library, so that they can be deployed on a real-time embedded system on your vehicle. That way, you can write and test your code in simulation, where mistakes are relatively cheap, before deploying it to real-world systems. Moreover, a core focal area of AutonomySim is the development of simulation-to-real (sim2real) domain adaptation AI models, a form of transfer learning. These metamodels map from models of simulations to models of real-world systems, leveraging the universal function approximation abilities of artificial neural networks to implicitly represent real-world processes not explicitly represented in simulations.

Note that you can use the SimMode setting to specify the default vehicle or the new Computer Vision mode, so that you are not prompted each time you start AutonomySim. See this for more details.

Hybrid Human-Machine Operation

Using a form of hardware-in-the-loop (HITL), AutonomySim is capable of operating in hybrid human-machine mode. The classical example is a semi-autonomous aircraft stabilization program, which maps human control inputs (or lack thereof) into optimal control outputs.

Generating Labeled Data for Offline Machine Learning

There are two general approaches to generating labeled data with AutonomySim:

  1. Using the record button manually
  2. Using the APIs programmatically

The first method, using the record button, is the easiest. Simply press the big red button in the lower-right corner to begin recording. This will record the vehicle pose/state and an image for each frame. The data logging code is simple and easy to customize for your application.

record screenshot
Human/manual data recording mode.

The second method, using the APIs, is more precise and repeatable. The APIs give you full control over the how, what, where, and when of data logging, as sketched below.
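The following Python snippet is a rough sketch of API-based recording: it requests imagery and the vehicle pose on each iteration. It again assumes AirSim-style names (simGetImages, ImageRequest, simGetVehiclePose, write_file), which may differ in the current autonomysim package.

```python
# Sketch of programmatic data logging, assuming an AirSim-style client API.
# simGetImages, ImageRequest, and simGetVehiclePose are AirSim-lineage names
# and may differ in the current autonomysim package.
import os
import autonomysim as asim

client = asim.MultirotorClient()
client.confirmConnection()
os.makedirs("dataset", exist_ok=True)

for i in range(100):
    # Request an RGB frame and a depth map from camera "0" in a single call.
    responses = client.simGetImages([
        asim.ImageRequest("0", asim.ImageType.Scene),
        asim.ImageRequest("0", asim.ImageType.DepthPlanar, pixels_as_float=True),
    ])
    pose = client.simGetVehiclePose()  # label each frame with the vehicle pose
    asim.write_file(os.path.join("dataset", f"rgb_{i:05d}.png"),
                    responses[0].image_data_uint8)
    # ... serialize the depth response and pose in your preferred format
```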

Computer Vision Mode

It is also possible to use AutonomySim with vehicles and physics disabled. This is known as Computer Vision Mode and it supports both human and machine control. In this mode, you can use the keyboard or APIs to position cameras in arbitrary poses and collect imagery including depth, disparity, surface normals, or object segmentation masks. As the name implies, this is useful for generating labeled data for learning computer vision models. See this for more details.
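As a rough Python sketch of Computer Vision mode under the same AirSim-style API assumptions (VehicleClient, simSetVehiclePose, simGetImages are assumed names), the camera rig can be re-posed and queried without any vehicle dynamics:

```python
# Sketch of Computer Vision mode: no vehicle dynamics, just a poseable camera rig.
# Assumes AirSim-style names; they may differ in the autonomysim package.
import autonomysim as asim

client = asim.VehicleClient()
client.confirmConnection()

# Place the viewpoint anywhere: position (x, y, z) plus orientation (pitch, roll, yaw).
pose = asim.Pose(asim.Vector3r(5.0, 0.0, -2.0), asim.to_quaternion(0.0, 0.0, 1.57))
client.simSetVehiclePose(pose, ignore_collision=True)

# Collect aligned RGB, depth, and segmentation frames from that viewpoint.
responses = client.simGetImages([
    asim.ImageRequest("0", asim.ImageType.Scene),
    asim.ImageRequest("0", asim.ImageType.DepthPerspective, pixels_as_float=True),
    asim.ImageRequest("0", asim.ImageType.Segmentation),
])
```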

Labeled Data Modalities

The following sensors and data modalities are either available or planned:

  • RGB imagery
  • Depth
  • Disparity
  • Surface normals
  • Object panoptic, semantic, and instance segmentation masks
  • Object bounding boxes (coming soon)
  • Audio (coming soon)
  • Video (coming soon)
  • Short- or long-wavelength infrared imagery (see)
  • Multi- and Hyper-spectral (coming soon)
  • LiDAR (see; GPU acceleration coming soon)
  • RaDAR (coming soon)
  • SoNAR (coming soon)

Autolabeling systems may be added in the future.

Vehicles

Ground

  • Automobile
  • BoxCar (coming soon)
  • ClearPath Husky (coming soon)
  • Pioneer P3DX (coming soon)

Air

  • Quadcopter

Machine Learning Applications

Learning Perception, Communication, Planning, and Control Models

Coming soon.

Imitation or Apprenticeship Learning

Coming soon. An example of recording control inputs and vehicle state for learning control systems.

Neural Radiance Fields

Coming soon. Learning compressed 3-D radiative transfer models.

Large Language Models

Coming soon. An example of using a large language model (LLM) to parse text commands into planning and control inputs for robotic systems. See Eureka.

Learning Surrogate Models or Emulators

Coming soon.

Learning World Models

Coming soon.

Other Applications

Sensor System Development

Coming soon.

Locomotion System Development

Coming soon. An example of learning structure, actuator, and locomotion models. This is useful, for example, for developing robotic systems that are robust to major structural failures, such as the loss of motors or legs.

Communication System Development

Coming soon.

Simulating Specific or General Environments

Coming soon.

Environmental Dynamics

Weather

The weather system supports both human and machine control. Press the F10 key to see the available weather effect options. You can also control the weather using the APIs, as shown here.

weather menu
Weather effects menu.

Press the F1 key to see other available options.
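For API-based weather control, a minimal Python sketch might look like the following. It assumes AirSim-style simEnableWeather and simSetWeatherParameter calls, which may be named differently in the current autonomysim package.

```python
# Sketch of programmatic weather control, assuming an AirSim-style client API.
import autonomysim as asim

client = asim.VehicleClient()
client.confirmConnection()

client.simEnableWeather(True)  # weather effects are typically disabled by default
# Parameter values are normalized to [0, 1].
client.simSetWeatherParameter(asim.WeatherParameter.Rain, 0.75)
client.simSetWeatherParameter(asim.WeatherParameter.Fog, 0.25)
```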

Procedural Terrain Generation

Coming soon.

Tutorials

Videos

Guides

Projects

Join the Community

For updates or answers to your questions, join our GitHub Discussion group here or our Discord channel here.

For information on becoming a contributor, see the following section.

Contributing

Community contributions are strongly encouraged via GitHub Issues and Pull Requests. If you are looking for areas to contribute, please take a look at the open issues. For more information about contributing to the project, please visit the contributing page.

Our GitHub Insights page provides a sense of the project activity.

Project Structure

The AutonomySim repository consists of multiple projects, the core of which is AutonomyLib. Additional projects include DroneServer, DroneShell, HelloCar, HelloDrone, MavLinkCom, Examples, and LogViewer.

The repository also provides wrappers for Unreal Engine, Python, and ROS2, as well as build scripts for Docker and Azure.

The build system uses Visual Studio 2022 on Windows and CMake for cross-platform support. Pre-build scripts prepare each target project for compilation.

For more information, see the following pages:

Current and Past Users

A subset of the organizations, people, and projects that have used AutonomySim or its predecessor, AirSim, are listed here.

If you would like to be featured on this list, please submit a request here.

Roadmap

  • Focus on Unreal Engine, deprecate support for ROS1, Unity, Gazebo
  • Project reorganization and modernization (restructuring, renaming, refactoring, porting, updating)
    • Add support for the latest Unreal Engine version 5.3
    • Add API, RPC support for Rust, deprecate support for Java and C#
    • Update automated tests
  • Add libraries and tools for artificial intelligence (AI)
    • CUDA Toolkit, CuDNN, TensorRT
    • Python, Mojo, PyTorch, JAX, Flax, MLX, OpenCV
    • Generative models: LLaMA 2, Mistral/Mixtral, OpenHermes, SD, LLaVA
    • Robotics foundation models
    • Multi-modal models
    • Interpretability, explainability, and hard bounds or guardrails
    • Safety and cybersecurity
  • Add headless server mode for control via external program, container, virtual machine, or local network
    • Add NVIDIA JetPack and Botnix OS support for software-in-the-loop (SITL)
  • Add the JSBSim flight dynamics model (FDM) plugin for Unreal Engine per Project Antoinette
  • Add flight control software (FCS): BetaFlight, OpenPilot, LibrePilot, dRehmFlight, Flightmare/flightlib
  • Add MavLink-based ground control software (GCS): qGroundControl, Mission Planner, Auterion Mission Control
  • Add self-driving car (i.e., rover) software: openpilot, Autoware, CARLA, Vista, Aslan, OpenPodcar/ROS
  • Add NVIDIA Omniverse IsaacSim/Gym interoperability

Sponsors

  • Nervosys: "Accelerating the development of robotic general intelligence"

Donations

AutonomySim is made possible by Nervosys, NVIDIA, Epic Games, Microsoft, the Linux Foundation, and countless contributors to related projects.

We need your support to ensure the success of AutonomySim development. Reach out to us at [email protected] to learn how you can support this project.

Background

AutonomySim began as an update to the open-source AirSim project, which Microsoft shut down in July 2022 to focus on its closed-source cloud software-as-a-service (SaaS) version. Our first task was to update AirSim to support Unreal Engine 5, which we soon discovered was already in development at other organizations. Unfortunately, these organizations only seemed to be interested in creating closed-source cloud SaaS platforms similar to Microsoft's, which is what had led to the AirSim project being archived. Fearing a repeat of this outcome, we wanted to take the project in a new, open, multi-agent, -domain, and -modal direction. We are not very interested in cloud platforms, which are simply other people's computers, but rather in running AutonomySim in our own secure enclaves. It is, after all, a game engine. We want to see it in all its glory and think you will too.

While Unreal Engine is well-suited to simulating the terrestrial domain due to its classical Newtonian physics engine, the aerial domain is better represented by dedicated flight dynamics models (FDMs). These small models approximate much larger computational fluid dynamics (CFD) models that are too expensive to run in real-time. Thus, it makes little sense to limit AutonomySim to the aerial domain and individual agents, given that multi-agent, -domain, and -modal simulation capabilities are needed to operate in complex real-world systems. We hope that we, as a community, can bring the marine and aerial domains to parity with dedicated simulators.

Are you a fluid dynamics expert? We would love to have your input.

Comparison with Related Projects

Below is a comparison with AirSim and its other forks.

| Project       | Origin   | Year | New Features      | Updated | Framework     | Server        | SaaS            | Organization               |
|---------------|----------|------|-------------------|---------|---------------|---------------|-----------------|----------------------------|
| AirSim        | original | 2017 | -                 | 2022    | open-source   | closed-source | Project AirSim  | Microsoft                  |
| Cosys-AirSim  | fork     | 2020 | Sensors, Matlab   | 2024    | open-source   | -             | -               | Cosys Lab                  |
| Colosseum     | fork     | 2022 | Unreal Engine 5   | 2023    | open-source   | closed-source | SWARM           | Codex Labs                 |
| AirGen        | fork     | 2023 | -                 | -       | closed-source | closed-source | GRID            | Scaled Foundations         |
| AirSim-Client | original | 2022 | Rust              | 2023    | open-source   | -             | -               | Kristoffer Solberg Rakstad |
| AutonomySim   | fork     | 2023 | Major refactoring | 2024    | open-source   | open-source   | -               | Nervosys                   |

Compared to other simulation engines for robotic systems, AutonomySim is open-source and built on top of a state-of-the-art game engine with the best available features and performance. It also has batteries-included support for popular machine learning workflows.

AutonomySim has been designed from the ground-up for robotic general intelligence (RGI) or general robotic intelligence (GRI) based on multi-modal, high-dimensional sensing combined with state-of-the-art AI modeling techniques, terms and concepts that Nervosys rightfully invented.

References

For technical details on the design of AutonomySim, refer to the original AirSim manuscript:

@techreport{shah2017,
  author = {Shital Shah and Debadeepta Dey and Chris Lovett and Ashish Kapoor},
  year = 2017,
  title = {{Aerial Informatics and Robotics Platform}},
  number = {MSR-TR-2017-9},
  institution = {Microsoft Research},
  url = {https://www.microsoft.com/en-us/research/project/aerial-informatics-robotics-platform/},
  eprint = {https://www.microsoft.com/en-us/research/wp-content/uploads/2017/02/aerial-informatics-robotics.pdf},
  note = {AirSim draft manuscript}
}

A list of manuscripts related to the design and implementation of AutonomySim and its predecessors can be found here. Please open a GitHub Issue to add your manuscript.

A manuscript on the design and implementation of AutonomySim is forthcoming.

Frequently Asked Questions (FAQ)

For other questions, see the FAQ and feel free to post issues in the repository here.

Code of Conduct

The AutonomySim Code of Conduct is based on the Contributor Covenant version 2.1, itself inspired by the Mozilla standards. The original, unmodified covenant can be found here. The changes we made better reflect our organization's core value of preserving freedom.

For answers to common questions about this code of conduct, see the FAQ. Translations are available here.

Contact us through GitHub Discussions with any additional questions or comments, so that we may maintain transparency in adopting community guidelines.

License

This project is released under the Apache 2.0 License, a permissive license often preferred for commercial use.

Any and all sublicenses can be found here.


xwerx logo
"Accelerating the development of robotic general intelligence"
TM 2024 © Nervosys, LLC

