
bark-ml's People

Contributors

ferenctorok, juloberno, marcelbrucker, marcelbruckner, marvinlue, patrickhart, pedroacvieira

bark-ml's Issues

Review/Rework of our fork with Tobias Kessler

  • MARCEL BRUCKNER The two archives pretrained_agents and expert_trajectories cannot stay in the repo; such huge files make git incredibly slow. (Yet I see the point that we need them, no discussion on that.) --> Proposal: Create a new repository in the GAIL-4-BARK github project, store the archives there using git lfs and define a bazel filegroup to access the files in the main repo, cf. https://git.fortiss.org/autosim/interaction_dataset/-/blob/master/BUILD (better ideas welcome) --> https://github.com/GAIL-4-BARK/large_data_store

  • MARCEL BRUCKNER Provide a shell script to unzip the files to the correct locations. On my first attempt I failed to execute the example because the path provided in gail_params.json ("expert_path_dir": "data/expert_trajectories/sac_20000_observations") did not match anything in the archive. --> Obsolete with https://github.com/GAIL-4-BARK/large_data_store

  • MARCEL BRUCKNER Out-of-the-box failing test: //bark_ml/tests/py_library_tf2rl_tests:rendered_tests FAILED in 24.7s <-- Several errors, maybe also related to wrong file locations. --> These were never intended to run as tests, as they render to the screen (not possible under bazel test); changed them to a py_binary

  • MARCEL BRUCKNER Out-of-the-box failing test: //examples:tfa FAILED in 11.1s <-- I'm not sure what result I should expect here --> Intended to fail, as it is only runnable in the docker container (Patrick hardcoded some file paths)

  • FERENC TÖRÖK Clean up the notebook: either fill or delete the empty sections, and turn the "project task" section into documentation of what was done rather than what was intended to be done. (Otherwise the notebook and the gail.py example are really nice!)

  • MARCEL BRUCKNER bark_ml/environments/single_agent_runtime.py --> Why? This change is highly non-generic; revert or improve --> [Quick fix: Replace [0]*16 with [0] * observer.observation_space.shape or similar; see the sketch after this list] Fixes an index error if the ego agent is no longer valid after world.Step() is called. Makes the highway and intersection blueprints runnable.

  • MARCEL BRUCKNER Reformat the files once according to the PEP 8 style guide, with a two-space indent and an 80-character line length, see https://github.com/bark-simulator/bark/blob/master/.vscode/settings.json --> Done

  • FERENC TÖRÖK bark_ml/library_wrappers/lib_tf2rl/tf2rl_wrapper.py --> Why doesn't the tf2rl wrapper derive from the single agent runtime? This introduces code duplication and unnecessary boilerplate, e.g. the property defs for _scenario, self.action_space, self.observation_space. Or am I missing something? --> Clarified in mattermost

  • FERENC TÖRÖK _normalize_observation() in bark_ml/library_wrappers/lib_tf2rl/tf2rl_wrapper.py and normalize() in bark_ml/library_wrappers/lib_tf2rl/load_expert_trajectories.py --> Why the code duplication? --> Moved into normalization utils to encapsulate the function

  • MARCEL BRUCKNER bark_ml/library_wrappers/lib_tf_agents/runners/tfa_runner.py --> Revert the changes here and create a new class derived from TFARunner with the stuff related to expert trajectory generation. --> Subclass SACRunnerGenerator created

  • MARCEL BRUCKNER bark_ml/library_wrappers/lib_tf2rl/generate_expert_trajectories.py --> I don't get line 215 ff.: try: observations[agent_id]["merge"] = obs_world.lane_corridor.center_line.bounding_box[0].x() > 900 Reason? Effect? --> Deleted completely as it is not used

  • MARCEL BRUCKNER general: param_server["Scenario"]["Generation"]["InteractionDatasetScenarioGeneration"][....] can be shortened using local_params = param_server["Scenario"]["Generation"]["InteractionDatasetScenarioGeneration"] plus local_params[...] --> Use a local reference into the param_server dict for shorter notation
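
A minimal sketch of the quick fix mentioned for single_agent_runtime.py, assuming a gym-style observation space on the observer (the helper name _fallback_observation is hypothetical):

```python
import numpy as np

# Hypothetical helper for SingleAgentRuntime: derive the fallback
# observation width from the observer instead of hard-coding [0] * 16,
# so the fix works for any blueprint (highway, intersection, ...).
def _fallback_observation(observer):
  # gym spaces expose their dimensionality via .shape (a tuple), so build
  # the zero observation from it rather than multiplying a list by 16.
  return np.zeros(observer.observation_space.shape, dtype=np.float32)
```

This would be returned from step() whenever the ego agent is no longer contained in world.agents after world.Step().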


REMAINING ISSUES

  • bark_ml/library_wrappers/lib_tf2rl/generate_expert_trajectories.py --> simulate_scenario(): why not use the bark runtime?
    --> See mattermost
    --> The world.Evaluate() function gives an empty info dict when replaying the dataset. What could be done is to implement a new evaluator that wraps the measure_world() function and add it as an evaluator to the runtime; then world.Evaluate() would give the desired info (a sketch follows below). I think this is unnecessarily complex at this point.
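
A minimal sketch of such an evaluator, to illustrate the idea only: the import path, whether BaseEvaluator can be subclassed from Python, and the measure_world() signature are all unverified assumptions here.

```python
from bark.core.world.evaluation import BaseEvaluator  # assumed binding

class MeasureWorldEvaluator(BaseEvaluator):
  """Wraps a measurement function so world.Evaluate() returns its metrics."""

  def __init__(self, measure_fn):
    super().__init__()
    self._measure_fn = measure_fn  # e.g. the existing measure_world()

  def Evaluate(self, world):
    # Whatever the wrapped function returns becomes this evaluator's
    # entry in the info dict produced by world.Evaluate().
    return self._measure_fn(world)
```

Registering it on the runtime's world (e.g. via world.AddEvaluator("measure_world", ...), as the blueprints do for the collision and goal evaluators) would then populate the info dict during replay.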

BRANCHES

Tests for GAIL-related classes

Tests

There are currently no tests for the GAIL runners and agents, nor for the tf2rl classes.
Please have a look at the linked files and write tests:

Also there are no tests for:

As I have split up the CI tests, you can run your tests with: bazel run :gail_tests


Test data

I included some test data: https://github.com/GAIL-4-BARK/bark-ml/tree/master/bark_ml/tests/py_library_tf2rl_tests/gail_data/expert_data/open-ai

The test suite must not depend on a pre-generation step for expert trajectories.
We need expert trajectories included in the gail_data folder that can be accessed at test time, so that both test groups behave deterministically:

  • gym training tests
  • bark training tests

For the examples/gail.py script it is perfectly fine to run the whole loop of generating the expert trajectories and then reading them into the trainer, but the tests need fixed expert trajectories (see the sketch below).
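
A minimal sketch of such a deterministic test, assuming the checked-in gail_data layout above; the function name load_expert_trajectories is an assumption about the module's API:

```python
import os
import unittest

# Assumed API of the existing module; adjust the function name if it differs.
from bark_ml.library_wrappers.lib_tf2rl.load_expert_trajectories import (
    load_expert_trajectories)


class GailExpertDataTests(unittest.TestCase):

  def setUp(self):
    # Fixed trajectories shipped with the test suite: no pre-generation
    # step, identical inputs on every CI run.
    self.expert_path = os.path.join(
        os.path.dirname(__file__), "gail_data", "expert_data", "open-ai")

  def test_expert_trajectories_load(self):
    self.assertTrue(os.path.isdir(self.expert_path))
    expert_trajectories = load_expert_trajectories(self.expert_path)
    self.assertGreater(len(expert_trajectories), 0)


if __name__ == '__main__':
  unittest.main()
```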


Starting point

We have already written many tests for the generate script.
To get an overview of how to test in Python, please have a look at:

Task 4: Evaluate the agent

Exchange the trained models between the German and the Chinese map: how well do they
generalize?

Place ONLY gail agents on the map: can we generate scenes that look like the real scenarios?

Task 5: Wrap up the results

Create an out-of-the-box runnable Jupyter notebook to demonstrate the results.

Merge back to the bark master branch.

Task 3: Train the gail agent on the Interaction Dataset

As a data source, we will use the Interaction Dataset. Here,
we are interested in the merging scenes: deu_merging_mt and chn_merging_zs.

Have a look at how the Interaction Dataset is integrated in bark: [Interaction dataset tutorial](https://github.com/bark-simulator/bark/blob/setup_tutorials/docs/tutorials/04_interaction_dataset.ipynb) (Note that the dataset itself is NOT shipped with bark due to license restrictions).

Train and validate agents individually for each scene: in the first step, replace one agent and use
all other agents from the dataset; the gail agent should navigate safely. Afterwards, replace more
than one agent: can we still navigate safely?
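
As a rough sketch, the scenario generation setup could look like the following, combining the Interaction Dataset source with the local_params shortening from the review notes above; the import path and the concrete parameter keys (MapFilename, TrackFilenameList) are assumptions and may differ between bark versions:

```python
from bark.runtime.commons.parameters import ParameterServer  # assumed path

param_server = ParameterServer()
# Local reference into the nested param dict, as suggested in the review:
local_params = param_server["Scenario"]["Generation"][
    "InteractionDatasetScenarioGeneration"]
local_params["MapFilename"] = "DR_DEU_Merging_MT/map.xodr"            # assumed key
local_params["TrackFilenameList"] = ["DR_DEU_Merging_MT/tracks.csv"]  # assumed key
```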
