
differentiable-robot-model's Introduction

differentiable robot model


Differentiable and learnable robot model. Our differentiable robot model implements computations such as forward kinematics and inverse dynamics in a fully differentiable way. We also allow specifying
learnable parameters (kinematic or dynamic), which can then be identified from data (see the examples folder).
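As a toy illustration of what "fully differentiable" means here (this is not this library's API, just a hand-rolled planar 2-link arm), gradients of an end-effector position flow through PyTorch autograd:

```python
import torch

# hypothetical link lengths for a planar 2-link arm (not from this library)
L1, L2 = 1.0, 0.6

def fk(q):
    """End-effector (x, y) of a planar 2-link arm."""
    x = L1 * torch.cos(q[0]) + L2 * torch.cos(q[0] + q[1])
    y = L1 * torch.sin(q[0]) + L2 * torch.sin(q[0] + q[1])
    return torch.stack([x, y])

q = torch.tensor([0.3, 0.5], requires_grad=True)
x, y = fk(q)
x.backward()  # d(x)/d(q) via autograd
print(q.grad)  # one row of the analytic Jacobian
```

The same principle lets kinematic or dynamic parameters (like `L1`, `L2` above) receive gradients and be fit from data.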

Currently, our code should work with any kinematic tree. This package comes with wrappers specifically for:

  • TriFinger Edu
  • Kuka iiwa
  • Franka Panda
  • Allegro Hand
  • Fetch Arm
  • a 2-link toy robot

You can find the documentation here: Differentiable-Robot-Model Documentation

Installation

Requirements: python>= 3.7

Clone this repo and install from source:

git clone git@github.com:facebookresearch/differentiable-robot-model.git
cd differentiable-robot-model
python setup.py develop

Examples

Two example scripts show the learning of kinematics parameters

python examples/learn_kinematics_of_iiwa.py

and the learning of dynamics parameters

python examples/learn_dynamics_of_iiwa.py
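The idea behind these scripts can be sketched in a few lines (a toy version, not the repo's actual training loop): generate data from a "ground-truth" model, then fit unknown parameters by gradient descent. Here the unknowns are two link lengths of a hypothetical planar arm:

```python
import torch

torch.manual_seed(0)

def fk(q, l1, l2):
    """Batched end-effector position of a planar 2-link arm."""
    x = l1 * torch.cos(q[:, 0]) + l2 * torch.cos(q[:, 0] + q[:, 1])
    y = l1 * torch.sin(q[:, 0]) + l2 * torch.sin(q[:, 0] + q[:, 1])
    return torch.stack([x, y], dim=1)

# "ground-truth" data from known link lengths
q = torch.rand(200, 2) * 3.0
with torch.no_grad():
    target = fk(q, 0.7, 0.4)

# learnable parameters, deliberately mis-initialized
l1 = torch.tensor(1.0, requires_grad=True)
l2 = torch.tensor(1.0, requires_grad=True)
opt = torch.optim.Adam([l1, l2], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = ((fk(q, l1, l2) - target) ** 2).mean()
    loss.backward()
    opt.step()
print(l1.item(), l2.item())  # should approach 0.7 and 0.4
```

The real scripts do the analogous thing with the iiwa's kinematics and dynamics parameters instead of toy link lengths.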

L4DC paper and experiments

The notebook experiments/l4dc-sim-experiments shows a set of experiments that are similar to what we presented in our L4DC paper:

@InProceedings{pmlr-v120-sutanto20a,
    title     = {Encoding Physical Constraints in Differentiable Newton-Euler Algorithm},
    author    = {Sutanto, Giovanni and Wang, Austin and Lin, Yixin and Mukadam, Mustafa and Sukhatme, Gaurav and Rai, Akshara and Meier, Franziska},
    pages     = {804--813},
    year      = {2020},
    editor    = {Alexandre M. Bayen and Ali Jadbabaie and George Pappas and Pablo A. Parrilo and Benjamin Recht and Claire Tomlin and Melanie Zeilinger},
    volume    = {120},
    series    = {Proceedings of Machine Learning Research},
    address   = {The Cloud},
    month     = {10--11 Jun},
    publisher = {PMLR},
    pdf       = {http://proceedings.mlr.press/v120/sutanto20a/sutanto20a.pdf},
    url       = {http://proceedings.mlr.press/v120/sutanto20a.html},
}

Testing

Running pytest in the top-level folder will run our differentiable robot model tests, which compare computations against PyBullet.

Code Contribution

We enforce linters for our code. The formatting test will not pass if your code does not conform.

To make this easy for yourself, you can either

  • Add the formatter to your IDE
  • Install the git pre-commit hooks by running
    pip install pre-commit
    pre-commit install

For Python code, use black.

To enforce this in VSCode, install black, set your Python formatter to black and set Format On Save to true.

To format manually, run: black .

License

differentiable-robot-model is released under the MIT license. See LICENSE for additional details about it. See also our Terms of Use and Privacy Policy.

differentiable-robot-model's People

Contributors

1heart, exhaustin, fmeier, gsutanto


differentiable-robot-model's Issues

Discrepancy between Fwd and Inv Dynamics in CUDA

Inverse and forward dynamics functions should agree with each other. We were testing this assumption in the following:

tau_pred = compute_inverse_dynamics(q, qd, qdd)
qdd_pred = compute_forward_dynamics(q, qd, tau_pred)

We expected qdd and qdd_pred to be very close to each other.

When we do this experiment with device=cpu, the L2 error is on the order of 10^-5. But when we use device=cuda, we get L2 errors around 0.12, which is unexpected.

I have a simple script that minimally reproduces this problem here
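The round-trip pattern itself can be sanity-checked with a toy linear stand-in (a sketch, not the library's Newton-Euler implementation; all names below are made up). In float64, composing inverse and forward dynamics recovers qdd to near machine precision, which suggests a gap of 0.12 is device- or precision-specific rather than a flaw in the test methodology:

```python
import torch

torch.manual_seed(0)
n = 7
# toy dynamics: tau = M qdd + h, with M symmetric positive definite
A = torch.randn(n, n, dtype=torch.float64)
M = A @ A.T + n * torch.eye(n, dtype=torch.float64)
h = torch.randn(n, dtype=torch.float64)

def inverse_dynamics(qdd):
    return M @ qdd + h

def forward_dynamics(tau):
    return torch.linalg.solve(M, tau - h)

qdd = torch.randn(n, dtype=torch.float64)
err = torch.linalg.norm(qdd - forward_dynamics(inverse_dynamics(qdd)))
print(err)  # near machine precision in float64
```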

@fmeier @gsutanto @exhaustin

CUDA support

Hi,

Thank you for sharing your code! I can only run your code on CPU. Does this code work with CUDA as well?

Urgently need help on a very simple conceptual problem

Hi there,

I am trying to understand why I fail to produce identical qdd when I use the same robot to perform forward dynamics twice.

Here is the test script:

n_data = 100
device = "cpu"
gt_robot_model = DifferentiableKUKAiiwa(device=device)
train_data = generate_sine_motion_forward_dynamics_data(
    gt_robot_model, n_data=n_data, dt=1.0 / 250.0, freq=0.1
)
train_loader = DataLoader(dataset=train_data, batch_size=100, shuffle=False)

for g in [True, False]:
    for d in [True, False]:
        for batch_idx, batch_data in enumerate(train_loader):
            q, qd, qdd, tau = batch_data
            qdd_pred = gt_robot_model.compute_forward_dynamics(
                q=q, qd=qd, f=tau, include_gravity=g, use_damping=d
            )
            print(g, d, torch.sum(torch.abs(qdd_pred - qdd)))

printout:
True True tensor(15084.983)
True False tensor(19104.133)
False True tensor(20023.293)
False False tensor(23508.631)

Please let me know which concepts I have failed to grasp, so that I can read up on them.
Many thanks,

ModuleNotFoundError: No module named 'diff_robot_data'

Error after renaming the repo dir to differentiable_robot_model:

File "..../differentiable_robot_model/differentiable_robot_model/robot_model.py", line 20, in <module>
    import diff_robot_data
ModuleNotFoundError: No module named 'diff_robot_data'

I added an __init__.py file in the repo dir and I tried

import sys
sys.path.append('../')

But it still wouldn't import. I need to define robot_description_folder by hand for now.

Method compute_endeffector_jacobian does not support batch

Hello,

I have been experimenting with the compute_endeffector_jacobian method within the DifferentiableRobotModel class. It seems that it cannot support batched joint angle inputs, and always returns linear and angular Jacobians with shape [3, n_dof].

This issue can be reproduced by the code below:

from differentiable_robot_model import DifferentiableRobotModel
import diff_robot_data
import torch
import numpy as np
import os

rel_urdf_path = "panda_description/urdf/panda_no_gripper.urdf"
ee_link_name = "panda_virtual_ee_link"
urdf_path = os.path.join(diff_robot_data.__path__[0], rel_urdf_path)

robot_model = DifferentiableRobotModel(urdf_path)

limits_per_joint = robot_model.get_joint_limits()
joint_lower_bounds = np.array(
    [joint["lower"] for joint in limits_per_joint])
joint_upper_bounds = np.array(
    [joint["upper"] for joint in limits_per_joint])

# generate random joint angles
batch_size = 10
joint_angles = np.random.uniform(
    joint_lower_bounds, 
    joint_upper_bounds, 
    (batch_size, 7))
joint_angles = torch.from_numpy(joint_angles.astype('float32'))

# compute forward kinematics, which supports batch
ee_position, ee_orientation = robot_model.compute_forward_kinematics(
    joint_angles, 
    link_name=ee_link_name)
# prints end effector position shape: torch.Size([10, 3])
print('end effector position shape:', ee_position.shape)
# prints end effector orientation shape: torch.Size([10, 4])
print('end effector orientation shape:', ee_orientation.shape)

# compute Jacobians, which does not support batch
position_jacobian, orientation_jacobian = robot_model.compute_endeffector_jacobian(
    joint_angles,
    link_name=ee_link_name
)
# prints position jacobian shape: torch.Size([3, 7])
print('position jacobian shape:', position_jacobian.shape)
# prints orientation jacobian shape: torch.Size([3, 7])
print('orientation jacobian shape:', orientation_jacobian.shape)

Thanks,
Anqi
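Until batched Jacobians are supported, one possible workaround (a sketch, using a toy FK function as a stand-in for the model's method) is to compute the Jacobian per sample with torch.autograd.functional.jacobian and stack the results:

```python
import torch
from torch.autograd.functional import jacobian

def fk(q):
    """Toy planar 2-link FK; stand-in for the model's forward kinematics."""
    l1, l2 = 1.0, 0.6
    return torch.stack([
        l1 * torch.cos(q[0]) + l2 * torch.cos(q[0] + q[1]),
        l1 * torch.sin(q[0]) + l2 * torch.sin(q[0] + q[1]),
    ])

q_batch = torch.rand(10, 2)
# per-sample loop; torch.vmap could replace this on recent PyTorch versions
J_batch = torch.stack([jacobian(fk, q) for q in q_batch])
print(J_batch.shape)  # torch.Size([10, 2, 2])
```

The loop is slow for large batches, but autograd guarantees each stacked Jacobian matches the analytic one.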

Query regarding learning of robot model using this code

Dear Contributors,

Great work! I have thoroughly looked into the code and understood the following:

Step 1: First, sine-wave trajectory data containing (q, dq, ddq, tau) is generated using the ground-truth robot model as given below:

train_data = generate_sine_motion_inverse_dynamics_data(gt_robot_model, n_data=1000, dt=1.0/250.0, freq=0.05)

Step 2: Then torque values are predicted using the learnable robot model as:

tau_pred = learnable_robot_model.compute_inverse_dynamics(q=q, qd=qd, qdd_des=qdd_des, include_gravity=True)

My query is regarding step 2. You are computing the inverse dynamics (i.e. tau) using the same model from which you computed the ground-truth values. I think the learnable robot model should be a neural network. Please guide me regarding this.

Thanks.

Best,
Deepak Raina
