
mplib's Introduction

MPlib: a Lightweight Motion Planning Library


MPlib is a lightweight Python package for motion planning that is decoupled from ROS and easy to set up.
With a few lines of Python code, one can achieve most of the motion planning functionality needed for robot manipulation.

Installation

Pre-built pip packages support Ubuntu 20.04+ with Python 3.8+.

pip install mplib

Usage

See our tutorial for detailed usage and examples.
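
A minimal sketch of typical usage for a Panda arm, adapted from the snippets quoted later on this page; the asset paths, link/joint names, goal pose, and start configuration are assumptions for illustration:

import mplib
import numpy as np

link_names = ['panda_link0', 'panda_link1', 'panda_link2', 'panda_link3', 'panda_link4',
              'panda_link5', 'panda_link6', 'panda_link7', 'panda_link8', 'panda_hand',
              'panda_leftfinger', 'panda_rightfinger']
joint_names = ['panda_joint1', 'panda_joint2', 'panda_joint3', 'panda_joint4',
               'panda_joint5', 'panda_joint6', 'panda_joint7',
               'panda_finger_joint1', 'panda_finger_joint2']

# Build a planner from a URDF/SRDF pair; move_group is the link that is planned for.
planner = mplib.Planner(
    urdf="./assets/robot/panda/panda.urdf",
    srdf="./assets/robot/panda/panda.srdf",
    user_link_names=link_names,
    user_joint_names=joint_names,
    move_group="panda_hand",
    joint_vel_limits=np.ones(7),
    joint_acc_limits=np.ones(7))

# Goal pose is [x, y, z, qw, qx, qy, qz]; current_qpos has one value per active joint
# (9 for the Panda: 7 arm joints + 2 gripper fingers).
goal_pose = [0.4, 0.0, 0.3, 0, 1, 0, 0]
current_qpos = np.array([0, 0.2, 0, -2.6, 0, 2.9, 0.8, 0, 0])
result = planner.plan(goal_pose, current_qpos, time_step=1 / 250)
if result["status"] == "Success":
    waypoints = result["position"]   # one joint configuration per time step
    velocities = result["velocity"]  # matching joint velocities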

mplib's People

Contributors

colin97, jiayuan-gu, kolinguo, lexseal


mplib's Issues

Fails to completely execute the desired action in a SAPIEN env using mplib.Planner

System:
OS version: Ubuntu 20.04
Python version (if applicable): Python 3.8
SAPIEN version (pip freeze | grep sapien): 2.2.2
Environment: Desktop

Describe the bug
I create an env with SAPIEN, but when I execute a planned action with env.step(action), where action is the target pose of the end effector (8-dim: 3 for position, 4 for quaternion, 1 for the gripper), the arm does not reach my desired pose (although it does move in the correct direction). I suspect it may be a problem with my simulation time or planning timestep, but I have not been able to figure out how to fix it.

Additional context
Some reference code here:

self._sim_freq = 500
self._control_freq = 100
self._sim_steps_per_control = self._sim_freq // self._control_freq
self.dt = 1.0 / (self._control_freq / self._sim_freq)

    def step(self, action): 
        self.current_step += 1
        self.success_planned = self.move_to_pose(action)

    def move_to_pose(self, action):
        pose = action[:-1]   
        open_gripper = action[-1]  
        self.plan_timestep = 0.1
        result = self.planner.plan_screw(pose, self.agent.get_qpos(), time_step=self.plan_timestep) # mplib.Planner
        if result["status"] != "Success":
            self.scene.step()
            return 1
        self.follow_path(result)
        return 0
    
    def follow_path(self, result):
        n_step = result["position"].shape[0]
        assert n_step > 0, "no path"
        min_step = int(self._sim_freq * self.plan_timestep * 2)
        for i in range(min(min_step, n_step)):
            for j in range(7):
                self.active_joints[j].set_drive_target(result["position"][i][j])
                self.active_joints[j].set_drive_velocity_target(result["velocity"][i][j])
            self.scene.step()

Support for mobile robots

The general idea of this feature is that the robot arm is no longer fixed; instead, it sits on a cart that can move around the environment. This enables interesting applications.

The feature will be realized in two parts:

  1. The easy part: modify the planner to take in where the arm's base has moved to; subsequent planning will treat the arm as fixed at this new location (a hypothetical sketch follows below).
  2. The second part involves digging into the OMPL documentation and seeing how to set up the planner to plan a trajectory for the cart while avoiding collisions with the environment.

Once finished, make a tutorial/demo.
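
A hypothetical sketch of what part 1 could look like from the user's side; set_base_pose is not an existing call in this library, and the name, signature, and pose convention are assumptions for illustration:

# planner: an mplib.Planner set up as in the Usage example above.
# Hypothetical API: tell the planner where the cart has driven the arm's base,
# given as [x, y, z, qw, qx, qy, qz] in the world frame.
planner.set_base_pose([1.0, 0.5, 0.0, 1, 0, 0, 0])

# Subsequent planning would then treat the arm as fixed at this new base location.
result = planner.plan(goal_pose, current_qpos, time_step=1 / 250)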

Consolidate Collision Code

As of now, we can check collisions between the arm and world objects as well as between the arm and a point cloud. However, the collision code is somewhat jumbled and not easy for developers to read.

Additionally, the set of APIs we provide is not intuitive for the user. In particular, I personally never know, when I issue a collision query, what is and is not turned on.

Finally, we allow the user to update the point cloud but do not provide a way to add or remove collision objects from the planning world.

Demos for new features

  • Finish up the documentation
  • Add tutorial and more demos (with other robots)
  • Make a video of the new features

Sapien to Planning World

It would be nice to have a function that takes a SAPIEN world object and populates the planning world accordingly.
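
A minimal sketch of one possible approach, assuming obstacles are handed to the planner as a point cloud via update_point_cloud; the helper name is hypothetical and the exact SAPIEN accessors may differ between versions:

import numpy as np

def populate_planning_world_from_sapien(scene, planner, exclude_names=()):
    # Gather world-frame vertices from the collision meshes of every actor in the
    # scene (except those explicitly excluded, e.g. the robot's own links) and pass
    # them to the planner as a single obstacle point cloud.
    points = []
    for actor in scene.get_all_actors():
        if actor.get_name() in exclude_names:
            continue
        actor_tf = actor.get_pose().to_transformation_matrix()
        for shape in actor.get_collision_shapes():
            geom = shape.geometry
            # Convex/trimesh geometries expose vertices; primitives (boxes, spheres)
            # would need their own sampling and are skipped in this sketch.
            if not hasattr(geom, "vertices"):
                continue
            verts = np.asarray(geom.vertices) * getattr(geom, "scale", 1.0)
            tf = actor_tf @ shape.get_local_pose().to_transformation_matrix()
            points.append(verts @ tf[:3, :3].T + tf[:3, 3])
    if points:
        planner.update_point_cloud(np.concatenate(points, axis=0))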

Collision checking doesn't seem to work as expected

Does self.planning_world.collide_full() (from here) check for collisions with the point cloud added using update_point_cloud (from here), or does it only check for collisions between links of the robot? Additionally, update_point_cloud takes in a point cloud as obtained from the gym environment but centered around the robot's base, correct?

I tried to construct the following simple example: the scene simply consists of a robot and a box (loaded from a custom .obj file -- I know the SAPIEN collision-avoidance tutorial creates the box directly, but I want to be able to do this with custom .obj files). I place the robot so that it is in collision with the box, and I would like to detect this collision. I obtain a 360-degree point cloud (as done in the ManiSkill gym environment), center the point cloud around the robot's base, and then call update_point_cloud, but somehow self.planning_world.collide_full() is still empty:

import sapien.core as sapien
from sapien.utils import Viewer
import numpy as np
from scipy.spatial.transform import Rotation as R
from PIL import Image
from copy import deepcopy
from ManiSkill.mani_skill.env.camera import CombinedCamera, read_images_from_camera, read_pointclouds_from_camera

# these first four functions are from ManiSkill/mani_skill/env/base_env.py, modified to not use 'self'
def _load_camera(cam_info, agent, scene):
        cam_info = deepcopy(cam_info)
        if 'parent' in cam_info:
                if cam_info['parent'] == 'robot':
                        parent = agent.get_base_link()
                else:
                        assert False
                        parent = self.objects[cam_info['parent']]
                        if isinstance(parent, sapien.Articulation):
                                parent = parent.get_links()[0]
                camera_mount_actor = parent
                del cam_info['parent']
        else:
            camera_mount_actor = scene.create_actor_builder().build_kinematic()
        pose = sapien.Pose(cam_info['position'], cam_info['rotation'])
        del cam_info['position'], cam_info['rotation']
        camera = scene.add_mounted_camera(
            actor=camera_mount_actor, pose=pose, **cam_info, fovx=0
        )
        return camera

def render(mode='color_image', depth=False, seg=None, camera_names=None, scene=None, cameras=None):
        scene.update_render()
        if mode == 'human':
                if self._viewer is None:
                        self._viewer = Viewer(self._renderer)
                        self._setup_viewer()
                self._viewer.render()
                return self._viewer
        else:
                if seg is not None:
                        if seg == 'visual':
                                seg_idx = 0
                        elif seg == 'actor':
                                seg_idx = 1
                        elif seg == 'both':
                                seg_idx = [0, 1]
                        else:
                                raise NotImplementedError()
                else:
                        seg_idx = None
                if camera_names is None:
                        cameras = self.cameras
                else:
                        cameras_new = []
                        for camera in cameras:
                                if camera.get_name() in camera_names:
                                        cameras_new.append(camera)
                        cameras = cameras_new
                if mode == 'color_image' or mode == 'pointcloud':
                        views = {}
                        get_view_func = read_images_from_camera if mode == 'color_image' else read_pointclouds_from_camera
                        for cam in cameras:
                                cam.take_picture()
                        for cam in cameras:
                                if isinstance(cam, CombinedCamera):
                                        view = cam.get_combined_view(mode, depth, seg_idx) # list of dict for image, dict for pointcloud
                                else:
                                        view = get_view_func(cam, depth, seg_idx) # dict
                                views[cam.get_name()] = view
                        return views

def _post_process_view(view_dict, robot_link_ids=None):
        actor_id_seg = view_dict['seg'] # (n, m, 1)
        mask = np.zeros(actor_id_seg.shape, dtype=bool)
        for actor_id in robot_link_ids:
            mask = mask | ( actor_id_seg == actor_id )

        view_dict['seg'] = mask

def post_processing(obs, obs_mode, robot_link_ids):
        views = obs[obs_mode]
        for cam_name, view in views.items():
                if isinstance(view, list):
                        for view_dict in view:
                                _post_process_view(view_dict, robot_link_ids=robot_link_ids)
                        combined_view = {}
                        for key in view[0].keys():
                                combined_view[key] = np.concatenate([view_dict[key] for view_dict in view], axis=-1)
                        views[cam_name] = combined_view
                else: # view is a dict
                        _post_process_view(view, robot_link_ids=robot_link_ids)
                if len(views) == 1:
                        view = next(iter(views.values()))
                        obs[obs_mode] = view
        return obs

def main():
        # setup
        engine = sapien.Engine()
        renderer = sapien.VulkanRenderer()
        engine.set_renderer(renderer)

        scene = engine.create_scene()
        scene.set_timestep(1 / 100.0)

        rscene = scene.get_renderer_scene()
        rscene.set_ambient_light([0.5, 0.5, 0.5])
        rscene.add_directional_light([0, 1, -1], [0.5, 0.5, 0.5], shadow=True)
        rscene.add_point_light([1, 2, 2], [1, 1, 1], shadow=True)
        rscene.add_point_light([1, -2, 2], [1, 1, 1], shadow=True)
        rscene.add_point_light([-1, 0, 1], [1, 1, 1], shadow=True)

        # viewer
        viewer = Viewer(renderer)  # Create a viewer (window)
        viewer.set_scene(scene)  # Bind the viewer and the scene
        viewer.set_camera_xyz(x=-4, y=0, z=2)
        viewer.set_camera_rpy(r=0, p=-np.arctan2(2, 4), y=0)
        viewer.window.set_camera_parameters(near=0.05, far=100, fovy=1)

        # load box object
        loader: sapien.URDFLoader = scene.create_urdf_loader()
        urdf = 'box.urdf'
        asset = loader.load(urdf)
        rotation = np.array([1, 0, 0, 0])
        translation = np.array([-1, -1, 0])
        asset.set_pose(sapien.Pose(p=translation, q=rotation))

        # load and place robot (using code from the gym environment so we can obtain pointclouds)
        import pathlib
        from mani_skill.utils.config_parser import (
                preprocess,
                process_variables,
                process_variants,
        )
        import importlib
        config_file = pathlib.Path('ManiSkill/mani_skill/assets/config_files/open_cabinet_drawer.yml')
        yaml_config = preprocess(config_file)
        config = deepcopy(yaml_config)
        _level_rng = np.random.RandomState(seed=0)
        variant_config = {'partnet_mobility_id': '1004'}
        config = process_variables(config, _level_rng)
        level_config, level_variant_config = process_variants(
                config, _level_rng, variant_config
        )
        agent_config = level_config['agent']
        module_name, class_name = agent_config['agent_class'].rsplit('.', 1)
        module = importlib.import_module(module_name)
        AgentClass = getattr(module, class_name)
        agent = AgentClass(engine, scene, agent_config)

        # choose robot configuration such that arm is colliding with box
        cur_state = agent.get_state()
        cur_state[12:14] = np.array([-1, 0])
        cur_state[18:28] = np.zeros(10)
        cur_state[19] = 1.45
        cur_state[20] = -0.7 # 1.75
        agent.set_state(cur_state)

        # get pointcloud from this position
        robot_link_ids = agent.get_link_ids()
        cam_infos = [level_config['render']['cameras'][1]] # only keep robot cameras, no world camera
        cameras = []
        for cam_info in cam_infos:
                if 'sub_cameras' in cam_info:
                        sub_cameras = [_load_camera(sub_cam_info, agent, scene) for sub_cam_info in cam_info['sub_cameras']]
                        combined_camera = CombinedCamera(cam_info['name'], sub_cameras)
                        cameras.append(combined_camera)
                else:
                        assert False
                        camera = self._load_camera(cam_info)
                        cameras.append(camera)
        seg='actor'
        obs = {'agent': agent.get_state(with_controller_state=False), 'pointcloud': render(mode='pointcloud', camera_names=['robot'], seg=seg, scene=scene, cameras=cameras)}
        obs = post_processing(obs, obs_mode='pointcloud', robot_link_ids=robot_link_ids)
        np.save('pointcloud_test.npy', obs['pointcloud'])

        # adjust pointcloud to be centered around robot
        obs['pointcloud']['xyz'][:, 0] += 1

        '''
        # collision detection
        import mplib
        link_names = ['panda_link0', 'panda_link1', 'panda_link2', 'panda_link3', 'panda_link4', 'panda_link5', 'panda_link6', 'panda_link7', 'panda_link8', 'panda_hand', 'panda_leftfinger', 'panda_rightfinger']
        joint_names = ['panda_joint1', 'panda_joint2', 'panda_joint3', 'panda_joint4', 'panda_joint5', 'panda_joint6', 'panda_joint7', 'panda_finger_joint1', 'panda_finger_joint2']
        planner = mplib.Planner(
                urdf="./assets/robot/panda/panda.urdf",
                srdf="./assets/robot/panda/panda.srdf",
                user_link_names=link_names,
                user_joint_names=joint_names,
                move_group="panda_hand",
                joint_vel_limits=np.ones(7),
                joint_acc_limits=np.ones(7))
        planner.update_point_cloud(obs['pointcloud']['xyz'])
        path = planner.plan(goal_pose=np.zeros(9),current_qpos=agent.get_state(by_dict=True)['qpos'][1:],use_point_cloud=True)
        '''

        while not viewer.closed:  # Press key q to quit
            viewer.render()


if __name__ == '__main__':
        main()

With the collision-detection portion at the end commented out, we can see in the viewer that the robot is clearly in collision with the box. However, when we uncomment that part, self.planning_world.collide_full() still returns nothing -- this is not the expected behavior, is it?

For completeness, box.urdf:

<?xml version="1.0" ?>
<robot name="box">
        <link name="link_box">
                <visual name="box">
                        <origin xyz="0 0 0"/>
                        <geometry>
                                <mesh filename="box.obj"/>
                        </geometry>
                </visual>
                <collision>
                        <origin xyz="0 0 0"/>
                        <geometry>
                                <mesh filename="box.obj"/>
                        </geometry>
                </collision>
        </link>
</robot>

and box.obj:

o Mesh
v -0.15 -0.15 -0.25
v 0.15 -0.15 -0.25
v 0.15 0.15 -0.25
v -0.15 0.15 -0.25
v -0.15 -0.15 0.5
v 0.15 -0.15 0.5
v 0.15 0.15 0.5
v -0.15 0.15 0.5
vn 0 -1 0.5
vn 0 1 0.5
vn -1 0 0.5
vn 1 0 0
vn 0 0 1
vn 0 0 -1
vt 0 0
vt 1 0
vt 0 1
vt 1 1
f 1/1/1 2/2/1 6/4/1
f 1/1/1 6/4/1 5/3/1
f 3/1/2 4/2/2 8/4/2
f 3/1/2 8/4/2 7/3/2
f 4/1/3 1/2/3 5/4/3
f 4/1/3 5/4/3 8/3/3
f 2/1/4 3/2/4 7/4/4
f 2/1/4 7/4/4 6/3/4
f 5/1/5 6/2/5 7/4/5
f 5/1/5 7/4/5 8/3/5
f 4/1/6 3/2/6 2/4/6
f 4/1/6 2/4/6 1/3/6

Apologies for the incredibly long post.

Investigate why RRT-Connect results in jagged paths

When planning with RRT-Connect, the arm frequently goes back and forth before finally reaching the goal pose. Moreover, even when a straight collision-free trajectory exists, the arm usually does not take it.

As far as I remember, the MoveIt planner does not produce such movements using the same RRT-Connect.

Steps to take:

  1. Set up a similar scene in RViz that resembles the scene in examples/avoid_collision.py
  2. Try MoveIt with RRT-Connect on that scene
  3. Compare the results and try some other planning algorithms

avoid extreme trajectories

Current algorithms ensure collision-free paths. However, some of these trajectories come uncomfortably close to obstacles, which is not ideal for real-world deployment.

Are there methods to prevent such extreme trajectories, or could a 'buffer threshold' option be introduced so that the generated paths maintain a safe distance from potential collisions?
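
One workaround sketch in the meantime, assuming obstacles are supplied to the planner as a point cloud: dilate the cloud by a safety margin before calling update_point_cloud, so that any plan clearing the inflated obstacles keeps at least that margin from the real ones. The margin value and helper name below are illustrative, not an existing mplib option:

import numpy as np

def inflate_point_cloud(points, margin=0.03):
    # Add six offset copies of every point (along +/-x, +/-y, +/-z) so obstacles
    # appear `margin` meters thicker to the collision checker.
    offsets = margin * np.array([
        [1, 0, 0], [-1, 0, 0],
        [0, 1, 0], [0, -1, 0],
        [0, 0, 1], [0, 0, -1],
    ])
    return np.concatenate([points] + [points + o for o in offsets], axis=0)

# obstacle_points: (N, 3) array in the robot base frame, as passed to update_point_cloud.
planner.update_point_cloud(inflate_point_cloud(obstacle_points))

This only buys clearance from point-cloud obstacles; a real 'buffer threshold' inside the planner would also need to pad the robot's own collision geometry.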

MPlib installation issue on Mac

When I try to install MPlib using pip install mplib I get:

ERROR: Could not find a version that satisfies the requirement mplib (from versions: none)
ERROR: No matching distribution found for mplib

I am on macOS with Python 3.7 and pip 21.3.1. Any ideas?

Quality of Life Improvements

  • The API has some unintuitive parts where we supply the same information in multiple different places. For example, we supply joint limit information inside the URDF, yet we may also need to supply the same information to the planner. The planner grabs part of the info from the URDF but also uses the info we supply. It gets messy and confusing.
  • The demo has a lot of hard-coded constants, such as the number 7, which is the number of motors on the original Panda URDF. Users who grab the demo and expect it to work on their robot out of the box may be frustrated when it does not work as expected. Change these hard-coded constants to values inferred from the URDF.
  • stdout color: when running with verbose output, OMPL prints a lot of information in color, which is easy to read. When printing warnings, our library should use a different color, say yellow.

Self-collision api and env-collision API with documentation

Currently, the collision API is only partially exposed to the Python end user. In particular, there is a collision request function that returns a CollisionResult object. This is unintuitive to use, and moreover it does not seem to support self-collision.

Instead, we want to make the API dead simple and provide a tutorial/example for people to copy from when they want to use this feature.

There will be two APIs (sketched below):

  1. check_for_self_collision takes in an articulation object as well as a pose and returns pairs of links in contact with each other.
  2. check_for_env_collision takes in an articulation object as well as a pose and returns its collisions with all other objects in the planning world.
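
A sketch of how the two proposed calls might look, following the descriptions above; the names and return shapes are the proposal, not an existing mplib API:

def check_for_self_collision(articulation, pose):
    """Return the pairs of links of `articulation` that are in contact with each
    other when the articulation is placed at `pose`."""
    ...

def check_for_env_collision(articulation, pose):
    """Return the collisions between `articulation` at `pose` and every other
    object currently registered in the planning world."""
    ...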

Sphinx not parsing docstring correctly in stub files

Sphinx is able to automatically document the .pyi stub files. However, it is inconsistent at parsing the docstrings, sometimes leaving the arguments in one big clump and dropping the return value. An example can be found below.

[screenshot: example of a mis-parsed docstring]

Support for XArm6 robot

Hi,

Does the library support planning for the XArm6 robot? Its URDF is here and SRDF is here. My code for setting up the planner is as follows:

link_names = [link.get_name() for link in self.robot.get_links()]
joint_names = [joint.get_name() for joint in self.robot.get_active_joints()]
self.hand_link_name = 'xarm_gripper_base_link'
self.planner = mplib.Planner(
    urdf=os.path.join(self.config.robot_asset_path, "xarm/xarm6_with_gripper.urdf"),
    srdf=os.path.join(self.config.robot_asset_path, "xarm/xarm6_with_gripper.srdf"),
    user_link_names=link_names,
    user_joint_names=joint_names,
    move_group=self.hand_link_name,
    joint_vel_limits=np.ones(6),
    joint_acc_limits=np.ones(6))

Swap out FCL for HPPFCL

FCL seems unmaintained and has a lot of bugs.

Kolin has been running into half-plane bugs and segfaults.

The plane collision also segfaults.

API for checking if a pose is collision free

Two functions can be added to the planner that, given a pose of the robot (with or without an end effector attached), answer:

  1. Is the pose self-collision-free?
  2. Is the pose environment collision-free?

The two functions should return boolean values.
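
A small sketch of how these could be thin boolean wrappers over the pair-returning checks proposed in the self-/env-collision API issue above; all names are proposals, not an existing API:

def is_pose_self_collision_free(articulation, pose):
    # True when no pair of the robot's links collides at the given pose.
    return len(check_for_self_collision(articulation, pose)) == 0

def is_pose_env_collision_free(articulation, pose):
    # True when the robot at the given pose touches nothing else in the planning world.
    return len(check_for_env_collision(articulation, pose)) == 0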

Profile compile time

Compilation is quite slow even on my 12900K. It looks like linking takes the most time, but this needs to be verified.

Profile the compilation process and identify the bottleneck, then look for ways to speed it up.

Better index out of range error

Currently, when the user supplies the wrong number of joint values or similar, the error message is very cryptic, and I quote:

python3.11: /usr/include/eigen3/Eigen/src/Core/DenseCoeffsBase.h:163: Eigen::DenseCoeffsBase<Derived, 0>::CoeffReturnType Eigen::DenseCoeffsBase<Derived, 0>::operator[](Eigen::Index) const [with Derived = Eigen::Matrix<double, -1, 1>; Eigen::DenseCoeffsBase<Derived, 0>::CoeffReturnType = const double&; Eigen::Index = long int]: Assertion `index >= 0 && index < size()' failed.

This hints that there is some indexing issue, but the library does not tell the user where the error happened. In fact, there are multiple places that can produce this message. For the users' sanity, add array bound checks at each of these places so they don't have to spend hours debugging what is going on.
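
A minimal sketch of the kind of bound check the Python layer could add before handing values to the Eigen-backed code; where the expected count comes from (the parsed URDF, the planner's joint list, etc.) is left open here:

def check_joint_values(qpos, expected_dof, where):
    # Fail early with a readable message instead of tripping Eigen's
    # "index >= 0 && index < size()" assertion deep inside the bindings.
    if len(qpos) != expected_dof:
        raise ValueError(
            f"{where}: got {len(qpos)} joint values but expected {expected_dof}; "
            "pass one value per active joint.")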

Collisions not correctly disabled when no srdf file is passed in

When the user does not pass in an SRDF, we first look for one by changing the URDF file extension to .srdf, in the hope that there is one in the same directory. If that fails, we resort to generating an SRDF ourselves.

However, it seems the generation function isn't quite correct and doesn't even record all of the adjacent links.
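
A minimal sketch of how the adjacent-link pairs could be collected from the URDF itself, assuming a standard URDF layout; this illustrates what the generated SRDF should at least contain, not the current implementation:

import xml.etree.ElementTree as ET

def adjacent_link_pairs(urdf_path):
    # Every joint connects a parent link to a child link; each such pair should be
    # listed as a disabled collision in the generated SRDF.
    root = ET.parse(urdf_path).getroot()
    pairs = set()
    for joint in root.findall("joint"):
        parent = joint.find("parent").attrib["link"]
        child = joint.find("child").attrib["link"]
        pairs.add((parent, child))
    return pairs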

Transition to Sphinx

Some additional stuff to do

  • Include readme on the front page
  • Include references to the raw code
  • New tutorials for the new features
  • Set up a GitHub pipeline to generate docs after merge

Python 3.10 support

Could you please provide a packaged version for Python 3.10?

I've tried installing manually from source, but get hung up on the following errors:

/home/hartzj/MPlib/src/ompl_planner.h:1:10: fatal error: pinocchio/fwd.hpp: No such file or directory
    1 | #include <pinocchio/fwd.hpp>
      |          ^~~~~~~~~~~~~~~~~~~
compilation terminated.
gmake[2]: *** [CMakeFiles/mp.dir/build.make:118: CMakeFiles/mp.dir/src/ompl_planner.cpp.o] Error 1
gmake[2]: *** Waiting for unfinished jobs....
/home/hartzj/MPlib/src/articulated_model.cpp:1:10: fatal error: pinocchio/algorithm/aba.hpp: No such file or directory
    1 | #include <pinocchio/algorithm/aba.hpp>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
gmake[2]: *** [CMakeFiles/mp.dir/build.make:76: CMakeFiles/mp.dir/src/articulated_model.cpp.o] Error 1
In file included from /home/hartzj/MPlib/src/kdl_model.h:4,
                 from /home/hartzj/MPlib/src/kdl_model.cpp:1:
/home/hartzj/MPlib/src/urdf_utils.h:3:10: fatal error: pinocchio/algorithm/jacobian.hpp: No such file or directory
    3 | #include <pinocchio/algorithm/jacobian.hpp>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
gmake[2]: *** [CMakeFiles/mp.dir/build.make:104: CMakeFiles/mp.dir/src/kdl_model.cpp.o] Error 1
In file included from /home/hartzj/MPlib/src/fcl_model.cpp:2:
/home/hartzj/MPlib/src/urdf_utils.h:3:10: fatal error: pinocchio/algorithm/jacobian.hpp: No such file or directory
    3 | #include <pinocchio/algorithm/jacobian.hpp>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

I've tried installing Pinocchio as described here, first from apt, then from pip.

Wrapper for a set of collision objects associated with a single link

A lot of SAPIEN collision geometry is composite; in particular, there can be multiple collision geometries associated with a single link.

  • Add a wrapper to house multiple collision objects
  • Add pointers back to parent for these wrappers so that we can do broad-phase collision later on
  • Add tests

Faster planning for use cases that do not require a specific end-effector rotation

Some users do not care about the rotation of the end effector as long as the arm reaches a certain position. This seems like a good opportunity to optimize the time needed to plan a motion.

One idea is to solve the IK without rotation constraints and then feed all the solutions to the planner, asking it to return the first successful plan to any of the targets.

Before implementing the whole thing, it is useful to benchmark how much faster IK is without rotation constraints and how much faster planning is when multiple targets are admissible.

Planner init bug

Hi, in planner.py, Planner.__init__(...), if srdf == '', then on line 74 it calls self.generate_collision_pair() before assigning self.user_link_names. However, inside self.generate_collision_pair(), on line 138, n_link = len(self.user_link_names). This leads to AttributeError: 'Planner' object has no attribute 'user_link_names'.
