
Learning to Pick with 3D Point Cloud

Assignment Project #2: 30% | Due Monday, Apr 26


Target

This experiment is divided into two parts: 6D calibration and object grasping. The 6D calibration establishes the transformation between the camera coordinate system and the robot coordinate system from 3D point cloud information, that is, the hand-eye transformation matrix describing the relative spatial pose of the robot and the camera. Building on this, object grasping obtains the grasping coordinates from the 3D point cloud captured by the camera and controls the robot's movement to grasp the target.
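For reference, once pairs of robot poses and camera observations of the calibration target have been recorded, such a hand-eye matrix can be computed with OpenCV's solver. This is a minimal sketch, not this project's exact calibration code; the pose lists are assumed inputs:

```python
import cv2

def hand_eye_matrix(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Solve the AX = XB hand-eye problem for the camera-to-gripper transform.

    Each argument is a list of 3x3 rotations / 3x1 translations recorded over
    several robot poses (assumed to come from the calibration runs).
    """
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base, R_target2cam, t_target2cam)
    return R_cam2gripper, t_cam2gripper
```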

Hardware List

The hardware equipment required for this experiment is as follows:

  • camera: Intel RealSense D435
  • robot arm: AUBO i5
  • computing platform: MSI Trident (i7-10700, GTX 1660 6 GB, 8 GB DDR4), Ubuntu
  • calibration target: 3D-printed L-shaped calibration target with a printed 4×4 checkerboard
  • gripper: pneumatic clamping jaw (with air compressor and air valve)
  • target objects: plastic bottles, cans

Algorithm

In this project, we mainly used the GeoGrasp algorithm.

GeoGrasp is an algorithm based on 3D point clouds, originally designed and implemented by Zapata-Impata et al.; it forms the main framework of our project. Traditionally, vision-based grasping systems proposed in the literature take multiple views to detect and identify the object in front of the robot. Once they recognize the object and its pose, they calculate potential contact points using stored 3D CAD models. Some recent solutions find these grasping configurations with machine learning techniques trained on large data sets or in simulation. GeoGrasp, in contrast, only needs one camera to operate, and it requires no training data since it is based on geometric analysis. However, some unsupervised learning techniques, including clustering and PCA, are used in the process.

The general procedure of the algorithm can be divided into three parts: scene segmentation, locating grasping areas, and ranking grasping points, as shown in the figure below.

(Figure: the three stages of the GeoGrasp pipeline: scene segmentation, grasping-area location, and grasping-point ranking.)

The first step is to segment the potential objects out of the scene. Euclidean Cluster Extraction from the Point Cloud Library (PCL) is used in the original algorithm. Other clustering methods, such as Gaussian mixture models, might perform better than simply thresholding Euclidean distances, but there is no guarantee that their processing speed would satisfy real-time requirements.
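As a rough sketch of this step, the following uses Open3D (an assumption; the original GeoGrasp implementation uses PCL's Euclidean Cluster Extraction in C++), with RANSAC plane removal followed by DBSCAN clustering as a stand-in for Euclidean clustering:

```python
import numpy as np
import open3d as o3d

# Load the captured scene ("scene.pcd" is a hypothetical file name).
pcd = o3d.io.read_point_cloud("scene.pcd")

# Remove the dominant plane (the table top) with RANSAC, keeping the objects.
_, inliers = pcd.segment_plane(distance_threshold=0.01,
                               ransac_n=3, num_iterations=1000)
objects = pcd.select_by_index(inliers, invert=True)

# Cluster what remains; DBSCAN with a small eps behaves like Euclidean
# cluster extraction for well-separated objects.
labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=50))
clusters = [objects.select_by_index(np.where(labels == k)[0].tolist())
            for k in range(labels.max() + 1)]
```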

After segmenting out the objects, we have to find the grasping areas that contain potential grasping points. To begin, filters are used to reduce noise on the surface of the object. Principal component analysis (PCA) is then used to find the vector v, which approximates the orientation of the object. The cutting plane γ passes through the centroid c of the point cloud and is perpendicular to v, which helps guarantee that the grasping points are stable. The thickness of the cutting plane is set empirically to 7 mm. Opposite areas along the vector v are selected as grasping areas. To make the grasping areas more precise, initial grasping points are found by taking the maximum and minimum in these areas along the perpendicular axis; spheres drawn around these two points are then set as the final grasping areas.
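A minimal NumPy sketch of the PCA and cutting-plane step (the function name and array layout are assumptions):

```python
import numpy as np

def cutting_plane(points, thickness=0.007):
    """Sketch of the grasping-area step: points is an (N, 3) array of one
    segmented object; thickness is the empirical 7 mm slab width."""
    c = points.mean(axis=0)                      # centroid, lies on plane γ
    _, _, vt = np.linalg.svd(points - c)
    v = vt[0]                                    # principal axis of the object
    dist = (points - c) @ v                      # signed distance to plane γ
    slab = points[np.abs(dist) < thickness / 2]  # points within the slab
    return c, v, slab
```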

The last step is to evaluate the contact points to determine the best grasping points. First, define θ to be a grasp configuration whose contact points are one point q_i from each of the voxelized grasping areas. A ranking function is then proposed to choose the best points by assessing the potential stability of the grasp, mainly considering the following four factors: 1) distance to the cutting plane; 2) curvature at the point; 3) antipodal configuration; 4) perpendicular grasp. Finally, the ranking function below combines these factors to score the potential stability of a grasp configuration:

(Equations: the ranking function and its component terms; see the GeoGrasp paper for the exact expressions.)
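As a rough illustration, here is a minimal Python sketch of such a ranking. The terms mirror the four factors above, but the weights and exact form are assumptions for illustration, not the formula from the paper:

```python
import numpy as np

def rank_grasp(p1, p2, n1, n2, k1, k2, v, c):
    """Illustrative ranking of one grasp configuration θ = (p1, p2).

    n1/n2 are surface normals, k1/k2 curvatures at the contact points, v the
    object axis and c the centroid (all NumPy arrays / floats). The exact
    weighting is an assumption; higher scores are better.
    """
    axis = (p2 - p1) / np.linalg.norm(p2 - p1)
    antipodal = -float(n1 @ n2)                  # opposing normals score high
    perpendicular = 1.0 - abs(float(axis @ v))   # grasp axis ⟂ object axis
    flatness = -(k1 + k2)                        # low curvature preferred
    closeness = -(abs(float((p1 - c) @ v)) +     # stay near the cutting plane
                  abs(float((p2 - c) @ v)))
    return antipodal + perpendicular + flatness + closeness
```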

Preparation

  1. Connect the power supply and plug in the AUBO.
  2. Turn the black knob on the AUBO main machine to ON, then twist the red emergency stop button up to the right to release the emergency stop.
  3. Turn the emergency stop button on the teach pendant up to the right, open the teach pendant, and click the save button in the pop-up window.
  4. Connect the control box to the main machine with a network cable.
  5. Right-click on the desktop to open a terminal and enter the following commands to open PyCharm.
cd Downloads/pycharm-community-2020.3.3/bin
sh pycharm.sh
  6. Enter the following command in the terminal to turn on the camera viewer.
realsense-viewer
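If the depth stream also needs to be read from Python rather than the viewer, a minimal pyrealsense2 sketch looks like this (the resolution and frame-rate settings are assumptions, not project settings):

```python
import pyrealsense2 as rs

# Open the D435 depth stream and grab one frame as a point cloud.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    points = rs.pointcloud().calculate(depth)   # depth frame -> point cloud
finally:
    pipeline.stop()
```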

Experiment

  1. Start the device, run the calibration, and walk the arm through the calibration grid.
  2. Open realsense-viewer, click Stereo Module, select four points and record them, enter plane_calculate.py, create the plane_model, and enter the main function (a plane-fitting sketch follows this list).
  3. Set up the gripper and pay attention to the safety height.
  4. Read the crop_bounding values from realsense-viewer.
  5. Run the robot arm.
  6. Verify the algorithm by changing the position of the bottle or can.
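As referenced in step 2, a plane model can be fitted to the four recorded points by least squares. This is a minimal NumPy sketch, not the code in plane_calculate.py; the sample coordinates are hypothetical:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points: returns (a, b, c, d) for
    ax + by + cz + d = 0, with (a, b, c) a unit normal."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                      # direction of least variance
    return (*normal, -float(normal @ centroid))

# Example with four hypothetical points read from realsense-viewer:
plane_model = fit_plane([[0.10, 0.00, 0.50], [0.30, 0.00, 0.50],
                         [0.10, 0.20, 0.50], [0.30, 0.20, 0.50]])
```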

Video

https://www.bilibili.com/video/BV1R64y1m7nk/

Challenges and Solutions

  • PyCharm: always open PyCharm from the terminal with the commands given above. If you open PyCharm in any other way, you will run into path errors.
  • Calibration plate problem: in the beginning, our calibration plate was broken. We used tape to hold it together and continued the 3D calibration. However, such a plate retains a slight bend, and the small errors gradually accumulate over the iterative calibration, eventually producing a large deviation in the 3D calibration results.
  • The offset of the calibration plate relative to the end flange of the manipulator in the CAIL-3D file was not modified, which caused the calibration to fail.
  • You need to change the storage path of the computed results at the end of EyeOnBase.py. The previous code did not write the matrix we calculated to the file that is read later, so the file names must be unified (see the sketch after this list).
  • Camera problem: when the camera is in the Stereo Module state with the 3D view open, its depth view is poor, with many black areas and missing information, which caused our early calibration to fail.
  • Pay attention to the stability of the camera's USB connection. Sometimes the connection becomes unstable and the stream fails; a new socket is needed.
  • When using realsense-viewer to define a custom rectangular area for the 6D grasping process, the region should not be too small; otherwise the target can easily move outside the recognition area.
  • The position of the empty bottle also affects the grasping result. If the mouth of the bottle faces the base of the robot arm, the gripper tilts too much and cannot clamp the bottle accurately.
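As referenced above, the fix for the EyeOnBase.py path problem is simply to make the save path and the later load path identical. A minimal sketch (the file name is a hypothetical placeholder):

```python
import numpy as np

RESULT_PATH = "hand_eye_matrix.npy"   # hypothetical unified file name

T_cam2base = np.eye(4)                # stand-in for the computed 4x4 result

# End of the calibration script: save under the unified name...
np.save(RESULT_PATH, T_cam2base)

# ...and the grasping script must load from the SAME path.
T_cam2base = np.load(RESULT_PATH)
```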

Conclusion

The main algorithm we used in this project was GeoGrasp. Compared with region-proposal-based methods, GeoGrasp does not have to collect multi-angle images to build the 3D point cloud, which makes it fast and effective. Although more general methods may provide better accuracy, GeoGrasp is sufficient for garbage identification. In the course of the experiment we encountered some problems, described in the previous section. With everyone's joint efforts, the problems were solved and the task was completed.

  • HAN Xudong: project management, result expression
  • ZHANG Zicong: hardware design and preparation
  • SONG Yichen: algorithm repetition and debugging
  • DENG Ranbao: algorithm repetition and debugging
  • WANG Haowen: 3D point cloud calibration
  • LIU Zhengtao: hardware preparation
  • ZHANG Dapeng: system debugging
  • LIU Yangchenguang: problem record and summary
