ISSN 0253-2778

CN 34-1054/N

Open Access    JUSTC Research Article

Two-stage grasping detection for robots based on RGB images

Cite this: https://doi.org/10.3969/j.issn.0253-2778.2020.01.001
  • Received Date: 15 November 2018
  • Revised Date: 27 May 2019
  • Publish Date: 31 January 2020
  • Abstract: Robots are playing an increasingly important role in many applications, and accurate grasp detection is a key component of a robot's workflow. An end-to-end method for robotic grasp detection in an RGB image containing objects is proposed: it takes the whole image as input and outputs the prediction directly, without using traditional sliding windows or region extraction. Since different grasp points lead to different grasp orientations, the detection proceeds in two stages. First, a convolutional neural network is trained to predict the positions of grasp points. Next, a square region centered on each predicted grasp point is cropped from the image; edges are extracted with the Canny edge detector and lines are detected with the Hough Transform. A principal-direction detection algorithm is proposed to analyze these lines and determine the grasp orientation and the distance between the two parallel gripper fingers. The method achieves better grasp detection and shows the benefit of combining deep learning with traditional computer-vision algorithms.
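As an illustration of the second stage described above, the following is a minimal sketch in Python, assuming OpenCV and NumPy. The grasp point is assumed to come from the first-stage CNN (not reproduced here), and the length-weighted angle vote below is a stand-in for the paper's principal-direction detection algorithm, not the authors' exact implementation.

    # Second-stage sketch: Canny edges + Hough lines around a predicted grasp point.
    # Assumptions: image is BGR (OpenCV convention); (grasp_x, grasp_y) comes from the CNN.
    import cv2
    import numpy as np

    def estimate_grasp_orientation(image_bgr, grasp_x, grasp_y, window=64):
        """Crop a square window around the predicted grasp point, extract edges
        with Canny, detect line segments with the probabilistic Hough Transform,
        and vote for a dominant (principal) edge direction in degrees."""
        h, w = image_bgr.shape[:2]
        half = window // 2
        x0, x1 = max(grasp_x - half, 0), min(grasp_x + half, w)
        y0, y1 = max(grasp_y - half, 0), min(grasp_y + half, h)
        patch = cv2.cvtColor(image_bgr[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)

        edges = cv2.Canny(patch, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=20,
                                minLineLength=10, maxLineGap=5)
        if lines is None:
            return None

        # Vote for the principal direction: weight each segment's angle by its length.
        hist = np.zeros(180)
        for xa, ya, xb, yb in lines[:, 0]:
            angle = int(np.degrees(np.arctan2(yb - ya, xb - xa))) % 180
            hist[angle] += np.hypot(xb - xa, yb - ya)
        principal = int(np.argmax(hist))

        # A parallel-jaw gripper typically closes perpendicular to the object's edges.
        return float((principal + 90) % 180)

The gripper opening width, which the paper derives from the distance between the two parallel fingers, could likewise be estimated from the spacing of roughly parallel edge pairs within the same window; that step is omitted from this sketch.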
