Automated Robotic Picking in Unstructured Environments: Robot Perception, Learning, and Planning
Prof. Hao Zhang, Human-Centered Robotics Laboratory

Background
• Human-centered robotics: the use of robotic systems in human-social environments to help people live safer, more comfortable, and more independent lives
• Enabling robots with the capabilities to live among us and to take over tasks where our current society has shortcomings
(Figures: 3D model and synthetic point clouds; object dataset; item list; Golem Krang, 2012; Care-O-Bot, 2011)

Motivation
• Applications: search and rescue, daily life assistance, and automated picking in warehouses (Amazon Picking Challenge, 2015)

Challenges
• Complex, dynamic, unstructured multi-human social environments
• Real-time requirements under computational constraints

Overview
• Given an order list of items, pick the items from a shelf and place them in an order bin
• The task must be completed autonomously, without any human involvement
• Can we make a robot autonomously pick the items in this scenario?

Approach
Pipeline: Object Recognition → Grasp Planning → Motion Planning → Error Detection and Recovery

Object Recognition
• Detect and classify the items using 3D and 2D robot perception [1, 2], with 3D scenes from RGB-D cameras
• Train an exemplar Support Vector Machine (SVM) classifier to detect and localize the items, using the BigBIRD object dataset [3]

Grasp Planning
• Compute initial grasping points using the deep grasping method [4]
• Refine the grasping points using observations from multiple perspectives

Motion Planning
• Pre-compute a trajectory from a start position to the center of each shelf bin from which items are picked
• Apply MoveIt! [5] to plan a short trajectory between the bin center and the grasping point

Error Detection and Recovery
• Confirm that grasping was successful; otherwise, try grasping again
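The pick-and-place loop described above (recognize the item, compute and refine a grasp point, plan and execute a trajectory, then verify the grasp and retry on failure) can be sketched as follows. This is a hypothetical control-loop outline, not the actual system's code: `detect`, `compute_grasp`, `plan_and_execute`, and `grasp_ok` are stand-in names for the perception, grasp-planning, motion-planning, and error-detection components.

```python
def pick_items(order_list, detect, compute_grasp, plan_and_execute, grasp_ok,
               max_attempts=3):
    """Pick each ordered item from the shelf, retrying failed grasps.

    Each stage is passed in as a callable so the loop structure is
    independent of the underlying perception/planning implementation.
    Returns (picked, failed) item lists.
    """
    picked, failed = [], []
    for item in order_list:
        pose = detect(item)              # 3D/2D perception + exemplar SVM
        if pose is None:                 # item not found in the shelf bin
            failed.append(item)
            continue
        success = False
        for _ in range(max_attempts):    # error detection and recovery loop
            grasp_point = compute_grasp(pose)   # initial + refined grasp point
            plan_and_execute(grasp_point)       # bin-center + short trajectory
            if grasp_ok(item):                  # confirm the grasp succeeded
                success = True                  # then drop item in order bin
                break
        (picked if success else failed).append(item)
    return picked, failed
```

A grasp that fails on the first attempt is simply retried (up to `max_attempts` times) rather than aborting the whole order, which matches the error-recovery behavior described on the poster.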
Amazon Picking Challenge
• Amazon's automated warehouses are currently successful at moving and searching for items within the warehouse; however, item picking and sorting are still performed manually
• Automated picking in unstructured environments remains a difficult challenge

Robot Operating System (ROS) Framework
(Figures: 2D scenes from wrist cameras; Baxter robot and workspace setup)

References
1. "The OpenCV Library", http://opencv.org, accessed on 03/14/15.
2. R. B. Rusu and S. Cousins, "3D is here: Point Cloud Library (PCL)," in IEEE International Conference on Robotics and Automation, 2011.
3. A. Singh, J. Sha, K. S. Narayan, T. Achim, and P. Abbeel, "BigBIRD: A large-scale 3D database of object instances," in IEEE International Conference on Robotics and Automation, 2014.
4. I. Lenz, H. Lee, and A. Saxena, "Deep Learning for Detecting Robotic Grasps," International Journal of Robotics Research, to appear, 2015.
5. "MoveIt!", http://moveit.ros.org, accessed on 03/14/15.

Contact
Hao Zhang, Ph.D., Assistant Professor
Dept. of Elec. Engr. & Comp. Sci., Colorado School of Mines
Phone: (303) 273-3581 | Email: hzhang@mines.edu
HCRobotics Lab: http://hcr.mines.edu