Update:
Approach from the winning team (Delft):
teamdelft-apc.blogspot.nl

Vision
We decided to use two industrial stereo cameras with RGB overlay cameras: one for detecting objects in the tote and one mounted on the robot gripper to scan the bins. Detection proceeded in two steps. For object recognition we used a deep-learning neural network based on Faster R-CNN. Next we performed localization, using an implementation of Super 4PCS for a global optimization of the pose estimate. The estimate was then refined using ICP.
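The ICP refinement step above can be sketched in a few lines. This is a minimal point-to-point ICP with brute-force nearest neighbours, written from scratch for illustration; it is not Team Delft's code, and a real pipeline would use an optimized library implementation and start from the coarse pose that Super 4PCS provides.

```python
import numpy as np

def icp_refine(source, target, iters=20):
    """Minimal point-to-point ICP sketch (not the team's implementation).
    Refines the alignment of `source` (N,3) onto `target` (M,3) and
    returns a 4x4 homogeneous pose matrix mapping source -> target."""
    T = np.eye(4)
    src = source.copy()
    for _ in range(iters):
        # Nearest target point for every source point (brute force, O(N*M)).
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        nn = target[d.argmin(axis=1)]
        # Best rigid transform for these correspondences via SVD (Kabsch).
        cs, ct = src.mean(0), nn.mean(0)
        H = (src - cs).T @ (nn - ct)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = ct - R @ cs
        src = src @ R.T + t               # apply the incremental update
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                      # accumulate the full pose
    return T
```

In practice the correspondence search would use a k-d tree and the loop would stop on convergence, but the structure (match, solve, apply, repeat) is the same.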
Motion
To make the robot motions as fast as possible, yet accurate enough for our sense-plan-act approach, we divided the problem into motions outside the shelf and motions inside its bins. Outside there is no need for dynamic path planning: the environment is static and known a priori. We created a database of collision-free trajectories between the relevant fixed locations, e.g. image-capture and approach locations in front of the shelf's bins and over the tote. To move inside the bins we simplified the dynamic path planning problem by using Cartesian trajectories. To implement our solution we developed customized ROS services on top of MoveIt!, the ROS motion-planning framework.
---------------------------------------
Not sure if there's a "mainstream" motion planning algorithm in APC.
For object recognition:
a CNN for object detection and segmentation, then ICP to estimate the object pose for grasping.
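The glue between those two stages is cropping the depth data with the segmentation mask and back-projecting it into a point cloud, which then feeds the ICP pose estimation. A minimal sketch, assuming a pinhole camera model with illustrative intrinsics (fx, fy, cx, cy are not from a real camera):

```python
import numpy as np

def cloud_from_mask(depth, mask, fx, fy, cx, cy):
    """Back-project the depth pixels selected by a segmentation mask into
    a 3D point cloud in the camera frame (the input to ICP).
    depth: (H,W) metric depth image; mask: (H,W) boolean segmentation."""
    v, u = np.nonzero(mask & (depth > 0))   # pixel rows/cols inside the mask
    z = depth[v, u]
    x = (u - cx) * z / fx                   # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)      # (N, 3) points
```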