Dexterous Manipulation

The older body of work focused on the kinematics and dynamics of stable grasping. This work used known object properties (e.g., geometry, mass, friction) and known task requirements for forces in various directions to compute a set of stable contact locations and grasp forces. Many papers have been written on fast optimization algorithms for grasp forces and contact locations.
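As a concrete illustration of this style of computation, here is a minimal sketch (not from any particular paper) of checking whether a grasp with known contact geometry and friction can resist gravity. It linearizes each friction cone into edge vectors and asks, via a linear program, whether some nonnegative combination of those edges cancels the gravity force; torques are ignored for brevity, and the specific contact layout and friction coefficient below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def grasp_resists_gravity(cone_edges, gravity):
    """Return True if nonnegative combinations of the (linearized)
    friction-cone edge vectors can cancel the gravity force.
    Torque balance is omitted to keep the sketch short."""
    A_eq = np.column_stack(cone_edges)      # each column is one cone edge
    b_eq = -np.asarray(gravity, dtype=float)  # required net contact force
    res = linprog(c=np.ones(len(cone_edges)),
                  A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(cone_edges))
    return bool(res.success)

# Illustrative 2D pinch grasp: two contacts with inward normals along +/-x,
# friction coefficient mu = 0.5, so each cone has edges n +/- mu * t.
mu = 0.5
edges = [(1, mu), (1, -mu), (-1, mu), (-1, -mu)]
print(grasp_resists_gravity(edges, (0, -1)))             # friction holds the object
print(grasp_resists_gravity([(1, 0), (-1, 0)], (0, -1)))  # frictionless pinch slips
```

The same formulation extends to full force closure by stacking torque rows into the equality constraint, which is essentially what the fast grasp-force optimizers mentioned above solve repeatedly while searching over contact locations.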

The problem with most of this work is that it requires excellent knowledge of the object and task. It is also cumbersome to add practical constraints arising from end-effector limitations, interference from neighboring objects, etc. Furthermore, when we look at human and animal grasping, there is typically much less concern with optimization than with achieving grasps that are satisfactorily secure while also providing freedom for manipulation.

Newer work on autonomous grasping and manipulation has focused on deducing, via machine-learning techniques, suitable grasp regions from visual images (e.g., Ng and co-workers). The robot can move around to gather additional images as desired. The newest work involves having the robot tentatively grasp or probe objects, such as door handles, to see if they behave as anticipated. Little use has been made of tactile sensing, but this is clearly an opportunity for future work. (If you have to open a door with either no tactile sensation or no vision, which is harder: (i) in the initial stages of acquiring the grasp, and (ii) subsequently opening the door by rotating the handle?)

-- MarkCutkosky - 06 Nov 2008

Possible Arms:

Possible Grippers:

Possible Wrists:

-- SamsonPhan - 11 Nov 2008

 