CHAPTER 11
VISION-BASED CONTROL

In Chapter 10 we described methods to control the forces and torques applied by a manipulator interacting with its environment. Force feedback is most useful when the end effector is in physical contact with the environment. During free motion, such as moving a gripper toward a grasping configuration, force feedback provides no information that can be used to guide the motion of the gripper. In such situations, noncontact sensing, such as computer vision, can be used to control the motion of the end effector relative to the environment.

In this chapter we consider the problem of vision-based control. Unlike force control, with vision-based control the quantities of concern are typically not measured directly by the sensor. For example, if the task is to grasp an object, the quantities of concern are pose variables that describe the position of the object and the configuration of the gripper. A vision sensor provides a two-dimensional image of the workspace, but does not explicitly contain any information regarding the pose of the objects in the scene. There is, of course, a relationship between this image and the geometry of the robot’s workspace, but the task of inferring the 3D structure of the scene from an image is a difficult one, and a full solution to this problem is not required for most robotic manipulation tasks. The problem faced in vision-based control is that of extracting a relevant set of parameters from an image and using these parameters ...
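The gap between a two-dimensional image and the three-dimensional geometry of the workspace can be illustrated with a minimal pinhole-camera sketch (the focal length and point coordinates here are illustrative values, not ones from the text): under perspective projection, all 3D points along a ray through the optical center map to the same image point, so depth cannot be recovered from the image alone.

```python
import numpy as np

# Pinhole camera model: a 3D point P = (X, Y, Z), expressed in the
# camera frame, projects to image coordinates (u, v) = (f X / Z, f Y / Z).
f = 0.01  # focal length in meters (illustrative value)

def project(P, f=f):
    """Perspective projection of a 3D point onto the image plane."""
    X, Y, Z = P
    return np.array([f * X / Z, f * Y / Z])

# Two distinct 3D points on the same ray through the optical center
# project to the same image point -- the depth Z is lost.
p1 = project(np.array([0.1, 0.2, 1.0]))
p2 = project(np.array([0.2, 0.4, 2.0]))
print(np.allclose(p1, p2))  # True: the image alone does not determine Z
```

This is why vision-based control typically works with image-plane parameters directly rather than first solving the full 3D reconstruction problem.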
