The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases – perception, processing, and action (robotic paradigms). Sensors give information about the environment or about the robot itself (e.g. the position of its joints or its end effector). This information is then processed to calculate the appropriate signals to the actuators (motors), which move the mechanical structure.
The processing phase can range in complexity. At a reactive level, it may translate raw sensor information directly into actuator commands. Sensor fusion may first be used to estimate parameters of interest (e.g. the position of the robot's gripper) from noisy sensor data. An immediate task (such as moving the gripper in a certain direction) is inferred from these estimates. Techniques from control theory convert the task into commands that drive the actuators.
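The sketch below illustrates this reactive pipeline in miniature: two noisy readings of a gripper's position along a single axis are fused into one estimate, and a proportional controller turns the immediate task ("move the gripper to a target position") into a velocity command. The one-dimensional setup, noise levels, and gain are illustrative assumptions, not a real robot interface.

```python
import random

# Hypothetical 1-D gripper axis: fuse two noisy position sensors, then use a
# proportional controller to drive the gripper toward a target position.

def read_sensors(true_position):
    """Simulate two noisy measurements of the gripper position."""
    return (true_position + random.gauss(0, 0.05),
            true_position + random.gauss(0, 0.10))

def fuse(measurement_a, measurement_b):
    """Naive sensor fusion: weight each sensor by the inverse of its variance."""
    w_a, w_b = 1 / 0.05**2, 1 / 0.10**2
    return (w_a * measurement_a + w_b * measurement_b) / (w_a + w_b)

def proportional_control(estimate, target, gain=2.0):
    """Convert the immediate task (reach `target`) into an actuator command."""
    return gain * (target - estimate)

# Perception -> processing -> action loop for a fixed number of steps.
position, target, dt = 0.0, 1.0, 0.05
for _ in range(100):
    estimate = fuse(*read_sensors(position))                    # perception + fusion
    velocity_command = proportional_control(estimate, target)   # processing
    position += velocity_command * dt                           # action (idealized actuator)

print(f"final position approx. {position:.3f} (target {target})")
```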
At longer time scales or with more sophisticated tasks, the robot may need to build and reason with a "cognitive" model. Cognitive models try to represent the robot, the world, and how they interact. Pattern recognition and computer vision can be used to track objects. Mapping techniques can be used to build maps of the world. Finally, motion planning and other artificial intelligence techniques may be used to figure out how to act. For example, a planner may figure out how to achieve a task without hitting obstacles, falling over, etc.
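As a minimal illustration of the planning step, the sketch below runs a breadth-first search over a small occupancy grid to find a collision-free path from a start cell to a goal cell. The grid, start, and goal are made-up examples; real planners typically work in higher-dimensional configuration spaces and use more sophisticated search.

```python
from collections import deque

# Hypothetical 2-D grid world: 0 = free cell, 1 = obstacle. Breadth-first
# search finds a collision-free path from start to goal.
GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def plan_path(grid, start, goal):
    """Return a list of grid cells from start to goal that avoids obstacles."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Reconstruct the path by walking the parent links backwards.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    return None  # no collision-free path exists

print(plan_path(GRID, start=(0, 0), goal=(4, 4)))
```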
Autonomy levels
Control systems may also have varying levels of autonomy.
- Direct interaction is used for haptic or tele-operated devices, and the human has nearly complete control over the robot's motion.
- Operator-assist modes have the operator commanding medium-to-high-level tasks, with the robot automatically figuring out how to achieve them.
- An autonomous robot may operate for extended periods without human interaction. Higher levels of autonomy do not necessarily require more complex cognitive capabilities. For example, robots in assembly plants are completely autonomous but operate in a fixed pattern.
Another classification takes into account the interaction between human control and machine motion.
- Teleoperation. A human controls each movement; every change of a machine actuator is specified by the operator.
- Supervisory. A human specifies general moves or position changes and the machine decides specific movements of its actuators.
- Task-level autonomy. The operator specifies only the task and the robot manages itself to complete it.
- Full autonomy. The machine will create and complete all its tasks without human interaction.
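As a rough sketch of how this classification could surface in software, the example below encodes the four interaction levels as a mode switch that determines how much interpretation the machine applies to an operator's command. The mode names and dispatch behaviour are illustrative assumptions, not an established robot API.

```python
from enum import Enum, auto

# Illustrative encoding of the four human/machine interaction levels above.
class ControlMode(Enum):
    TELEOPERATION = auto()   # operator specifies every actuator change
    SUPERVISORY = auto()     # operator gives general moves, machine fills in details
    TASK_LEVEL = auto()      # operator specifies only the task
    FULL_AUTONOMY = auto()   # machine generates and completes its own tasks

def handle_operator_input(mode: ControlMode, command: str) -> str:
    """Decide how much interpretation the machine applies to an operator command."""
    if mode is ControlMode.TELEOPERATION:
        return f"execute actuator command directly: {command}"
    if mode is ControlMode.SUPERVISORY:
        return f"plan specific actuator motions for move: {command}"
    if mode is ControlMode.TASK_LEVEL:
        return f"decompose and schedule task: {command}"
    return "generate and execute own tasks (no operator input needed)"

print(handle_operator_input(ControlMode.SUPERVISORY, "move gripper 10 cm left"))
```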