L. De Silva and H. Ekanayake
Behavior-based robots have come under the spotlight in recent years by making their presence known in diverse fields of human interest. Applications in behavior-based robotics have continued to grow in areas such as demining, search and rescue, office automation, and health care, and continue to replace human beings in risky and menial tasks. With this inspiration, this paper reviews the basic behavior architectures that govern the design of reactive robots.
The earliest robotic systems used a high-level structured approach to architecture design known as the hierarchical paradigm. In this architecture, the robot followed a strict sequence of stages known as the SENSE, PLAN and ACT cycle. In the SENSE stage, the robot would acquire information about its environment using its sensors and use this information to create a representative data structure known as the world model. The PLAN stage received this world model and devised an action plan for achieving the task. Finally, the ACT stage executed the actuator commands generated according to the directives received from the planning stage. However, owing to its limited flexibility, the hierarchical paradigm suffered from two major problems: the closed-world assumption and the frame problem.
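The cycle above can be sketched as a simple loop. This is a minimal illustration, not an implementation from the literature; the class, its methods, and the toy world-model dictionary are all hypothetical names chosen for clarity.

```python
# A minimal sketch of the hierarchical SENSE-PLAN-ACT cycle.
# All class and method names here are illustrative, not from any real robot API.

class HierarchicalRobot:
    def sense(self):
        # Acquire sensor readings and build a world model
        # (a trivial dict standing in for a full symbolic representation).
        return {"obstacle_ahead": False, "goal_bearing": 30.0}

    def plan(self, world_model):
        # Devise an action plan from the world model.
        if world_model["obstacle_ahead"]:
            return ["turn_left", "move_forward"]
        return ["move_forward"]

    def act(self, plan):
        # Execute the actuator commands dictated by the plan.
        return [f"executing {step}" for step in plan]

robot = HierarchicalRobot()
world = robot.sense()          # SENSE
plan = robot.plan(world)       # PLAN
log = robot.act(plan)          # ACT
```

The strict serialization is the point: the robot cannot act until a fresh world model has been built and a plan derived from it, which is exactly where the closed-world assumption and the frame problem bite when the environment changes mid-cycle.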
These limitations gave roboticists the incentive to look into biological studies and to search for new paradigms for robotic control. Thus the concept of a behavior, the basic building block of the reactive architecture, was born. Through the use of behaviors, roboticists introduced new paradigms for designing robotic systems inspired by animal behaviors. These models explored how concurrent behaviors are handled in the biological world, and the resulting observations led to the development of several architectures. This paper focuses on the two most widely tried architectures: subsumption and the potential fields methodology.
Subsumption builds on layers of competence that are vertically arranged in order of sophistication. Basic survival behaviors such as collision avoidance occupy the lowest layer, while more cognitively advanced behaviors such as path mapping occupy the higher layers. The higher layers have the ability to inhibit or suppress the behaviors of the lower layers, hence the name subsumption. The layers of competence execute concurrently and independently, with each behavior having its own sensing.
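The layered arbitration can be illustrated with a toy sketch, assuming just two layers and a fixed suppression rule; the behavior classes and sensor keys are invented for this example and simplify Brooks' actual wire-level suppression mechanism considerably.

```python
# Illustrative sketch of subsumption-style layering: each layer reads its own
# sensors and may emit a command; a higher layer that emits a command
# suppresses the output of the layers beneath it.

class AvoidCollision:
    """Lowest layer: basic survival competence."""
    def output(self, sensors):
        if sensors["obstacle_near"]:
            return "turn_away"
        return None  # no opinion when the path is clear

class FollowPath:
    """Higher layer: more sophisticated competence."""
    def output(self, sensors):
        if sensors["obstacle_near"]:
            return None  # defer, letting the survival layer act
        return "follow_waypoint"

def arbitrate(sensors, layers):
    # Layers are ordered lowest -> highest; the highest layer that emits
    # a command suppresses everything beneath it.
    command = None
    for layer in layers:
        out = layer.output(sensors)
        if out is not None:
            command = out
    return command

layers = [AvoidCollision(), FollowPath()]
```

With no obstacle, the higher path-following layer suppresses the idle survival layer; when an obstacle appears, the higher layer withdraws and the survival behavior's command stands.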
The Potential Fields Methodology (PFM) represents each behavior as a vector, and for this reason the methodology is generally regarded as confined to navigational robots. The behavior vectors are combined by vector summation to produce the emergent behavior. For presentational purposes, objects in the environment are assumed to exert force fields on the robot, commonly illustrated as magnetic or gravitational fields.
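A small numeric sketch makes the vector summation concrete. The field functions and gain values below are illustrative assumptions, not a prescribed formulation; the inverse-square repulsion is just one common choice.

```python
# Potential-fields sketch: each behavior emits a 2-D vector and the
# emergent motion command is their vector sum. Gains are arbitrary.

def attract_to_goal(robot, goal, gain=1.0):
    # The goal pulls the robot, like a gravitational field.
    return (gain * (goal[0] - robot[0]), gain * (goal[1] - robot[1]))

def repel_from_obstacle(robot, obstacle, gain=2.0):
    # The obstacle pushes the robot away, more strongly when closer
    # (magnitude falls off with distance via the squared-distance term).
    dx, dy = robot[0] - obstacle[0], robot[1] - obstacle[1]
    d2 = dx * dx + dy * dy
    return (gain * dx / d2, gain * dy / d2)

def combine(vectors):
    # Vector summation of all active behaviors yields the emergent behavior.
    return (sum(v[0] for v in vectors), sum(v[1] for v in vectors))

robot, goal, obstacle = (0.0, 0.0), (10.0, 0.0), (2.0, 1.0)
move = combine([attract_to_goal(robot, goal),
                repel_from_obstacle(robot, obstacle)])
```

Here the attraction toward the goal dominates, while the repulsion from the nearby obstacle deflects the resultant vector slightly away from it.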
The reactive paradigm started showing its limitations when moving into domains with complex tasks that required cognitive capabilities, such as efficient path planning, map making, and the robot's evaluation of its own performance, which the reactive paradigm proved incapable of satisfying. The solution was the hybrid deliberative/reactive paradigm. Proponents of hybrid architectures advocate combining the advantageous aspects of the two previous paradigms to produce architectures suited to complex scenarios. Another interesting deviation of hybrid architectures from the reactive paradigm is that the sensors that were unique to each behavior in the reactive paradigm are now shared with the planning modules. With such convenient features, hybrid architectures currently stand as the most preferred approach for building modern robotic systems.
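The division of labor, and the shared sensing that distinguishes hybrids from the purely reactive paradigm, can be sketched as follows. Every name here is hypothetical, and the "planner" is deliberately trivial; a real system would run a search-based path planner over the map.

```python
# Toy sketch of the hybrid deliberative/reactive split: a deliberative
# planner produces waypoints while a reactive layer handles moment-to-moment
# hazards. Both layers read the SAME sensor data, unlike the reactive
# paradigm where each behavior had its own sensing.

def plan_route(world_map, start, goal):
    # Deliberative layer: here just a straight-line "plan" of two waypoints.
    return [start, goal]

def reactive_step(waypoint, sensors):
    # Reactive layer: override the plan when an obstacle is sensed.
    if sensors["obstacle_near"]:
        return "avoid"
    return f"head_to {waypoint}"

shared_sensors = {"obstacle_near": False}   # visible to both layers
route = plan_route({}, (0, 0), (5, 5))
command = reactive_step(route[1], shared_sensors)
```

The planner runs slowly over the shared world picture; the reactive layer runs every control cycle, so safety never waits on deliberation.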