Next: Distributed Implementation Up: Autonomous Agents: from Concepts Previous: Introduction

Our Multi-Agent Model

  Our model for autonomy-based multi-agent systems is composed of an Environment and a list of Agents. The Environment encompasses a list of Cells and a set of Objects that the agents manipulate. Every Cell contains a list of neighbour Cells, which implicitly defines the topology, and the set of objects present on it at a given time.
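As a minimal sketch, the data model above could be expressed as follows; all class and field names here are illustrative assumptions, not taken from the paper's actual implementation:

```python
# Hypothetical sketch of the Environment / Cell / Object model.
# Field names (neighbours, objects, cells, agents) are assumptions.
from dataclasses import dataclass, field

@dataclass
class Cell:
    # The neighbour list implicitly defines the topology of the world.
    neighbours: list = field(default_factory=list)
    # Objects present on this cell at a given time.
    objects: set = field(default_factory=set)

@dataclass
class Environment:
    cells: list = field(default_factory=list)    # list of Cells
    objects: set = field(default_factory=set)    # all manipulable Objects
    agents: list = field(default_factory=list)   # list of Agents
```

For example, linking two cells as mutual neighbours yields a minimal two-cell topology; no global adjacency structure is needed beyond each cell's own neighbour list.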

The architecture of an agent is shown in Figure 1. An agent possesses sensors to perceive the world in which it moves, and effectors to act on that world, so that it complies with the prescriptions for physically embodied agents and simulated embodied agents [Ziemke1997]. The implementation of the modules presented in Figure 1, namely Perception, State, Actions and Control Algorithm, depends on the application and is the user's responsibility. In the Perception module, the designer specifies the type of perception the agent has, e.g. whether the agent perceives only the number of objects on the cell on which it stands. The State module encompasses the agent's private information, e.g. whether or not it carries an object, its strain, and so on. The Actions module typically consists of the basic actions the agent can take, e.g. move to a neighbour cell, pick up an object or drop an object.

The Control Algorithm module is particularly important because it defines the type of autonomy of the agent: it is precisely inside this module that the designer decides whether to implement operational autonomy or behavioral autonomy [Ziemke1997]. Operational autonomy is the capacity to operate without human intervention and without being remotely controlled. Behavioral autonomy supposes that the basis of self-steering originates in the agent's own capacity to form and adapt its principles of behavior: to be behaviorally autonomous, an agent needs not only the freedom to behave and operate without human intervention (operational autonomy), but also the freedom to have formed (learned or decided) its principles of behavior on its own, from its experience, at least partly.
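The four modules above can be sketched as follows; the module names follow the paper, while the class structure, field names and the particular control rules are assumptions made for illustration. The fixed, hand-written rules in the Control Algorithm make this sketch operationally autonomous only:

```python
# Hypothetical sketch of the agent architecture of Figure 1.
# Perception, State, Actions and Control Algorithm map to the
# perceive(), carrying, pick_up()/drop()/move() and step() members.
from dataclasses import dataclass, field
import random

@dataclass
class Cell:
    neighbours: list = field(default_factory=list)
    objects: list = field(default_factory=list)

@dataclass
class Agent:
    cell: Cell
    carrying: bool = False          # State module: private information

    def perceive(self) -> int:
        # Perception module: this agent senses only the number of
        # objects on the cell on which it stands.
        return len(self.cell.objects)

    # Actions module: the basic actions the agent can take.
    def pick_up(self) -> None:
        if self.cell.objects and not self.carrying:
            self.cell.objects.pop()
            self.carrying = True

    def drop(self) -> None:
        if self.carrying:
            self.cell.objects.append(object())
            self.carrying = False

    def move(self) -> None:
        if self.cell.neighbours:
            self.cell = random.choice(self.cell.neighbours)

    def step(self) -> None:
        # Control Algorithm module: fixed rules, so the agent is
        # operationally autonomous (it runs without external control)
        # but not behaviorally autonomous (it does not learn or adapt
        # these rules from its own experience).
        if not self.carrying and self.perceive() > 0:
            self.pick_up()
        else:
            self.move()
```

A behaviorally autonomous variant would replace the fixed rules in `step()` with principles the agent forms and adapts itself, e.g. through learning from its experience.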

Figure 1: Architecture of an agent.



Chantemargue Fabrice
Thu Mar 12 11:42:01 MET 1998