From Cognitive Architecture Wiki
The world is a very complex place for agents to act in. Most architectures are designed to deal with only a fraction of the total possible environmental complexity by acting in particular domains. For example, some architectures assume that the world is static and that the only changes in the world occur via an agent's actions. Other architectures may operate in dynamic environments but require that the world be consistent or predictable. The links below briefly describe and define some of the environmental considerations made when developing cognitive architectures, mostly without direct reference to specific architectures.
A static environment consists of unchanging surroundings in which an agent navigates, manipulates, or perhaps simply problem solves. The agent, then, does not need to adapt to new situations, nor do its designers need to concern themselves with the issue of inconsistencies of the world model within the agent itself. An example of such an environment is a simulated office setting, where the doorways and halls never change, and there are no moving objects that populate the simulated space. Other static environments include those for simple problem solving and one-player games (such as the eight-puzzle) in which nothing changes except through the action of the agent.
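A static environment of this kind can be made concrete with a minimal sketch of the eight-puzzle, in which the only way the world ever changes is through the agent's own moves. The class and method names here are illustrative, not drawn from any particular architecture.

```python
# Minimal sketch of a static, single-agent environment: the eight-puzzle.
# Nothing changes except through the agent's actions, so the agent's world
# model can never fall out of sync with the environment.

class EightPuzzle:
    """3x3 sliding-tile puzzle; 0 marks the blank tile."""

    MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}

    def __init__(self, tiles):
        self.tiles = list(tiles)  # flat list of 9 ints, 0..8

    def legal_moves(self):
        blank = self.tiles.index(0)
        row, col = divmod(blank, 3)
        moves = []
        if row > 0: moves.append("up")
        if row < 2: moves.append("down")
        if col > 0: moves.append("left")
        if col < 2: moves.append("right")
        return moves

    def apply(self, move):
        # The ONLY way the environment changes: an agent action.
        blank = self.tiles.index(0)
        target = blank + self.MOVES[move]
        self.tiles[blank], self.tiles[target] = self.tiles[target], self.tiles[blank]

p = EightPuzzle([1, 2, 3, 4, 0, 5, 6, 7, 8])
p.apply("left")  # slide tile 4 into the blank
```

Because no exogenous events exist, the designer can reason about the agent's plans purely in terms of this transition function.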
Although a static environment is ideal for an agent to navigate, the actual world is not at all static; the goal of many such projects is to create an agent that can navigate the real world, which is usually dynamic.
If the ultimate design goal of an architecture is to create an agent that operates in a variety of real-world environments, it is necessary to include mechanisms that allow the agent to operate in a dynamic environment: one that changes over time independent of the actions of the agent. Certainly there are real-world environments that are static rather than dynamic, but these are usually controlled situations, limited in size and scope, and thus not representative of the full range of environments in which we might like to employ a generally intelligent agent. Furthermore, there may be dynamic simulated environments in which an intelligent agent could be put to good use. For instance, an intelligent agent could be used to direct dynamic truck routing from a central location.
Traditional planning systems have had trouble dealing with dynamic environments. In particular, issues such as truth maintenance in the agent's symbolic world model and replanning in response to changes in the environment must be addressed. These capabilities have been incorporated into several planning-type architectures, but reactivity is often sacrificed due to the complexity of integrating detailed sensory data with a world model. One approach to this problem is to eliminate the planning component altogether, as is done in subsumption-type architectures.
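The replanning pattern described above can be sketched in a few lines: a sense-plan-act loop that detects when the sensed world diverges from the agent's model and replans from the updated model. The toy world (an integer position that an exogenous event knocks backward once) and all function names are invented for illustration.

```python
# Illustrative sense-plan-act loop with replanning for a dynamic environment.

def control_loop(model, goal, sense, execute, plan):
    steps = plan(model, goal)          # initial plan from the world model
    replans = 0
    while model != goal:
        observed = sense()
        if observed != model:          # the world changed independently
            model = observed           # truth maintenance, in miniature
            steps = plan(model, goal)  # replan against the updated model
            replans += 1
        model = execute(steps.pop(0))
    return replans

# Toy world: an integer position; the environment knocks the agent back once.
world = {"pos": 0, "tick": 0}

def sense():
    world["tick"] += 1
    if world["tick"] == 3:             # an exogenous event: backward drift
        world["pos"] -= 2
    return world["pos"]

def execute(step):
    world["pos"] += 1                  # each step moves one unit toward goal
    return world["pos"]

def plan(model, goal):
    return ["inc"] * (goal - model)    # trivial planner: one step per unit

replans = control_loop(0, 5, sense, execute, plan)
```

A purely reactive (subsumption-style) design would instead fold the `sense`/`execute` coupling directly into behaviors and dispense with `plan` entirely.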
The Environmental Consistency Hypothesis, as proposed by the designers of Prodigy, posits that the environment can be assumed to change much more slowly (if at all) than the reasoning and learning mechanisms operate. In this sense, the change at issue is not so much a change in state, as in dynamic environments, but rather a change in the underlying principles that drive the environment.
Although this is true for some environments, adopting this hypothesis as part of the framework of the architecture may have serious implications (including complete loss of functionality) in environments whose properties do change at about the same speed as the mechanisms of deliberation.
Developing an architecture that acts in the real world requires a commitment to constructing agents that are capable of handling the multitude of uncertain events caused by its normally dynamic and unpredictable nature.
By operating in a simulated environment, an architecture is able to avoid dealing with such issues as real-time performance and unreliable sensors. A simulator can factor out uninteresting variables and allow the agent to focus on the critical issues of a task. Thus, the simulated environment can serve as a testbed for higher-level cognitive functions such as planning and learning, without real-world implementation issues getting in the way.
Operating in a simulated environment also offers the advantage that the agent may be exposed to a variety of different tasks and surroundings without an inordinate amount of development time. Thus, the same architecture can be applied to tasks involving space exploration and undersea diving without developing the hardware needed to transport the agent to either location.
Agents that operate in the real world are normally designed to meet different criteria than those that operate in simulated environments. Agents that operate in the real world require robust perception mechanisms and are often faced with dynamic and unpredictable environments and a higher degree of complexity than they might encounter otherwise.
The real world provides the agent with numerous potential challenges. The agent's sensors and effectors may be imperfect, it may be required to produce new plans based on updated information very rapidly, and it might have to reason about the temporal aspects of its plans.
All of these problems are avoided by the use of simulators, which frees researchers to focus on higher-level cognitive functions such as learning and planning. However, it may be that the solutions to these lower-level problems need to arise from within the architecture rather than from outside of it, which would have a profound impact on the ultimate architecture design. If this is indeed the case then ignoring these issues is ultimately a disservice to the potential growth of the architecture.
By choosing to address the issues inherent in acting in the real world, it is also possible to gain insight into how these issues interact with each other and into the effect that increased knowledge (provided by high-level cognitive capabilities) can have on their solutions.
Both real and simulated environments can be very complex. Complexity in this case includes both the enormous amount of information that the environment contains and the enormous amount of input the environment can send to an agent. In both cases, the agent must have a way of managing this complexity. Often such considerations lead to the development of sensing strategies and attentional mechanisms so that the agent may more readily focus its efforts in such rich environments.
Many real and simulated environments are rich in detail and other information. The ability to incrementally add knowledge without significant slowdown is an important functionality for agents in such environments.
The richness and diversity of information can be difficult or impossible to capture during development, so learning is frequently employed to capture domain knowledge as the agent experiences its environment.
Sometimes a domain presents more perceptual information than an agent can even observe, let alone process intelligently. Additionally, it is important that such a (possibly continual) influx of perceptual data not overwhelm the agent and thus cause a degradation in its reactivity. However, it must respond to relevant information; otherwise, it may behave irrationally. Such considerations have driven the development of architectural mechanisms such as selective attention to manage this environmental complexity.
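A minimal selective-attention mechanism of the kind described above can be sketched as a filter that keeps only the few most salient percepts for deliberation, so the influx of perceptual data never overwhelms the agent. The percepts, salience scores, and capacity here are illustrative assumptions.

```python
# Selective attention in miniature: from a flood of percepts, attend to only
# the `capacity` most salient ones and ignore the rest.

import heapq

def attend(percepts, capacity=3):
    """Return the `capacity` most salient percepts, most salient first."""
    return heapq.nlargest(capacity, percepts, key=lambda p: p["salience"])

percepts = [
    {"what": "wall texture",      "salience": 0.10},
    {"what": "obstacle ahead",    "salience": 0.90},
    {"what": "distant noise",     "salience": 0.30},
    {"what": "low battery alarm", "salience": 0.80},
    {"what": "floor color",       "salience": 0.05},
]

focused = attend(percepts)
# focused keeps the obstacle, the alarm, and the noise; the rest are dropped
```

The key design property is that the cost of `attend` depends on the capacity, not on how rich the environment happens to be, which is what preserves reactivity.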
Environmental Effects on the Agent
A physical agent cannot possess infinite resources. There are bounds on memory and processing capabilities. The limited computational resources available to the agent directly influence the types of processing it can afford to do. Given these limitations, it may not always be possible to guarantee perfect rationality. However, it is desirable that the agent perform as well as it is capable, obeying some bounded rationality constraints.
Sometimes an agent knows all possibly relevant information about its domain. In this case, learning is not required for domain understanding, and the behavior of the system can be precoded as a function of its perceptions.
Associated with these environments is the closed world assumption, under which any fact not known to the agent can be taken to be false. This is similar to complete world knowledge, in that the agent knows everything that is true about its domain. This assumption greatly simplifies declarative representation tasks.
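The closed world assumption can be shown in miniature as a query over a fact base in which any fact not present is simply taken to be false; there is no "unknown" truth value. The facts below are invented for illustration.

```python
# Closed-world assumption: absence from the fact base means false.

facts = {
    ("door", "room1", "room2"),
    ("in", "agent", "room1"),
}

def holds(fact):
    # Under the CWA a query never answers "unknown": if the fact was never
    # asserted, it is assumed false.
    return fact in facts

holds(("in", "agent", "room1"))  # True: explicitly known
holds(("in", "agent", "room2"))  # False: never stated, so assumed false
```

This is what simplifies declarative representation: the agent need only store what is true, never enumerate what is false.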
Vere and Bickmore have suggested the (informal) 99% rule which relaxes the requirement for complete domain knowledge. In specifying parameter ranges for objects Homer may encounter in the course of its activities, they limit the range from the space of all possibilities to ranges which cover 99% of the possible cases. The presumption is that if the agent is correct 99% of the time, it will be performing acceptably and the outlying ranges can simply be ignored.
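The spirit of the 99% rule can be sketched numerically: rather than covering the full range of a parameter, keep the central interval that covers 99% of observed cases and accept that the outliers are ignored. The sample data (hypothetical "door width" observations) and the percentile scheme are illustrative, not taken from the Homer work.

```python
# Sketch of the informal "99% rule": truncate a parameter range to the
# interval covering 99% of cases, ignoring the outlying values.

import random

random.seed(0)
# Hypothetical observations of a parameter the agent might encounter.
samples = sorted(random.gauss(80, 10) for _ in range(10_000))

def coverage_range(sorted_samples, coverage=0.99):
    """Central interval covering `coverage` of the sorted samples."""
    n = len(sorted_samples)
    cut = int(n * (1 - coverage) / 2)   # samples trimmed from each tail
    return sorted_samples[cut], sorted_samples[n - 1 - cut]

lo, hi = coverage_range(samples)
# lo..hi is far narrower than the full min..max span, yet covers 99% of cases
```

The presumption, as above, is that acting correctly over `lo..hi` is acceptable performance, so the tails need never be represented.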
Dynamic environments can be unpredictable. This means that not only is the world changing, but it changes in a way that the agent cannot (fully) comprehend. This often occurs when an agent's representation of the world is incomplete (or non-existent). Because of this unpredictability, it may be desirable that the agent's processing be interruptible, so that it can handle unexpected and urgent contingencies.
A predictable environment is an environment for which an agent has an adequate (or perhaps complete) world model. For example, an agent with a sophisticated, first-principles model of Newtonian physics could predict with reasonable accuracy the results of throwing, with a known force, objects of known mass. However, since such models are computationally prohibitive, most agents must treat a dynamic world as unpredictable as well.
This assumption does not hold for agents that behave in simulated, dynamic worlds. Since those worlds can generally be predicted exactly (e.g., a grasp command always results in holding an object if the object is holdable and the agent is in the object's proximity), these dynamic environments can be considered predictable.
Domain events can occur asynchronously with respect to the agent. In such cases, if the agent is not constantly perceiving its world, events may go unnoticed, leading to seemingly irrational behavior. To avoid this, architectures often shift to a more parallel approach in terms of the sensing strategies used.
Multiple domain events can occur simultaneously. In such cases, it is important that the agent take actions appropriate to all relevant events. If it can only pay attention to some of the concurrent events, its rationality will suffer. Thus, many architectures use parallel methods in their sensing strategies.
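One way such parallel sensing strategies are commonly realized is with sensor threads that post asynchronous events to a shared queue, so events arriving while the agent is busy are buffered rather than missed. The sensor names and event payloads below are invented for illustration.

```python
# Sketch of parallel sensing: background sensor threads post events to a
# queue; the agent drains the queue on its own schedule, losing nothing.

import queue
import threading
import time

events = queue.Queue()

def sensor(name, readings):
    # Each sensor runs in its own thread and posts events as they occur.
    for r in readings:
        events.put((name, r))
        time.sleep(0.001)

threads = [
    threading.Thread(target=sensor, args=("sonar",  ["obstacle", "clear"])),
    threading.Thread(target=sensor, args=("camera", ["person"])),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The agent notices every event, even those that arrived concurrently.
noticed = []
while not events.empty():
    noticed.append(events.get())
```

A real architecture would drain the queue inside its deliberation cycle rather than after the sensors finish, but the buffering principle is the same.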
Not all of the events occurring in the environment demand the same level of attention from the agent. It is important that the agent pay greatest attention to the events of the highest priority, so as to maintain salient behavior.
Limited Response Time
An agent rarely has an unbounded amount of time to take actions in response to an environmental event. This limits the amount of processing required before taking an action, and usually also limits the amount of knowledge brought to bear. As a result, many architectures turn to either interruptible processing or situated action. However, the agent must still act as rationally as possible, given the time allowed, according to some bounded rationality constraint.
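The interruptible-processing option above is often realized as an anytime computation: deliberation can be cut off at a deadline and the best answer found so far is returned, satisfying a bounded rationality constraint. The toy task (scoring candidates by closeness to a target value) is a stand-in for real deliberation.

```python
# Sketch of bounded response time via an anytime search: stop deliberating
# at the deadline and act on the best result found so far.

import time

def anytime_improve(candidates, score, deadline):
    """Examine candidates until the deadline; return the best seen so far."""
    best, best_score = None, float("-inf")
    for c in candidates:
        if time.monotonic() >= deadline:
            break                      # time is up: act on what we have
        s = score(c)
        if s > best_score:
            best, best_score = c, s
    return best

# With a generous deadline, the full candidate set is examined.
best = anytime_improve(range(100), lambda c: -abs(c - 42),
                       deadline=time.monotonic() + 1.0)
```

With a tighter deadline the same call returns a possibly suboptimal candidate instead of blocking, which is exactly the trade the text describes.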
A domain may require an agent to perform many different types of tasks simultaneously in order to "survive". As a result, many architectures support multiple, simultaneous goals. When multiple tasks and goals may be considered simultaneously, additional concerns include behavioral coherence and saliency.
Humans in the environment may be constantly monitoring the agent's performance. Often, they would like explanations of an agent's decisions, either for verification or debugging purposes. More commonly, they are able to add new knowledge directly to the agent's database. To facilitate this type of input, architectures often adopt a uniform, declarative style of knowledge representation. At the furthest extreme, both explanation and knowledge addition can take place in some natural language.