Issues

From Cognitive Architecture Wiki

Although architectures may be characterized according to their properties, their capabilities, and the environments for which they are designed, additional questions arise about architectural properties (as distinct from agent or behavioral properties). For example, how will a particular architecture scale? Or, to what extent does the architecture bring relevant knowledge to bear on a particular problem? These questions usually cannot be answered with simple responses (such as "Yes, the architecture will scale" or "No, the architecture is not completely rational"); instead, each suggests a continuum of possibilities. This section of the document introduces each of these issues, which are then elaborated for individual architectures within the discussion of each architecture.

Generality

The generality of an architecture is a measure of the types of tasks and environments with which the architecture can successfully be used. Generality is a function of both versatility and taskability.

Versatility

The versatility of an architecture is a measure of the types of goals, methods, and behaviors the architecture supports in the environments and tasks for which it has been designed. That is, to what extent does it accomplish its goals in specified environments, and are its methods applicable across many different environments and tasks?

Rationality

The rationality of an architecture is a measure of consistency. That is, are the actions it performs always consistent with all of its knowledge and goals? Generally, if an agent would perform two different actions with the same knowledge in two identical (environmental) situations, it is not said to be fully rational. The issues concerning rationality in cognitive architectures are discussed more completely as the maximum rationality hypothesis. Additionally, because of limited resources, full rationality may not always be possible even when an agent has the general capability to act rationally. This is known as bounded rationality.

Scalability

An architecture is considered scalable if, without direct change to its underlying mechanisms, it can handle increasingly complex problems that demand greater amounts of knowledge. Scaling up to larger problems often exposes problems with efficiency and extended operation that had not previously been fully considered.

Reactivity

The reactivity of a system, in the simplest terms, is its ability to respond to changes in a possibly unpredictable environment; one of the goals of the systems discussed here is to react in real time in highly dynamic environments. Some systems use learning to complement reactivity, with the hope of decreasing reaction time. See also processing time with deliberation and efficiency.

Efficiency

The efficiency of an architecture is a measure of its ability to perform a task within given time and space constraints. Examining efficiency also reveals the bottlenecks that may arise in a system performing certain tasks. Most of the architectures considered here are used with agents that exist in the real world, so most of the actions performed by these agents must be guaranteed to complete in real time.

Ability to Add Knowledge

This capability of an architecture is strictly concerned with the addition of knowledge to the system by outside means (such as direct programming by a designer or user). It is thus a separate issue from the ability to learn, which is more of an autonomous action, or at least one that occurs within the agent itself. In considering the ability to add new knowledge, one may ask whether the task is very easy (e.g., adding new data to a store of knowledge) or very difficult (requiring fundamental changes to the architecture to incorporate the new knowledge). In most cases, the answer depends on both the architecture itself and the type of knowledge to be incorporated.
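
The easy end of this spectrum can be made concrete with a short sketch. The KnowledgeBase class and its methods below are hypothetical, not taken from any architecture discussed on this wiki; they illustrate a design in which new knowledge is simply new data, so a designer can add it without touching the underlying mechanisms.

  # A minimal sketch: knowledge addition as pure data entry.
  # All names here are illustrative, not from a real architecture.
  class KnowledgeBase:
      def __init__(self):
          self.facts = set()
          self.rules = []          # (condition, action) pairs

      def add_fact(self, fact):
          # Easy case: new knowledge is appended to a declarative
          # store; no architectural mechanism changes.
          self.facts.add(fact)

      def add_rule(self, condition, action):
          self.rules.append((condition, action))

  kb = KnowledgeBase()
  kb.add_fact(("on", "A", "B"))
  kb.add_rule(lambda facts: ("on", "A", "B") in facts,
              "unstack A from B")
  # The hard case -- knowledge whose use requires new matching or
  # control machinery -- cannot be expressed this way and forces
  # changes to the architecture itself.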

Ability to Learn

Learning is a capability that is important, almost central, to the majority of cognitive architectures considered in this document. Yet learning in these systems is by no means as robust or diverse as human learning; thus, while learning is often a capability of an architecture, questions remain about the scope and power of the learning mechanism(s) involved.

Taskability

The taskability of an architecture is its ability to perform different tasks based on external commands. For instance, can the architecture be asked to do various tasks without having to be reprogrammed or rewired?

Extended Operation

In many cases, a given system may perform acceptably or even exceptionally over a short period of time. However, many of the agents and architectures discussed in this document are being developed for environments in which much longer running times are necessary to complete a task (e.g., planetary expedition vehicles), or in which an agent, once initiated, will shift continuously from task to task (e.g., a robot on a twenty-four-hour assembly line). In such cases the question of how the architecture behaves over longer time scales becomes increasingly central to a true measure of its effectiveness.


Questions for extended operation include:

  • What is the mean time between failures (MTBF, defined after this list) for the system? Are failures hardware- or software-limited?
  • Does performance degrade over time? Almost as importantly, is any performance decrease sub-linear, linear, or super-linear with respect to increasing operation time?
  • What happens to the world model over time? Specifically, are previous world models and the experiences within them learned and penetrable (such as a form of episodic knowledge), learned but impenetrable (such as some skill acquisition as procedural knowledge), or discarded without learning or recall of the event?
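
For reference, MTBF has a standard, architecture-independent definition:

  \mathrm{MTBF} = \frac{\text{total operating time}}{\text{number of failures}}

so, for example, a rover that operates for 600 hours and fails three times has an MTBF of 200 hours.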

Modularity (Software Engineering)

Because the environment is diverse and functions are specialized, not all functions need necessarily be engaged for a specific task within an architecture. For example, humans and robots alike use different components and functions when walking than when solving problems. The discipline of software engineering suggests that systems whose components are designed according to function are easier to design, build, and maintain. This functional decomposition leads to a modular organization of the resulting system.
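
A minimal sketch of such a functional decomposition follows; the module names (Locomotion, ProblemSolver, Agent) are illustrative only and are not drawn from any architecture discussed on this wiki.

  # Each module owns one function; the agent engages only the module
  # the current task requires, so the others can be built, tested,
  # and maintained independently.
  class Locomotion:
      def step(self, direction):
          print(f"stepping toward {direction}")

  class ProblemSolver:
      def solve(self, goal):
          print(f"searching for a plan to achieve {goal}")
          return ["plan-step-1", "plan-step-2"]

  class Agent:
      def __init__(self):
          self.legs = Locomotion()
          self.planner = ProblemSolver()

      def walk_to(self, place):
          self.legs.step(place)            # the problem solver stays idle

      def plan_for(self, goal):
          return self.planner.solve(goal)  # locomotion stays idle

  agent = Agent()
  agent.walk_to("the charging station")
  agent.plan_for("stack the blocks")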

Utility Problem

The Utility Problem arises in explanation-based learning systems when the method used to determine the usefulness of learned rules (the operationality criterion) is unrealistic. This is the case in most current systems, since no techniques have been developed to make the operationality criterion as sophisticated as the environments to which EBL has been applied. Rules are generally learned too frequently, and thus learning may actually slow down the system. Carbonell, et al. (1991) identify three factors that may contribute to this degradation in performance (a sketch combining them into a single utility estimate follows the list):

  • Low Application Frequency: The rule may be over-specific and thus applied too infrequently to be useful.
  • High Match Cost: The cost of matching a rule -- especially those which represent a long sequence of operations -- may be prohibitively expensive (e.g., the matching problem for pre-condition lists is NP-complete).
  • Low Benefit: The rule may have only marginal utility in the problem domain.
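
The three factors can be folded into a single estimate. The sketch below follows the general shape of Minton's (1988) utility metric, in which utility is average savings times application frequency, minus match cost; the function and variable names here are ours, and real systems must estimate these quantities empirically.

  # A sketch of a rule-utility estimate; names and example numbers
  # are illustrative.
  def rule_utility(avg_savings, application_freq, avg_match_cost):
      """Expected net benefit of keeping a learned rule.

      avg_savings      -- average search time saved per application
      application_freq -- fraction of problems where the rule applies
      avg_match_cost   -- average cost of testing the rule's conditions
                          on every problem, whether or not it fires
      """
      return avg_savings * application_freq - avg_match_cost

  # Each degradation factor above drives the estimate negative:
  rule_utility(50.0, 0.01, 2.0)   # low application frequency -> -1.5
  rule_utility(50.0, 0.30, 20.0)  # high match cost           -> -5.0
  rule_utility(1.0,  0.30, 2.0)   # low benefit               -> -1.7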

Frame Problem

The frame problem can be described as the task facing any agent acting in a dynamic environment: keeping its model of the world, and its knowledge in general, in synchrony with the world. In the case of detecting changes and asserting them perceptually, the problem is logically trivial. However, the effect of these changes on derivational knowledge and on the state of goals and strategies is non-trivial and not well understood.


Suppose a robot has the task of stacking blocks and, after it has partly completed the task, the blocks are toppled. In this case a strategy of backtracking may be sufficient, but not all cases are this straightforward. Many state-dependent decisions may have been invalidated by the changes to the world, yet not be easily detected, since the state in which a decision was made could be far in the past.


STRIPS and STRIPS-like systems approach this problem by building goal and state trees that attempt to track the dependencies of knowledge on other knowledge, so that when a particular piece of information is invalidated, everything that depends on it can be efficiently and completely traced and adjusted. However, this addresses only the knowledge-maintenance aspect of the problem. What to do next may or may not still be clear; if it is not, a strategy for reanalyzing goals must be present.
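
The bookkeeping described above can be sketched as a simple dependency graph. This is a generic illustration of dependency tracing, not STRIPS itself, and all names in it are hypothetical.

  # When a fact is invalidated, everything derived from it is traced
  # and retracted transitively.
  class DependencyGraph:
      def __init__(self):
          self.supports = {}   # fact -> set of facts derived from it

      def derive(self, fact, premises):
          for p in premises:
              self.supports.setdefault(p, set()).add(fact)

      def invalidate(self, fact):
          """Retract a fact and, transitively, everything resting on it."""
          retracted, frontier = set(), [fact]
          while frontier:
              f = frontier.pop()
              if f not in retracted:
                  retracted.add(f)
                  frontier.extend(self.supports.get(f, ()))
          return retracted

  g = DependencyGraph()
  g.derive("clear(B)", ["on(A,table)"])        # B became clear when A moved
  g.derive("plan: stack(C,B)", ["clear(B)"])
  # The tower topples, so on(A,table) no longer holds:
  g.invalidate("on(A,table)")
  # -> {"on(A,table)", "clear(B)", "plan: stack(C,B)"}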

Psychological Validity

This issue is an attempt to address the question: does the architecture make any attempt to model aspects of human behavior? The answer is not always easy or straightforward. Certainly some research in cognitive architectures is concerned with modeling the methods by which humans solve problems; an example is the Teton architecture. Another approach is to develop architectures that behave intelligently without regard to the psychological plausibility of the method by which the behavior is achieved. Yet another approach is to claim that intelligence cannot be achieved without first modeling the architecture of the brain, and then determining the methods that will produce the desired behavior.


The Einstellung Effect is an example of a piece of human data that some cognitive architects point to in order to claim some validity for their own architectures. It is the observation that once people find a solution to a problem, they tend to stick with it, even if a better method is available.


The Power Law of Learning is another example. With more practice at a task, people continue to get faster; however, the rate of improvement decreases with the amount of practice one has had.
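
In its usual form (the symbols here are the standard ones, not taken from this wiki), the law states that the time T to perform a task on the Nth practice trial falls off as a power of N:

  T(N) = a \cdot N^{-b}, \qquad a, b > 0

where a is the time on the first trial and b governs how quickly improvement slows. For example, with a = 10 s and b = 0.5, trial 1 takes 10 s, trial 4 takes 5 s, and trial 100 takes 1 s: practice keeps helping, but by ever smaller amounts.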
