Capabilities

From Cognitive Architecture Wiki

Capabilities related to Learning

Single Learning Method

A system is said to learn if it is capable of acquiring new knowledge from its environment. Learning may also enable a system to perform new tasks without having to be redesigned or reprogrammed, especially when accompanied by generalization. Learning is most often accomplished in a system that supports symbolic abstraction, though such a property is not exclusive (reinforcement strategies, for example, do not necessarily require symbolic representation). This type of learning is distinct from the acquisition of knowledge through direct programming by the designer.

Architectures with this capability

Multi-Method Learning

As a capability, learning is often thought of as one of the necessary conditions for intelligence in an agent. Some systems extend this requirement by including a plethora of learning mechanisms, either to obtain as much as possible from the system or to allow the various components of the system to learn in their own ways (depending on the modularity, representation, etc., of each). Additionally, multiple methods may be included in a system in order to gauge the performance of one method against that of another.

Architectures with this capability

Caching

Caching can be seen as rote learning, but also as a form of explanation-based learning. It simply stores a computed value so that it does not have to be computed again in the future. Caching vastly reduces the high cost of relying on meta-knowledge and of the retrieval and application that meta-knowledge requires.
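
A minimal Python sketch of the idea (the function names and the toy cost computation are purely illustrative, not taken from any particular architecture):

  # Minimal sketch of caching as rote learning: store a computed value the first
  # time it is derived and reuse it on later requests. The "costly" computation
  # below is a stand-in for any expensive reasoning step.
  cache = {}

  def costly_inference(state):
      # Stand-in for an expensive derivation, e.g. applying meta-knowledge.
      return sum(ord(c) for c in state) % 97

  def cached_inference(state):
      if state not in cache:           # compute only on a cache miss
          cache[state] = costly_inference(state)
      return cache[state]              # later calls simply retrieve the stored value

  print(cached_inference("door-open"))   # computed
  print(cached_inference("door-open"))   # retrieved from the cache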

Architectures with this capability

Learning by Instruction

An agent that is given information about the environment, domain knowledge, or how to accomplish a particular task on-line (that is, in real time, as opposed to off-line programming) is said to learn from instruction. Some instruction is completely uni-directional: a teacher simply gives the agent the knowledge in a sequential series of instructions. Other instruction is interactive: the teacher is prepared to instruct the agent when the agent lacks knowledge and requests it. The latter method supports experiential learning in which a teacher may act both as a guide (when called upon) and as an authority (when the agent is placing itself in danger or making a critical mistake).
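
The following Python sketch illustrates the interactive case; the Teacher and Agent classes and the door example are hypothetical and only meant to show the request-instruct-retain cycle:

  # Sketch of interactive instruction: when the agent lacks knowledge for the
  # current goal it asks a teacher, stores the answer on-line, and proceeds.
  class Teacher:
      def __init__(self, lessons):
          self.lessons = lessons                  # goal -> instructed action

      def instruct(self, goal):
          return self.lessons.get(goal)

  class Agent:
      def __init__(self, teacher):
          self.knowledge = {}                     # knowledge acquired on-line
          self.teacher = teacher

      def act(self, goal):
          if goal not in self.knowledge:          # knowledge gap: request instruction
              self.knowledge[goal] = self.teacher.instruct(goal)
          return self.knowledge[goal]             # later uses need no further instruction

  agent = Agent(Teacher({"open-door": "turn handle, then push"}))
  print(agent.act("open-door"))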

Architectures with this capability

Learning from Experimentation

Learning from experimentation, also called discovery, involves the use of domain knowledge, along with observations made about the environment, to extend and refine an agent's domain knowledge. The more systematically an agent manipulates its environment to determine new information, the more its behavior resembles traditional scientific experimental paradigms. However, the agent's actions need not be so deliberately planned in order to produce new behavior.

Architectures with this capability

Learning by Analogy

Reasoning by analogy generally involves abstracting details from a particular set of problems and resolving structural similarities between previously distinct problems. Analogical reasoning refers to this process of recognition and the subsequent application of the solution from the known problem to the new problem. Such a technique is often identified with case-based reasoning. Analogical learning generally involves developing a set of mappings between features of two instances. Paul Thagard and Keith Holyoak have developed a computational theory of analogical reasoning that is consistent with the outline above, as long as abstraction rules are supplied to the model.
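
A small Python sketch of such a feature mapping follows; the relation names and the solar-system/atom example are illustrative only and do not reproduce the Thagard and Holyoak model:

  # Sketch of analogical mapping: align the roles of a known (base) problem with
  # those of a new (target) problem through the relations they share.
  def map_analogy(base, target):
      """Return role-to-role correspondences for relations present in both."""
      mapping = {}
      for relation, base_args in base.items():
          target_args = target.get(relation)
          if target_args:                          # same relation appears in both
              for b, t in zip(base_args, target_args):
                  mapping[b] = t
      return mapping

  base   = {"attracts": ("sun", "planet"), "revolves-around": ("planet", "sun")}
  target = {"attracts": ("nucleus", "electron"), "revolves-around": ("electron", "nucleus")}
  print(map_analogy(base, target))   # {'sun': 'nucleus', 'planet': 'electron'}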

Architectures with this capability

Inductive Learning and Concept Acquisition

In contrast to abstraction, concept acquisition refers to the ability of an agent to identify the discriminating properties of objects in the world, to generate labels for those objects, and to use the labels in the condition lists of operators, thereby associating operations with the concept.


Concept acquisition normally proceeds from a set of positive and negative instances of some concept (or group of segregated concepts). As the instances are presented, the underlying algorithm makes correlations between the features of the instances and their classification. The problem with this technique as described here is that it requires the specification of both the relevant features and the possible concepts.


In general, as an inductive technique, concept acquisition should be able to generate new concepts spontaneously and to recognize the relevant features over the entire input domain.
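
A minimal Python sketch of induction over labelled instances (the feature vocabulary and the instances are invented, and the relevant features are still specified in advance, as noted above):

  # Sketch of concept acquisition: keep the features shared by every positive
  # instance and absent from every negative instance as the learned concept.
  def learn_concept(positives, negatives):
      common = set.intersection(*positives)        # features common to all positives
      excluded = set.union(*negatives) if negatives else set()
      return common - excluded                     # discriminating features

  positives = [{"red", "round", "stackable"}, {"red", "oval", "stackable"}]
  negatives = [{"red", "square"}, {"blue", "round"}]
  print(learn_concept(positives, negatives))       # {'stackable'}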

Architectures with this capability

Learning from Abstraction

Contrasted with concept acquisition, abstraction is the ability to detect the relevant, or critical, information and action for a particular problem. Abstraction is often used in planning and problem solving to form condition lists for operators that lead from one complex state to another, based on the criticality of each precondition.


For instance, in an office environment, a robot with a master key can effectively ignore doors if it knows how to open doors in general. Thus, the problem of considering doors in a larger plan may be abstracted from the problem solving. This can be performed by the agent repeatedly to obtain the most general result. Some architectures limit abstraction to avoid the problem of over-generalization, resulting in mistaken applications of the erroneously abstracted operator.
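
The office example can be sketched in Python as dropping preconditions below a criticality threshold; the operator, conditions, and criticality values are illustrative only:

  # Sketch of abstraction over an operator's condition list: keep only the
  # preconditions whose criticality meets a threshold, yielding a more abstract
  # operator suitable for high-level planning.
  def abstract_operator(preconditions, threshold):
      """preconditions maps each condition to a criticality (higher = more critical)."""
      return {c for c, criticality in preconditions.items() if criticality >= threshold}

  go_through_door = {"door-exists": 3, "robot-at-door": 2, "door-unlocked": 1}
  # With a master key, 'door-unlocked' is no longer critical and is abstracted away.
  print(abstract_operator(go_through_door, threshold=2))   # {'door-exists', 'robot-at-door'}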

Architectures with this capability

Explanation-Based Learning

When an agent can utilize a worked example of a problem as the basis for a problem-solving method, the agent is said to have the capability of explanation-based learning (EBL). This is a type of analytic learning. The advantage of explanation-based learning is that, as a deductive mechanism, it requires only a single training example (inductive learning methods often require many training examples). However, to make use of just a single example, most EBL algorithms require all of the following:

  • The training example
  • A Goal Concept
  • An Operationality Criterion
  • A Domain Theory


From the training example, the EBL algorithm computes a generalization of the example that is consistent with the goal concept and that meets the operationality criterion (a description of the appropriate form of the final concept). One criticism of EBL is that the required domain theory needs to be complete and consistent. Additionally, the utility of the learned information becomes an issue when learning proceeds indiscriminately. Other forms of learning based on EBL are knowledge compilation, caching, and macro-operators.
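
A very reduced Python sketch of the generalization step follows; the domain theory, goal concept, and operationality criterion are invented, and the real EBL step of regressing the proof over the training example is omitted:

  # Sketch of explanation-based generalization: expand the goal concept through
  # the domain theory down to operational conditions; those conditions become the
  # learned rule, justified by a single worked (training) example.
  domain_theory = {
      "safe-to-stack(x,y)": ["lighter(x,y)"],
      "lighter(x,y)": ["weight(x,small)", "weight(y,large)"],
  }
  operational = {"weight(x,small)", "weight(y,large)"}    # operationality criterion

  def explain(goal):
      if goal in operational:
          return [goal]
      leaves = []
      for condition in domain_theory.get(goal, []):
          leaves += explain(condition)
      return leaves

  # Learned rule: safe-to-stack(x,y) <- weight(x,small), weight(y,large)
  print("safe-to-stack(x,y) <-", ", ".join(explain("safe-to-stack(x,y)")))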

Architectures with this capability

Transfer of Learning

Transfer of learning is a capability that arises from generalization and is related to learning by analogy. Learned information can be applied to other problem instances and possibly even to other domains. Three specific types of learning transfer are normally identified:

  • Within-Trial: Learning applies immediately to the current situation.
  • Within-Task: Learning is general enough that it may apply to different problem instances in the same domain.
  • Across-Task: Learning applies to different domains. Examples here include some types of concept acquisition in which a concept learned in one domain (e.g., blocks) can be related to other concepts (e.g., bricks) through similarities (e.g., stackable). Across-task learning is then strongly analogical.

Architectures with this capability

Capabilities related to Planning and Problem Solving

Planning

Planning is arguably one of the most important capabilities for an intelligent agent to possess. In almost all cases, the tasks which these agents must carry out are expressed as goals to be achieved; the agent must then develop a series of actions designed to achieve this goal.


The ability to plan is closely linked to the agent's representation of the world. Effective planning seems to require (1) that knowledge of the world be available to the planner; and, since most worlds of interest are reasonably complex, this is a strong motivation for implementing (2) a symbolic representation of that knowledge. Typically, this knowledge contains information about the possible actions in the world, which the planner then uses to construct a sequence of actions.


Planning itself is a prerequisite for several other capabilities that are often instantiated in intelligent agents. Certainly, problem solving relies heavily on planning, as most approaches to problem solving consist of incremental movements toward a solution; planning is integral to assembling these steps. Learning and planning have a reciprocal relationship wherein planning creates a new method for carrying out a task, which can then be learned for future use by the planner.
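
As an illustration, the following Python sketch searches forward over STRIPS-like actions (preconditions and added effects over a set of facts) for a sequence that achieves the goal; the domain is invented for the example and delete effects are omitted for brevity:

  # Sketch of a forward state-space planner over a toy domain.
  from collections import deque

  actions = {
      "pick-up-key": ({"at-desk"}, {"has-key"}),            # (preconditions, effects)
      "go-to-door":  ({"at-desk"}, {"at-door"}),
      "open-door":   ({"at-door", "has-key"}, {"door-open"}),
  }

  def plan(initial, goal):
      frontier = deque([(frozenset(initial), [])])
      seen = {frozenset(initial)}
      while frontier:
          state, steps = frontier.popleft()
          if goal <= state:                                  # all goal facts achieved
              return steps
          for name, (pre, add) in actions.items():
              if pre <= state:                               # action applicable
                  successor = frozenset(state | add)
                  if successor not in seen:
                      seen.add(successor)
                      frontier.append((successor, steps + [name]))
      return None

  print(plan({"at-desk"}, {"door-open"}))   # ['pick-up-key', 'go-to-door', 'open-door']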

Architectures with this capability

Problem Solving

It may seem that all agents must solve problems, as indeed they must, but problem solving in the technical sense is the ability to consider and attain goals in particular domains using domain-independent techniques (such as the weak methods) as well as domain knowledge. Problem Solving includes the capability to acquire and reason about knowledge, although the level to which such capability is supported differs between architectures. Problem solving, especially human problem solving, has been characterized as deliberate movement through a problem space. A problem space defines the states that are possible for a particular problem instance and the operators available to transform one state to another. In this formulation, problem solving is search through the state space by applying operators until a recognizable goal state is reached.
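
The formulation can be sketched directly in Python: a problem space supplies an operator function and a goal test, and problem solving is search from the initial state. The two-jug puzzle used here is only an illustration:

  # Sketch of problem-space search: apply operators to states until a goal state
  # is recognized. States are (contents of a 4-litre jug, contents of a 3-litre jug).
  from collections import deque

  def operators(state):
      a, b = state
      pour_ab, pour_ba = min(a, 3 - b), min(b, 4 - a)
      return {(4, b), (a, 3), (0, b), (a, 0),               # fill or empty either jug
              (a - pour_ab, b + pour_ab),                    # pour the 4-litre into the 3-litre
              (a + pour_ba, b - pour_ba)}                    # pour the 3-litre into the 4-litre

  def solve(start, is_goal):
      frontier, seen = deque([(start, [start])]), {start}
      while frontier:
          state, path = frontier.popleft()
          if is_goal(state):
              return path                                    # sequence of states to the goal
          for successor in operators(state):
              if successor not in seen:
                  seen.add(successor)
                  frontier.append((successor, path + [successor]))
      return None

  print(solve((0, 0), lambda s: s[0] == 2))   # leaves exactly 2 litres in the larger jug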

Architectures with this capability

Replanning

Intelligent agents operating in dynamic environments often find it necessary to modify or completely rebuild plans in response to changes in their environment. There are several situations in which an agent should replan.


An intelligent agent should update its plan when it learns new information which helps it accomplish its current goal more quickly. For instance, it may be the case that in the process of satisfying one goal the agent also satisfies one or more of its other goals. The agent should recognize when it has already satisfied a goal and change its plan accordingly.


In addition, an agent should replan when the facts about the world upon which its current plan is based change. This is important when, in the act of achieving one goal, the agent undoes another. The agent must realize this and update its plan to satisfy both goals.


Replanning is a capability that arises from other capabilities, namely planning and interruptibility.
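
The following Python sketch shows the control loop only; the Step structure and the planner, sensing, and execution functions it relies on are placeholders for whatever the architecture provides:

  # Sketch of execution with replanning: before each step, check whether the goal
  # has already been satisfied or whether the next step's assumptions no longer
  # hold; in the latter case the plan is rebuilt from the current state.
  from collections import namedtuple

  Step = namedtuple("Step", ["name", "preconditions", "effects"])

  def execute_with_replanning(state, goal, make_plan, apply_step, sense):
      plan = []
      while not goal <= state:
          state = sense(state)                     # fold in changes to the world
          if goal <= state:
              break                                # goal satisfied along the way: stop
          if not plan:
              plan = make_plan(state, goal)        # build (or rebuild) a plan
              if not plan:
                  return None                      # goal currently unreachable
          if plan[0].preconditions <= state:
              state = apply_step(state, plan.pop(0))
          else:
              plan = []                            # assumptions broken: force a replan
      return state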

Architectures with this capability

Support for Multiple, Simultaneous Goals

Taskable agents can support the achievement of externally specified top-level goals. Some of these agents can support the achievement of many top-level goals at once. This is usually done in conjunction with planning, so that the goals are sequenced in some rational way.

Architectures with this capability

Self Reflection

Systems which are capable of self reflection are able to examine their own internal processing mechanisms. They can use this capability to explain their behavior, and modify their processing methods to improve performance. Such systems must have some form of Meta-Knowledge available, and in addition, they must actively apply the Meta-Knowledge to some task. The list below explains the common uses of self reflection.

  • Learning: Many systems reflect upon traces of problem solutions and try to extract generalities from them to improve their problem solving strategies. Histories of past problem solutions can be collectively examined to find commonalities that can lead to case-based learning.
  • Performance Fine Tuning: Performance can be fine tuned by gathering statistics on the efficiency of various problem solving methods. These statistics are then examined to determine which problem solving methods are most efficient for certain classes of problems. This is closely related to the learning capability described above.
  • Explanation: Systems can use self reflection to explain their behavior to an outside observer. This action is often performed by examining traces of the problem solution and reporting key aspects of it.
  • Episodic Recall: Self Reflection can take the form of reporting past experiences to an outside observer. This is usually accomplished through some form of episodic memory, where experiences are stamped with indications of when they occurred.


There are several different mechanisms that can be included in an architecture to help facilitate self reflection. These are explained below.

  • "Glass Box" Knowledge Representation : If knowledge if uniformly represented and completely open to examination throughout the system then it is easier to add functionality which can examine this knowledge. Only one form of knowledge needs to be examined, and the knowledge itself is easily obtained. The other common approach to knowledge representation is called the "black box" approach, where knowledge is localized and hidden within the various modules of the system. This makes it difficult to extract the knowledge to be reflected upon, and may require the use of several different methods of looking at the knowledge once it is obtained.
  • Episodic Memory: Episodic memory is directly applicable to episodic recall. This type of memory is often costly, however, both in terms of space and time. As the agent's experiences grow the size of the memory space to store these experiences must grow as well. In addition, searching through past experiences for some specific detail is often too time consuming to be practical.
  • Problem Solving Traces: Problem solving traces are used by many of the learning mechanisms. In addition, they can be used to explain the behavior of the system. Problem solving traces are usually kept around only for a short period of time, and are often tied to the specific learning or explanation functions that use them. Once the system learns from them (or proceeds to another task) the trace is discarded. Key aspects of it may be saved in an episodic memory if the system has one (a minimal sketch of such a trace follows this list).
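
A minimal Python sketch of such a trace; the recorded steps and reasons are invented, and a real architecture would record far richer structure:

  # Sketch of a problem-solving trace used for reflection: record each decision
  # together with its justification, then scan the trace afterwards, e.g. to
  # produce an explanation for an outside observer.
  trace = []

  def record(step, reason):
      trace.append({"step": step, "reason": reason})

  def explain():
      return [f"{t['step']} because {t['reason']}" for t in trace]

  record("open-door", "door-open is a precondition of leave-room")
  record("leave-room", "selected to achieve the goal at-hallway")
  print("\n".join(explain()))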

Architectures with this capability

Meta-Reasoning

Reasoning about reasoning, or meta-reasoning, is a critical capability for agents attempting to display general intelligence. Generally intelligent agents must be capable of constantly improving skills, adapting to changes in the world, and learning new information. Meta-reasoning can be deployed implicitly through mechanisms such as domain-independent learning, or explicitly using, for example, declarative knowledge that the agent can interpret and manipulate. The domain-independent approaches seem the most successful so far.


Other aspects of meta-reasoning include the consideration of the computational costs of processing, leading to issues such as focused processing and real-time performance.

Architectures with this capability

Expert Systems Reasoning

An expert system is an artificial intelligence technique in which the knowledge needed to accomplish a particular task (or set of tasks) is encoded a priori from a human expert. An expert system typically consists of two pieces. The knowledge base represents the expert's domain knowledge and must be encoded as efficiently as possible because of its size; this representation often takes the form of rules. The reasoner exploits the knowledge in the rules in order to apply it to a particular problem. Expert systems often have an explanation facility as well.
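
The two pieces can be sketched in a few lines of Python; the rules and facts below form a toy diagnosis example and are not taken from any of the systems listed later in this section:

  # Sketch of an expert system: a rule-based knowledge base plus a reasoner that
  # forward-chains, firing rules until no new conclusions can be drawn.
  rules = [
      ({"fever", "cough"}, "flu-suspected"),                 # (conditions, conclusion)
      ({"flu-suspected", "short-of-breath"}, "refer-to-clinic"),
  ]

  def forward_chain(facts):
      facts = set(facts)
      changed = True
      while changed:
          changed = False
          for conditions, conclusion in rules:
              if conditions <= facts and conclusion not in facts:
                  facts.add(conclusion)                      # fire the rule
                  changed = True
      return facts

  print(forward_chain({"fever", "cough", "short-of-breath"}))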


Production systems are often used to realize expert systems. Expert systems also often lag the cutting edge of AI research since they are normally more application-oriented. Examples of implemented expert systems include:

  • MYCIN: Diagnosis of Infectious Diseases
  • MOLE: Disease Diagnosis
  • PROSPECTOR: Mineral Exploration Advice
  • DESIGN ADVISOR: Silicon Chip Design Advice
  • R1: Computer Configuration

Architectures with this capability

Inductive and Deductive Reasoning

Deductive reasoning can be described as reasoning of the form if A then B. Deduction is in some sense the direct application of knowledge in the production of new knowledge. However, this new knowledge does not represent any new semantic information: the rule already represents the knowledge as completely as the derived conclusion, since whenever the assertions (A) are true, the conclusion (B) is true as well. Purely deductive learning includes methods such as caching, building macro-operators, and explanation-based learning.


In contrast to this, inductive reasoning results in the addition of semantic information. There are a great many ways in which inductive inference has been characterized, but most are similar to those specified by the philosopher John Stuart Mill (1843). Basically, in this paradigm, positive instances of some phenomenon that share a common trait identify that trait as indicating some larger commonality. Similarly, negative instances that differ from the positive instances in some trait also indicate a crucial feature. This methodology is at the center of concept acquisition programs and plays a key role in many AI systems. In general, induction is more difficult than deduction, both because new semantic information is added and because the inferred concept may not be the correct one: in induction, true assertions do not necessarily lead to true conclusions.


Combinations of inductive and deductive reasoning are present in most cognitive architectures that utilize a symbolic world model, and are described in the individual architecture documents together with more specific capabilities such as planning and learning.

Architectures with this capability

Capabilities related to Interaction with the Environment

Prediction

Our use of the term prediction refers to an architecture's ability to predict what the state of the world is or might be, what things might happen in the outside world, and what might happen as a consequence of the agent's own actions. It should be clear that, for an architecture to be able to predict, it needs a fairly good and consistent model of the outside world. In fact, architectures with no such model are unable to do prediction.

Architectures with this capability

Query Answering and Providing Explanations for Decisions

Query answering is the ability to query the agent about things like past episodes ("Where were you last night?") or the current state of the world ("Are your fingernails clean?"). If not posed in natural language, some of these queries are quite simple to answer, provided the agent has episodic or state information immediately available. While a number of architecture discussions omit query answering, many have general problem-solving ability that could be applied in this direction.


It is also often desirable that an agent provide explanations of its actions. For instance, supervisors may monitor a system's performance, possibly because of the sensitivity of the domain, in which case the agent must provide justifications of its choices. More commonly, a system may make mistakes which must be corrected. Because of the complexity of most architectures, such debugging is difficult, if not intractable. A trace of the agent's processing, in the form of a decision explanation, would provide valuable information to the system designers in trying to amend the situation.

Architectures with this capability

Navigational Strategies

Agents constructed under the hypothesis of situated action often have rudimentary reactions built into the architecture. These built-in reactions give rise to the strategy that the agent will take under certain environmental conditions. Reactive agents, such as the Brooksian agents, have emergent navigational strategies. Other agents augment emergent strategies with a degree of explicit planning.

Architectures with this capability

Natural Language Understanding

Natural language understanding and generation abilities are required to communicate with other agents, particularly with people. Natural language understanding corresponds to receiving words from the outside world, and natural language generation corresponds to sending words to the external world, words that may be compiled from the agent's own internal deliberation.

Architectures with this capability

Perception

Perception refers to the extraction of knowledge from the environment (usually received in the form of signals). One characteristic of perception is that it may integrate sensory information from different modalities. For example, in humans the modalities of perception correspond to the five senses: taste, touch, sight, hearing, and smell.


Agents that sense the world and generate knowledge accessible to reasoning processes are said to perceive the world. Perception spans a continuum of behaviors, from the simplicity of a thermostat that merely measures the temperature, to the assumption used by some agents that objects containing all relevant information about things in the world are inserted directly into the agent's knowledge.


In this latter case, the amount of perceptual information at any one time may overwhelm the agent's processing abilities. One way to circumvent this problem in real domains is to include a system for focusing attention on relevant percepts. In this case, the architecture makes a deliberate decision to concentrate on particular environmental percepts and must be forced (perhaps by a high-priority stimulus) to move its attention elsewhere.


In addition to attentional mechanisms, perception may also be corrupted by faulty transducers or some other problem with accurately sensing the environment. In some cases, the architectures are then supplied with the ability to support and recover from inaccurate sensing.

Architectures with this capability

Support for Inaccurate Sensing

Sensors provide incomplete information, and the agent's picture of the world always lags behind the state of the external environment. Some agents account for this in the architecture, supporting inaccuracies and delays in sensing, while others make tacit or explicit assumptions (or requirements) that sensors be perfect.

Architectures with this capability

Robotic Tasks

Navigation, sensing, grabbing, picking up, putting down, and the host of Blocks World tasks can be considered robotic. Agents that attempt to solve problems in dynamic environments must support these capabilities.

Architectures with this capability

Capabilities related to Execution

Real-Time Execution

While speed is an issue in all architectures to varying degrees, the ability to guarantee real-time performance places a tighter restriction on the speed requirements of the system. Real-time performance means that the agent is guaranteed to behave within certain time constraints specified by the task and the environment. This is especially challenging in a dynamic environment because it imposes a very tight time constraint on performance. Perfect rationality is perhaps impossible to guarantee when operating under a real-time constraint, and thus some architectures satisfice with bounded rationality to achieve this goal.

Architectures with this capability

Focused Behavior and Processing/Selective Attention

The designers of most intelligent agents intend their agents to operate in complex, dynamic environments, usually the "real world" or some subset thereof. This, however, often causes significant practical problems: the real world provides a literally overwhelming amount of information to the agent; if the agent were to attempt to sense and process all this information, there would be very few computational resources remaining for other processes such as planning or learning.


One way in which this problem is overcome is by incorporating some form of focusing mechanism, whereby the agent determines what sort of information it needs to attack the current problem. It looks for and processes all relevant information it can, but (more or less) ignores other extraneous data. By focusing all of its processes only on the problem at hand, the combinatorial explosion of information from the world can be sidestepped.
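
A small Python sketch of such a focusing mechanism; the relevance score and the percepts are invented, and real architectures use far richer measures of relevance:

  # Sketch of selective attention: score incoming percepts against the current
  # goal and pass only the most relevant ones on to deliberation.
  def attend(percepts, goal_keywords, limit=3):
      def relevance(p):
          return sum(1 for w in goal_keywords if w in p)
      ranked = sorted(percepts, key=relevance, reverse=True)
      return [p for p in ranked[:limit] if relevance(p) > 0]

  percepts = ["door ahead is closed", "poster on the wall",
              "key on the desk", "hum of the air conditioning"]
  print(attend(percepts, goal_keywords=["door", "key"]))
  # ['door ahead is closed', 'key on the desk'] -- the rest is ignored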

Architectures with this capability

Goal Reconstruction

Goal reconstruction is the ability of an agent to exploit short-cuts to return to a problem where it last left off, even when the memory in which the problem was stored has since been used for other purposes. This capability is implicit in some architectures and explicit in others. Kurt VanLehn argues that goal reconstruction is critical to mimicking the human ability to quickly restart a problem after being indefinitely interrupted. Teton employs goal reconstruction explicitly, using two mechanisms in order to balance efficiency and speed with robustness.

Architectures with this capability

Responding Intelligently to Interrupts and Failures

The ability to respond intelligently to interrupts is extremely important for agents that must operate in a dynamic environment. In particular, interruptibility may be an important feature that supports reactivity, but neither property implies the other.


Architectures that tend to focus their attention on a particular activity, such as planning, at a particular moment in time may have some difficulty incorporating external, high-priority perceptions into their behavior patterns. An architecture could simply treat the new situation as a standard goal and handle it in the normal course of cognitive processing. But if the agent's success or survival depends on the timely handling of such a situation, it may be more appropriate to interrupt the current behavior according to the priority of the situation. This may simply involve stopping the current cognitive process in favor of a more important one. Of course, this raises the question of "clean up": whether the current process should be stopped immediately or put in a state such that the agent can return to it effectively after dealing with the interrupt.
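
A Python sketch of priority-based interruption follows; the Task structure, priorities, and task names are illustrative, and the "clean up" question is handled here simply by suspending the preempted task for later resumption:

  # Sketch of responding to interrupts: a stimulus preempts the current activity
  # only if its priority is higher; the preempted activity is suspended rather
  # than discarded so the agent can return to it afterwards.
  import heapq
  from dataclasses import dataclass, field

  @dataclass(order=True)
  class Task:
      priority: int
      name: str = field(compare=False)

  suspended = []                                   # preempted tasks, highest priority first

  def handle(current, stimulus):
      """Return the task the agent should work on once the stimulus arrives."""
      if stimulus.priority > current.priority:     # urgent enough to interrupt
          heapq.heappush(suspended, (-current.priority, current.name, current))
          return stimulus
      return current                               # otherwise carry on with the current task

  print(handle(Task(2, "refine-plan"), Task(5, "avoid-obstacle")).name)   # avoid-obstacle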

Architectures with this capability

Human-like Math Capability

Humans often solve arithmetic problems the "long way". The optimal bit-based methods of the computer are not natural and, as such, not employed by humans. Several psychological experiments have been performed showing that, not only are the arithmetic operations used by humans not optimal, but the long-hand algorithms can be suboptimal and sometimes inconsistent. Some humans classify problems before approaching them (even the classifications can be inconsistent) and use a personal method that varies consistently with the class of problem.


Kurt VanLehn argues that a non-Last-In, First-Out (LIFO) goal reconstruction technique can reproduce this behavior. An essential component to the reproduction of this behavior is that goals cannot be managed by a LIFO stack. VanLehn's Teton architecture was designed specifically to model these types of behaviors. Additionally, the Soar architecture has also been applied to the cognitively-plausible solution of math problems.

Architectures with this capability
