Properties

Agent properties identify and characterize the techniques and methods used to realize a particular architecture or architectural component. For example, most architectures include some sort of memory, and agent properties characterize that memory: Is the memory declarative, procedural, episodic? Are there size limitations? Is memory uniformly accessed? Is it uniformly organized? These properties have often been studied independently of integrated cognitive architectures, as part of artificial intelligence research. The links below briefly describe and define these architectural properties, mostly without direct reference to specific architectures.

Organization

Style of Control

An architecture's style of control defines how components in that architecture coordinate and cooperate during the operation of the system, for instance, how they share knowledge, how they react to changes in the environment or to changes to components themselves.

Architectures with this property

Forward and Backward Chaining

Forward-chaining and backward-chaining are properties of an architecture that refer to the maintenance of knowledge, while forward-planning and backward-planning refer to methods of planning, usually utilizing means-ends analysis. Forward-chaining implies that upon assertion of new knowledge, all relevant inductive and deductive rules are fired exhaustively, effectively making all knowledge about the current state explicit within the state. Forward-chaining may be regarded as progress from a known state (the original knowledge) towards a goal state. Backward-chaining means that no rules are fired upon assertion of new knowledge. When an unknown predicate about a known piece of knowledge is detected in an operator's condition list, all rules relevant to the knowledge in question are fired until the question is answered or until quiescence. Thus, backward-chaining systems normally work from a goal state back to the original state.


From this admittedly superficial description, it may seem that backward-chaining, since it saves computation, is superior to forward-chaining. However, since knowledge cascades, certain pieces of inductive knowledge can be missed. Additionally, the branching factor (the number of considerations at each state) may differ between forward and backward chaining, and thus also drive the choice of which method is more efficient. This trade-off between computation and assuredness of knowledge must be measured and decided by the architect of the agent.
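
To make the contrast concrete, here is a minimal sketch in Python; the toy rule format (a set of condition facts paired with a conclusion) is purely illustrative. Forward chaining fires every applicable rule as soon as knowledge is asserted, while backward chaining fires rules only on demand, working back from a queried goal.

  # Toy rules: (frozenset of condition facts, concluded fact). Illustrative only.
  RULES = [
      (frozenset({"rain"}), "wet_ground"),
      (frozenset({"wet_ground"}), "slippery"),
  ]

  def forward_chain(facts):
      """Fire all relevant rules exhaustively, making implied knowledge explicit."""
      facts = set(facts)
      changed = True
      while changed:                       # repeat until quiescence
          changed = False
          for conditions, conclusion in RULES:
              if conditions <= facts and conclusion not in facts:
                  facts.add(conclusion)
                  changed = True
      return facts

  def backward_chain(goal, facts):
      """Fire rules only on demand, working from the goal back to known facts."""
      if goal in facts:
          return True
      return any(all(backward_chain(c, facts) for c in conditions)
                 for conditions, conclusion in RULES if conclusion == goal)

  print(forward_chain({"rain"}))               # {'rain', 'wet_ground', 'slippery'}
  print(backward_chain("slippery", {"rain"}))  # True, derived only when asked

Note the trade-off described above: forward_chain pays the full derivation cost at assertion time, while backward_chain defers it to query time (and, with cyclic rule sets, would additionally need a loop check).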

Architectures with this property

Impasse-Driven Control

Cognitive architectures often use knowledge to modulate the particular system's style of control. However, when the knowledge necessary to make a decision is missing, there is necessarily indecision about what action to take next. This type of deadlock is known as an impasse. At the occurrence of an impasse, the architecture may crash (as computer architectures do when a divide-by-zero occurs), or it may be imbued with the ability to pursue the cause of the impasse. Different architectural mechanisms (or the same mechanisms in a different context) then take over control of the on-going processing. This passage of control to a particular process in order to resolve the impasse is known as impasse-driven control.
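
A minimal sketch of the idea, assuming a decision procedure over candidate operators; the names, the impasse kinds, and the trivial subgoal below are illustrative stand-ins, not a faithful model of any particular architecture's subgoaling (e.g., Soar's).

  # When the decision procedure cannot choose a unique next operator,
  # control passes to a subgoal whose task is to resolve the impasse.
  class Impasse(Exception):
      def __init__(self, kind, candidates):
          self.kind, self.candidates = kind, candidates

  def decide(candidates, preferences):
      viable = [op for op in candidates if preferences.get(op, 0) > 0]
      if len(viable) == 1:
          return viable[0]                 # knowledge suffices: no impasse
      raise Impasse("tie" if viable else "no-change", candidates)

  def run(candidates, preferences):
      try:
          return decide(candidates, preferences)
      except Impasse as imp:
          # Instead of crashing, pursue the cause of the impasse.
          return decide(imp.candidates, resolve_in_subgoal(imp, preferences))

  def resolve_in_subgoal(imp, preferences):
      # Placeholder subgoal: a real architecture would bring further
      # knowledge to bear here (and might learn the result).
      return {imp.candidates[0]: 1}

  print(run(["op-a", "op-b"], {}))         # impasse resolved in a subgoal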

Architectures with this property

Serial Processing

Serial processing is processing that occurs sequentially: there is an explicit order in which operations occur, and in general the results of one action are known before the next action is considered. Serial processing systems may mimic the action of parallel systems, albeit with a corresponding (and usually serious) loss in efficiency. Compare to parallel processing.

Architectures with this property

Parallel Processing

In parallel processing systems, many events may be considered and acted upon simultaneously. Since a variety of actions may be considered at once, coherence in behavior is an issue for parallel systems. A parallel system may be synchronous, in which there is an explicit parallel decision cycle, or asynchronous. In asynchronous systems, there is usually a set of independent components that act autonomously of one another; this makes coherence an even more difficult problem. A parallel architecture does not necessarily imply parallel processing; for instance, the human cognitive architecture is inherently serial at the cognitive level even though the biological band is explicitly parallel. However, there may be tremendous improvements in efficiency for some parallel processing strategies, compared to serial ones.

Architectures with this property

Asynchronous Processing

In modular architectures, the operation of the architecture can follow a set pattern, such as plan then execute, or the modules can operate independently of the other modules. This latter operation is a form of parallelism known as asynchronous control. In this case, the modules of the architecture interact only when passing information (perceived world knowledge, control plans, etc.) to one another.

Architectures with this property

Interruptible Processing

Interruption is the process by which an external event can trigger an agent to attend to that event. Interruption may be compared to polling: in polling systems, the agent spends a part of its execution cycle looking for new external events, but the events themselves cannot divert control immediately. Since interrupts may occur asynchronously, an interruptible agent must have a way to efficiently store the current behavior context in order to return to it after the interrupt has been serviced. Such structured return to a previous state is exemplified by goal reconstruction, just one of many methods by which an architecture may be given an explicit capability to respond intelligently to the interrupting event. Interruptibility adds to an agent's reactivity and efficiency. However, it can also cause problems in behavioral salience and coherence, especially when interrupts are not prioritized according to their relative importance.
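
As a rough illustration, assuming list-based behaviors and a shared interrupt queue (both hypothetical), the sketch below saves the current behavior context on a stack before servicing an interrupt and restores it afterwards, in the spirit of goal reconstruction.

  # Behaviors and interrupts are plain dicts here; purely illustrative.
  context_stack = []

  def execute(behavior, interrupts):
      for step in behavior["steps"]:
          while interrupts:                     # an interrupt diverts control now
              context_stack.append(behavior)    # store current behavior context
              execute(interrupts.pop(0), interrupts)
              behavior = context_stack.pop()    # return to the saved context
          print(behavior["name"], "->", step)

  execute({"name": "patrol", "steps": ["move", "scan", "move"]},
          [{"name": "recharge", "steps": ["dock", "charge"]}])
  # The recharge interrupt is serviced immediately; patrol then resumes.

Note that interrupts here are serviced unconditionally; the coherence problems mentioned above arise precisely because nothing prioritizes them.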

Architectures with this property

Open-Loop Processing

In open-loop systems, there is no feedback from the environment to the agent. In other words, the output from architectural processes is considered complete upon execution. This makes open-loop systems most appropriate for simulated rather than real environments, since real environments would often not tolerate the assumption that tasks are performed perfectly by the architectural agent. However, open-loop systems are generally more efficient for the same reason.


Contrast this with Closed-Loop Processing.

Architectures with this property

Closed-Loop Processing

A system is said to perform closed-loop processing if it feeds information from the environment back into itself. For example, many agents assume that once an action is sent to the effector system, the internal world model can simply be updated to a state that is assumed consistent with the world; this is an open-loop model. The closed-loop approach includes examining the world in an effort to validate the world model, and is thus appropriate for real-world environments in which feedback is necessary to validate agent actions.


Another view of closed-loop processing may also refer to a similar mode of operation in which there is a limited amount of time for knowledge to be brought to bear or for an action's effects to be perceived. In this case, the loop is closed because feedback is required and there is a time window in which the feedback must be sensed. In this sense, closed-loop processing results in bounded rationality.
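
A sketch of the contrast, with a deliberately unreliable effector; the world, act, and sense stand-ins are assumptions for illustration only.

  import random

  world = {"gripper": "open"}

  def act(command):
      if random.random() < 0.8:           # an effector that sometimes fails
          world["gripper"] = command

  def sense():
      return world["gripper"]

  def open_loop(model):
      act("closed")
      model["gripper"] = "closed"         # assume success; never check the world

  def closed_loop(model):
      while model["gripper"] != "closed":
          act("closed")
          model["gripper"] = sense()      # feedback validates the world model

  model = {"gripper": "open"}
  closed_loop(model)
  print(model == world)                   # True: the loop was closed

With open_loop, the model and the world diverge whenever act fails; closed_loop keeps sensing until they agree, at the cost of extra perception.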

Architectures with this property

Hierarchical Organization

Intelligent systems may be organized into hierarchies, or levels, which correspond to different capabilities. Capabilities of the "lower" levels of organization are in a sense inherited by the levels above, providing both a means of modularizing individual capabilities and the ability to reuse methods in different branches of the hierarchy.


Simon (1962) proposed that a hierarchical decomposition is necessary for the construction of any complex assembly. Without stable sub-assemblies, complex systems require too many pieces of information (parts) to be considered simultaneously. Without decomposition, the exclusion (or failure) of any part causes a complete system failure. Overlooking parts in smaller subsystems is less likely (since there are fewer things to consider), and failures, which may still disable the entire system, are more easily traced by testing the operation of individual subsystems.


Levels in a hierarchy may be completely insulated from the levels above and below them. Such insulation is known as a strong level and is characteristic of the levels in a computer systems hierarchy (e.g., register-transfer level, program level, etc.). In weak levels, however, there is interaction between levels; a lower level may be said to "show through" in the upper level. In the human cognitive architecture, the boundary between the cognitive band and the rational band is weak, primarily because rationality is bounded by the limited resources of the human computational architecture. This allows the human architecture to be evident from human behavior; Newell characterized this as "psychology".

Architectures with this property

Modular Organization

A modular organization is one in which different functional components are separated from one another, a technique adopted from software engineering. This is in contrast to a composite organization in which there is no separation between functions. Modular organization is also distinct from hierarchical organization. Modular organization is chiefly concerned with the horizontal design of a system whereas hierarchical organization involves a consideration of the vertical nature of the design. Thus, each level in a hierarchical system may be sub-divided into functionally-distinct modules.

Architectures with this property

Memory, Knowledge and Representation

General Representation Issues

Symbolic World Model

Above the cognitive architecture and its basic operating programs, there (usually) exists a level that enables the system to store knowledge in some basic framework. This framework is based on symbols, which serve to represent relations between the agent and its environment, and hence knowledge, within the system. Such symbolic abstraction occurs at the symbol level and derives its power from the physical symbol system hypothesis. Thus, symbolic abstraction enables, or at least facilitates, many architectural capabilities, including planning and learning.

Architectures with this property

Size of the Knowledge Base

There are physical limits to the size of the knowledge base that may be supported by an individual agent. But, in addition to the physical limit on memory (just one example of the environment's capability to provide only limited resources), there may be limitations imposed by the agent's architecture and style of control. For example, Homer experiences a processing slow-down as its episodic knowledge base increases in size.

Architectures with this property

Glass Box Approach

Glass box knowledge representation may be defined as the ability of rules to examine each other. Architectures with this property automatically have meta-knowledge, and therefore may readily support meta-reasoning.


Glass box representation is useful for modular architectures, so that all modules have access to all knowledge. It allows the rules and the architecture to share responsibility for examining, activating, and rewarding other rules. Learning (and especially multi-method learning) is also facilitated by glass box representation, because learning modules may examine and modify the rules themselves.


The advantages of glass box representation are:

  1. increased flexibility
  2. easier meta-reasoning


Glass box knowledge representation stands in contrast to Black Box knowledge representation. Related properties are uniform access to knowledge and homogeneous representation.

Architectures with this property

Black Box Approach

Black box knowledge representation may be defined as the inability of rules to examine other rules. Architectures with this property are limited in their ability to directly infer what other rules are doing. There are advantages to this:

  1. rule bases might be less fragile to changes
  2. rule bases might be more modular and easier to understand.


Black-box knowledge representation does not rule out meta-reasoning, but it makes it more circuitous: rules would have to observe each other's effects and infer their conditions.

Architectures with this property

Declarative Representations

Architectures with declarative representations have knowledge in a format that may be manipulated, decomposed, and analyzed by the reasoning engine independent of its content. A classic example of a declarative representation is logic. The primary advantage of declarative knowledge is the ability to use knowledge in ways that the system designer did not foresee.


Declarative knowledge representation contrasts with procedural, which stores knowledge in a faster-to-access but less flexible format.


Often, whether knowledge is viewed as declarative or procedural is not an intrinsic property of the knowledge base, but a function of what is allowed to read from it. Production systems, for example, are declarative if productions may view themselves, and procedural if they cannot (cf. glass-box/black-box control knowledge).


A particular architecture may use both declarative and procedural knowledge at different times, taking advantage of their different advantages. The distinction between declarative and procedural representations is somewhat artificial in that they may easily be inter-converted, depending on the type of processing that is done on them.

Architectures with this property

Procedural Representations

Architectures with procedural representations encode how to do some task. In other words, procedural knowledge is skill knowledge. A simple example of human procedural knowledge is the ability to ride a bike. The specifics of bicycle-riding may be difficult to articulate but one can perform the task. One advantage of procedural representations is possibly faster usage in a performance system. Productions are a common means of representing procedural knowledge.


Procedural knowledge representation contrasts with declarative, which stores knowledge in a more flexible but harder to immediately use format.


Use of procedural knowledge in an agent raises the question of whether the agent can "know what it knows", as well as the issue of the penetrability of the knowledge. Use of this knowledge does not necessarily preclude the agent from this form of meta-knowledge, and it certainly does not imply cognitive impenetrability. That an agent can demonstrate it "knows what it knows" is illustrated by a Soar system that includes the ability to explain its actions. Cognitive impenetrability is not implied because, for any operator learned, new and improved operators can be learned along with preference rules, leading to the emergence of cognitive penetrability. The precise bits corresponding to the original operator are neither understood nor changed, but the behavior exhibited by the operator has been penetrated.

Architectures with this property

Global Representation and Uniform Access to Knowledge

In architectures with global knowledge, different modules may read from and/or write to common database(s). Often this knowledge is used to represent a world-view of what the agent believes is true in its environment.


An advantage of having global knowledge is that different modules may share their data and abilities for more intelligent combined behavior. This makes modular architectures more effective. Also, such knowledge is necessary for representing world-views: without it, an architecture may only react to its present sensor readings (e.g., the Subsumption Architecture).


The disadvantages of having global knowledge are the expense of having to maintain its integrity (Truth-maintenance), and the danger of acting on data that is actually false.


A related property is knowledge homogeneity, a measure of how similarly all knowledge is represented by the architecture.

Architectures with this property

Efficiency of Knowledge Access

As the knowledge bases of these architectures grow increasingly large, an efficient way of accessing specific knowledge in the knowledge bases becomes increasingly important.

Architectures with this property

Representational Consistency

Knowledge consistency is the property that a knowledge base contains no contradictions. It is extremely important for knowledge representations that may only assert or deny statements, with no measure of partial belief; one such system is first-order predicate calculus. Because all statements must be either true or false, it may be possible to store only part of the statements (a basis set), from which all true statements (or all false statements) may be derived. Statements that cannot be derived from the basis set are assumed false (or true). This is the closed-world assumption. The primary advantage of knowledge consistency is the ability to store fewer statements (using the closed-world assumption).
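
A small sketch of the closed-world assumption, with an illustrative basis set and a single hand-written derivation rule (grandparent), nothing more.

  basis = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

  def derivable(fact):
      if fact in basis:
          return True
      rel, x, z = fact
      if rel == "grandparent":             # one illustrative derivation rule
          return any(("parent", x, y) in basis and ("parent", y, z) in basis
                     for (_, _, y) in basis)
      return False

  def query(fact):
      # Closed world: whatever cannot be derived is assumed false.
      return derivable(fact)

  print(query(("grandparent", "tom", "ann")))  # True, derived from the basis
  print(query(("parent", "ann", "tom")))       # False by assumption, not by proof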


An architecture that tolerates knowledge inconsistency generally treats its knowledge base as a set of competing hypotheses, or as a set of statements that it has varying amounts of confidence in. Often there is a numerical measure of belief. This technique is used in control knowledge too: rules may be graded by how well they perform. The advantages of tolerating inconsistent knowledge are:

  1. increased flexibility in representation,
  2. increased flexibility in learning and reasoning


Truth Maintenance Systems (TMS) may be used to maintain inconsistent or non-monotonic knowledge sets. A justification-based TMS works in a single context while an assumption-based TMS may support multiple contexts. Both the JTMS and ATMS work by maintaining a structure which connects derived knowledge to the knowledge from which it was derived. Thus, when some previous knowledge changes (due to changes in the world, incorporation of new knowledge, etc.) these structures are utilized to update all the knowledge which depended upon the changed knowledge. This technique is called dependency-directed backtracking.
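
A minimal sketch of the justification-based case, assuming simple belief tokens; the dependency links from derived beliefs back to their supports are what make the update directed.

  justifications = {}                      # derived belief -> its supports
  beliefs = set()

  def assert_belief(belief, supports=()):
      beliefs.add(belief)
      if supports:
          justifications[belief] = set(supports)

  def retract(belief):
      beliefs.discard(belief)
      # Follow dependency links: everything resting on this belief goes too.
      for derived, supports in list(justifications.items()):
          if belief in supports and derived in beliefs:
              retract(derived)

  assert_belief("rain")
  assert_belief("wet_ground", supports={"rain"})
  assert_belief("slippery", supports={"wet_ground"})
  retract("rain")
  print(beliefs)                           # set(): the whole chain was retracted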


A related property is learning monotonicity, which is whether an architecture may learn things that contradict what it already knows. If an architecture must maintain a consistent knowledge base then any learning strategy it uses must be monotonic.

Architectures with this property

Homogeneous Knowledge Representation

Architectures with global access to knowledge may store it in a uniform format in a central database, or may have it in a non-uniform format in a distributed fashion. The uniform method is often employed in systems with a general knowledge representation scheme like frames or first order predicate calculus. The non-uniform method is often used in loosely-coupled architectures for storing speciality knowledge not for general use. Knowledge uniformity is more an issue for modular, rather than centralized, architectures.


The advantages of knowledge uniformity are:

  1. modules may access all knowledge easily
  2. when modules are added or changed, the interfaces of others do not have to be modified.
  3. it is easier to modify modules to use new types of knowledge


The disadvantages of knowledge uniformity are:

  1. the design limitation that all knowledge be expressed in the same format
  2. inefficiency: if the knowledge is to be used by relatively few modules, it may waste memory and processing time to have it in a general format.


Homogeneous knowledge representation contrasts with heterogeneous representation.

Architectures with this property

Heterogeneous Knowledge Representation

Heterogeneous, or non-uniform, representation is often used in loosely-coupled architectures for storing speciality knowledge not intended for general use. Knowledge uniformity is more an issue for modular, rather than centralized, architectures.


A related property is uniform access to global knowledge, which is how accessible knowledge is to the architecture's modules.


Heterogeneous knowledge representation contrasts with homogeneous representation.

Architectures with this property

No Explicit Representation

Usually the representation of knowledge takes some form. Examples include: first-order predicate logic, frames, networks, scripts, etc. However, knowledge does not have to be represented explicitly.

Architectures with this property

Specific Examples of Representations and Memory Structures

Associative Memory

Content-addressed or associative memory refers to a memory organization in which the memory is accessed by its content (as opposed to an explicit address). Thus, reference clues are "associated" with actual memory contents until a desirable match (or set of matches) is found. Production systems are obvious examples of systems that employ such a memory. Associative memory stands as the most likely model for cognitive memories, as well. Humans retrieve information best when it can be linked to other related information. This linking is fast, direct and labyrinthian in the sense that the memory map is many-to-many and homomorphic.
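
A rough sketch of content-addressed access, assuming records stored as attribute dictionaries; the memory is probed by partial content rather than by an explicit address.

  memory = [
      {"animal": "dog", "sound": "bark", "legs": 4},
      {"animal": "bird", "sound": "chirp", "legs": 2},
      {"animal": "cat", "sound": "meow", "legs": 4},
  ]

  def recall(cues):
      """Return every stored item consistent with the reference clues."""
      return [item for item in memory
              if all(item.get(key) == value for key, value in cues.items())]

  print(recall({"legs": 4}))               # both four-legged entries match
  print(recall({"sound": "chirp"}))        # retrieved by content, not by address

A production system's matcher does essentially this over working memory, with rule conditions playing the role of cues.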

Architectures with this property

Episodic Knowledge

Two particular types of knowledge -- procedural and declarative -- have been used extensively in the design and development of cognitive architectures. However, neither of these types of knowledge characterizes the knowledge that humans use to remember events (such as graduations, birthday parties, and weddings). Such remembrances are called episodic knowledge. Since this knowledge is, by definition, experiential, it must be learned by the agent rather than pre-encoded (it is possible to conceive of episodic knowledge being pre-encoded, but, once the agent is running, additions to the episodic memory would then be learned). This knowledge can confer the capability to perform protracted tasks and to answer queries about temporal relationships and utilize them.

Architectures with this property

Meta-knowledge

Meta-knowledge may be loosely defined as "knowledge about knowledge". Meta-knowledge includes information about the knowledge the system possesses, about the efficiency of certain methods used by the system, the probabilities of the success of past plans, etc. The meta-knowledge is generally used to guide future planning or execution phases of a system.

Architectures with this property

First-Order Logic Representation

Many of the architectures analyzed build upon a substrate of first-order predicate calculus. This is a very descriptive declarative representation with a well-founded method of deriving new knowledge from a database. Its flexibility makes it a good choice when more than one module may add to or utilize a common database (cf. Prodigy). Unfortunately, this flexibility has limitations. To maintain consistency, learning must be monotonic. This limits its effectiveness when there are incomplete domain theories.


First-order predicate logic is composed of statements that are assumed to be true. The statements are composed of:

  • atoms (symbols),
  • predicates (a function with one or more atomic arguments),
  • two substatements joined by a conjunction, disjunction, or implication,
  • a negated substatement, and
  • a statement with an existential or universal quantifier (in this case, atoms in the statements can be replaced by variables in the quantifier).


This representation allows facts and small amounts of knowledge to be flexibly entered, but efficiency and sensitivity to errors are its weaknesses in large knowledge bases. See the textbook by Rich and Knight for more information.
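
The grammar above can be sketched directly as nested tuples; the tag names and example predicates below are illustrative choices, not a full first-order logic implementation (variables are marked with a leading "?").

  atom = "socrates"                                   # an atom (symbol)
  pred = ("Man", "socrates")                          # predicate over atoms
  impl = ("implies", ("Man", "?x"), ("Mortal", "?x")) # joined substatements
  neg = ("not", ("Immortal", "socrates"))             # negated substatement
  quant = ("forall", "?x", impl)                      # quantified statement

  def free_vars(stmt, bound=frozenset()):
      """Variables not captured by an enclosing quantifier."""
      if isinstance(stmt, str):
          return {stmt} if stmt.startswith("?") and stmt not in bound else set()
      if stmt[0] in ("forall", "exists"):
          return free_vars(stmt[2], bound | {stmt[1]})
      return set().union(*(free_vars(s, bound) for s in stmt[1:]))

  print(free_vars(impl))    # {'?x'}: free until a quantifier binds it
  print(free_vars(quant))   # set(): a closed statement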

Architectures with this property

STRIPS-like Representation

STRIPS, or the Stanford Research Institute Problem Solver, was proposed by Fikes and Nilsson in 1971 and included a representation for operators that was intended to solve (or at least address) the frame problem. STRIPS uses well-formed formulas of the first-order predicate calculus and specifies operators by a precondition list, an add-list and a delete-list. The preconditions must be satisfied by the current state before an operator is applied. The effects of the operator are given by the add and delete lists. The add-list adds new, instantiated well-formed formulas (or wffs, logical descriptions of the world) to the current state. The delete-list removes wffs from the current state.
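
A sketch of the scheme, with ground literals standing in for full wffs and a toy blocks-world operator; all names are illustrative.

  # A STRIPS-style operator: preconditions plus add- and delete-lists.
  stack_a_on_b = {
      "pre":    {("clear", "a"), ("clear", "b"), ("ontable", "a")},
      "add":    {("on", "a", "b")},
      "delete": {("clear", "b"), ("ontable", "a")},
  }

  def apply_op(op, state):
      if not op["pre"] <= state:
          return None                      # preconditions not satisfied
      return (state - op["delete"]) | op["add"]

  state = {("clear", "a"), ("clear", "b"), ("ontable", "a"), ("ontable", "b")}
  print(apply_op(stack_a_on_b, state))
  # Anything not named in the add/delete lists (e.g. ("ontable", "b"))
  # persists unchanged -- the frame problem handled by convention.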


Although STRIPS did resolve some of the issues related to the frame problem, it (and all systems that use a STRIPS-like representation) suffers from a requirement for explicitness -- all actions (including secondary effects) must be included in the model of the operator. In complex worlds, this is often impossible.

Architectures with this property

Frame-Like Representations

A frame is a method of representation in which a particular class is defined by a number of attributes (or slots) with certain values (the attributes are filled in for each instance). Thus, frames are also known as slot-and-filler structures. Frame systems are also somewhat equivalent to semantic networks although frames are usually associated with more defined structure than the networks.


Like a semantic network, one of the chief properties of frames is that they provide a natural structure for inheritance. ISA links connect classes to larger parent classes, and properties of a subclass may be determined both at the level of the class itself and from its parent classes.


This leads into the idea of defaults. Frames may indicate specific values for some attributes or instead indicate a default. This is especially useful when values are not always known but can generally be assumed to be true for most of the class. For example, the class BIRD may have a default value of FLIES set to TRUE even though instances below it (say, for example, an OSTRICH) have FLIES values of FALSE.


In addition, the values of a particular attribute need not necessarily be filled with a value but may also indicate a procedure to run to obtain a value. This is known as an attached procedure. Attached procedures are especially useful when there is a high cost associated with computing a particular value, when the value changes with time or when the expected access frequency is low. Instead of computing the value for each instance, the values are computed only when needed. However, this computation is run during execution (rather than during the establishment of the frame network) and may be costly.
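
Pulling the last three ideas together, here is a sketch of a frame system with ISA inheritance, a default, an override, and an attached procedure; the FLIES slot follows the BIRD/OSTRICH example above, while the rest is assumed for illustration.

  import datetime

  frames = {
      "BIRD":    {"isa": None,   "slots": {"FLIES": True}},   # default value
      "OSTRICH": {"isa": "BIRD", "slots": {"FLIES": False}},  # local override
      "TWEETY":  {"isa": "BIRD",
                  "slots": {"SEEN": lambda: datetime.date.today()}},
  }

  def get(frame, slot):
      while frame is not None:
          value = frames[frame]["slots"].get(slot)
          if value is not None:
              # An attached procedure runs only when the value is needed.
              return value() if callable(value) else value
          frame = frames[frame]["isa"]       # climb the ISA link
      return None

  print(get("OSTRICH", "FLIES"))  # False: the override shadows the default
  print(get("TWEETY", "FLIES"))   # True: default inherited from BIRD
  print(get("TWEETY", "SEEN"))    # computed on demand by the attached procedure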

Architectures with this property

Network Representations

Networks are often used in artificial intelligence as schemes for representation. One of the advantages of using a network representation is that theorists in computer science have studied such structures in detail and there are a number of efficient and robust algorithms that may be used to manipulate the representations.

Architectures with this property

Trees and Graphs

A tree is a collection of nodes in which each node may be expanded into one or more unique subnodes until termination occurs. (If there is no termination, an infinite tree results.) A graph generalizes the tree by allowing non-unique nodes to be generated; in other words, a tree is a graph with no loops. The representation of the nodes and links is arbitrary. In a computer chess player, for example, nodes might represent individual board positions and the links from each node the legal moves from that position. This is a specific instance of a problem space. In general, problem spaces are graphs in which the nodes represent states and the connections between states are represented by the operators that make the state transformations.
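
As a small sketch, assuming a toy state space given as an adjacency table, a problem space can be searched by expanding states through their operators; the seen set is what keeps the expansion a loop-free tree rather than a cyclic graph.

  from collections import deque

  moves = {"start": ["a", "b"], "a": ["goal"], "b": ["a"]}   # illustrative space

  def search(start, goal):
      frontier, seen = deque([[start]]), {start}
      while frontier:
          path = frontier.popleft()
          if path[-1] == goal:
              return path                   # sequence of states to the goal
          for nxt in moves.get(path[-1], []):
              if nxt not in seen:           # avoid regenerating non-unique nodes
                  seen.add(nxt)
                  frontier.append(path + [nxt])

  print(search("start", "goal"))            # ['start', 'a', 'goal']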

ISA Links and Semantic Networks

In constructing concept hierarchies, often the most important means of showing inclusion in a set is to use what is called an ISA link, in which X is a member in some more general set Y. For example, a DOG ISA MAMMAL. As one travels up the link, the more general concept is defined. This is generally the simplest type of link between concepts in concept or semantic hierarchies. The combination of instances and classes connected by ISA links in a graph or tree is generally known as a semantic network. Semantic networks are useful, in part, because they provide a natural structure for inheritance. For instance, if a DOG ISA MAMMAL then those properties that are true for MAMMALs and DOGs need not be specified for the DOG; instead they may be derived via an inheritance procedure. This greatly reduces the amount of information that must be stored explicitly although there is an increase in the time required to access knowledge through the inheritance mechanism. Frames are a special type of semantic network representation.

Production Systems

A production system is a tool used in artificial intelligence, especially within the applied AI domain known as expert systems. Production systems consist of a database of rules, a working memory, a matcher, and a procedure that resolves conflicts between rules. These components are outlined below. Several different versions of production systems have been developed, including the OPS series, which culminated in OPS5 (see Forgy). OPS5 was modified to implement the Soar production system described elsewhere in this document.

Architectures with this property

Matching

The rules of a production system consist of a condition and an action in the form (if x then y), where x and y may be arbitrarily complex conjunctions of expressions. The left-hand-side conditions are compared against the elements of working memory to determine whether the conditions are satisfied. Matching is a computationally intensive procedure, although the Rete algorithm of OPS5 is significantly more efficient than a simple condition-by-condition matcher.

Conflict Resolution

At any point in processing, several productions may match the elements of working memory simultaneously. Since production systems are normally implemented on serial computers, this results in a conflict: there is a non-unique choice about which action to take next. Most conflict resolution schemes are very simple, depending on the number of conditions in the production, the time stamps (ages) of the elements to which the conditions matched, or pure chance. One of the advantages of production systems is that the computational complexity of the matcher, while large, is deterministically finite, and the conflict resolution scheme is trivial. This is in contrast to logicist systems, in which declarative knowledge may be accessed instantly but the time required to use the knowledge (in a theorem prover, for instance) cannot be pre-determined.

Actions

The actions of productions are manipulations of working memory. Elements may be added, deleted, and modified. Since elements may be added and deleted, the production system is non-monotonic: the addition of new knowledge may obviate previous knowledge. Non-monotonicity increases the significance of the conflict resolution scheme, since productions that match in one cycle may not match in the next because of the action of the intervening production. Some production systems are monotonic, however, and only add elements to working memory, never deleting or modifying knowledge through the action of production rules. Such systems may be regarded as implicitly parallel, since all rules that match will be fired regardless of which is fired first.
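
The three components can be put together in a few lines. The sketch below is a toy recognize-act loop, not OPS5: in particular, the matcher is naive rather than Rete, and conflict resolution is recency with rule order breaking ties.

  productions = [
      {"name": "p1", "if": {"hungry"}, "then": ("add", "eating")},
      {"name": "p2", "if": {"hungry"}, "then": ("add", "cooking")},
      {"name": "p3", "if": {"hungry", "eating"}, "then": ("delete", "hungry")},
  ]
  wm = {"hungry": 0}                        # working-memory element -> time stamp

  def cycle(now):
      matched = [p for p in productions if p["if"] <= set(wm)]   # matching
      if not matched:
          return False
      # Conflict resolution: prefer the most recently matched elements;
      # max() keeps the earliest rule on ties (rule order).
      winner = max(matched, key=lambda p: max(wm[c] for c in p["if"]))
      action, element = winner["then"]                           # action
      if action == "add":
          wm[element] = now                 # non-monotonic: wm changes each cycle
      else:
          wm.pop(element, None)
      print(winner["name"], "->", action, element)
      return True

  t = 1
  while cycle(t):
      t += 1

Note that p2 never fires: conflict resolution chose p1, and p3's action then removed the element p2 needed, exactly the non-monotonic interaction described above.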

Properties Related to the Learning Capability

Deliberative Learning

Though learning is widely accepted as useful, and perhaps necessary, in a generally intelligent agent, the question of what to learn is much more open to debate. It has been widely documented that learning can increase the variety of problems an agent can solve as well as the efficiency with which the agent performs tasks. But indiscriminate learning can actually decrease the efficiency of an agent if the learned knowledge is of low usefulness relative to the cost of applying it.


For instance, a learned piece of knowledge may cover the application of only a few operators, yet matching its preconditions may actually cost more than the standard problem search it replaces. This is commonly referred to as the utility problem and is an important issue to consider if learning is to be a useful component of the architecture. An architecture may implement some sort of utility function that weighs the costs of applying a learned piece of knowledge against the actual benefits obtained by utilizing that knowledge. If the benefits are found to outweigh the costs, the knowledge is stored; otherwise it is discarded. Such an architecture thus utilizes deliberation to improve the effectiveness of its learning mechanisms. However, architectures that learn "reflexively" have the advantages of additional simplicity and cognitive plausibility.
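
A minimal sketch of such a utility function; the cost model (expected matches times match time, weighed against expected applications times search time saved) is an assumption for illustration, not a published metric.

  def estimated_utility(rule):
      benefit = rule["applications"] * rule["search_time_saved"]
      cost = rule["expected_matches"] * rule["match_time"]
      return benefit - cost

  def consider_learning(rule, knowledge_base):
      # Deliberative learning: store the rule only if it earns its keep.
      if estimated_utility(rule) > 0:
          knowledge_base.append(rule)

  kb = []
  consider_learning({"applications": 50, "search_time_saved": 2.0,
                     "expected_matches": 1000, "match_time": 0.01}, kb)
  print(len(kb))    # 1: estimated benefit (100.0) exceeds match cost (10.0)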

Architectures with this property

Reflexive Learning

It is generally accepted by the artificial intelligence community that learning is a desirable and useful capability of a generally intelligent agent. This learning can take a number of forms, and the matter of which type of learning is most appropriate depends both on the researcher and the particular agent in question.


Reflexive learning is learning that is done "automatically", i.e. the agent does not consider the possible costs of learning a particular piece of knowledge. These costs hinge on the usefulness of knowledge: reflexive systems learn everything, even knowledge that does not promise to enhance the agent's behavior. This 'extra' knowledge threatens to slow the agent, since it must be searched each time the agent attempts to retrieve a piece of knowledge. Reflexive architectures try to compensate for this by employing a very efficient matching function, so that extraneous knowledge does not appreciably degrade performance.


It is argued that the reflexive model of learning is more psychologically valid than the deliberative model: humans, in general, cannot help but learn from their experiences, and we certainly are not able to explicitly 'unlearn' something if we decide that it is not worth retaining.

Architectures with this property

Generalization

Generalization is the ability to apply knowledge and information gained in completing some task to other tasks and situations. Humans generalize routinely. For example, knowing that one should always drive on the right side of the road is a generalization from one's experience and observation in specific driving situations. However, in the United Kingdom, driving on the right is not correct. This is an example of over-generalization, which humans also do quite capably. Generalization can result from a number of different learning strategies, including:

Architectures with this property

Monotonic Learning

If an agent may not learn any knowledge that contradicts what it already knows then it is said to learn monotonically. For example, it may not replace a statement with its negation. Thus, the knowledge base may only grow with new facts in a monotonic fashion. The advantages of monotonic learning are:

  1. greatly simplified truth-maintenance
  2. greater choice in learning strategies


Since monotonic learning consists only of the addition of new facts to the database, it may not be appropriate for all environments, although many simulated environments may be assumed to be consistent. For environments where that assumption fails, a non-monotonic learning method is necessary.

Architectures with this property

Non-monotonic Learning

An agent that may learn knowledge that contradicts what it already knows is said to learn non-monotonically. So it may replace old knowledge with new if it believes there is sufficient reason to do so. The advantages of non-monotonic learning are:

  1. Increased applicability to real domains,
  2. Greater freedom in the order things are learned in


Architectures that are constrained to add only knowledge consistent with what has already been learned are said to learn monotonically.

Architectures with this property

Utility Functions

Architectures sometimes employ utility functions to address the utility problem. The utility problem arises in explanation-based learning systems when the method used to determine the usefulness of learned rules (the operationality criterion) is unrealistic. This is the case in most current systems, since no techniques have been developed to make the operationality criterion as sophisticated as the environments to which EBL has been applied. Rules are generally learned too frequently, and thus learning may actually slow down the system. Carbonell et al. (1991) identify three factors that may contribute to this degradation in performance:

  • Low Application Frequency: The rule may be over-specific and thus applied too infrequently to be useful.
  • High Match Cost: The cost of matching a rule -- especially those which represent a long sequence of operations -- may be prohibitively expensive (e.g., the matching problem for pre-condition lists is NP-complete).
  • Low Benefit: The rule may have only marginal utility in the problem domain.

Architectures with this property

Properties Related to the Planning Capability

Means-Ends Analysis Technique

MEA (means-ends analysis) is a problem solving strategy first introduced in GPS (General Problem Solver) [Newell & Simon, 1963]. The search process over the problem space combines aspects of both forward and backward reasoning, in that both the condition and action portions of rules are examined when considering which rule to apply. Differences between the current and goal states are used to propose operators which reduce those differences. The correspondence between operators and differences may be provided as knowledge in the system (in GPS this was known as a Table of Connections) or may be determined through some inspection of the operators, if the operator action is penetrable. This latter case, which is true of STRIPS-like operators, allows task-independent correlation of differences to the operators which reduce them. When knowledge is available concerning the importance of differences, the most important difference is selected first, further improving the average performance of MEA over other brute-force search strategies. However, even without the ordering of differences according to importance, MEA improves over other search heuristics (again in the average case) by focusing the problem solving on the actual differences between the current state and the goal.
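
A sketch of the strategy, assuming STRIPS-style operators (in the form sketched earlier) and a hand-written table of connections mapping each difference to the operator that reduces it; the two-operator domain is illustrative.

  operators = {
      "walk":    {"pre": set(),         "add": {"reachable"}, "delete": set()},
      "pick_up": {"pre": {"reachable"}, "add": {"holding"},   "delete": set()},
  }
  reduces = {"reachable": "walk", "holding": "pick_up"}   # table of connections

  def mea(state, goal):
      plan = []
      while not goal <= state:
          # Apply an operator that reduces a current difference, if possible;
          # otherwise adopt its preconditions as subgoals (backward step).
          for diff in sorted(goal - state):
              name = reduces[diff]
              op = operators[name]
              if op["pre"] <= state:
                  state = (state - op["delete"]) | op["add"]
                  plan.append(name)
                  break
          else:
              goal = goal | set().union(*(operators[reduces[d]]["pre"]
                                          for d in goal - state))
      return plan

  print(mea(set(), {"holding"}))    # ['walk', 'pick_up']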

Architectures with this property

Forward and Backward-Planning

Forward-planning and backward-planning refer to the method of means-ends analysis used by the architecture, while forward- and backward-chaining refer to the way the architecture maintains knowledge. An architecture is said to be forward-planning when, given an impasse, it examines all of the currently relevant operators in an effort to walk forward through the problem space to the goal. An architecture is said to be backward-planning when, given an impasse, it looks at the goal to determine the operators which will yield the goal, and walks backward through the problem space to the current state.


The selection of these methods can be architectural or dynamic. The reason for selecting one over the other resides in the a priori estimate of the number of operators that need to be examined (i.e., the size of the respective search spaces). Architecturally, this decision must be made either ad hoc or by building in a mechanism to force all logic trees to be condition-heavy or results-heavy. None of the architectures analyzed deals with such a mechanism.


The decision to utilize backward or forward-planning can be made dynamically by a weighted counting of the number of operators relevant to the current state and the same weighted counting of the number of operators that will result in the goal. The weights should be made proportional to the size of the new knowledge generated by each operator in the forward case, and proportional to the size of the condition space in the backward case, since the number of operators selected next is proportional to the size of the state space.


In other words, the number of possible paths to examine for a solution is approximately the number of currently applicable operators (i.e., the branching factor) times the number of operators (i.e., number of nodes) to be applied next. The number of operators to be applied next will be proportional to the increase in the size of the state space produced by the application of the currently relevant operator.

Architectures with this property

Performance and Perception

Behavior

In some sense, every architecture may be said to exhibit some form of behavior in the sense that it goes about solving its particular problems in a unique way. However, behavior in the context of this document refers specifically to agent behaviors. For example, what are the properties associated with an agent's capabilities of navigation and manipulation for robotic agents? In general, the answer to this question is dependent upon how a particular agent utilizes knowledge in terms of goals and drives. Particular categorizations of behaviors include coherence, salience and adequacy with respect to some particular behavior or set of behaviors.

Architectures with this property

Coherence

Coherence refers to an agent's ability to resolve conflicts between competing or conflicting goals, resulting in behavior that transitions smoothly. Informally, a coherent agent is one that does not behave schizophrenically in the presence of environmental inputs that indicate different appropriate responses. Coherence differs from rationality, which is associated with high-level cognition, in that coherence is usually associated with reactive agents.

Architectures with this property

Saliency

Saliency refers to an intelligent agent's ability to act appropriately to the current situation. For example, an agent should not continue a routine task when endangered, nor should it attempt survival behaviors (e.g., power re-supply) when not threatened (in this example, when fully charged).

Architectures with this property

Adequacy

Adequacy is the property of achieving all the behaviors needed to complete some particular task in an order (not necessarily completely pre-specified) appropriate for the task at hand. This attribute is generally associated with robotic agents.

Architectures with this property

Minimum Commitment Strategy

The Minimum Commitment Strategy refers to the method of waiting as long as possible in a planning or execution phase to bind variables or make decisions toward a goal. This is done in order to utilize as much information as possible, attempting to ensure the best possible action at the necessary time. This strategy relies on the notion that not all of the necessary knowledge will be available at any given plan time, so waiting could make more knowledge available and ensure a better plan.

Architectures with this property

Reflexive Response to Stimuli (Reactive Response)

A general paradigm for behavior is Perceive-Think-Act. However, in order to react quickly to dynamic environmental events, some architectures respond instantly to external stimuli, a quality often called reflexiveness. This may lend the system some extra speed when the event has been experienced previously (i.e., the reflex may be learned as well as innate) and the reaction can be performed instantly without having to plan a response. Other systems always deliberate before acting.

Architectures with this property

Sensing Strategies and Attention

Architectures with this property

Attentional Mechanisms

In general and especially in dynamic environments, there is information overload such that the amount of perceptual information to be processed is greater than the computational capability of the agent. One way to avoid this overload is to filter out parts of the perceptual field and pay particular attention to others; this is known as an attentional mechanism.

Eager versus Lazy Sensing

One type of attention is known as eager and lazy sensing. Some percepts are sensed eagerly (updated as often as possible or at a constant rate) while others are sensed only when there are resources available to make the perception or the particular percept is known to be needed. Thus, attention is placed on the eagerly sensed percepts. This constitutes a biased attentional mechanism in which the focus of attention is embedded in the architecture.
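
A sketch of the two regimes, with illustrative sensor stand-ins: the eager percept is refreshed every perception cycle, while the lazy one is read only when something asks for it.

  import random

  sensors = {"obstacle": lambda: random.random() < 0.1,
             "battery":  lambda: random.uniform(0.0, 100.0)}
  eager = {"obstacle"}                 # the bias is embedded in the architecture
  percepts = {}

  def perceive_cycle():
      for name in eager:
          percepts[name] = sensors[name]()     # updated as often as possible

  def get_percept(name):
      if name not in eager:
          percepts[name] = sensors[name]()     # sensed only on demand
      return percepts[name]

  for _ in range(3):
      perceive_cycle()                 # obstacle tracked on every cycle
  print(get_percept("battery"))        # battery read lazily, just now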

Distraction

The above represents only one example of many types of attentional mechanisms; others include attending to specific modalities or to specific information within a modality (e.g., movement in a visual field). One disadvantage of attentional mechanisms is that they may lead to distraction: percepts with low attention may actually carry more important information than percepts with higher attention, and the less important percept then distracts the agent from the more important one. In general, different situations require different attention, and the appropriate focus of attention is in constant flux.


Distraction is sometimes treated as a feature of a cognitive architecture that models human distraction. When the feature is presented as such a model, the testing performed on the agent must analyze how the agent's distraction resembles human distraction, not simply show that the agent can be distracted.

Deliberation/Operation Speed

An intelligent agent should be able to quickly process information from its sensors and from the knowledge it has in order to react in real-time, and so be able to navigate through its environment with reasonable speed and rationality. However, different architectures display vastly different processing speeds and reactivities. Many of these differences are based on assumptions about the speed-of-change in dynamic environments.

Architectures with this property
