Theory

From Cognitive Architecture Wiki

The following discussion does not necessarily relate to any specific architecture or agent property but instead presents a number of theories and observations which have relevance to cognitive architecture research. There is no "theory of integrated, cognitive architectures" per se. However, many theoretical results, primarily from the fields of artificial intelligence, cognitive science, and cognitive psychology, do bear directly on the design, development, and analysis of cognitive architectures. Additionally, because researchers have focused on different aspects of the cognitive problem, some of the conjectures included are actually contradictory and/or mutually exclusive.

Intelligence and Artificial Intelligence

In Unified Theories of Cognition, Allen Newell defines intelligence as: the degree to which a system approximates a knowledge-level system. Perfect intelligence is defined as the ability to bring all the knowledge a system has at its disposal to bear in the solution of a problem (which is synonymous with goal achievement). This may be distinguished from ignorance, a lack of knowledge about a given problem space.


Artificial Intelligence, in light of this definition of intelligence, is simply the application of artificial, or non-naturally occurring, systems that use the knowledge level to achieve goals. A more practical definition that has been used for AI is the attempt to build artificial systems that perform better on tasks that humans currently do better. Thus, at present, a task like real-number division is not AI, because computers already do that task better (faster and with less error) than humans. However, visual perception is AI, since it has proved very difficult to get computers to perform even basic perceptual tasks. Obviously, this definition changes over time, but it does capture the essential nature of AI questions.

Symbols and Representation

A natural question to ask about symbols and representation is what is a symbol? Allen Newell considered this question in Unified Theories of Cognition. He differentiated between symbols (the phenomena in the abstract) and tokens (their physical instantiations). Tokens "stood for" some larger concept. They could be manipulated locally until the information in the larger concept was needed, when local processing would have to stop and access the distal site where the information was stored. The distal information may itself be symbolically encoded, potentially leading to a graph of distal accesses for information.
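The token/distal-access idea can be made concrete with a small sketch. The Python fragment below is illustrative only; the names (memory, resolve) and the toy structures are invented here, not taken from Newell. A token is a cheap local stand-in, and following it retrieves the distal structure, which may itself contain further tokens.

  # A minimal sketch (not Newell's) of tokens standing for distal structures:
  # local manipulation uses only the token; resolving it accesses the distal
  # site, which may itself contain further tokens (a graph of distal accesses).

  memory = {
      "CAT":    {"isa": "ANIMAL", "sound": "meow"},   # distal structures,
      "ANIMAL": {"isa": "LIVING-THING"},              # indexed by symbol
  }

  def resolve(token, depth=0):
      """Follow a token to its distal structure, chasing nested tokens."""
      structure = memory[token]          # local processing stops; distal access
      result = {}
      for role, value in structure.items():
          if isinstance(value, str) and value in memory and depth < 3:
              result[role] = resolve(value, depth + 1)
          else:
              result[role] = value
      return result

  print(resolve("CAT"))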


Newell defined symbol systems according to their characteristics. Firstly, they may form a universal computational system. They have:

  • memory to contain the distal symbol information,
  • symbols to provide a pattern to match or index distal information,
  • operations to manipulate symbols,
  • interpretation to allow symbols to specify operations, and,
  • capacities for:
    1. sufficient memory,
    2. composability (that the operators may make any symbol structure),
    3. interpretability (that symbol structures be able to encode any meaningful arrangement of operations).


Finally, Newell defined symbolic architectures as the fixed structure that realizes a symbol system. The fixity implies that the behavior of structures on top of it (i.e. "programs") depends mainly upon the details of the symbols, operations, and interpretations at the symbol system level, not upon how the symbol system (and its components) is implemented. How well this ideal holds is a measure of the strength of that level.


The advantages of symbolic architectures are:

  1. much of human knowledge is symbolic, so encoding it in a computer is more straightforward;
  2. how the architecture reasons may be analogous to how humans do, making it easier for humans to understand;
  3. they may be made computationally complete (e.g. Turing Machines).


These advantages are closely tied to one of the fundamental tenets of artificial intelligence, known as the physical symbol system hypothesis. The hypothesis proposes that a physical symbol system has the necessary and sufficient means for general intelligence.


Symbols represent knowledge -- including models of the world. Thus, at levels above the symbol (or architecture) level, knowledge may mediate behavior. This level is known as the knowledge level. Newell characterizes the symbol level in humans as the cognitive band.

Problem Space Hypothesis

Newell introduces the problem space principle as follows. "The rational activity in which people engage to solve a problem can be described in terms of (1) a set of states of knowledge, (2) operators for changing one state into another, (3) constraints on applying operators and (4) control knowledge for deciding which operator to apply next."


Some investigators have posited a domain-independent representation for knowledge called the problem space. Problem spaces, such as the type introduced by STRIPS, are commonly composed of a set of goals, a state or set of states, and a set of valid operators which contain the constraints under which the operator can be applied. The top-level goal is the problem originally posed to the agent. New goals are generated when the agent does not know how to apply any of its available operators rationally to move closer to its goal. The state consists of a set of literals that describe the knowledge of the agent and the present model of the world.
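As a concrete illustration, the sketch below encodes the four ingredients of a problem space: states of knowledge as sets of literals, operators with applicability constraints, goals, and (here) a deliberately trivial breadth-first control strategy. All names and the toy domain are invented for this example rather than taken from STRIPS itself.

  # Illustrative sketch of a STRIPS-style problem space (invented names):
  # a state is a set of literals, an operator has preconditions and effects,
  # and control knowledge is reduced to blind breadth-first search.
  from collections import namedtuple, deque

  Operator = namedtuple("Operator", ["name", "preconds", "add", "delete"])

  ops = [
      Operator("pick-up",  {"hand-empty", "on-table"}, {"holding"}, {"hand-empty", "on-table"}),
      Operator("put-down", {"holding"},                {"hand-empty", "on-table"}, {"holding"}),
  ]

  def successors(state):
      for op in ops:
          if op.preconds <= state:          # constraints on operator application
              yield op.name, frozenset((state - op.delete) | op.add)

  def solve(initial, goal):
      frontier, seen = deque([(frozenset(initial), [])]), set()
      while frontier:
          state, plan = frontier.popleft()
          if goal <= state:
              return plan
          if state in seen:
              continue
          seen.add(state)
          for name, nxt in successors(state):
              frontier.append((nxt, plan + [name]))

  print(solve({"hand-empty", "on-table"}, {"holding"}))   # ['pick-up']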

Real-Time Constraint on Cognition

Image:cogbands.png


Human cognition has been shown to occur within well-defined time frames that are arithmetically related to the number of serial decisions needed to perform the task. The power law of learning is a phenomenological measure of this observation. When a task is first presented to an agent, the agent must deliberate over every move, resulting in slow performance. As the agent learns, fewer and fewer serial decisions need to be made and the agent performs faster. Strict timing limits have been found in humans even for tasks that require only small numbers of serial decisions.


This data illustrates the real-time constraint on cognition: "There are available only ~100 operation times (two minimum system levels) to attain cognitive behavior out of neural-circuit technology." This constraint shows that the interaction between distinct computational units (whether in a parallel or serial organization) is minimal.
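The arithmetic behind the constraint can be sketched as follows; the ~1 ms and ~100 ms values are the order-of-magnitude estimates commonly cited for neural operations and the fastest cognitive acts, so treat this as a rough calculation rather than a measurement:

  \frac{\sim 100\,\text{ms (elementary cognitive act)}}{\sim 1\,\text{ms (neural operation)}} \approx 100 \ \text{serial operation times}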


The data also provides some insight into the decomposition of perception, delivery of symbols to memory, deliberation, and action. This decomposition is not discussed here, but can be found in Newell's Unified Theories of Cognition. It also suggests that the hierarchy (or heterarchy) of deliberation consists of serial and parallel processes that are configured such that these limits emerge.


Many architectures of general intelligence seek to explain the hierarchy or to meet the constraint in order to better understand human cognition.

Knowledge Level

An AI system is said to be a knowledge-level system when it rationally brings to bear all its knowledge on every problem it attempts to solve. Thus, knowledge is the medium of transaction at the knowledge level and the behavioral law is the principle of maximum rationality. This is analogous to a circuit-level description of electrical systems that utilize current and voltage as the medium and follow basic conservation and conversion laws such as Kirchhoff's and Ohm's laws.


Hierarchically, the knowledge level lies above the symbol level where all of the knowledge in the system is represented. In other words, a tacit assumption in AI is that a knowledge-level system must contain a symbol system (i.e. this is simply a re-statement of the physical symbol system hypothesis). According to Newell's unified theory of cognition, the symbol level corresponds to the cognitive band and the knowledge level to the rational band. However, in the latter case, humans only approximate a knowledge-level system, due to the constraints imposed by bounded rationality. In Unified Theories of Cognition, Newell suggested that psychology is the study of this approximation, when the architecture "shows through" and behavior is not mediated by knowledge alone.

Maximum Rationality Hypothesis

Several different views of the nature of rationality in intelligent behavior have been introduced in the study of cognitive architectures and in artificial intelligence in general. Two are briefly discussed here. Simon's notion of bounded rationality is also central to any discussion of rationality.


Maximum Rationality Hypothesis

The Maximum Rationality Hypothesis was proposed by Allen Newell in 1982 as the principle of rationality: "If an agent has knowledge that one of its actions will lead to one of its goals, then the agent will select that action". This formulation results in the law of behavior at the knowledge level. Thus, there is a direct connection between goals, knowledge and subsequent actions.


Principle of Rationality

Anderson offered a different perspective on rationality (called rational analysis) and formulated a different behavioral principle. For Anderson, the principle of rationality is: "The cognitive system optimizes the adaptation of the behavior of the organism". The primary difference between Anderson's formulation and Newell's is that Anderson considers optimality to be necessary for rationality. Newell's formulation, on the other hand, does not say that the best action will be taken, only that there is a connection between goals and behavior, mediated by the knowledge available to the agent.


Rational Analysis

Anderson proposes that the best method for analyzing human cognitive behaviors lies in the analysis of the task rather than in attempting to analyze the methods used by the human to solve the problem.

Principle of Rationality: The cognitive system optimizes the adaptation of the behavior of the organism.

In support he quotes Marr:

An algorithm is likely understood more readily by understanding the nature of the problem being solved than by examining the mechanism (and the hardware) in which it is solved. (p27)


He implies that researchers have confused the analysis of tasks with the analysis of mechanisms because of the existence of signature data, a subject-universal, invariant measure of performance for some task or group of tasks. Anderson argues that the appearance of these data has been taken as evidence of constraints on the architecture of human cognition, while he believes that the data indicate constraints imposed by the task.


Anderson proposes three advantages that rational analysis provides:

  1. An understanding of the nature of the problem can provide strong guidance in the proposal of possible mechanisms.
  2. The task domain provides rationale for constraining the architecture.
  3. Mechanism-focused modeling faces critical indeterminacies that can affect computation or memory mechanisms such as serial versus parallel processing. Analysis of the task domain need not consider these directly.


Anderson states that to properly analyze the task domain from the perspective of the agent, one must also consider:

  1. Cost of computation in the behavior.
  2. That the agent may have adapted to an environment significantly different from the environment in which it is being tested.
  3. Performance measures must be aligned with the goals of the agent to ensure that the appropriate optimization problem is proposed.


Using these caveats Anderson proposes the following recipe for rational analysis:

  1. Precisely specify the goals of the agent.
  2. Develop a formal model of the environment to which the agent is adapted.
  3. Make the minimal assumptions about computational costs.
  4. Derive the optimal behavior of the agent considering (1)-(3).
  5. Examine the literature to see if the behaviors of the agent reproduce empirical human data.
  6. If predictions are off, iterate.


Anderson uses this rational analysis on three signature problems:

  1. Power law of learning
  2. Fan effect
  3. Categorization


In summary, Anderson believes that the mechanism-focused approaches to cognition are doomed by the identifiability problem: the mechanism of cognition is not uniquely defined by the task plus environment. More assumptions must be made to determine the mechanism of cognition than are required to analyze the task domain. Furthermore, the analysis of the task domain, properly constrained and oriented, reproduces the signature data found in the human psychological literature and is, therefore, sufficient for the de facto goals of AI.


Anderson says that cognitive architectures provide a notation for expressing the behavior, but the statement of the information processing problem in the task domain is the key to reproducing the signature data.


Compare Simon's critique.

Bounded Rationality - A Response to Rational Analysis

Simon criticizes Anderson's proposed rational analysis as misdirected based on the following three arguments:

  1. Humans are not optimal and only in some cases locally optimal;
  2. Assumptions made by cognitive modelers about how an agent performs architectural tasks, which Anderson labels unnecessary, are subsequently tacitly repeated by him in his analyses;
  3. Data regarding human behavior on isomorphic task domains explicitly denies the theory. (Question: Item 2 in Anderson's recipe states that one must model the environment to which the agent has adapted. Does this not limit the task domain to particular isomorphs and thereby negate the criticism?)


Optimality

Evolution did not give rise to optimal agents, but to agents which are in some senses locally optimal at best, locally satisfactory in the typical case, and becoming extinct at worst. Thus, a theory based upon optimal behaviors is tenuous at best.


Optimization implies that the goals of the agent are known explicitly. When synthesizing or tasking an agent one can know or determine the goals of the agent, but when analyzing the behavior of an arbitrary agent, one does not know the goals. In fact, the range of rational goals can lead to such variant behavior that assumptions about the goals cannot be made with confidence. The example cited in depth by Simon is that of economic predictions.


Another implicit assumption underlying optimization is that the utility functions are known (see recipe item 3). In fact, real agents must often act with insufficient knowledge by estimating these. Estimates will range from accurate to wrong, from simple to sophisticated. Since rational analysis (a variant of which led to the economic theories plagued with these problems) does not account for these phenomena, it cannot be taken as a panacea paradigm for analysis.


Assumptions

Anderson criticizes mechanism-focused cognitive modelers for making unnecessary assumptions about how an agent performs architectural functions such as memory management and computation. However, in his analyses he is forced to make similar assumptions. Examine the assumptions made by Anderson in his analyses:

  1. Fan Effect
  2. Power-law of Practice
  3. Categorization

Rationality versus behavior

While rational analysis can yield some information about cognition, such as that a solution can be found, it cannot necessarily identify the particular solution found by particular subjects. Anderson argues that by defining the environment to which the subject has adapted, the optimal solution will be the solution determined by the subject, and that these constraints uniquely define the optimum. Simon argues that these constraints are not sufficient to determine uniqueness. Without a uniquely defined solution, subject-specific strategies can be neither determined nor studied.


Bounded Rationality

In 1957, Simon proposed the notion of

Bounded Rationality: that property of an agent that behaves in a manner as nearly optimal with respect to its goals as its resources allow.


Bounded rationality better describes agent behaviors than Anderson's optimal rationality approach for the following reasons:

  • agents are not optimal
  • the methods by which architectural tasks are performed significantly affect the agent's behaviors
  • the representations of information and the strategies for solving problems must all be discovered by the agent
  • agents' behaviors across isomorphic task domains are not constant


In considering bounded rationality, Simon suggests that researchers not limit their focus to signature data but look for all the data they can in order to uncover the underlying processes. He concludes by providing a lower bound of relevance to cognitive analysis:

The exact ways in which neurons accomplish their functions are not important; only their functional capabilities and the organization of these are.

Efflorescence of Adaptation

Humans demonstrate a remarkable amount of versatility in their behavior. Playing games, inventing games, entertainment, even recipes, are all demonstrative of the variety of daily and routine human activity. In considering this behavior, which he characterized as the efflorescence of adaptation, Newell argued that such behavior was a demonstration of an underlying symbol system in humans. Such a symbol system can provide for the effectively infinite variety of behaviors and responses humans engage in on a regular (and mostly unnoticed) basis.

Cognitive Architecture: A Definition

An architecture can be defined simply as the portion of a system that provides and manages the primitive resources of an agent. For many cognitive architectures, these resources define the substrate upon which a physical symbol system is realized. Addressing the many issues surrounding the choice, definition, extent, and limits of these resources and their management is one of the purposes of this document. This analysis attempts to assist in determining the necessary, sufficient and optimal distribution of resources for the development of agents exhibiting general intelligence.


Architectures, in general, have divergent features that lead to different properties. For example, some utilize a uniform knowledge representation, some a heterogeneous representation, and others no explicit representation at all. These decisions then lead to the support of specific capabilities. The choice of features is often made by following some explicit methodological assumptions, often driven by the domains and environments in which the architecture will be used. The variety of these choices is what is responsible for the variety of architectures. One way to further constrain the number of choices is to use psychological or neuroscientific validity as a constraint in architecture design. An additional advantage of this approach is that there is a synergistic interchange between the studies of artificial and biological intelligence; in particular, Newell has proposed that computer modeling tools, as represented by cognitive architectures, now allow the formulation of unified theories of cognition.


However, many researchers purposely ignore the constraints posed by human cognition. Often this is because they are interested in developing agents which populate and behave effectively in some environment; studying the interactions between the architecture and the environment (which could be a static, problem-solving situation or a highly dynamic, reactive environment) is of primary concern. In this sense, the term cognitive architecture is a little misleading. Although it is used throughout the document, a better term might be agent architecture, which would include both those systems that make an explicit attempt to model human psychology (i.e. cognitive architectures) and those which simply explore some aspects of general intelligent behavior.


Guide to Individual Architectures

Unified Theories of Cognition

A unified theory is a theory which attempts to explain the details of all mechanisms of all problems within some domain. All previous results should be reproduced and explained. A unified theory of cognition has as its domain all of the cognitive behavior of humans. Newell (1990) proposed that the current state of the art in experimental psychology, based on years of accumulated results, could now support such theories. These results are often clustered into more or less independent areas of specialization. For example, the ability to process language is independent of the ability to memorize dates, to solve mathematical problems, or to learn to ski moguls. Non-cognitive psychological behavior is not included.


To assert a unified theory of cognition, one must propose mechanisms by which the results of these human cognitive experiments can be reproduced. The codification and simulation of these mechanisms is tantamount to designing an architecture for general intelligence. In this sense, if one wishes to build an artificially intelligent agent using the human as model, the architecture proposed for the agent could be considered a unified theory of cognition.


The development of a unified theory of cognition has been driven by the need or desire for an empirical argument or analytical proof of the sufficiency of a symbol-level system to support general intelligence. Without analytical tools to determine the necessary or sufficient structure needed to support some capability, it becomes tempting to build an agent that needs the capability using either a toolbox approach or a domain-specific module approach. While these approaches have the advantage of side-stepping the difficult problem of determining sufficiency, they leave the larger problem unaddressed.


By presenting an architecture for general intelligence as a unified theory of cognition, one can bring additional knowledge to bear on the analysis of sufficiency. A working model - the human brain - is certainly sufficient to display general intelligence. The assumption is that one should push the limits of the architecture to produce a capability before building some domain-specific or problem-specific tool to overcome the difficulty. Additionally, the set of data represented by experiments in human cognition provides a standard against which one can measure performance, and from which one can gain inspiration and insight for further architectural revisions.


Psychological Phenomena Addressed by Cognitive Architectures

Fan Effect

The Fan Effect is central to Anderson's explanation of the brain's ability to optimize memory retrieval by keeping better access to memories that are more likely to be relevant. The effect is a natural extension of the propositional network (see the definition of symbolic representation) Anderson uses to represent concepts in the brain. Concepts are connected via conceptual links to other concepts, and the more connections a concept has (that is, the greater its fan), the more thinly its activation is divided among them, and the slower any one associated fact is retrieved.


This effect was proposed with Anderson's ACT* methodology for concept classification as a model for the human brain, and was based on the assumption that the brain uses a spreading activation of concepts in order to do classification. His conclusion is that the associativity of the brain is based on the probabilistic nature of the environment it is exposed to, and that the ability to classify is an extension of this; the fan of the network is not the critical factor in classification. This is one of his arguments for the notion that to understand the workings of a cognitive architecture (namely, the human brain), one must look not within the architecture, but at the environment the architecture acts in. This is known as rational analysis.
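A rough sketch of the fan idea follows. The constants and data below are invented for illustration; this is not Anderson's actual ACT* activation equations, only the qualitative relationship between fan and retrieval time.

  # Illustrative sketch (invented constants, not ACT*'s equations): activation
  # from a source concept is divided among its associated facts, so a larger
  # fan means less activation per fact and slower retrieval.

  associations = {                       # concept -> facts it participates in
      "doctor": ["the doctor is in the park"],
      "lawyer": ["the lawyer is in the park",
                 "the lawyer is in the church",
                 "the lawyer is in the bank"],
  }

  SOURCE_ACTIVATION = 1.0
  BASE_TIME, SCALE = 0.4, 0.3            # seconds; arbitrary toy values

  def retrieval_time(concept):
      fan = len(associations[concept])           # number of linked facts
      activation = SOURCE_ACTIVATION / fan       # activation divided by the fan
      return BASE_TIME + SCALE / activation      # weaker activation -> slower

  print(retrieval_time("doctor"))   # fan 1 -> 0.7 s
  print(retrieval_time("lawyer"))   # fan 3 -> 1.3 s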

Assumptions

  1. Information is stored and associated according to the likelihood of association.


Simon strongly discounts this point-of-view in his article titled "Cognitive Architectures: Comment". His argument is essentially one of bounded rationality.

Power Law of Practice

Subject universality is often considered necessary evidence of architectural mechanisms of behavior. One such universal observation from psychology is the power law of practice. This law states that the logarithm of the reaction time for a particular task decreases linearly with the logarithm of the number of practice trials taken. Qualitatively, the law says only that practice improves performance. However, the quantitative statement of the law and its applicability to a wide variety of different human behaviors -- immediate-response tasks, motor-perceptual tasks, recall tests, text editing, and more high-level, deliberate tasks such as game-playing -- have suggested it as an architectural result of learning. For example, the power law of practice has been called upon repeatedly to demonstrate the psychological validity of Soar's learning mechanism, chunking.
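Stated quantitatively (the symbols here are a common notation rather than one tied to a particular source: T(N) is the reaction time on the N-th practice trial, B the time on the first trial, and alpha the learning rate), the law is:

  T(N) = B\,N^{-\alpha} \qquad \Longleftrightarrow \qquad \log T(N) = \log B - \alpha \log N

which is why practice curves appear as straight lines on log-log axes.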

Rational Analysis

As an application of rational analysis, Anderson argues that the power law of practice reflects an environmental rather than an architectural constraint.

Assumptions

  1. Need probabilities are distributed according to Zipf's law.
  2. Desirability is a gamma function.
  3. Usage decays exponentially. This is a mechanistic assumption and is made with a priori notions of mechanism.
  4. Retrievals are a Poisson process.


(Several of these assumptions are discussed by Simon in a response to this work.)

Categorization

Anderson applies his rational analysis to the problem of categorization to show that its signature data can be explained by analysis of the task alone:

  1. To a degree, people extract central tendency of a set of instances.
  2. To a degree, people extract tendency of a set of instances from particular exemplars.
  3. Subjects pick up the existence of multiple central tendencies.
  4. Categorization is nonlinear in the size of the relevant feature space.
  5. There is an effect of category size.
  6. In some cases there are basic level categories.
  7. Feedback is necessary for categories to emerge.
  8. Category formation is positively correlated with predictive utility of the category.


The optimal algorithm Anderson asserts for categorization is Bayes's theorem, which he subsequently simplifies, since the theorem assumes that all information about the items to be characterized is known. On the basis of this Bayesian analysis, all eight signatures are justified.
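A rough sketch of the kind of computation involved is given below. This is not Anderson's actual model; the scoring rule, constants (such as NEW_CATEGORY_WEIGHT), and data are invented here purely to illustrate Bayesian category assignment under assumptions like those listed in the next subsection (a fixed, independent feature list and a known chance that a brand-new category is needed).

  # Illustrative sketch (invented, not Anderson's equations): naive-Bayes style
  # scoring of candidate categories, plus a "none of the above" option.
  from collections import defaultdict

  NEW_CATEGORY_WEIGHT = 0.5        # assumed prior weight of a new category

  def category_scores(members, feature_counts, observed_features):
      total = sum(members.values())
      scores = {}
      for cat, n in members.items():
          score = n / (total + NEW_CATEGORY_WEIGHT)            # prior ~ category size
          for f in observed_features:
              score *= (feature_counts[cat][f] + 1) / (n + 2)  # smoothed P(feature|cat)
          scores[cat] = score
      scores["<new>"] = NEW_CATEGORY_WEIGHT / (total + NEW_CATEGORY_WEIGHT)
      return scores

  members = {"bird": 8, "fish": 4}
  feature_counts = {"bird": defaultdict(int, {"flies": 7}),
                    "fish": defaultdict(int, {"flies": 0})}
  print(category_scores(members, feature_counts, ["flies"]))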

Assumptions

Anderson makes several assumptions in this analysis, including:

  1. Feature space is fixed.
  2. Features are independent.
  3. Features naturally form disjoint partitions.
  4. Feature list is static.
  5. The probability that a new category is needed to characterize an object is known.


(Several of these assumptions are discussed by Simon in a response to this work.)

Cognitive Impenetrability

Pylyshyn argues that, for the purposes of cognitive science, there is a fundamental difference between the cognitive architecture and other levels of the system. The architecture acts as the realization of a theory of cognition, and without an architecture defined independently from other aspects of the system, the computational model of that theory cannot claim to be a literal model of a cognitive process. (This is distinctly a cognitive science approach, in which computational models are intended to explain cognitive processes.) From this argument, he offers three reasons as a basis for this view:

  • Architecture-relativity of algorithms and strong equivalence: We can design an algorithm that corresponds to a specific cognitive process only when we have first made relevant assumptions about the architecture.
  • Architecture as a theory of cognitive capacity: The architecture provides cognitive constants (namely, capacity), while the algorithms provide parameters determined by the incoming information.
  • Architecture as marking the boundary of representation-governed processes: A general assumption in cognitive science is that there is a domain of mental phenomena that can be explained in terms of representations and functions that operate over those representations (i.e., this is a cognitive science version of the physical symbol system hypothesis). Furthermore, these processes remain invariant over changes in goals and knowledge. From this, it is suggested that the architecture must be cognitively impenetrable.

A Modular Theory of Cognition - Society of Mind

Marvin Minsky, in his book The Society of Mind proposes a scheme of the same name, in which the mind is composed of many smaller processes which he calls agents. These agents by themselves cannot perform any thought processes, but when combined into "societies", true intelligence arises.


This theory attempts to provide a unified model of the mind, similar in scope to Newell's unified theory of cognition, but with the premise that the mind's individual components shed little light on the mind as a whole; it is only when they are connected and interacting that their purpose becomes clear. These agents can then be organized into various heterarchical or hierarchical structures, with agents at the top commanding (i.e., turning on and off) those below, and those at the bottom often being muscle-motor agents. Interaction between agents can range from simple switching to conflict between two agents over the solution to a goal.


This excerpt from The Society of Mind sums up the point of Minsky's book very well:


The power of intelligence stems from our vast diversity, not from any single, perfect principle. Eventually, very few of our actions and decisions come to depend on any single mechanism. Instead, they emerge from conflicts and negotiations among societies of processes that constantly challenge one another. --Chapter 30.8, page 308.

Benchmarks and Test-Beds for Cognitive Architectures

Hanks, Pollack, and Cohen discuss possible applications of benchmarks and test beds to general cognitive architectures. A benchmark is simply a standard task, representative of problems that will occur frequently in real domains. The advantage of using a benchmark is that comparative analysis of performance is possible; architectures are applied to the same task and the results of each measured against others. The problem with benchmarks is that they encourage focusing on the benchmarking problem instead of the real-world task and that benchmarks may be unconsciously prejudiced by their designers. In other words, the benchmarks should come from people who do not have an investment in the results of systems applied to the benchmark. Another problem with benchmarks, from the standpoint of AI, is that there are really no standard tasks for AI problems. However, several benchmarks have been proposed in AI, based on their recurrence. These include the Yale Shooting Problem and Sussman's anomaly.


Test beds are the environments in which the standard tasks may be implemented. In addition to the environment itself, these tools provide a method for data collection, the ability to control environmental parameters, and scenario generation techniques. The purpose of a test bed is to provide metrics for evaluation (objective comparison) and to lend the experimenter a fine-grained control in testing agents.


The use of test beds -- especially in small (highly abstracted) environments -- is somewhat controversial. There is a tension between bottom-up and top-down approaches to agent design. The former, which is somewhat reductionist, seeks to create agents by defining capabilities independently of one another. Test beds provide the means for developing these agents in a piecemeal fashion. This results in an incremental theory of behavior. The top-down approach is more engineering-oriented: agents are built and then their performance is tested. For such an approach, test beds offer only partial utility, since abstracting away environmental considerations may make the agent appear more capable than it actually is (i.e. the results may not be as general as they appear).


In both these approaches, a small problem is used as an exemplar for very large problems. Yet there are issues in using small problems to predict or validate behavior on larger ones. Most significant are the issues of scalability and generality. In the first case, maintaining rationality with the addition of more knowledge and more capabilities may become impossible; efficiency decreases as the scale of the problem increases. Similarly, there may be interactions among capabilities and between individual capabilities and the environment that were not considered in the smaller problem. Thus, the system does not generalize to larger problems.


One way to avoid these issues is to experiment on full-scale systems. Controlled experimentation on such problems has been considered very difficult, if not impossible. However, the systematic evaluation of large-scale systems will be necessary as cognitive architectures move out of the research laboratory and into real-world applications.

Mapping Simulation to the Real World

When designing a system, one would hope that it is intended for use in the real world sooner or later. Yet it is impractical to assume that a first-generation system can be adequately tested and analyzed in a dynamic environment. What these basic conclusions reveal is that almost every architecture which stands the test of time is going to have to make a transition from a simulated world to the real world.


This transition is no easy task, to say the least. A simulation requires that some assumptions be made about the environment, and these assumptions can be crucial to the generality of any system that uses that environment. A classic example of this is a system which assumes that the necessary elements of the environment are instantly recognizable and that the actuators of the system perform flawlessly. For instance, when designing a robot, we could assume it can recognize a green box, and we exploit this in the environment by representing it as a logic predicate GreenBox(P1). Similarly, we could have it execute its movement plan by outputting commands such as "Turn Left" and "Go Forward 2". If we actually expected to implement whatever architecture we designed on a real system, the dynamic nature of the world and the difficulties of sensing would set us up for a rude awakening.
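As a sketch of the kind of abstraction being criticized (the class and method names below are invented for illustration, not part of any particular simulator), a simulated robot might see the world only through clean predicates and perfect actuation:

  # Illustrative sketch: the simulated robot gets instantly recognizable
  # objects and flawless actuators, which a real robot would not.
  class SimulatedWorld:
      def percepts(self):
          return {"GreenBox(P1)"}            # perception reduced to a predicate

      def execute(self, command):
          print("executed:", command)        # actuators never fail or drift
          return True

  robot = SimulatedWorld()
  if "GreenBox(P1)" in robot.percepts():
      robot.execute("Turn Left")
      robot.execute("Go Forward 2")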


Even if the simulation environment does not have such a fundamentally slanted view of the world, the manipulation of the system environment can be enough to weaken the value of the test bed. Consider a system that uses an explanation-based learning (EBL) algorithm and which always runs for a set (short) amount of time. Even though an agent could get very good at using EBL for goal reconstruction, opportunistic behavior, and so on, this ignores the fact that if we let it run long enough, the system may slow down significantly because of the utility problem. By not addressing the basic problems of EBL, any progress made will be hampered. In this case it is not the environment per se which is affecting the development of the system, but the experimental setting itself.


In the latter case, the author of a cognitive architecture should recognize the drawbacks of the system under development, and also realize that subverting them can only harm the development and refinement of the architecture. The former case is much trickier: how detailed can a model be and still be a model? There are no easy answers to this question. It helps to have a specific domain in mind for a system when testing, and to emulate that domain as faithfully as possible. But for a system which truly displays general intelligence, there is no one test bed which is wholly sufficient.

General Intelligence and the Time-Scale of Human Action

An agent capable of general intelligence approximates the knowledge level on an unbounded set of problems with little inherent knowledge of the domain. The capabilities needed to support general intelligence are not generally known (although many, such as learning, have been empirically determined to be of significant importance). Additionally, no theory exists for determining either the necessary or sufficient structures needed to support particular capabilities, and certainly not those needed to support general intelligence (although see Unified Theories of Cognition for work toward developing such theories).


As direction and inspiration towards the development of such theory or tools, Newell posits that one way to approach sufficiency is by modelling human cognition in computational layers or bands.


Image:cogbands.png


He suggests that these computational layers emerge from the natural hierarchy of information processing. The lowest layers comprising the biological band perform the most primitive tasks in a machine-efficient way. The next level up, the cognitive band, is postulated as the first strong layer and the first layer predominated by symbols. This can be taken to be the symbol layer. Many of the architectures studied in this document operate at the symbol level in order to provide information processing in the rational band.


The place where analytical tools could most benefit AI is in the analysis of necessary and sufficient support for capabilities. Also needed is a method of decomposition which could bound or define the capabilities necessary to support general intelligence.

Social Band

The time scale of human action extends from the relatively short-term behavior characterized by task behavior at the rational band to much longer periods. The chief characteristic of this longer time scale is that it is largely social; humans move throughout most hours, days, and weeks in social communication with one another. Because the social band is obviously a very weak band, it is difficult to characterize in more complete terms. However, additional bands above the social band might include historical and evolutionary epochs as well.

Rational Band

The rational band is that part of the time scale of human action that is characterized by task behavior (classical reasoning). This band is of primary concern to expert-system designers and logicist AI researchers. It is characterized by knowledge as the medium of transaction; thus the rational band and the knowledge level are somewhat equivalent terms. The rational band assumes that the knowledge representation is metaphysically and epistemologically adequate. However, for systems unconstrained by an underlying cognitive band, heuristic adequacy is a primary concern. This simply means that although knowledge may be applicable to a given problem, accessing and using the knowledge may be intractable (or so slow that it is effectively so). Such systems may be said to ignore the real-time constraint on cognition.

Cognitive Band

The cognitive band refers to the layer of cognition most often modeled by architectures that should exhibit general intelligence. At the top of the biological band, an agent first becomes aware of symbols in the architecture (i.e., symbols have been accessed in the lower band). Automatic (over-trained) responses are possible at this level. The lower third of the cognitive band comprises behaviors that utilize few symbols and a minimum of deliberation. These take ~100ms to perform and examples include lifting a cup or drinking.


The middle third of the cognitive band comprises actions that involve several serial decisions taking ~1 sec such as reading and comprehending a word. These actions are called composed operations because they are built from the simpler, elementary deliberate operations of the previous layer. The top third of the cognitive band comprises actions that take approximately 10 seconds such as reading a sentence or balancing a cup on a fulcrum. These actions may be considered the composition of both elementary deliberate operations and the composed operations of the previous level.


Architectures which model the cognitive band are typically applied to problems within this layer or just above it, in the rational band.

Biological Band

The biological band, which is situated just below the cognitive band in the time scale of human action, is characterized by the physiological properties of neurons. The neural level, within the biological band, is a strong level and may be described in terms of well-specified behavioral laws (e.g., Hodgkin-Huxley action potential propagation, Rall's cable equations, etc.). Above the neural level, however, is the neural circuit level, which has not been as carefully scrutinized as the neuron level. The medium of transaction at this neural circuit level is activation. The action of this level may be considered as symbol access, with patterns of activation across a number of neurons corresponding to a specific symbol. The biological band is the chief concern of connectionist researchers, and the addition of specific organization to neural circuits leads to an approximation of the lower levels of the cognitive band.
