Knowledge representation
Latest revision as of 18:03, 4 March 2013
Note: This stub is just copy/paste of some paragraphs I wrote many years ago when I was a graduate student - Daniel K. Schneider 19:31, 14 August 2007 (MEST) (be warned).
Introduction
Adopting a cognitivist stance, one can describe cognitive processes as operations carried out on symbol structures.
Some earlier cognitivists adopted the I -> PS -> R model (as opposed to the behaviorist S->R model) and tried to model human behavior with symbolic artificial intelligence techniques.
  I ----------------> PS ----------------> R
  Interpretation of   principle-oriented   Complex Reaction
  complex situation   problem solving
In order to model this kind of behavior, one must assume that knowledge is structured, i.e. composed of distinguishable elements which are interconnected in a well-defined way.
Knowledge representation is a very difficult matter, but to start with, and in modeling terms, one can think of it as a combination of data structures and interpretive procedures that leads to knowledgeable behavior (The Handbook of AI: 143).
For some authors, knowledge is stored either in episodic or semantic memory. The former is organized along spatio-temporal dimensions, the latter according to semantic, content-oriented principles, e.g. networks of concepts.
This kind of modeling led to research in artificial intelligence and education, and intelligent tutoring systems in particular.
In the literature there exist several knowledge representation models, some of which are complementary, i.e. relate to different kinds of knowledge. Below we introduce some of these (but the whole article needs rewriting - Daniel K. Schneider).
Plans
Intelligent behavior is to a great extent problem solving or planning activity, i.e. doing involves thinking. Let's now examine in more detail the pioneering work of Miller, Galanter and Pribram (60), which can be seen as the first in psychology to integrate system- and information-theoretical concepts for the cognitive analysis of action. Their framework grew out of Tolman's (51) cognitive maps and Bartlett's (46) schema.

Action is viewed as hierarchically organized on different levels: molar units are decomposed into molecular units. E.g. the structure of some behavior X might be decomposed into the sequence (A,B). Thus, X = AB. A can in turn be composed of (a,b,c), and so on. The authors claim that a description of some behavior cannot rely on only one level. A complete description must contain all levels, otherwise much of the general structure is lost. Without a superior ordering principle, the ordering of many elements does not make sense.
The central concept of the theory is the plan. A plan is defined as a hierarchy of instructions, i.e. a description of a behavior such that all possible actions are executed in the right sequence. This post-behavioral definition fits into our general philosophy: the plan (as a model) not only matches the researcher's description of cognitive processes, it also serves as an instruction for the organism. Miller et al.'s "plan" is a very general device for describing very abstract strategies of behavior as well as very detailed operations within an overall plan. "A Plan is any hierarchical process in the organism that can control the order in which a sequence of operations is to be performed" (Miller et al. 60:16). If a plan contains only a very general sketch of a global action, we can call it a general strategy of behavior; its molecular components are the different tactics that the individual can use. When a plan is executed, operations are guided by the plan, and parts of the hierarchical steps in the plan are carried out.
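The hierarchical decomposition above (X = AB, with A composed of (a,b,c)) can be sketched as a small data structure and a traversal that "executes" the plan. This is only an illustration of the idea; the unit names are the article's abstract examples, not anything from Miller et al.

```python
# A minimal sketch of a plan as a hierarchy of instructions: molar units
# decompose into molecular units, and executing the plan means traversing
# the hierarchy in order. X = AB; A = (a, b, c); B = (d, e) is invented.

plan = ("X", [("A", ["a", "b", "c"]),
              ("B", ["d", "e"])])

def execute(unit):
    """Depth-first traversal: yields the molecular operations in the
    order the hierarchical plan prescribes."""
    if isinstance(unit, str):      # a molecular operation: execute it
        yield unit
    else:                          # a molar unit: (name, sub-units)
        _, parts = unit
        for part in parts:
            yield from execute(part)

print(list(execute(plan)))  # -> ['a', 'b', 'c', 'd', 'e']
```

Note how a description at only one level (say, just "X" or just the letters) loses the structure; the nesting carries the superior ordering principle.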
For Miller et al., plans are hierarchical systems composed of several levels. When in operation, they can be described with the aid of the well-known feedback loop, which becomes the structural cornerstone of behavior and updates the older S-R model. The Test-Operate-Test-Exit (TOTE) unit describes a process which operates in the following form:
                +-----------+
  ------------> |   TEST    | ------------------>
                +-----------+   (congruence)
                   |      ^
   (incongruence)  |      |
                   v      |
                +-----------+
                |  OPERATE  |
                +-----------+
Incoming information is compared during a test phase to certain knowledge. If there is an incongruence, a reaction is triggered. The operate phase describes what an organism does. After each (major) operation, congruence is tested again, and the operation continues until the problem is solved. TOTE units can be hierarchical: instead of executing simple operations, a complex test hierarchy can represent complex plans. Plans are understood in a very wide sense; strategically interpreted, they can also represent values. Motivations are then expressed in the testing phase of a TOTE unit.
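The TOTE loop can be sketched in a few lines of code. The test and operate functions here are arbitrary placeholders (a toy "reduce a quantity to zero" task), just to show the control structure: test, operate, test again, exit on congruence.

```python
# A minimal sketch of a TOTE (Test-Operate-Test-Exit) unit. The concrete
# test and operation below are invented for illustration.

def tote(test, operate, state, max_steps=100):
    """Run the Test-Operate-Test-Exit loop: while the test detects
    incongruence, operate and test again; exit once state is congruent."""
    steps = 0
    while not test(state) and steps < max_steps:  # Test: still incongruent?
        state = operate(state)                    # Operate
        steps += 1
    return state                                  # Exit on congruence

# Example: operate until the quantity reaches the target of 0.
result = tote(test=lambda s: s <= 0, operate=lambda s: s - 1, state=5)
print(result)  # -> 0
```

A hierarchical TOTE system would simply pass another `tote(...)` call as the `operate` argument, nesting units within units.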
Human problem solving and production systems
Miller's research later led to rational and quasi-rational models of action, e.g. Newell and Simon's human problem solver (Newell & Simon 1972). The authors stated five very general propositions about the processes of human problem solving:
- Humans are representable as information processing systems when engaged in certain problem solving tasks.
- That representation can be formalized into detail. That means that we can simulate these processes and that we can describe a problem as data.
- Formalization means writing a system in which the whole information processing system can be implemented. A system consists of programs. Thus simulation can be complex and automated.
- Substantial subject and task differences exist among programs. This means that people attack problems differently and that problems are different from each other.
- The task environment determines to a large extent the behavior of the problem solver. It is basically the environment which is complex; the human problem solver has to reduce that complexity in order to solve a problem.
Their General Problem Solver model is based on so-called means-ends analysis. A problem is described as the difference between a current state A of the world and a desired state B of the world. The problem thus generates the goal of reducing that difference, i.e. of transforming A into B, and the solver looks for means to do so.

A problem can be stated more precisely in terms of objects, i.e. things which can be manipulated, and operators. The objects are described by their features and by the differences that can be observed between pairs of objects. A problem can thus be decomposed into subproblems. An operator is something that can be applied to certain well-defined objects in order to produce different or new objects. Each problem solver possesses so-called "difference tables" that tell him how to apply an operator to reduce a specific difference. That kind of knowledge is very task-dependent and is part of the task environment, which also contains a description of the problem and the goal.

This describes a formalism close to the TOTE model: in order to achieve some goal, the problem solver has to decompose the problem into manageable sub-problems to which he can apply known operators.
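The means-ends scheme just described can be sketched as follows. This is not GPS itself, only a toy reconstruction of the idea: states are feature sets, each operator has preconditions and additions, and the "difference table" role is played by checking which operator's additions reduce the current A-vs-B difference. The tea-making domain and all names are invented.

```python
# A minimal sketch of means-ends analysis: pick an operator relevant to the
# current difference; if its preconditions are unmet, make them a subgoal.
# Operators and features (a trivial tea-making domain) are invented.

operators = {
    "boil_water": ({"water"}, {"hot_water"}),  # (preconditions, additions)
    "add_tea":    ({"hot_water"}, {"tea"}),
}

def achieve(state, goal, depth=5):
    """Reduce the state-vs-goal difference; recurse on unmet
    preconditions as subgoals. Returns (new_state, plan) or None."""
    if goal <= state:                        # no difference left
        return state, []
    if depth == 0:                           # give up on deep recursion
        return None
    for name, (needs, adds) in operators.items():
        if adds & (goal - state):            # operator relevant to difference
            sub = achieve(state, needs, depth - 1)  # subgoal: preconditions
            if sub is not None:
                mid_state, sub_plan = sub
                return mid_state | adds, sub_plan + [name]
    return None

state, plan = achieve({"water"}, {"tea"})
print(plan)  # -> ['boil_water', 'add_tea']
```

The sketch handles a single chain of differences; the real GPS also tracked multiple competing differences and operator orderings, which is omitted here.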
The GPS model laid the foundations for a programming technology called production system that later became the core of so-called expert systems.
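The core of a production system is a set of condition-action rules fired against a working memory until no rule applies. The sketch below illustrates that recognize-act cycle; the rules and facts are invented examples, not anything from an actual expert system.

```python
# A minimal sketch of a production system: condition-action rules fire
# against a working memory of facts until quiescence. Rules are invented.

rules = [
    (lambda wm: "fever" in wm and "rash" in wm, "measles_suspected"),
    (lambda wm: "measles_suspected" in wm,      "see_doctor"),
]

def run(working_memory):
    """Recognize-act cycle: fire any rule whose condition matches and
    whose conclusion is new; stop when no rule fires."""
    wm = set(working_memory)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(wm) and conclusion not in wm:
                wm.add(conclusion)   # act: add the rule's conclusion
                changed = True
    return wm

print(sorted(run({"fever", "rash"})))
# -> ['fever', 'measles_suspected', 'rash', 'see_doctor']
```

Real production systems add conflict-resolution strategies for choosing among several matching rules; this sketch simply fires them in order.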
Semantic Networks
See semantic network
Schemas and similar structures
Much human knowledge is probably organized in large chunks, as much research in text understanding points out. Experimental research on story recall (Bartlett 32) demonstrated convincingly, as early as 1932, that subjects confronted with new information (an unknown story) try to relate it to already known knowledge in a particular and systematic way. Of a given story, many elements were deleted or deformed, and new elements were added. This would not be surprising if it did not happen in a systematic way:
- Strange, unknown elements of the story are translated into better-known concepts,
- difficult concepts are skipped,
- the story is reorganized in order to gain (subjective) meaning.
We can generalize these observations, which have since been replicated many times. Processing of "narrative" input relies heavily on the stereotypical knowledge we have about social episodes. Furthermore, input is integrated into canonical structures that help organize the story and its memorization. Research on the understanding of other textual structures suggests that similar hypotheses can be made about objects such as typical roles of persons, institutions, etc.
Some experimental research in more traditional cognitive science concerned so-called "natural kinds", the way in which people classify real objects. The simplest theories state that all encountered objects are normally compared to prototypes, detailed descriptions of a kind (type), and understood in terms of allowable deviation. We think that the more sophisticated "feature-set" theory better fits reality: individuals have a high sensitivity to the correlations of characteristics they encounter in the environment. Furthermore, they have a tendency to build abstractions over the sets of characteristics that define a category. Of course, they also sometimes memorize specific examples. Consequently, when people encounter an object, it will be compared to the set of important characteristics defining classes of objects, but also to specific exemplars which can serve as negative or positive prototypes. Such more traditional research works with simple objects; generalizations could be attempted only for single objects and concepts.
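The feature-set account above can be sketched as follows: an object is compared both to each category's abstracted feature set and to stored exemplars, with a remembered exemplar able to override the abstract comparison. The categories, features and the penguin exemplar are invented toy data, not experimental material.

```python
# A minimal sketch of feature-set classification with stored exemplars.
# All categories, features and exemplars are invented for illustration.

categories = {  # abstracted characteristic sets per category
    "bird": {"feathers", "beak", "flies"},
    "fish": {"fins", "scales", "swims"},
}
exemplars = {  # specific remembered cases: name -> (category, features)
    "penguin": ("bird", {"feathers", "beak", "swims"}),
}

def classify(features):
    """A remembered exemplar that matches exactly wins; otherwise score
    each category by overlap with its abstracted feature set."""
    for name, (cat, ex_feats) in exemplars.items():
        if features == ex_feats:
            return cat
    return max(categories, key=lambda c: len(features & categories[c]))

print(classify({"feathers", "beak", "swims"}))  # -> 'bird' (via the exemplar)
print(classify({"fins", "scales", "swims"}))    # -> 'fish'
```

The penguin case shows why exemplars matter: by abstract overlap alone, a swimming non-flyer would score poorly as a bird.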
Unfortunately, it is much harder to test how more complex structures might be represented, and harder still to know how they might intervene in a problem-solving process such as the understanding of "narrative" text. It can be convincingly shown, however, that people do "frame" perception in certain systematic ways (e.g. Tversky and Kahneman ???). Such global patterns are probably not only the result of search; they would be stored as complete chunks. Consequently, cognitive science proposes several forms of knowledge organization, which we would like to sketch out by giving the definitions of DeBeaugrande (81:90):
- Frames are global patterns that contain common-sense knowledge about some central concept.
- Frames state what things belong together in principle, but not in what order things will be done or mentioned.
- Schemas are global patterns of events and states in ordered sequences linked by time proximity and causality.
- Unlike frames, schemas are always arrayed in a progression, so that hypotheses can be set up about what will be done or mentioned next in a textual world.
- Plans are global patterns of events and states leading up to an intended goal.
- Plans differ from schemas in that a planner [...] evaluates all elements in terms of how they advance toward the planner's goal.
- Scripts are stabilized plans called up very frequently to specify the roles of participants and their expected actions.
- Scripts thus differ from plans by having a pre-established routine.
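The four global patterns defined above can be contrasted with small data-structure sketches. Everything here (the restaurant domain, slot names, events) is an invented example chosen only to make DeBeaugrande's distinctions concrete.

```python
# Minimal sketches of the four global patterns. All content is invented.

frame = {"concept": "restaurant",                 # frame: what belongs
         "slots": {"tables", "menu", "staff"}}    # together, unordered

schema = ["enter", "order", "eat", "pay", "leave"]  # schema: events ordered
                                                    # by time and causality

plan = {"goal": "have_dinner",                    # plan: elements evaluated
        "steps": ["find_restaurant", "enter",     # by how they advance
                  "order", "eat"]}                # toward the goal

script = {"roles": {"customer", "waiter", "cook"},  # script: a stabilized
          "routine": schema}                        # plan with fixed roles

def predict_next(event, schema):
    """A schema supports hypotheses about what will be done or
    mentioned next in a textual world."""
    i = schema.index(event)
    return schema[i + 1] if i + 1 < len(schema) else None

print(predict_next("order", schema))  # -> 'eat'
```

The key contrast is visible in the types: the frame's slots are an unordered set, while the schema and script impose a sequence that licenses prediction.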
Using such global patterns, people can greatly reduce complexity in inference processes. These patterns also allow one to retain much more information in active storage at a time, i.e. one can understand very complex situations by integrating the information into coherent knowledge structures instead of building up coherent meaning by manipulating small "local" concepts. The structures we cited are not the only ones discussed, but they illustrate the general idea very nicely. It is important to realize that these structures not only represent hooked-up information, but also have processing knowledge attached that is activated when used. Know-what and know-how are intrinsically related.
Bibliography
- Newell, A. & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.