Chapter 8: Concepts, Categories, Networks, and Schemas

Thus far, the author has considered how sensory data is received and processed into memories, and how those memories are accessed. The next set of questions concerns the way that this information, in aggregate, is stored and organized within the mind for future retrieval and use.

He again reflects on the distinction between declarative knowledge (knowing facts about things) and procedural knowledge (knowing how to perform actions that lead to outcomes). The two are, of course, interrelated, but procedures do not encompass the universe of all facts: there are many things we know that do not seem to have a practical application, and practical application often omits the details about a thing that are not salient to a specific task.

It is also recalled that our mind is constantly processing three kinds of information: the sensory data we are receiving at the moment, notions we form that have no basis in perception, and memories that pertain to both (past sensations and ideas).

There's a brief mention of artificial intelligence: storing data in computer memory is simple enough, but where AI fails is in being able to access the information that is relevant to a query, choose which bits of information are most salient, and respond in a meaningful way.

Ultimately, that is the functional goal of cognition. To organize a heap of information is fussy busywork unless this organization leads to efficiency and effectiveness when the information must be put to use.

Organization of Declarative Knowledge

The author begins with some basic definitions that are fundamental to symbolic knowledge: concept, category, network, and schema.

There is a brief mention of the dysfunction of organization, in that it can form stereotypes. The belief that all apples are red will lead us to disregard a fruit of any other color from belonging to the category, or the schema of criminals may lead us to assume all people of a given ethnicity are to be included. In such instances, it is necessary to challenge the qualifiers of category and schema, and it can be said that education is a means of developing and refining our classification system.

Concepts and Categories

The manner in which the universe of things can be grouped into categories is largely arbitrary. Our understanding of things is based implicitly on categorization: to state that a cat is an animal is essentially to indicate that a category of "animals" exists, that the members of this category have certain properties that cause them to be included in that category, and that other categories exist on the same level.

Natural categorization is one of the most basic methods: it considers the nature of things, using properties such as color, size, shape, location, and the like to sort things into categories. Natural categories are fairly stable: a cat is an animal and a pen is blue, and they will generally remain so for an indefinite amount of time.

This is opposed to functional categorization, which considers the purpose to which things are put, providing categories such as food, writing instruments, building materials, and the like. These categories are also fairly stable, as it requires a permanent change in behavior for an item to be included in or excluded from a category - e.g., a horse-drawn carriage is generally not considered to belong to the category of transportation, though arguably it still serves as such in the rare instances it is used.

Nominal categorization, meanwhile, is arbitrary and highly subjective. When something is said to be a nuisance, it is categorized as such by a speaker - and may not be categorized as such by another person, or may be removed from the category at a later time merely by stating that it is no longer a nuisance. Or consider the word "widow," which assigns a name to a state of being that has nothing to do with the object itself (a woman whose husband has died is not physically different from one whose husband is alive or one who has never been married) nor any use to which it may be put.

The "classic view" of categorization involves deconstructing a concept into a specific set of defining features that constitute the category itself - that is, for a thing to be considered part of a category, it must have all of the defining features, and if it lacks any of them, it is not part of the category. For example, the category of "bachelor" has three defining features: adult, male, and unmarried. Failing to meet any of those conditions excludes a person from that category.
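The classical all-or-nothing test can be sketched in a few lines; the feature sets and the helper name here are illustrative, not drawn from the text:

```python
# Classical categorization: membership requires ALL defining features.
BACHELOR = {"adult", "male", "unmarried"}

def in_classical_category(features, defining):
    """An item belongs only if it possesses every defining feature."""
    return defining.issubset(features)

print(in_classical_category({"adult", "male", "unmarried", "tall"}, BACHELOR))  # True
print(in_classical_category({"adult", "male"}, BACHELOR))  # False: lacks "unmarried"
```

Extra features ("tall") do no harm, but a single missing defining feature excludes the item entirely - the hallmark of the classical view.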

This approach is popular among linguists, as it is the essence of language to assign a specific definition to a term, and in particular the definition places the term within certain categories. Unfortunately, language becomes a tautology - a term is assigned a definition arbitrarily, and its correlation to reality is questionable. To define something as a "game" suggests that it has certain qualities that it may not - e.g., if the definition of game includes the concept of pleasure, some will not find a given game to be pleasurable. If the definition of zebra references its stripes, then a zebra born without stripes is not, by that definition, a zebra, though it is anatomically and genetically so.

As such, the feature-based approach to categorization has some attractive features, but subtle imperfections that have posed great difficulty - such that human language, which is itself an expression of human thought, is not only imperfect, but incapable of achieving perfection at the most basic level of defining the meaning of words.

The Prototype Theory suggests that categories are formed on a model that has certain characteristics that are considered typical of a category, but not absolutely critical to it. It may also have some qualifying characteristics that absolutely must be met in order for an item to be included - but even characteristics taken as absolute may be flexible.

For example, one might consider a game to be an enjoyable activity that involves two or more players and presents some degree of challenge - but not all games are enjoyable (which is subjective), or involve two or more players (solitaire is a game), or present a challenge (which is based on an individual's experience). Similarly, to say that a bird flies or that a mammal has fur expresses qualities that are generally taken to be true, and yet creatures that do not meet those qualifications may still be included in the category.

As such, psychologists define two category types: classical categories, whose members must possess a strict set of defining features, and fuzzy categories, whose members need only be sufficiently similar to a prototype.

(EN: This strikes me as the schism between academic and practical knowledge: the academic, removed from reality, speaks with great confidence about the things he considers to be theoretically true, without evidence, whereas the practitioner speaks of things he has experienced, piecing together a theoretical framework afterward.)

In either case, a classical or fuzzy category is tested by instances in reality. An item is matched to a category by its similarities: if it is similar in every defining characteristic, it is an exemplar of that category - but nonconformity in some regards does not disqualify it, as it may be found sufficiently similar to the category to be included.

Sufficient similarity is not merely a mathematical accounting - matching seven out of eight qualifying characteristics does not mean an item will be accepted into a category (a silk flower may be like a flower in every way except that it is not bloomed of a living plant). There is a general sense that some characteristics are more important than others, such that comparison to a single criterion might qualify or disqualify a thing from inclusion.
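A hedged sketch of weighted similarity, in which one criterion can disqualify an item regardless of its overall score; the features, weights, and threshold below are invented for illustration:

```python
# Fuzzy/prototype matching: features carry weights, and a single critical
# feature ("living_plant") can disqualify an item on its own.
FLOWER_PROTOTYPE = {
    "has_petals":   0.2,
    "colorful":     0.1,
    "has_stem":     0.1,
    "fragrant":     0.1,
    "living_plant": 0.5,   # weighted far above the rest
}
THRESHOLD = 0.6

def similarity(item_features):
    """Sum the weights of the prototype features the item possesses."""
    return sum(w for f, w in FLOWER_PROTOTYPE.items() if f in item_features)

def is_flower(item_features):
    # The critical feature is required outright; the rest are weighed.
    return "living_plant" in item_features and similarity(item_features) >= THRESHOLD

silk = {"has_petals", "colorful", "has_stem", "fragrant"}
rose = {"has_petals", "colorful", "has_stem", "living_plant"}
print(is_flower(silk))  # False, despite matching four of five features
print(is_flower(rose))  # True
```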

Some of the strongest classical categories can be found in the field of mathematics: categorizing a number as a prime number or an odd number has a very clear set of qualifications. As such, a number is either prime or it is not - no number is "arguably" prime, nor does any number that fails to meet the criteria have some other characteristic by which it might be considered a member of the category.
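A minimal sketch of such a crisp mathematical category - membership is decided entirely by the definition, with no fuzzy boundary:

```python
def is_prime(n):
    """Classical category: a number either meets the definition or it does not."""
    if n < 2:
        return False
    # A number is prime if no integer from 2 up to its square root divides it.
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

print([n for n in range(2, 20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```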

It is suggested that in development, children begin by learning about things without experiencing them, and come quickly to a categorization schema that tends to assume classic categories - and over time, their experience of the world challenges the validity of their categories, such that classical categories become fuzzy with experience.

(EN: This is no less true in adulthood - but my sense is the assumption that people change their categorization schema, rather than ignoring the evidence they are presented, is assumptive. Speak to anyone of strong religious or political beliefs and you will often discover a mind locked into classical categories that simply ignores reality and fails to adapt. In essence, people who cling to wrong beliefs are in many instances seeking to defend their prototypes/stereotypes against reality.)

A theory-based view of meaning (also called the "explanation-based view") holds that categorization is based, implicitly or explicitly, on theory. For example, we consider an opponent to be a "good sport" if they are gracious about the outcome of a competition - neither bitter about losing nor smug about winning. We may base this category on behavior we have observed, but cannot refrain from theorizing: both the elements of behavior that make us consider someone a good sport, and other elements that were not demonstrated but would also count as good sportsmanship, figure into the definition of the category.

The theory-based approach differentiates between essential and incidental characteristics that relate to a category. This distinction is refined by experience: as the amount of information we have about a thing increases, we reconsider whether it belongs in a category, and as the amount of information about many things in a category increases, we reconsider the way in which the category is defined.

There's an extended description of an experiment (Rips 1989) in which an imaginary creature is described that has qualities of a bird or an insect: as more information about the creature is revealed, test participants shifted the way in which it was categorized.

Another example is androgyny: the defining characteristics that cause a person to be categorized as "woman" or "man" are often superficial: haircut, body shape, etc. The most reliable defining characteristic (sex organs) is not readily apparent to a casual observer, the genetic differentiation (XX or XY chromosome pair) is completely invisible, and in psychological terms, the difference between genders is not a matter of physiology at all. Ultimately, what differentiates man/woman begins very simply, but as the issue is pondered, the distinction becomes quite blurred.

Semantic Network Models

An older model (Collins 1969) that is still in use represents knowledge as a network of nodes that stand for concepts, with a network of connections among them. For example, the concept of a robin is networked to other concepts: bird, animal, flight, feathers, nest, beak, egg, two-legged, wings, tail, spring, and so on.

In the network model, hierarchical connections form a sort of category-subcategory structure (a lion is a kind of mammal, which is a kind of animal), though the timing of responses suggests that higher-level categories are broader and easier to identify: participants agree with "a lion is an animal" faster than they do with "a lion is a mammal" because the qualities of mammal are more specific and the evaluation more detailed.

A network consists of more than hierarchical connections: it may contain semantic links that are binary (whether a lion lives in Africa is yes/no) or scalar (a lion is ferocious, but its ferocity is far greater than that of a rabbit, somewhat greater than that of a wolf, and about on par with that of a tiger - but at what point on this continuum does one assess an animal to be ferocious?).
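The mixture of hierarchical, binary, and scalar links might be represented as a labeled graph; the nodes and values below are illustrative, not taken from the text:

```python
# A toy semantic network: each node carries typed links.
# "isa" links form the hierarchy; other links hold binary or scalar values.
network = {
    "robin":  {"isa": "bird", "can_fly": True},
    "bird":   {"isa": "animal", "has_feathers": True},
    "lion":   {"isa": "mammal", "lives_in_africa": True, "ferocity": 0.9},
    "rabbit": {"isa": "mammal", "ferocity": 0.1},
    "mammal": {"isa": "animal", "has_fur": True},
    "animal": {},
}

def isa_chain(node):
    """Walk the hierarchy upward, e.g. lion -> mammal -> animal."""
    chain = [node]
    while "isa" in network[chain[-1]]:
        chain.append(network[chain[-1]]["isa"])
    return chain

print(isa_chain("robin"))  # ['robin', 'bird', 'animal']
# Scalar links allow comparison without a fixed cutoff for "ferocious":
print(network["lion"]["ferocity"] > network["rabbit"]["ferocity"])  # True
```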

There's some diversion about the basic level at which concepts are held: if an object is described as red, round, and grows on a tree, people are more inclined to say "apple" than "cherry" - and they are also more likely to identify it as an apple rather than "a fruit" or "a Rome apple."

(EN: My sense of this is that it pertains to life experience. Americans are most familiar with apples - but in a different nation they might think cherry or pomegranate or mango. In terms of specificity, I imagine someone raised on a farm might name a specific variety of apple, though that's a bit more of a stretch unless they are given more details.)

Through experimentation (Rosch 1976) it has been observed that most people tend to identify pictures of items at the basic level quickly. The more detail they have to work with, the more likely they are to respond with greater or lesser specificity, but they must apply some effort to do so.

Schematic Representations

A "schema" is loosely defined as a mental framework for organizing knowledge. It is not as rigidly defined as a system of categories or a semantic network, acknowledging that the way in which an individual organizes information is strongly influenced by his life experience, making schemas highly idiosyncratic. Additionally, schemas have flexibility in that ...

In addition to pertaining to individual nouns, schemas may also contain information about the relationships between things:

Schematization permits a greater flexibility of knowledge. The example given is four people on a park bench (a middle-aged man, a young woman, an elderly woman, and a woman in a nun's habit) - if a child falls and cries for "mama," our schematization of the concept of motherhood enables us to know immediately which of them the child is calling to, whereas categorization or networks would require quite some work to identify the child's mother.

Within schematic representation is the concept of a script, which is a structure that considers a plausible sequence of events in a given context. A script enables us to assess what has happened and what soon will happen by observing what is happening at the moment. In essence, a script is a schema of verbs rather than nouns.

The combination of schemas with scripts enables a broad understanding. The schemas of people (customer, waiter, hostess, manager) and of objects (tables, chairs, plates, silverware, food, money, etc.) within the setting are interwoven by the script of events that unfolds as a person enters a restaurant, takes a meal, pays, and leaves.
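The restaurant script can be sketched as an ordered list of expected events, from which the past and the future can be inferred from the present; the event names are invented for illustration:

```python
# A script: a plausible, ordered sequence of events in a given context.
RESTAURANT_SCRIPT = [
    "enter", "be_seated", "read_menu", "order",
    "eat", "receive_check", "pay", "leave",
]

def before_and_after(current_event):
    """Given what is happening now, infer what has happened and what will."""
    i = RESTAURANT_SCRIPT.index(current_event)
    return RESTAURANT_SCRIPT[:i], RESTAURANT_SCRIPT[i + 1:]

done, upcoming = before_and_after("eat")
print(done)      # ['enter', 'be_seated', 'read_menu', 'order']
print(upcoming)  # ['receive_check', 'pay', 'leave']
```

Observing a single event ("eat") lets the script fill in an entire surrounding narrative - which is the shorthand the following paragraphs describe.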

In another sense, the schemas and scripts of such a setting enable us to recognize and conveniently ignore what is expected, and focus our attention on what is novel or significant about the events that unfold. A person unfamiliar with a restaurant would not notice that the waiter is acting strangely and would pay no special attention to the smoke coming out of the kitchen, whereas one who was familiar with the scripts and schemas involved would know whether what he observes is typical or a sign of danger.

Scripts and schemas also enable people who have a similar understanding to speak in shorthand, leaving out details that will be understood. For example, the author relates a narrative of a visit to a doctor's office in which "the nurse came and asked the patient to take off his clothes and the doctor prescribed an ointment." Taken literally, it would seem that the nurse asked him to undress in the waiting room, because the part where she escorts him to an examination room is omitted; nor do we know whether he actually undressed (merely that he was asked to do so), or whether the doctor examined him before writing the prescription, and so on.

The author relates research (Bower 1979) that supports this notion: individuals were asked to read a number of stories and answer questions about them, many of which pertained to details that were not mentioned in the text of the stories, but which could be inferred from them. In essence, the storyteller omits details he feels to be inconsequential to the story and the reader confabulates those details in his understanding of it.

(EN: This relates to the "Reader Response" theory of literature, in which the reader creates the experience of a story based on the writer's text, but not exclusively limited to it, such that each reader contributes original elements and has a unique experience of fiction. It may also relate to some of the earlier discussions in this book pertaining to witness reliability and reading comprehension - the process by which facts are omitted, distorted, or added by the audience to any sequence of events, real or imagined.)

Of particular interest is that a person who relates an incident depends on those who hear his account being members of his own culture - such that a great deal of cultural misunderstanding arises when the details fabricated by the audience differ from those the teller intended. That is, listeners are unable to complete the script, or complete it in an unexpected way.

There's a sidebar about the use of scripts in everyday life - our routines and habits extend from the scripts by which we perceive the world to work. Breaking a bad habit or starting a good one is largely a matter of examining and rewriting our scripts to alter our behavior.

Representations of Procedural Knowledge

Procedural knowledge is implicit in performing a task, which seemed dreadfully mundane and uninteresting until the advent of robotics and computer intelligence: designing a mechanism to perform a routine action is simple - but instilling in that mechanism the kind of judgment human labor exercises as a matter of course (knowing when to continue, when to stop, and when to deviate from instructions in order to accomplish a goal) has proven quite a challenge, and has garnered much interest from psychology.

In the course of attempting to make machines intelligent, psychologists have developed a number of models pertaining to the way in which human workers exercise intelligence in the course of performing a given task.

The simplest models of procedural knowledge involve serial processing, in which a complex task is divided into a number of more granular activities that are performed in a linear order, one operation at a time. Success at a process is assessed by the quality of the outcome - and it is assumed that a better process results in a better outcome.

Logic in the machine world is represented by if-then structures in which conditions are assessed in order to determine which action to take. The if-clause can be highly complex, taking into account multiple conditions.
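A sketch of such an if-then production with a compound condition; the rule and its thresholds are invented for illustration:

```python
def braking_rule(speed_mph, distance_ft, road_wet):
    """An if-then production: the if-clause combines multiple conditions."""
    if speed_mph > 30 and (distance_ft < 100 or road_wet):
        return "brake"
    return "continue"

print(braking_rule(45, 80, False))  # 'brake'    (fast AND close)
print(braking_rule(25, 80, True))   # 'continue' (slow, so condition fails)
```

The condition side can grow arbitrarily complex, but it is still evaluated serially to a single conclusion - the limitation discussed next.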

The chief problem of computer programming is its serial nature: only one thing can be considered at a time, only one conclusion can be accepted, and only one thing can be done. In the real world, multiple factors must be considered, more than one conclusion may prove true, and there is more than one series of actions that can achieve a positive result. Computers are painfully inept at this, and unable to resolve conflicts and paradoxes.

(EN: In that sense, I believe that computer programmers and psychologists alike may be following a false trail, presuming computer intelligence to be analogous to human intelligence in ways that it is not. The utter failure of artificial intelligence to achieve a simulacrum of human intelligence, in spite of five decades of effort by some of the smartest people, suggests that there is something wrong with the basic assumption that the two are, or can even be considered to be, equivalent.)

Models for Declarative and Nondeclarative Knowledge

The author regards the adaptive control of thought (ACT) as an excellent theory that combines various forms of mental representation. Procedural knowledge is represented in the form of production systems whereas declarative knowledge is represented in a propositional network.

The ACT system also suggests the notion of an adaptive network: links between nodes are established and broken over time, gaining strength from the frequency with which they are used or atrophying from disuse. As a result, activation is most likely to spread through the network following a path of the strongest links, but this does not preclude the use or development of alternate paths.
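The adaptive links might be sketched as weights that strengthen with use and atrophy otherwise; the nodes, boost, and decay rates here are invented:

```python
# Links between concept nodes strengthen with use and atrophy with disuse.
links = {("dog", "bark"): 0.5, ("dog", "fetch"): 0.5}

def use_link(link, boost=0.2, decay=0.95):
    """Strengthen the link just used; all links atrophy slightly over time."""
    for key in links:
        links[key] *= decay
    links[link] = min(1.0, links[link] + boost)

for _ in range(3):
    use_link(("dog", "bark"))

# Activation preferentially spreads along the strongest link from "dog",
# though the weaker path remains available.
strongest = max((k for k in links if k[0] == "dog"), key=links.get)
print(strongest)  # ('dog', 'bark')
```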

There's a rather extended metaphor of a plumbing system in which ...

In terms of the acquisition of knowledge, the system has a three-stage process. First, there is a cognitive stage in which we think about rules for executing a procedure (determining how to act). Second, there is an associative stage that considers the effectiveness and efficiency of our actions (deciding whether the actions produced a good result). Finally, there is an autonomous stage, where we reenact the procedure without thinking about it (following the course that worked well in the past, as a habit or a reflex).

The process of "proceduralization" is the transformation of slow and explicit information about procedures (knowing what) into sequences that can be executed quickly and without deliberation (knowing how). In essence, production rules are the building blocks from which procedures are formed.

The analogy of a manual transmission is used to suggest the way in which rules alone are insufficient: we recognize that pressing down the clutch disengages the transmission that applies propulsive force, and we separately recognize that pressing the brake pedal diminishes momentum. It is only when these two facts are combined that we develop a clutch-brake combination that becomes effective in practice and can be executed quickly and with a minimum of thought.
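Proceduralization as the composition of separate production rules into a single fast unit might be sketched as follows; the rules and state values are illustrative:

```python
def press_clutch(car):
    """One isolated rule: the clutch disengages the transmission."""
    car["transmission_engaged"] = False
    return car

def press_brake(car):
    """Another isolated rule: the brake diminishes momentum."""
    car["speed"] = max(0, car["speed"] - 10)
    return car

def compose(*steps):
    """Combine separate production rules into one unitary procedure."""
    def procedure(state):
        for step in steps:
            state = step(state)
        return state
    return procedure

# With practice, the two facts fuse into a single "clutch-brake" action
# that executes quickly and with a minimum of thought.
clutch_brake = compose(press_clutch, press_brake)
car = clutch_brake({"transmission_engaged": True, "speed": 25})
print(car)  # {'transmission_engaged': False, 'speed': 15}
```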

"Production tuning" is another significant element of proceduralization, by which we observe the results of following a procedure and adapt the procedure itself to be more efficient and effective. Returning to the manual-transmission analogy, we decrease the amount of time between clutch and brake to make stopping more efficient.

Models Based on the Human Brain

Thus far, the author has considered knowledge representation based on external models of human intelligence - looking to computer and engineering systems in the external world and assuming that the internal mind works analogously - largely because the inner workings of the mind are mostly unobservable, and our conclusions about them are derivative, assumptive, and possibly quite wrong - though no more inaccurate than assuming similarity to external models.

The primary problem with external models is that they deal exclusively with serial processing in an orderly and logical fashion. The human brain is capable of parallel processing, its conditional clauses are not consistently resolved, and the entire system can seem quite organic and disorderly, given our inability to observe its exact rules and procedures.

In human behavior, the concept of "skill" is a rough assessment of procedural knowledge, and it can be observed that skills are learned, then improve with use, and are adapted in practice.

Closer investigation is difficult - a carpenter who trains his apprentice to make a dovetail joint often speaks to the way he thinks the task ought to be done rather than the way he actually goes about doing it himself. Other skills seem ineffable - when a person relates how he is able to memorize a telephone number, it is entirely confabulation, as he is not immediately aware of the mental process: after hearing or dialing the number a few times, he just seems to remember it. Still other skills seem entirely accidental - the ability to read writing seen in a mirror or upside-down is not a skill most people attempt to gain, but one they seem nonetheless to acquire.

Moreover, procedural knowledge is not an isolated element in the human experience. We learn to do things by a number of means:

Some of these phenomena can be explicit - we can recognize and make intentional choices as to our actions - but all of them can be implicit, and the degree to which an action is deliberate or deliberated varies.

It's noted that this understanding of knowledge is not merely theoretical, but is supported by various kinds of evidence: neuropsychology that examines the behavior and properties of brain cells, ethnographic observation of learning and task execution in the field, and various cognitive experiments with human subjects.

It is further observed that there are various methods and techniques for gaining declarative knowledge - we learn the properties and nature of things by study or casual observation, and are taught general techniques to memorize facts and consider the relations between things.

The approach to gaining procedural knowledge is entirely different: we can allegedly learn to do something by watching someone else do it or hearing an explanation of how it is done, but this is understanding the various declarative "building blocks" from which procedural knowledge can be assembled - it is not until we put our own hands to the work that we truly learn to do it.

Consider again the example of a manual-shift vehicle. To read how the systems work, or to hear someone else describe how to orchestrate the operation of the clutch, gearshift, gas, and brake, does not enable us to immediately start driving: it is only as we take the wheel and put this declarative knowledge into a procedural framework that we really learn how to drive - and the process is largely implicit and experiential.

Parallel Processing: The Connectionist Model

Computer-inspired models of information processing are generally serial: one command is evaluated at a time in linear fashion, whereas psychobiological findings indicate that human thought involves parallel processing, with many different operations occurring simultaneously.

It's noted that some computers have attempted to simulate parallel processing by means of multiple threads of execution between which the computer switches, but this is not analogous - in spite of the fact that computers are now considered faster than human brains (a circuit fires in nanoseconds while a neuron requires about three milliseconds).

In the connectionist model, all knowledge resides in neural networks in which each node represents a concept and activates other nodes whose concepts are related. Through the simultaneous operation of multiple nodes in the network, the brain can process information quickly, recognizing an image and priming the mind with a plethora of related concepts within a third of a second.

It's also suggested that the brain perceives many sensations simultaneously, much in the way that a person recognizes a word or a phrase at a glance without perceiving the individual letters that comprise it.

It is theorized that the neurons in the human brain exist in an active or inactive state, but that the active state may also be excitatory (stimulating other neurons) or inhibitory (preventing other neurons from becoming stimulated). Furthermore, to describe a neuron as "active" is a gross oversimplification of the complexity of its interactions, given that a single brain cell may have thousands of connections to other neurons, and each synapse may emit several chemicals and electrical activity in varying degrees. As such, it is far from being equivalent to an electronic transistor with a simple on/off state.
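A toy sketch of a unit receiving excitatory (positive) and inhibitory (negative) weighted inputs - a gross simplification, as the text warns, and the weights and threshold are invented:

```python
def unit_active(inputs, weights, threshold=0.5):
    """A toy connectionist unit: excitatory (positive) and inhibitory
    (negative) weighted inputs are summed against a threshold."""
    net = sum(a * w for a, w in zip(inputs, weights))
    return net > threshold

# Two excitatory inputs and one inhibitory input.
print(unit_active([1, 1, 0], [0.4, 0.4, -0.6]))  # True  (net = 0.8)
print(unit_active([1, 1, 1], [0.4, 0.4, -0.6]))  # False (inhibition drops net to 0.2)
```

An active inhibitory input can silence a unit that would otherwise fire - the asymmetry that makes such units richer than a simple on/off transistor.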

Another feature of the PDP model is the notion that we change our representation of knowledge in the course of using it. We test the validity and strength of existing connections and create new connections as we recognize the relationship between concepts. The alterations made are applied in future, such that we do not really make the same decision twice: though the outcome of the decision may coincide, the process is undertaken with a slightly different conceptual framework. In this way the mind is capable of significant growth and versatility.

This is another distinguishing factor between the human mind and the computer model: computers execute the same code until the code is rewritten, whereas humans rewrite their code routinely. This can be seen subtly in the way in which the mind develops over time, and more directly in the way in which the mind develops during the course of an experiment (recall that in iterative experiments, speed and accuracy begin to improve even in the second iteration).

This adaptability of the human mind is of particular interest to cognitive psychology: the difference between an initial reaction and a second reaction, given that the two are identical or highly analogous, is entirely internal and evidences the mind's cognitive functions.

The author then considers some of the flaws in the connectionist model of knowledge representation, not the least of which is that it is entirely theoretical and many aspects of the model are not well defined. The model does not satisfactorily explain the way in which we represent what we know about a memorable event, nor does it explain how established patterns of connections can be so quickly disconnected or even disregarded when we encounter contradictory information (e.g., people normally consider tomatoes, squash, and cucumbers to be vegetables but recategorize them immediately when reminded of the definition of "fruit").

As an aside, the author considers the rapid development of computer technology in recent years, such that future research may be more fruitful than past research has been. The present computer-based models of knowledge representation hail from an era when the technology was laughably primitive - and while he cannot predict exactly how the theory will evolve, he expects that evolution is inevitable.

Some distinction is made between the connectionist and network models, chiefly in that connectionist models tend to be more hierarchical and orderly, with fewer connections between nodes and a logical basis for each connection. The network representation is considerably more cluttered: any connection between two concepts is admitted, regardless of whether there is an immediately apparent logic to it.

For example, a connectionist mapping between alligator and lion requires moving up the hierarchy alligator-reptile-animal and back down from animal-mammal-lion, or it might be made via alligator-Florida-America-world-Africa-lion, whereas the network connection links lion to alligator directly because the subject happened to see both in a box of animal crackers, in a nature documentary, or on a visit to the zoo.
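The contrast can be sketched as a breadth-first search over taxonomic links versus a single direct edge added by experience; both graphs are illustrative:

```python
from collections import deque

# Hierarchical model: only taxonomic links exist (stored in both directions
# so the search can move up and back down the hierarchy).
hierarchy = {
    "alligator": ["reptile"],
    "reptile":   ["animal", "alligator"],
    "lion":      ["mammal"],
    "mammal":    ["animal", "lion"],
    "animal":    ["reptile", "mammal"],
}

def path(graph, start, goal):
    """Breadth-first search for the shortest chain of links."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        p = queue.popleft()
        if p[-1] == goal:
            return p
        for nxt in graph.get(p[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(p + [nxt])
    return None

print(path(hierarchy, "alligator", "lion"))
# ['alligator', 'reptile', 'animal', 'mammal', 'lion']

# Network model: experience adds a direct, idiosyncratic link
# (the subject saw both animals at the zoo).
network = dict(hierarchy)
network["alligator"] = hierarchy["alligator"] + ["lion"]
print(path(network, "alligator", "lion"))  # ['alligator', 'lion']
```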

Domain Specificity in Cognition

There is some question as to whether researchers should consider cognition across all domains of knowledge, or limit their experimentation to a single domain.

Early attempts at artificial intelligence took a domain-general approach, and made little progress. Those geared toward more domain-specific knowledge (such as "Deep Blue," which was programmed with knowledge of the game of chess) have been more successful, but their applicability to human cognition is questionable.

Domain specificity requires the implicit acceptance of the assumption that the human mind is modular, such that specific domains of knowledge operate independently. For that matter, much of psychological research makes the same assumption: to test the recognition of images or the ability to remember the definitions of words assumes that the only parts of the brain active during the experiment are those that perform a very narrowly defined function.

In all, it seems that modularity is necessary to experimentation and modeling, but may not be a genuine characteristic of the human mind. The assumption is nonetheless of use in psychological and psychobiological research, the results of which carry an implicit caveat that functions cannot truly be isolated.