Learning to Bridge Between Perception and Cognition

Robert L. Goldstone

Indiana University

Philippe G. Schyns

University of Glasgow

Douglas L. Medin

Northwestern University








Running head: Perceptual Learning

Learning to Bridge Between Perception and Cognition

In building models of cognition, it is customary to commence construction on the foundations laid by perception. Perception is presumed to provide us with an initial source of information that is operated upon by subsequent cognitive processes. And, as with the foundation of a house, a premium is placed on stability and solidity. Stable edifices require stable support structures. By this view, our cognitive processes are well behaved to the degree that they can depend upon the stable structures established by our perceptual system.

Considered collectively, the contributions to this volume suggest an alternative metaphor for understanding the relation between perception and cognition. The architectural equivalent of perception may be a bridge rather than a foundation. The purpose of a bridge is to provide support, but bridges do so by adapting to the vehicles they support. Bridges, by design, sway under the weight of heavy vehicles, built on the principle that it is better to bend than break. Bridges built with rigid materials are often less resilient than their more flexible counterparts. Similarly, the chapters collected here raise the possibility that perception supports cognition by flexibly adapting to the requirements imposed by cognitive tasks. Perception may not be stable, but its departures from stability may facilitate rather than hamper its ability to support cognition. Cognitive processes involved in categorization, comparison, object recognition, and language may shift perception, but perception becomes better tuned to these tasks as a result.

Insights From Perceptual Learning

One of the standard strategies of cognitive science is to establish a set of vocabulary elements, and to explain cognitive processes in terms of operations upon this set of elements. The names and natures of these elements depend upon the researcher's domain. In object recognition, these elements may be simple geometric solids ("geons", Biederman, 1987), textural aspects ("textons", Julesz, 1981), or primitive features (Treisman & Gelade, 1980). In speech perception, "minimal distinctive features" represent phonemes (Jakobson, Fant, & Halle, 1963). Objects to be compared or categorized are often described in terms of elementary features (Tversky, 1977). For complex scenarios, vocabularies of "conceptual primitives" (Schank, 1972) or "semantic primitives" (Wierzbicka, 1992) have been proposed. In many cases, a finite number of specific primitives are hypothesized. For example, about 36 geons, 10 minimal distinctive features, 20 conceptual primitives, and 30 semantic primitives have been proposed as sufficient to describe the basic entities in their respective domains. By combining these primitives in different arrangements, a small set of building blocks can construct a virtually infinite variety of entities.

An alternative account of cognition is that its building blocks are neither fixed nor finite, but rather adapt to the requirements of the tasks for which they are employed (Schyns & Rodet, in press). An extended development of this account is given by Schyns, Goldstone, and Thibaut (in press). Applied to perceptual learning, the claim is that perception does not provide a single breakdown of an object or event into building blocks. As notably argued by E. Gibson (1969), the perceptual interpretation of an entity depends on the observer's history, training, and acculturation. These factors, together with psychophysical constraints, mold the set of building blocks. There may be no single, privileged set of perceptual primitives because the building blocks themselves are adaptive.

Advancing from fixed to adaptive building blocks provides a new perspective on some old problems. One of the notorious difficulties with representations based on a limited set of elements is that it is hard to choose exactly the right set of elements that will suffice to accommodate all of the future entities that will need to be represented. On the one hand, if a small set of primitive elements is chosen, then it is likely that two entities will eventually arise that must be distinguished, but cannot be distinguished by any combination of the available primitives. On the other hand, if a set of primitives is sufficiently large to construct all entities that might occur, then it will likely include many elements that lie unused, waiting for a need that may never arise (Schyns et al., in press). However, by developing new elements as needed, newly important discriminations can drive the construction of building blocks tailored to those discriminations. If one goes on a short camping trip, it is reasonable to pack cans of ready-made food. If one goes on a life-long camping trip, then one must pack tools that are useful for creating new food: fishing line, seeds, a rake, and rope. By the same token, given the long and unforeseeable journeys we make, perceptual systems should be adaptive enough to develop new tools.

Evidence that people use a particular set of primitive elements is perfectly consistent with elements being developed via perceptual learning. The geons, textons, or conceptual primitives of componential theories may be the end product of a general perceptual learning strategy. Recent research in computer science has shown how sets of primitives, including oriented line segments, Gabor filters, and size detectors, can be created by a system provided with naturalistic scenes (e.g. Miikkulainen, Bednar, Choe, & Sirosh, this volume). In fact, it is more than a coincidence that computer systems often converge on primitives that bear striking similarities to those proposed by researchers advocating fixed primitives. Researchers explicitly devise their sets so as to capture important regularities in the environment -- the same regularities being captured by computer systems that learn from natural inputs. The advantages of learning, rather than simply positing, elements are that 1) mechanisms are in place for acquiring slightly different primitives if the environment is modified, and 2) specialized domains within the environment can have tailored sets of primitives designed for them (Edelman & Intrator, this volume).
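
To give a concrete flavor of how an oriented primitive can emerge from exposure to structured input, the sketch below applies Oja's Hebbian learning rule to patches drawn from a synthetic image containing vertical edges. This is a generic textbook-style illustration rather than the model of Miikkulainen et al.; the patch generator, learning rate, and training regime are assumptions made for the example.

```python
import numpy as np

# Toy illustration: a single Hebbian unit trained with Oja's rule on image
# patches develops a receptive field tuned to the dominant structure in its
# input. The synthetic "scene" and all parameters are illustrative only.
rng = np.random.default_rng(0)

def make_patch(size=8):
    """A noisy patch containing a vertical step edge at a random column."""
    patch = rng.normal(0.0, 0.1, (size, size))
    edge_col = rng.integers(1, size - 1)
    patch[:, edge_col:] += 1.0           # vertical oriented structure
    return (patch - patch.mean()).ravel()

w = rng.normal(0.0, 0.01, 64)            # weights of one learned "detector"
lr = 0.01
for _ in range(20000):
    x = make_patch()
    y = w @ x                            # unit's response to the patch
    w += lr * y * (x - y * w)            # Oja's rule: Hebbian term + normalization

# The learned weights approximate the first principal component of the patch
# ensemble: a vertically oriented contrast pattern, i.e., an "edge detector"
# that was never built in by hand.
print(np.round(w.reshape(8, 8), 2))
```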

We believe that the acquisition of new perceptual skills is of importance not simply for researchers in perceptual learning, but for other fields as well. Perceptually-minded researchers will eventually have to integrate learning into their theories. Early feature perception research indicated impressively small influences of learning and practice. In Treisman and Gelade's (1980) field-defining research on feature search, the influence of distractor letters in a conjunctive search remained essentially unchanged over 1664 trials, suggesting that new primitives could not be formed for conjunctions of color and shape. Although these results are generally replicable, they may have had the adverse effect of dissuading perceptual psychologists from exploring other training effects. In fact, Shiffrin and Lightfoot (this volume) report five-fold improvements in response times in a similar, conjunctive search paradigm in which the conjunctions are defined not by color and shape but by different line segments. Many studies of perception have underestimated the influence of training by the standard practice of eliminating the first few blocks of performance in a task. From a perceptual learning perspective, the common "nuisance effect" that performance does not stabilize until after several blocks of practice takes on notable interest. Researchers in perception cannot afford to disregard perceptual learning effects for the simple reason that they account for an impressively large proportion of variance in human performance.

In the sub-discipline of perception devoted to object recognition, perceptual learning also provides new insights. Perceptual learning can endow object recognition systems with greater testability and applicability. Perceptual "vocabulary" elements derived from learning systems can be compared with those assumed by object recognition theorists. To the extent that they agree, we have a potential mechanism that explains how a particular finite set of elements came into existence. Furthermore, by incorporating perceptual learning into object recognition systems, the systems can attain greater generality. For example, Biederman (1987) limits his geon theory to explaining "basic level" categorizations. That is, he proposes that different arrangements of geons can serve to discriminate cats from dogs, but not German Shepherds from Golden Retrievers. This is a somewhat awkward limit on geon theory, for the simple reason that similar mechanisms appear to be used for basic level and more subordinate categorizations. In fact, Tanaka and Gauthier (this volume) present evidence that expertise can gradually shift whether a particular categorization is basic or not. Dog experts can categorize breeds of dogs as quickly as they can discriminate dogs from cats. This influence of expertise on categorization ability is predicted if perceptual learning can shift attention to discriminating features, or if it can develop entirely new features to aid discrimination. In either case, adding perceptual learning mechanisms to object recognition systems can extend their range of application, allowing them to accommodate closely related tasks.

Finally, a more general problem with treating object recognition as a separate process from learning is that no account is given for how object descriptions are initially internalized. Even under the assumption that we recognize objects by decomposing them into elements, we still need processes that learn object descriptions. One might assume that the first time an object is viewed, a description is formed and a trace is laid down for it. After this initial registration, standard object recognition routines are applied. This approach would preserve the separation of object recognition and learning, but given the strong influence of object familiarity on recognition, it is too gross a simplification. Object learning occurs simultaneously with, and interacts with, object recognition.

Perceptual learning may be relevant not just to perception researchers, but to those interested in higher-level cognition as well. Several of the chapters (Regier, this volume; Smith, Gasser, & Sandhofer, this volume) describe mutual facilitations between perceptual learning and language. Learning proper word usage often requires learning to attend to established dimensions (Regier), or establishing dimensions (Smith et al).

One of the benefits of a perceptual learning perspective for researchers interested in concepts and categorization is to suggest an alternative to complex rule formation. Over the years, many researchers have proposed that concepts are represented by logical rules such as "white and (square or circle)" (Bruner, Goodnow, & Austin, 1956; Nosofsky, Palmeri, & McKinley, 1994). However, combining disparate sources of evidence into Boolean expressions may be quite unnatural. Participants in these experiments appear to adopt a laborious problem-solving strategy that is quite different from learning about common objects such as dogs, tables, or trees. The possibility raised by perceptual learning is that concept learning involves developing new perceptual features and abilities, which might reduce much of the need for complex categorization rules. These rules seem necessary only when the perceptual underpinnings of our concepts are ignored. It is worth entertaining the possibility that natural concepts harness natural perceptual learning processes.

Extending beyond psychology, perceptual learning can also play an important role in neuroscience and computer science. Interest in neural plasticity, particularly within the somatosensory cortex, is at an all-time high. Surprisingly fruitful links between neural changes and behavior have been discovered (see Tanaka & Gauthier and Miikkulainen et al's chapters in this volume). For computer science, sensitivity to perceptual learning can help solve problems in pattern recognition, reasoning, and induction that would otherwise be intractable. Computer scientists, like their counterparts in psychology, have tried to build systems for induction, reasoning, and creativity by composition of primitive elements according to rules. Many current systems are too constrained because their primitives are too abstract and do not develop through interactions with the environment. Conversely, these systems are often too unconstrained as well, allowing all possible logical (e.g. Boolean) combinations of a set of primitive elements. A serious examination of the constraints of perceptual learning, such as those suggested by Regier's spatial templates or Hochberg's saccadic transitions, provides principled ways to truncate the combinatorial explosion of features assumed by many artificial intelligence systems.
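
To appreciate how quickly unconstrained Boolean combination gets out of hand, note that n binary primitives admit 2^(2^n) distinct Boolean concepts. The toy calculation below simply prints this count; it is an arithmetic illustration, not code from any of the systems under discussion.

```python
# Illustrative arithmetic: every Boolean concept over n binary primitives
# corresponds to one of the 2**(2**n) possible truth tables over them.
for n in range(1, 7):
    print(f"{n} primitives -> {2 ** (2 ** n):,} possible Boolean concepts")
# 1 -> 4; 2 -> 16; 3 -> 256; 4 -> 65,536; 5 -> ~4.3 billion; 6 -> ~1.8e19
```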

Mechanisms of Perceptual Learning

Arguably, the heyday of perceptual learning was the 1960s, largely due to the work of Eleanor and James Gibson. They generated considerable excitement for the field, and established the techniques, questions, and agenda for decades of work. One of the repercussions of the Gibsons' influence is that perceptual learning has become tied to the "ecological psychology" movement in many people's minds. The Gibsons argued for "direct perception" of environmental properties rather than mental computation of these properties. They argued that perceptual learning entails picking up previously unused external properties that enable organisms to be better in touch with the true external world (Gibson & Gibson, 1955). From this perspective, the major research goal is to determine what external properties are available to be picked up by people.

The chapters in this volume present a strikingly different approach to perceptual learning. In one way or another, all of the authors are interested in the internal mechanisms that drive perceptual learning. In several cases, a computational approach is taken, wherein theories of perceptual learning are formally instantiated in computer models. The computational approach to perceptual learning emphasizes internal constraints on perception. These constraints often take the form of architectural requirements for perceptual learning. Some of the chapters highlight the importance of architectures that allow for top-down feedback (Regier; Smith et al). Some stress the interconnectivity between different sensory modalities (De Sa and Ballard), and others argue for architectures that compress original object descriptions onto discriminating dimensions (Edelman & Intrator; Smith et al). Finally, some authors propose architectures that mirror the underlying topology of the modeled set of objects at a concrete (Miikkulainen et al) or abstract (Edelman & Intrator) level.
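
As a concrete, if highly simplified, example of an architecture whose internal layout comes to mirror the topology of its inputs, the following one-dimensional self-organizing map in the general Kohonen tradition orders its detectors along the input dimension purely through experience. It is offered as an illustration of the principle, not as the architecture of Miikkulainen et al. or Edelman and Intrator; the unit count, learning rate, and neighborhood width are arbitrary choices.

```python
import numpy as np

# Minimal 1-D self-organizing map: after training, neighboring units are
# tuned to neighboring input values, so the map's layout mirrors the
# topology of the input space. A simplification for illustration only.
rng = np.random.default_rng(1)

n_units = 10
prototypes = rng.uniform(0.0, 1.0, n_units)    # each unit's preferred value

for _ in range(5000):
    x = rng.uniform(0.0, 1.0)                   # a one-dimensional "stimulus"
    winner = np.argmin(np.abs(prototypes - x))  # best-matching unit
    for j in range(n_units):
        # Units near the winner (in the map) are pulled along with it.
        neighborhood = np.exp(-((j - winner) ** 2) / 2.0)
        prototypes[j] += 0.05 * neighborhood * (x - prototypes[j])

print(np.round(prototypes, 2))   # roughly monotonic: input topology preserved
```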

In all cases, the chapters propose rich internal structures in order to get perceptual learning off the ground. In doing so, they avoid the first-year computer science graduate student's fallacy of "Let's just hook up a camera, give the computer years of television input, and watch it self-organize." The need for structurally sophisticated architectures to permit learning is most evident in Regier's and Eimas' chapters. Regier's layered connectionist network is a far cry from generic perceptrons. His network has many learning biases and predispositions, but these are correctly interpreted not as limiting weaknesses, but as testable predictions about the biases that people should, and apparently do, show. Eimas provides convincing evidence that infants come into the world with techniques for segmenting speech into parts that will later allow them to acquire the meaning-bearing units of language.

In short, it is dangerously wrong-headed to view perceptual learning as the opposite of innate disposition, or more generally, to view flexibility as the opposite of constraint. It is only by having a properly constrained architecture that perceptual learning is possible at all. Even if our goal is to have a single learning process that can acquire distinctly different perceptual vocabularies, we should not be surprised if the process needs to be domain-specific. Domain-general learning processes can be devised, but they rarely have the power to produce genuinely novel or emergent forms.

The contributions to this volume focus on perceptual learning from a mechanistic perspective. More than half of the chapters propose particular computational, typically neural network, models; all of the chapters make concrete proposals for what changes with learning. One mechanism, attention weighting, involves shifts of attention to diagnostic dimensions (Eimas; Regier). A second mechanism, detector creation, involves creating receptors that respond selectively to one specific type of input (Miikkulainen et al; Smith et al). These systems start with homogeneous units, and gradually create specialized units. Once a functionally separated feature has been created, attention weighting strategies can apply to it. In this way, the first two mechanisms of perceptual learning have a strong dependency relation; selective attention to a feature first requires that the feature has been isolated. A related mechanism, dimensionalization, creates topologically ordered sets of detectors (Edelman & Intrator; Miikkulainen et al). Advantages of dimensionalization are that interference between dimensions is reduced and selective attention becomes possible, that information from objects is compressed into efficient representations, and that the topological architecture of the system can reflect the topology of the represented objects. This latter property is advantageous because it allows relations within (Miikkulainen et al) and between (Edelman & Intrator) real-world objects to be inferred by consulting internal representations that are functionally isomorphic to the represented objects.
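
Attention weighting has a standard formal expression in exemplar models such as Nosofsky's (1986) generalized context model, in which similarity is computed over attention-weighted dimensional differences. The fragment below shows that core computation with made-up stimulus values; shifting weight onto a diagnostic dimension magnifies differences along it.

```python
import numpy as np

# Attention-weighted similarity in the style of generalized-context-model
# exemplar theories: s(x, y) = exp(-c * sum_i w_i * |x_i - y_i|).
# Stimulus values, weights, and the sensitivity parameter c are illustrative.
def similarity(x, y, w, c=2.0):
    return np.exp(-c * np.sum(w * np.abs(x - y)))

x = np.array([0.2, 0.9])                 # stimulus: [size, brightness]
y = np.array([0.8, 0.9])                 # exemplar differing mainly in size

equal_attention = np.array([0.5, 0.5])
size_attention  = np.array([0.9, 0.1])   # learning has shifted attention to size

print(similarity(x, y, equal_attention))  # ~0.55: moderately similar
print(similarity(x, y, size_attention))   # ~0.34: the size difference is amplified
```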

A fourth mechanism, unitization, involves the construction of single functional units that can be triggered when a complex configuration arises (Shiffrin & Lightfoot; Tanaka & Gauthier). Via unitization, a task that originally required detection of several parts can be accomplished by detecting a single unit. Unitization may seem at odds with detector creation and dimensionalization. Whereas unitization integrates parts into single wholes, detector creation divides wholes into cleanly separable parts. This apparent contradiction resolves into a commonality at a more abstract level. Both mechanisms depend on the requirements established by tasks and stimuli. Objects will tend to be decomposed into their parts if the parts reflect independent sources of variation, or if the parts differ in their relevance (Schyns & Murphy, 1994). Parts will tend to be unitized if they co-occur frequently and all indicate the same response. Thus, unitization and decomposition are two sides of a process that builds appropriately sized representations for the tasks at hand.
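
One deliberately simple way to caricature unitization computationally is as frequency-based chunking: parts that co-occur often enough are fused into a single detected unit. The sketch below merely counts co-occurrences over hypothetical part sets; the hard threshold, and the particular parts, are illustrative assumptions rather than the proposals of Shiffrin and Lightfoot or Tanaka and Gauthier.

```python
from collections import Counter
from itertools import combinations

# Toy unitization: pairs of parts that co-occur often enough are fused into
# a single functional unit, so later detection can treat them as one thing.
# The co-occurrence threshold is an arbitrary illustrative choice.
def build_units(stimuli, threshold=4):
    pair_counts = Counter()
    for parts in stimuli:
        for pair in combinations(sorted(parts), 2):
            pair_counts[pair] += 1
    return {pair for pair, n in pair_counts.items() if n >= threshold}

stimuli = [{"A", "B", "C"}, {"A", "B"}, {"A", "B", "D"},
           {"A", "B"}, {"C", "D"}, {"A", "B", "C"}]
print(build_units(stimuli))   # {('A', 'B')}: A and B have become one unit
```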

Another mechanism that is inherently tied to the presented stimuli is contingency detection. Several authors propose that perceptual learning proceeds by internalizing contingencies within a stimulus. The suggestion is that parts of a stimulus predict other parts of the stimulus. Everyday objects are not random; they exhibit strong internal relations between their parts. By extracting these relations, people can develop associations without any explicit feedback. Instructive contingencies exist at many levels: between different visual regions of a single object (Hochberg), between different sensory modalities (De Sa & Ballard), and between compared objects (Smith et al). While the default way to implement contingency detection may be through associative learning, Hochberg argues that contingencies may drive perceptual learning by changing action procedures -- by changing patterns of successive eye fixations.
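
A minimal associative implementation of contingency detection is a co-occurrence matrix over stimulus parts, accumulated without any external feedback and later used to predict unseen parts from observed ones. The example below uses invented binary part descriptions solely to illustrate the idea.

```python
import numpy as np

# Contingency detection without feedback: accumulate co-occurrence statistics
# among stimulus parts, then predict missing parts from observed ones.
# The stimuli are made-up binary part vectors used only for illustration.
parts = ["handle", "blade", "cord", "plug"]
stimuli = np.array([
    [1, 1, 0, 0],   # knife-like objects: handle + blade
    [1, 1, 0, 0],
    [0, 0, 1, 1],   # appliance-like objects: cord + plug
    [0, 0, 1, 1],
    [1, 1, 0, 0],
])

assoc = stimuli.T @ stimuli          # Hebbian-style co-occurrence counts
np.fill_diagonal(assoc, 0)

# Seeing a "cord" leads the system to expect a "plug", not a "blade".
cue = parts.index("cord")
print(parts[int(np.argmax(assoc[cue]))])   # -> "plug"
```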

Issues in Perceptual Learning

The chapters raise several issues of general importance for theories of perceptual learning. As efforts are made to extend perceptual learning theory, we expect the following issues to become central.

Constraints on Perceptual Learning

Perceptual learning mechanisms integrate two sources of constraints. One constraint on features that are developed to represent objects is based on the manner in which objects are grouped into categories. The second constraint arises from the perceptual biases that facilitate or prevent the extraction of features from the considered objects. One issue concerns the general applicability of perceptual learning mechanisms to conceptual development. Perceptual learning is often demonstrated in mature concept learners using highly unfamiliar materials. The rationale for this approach is simply that the standard stimuli of categorization experiments tend to "wear their features on their sleeves" and could prevent the learning of new stimulus features (Schyns et al., in press). Future research may reveal that perceptual learning principles are limited to the learning of very specialized categories such as X-rays, skin diseases, and so forth. Alternatively, it may turn out that perceptual learning has a broader scope and applies to the early stages of conceptual development. Mature categorizers, who tend to know the relevant featural analysis of most objects, would only evoke perceptual learning mechanisms when they learn expert categorizations that require new features. Whether perceptual learning mechanisms have broad or limited application to the understanding of early concept learning is an empirical issue for developmental psychology.

A related issue concerns the nature of the perceptual biases that might apply to the learning of new concepts at different stages of development. For example, it is well known that babies' perceptual processes filter out the fine-grained details of the visual input. More generally, biases arising from the development of the system should be specified in order to understand how the initial perceptual organization might subsequently be affected by tasks and environment. This is obviously a difficult, chicken-and-egg problem, because the perceptual organization at any point of conceptual development might arise from perceptual learning at an earlier developmental stage. However, the interplay between perceptual biases and environmental constraints can be tracked throughout the history of the organism, providing a dynamic, but componential, conception of development.

For mature categorizers who tend to know the features that distinguish between common object classes, it will be particularly important to understand how these relevant features were first determined. For example, curved (e.g., faces) and edged (e.g., chairs, appliances) objects have highly distinct geometric properties that might constrain perceptual learning. While Lowe (1987) suggested that non-accidental properties of 2D edges were sufficient to recover the 3D structure of the parts of objects, Hoffman and Richards (1984) showed that other principles must apply to the segmentation of smooth objects. Generic perceptual constraints, such as Hoffman and Richards' (1984) minima rule, as well as symmetry and shared motion, could offer grounding principles to bootstrap perceptual learning. The search for these principles, and for their interactions with task constraints, is an important topic for future research on the perceptual learning that takes place when people interact with real faces, objects, and scenes.
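
To illustrate what the minima rule looks like in practice, the fragment below estimates signed curvature around a closed two-lobed contour and flags negative curvature minima, which fall at the concave "necks" where the shape naturally divides into two parts. The test shape and the discrete curvature estimate are our own illustrative choices, not Hoffman and Richards' procedure.

```python
import numpy as np

# Sketch of the minima rule: estimate signed curvature along a closed contour
# and flag negative curvature minima as candidate part boundaries.
# The dumbbell-like test shape and the discretization are illustrative only.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
r = 1.0 + 0.6 * np.cos(2 * t)            # two-lobed ("dumbbell") radius profile
x, y = r * np.cos(t), r * np.sin(t)

# Periodic central differences, since the contour is closed.
dx  = (np.roll(x, -1) - np.roll(x, 1)) / 2.0
dy  = (np.roll(y, -1) - np.roll(y, 1)) / 2.0
ddx = np.roll(x, -1) - 2 * x + np.roll(x, 1)
ddy = np.roll(y, -1) - 2 * y + np.roll(y, 1)
curvature = (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5

# Negative minima of curvature (concave "necks") mark part boundaries.
is_min = (curvature < 0) & (curvature < np.roll(curvature, 1)) \
                         & (curvature < np.roll(curvature, -1))
print(np.degrees(t[is_min]))             # boundary angles near 90 and 270 degrees
```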

Who Teaches the Learner?

One way to compare different approaches to perceptual learning is by seeing "who" (more technically, what) is the teacher for what is learned. Mirroring the distinction between supervised and unsupervised neural networks, some theories use an explicit label or categorization as teacher, whereas for others the teacher is the stimulus itself. Systems that connect percepts to words understandably take advantage of the words as guides for building the percepts (Smith et al; Regier). Even within this approach there are variations in how feedback is used. Regier shows that associations between percepts and labels can be modified not only when they are paired together, but also when one but not the other is presented. By adjusting percept-to-word associations when other words are presented (implementing a mutual exclusivity constraint), the amount of learning resulting from a single episode is dramatically increased. Smith et al show how dimensionalization can be greatly facilitated by words that do not simply provide correct labels, but provide these labels in the context of a comparison between objects. The labels drive isolation of dimensions by highlighting particular commonalities shared by the compared objects. Both of these chapters augment the feedback provided by standard labels, and thus mitigate the force of a major criticism of feedback-based models of language -- that people do not receive enough feedback in the form of labels to drive learning. Labelling, properly implemented, can be a powerful and informative force for perceptual adaptation.
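
A bare-bones caricature of how mutual exclusivity multiplies the information in a single labeling episode: hearing one word for a percept strengthens that association while simultaneously weakening the percept's associations with competing words. The update rule, initial association strengths, and vocabulary below are invented for illustration and should not be read as Regier's actual model.

```python
import numpy as np

# Caricature of label learning with a mutual exclusivity constraint: hearing a
# word for a percept strengthens that association and pushes the percept's
# associations with OTHER words down, so unpaired combinations also change.
words = ["cup", "ball"]
percepts = ["handled-thing", "round-thing"]
W = np.full((len(words), len(percepts)), 0.25)   # weak initial associations

def hear(word, percept, lr=0.5):
    pj = percepts.index(percept)
    for wi, w in enumerate(words):
        target = 1.0 if w == word else 0.0        # competing words pushed away
        W[wi, pj] += lr * (target - W[wi, pj])

hear("cup", "handled-thing")                      # one labeling episode
print(np.round(W, 2))
# The episode both ties "cup" to the handled object and loosens "ball" from
# it -- a single pairing constrains several associations at once.
```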

By a similar token, several of the other chapters point out ways in which the stimulus can itself be a surprisingly powerful source of information for learning. Instead of imagining that a label is provided externally for an object and that perception is adapted so as to better predict the label, one can view part of the stimulus as providing a label for other parts of the stimulus. This strategy is adopted by De Sa and Ballard, who go on to show that several modalities can simultaneously provide labels for each other, thereby pulling the system as a whole up by its own bootstraps. Similarly, regularities within and between objects can be internalized perceptually. Regularities among the shapes of presented objects can be internalized as eye fixation patterns (Hochberg). Strong interdependencies between features can be internalized by creating "chunks" that coalesce the separate features (Shiffrin & Lightfoot). By focusing on differences (sources of variation) instead of similarities within a set of objects, underlying features (Miikkulainen et al) or dimensions (Edelman & Intrator) can be extracted for describing the set.
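
The bootstrapping idea can be caricatured with two prototype learners, one per modality, each of which treats the other's current classification of a shared object as its teaching signal; agreement across modalities, rather than any external label, drives learning. The one-dimensional "modalities", their statistics, and the update rule are invented here, in the general spirit of De Sa and Ballard's proposal rather than as their algorithm.

```python
import numpy as np

# Schematic cross-modal self-supervision: each modality classifies a shared
# object by its nearest prototype, and the OTHER modality uses that
# classification as its teaching signal. No external labels are involved.
rng = np.random.default_rng(2)

proto_v = np.array([-0.2, 0.2])     # visual prototypes for two categories
proto_a = np.array([-0.2, 0.2])     # auditory prototypes for the same categories
lr = 0.05

for _ in range(3000):
    c = rng.integers(2)                           # hidden category, never observed
    v = rng.normal(2 * c - 1, 0.3)                # correlated visual and
    a = rng.normal(2 * c - 1, 0.3)                # auditory impressions
    label_from_a = np.argmin(np.abs(proto_a - a))
    label_from_v = np.argmin(np.abs(proto_v - v))
    proto_v[label_from_a] += lr * (v - proto_v[label_from_a])   # audition teaches vision
    proto_a[label_from_v] += lr * (a - proto_a[label_from_v])   # vision teaches audition

print(np.round(proto_v, 2), np.round(proto_a, 2))   # both converge near [-1, +1]
```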

In short, what can be learned is a joint function of the teacher and the capabilities of the learner. The teacher's information may be externally supplied, or may be intrinsically packaged in the objects to be learned. In the computational systems described here, the capabilities of the learner are expressed in terms of architectural structures. On both fronts, the chapters hint at the sophistication that will be needed to achieve human-like adaptability, in terms of richly structured architectures and informative, naturalistic inputs.

Challenges for Perceptual Learning

As the mechanisms of perceptual learning become better understood, the challenge will be to relate these mechanisms to work in other domains. Three inviting domains of application are training, neuroscience, and high-level cognition.

Perceptual learning has promise for explaining aspects of training not just in the laboratory, but in the field as well. In addition to obvious domains of perceptual skill acquisition, such as wine tasting and baby chicken gender discrimination (Biederman & Shiffrar, 1987), recent work suggests a strong perceptual component in the development of medical expertise (Norman, Brooks, & Coblentz, 1992; Myles-Worsley, Johnston, & Simons, 1988). In these extended training situations, the relation between perceptual learning and automaticity/attention will need to be characterized. For example, in Logan's (1988) theory of automatization, the influence of training is simply to expose the would-be expert to instances within a domain. Automatic performance relies upon the retrieval of specific stored instances, and performance improves as a function of the ease of retrieving relevant instances. Instance retrieval is a potential mechanism of perceptual learning, and is consistent with the finding that perceptual skills are often highly specific and restricted to the trained materials (Kolers & Smythe, 1984).
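
Logan's theory can be expressed as a race: each response is produced either by the general algorithm or by the fastest-retrieved stored instance, whichever finishes first, so response times fall as a power function of the number of stored instances. The simulation below illustrates the race with made-up timing distributions; the gamma parameters are arbitrary and are not Logan's.

```python
import numpy as np

# Minimal race version of instance-based automatization: RT on each trial is
# the minimum of an algorithm's finishing time and the retrieval times of all
# stored instances. Timing distributions are made up for illustration.
rng = np.random.default_rng(3)

def mean_rt(n_instances, n_sims=20000):
    algorithm = rng.gamma(shape=4.0, scale=200.0, size=n_sims)     # slow, effortful route
    if n_instances == 0:
        return algorithm.mean()
    retrievals = rng.gamma(shape=4.0, scale=150.0,
                           size=(n_sims, n_instances))             # one sample per instance
    return np.minimum(algorithm, retrievals.min(axis=1)).mean()

for n in [0, 1, 5, 25, 125]:
    print(f"{n:4d} stored instances -> mean RT {mean_rt(n):6.0f} ms")
# Mean RT drops steeply at first and then flattens -- the familiar power-law
# speed-up that Logan attributes to instance retrieval.
```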

This conception links expert perceptual skills to automatic, non-strategic processing. In fact, researchers have contrasted perceptual and strategic consequences of learning. Upon learning a categorization in which size is important, people may explicitly give extra weight to the size dimension in their judgments (Nosofsky, 1986), or their actual perceptual representations of size may change (Goldstone, 1994). Judgmental processes are usually assumed to be under more strategic, cognitive control than are perceptual ones. However, expert perceptual processing is frequently under impressive strategic control. Experienced tasters often have an enhanced ability to analyze a food with respect to its relevant compounds, selectively excluding or highlighting particular compounds. The pulls toward greater automaticity and greater strategic control with expertise will eventually have to be reconciled.

A second challenge is to provide a better grounding for perceptual learning changes within the brain. Perceptual changes may be accompanied by brain changes at several scales, ranging from changes in the specialization of individual neurons, to changes in the patterns of local interconnectivity between neurons, to reorganizations of entire topological maps. Real progress in the cognitive neuroscience of perceptual learning will involve more than simply identifying correlations between perceptual behavior and neural structures. It will involve describing the neural processes that implement functional mechanisms of change.

A third challenge is to connect perceptual learning with higher-level cognitive processes. It seems to be difficult to alter low-level perceptual processing by high-level strategies. Genuine changes in perceptual skills probably require perceptual training. Still, as perceptual expertise increases, so does one's verbal vocabulary for the domain, and evidence from wine tasters suggests that true experts' verbal and perceptual vocabularies are closely synchronized (Melcher & Schooler, 1996).

Several of the chapter authors argue for interactions between perceptual and cognitive processes. There are even grounds for pursuing the more radical possibility that perceptual processes may be co-opted for high-level, abstract cognition (Goldstone & Barsalou, in press). Instead of interactions between two systems, we would have two aspects of the same process. In favor of this hypothesis, many perceptual routines are also useful for general cognition. Selective visual attention processes may be borrowed for cognitive tasks requiring selective application of a criterial definition. Visual binding processes used to connect various features of an object may be useful in creating syntactically bound structured propositions. Sensory synesthesia may provide the early grounding for abstract analogies. In fact, individual differences in mental functioning provide evidence in favor of these conjectures. Schizophrenic patients often show parallel deficits on perceptual and cognitive selective attention tasks. For example, they have perceptual difficulty ignoring visual distractors and linguistic difficulty inhibiting the incorrect interpretation of an ambiguous word. Conversely, people with autism demonstrate overly selective attentional processes perceptually and cognitively. They often attempt to shut out sensory stimulation by narrowing their perceptual focus, and their generalizations from training are often overly narrow. If this hypothesis has merit, then research on the processes of perceptual learning may tell us about general cognitive learning processes because the processes themselves may be shared.

Conclusions

In commenting on her 1963 review of perceptual learning, Eleanor Gibson in 1992 lamented, "I wound up pointing out the need for a theory and the prediction that 'more specific theories of perceptual learning are on the way.' I was wrong there -- the cognitive psychologists have seldom concerned themselves with perceptual learning" (Gibson, 1992, p. 322). The present volume provides evidence that Gibson's 1963 prediction was accurate after all. Theories of perceptual learning are available that are specific enough to be implemented on computers, and precise enough to make quantitative predictions regarding behavior. Perceptual learning may still not be one of the mainstream topics within cognitive psychology, but several laboratories seem aware of the need for adaptive perception. Researchers in cognition are interested in providing flexible support structures for higher-level processes, while researchers in perception are interested in explaining the major sources of variability in perception due to training and history. Together, these research programs hold the promise of uniting perceptual and cognitive adaptability.

References

Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94, 115-147.

Biederman, I., & Shiffrar, M. M. (1987). Sexing day-old chicks: A case study and expert systems analysis of a difficult perceptual-learning task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 640-645.

Bruner, J. S., Goodnow, J. J., & Austin, G. A. (1956). A study of thinking. New York: Wiley.

Gibson, E. J. (1992). An odyssey in learning and perception. Cambridge, MA: MIT Press.

Gibson, J. J., & Gibson, E. J. (1955). Perceptual learning: Differentiation or enrichment? Psychological Review, 62, 32-41.

Goldstone, R. L. (1994). Influences of categorization on perceptual discrimination. Journal of Experimental Psychology: General, 123, 178-200.

Goldstone, R. L., & Barsalou, L. (in press). Reuniting perception and conception: The perceptual bases of similarity and rules. Invited article to a special issue of Cognition.

Hoffman, D. D. & Richards, W. A. (1984). Parts of recognition. Cognition, 18, 65-96.

Lowe, D. G. (1987). The viewpoint consistency constraint. International Journal of Computer Vision, 1, 57-72.

Jakobson, R., Fant, G., & Halle, M. (1963). Preliminaries to speech analysis: The distinctive features and their correlates. Cambridge, MA: MIT Press.

Julesz, B. (1981). Textons, the elements of texture perception, and their interaction. Nature, 290, 91-97.

Kolers, P. A., & Smythe, W. E. (1984). Symbol manipulation: Alternatives to the computational view of mind. Journal of Verbal Learning and Verbal Behavior, 23, 289-314.

Logan, G. D. (1988). Toward an instance theory of automatization. Psychological Review, 95, 492-527.

Melcher, J. M., & Schooler, J. W. (1996). The misremembrance of wines past: Verbal and perceptual expertise differentially mediate verbal overshadowing of taste memory. Journal of Memory and Language, 35, 231-245.

Myles-Worsley, M., Johnston, W. A., & Simons, M. A. (1988). The influence of expertise on X-ray image processing. Journal of Experimental Psychology: Learning, Memory, & Cognition, 14, 553-557.

Norman, G. R., Brooks, L. R., & Coblentz (1992). The correlation of feature identification and category judgments in diagnostic radiology. Memory & Cognition, 20, 344-355.

Nosofsky, R. M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115, 39-57.

Nosofsky, R. M., Palmeri, T. J., & McKinley, S. C. (1994). Rule-plus-exception model of classification learning. Psychological Review, 101, 53-79.

Schank, R. (1972). Conceptual dependency: A theory of natural language understanding. Cognitive Psychology, 3, 552-631.

Schyns, P. G., Goldstone, R. L., & Thibaut, J. (in press). Development of features in object concepts. Behavioral and Brain Sciences.

Schyns, P. G., & Murphy, G. L. (1994). The ontogeny of part representation in object concepts. In D. L. Medin (Ed.), The psychology of learning and motivation (Vol. 31, pp. 305-354). San Diego, CA: Academic Press.

Schyns, P. G., & Rodet, L. (in press). Categorization creates functional features. Journal of Experimental Psychology: Learning, Memory, and Cognition.

Treisman, A., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97-136.

Tversky, A. (1977). Features of similarity. Psychological Review, 84, 327-352.

Wierzbicka, A. (1992). Semantic primitives and semantic fields. In A. Lehrer & E. F. Kittay (Eds.), Frames, fields, and contrasts: New essays in semantic and lexical organization (pp. 209-228). New Jersey: Lawrence Erlbaum Associates.