Hanging Together: A Connectionist Model of Similarity

Robert L. Goldstone

Indiana University



Human judgments of similarity have traditionally been modelled by measuring the distance between the compared items in a psychological space, or the overlap between the items' featural representations. An alternative approach, inspired jointly by work in analogical reasoning (D. Gentner, 1983; K. T. Holyoak & P. Thagard, 1989) and interactive activation models of perception (J. L. McClelland & D. E. Rumelhart, 1981), views the process of judging similarity as one of establishing alignments between the parts of compared entities. A localist connectionist model of similarity, SIAM, is described wherein units represent correspondences between scene parts, and these units mutually and concurrently influence each other according to their compatibility. The model is primarily applied to similarity rating tasks, but is also applied to other indirect measures of similarity, to judgments of alignment between scene parts, to impressions of comparison difficulty, and to patterns of perceptual sensitivity for matching and mismatching features.

Hanging Together: A Connectionist Model of Similarity

"We must all hang together, or assuredly we will all hang separately"

- Benjamin Franklin

An ability to assess similarity lies close to the core of human cognition. While William James (1890/1950) enthused that "this sense of Sameness is the very keel and backbone of our thinking" (p. 459), Fred Attneave (1950) quips that "The question 'What makes things seem alike or seem different?' is one so fundamental to psychology that very few psychologists have been naive enough to ask it." (p. 516). The question of "What makes things seem similar?" is important because of an intrinsic interest in how people make comparisons. Furthermore, similarity plays a pivotal role in theories of cognition (Goldstone, Medin, & Gentner, 1991; Markman & Gentner, 1993; Medin, Goldstone, & Gentner, 1993). Problem solving depends on the similarity of previously solved problems to current problems. Categorization depends on the similarity of objects to be categorized and category members. Memory retrieval depends on the similarity of retrieval cues and stored memories.

This chapter describes a new model of similarity which was inspired by work in analogical reasoning (Falkenhainer, Forbus, & Gentner, 1989; Gentner, 1983; Holyoak & Thagard, 1989) and interactive activation models of perception (McClelland & Elman, 1986; McClelland & Rumelhart, 1981). The model is based on the principle that determining the similarity of richly structured scenes requires placing the scenes' parts into alignment, and that these alignments mutually and simultaneously affect each other. Parts are placed into alignment if they are thought to correspond to each other or "hang together."

Formal Models of Comparison

Different types of comparisons seem to require distinct and specialized cognitive processes. Perceptual judgments of similarity such as "orange is like red" seem to have little in common with analogies like "The Cleveland Indians are like Cinderella" precisely because good analogies are supposed to transcend "superficial" perceptual properties. From this perspective, it is not surprising that there has been little cross-talk between researchers studying perceptual similarity and those studying metaphors and analogies. Although both similarity and analogy are examples of comparison making, strikingly different tools have been employed by researchers in the two camps.

Researchers who model similarity judgments typically collect judgments from many comparisons. Accommodating the fundamental, qualitative phenomena of similarity judgments such as monotonicity (adding features in common to two items should never decrease their similarity) is trivial. The challenge is to quantitatively predict the entire pattern of results that arise from a large set of similarity judgments.

Researchers who model analogical reasoning have typically invested a far greater proportion of their energies in devising representations for the compared items than to quantitatively fitting their model to a set of comparisons. Rather than simply using feature sets or dimensional coordinates, these researchers use representations that are sufficiently rich to allow structural, rather than superficial, commonalities to be detected.

The work reviewed in this chapter is an attempt to reunite perceptual and analogical comparisons (for other like-minded efforts, see Chalmers, French, & Hofstadter, 1992; French, 1995; Goldstone, 1994b). Borrowing from analogical reasoning research, entities will be described using more structured representations than simple, "flat" feature lists or vectors. Borrowing from perceptual similarity research, the basic method of evaluating the model will be to obtain quantitative fits to large sets of similarity ratings. Several experiments are designed to explicitly pit predictions of standard similarity models against the new model.

Models of Similarity

The unique aspects of the model to be empirically tested can be appraised by considering other influential models of similarity. The three most influential approaches toward similarity in cognitive psychology have been geometric, feature set, and transformational models.

Geometric models of similarity are exemplified by nonmetric multidimensional scaling (MDS) models (Carroll & Wish, 1974; Shepard, 1962a, 1962b; Torgerson, 1965). MDS models represent similarity relations between entities in terms of a geometric model that consists of a set of points embedded in a dimensionally organized metric space. The input to MDS routines may be similarity judgments, dissimilarity judgments, confusion matrices, correlation coefficients, joint probabilities, or any other measure of pairwise proximity. The output from an MDS routine is a geometric model of the data, with each object of the data set represented as a point in an N-dimensional space. The distance between two objects' points in the space is taken to be inversely related to the objects' similarity. In MDS, the distance between points i and j is typically computed by:

d(i,j) = ( Σk=1..n |Xik - Xjk|^r )^(1/r)

where n is the number of dimensions, Xik is the value of dimension k for item i, and r is a parameter that allows different spatial metrics to be used (r = 1 yields a city-block metric, r = 2 a Euclidean metric). Similarity is assumed to be related to the interpoint distance by a monotonically decreasing function.
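For concreteness, the distance computation can be sketched in a few lines; the function name and the two-dimensional coordinates below are illustrative, not values fitted to data:

```python
def minkowski_distance(x, y, r):
    """Distance between points x and y under the Minkowski r-metric.

    r=1 gives the city-block metric, r=2 the Euclidean metric.
    """
    return sum(abs(xk - yk) ** r for xk, yk in zip(x, y)) ** (1.0 / r)

# Two hypothetical items scaled into a two-dimensional space.
i = (1.0, 4.0)
j = (4.0, 0.0)

print(minkowski_distance(i, j, r=1))  # city-block: |1-4| + |4-0| = 7.0
print(minkowski_distance(i, j, r=2))  # Euclidean: sqrt(9 + 16) = 5.0
```

Note that the city-block distance is never smaller than the Euclidean distance for the same pair of points, which is one reason the choice of r matters when fitting MDS solutions.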

In feature set approaches to similarity, entities to be compared are represented in terms of underlying features. A feature may be any property, characteristic or aspect of a stimulus; features may be concrete or abstractions such as "symmetric" or "beautiful" (Tversky & Gati, 1982). Similarity is then assessed by measuring the overlap between the feature sets of the compared entities. Amos Tversky's Contrast model (1977; Gati & Tversky, 1982, 1984) is the best known feature set approach to similarity. In this model, entities are represented as a collection of features and similarity is computed by:

S(A,B) = θf(A ∩ B) - αf(A - B) - βf(B - A).

The similarity of A to B, S(A,B), is expressed as a linear combination of the measures of the common and distinctive features. The term (A ∩ B) represents the features that items A and B have in common. (A - B) represents the features that A has but B does not. (B - A) represents the features that B, but not A, possesses. The terms θ, α, and β refer to weights for the common and distinctive components.

The Contrast model is based on assumptions of monotonicity and independence. According to monotonicity, S(A,B) ≥ S(A,C) whenever A ∩ B ⊇ A ∩ C, A - C ⊇ A - B, and C - A ⊇ B - A. That is, similarity always increases with the addition of common features and/or the deletion of distinctive features. According to the independence assumption, the ordering of the joint effect of any two components on similarity is independent of the level of a third component.
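A minimal sketch of the Contrast model, assuming the salience measure f is simple set cardinality; the feature sets and weight values below are illustrative:

```python
def contrast_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5, f=len):
    """Tversky's Contrast model: S(A,B) = theta*f(A and B) - alpha*f(A-B) - beta*f(B-A).

    a and b are feature sets; f is a salience measure over feature sets
    (set cardinality here, for simplicity).
    """
    return theta * f(a & b) - alpha * f(a - b) - beta * f(b - a)

robin = {"feathers", "flies", "small", "red-breast"}
penguin = {"feathers", "swims", "large"}

print(contrast_similarity(robin, penguin))  # 1*1 - 0.5*3 - 0.5*2 = -1.5
```

Setting alpha and beta to different values yields asymmetric similarities, S(A,B) ≠ S(B,A), one of the model's signature predictions.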

A lesser known, but noteworthy, approach to similarity is based on the assumption that the judged similarity of two entities is inversely proportional to the number of cognitive operations required to transform one entity into another. A critical decision in these theories is to specify what transformational operations are allowed. Imai (1977; 1992) used sequences such as XXOXXXOXXOX where Xs represent white ovals and Os represent black ovals. The four conjectured transformations were mirror image (XXXXXOO → OOXXXXX), phase shift (XXXXXOO → XXXXOOX), reversal (XXXXXOO → OOOOOXX), and wave length (XXOOXXOO → XOXOXOXO). Imai found that sequences that are two transformations removed (e.g. XXXOXXXOXXXO and OOXOOOXOOOXO require a phase shift and a reversal to be equated) are rated as less similar than sequences that can be made identical with one transformation. In addition, sequences that can be made identical by more than one transformation (XOXOXOXO and OXOXOXOX can be made identical by either mirror image, phase shift, or reversal transformations) are more similar than sequences that only have one identity-producing transformation. Wiener-Ehrlich, Bart, and Millward (1980) were able to find even higher correlations between transformational distance and similarity ratings by deriving different transformation sets for individual subjects.
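Three of Imai's conjectured transformations are easy to sketch as string operations (the wave length transformation, which alters run lengths, is omitted here; the function names are illustrative):

```python
def mirror(seq):
    """Mirror image: reverse the sequence (XXXXXOO -> OOXXXXX)."""
    return seq[::-1]

def phase_shift(seq):
    """Phase shift: rotate the sequence left by one position (XXXXXOO -> XXXXOOX)."""
    return seq[1:] + seq[0]

def reversal(seq):
    """Reversal: exchange the two element types (XXXXXOO -> OOOOOXX)."""
    return seq.translate(str.maketrans("XO", "OX"))

s = "XOXOXOXO"
print(mirror(s), phase_shift(s), reversal(s))  # all three yield OXOXOXOX
```

All three operations carry XOXOXOXO to OXOXOXOX, matching Imai's observation that sequence pairs with multiple identity-producing transformations are rated as especially similar.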

Transformational models of similarity are the closest neighbors to the alignment-based approach to be discussed. In both approaches, processes are described that provide links between the items to be compared. In transformational approaches, items are linked by global transformations that affect the entire item. In alignment-based approaches, the links are part-to-part correspondences that connect local aspects of two items.

Models of Analogy

A major theme of several recent computational models of analogy is that when entities are compared, correspondences are created between the entities' elements, and these correspondences mutually influence each other (French, 1995; Gentner, 1983; Hofstadter, 1995; Holyoak & Thagard, 1989; Hummel & Holyoak, in press). This process of determining part-to-part correspondences has been called structural alignment. As an example, the comprehension of the atom/solar system analogy requires setting up correspondences between the atom's nucleus and the sun, between electrons and planets, and so on.

In Gentner's Structure Mapping Theory (SMT) (1983) and Holyoak and Thagard's (1989) ACME (Analogical Constraint Mapping Engine) system, comparison processes serve to 1) place similar relations in correspondence and 2) place relations in correspondence that are consistent with other relational correspondences. According to Gentner's systematicity principle and Holyoak and Thagard's "uniqueness" and "relational consistency" constraints, elements are mapped onto each other so as to tend to yield coherent relational correspondences as opposed to isolated or inconsistent correspondences. An isolated correspondence arises if there is a relational match between two domains, but the relation is not involved in other higher-order relations, where a higher-order relation is a relation between relations. Correspondences are inconsistent if they create many-to-one mappings.

Holyoak and Thagard's ACME shares many properties with Falkenhainer, Forbus, and Gentner's (1989) computer implementation of Gentner's SMT, and will be described here because it is a localist connectionist model and has many architectural similarities to SIAM (Similarity as Interactive Activation and Mapping). ACME takes as input two domains represented by sentences in predicate logic. ACME constructs a network of nodes where each node represents a syntactically allowable pairing between elements from the two domains. A node is made for every such allowable pairing. Links between nodes represent constraints. If two nodes are mutually consistent with one another, there will be an excitatory link between them. If the nodes are inconsistent, there will be an inhibitory link connecting them. In comparing an atom to the solar system, there will be an inhibitory link between the "nucleus -> sun" node and the "nucleus -> planet" node because such a mapping configuration violates ACME's isomorphism constraint. The isomorphism constraint favors mappings that are structurally consistent and one-to-one. Semantic similarity and pragmatic importance also provide constraints.

Once the nodes have been constructed and the connections created, activation is allowed to spread from node to node. Eventually the network settles into a final state with a particular set of activated nodes representing a globally consistent set of matches. The terminally activated nodes represent the system's candidates for correspondences between the domains. The structural goodness of the final correspondences is formally measured by:

G(t) = ΣI ΣJ WIJ OI(t) OJ(t)

where WIJ is the weight of the connection between nodes I and J, and OI(t) is the output activation of node I at cycle t. The measure G increases monotonically as the number of cycles increases.
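This goodness measure can be sketched directly, assuming weights of +1 for mutually consistent node pairs and -1 for inconsistent ones; the node names and activation values below are illustrative:

```python
def structural_goodness(weights, outputs):
    """G(t) = sum over connected node pairs of W_ij * O_i(t) * O_j(t).

    weights: dict mapping (i, j) node-name pairs to connection weights;
    outputs: dict mapping node names to their output activations.
    """
    return sum(w * outputs[i] * outputs[j] for (i, j), w in weights.items())

# Hypothetical nodes from the atom/solar-system comparison.
outputs = {"nucleus->sun": 0.9, "electron->planet": 0.8, "nucleus->planet": 0.1}
weights = {
    ("nucleus->sun", "electron->planet"): 1.0,   # mutually consistent: excitatory
    ("nucleus->sun", "nucleus->planet"): -1.0,   # two-to-one mapping: inhibitory
}
print(structural_goodness(weights, outputs))  # 0.9*0.8 - 0.9*0.1 = 0.63
```

As the network settles and consistent nodes dominate, the excitatory terms grow while the inhibitory terms shrink, which is why G rises over cycles.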

The Similarity as Interactive Activation and Mapping (SIAM) Model

The Importance of Alignment

There are important differences between geometric, featural, and transformational approaches to similarity, but there is also a significant commonality. None of these approaches stresses the alignment of the parts of compared items in determining similarity, or the allied principle that alignments are interdependent. In contrast, this notion is central to models of analogical reasoning, and there is good evidence that the alignment of relational structures is a major part of humans' analogical reasoning (Clement & Gentner, 1991; Gentner & Toupin, 1986; Gentner, Rattermann, & Forbus, 1993; Gick & Holyoak, 1983).

Previous research (Gentner & Markman, 1994, 1995; Goldstone, 1994a; Goldstone & Medin, 1994a, 1994b; Markman & Gentner, 1993a, 1993b) has shown that there are also strong influences of alignment on similarity. Shared properties between objects increase similarity more if the properties belong to parts that correspond to each other. For example, in a pilot study by Goldstone (1991) using the materials in Figure 1, 20 University of Michigan undergraduates were shown triads consisting of A, B, and T, and were asked to say whether Scene A or B was more similar to T. The strong tendency to choose A over B in the first panel suggests that the feature "square" influences similarity. Other choices indicated that subjects also based similarity judgments on the spatial locations and shadings of objects as well as their shapes.

Insert Figure 1 about here

However, it is not sufficient to represent the left-most object of T as {Left, Square, Black} and base similarity on the number of shared and distinctive features. In the second panel, A is again judged to be more similar to T than is B. Both A and B have the features "Black" and "Square." The only difference is that for A and T, but not B, the "Black" and "Square" features belong to the same object. This is not incompatible with feature set representations as long as we include the possibility of conjunctive features in addition to simple features such as "Black" and "Square" (Gluck, 1991; Hayes-Roth & Hayes-Roth, 1977). By including the conjunctive feature "Black-Square," which is possessed by T and A, we can explain, using feature sets, why T is more similar to A than B. The third panel demonstrates the need for a "Black-Left" feature, and other data indicate a need for a "Square-Left" feature. Altogether, if we wish to explain the similarity judgments that people make, we need a feature set representation that includes six features (three simple and three complex) to represent the square of T.

But, there are two objects in T, bringing the total number of features required to at least two times the six features required for one object. The number of features required increases still further if we include feature-triplets such as "Left-Black-Square." In general, if there are O objects in a scene, and each object has F features, then there will be OF simple features. There will be OF(F-1)/2 conjunctive features that combine two simple features from the same object (i.e. pair-wise conjunctive features). If we limit ourselves only to simple and pairwise features, those features required to explain the pattern of similarity judgments in Figure 1, we will still require OF(F+1)/2 features per scene, or OF(F+1) features for two scenes that are compared to one another.
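The counting argument can be checked directly; the function below is only a restatement of the arithmetic above:

```python
def features_per_scene(O, F):
    """Simple features (O*F) plus pairwise conjunctions within each object
    (O * F*(F-1)/2), which together equal O*F*(F+1)/2."""
    simple = O * F
    pairwise = O * F * (F - 1) // 2
    return simple + pairwise

# One object with three features (T's square) needs six features;
# a scene of two such objects needs twelve, and a compared pair of scenes
# needs twenty-four.
print(features_per_scene(1, 3), features_per_scene(2, 3), 2 * features_per_scene(2, 3))
```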

Thus, featural approaches to similarity require a fairly large number of features to represent scenes that are organized into parts. Similar problems exist for dimensional accounts of similarity. The situation for these models becomes much worse when we consider that similarity is also influenced by relations between features such as "Black to the left of white" and "square to the left of white." Spatial relations such as these have been shown to influence similarity judgments (Goldstone et al., 1991; Markman & Gentner, 1993a; Palmer, 1978). Considering only binary relations, there are O²F²R - OFR relations within a scene that contains O objects, F features per object, and R different types of relations between features.

The propositional representation of T might be Left-of(In-same-object(black, square), In-same-object(white, circle)). The propositional representation trades off processing complexity for efficiency. The representation is efficient in that only five non-syntactic symbols are required whereas the feature set representation required 24 features to obtain the same representational power. However, the propositional representation assumes that there will be processes that operate on the representation to determine the similarity. The process that operates on the feature set representation is quite simple. Similarity is computed by simply matching identical features between scenes, and increasing similarity as a function of this pool of shared features minus a function of the features that were not matched. For the propositional representation, processes are required that count shared arguments more for similarity if they are in the same order, in the same relation, and combined with the same arguments in the same relation.

The complex, propositional representation requires a more complex set of processes to use it than were required for featural or dimensional representations. Questions like "What is done with matching simple features that are not in the same relation?" arise with propositional, but not feature set, representations. However, it is likely that the added representational and processing complexity of propositional representations is a price worth paying for the ability to efficiently capture the structure of a scene, particularly when the scene has many structured elements. The model to be described is an attempt to provide a formal processing account that uses propositional representations in determining similarity.

One particularly important problem that arises when structured scenes are compared concerns establishing correspondences or alignments between scene elements. One may account for the choice of A over B as more similar to T in the second panel of Figure 1 by pointing out that, although both A and B have a black object, B's black object does not correspond to T's black object. It is assumed that the similarity of two scenes is increased more by a matching feature that occurs between corresponding, rather than noncorresponding, objects. As the lowest panel indicates, object correspondence may depend on location rather than shape.

In short, it is often useful to describe scenes structurally. Landscapes, faces, stories, melodies, and drawings frequently contain multiple interrelated objects that, in turn, contain multiple features. Propositional representations capture structural aspects of scenes, but introduce complications in determining how to compare propositions for similarity. Processes must exist to place the elements from one scene into correspondence with the elements from the other scene. Correspondences are influenced by perceptual similarity, and by other correspondences (according to their consistency). Reciprocally, correspondences influence how much a particular matching element will influence similarity.

A Description of SIAM

Originally inspired by McClelland and Rumelhart's (1981) interactive activation model of word perception, SIAM also bears many resemblances to models of analogical reasoning (Falkenhainer, Forbus, & Gentner, 1989; Holyoak & Thagard, 1989). Complete descriptions of SIAM are provided elsewhere (Goldstone, 1994a; Goldstone & Medin, 1994a). The primary processing unit is the node. Nodes send and receive activation from other nodes. As in Holyoak and Thagard's ACME model, nodes represent hypotheses that two entities in two scenes correspond to one another. In SIAM, there are two types of nodes: feature-to-feature nodes and object-to-object nodes (the full version of SIAM [Goldstone, 1991] also has nodes that link relations between objects).

Feature-to-feature nodes each represent a hypothesis that two features correspond to each other. There will be one node for every pair of features that belong on the same dimension (e.g. "white" and "black" both belong to the "color" dimension). As the activation of a feature-to-feature node increases, the two features referenced by the node will be placed in stronger correspondence. All node activations range from 0 to 1. In addition to activation, feature-to-feature nodes also have a "Match" value -- a number between 0 and 1 that indicates how similar the two features' values on a dimension are. The Match value decreases monotonically as the similarity of two values decreases (Medin & Schaffer, 1978). Object-to-object nodes each represent a hypothesis that two objects correspond to one another. As the activation of an object-to-object node increases, the two objects will be said to be placed in stronger correspondence with each other.

At a broad level, SIAM works by first creating correspondences between the features of scenes. At first, SIAM has "no idea" what objects belong together. Once features begin to be placed into correspondence, SIAM begins to place objects into correspondence that are consistent with the feature correspondences. Once objects begin to be put in correspondence, activation is fed back down to the feature (mis)matches that are consistent with the object alignments. In this way, object correspondences influence activation of feature correspondences at the same time that feature correspondences influence the activation of object correspondences.

As in ACME and McClelland and Rumelhart's original work, activation spreads in SIAM by two principles: 1) nodes that are consistent send excitatory activation to each other, and 2) nodes that are inconsistent inhibit one another. Nodes are inconsistent if they create two-to-one alignments -- if two elements from one scene would be placed into correspondence with one element of the other scene. Feature-to-feature nodes also excite, and are excited by, object-to-object nodes. For example, the node that places Object A in correspondence with Object C is excited by the node that places a feature of A into correspondence with a feature of C. The excitation is bi-directional; a node placing two features in correspondence will be excited by the node that places the objects composed of those features into correspondence. Finally, match values excite feature-to-feature nodes: features are placed in correspondence to the extent that their values match.

Processing in SIAM starts with a description of the scenes to be compared. Scenes are described in terms of objects that contain feature slots that are filled with particular feature values. Processing consists of activation passing. On each time cycle, activation spreads between nodes. Network activity starts by features being placed in correspondence according to their perceptual similarity, as determined by match values. Subsequently, nodes send activation to each other for a specified number of time cycles. The net input to a node i is given by:

Neti(t) = ( Σj=1..n Aj(t) Wij - MIN ) / ( MAX - MIN )
where n is the number of afferent links to node i (including excitatory links from match values to nodes), Aj(t) is the activation of node j at time t, and Wij is the weight of the link going from unit j to unit i. In the current modeling, all weights are set equal to 1.0 (for excitatory connections) or -1.0 (for inhibitory connections). Neti(t) is the activation of node i normalized by the difference between the maximum (MAX) and minimum (MIN) activation values that i can possibly attain, given the number of inhibitory and excitatory afferents to i. If i has 2 inhibitory and 1 excitatory afferents, then MIN=-2 (if both inhibitory inputs were completely activated, and the excitatory input was zero) and MAX=1. The new activation of a node at time t+1 is a synchronously updated function of the old activation at time t and the net input received by the node:

if Neti(t) > 0.5, then

Ai(t+1) = Ai(t) + (1 - Ai(t)) * (Neti(t) - 0.5) * B

otherwise

Ai(t+1) = Ai(t) - Ai(t) * (0.5 - Neti(t)) * B

where B is a parameter for the amount of activation adjustment.
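A minimal sketch of one update cycle under these definitions; the way the raw weighted sum is rescaled into [0, 1] by the attainable extremes, and the value B = 0.4, are illustrative assumptions:

```python
def normalize(raw, n_excitatory, n_inhibitory):
    """Rescale a raw weighted input sum into [0, 1] using the extreme
    values the node could attain given its afferent connections."""
    max_in = float(n_excitatory)   # all excitatory afferents fully active
    min_in = -float(n_inhibitory)  # all inhibitory afferents fully active
    return (raw - min_in) / (max_in - min_in)

def update_activation(a, net, B=0.4):
    """One synchronous SIAM update: normalized net input above 0.5 moves
    the activation toward 1; input below 0.5 moves it toward 0."""
    if net > 0.5:
        return a + (1.0 - a) * (net - 0.5) * B
    return a - a * (0.5 - net) * B

# One excitatory afferent at activation 0.9 and one inhibitory at 0.2:
net = normalize(1.0 * 0.9 + (-1.0) * 0.2, n_excitatory=1, n_inhibitory=1)
print(net, update_activation(0.5, net))
```

Because the adjustment is proportional to the distance from the nearest bound, activations approach 0 or 1 asymptotically rather than overshooting.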

The network's pattern of activation determines both the similarity of the scenes and the alignment of the scenes' features and objects. Nodes that have high activity will be weighted highly in the similarity assessment and their elements will tend to be placed in alignment.

At each time cycle, the similarity of the two scenes is computed by

S = ( Σi=1..n Ai × matchi ) / ( Σi=1..n Ai ),

a simplified version of the similarity formula used by Goldstone (1994a), where n is the number of feature-to-feature nodes required to represent two scenes (n = FO², where F is the number of features in an object, and O is the number of objects in the scenes), Ai is the activation of node i (0 ≤ Ai ≤ 1), and matchi is the match value associated with node i, which lies between 0 and 1. Node activations, but not match values, change with processing. Generally speaking, nodes that represent correspondences that are consistent with many other strong correspondences will have their activation increased. Thus, similarity will be influenced by the perceptually determined featural similarities between scene elements (match values), but it will also be influenced by the activation or attention given to these similarities. In turn, the activation of a particular similarity will depend on its consistency with other emerging similarities.
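One reading of this computation, with similarity as an activation-weighted average of the match values, can be sketched as follows (the activations and match values are illustrative):

```python
def scene_similarity(activations, matches):
    """Activation-weighted average of match values: feature matches that
    receive more activation (attention) count more toward similarity."""
    weighted = sum(a * m for a, m in zip(activations, matches))
    return weighted / sum(activations)

# Two strongly activated matching features and one weakly activated mismatch:
print(scene_similarity([0.9, 0.8, 0.2], [1.0, 1.0, 0.0]))
```

On this reading, a mismatch drags similarity down only to the extent that its node stays active, which is how alignment modulates the impact of individual feature (mis)matches.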

In addition to computing scene similarity, SIAM also determines the strength of correspondence between scene elements, as represented by Ai values. Mapping accuracy, defined as subjects' ability to align objects in a scene in an optimal manner, is modeled by comparing object-to-object node activations. The node activations associated with one mapping of objects from one scene to another are compared with alternative mappings' activations. The probability of performing a particular mapping of scene elements is

P(M) = ( Σi∈C(M) Ai^k ) / ( Σj=1..n Aj^k )

where C(M) is the set of object-to-object nodes consistent with mapping M, n is the total number of object-to-object nodes, and k > 1 is an exponent. According to this formula, a mapping is likely to be made if the objects placed in correspondence by the mapping are strongly aligned (their object-to-object Ai value is high) relative to the other object alignments. Because the exponent is greater than one, the formula also emphasizes object correspondences that approach 1.0, the maximum correspondence strength.

Empirical Tests of SIAM

In the early stages of developing SIAM, efforts were made to quantitatively fit SIAM to sets of similarity ratings obtained from many comparisons, and to compare these fits to feature-based models without alignment (Goldstone, 1994a). Once reasonable quantitative fits were found, specific predictions derived from SIAM were empirically tested on human subjects (Goldstone, 1996; Goldstone & Medin, 1994a, 1994b). Throughout this process, the model has been applied to data from a range of laboratory tasks, including similarity ratings, mapping judgments, detection response times, and speeded same/different judgments. These variations all fit under what Jacobs and Grainger (1994b) call the "horizontal generalizability" of a model. We have also begun addressing what they call "vertical generalizability," "a model's ability to generalize across different scales of the modeled process" (p. 1316), by correlating subjective judgment difficulty with architectural complexity in SIAM.

The Role of Alignment in Similarity

Our initial series of experiments (Goldstone, 1994a) that tested SIAM explored the role of feature alignment in similarity judgments. In particular, we were interested in whether and how the influence that a matching feature has on similarity depends on how well aligned it is. For example, in comparing cricket to baseball, does the shared white feature between baseballs and cricket players' shirts increase the sports' similarity? Intuitively, one may feel it does not, because this shared feature occurs between parts that do not correspond to each other. We will call a matching feature between two entities a MOP (Match Out of Place) if it occurs between poorly aligned parts, and a MIP (Match In Place) if it occurs between properly aligned elements. Our initial studies quantitatively explored the influence of MOPs and MIPs on similarity in situations where deciding whether a match is aligned or not depends on the entire state of other matches between the scenes. In terms of SIAM's structure, parts tend to be aligned if they have many features in common, and once they are aligned, the alignments may influence how particular feature matches are weighted.

In a typical experiment, two scenes were shown side-by-side, and each scene was composed of two butterflies. Each butterfly varied on four dimensions: head type, tail type, body shading, and wing shading. For each pair of scenes, subjects assigned a similarity rating (higher numbers for greater similarity) and then indicated which butterflies corresponded to each other between the scenes. It was stressed to the subjects that they were supposed to rate the similarity of the whole left scene to the whole right scene. On each trial, one display was randomly constructed (the "starting display") and the other display (the "changed display") was constructed by selectively changing features of the starting display. Features were altered in one of the six ways shown in Figure 2. The changes can be abstractly represented by the alphabetic representation below each changed display, with the letters X, Y, A, and B denoting different features (body shading in the example).

The method labeled "XY → XY" simply duplicates the starting display to create the changed display. Thus, along the body shading dimension there are 2 MIPs. The method labeled "XY → YX" swaps the body shading of the starting display's butterflies in creating the changed display (as indicated by the reversed order of letters in the label). Thus, for this display, the same shadings are present in the two displays, but matching shadings belong to dissimilar, unaligned butterflies, producing 2 MOPs. "XY → YB" introduces one new shading B and has one matching shading Y in common with the starting display. The matching shading belongs to dissimilar butterflies.
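The MIP/MOP bookkeeping for a single dimension can be sketched as follows; the scene encoding and function name are illustrative, not SIAM's actual input format:

```python
def count_mips_mops(scene1, scene2, mapping):
    """Count matches in place (MIPs: shared features between aligned objects)
    and matches out of place (MOPs: shared features between unaligned objects).

    Scenes are lists of feature sets; mapping[i] gives the index in scene2
    of the object aligned with scene1's object i.
    """
    mips = mops = 0
    for i, obj1 in enumerate(scene1):
        for j, obj2 in enumerate(scene2):
            shared = len(obj1 & obj2)
            if mapping[i] == j:
                mips += shared
            else:
                mops += shared
    return mips, mops

# "XY -> YX": the same two body shadings appear in both scenes, but each
# matching shading belongs to the non-corresponding butterfly.
start = [{"shading-X"}, {"shading-Y"}]
changed = [{"shading-Y"}, {"shading-X"}]
print(count_mips_mops(start, changed, mapping={0: 0, 1: 1}))  # (0, 2): 2 MOPs
```

Under the same identity mapping, the "XY -> XY" duplication instead yields 2 MIPs and 0 MOPs, so the two display types hold total feature matches constant while varying alignment.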

Insert Figure 2 about here

On half of the trials, in going from the starting display to the changed display, we changed the features of one dimension, in one of six ways. On the other half of trials, we changed two dimensions, each in one of six ways. On one third of the trials, the butterflies that corresponded to each other were placed in the same relative positions in the two scenes. On one third of the trials, the butterflies that corresponded to each other were reversed, and on one third of the trials, the butterflies were given new unrelated positions. The particular features that were switched, the positions of the butterflies, and the particular values that the dimensions had were all randomized.

The results reveal an influence of both matches in and out of place on similarity. First of all, similarity ratings for 0, 1, and 2 MIPs averaged 5.5, 6.4, and 7.1, respectively. MOPs had a much smaller effect; the ratings for 0, 1, and 2 MOPs averaged 5.5, 5.5, and 5.9, respectively.

Additional support for the hypothesis that scene alignment influences similarity can be obtained by comparing trials where the subject makes the experimenter-expected mapping to those where the subject does not. If the subject makes the mapping that maximizes the number of matches in place (the expected mapping), then similarity is much greater than if subjects make a non-optimal mapping. Both mappings result in the same number of total scene matches; the expected mapping results in a greater number of MIPs relative to MOPs. Thus, the difference between expected and unexpected mapping trials provides further evidence that MIPs increase similarity more than MOPs.

A summary of the results from the first experiment reveals: (1) MIPs and MOPs both increase similarity, but MIPs increase similarity more, (2) if subjects give non-optimal mappings, similarity is lower than if they give the optimal mapping, and (3) MIPs simultaneously increase similarity ratings and mapping accuracy, but MOPs increase similarity while decreasing mapping accuracy. The first two conclusions speak to our most central claim: the act of assessing similarity involves placing the parts of scenes in alignment with one another.

SIAM's Account of Alignment Effects. To convey an intuitive understanding of SIAM's account of the differential impact of MIPs and MOPs on similarity, a small example will be presented. Figure 3 shows the node activations and feature values when SIAM is presented with the starting display and the display labelled "XY → YX." This display produces two MOPs along the body shading dimension. Each of the 4 object-to-object nodes and 20 feature-to-feature nodes is represented by a slot in Figure 3, and the two values within the slot reflect the activations of these nodes after 10 and 20 cycles of processing have passed.

Insert Figure 3 about here

Figure 3 shows that properly aligned feature matches become more activated as processing continues, while improperly aligned feature matches lose activation. MOPs influence similarity less because of these decreased activations. Although MIPs and MOPs are both associated with match values of 1.0, similarity is computed as a weighted product of match values by their activations. Feature-to-feature nodes that are not consistent with other strongly activated nodes will tend to be de-activated, and this results in less impact of MOPs on similarity ratings. Mirroring these feature-level trends, optimally aligned objects become more strongly connected with time. As a result, mapping accuracy improves over time.
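The weighting scheme just described can be sketched with a small illustrative calculation. The weighted-average rule and all numeric values below are assumptions for illustration, not SIAM's published equations:

```python
# Illustrative sketch: similarity as an activation-weighted average of
# feature match values. Activation values here are invented for the example.

def weighted_similarity(match_values, activations):
    """Match values (1.0 = match, 0.0 = mismatch) weighted by node activation."""
    weighted = sum(m * a for m, a in zip(match_values, activations))
    return weighted / sum(activations)

# Two MIPs (highly activated, aligned) plus two mismatches:
mip_scene = weighted_similarity([1.0, 1.0, 0.0, 0.0], [0.9, 0.9, 0.4, 0.4])
# Two MOPs (de-activated, unaligned) plus two mismatches:
mop_scene = weighted_similarity([0.0, 0.0, 1.0, 1.0], [0.9, 0.9, 0.4, 0.4])

print(mip_scene, mop_scene)  # MIPs contribute more: ~0.69 vs ~0.31
```

Even though both comparisons contain exactly two feature matches, the de-activated MOPs are down-weighted, so the MIP comparison comes out more similar.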

Assessing the SIAM Model. A sufficiently large amount of data was collected to quantitatively fit SIAM. Thirty-six different displays were presented with varying numbers and distributions of MIPs and MOPs. A version of SIAM was fit with two free parameters -- one for the number of cycles of activation passing allowed, and one for the relative influence of object-to-object activations on feature-to-feature activations. The correlation between the best fitting SIAM model and the empirically obtained data was .978 for the experiment reported, and .988 for a replication that did not require subjects to perform mappings after rating similarity.

Although these correlations seem impressive, they are misleading because any model that has a monotonic relation between shared features and similarity, and weights aligned features more than unaligned features, will correlate fairly highly with the human data. To properly evaluate SIAM, two simple alternative models of similarity were developed that were special cases of Tversky's contrast model. Both of these models have mechanisms that make aligned features more influential than unaligned features.

The first model, SCFM (Simple and Conjunctive Features Model), is based on the assumption that all of the features of the two scenes are listed, and similarity is a monotonically increasing function of the number of shared simple and conjunctive features weighted by their salience. Historical justification for this model comes from work showing that people are sensitive to configurations of cues in addition to simple cues (Gluck & Bower, 1988; Hayes-Roth & Hayes-Roth, 1977). If simple (e.g. black body) and conjunctive (e.g. black body with checkerboard wings) features are permitted, the advantage of MIPs over MOPs can be roughly accommodated. A scene with two objects ABCD and EFGH (each letter refers to a particular feature) would be described as possessing the following features:

simple features: {A,B,C,D,E,F,G,H}

2-way conjunctive features: {AB,AC,AD,BC,BD,CD,EF,EG,EH,FG,FH,GH}

3-way conjunctive features: {ABC,ABD,ACD,BCD, EFG,EFH,EGH,FGH}

4-way conjunctive features:{ABCD,EFGH}

Thus, the scene would be more similar to a scene with the same objects than to one with objects ABCH and EFGD. Although both of these comparisons have the same number (8) of shared simple features, any scene shares 30 simple and conjunctive features with itself (i.e. the scene has 30 features), as compared to the second scene's 16 matching features. As such, SCFM can predict that 2 MIPs increase similarity more than 2 MOPs. In testing SCFM, a linear regression model was used with four predictor terms:

similarity_SCFM = a (number-of-shared-simple-features) + b (number-of-shared-2-way-conjunctive-features) + c (number-of-shared-3-way-conjunctive-features) + d (number-of-shared-4-way-conjunctive-features)
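SCFM's feature listing can be made concrete with a short sketch, treating each object as a string of feature codes as in the ABCD/EFGH example above; every nonempty subset of an object's simple features counts as a simple or conjunctive feature. The function names here are hypothetical:

```python
from itertools import combinations

def object_features(obj):
    # Every nonempty subset of an object's simple features is a
    # simple (size 1) or conjunctive (size > 1) feature.
    return {combo for k in range(1, len(obj) + 1)
            for combo in combinations(sorted(obj), k)}

def scene_features(scene):
    # A scene's feature list pools the features of all its objects.
    feats = set()
    for obj in scene:
        feats |= object_features(obj)
    return feats

scene = scene_features(["ABCD", "EFGH"])
print(len(scene))  # 30 features, matching the count in the text

other = scene_features(["ABCH", "EFGD"])
print(len(scene & other))  # 16 shared features, as in the text
```

Counting shared features this way reproduces the 30-versus-16 contrast used above to show how SCFM can favor MIPs over MOPs.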

The second model, WMMM (Weighted MIPs and MOPs Model) makes an explicit distinction between MIPs and MOPs, assigning a separate weighting term for each of these terms in a regression that predicts similarity:

similarity_WMMM = a (number-of-MIPs) + b (number-of-MOPs) + c (number-of-MIPs × number-of-MOPs).

The fits of SCFM and WMMM to the reported experiment are respectable, yielding correlations to human results of .94 (SCFM) and .968 (WMMM), with one (for WMMM) or two (for SCFM) more free parameters than SIAM. SIAM correlated significantly more highly with the data than either of these other models, but to compare the models more precisely, a stepwise linear regression determined whether one model significantly increased fit with the empirical data when it was added to the other model. With this technique it was shown that SIAM accounted for trends in the data that could not be accounted for by SCFM or WMMM. On the other hand, neither of these other models significantly increased SIAM's fit when they were added to SIAM. An analysis of the largest data-model residuals suggests that SIAM had three advantages over these models:

1) SIAM predicts that object alignment, not simply object similarity, influences similarity. Particular comparisons allowed object alignment to be teased apart from object similarity. SIAM predicts that object alignment will, in the long run, serve as the basis for weighting feature matches; MIPs become relatively influential in similarity assessments, as compared to MOPs. SCFM, the model that bases similarity on simple and conjunctive features, essentially predicts that object similarity determines how much a feature match will count. Objects will tend to be aligned if they share many features; however, object alignment also depends on the similarity of other object pairs in the scene. When alignment and similarity are dissociated, object alignment, not sheer number of matching features, is the better predictor of scene similarity. For example, consider a situation in which one scene contains two objects that can be abstractly described as AAAA and BBBB, and another scene contains the objects AABC and DAAA. Although the object AABC is more similar to the object AAAA, it optimally aligns with BBBB. If we enforce the constraint that only one-to-one alignments between objects are allowed, then placing AAAA into correspondence with DAAA, and BBBB into correspondence with AABC, maximizes the number of feature matches that occur between corresponding objects. Experimental tests show that, indeed, the feature matches between optimally aligned, rather than most similar, objects are most influential to similarity.

2) SIAM predicts 2 MOPs > 1 MOP = 0 MOPs. For the purpose of increasing similarity, 2 MOPs increase similarity significantly over 1 MOP, but in many cases, 1 MOP does not significantly increase similarity over scenes with 0 MOPs. This pattern specifically holds when the 2 MOPs are arranged as dictated by the method "XY → YX." In other words, when the dimension values of two butterflies are swapped to create the changed scene, then the matches associated with those values have a relatively strong influence on similarity. WMMM is unable to accommodate this nonlinearity because it assigns a single weight for all MOPs. The finding is naturally handled by SIAM's posited interaction between (in)consistent feature-to-feature nodes. Feature-to-feature nodes that consistently place feature values from the same dimension in correspondence send direct activation to one another. Two MOPs, if created by swapping feature values, will support one another, although they are inconsistent with the globally optimal object-to-object correspondences. A single MOP has no supporting partner, and thus will receive little activation.

SIAM also accurately predicts that a MOP that competes against a MIP will have very little influence on increasing similarity. For example, in Figure 2, the XY → XX display has a MOP on the feature black body shading. This MOP does not increase similarity. This effect is consistent with SIAM's account of similarity in which scene parts are placed in correspondence and inconsistent correspondences compete against one another. The MIP involving the same black body shading feature will strongly inhibit this MOP because it directly violates the one-to-one mapping constraint.

3) SIAM predicts the influence of feature distribution on similarity ratings. MIPs that are concentrated in object pairs (several matches exist between two specific butterflies) or in dimensions (for example, several matches exist on the body shading dimension) increased similarity more than MIPs that are distributed across object pairs and/or dimensions. Once again, WMMM cannot accommodate this result because it predicts that all comparisons that have the same numbers of MIPs and MOPs will be equally similar. In SIAM, the dimension-concentrated advantage stems from the influence of feature-to-feature nodes on (in)consistent feature-to-feature nodes. Feature-to-feature nodes for a particular dimension are inconsistent if they create a many-to-one mapping between feature values; otherwise they are consistent. Consistent feature-to-feature nodes mutually excite each other, thus increasing their influence on similarity. The object-concentrated advantage accrues from feature-to-object and object-to-feature connections. Objects with many concentrated feature matches will be placed in strong correspondence, and will feed activation back down to the individual feature matches.
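The one-to-one alignment constraint discussed in point (1) can be illustrated by exhaustive search over object pairings, using the AAAA/BBBB versus AABC/DAAA example from the text. The brute-force search itself is an illustrative assumption, not SIAM's mechanism, which settles on alignments through activation passing:

```python
from itertools import permutations

def feature_matches(a, b):
    # Objects are strings of feature codes, compared dimension by dimension.
    return sum(x == y for x, y in zip(a, b))

def optimal_alignment(scene1, scene2):
    # Exhaustively try every one-to-one pairing of objects across scenes
    # and keep the pairing that maximizes total feature matches.
    best_pairs, best_score = None, -1
    for perm in permutations(scene2):
        score = sum(feature_matches(a, b) for a, b in zip(scene1, perm))
        if score > best_score:
            best_pairs, best_score = list(zip(scene1, perm)), score
    return best_pairs, best_score

pairs, score = optimal_alignment(["AAAA", "BBBB"], ["AABC", "DAAA"])
print(pairs, score)
# AAAA aligns with DAAA (3 matches) and BBBB with AABC (1 match, 4 total),
# even though AABC is more similar to AAAA (2 matches) than to BBBB.
```

The alternative pairing (AAAA with AABC, BBBB with DAAA) yields only 2 total matches, so the one-to-one constraint forces AABC to align with the less similar BBBB.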

In addition to these properties that produce a quantitative advantage, SIAM has a major conceptual advantage over SCFM and WMMM. SIAM presents a process model for determining alignments. In WMMM, a process for determining alignments is presumed but not described. SIAM, in addition to computing similarity, determines correspondences between objects. These mapping predictions have been correlated with subjects' mapping judgments ("which butterfly corresponds to this butterfly"), yielding reasonable fits with the same parameters used to optimize SIAM's fit to similarity ratings.

The Dynamics of Similarity

SIAM is unique among models of similarity in that it makes predictions about the dynamic time course of similarity. As SIAM executes more cycles of activation passing, feature-to-feature nodes become increasingly influenced by object correspondences. At first, how strongly two objects' features correspond to each other depends mostly on the features' similarity; two features tend to be put into correspondence if they are identical or highly similar. With more time, feature correspondences depend increasingly on object correspondences. Specifically, features tend to be placed in strong correspondence if they belong to objects that are placed in correspondence. In turn, objects are placed in correspondence if they are featurally similar and consistent with other emerging correspondences. After several cycles, feature correspondences will reflect the strength of the object correspondences with which they are consistent.
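A toy simulation can convey this dynamic qualitatively. The update rules and constants below are made up for illustration and are not SIAM's published equations; the point is only that mutual excitation makes aligned matches gain activation over cycles while unaligned matches lose it:

```python
def run_cycles(n_cycles, rate=0.2):
    # One consistent object-to-object node, one aligned feature match (MIP),
    # and one unaligned feature match (MOP); all dynamics are illustrative.
    obj, mip, mop = 0.5, 0.5, 0.5
    history = []
    for _ in range(n_cycles):
        obj = min(1.0, obj + rate * mip)   # object node excited by its MIP
        mip = min(1.0, mip + rate * obj)   # MIP excited by the object node
        mop = max(0.0, mop - rate * obj)   # MOP inhibited by the object node
        history.append((round(mip, 3), round(mop, 3)))
    return history

trace = run_cycles(10)
print(trace[0], trace[-1])  # MIP rises toward 1.0; MOP falls toward 0.0
```

Early in processing the two match types carry similar weight; with more cycles the positive feedback between object and aligned feature nodes drives them apart, which is the pattern behind the deadline results discussed next.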

One prediction of this temporal process is that the relative importance of MIPs compared with MOPs increases with processing time. This prediction is seen in Figure 3. In this example, as cycles increase, MOPs are decreasingly weighted and MIPs are increasingly weighted for similarity.

In support of this prediction, Goldstone and Medin (1994a) showed that MIPs become more influential with processing time relative to MOPs. In one experiment, subjects were required to decide whether two scenes contained the same objects (butterflies) within a specified time limit (1, 1.84, or 2.68 seconds). On half of the trials, the same butterflies were presented in the two displays, and on the other half of trials, selected features were altered. The rate of incorrect responses on "different" trials was assumed to be directly related to the scenes' similarity -- as different scenes become more similar it gets harder to respond that they are different.

As with the previously described experiment, each trial paired a starting display with a changed display created by one of the six methods shown in Figure 2. By comparing "same"/"different" judgments across displays with differing numbers of feature matches belonging to corresponding and noncorresponding pairs of objects, it was possible to determine the influence of each type of match. As evident in Figure 4, when subjects were given a fairly long amount of time to make their judgments, MIPs were much more important than MOPs in determining errors and response times for "different" responses. When subjects were compelled to respond within a short deadline, the two types of feature match were much closer in importance. This is exactly the qualitative pattern of results predicted by SIAM.

Insert Figure 4 about here

To quantitatively fit SIAM to the same/different judgment task, some additional assumptions are required. SIAM's similarity estimate is converted into "same," "different," and "overtime" responses by augmenting SIAM with a boundary-crossing model. At any given time step, Gaussian random noise is added to SIAM's estimate and the resulting value is compared to upper and lower boundaries. If the result exceeds the upper boundary, a "same" response is given, and if the result falls below the lower boundary, a "different" response is given. If neither boundary is crossed before the deadline is reached, the response is classified as "overtime." Finally, the upper and lower boundaries are not fixed across processing time; they gradually converge together. By having the boundaries converge at different rates, the three different response time deadlines are modelled (Busemeyer & Rapoport, 1988). Short deadline trials are simulated by relatively fast convergence rates for the boundaries.
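The boundary-crossing assumptions can be sketched as a small simulation. All numeric values here (noise level, starting boundary, convergence rate) are illustrative placeholders, not the fitted parameters reported below:

```python
import random

def respond(estimate, deadline_steps, noise_sd=0.5,
            start_bound=2.0, converge_rate=0.05, rng=None):
    # Each step, a noisy version of the similarity estimate is compared
    # to upper and lower boundaries that converge over time.
    rng = rng or random.Random()
    upper, lower = start_bound, -start_bound
    for _ in range(deadline_steps):
        value = estimate + rng.gauss(0.0, noise_sd)
        if value >= upper:
            return "same"
        if value <= lower:
            return "different"
        upper -= converge_rate  # boundaries converge toward zero,
        lower += converge_rate  # guaranteeing a response eventually
    return "overtime"

# A strongly positive estimate crosses the upper boundary almost immediately;
# a near-zero estimate with a tight deadline tends to run overtime.
print(respond(5.0, deadline_steps=40, rng=random.Random(0)))
print(respond(0.0, deadline_steps=3, noise_sd=0.0))
```

Shorter deadlines would be modelled by a larger convergence rate, so the boundaries close in faster and responses are forced before alignment-driven differences between MIPs and MOPs fully develop.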

These added processing assumptions add to the number of free parameters in SIAM. A six-parameter version of SIAM (variance of Gaussian noise, distance between upper and lower response boundaries, three parameters for convergence rates of the different deadlines, and the number of SIAM cycles that correspond to one second of human processing) was fit to 39 data points of same/different/overtime responses. The fit again was high (Pearson correlation = .973), and SIAM significantly augmented other models. In particular, using the stepwise linear regression technique to see whether Models A+B accounted for human performance better than Model A alone, it could be concluded that SIAM considerably increased the fit of three alternative models.

First, it significantly augmented a four-parameter model that based responses on overall numbers of matching features. This model's weakness was clear: because it did not distinguish between MIPs and MOPs, it was unable to account for the considerably greater influence of MIPs on increasing error rates on "different" trials.

Second, SIAM significantly augmented a three-parameter model that distinguishes between MIPs and MOPs, assigning separate regression weights to these variables. The problem with this model is that, although it can handle the greater influence of MIPs than MOPs on error rates, it cannot account for the interaction between the type of feature match (MIP or MOP) and response deadline.

Finally, a six-parameter regression model of similarity was tested that assigned different regression weights for MIPs and MOPs at each of the three deadlines. This model resulted in the regression equation:

% Errors on "different" trials = 7.4 (fast-deadline MOPs) + 14.1 (fast-deadline MIPs) + 0.9 (medium-deadline MOPs) + 10.2 (medium-deadline MIPs) + 0.5 (slow-deadline MOPs) + 9.2 (slow-deadline MIPs).

These regression coefficients confirm that MIPs are more influential than MOPs, and that the relative importance of MIPs compared to MOPs is greater at slower deadlines. SIAM significantly improves the fit of even this third model. By analyzing the data/model residuals for individual comparisons, it becomes clear that SIAM's superiority stems from its ability to predict different rates of alignment. The third regression model allows MIPs to be differently weighted at different deadlines, but assumes that all MIPs receive equal weight at a given deadline. In contrast, SIAM naturally predicts that alignments take varying amounts of time to compute. A large influence of MIPs relative to MOPs can take a long time to develop if the objects are not placed in strong alignment, but can develop rapidly if the objects have no strong competitors. For example, even at a fast deadline, SIAM correctly predicts that MIPs will count much more than MOPs when the compared scenes have many MIPs and no MOPs. In this situation, the proper alignments are quickly determined. For displays that have many MOPs, SIAM predicts that alignments will be slowly determined and MIPs may not have a strong advantage over MOPs even at long deadlines. In sum, SIAM's ability to account for the empirical results is not only based on its ability to predict the interaction between type of feature match (MIP vs MOP) and time, but also on its specific dynamic process for determining alignments.

SIAM's predictions for dynamically changing similarity assessments have also been applied to existing data sets. In particular, some empirical discrepancies between results from different similarity tasks can be explained by tasks requiring different amounts of time. For example, although Corter (1987) was mostly interested in the commonalities between same/different judgments and similarity ratings, systematic differences in his results are explained if SIAM is used to model both judgments, but more cycles of processing are allowed for the untimed similarity ratings than for speeded same/different judgments. In particular, relational properties (the position of one line relative to other lines in stick figures) assume greater importance than simple properties (absolute orientation and position of a line) for similarity ratings, whereas this trend is diminished or reversed for same/different judgments. In SIAM, relational similarities require object-to-object alignments and therefore require extended time to determine, whereas simple feature matches, like MOPs, can be registered before object-to-object alignments are determined. This explanation can also account for Beck's (1966) dissociations between perceptual grouping judgments and similarity judgments. For the faster perceptual grouping task, relative orientation is less influential than simple, absolute orientation, but the opposite is true for slower similarity ratings. In general, by modelling different tasks with the same SIAM model under different parameters, underlying commonalities between the tasks can be observed. Finding a dissociation between two tasks does not imply that one of the tasks involves something other than the assessment of similarity; it may simply show that the two tasks employ different parameters within a unified similarity computation (see also Goldstone, 1994b).

Mapping Judgments

Although similarity assessments have been the primary focus of our modeling efforts, SIAM also offers predictions for subjects' judgments about what objects correspond or "map" to each other. By polling object-to-object nodes and comparing their relative activations, SIAM makes predictions concerning how often a particular alignment of objects will be made.

Nondiagnostic features and Mapping. One somewhat counterintuitive prediction made by SIAM is that nondiagnostic features -- features that are possessed by all objects within a scene -- may still increase mapping accuracy. A mapping is considered correct if it places optimally aligned objects into correspondence. Objects are optimally aligned if they belong to the consistent set of correspondences that produces the greatest number of feature matches between corresponding objects.

Figure 5 provides an example of two of the displays used in an experiment to test this prediction (Goldstone, 1991). The two displays differ only in whether nondiagnostic features are shared between displays. In the left scenes of Figure 5, three dimensions are nondiagnostic: wings, body, and tail. These dimensions are nondiagnostic because both butterflies within the scenes have the same values on these three dimensions. In the top display, the butterflies in the right scene do not have any nondiagnostic features in common with the left butterflies. In the bottom display, the nondiagnostic features are shared by the right scene's butterflies.

Insert Figure 5 about here

In the experiment, each trial began with the simultaneous display of the initial scene and the changed scene. The subjects' task was to rate the two scenes' similarity on a scale from one to nine. After subjects gave their similarity rating, they were asked to place the butterflies from the left scene "into correspondence" with the butterflies on the right scene, by pressing one key if they believed that the top butterfly of the left scene corresponded to the top butterfly of the right scene, and the two bottom butterflies corresponded to each other, and pressing another key to denote the opposite mappings. Results from the experiment indicated that shared nondiagnostic features did increase mapping accuracy. The results for 2 of the 8 displays are shown in Figure 5, showing an almost two-fold increase in error rate when nondiagnostic features are not shared between scenes. The result is surprising because some analyses would suggest that nondiagnostic features cannot influence mapping performance. By definition, nondiagnostic features cannot serve as cues, by themselves, for determining what objects correspond to each other.

In SIAM, the more features (diagnostic and nondiagnostic) that two objects share, the more strongly the objects will be placed in correspondence, and consequently, the more strongly all feature matches shared by the objects will be activated, including the diagnostic feature match. The diagnostic feature match will, in turn, feed its activation back to the object-to-object correspondence that is part of the optimal mapping. The presence of shared nondiagnostic features places objects in strong correspondence, objects send activation to all consistent features, and the newly strengthened diagnostic features excite consistent, but not inconsistent, object correspondences. If the two scenes do not agree on their nondiagnostic features, no objects will be placed in strong correspondence, and no substantial level of activation will be fed back to the diagnostic feature. In this way, SIAM predicts that even features that provide no cue about what objects correspond to each other can still increase mapping accuracy. In SIAM, mapping accuracy is greater in the lower display because the optimally aligned feature matches on the head dimension receive activation from the strongly activated object-to-object node.

Figure 6 provides a demonstration of SIAM's behavior on the displays shown in Figure 5. In this demonstration, only default parameter values are used (number of cycles does not have a default value, and so is set to an arbitrary value of 10 cycles). The match values for matching and mismatching features, and the final activations of all feature-to-feature and object-to-object nodes are shown.

Insert Figure 6 about here

SIAM's account of the mapping advantage for shared nondiagnostic features can be seen by analyzing activations of correspondences. The matching head features between butterflies, the only features that are diagnostic for determining proper object correspondences, receive more activation when there are shared nondiagnostic features (activation = .982) than when the nondiagnostic features are not shared (activation = .974). In addition, the correct object-to-object correspondences (Object A → Object C, Object B → Object D) receive greater activation when nondiagnostic features are shared (activation = .846) rather than unshared (activation = .592). The positive feedback interaction between object and feature correspondences explains both of these differences.

Judgment Complexity. As described earlier, one possible way to explain the results from Figure 1 without appealing to alignment-based processes is to argue that people are sensitive to conjunctions of simple features (Gluck, 1991; Hayes-Roth & Hayes-Roth, 1977). In the second panel, for example, T might be more similar to A than it is to B because subjects represent T as possessing the feature "black square," a feature that A but not B possesses. This "conjunctive features" explanation can explain why feature matches often increase similarity more when they occur between similar objects.

One source of data of potential use in discriminating between SIAM and the conjunctive features explanation is "ease of comparison" judgments. The conjunctive features model claims that similarity is a function of shared simple and conjunctive features. SIAM claims that similarity is based on feature match values, weighted by a function of feature-to-feature node activations. These models have different computational complexities. In the conjunctive features model, a scene with O objects and F simple features per object requires O(2^F - 1) simple and conjunctive features to be represented. If we only require simple and two-way features (features that combine two simple features) to be represented, OF(F + 1)/2 features are still required. In SIAM, the same scene requires FO^2 feature-to-feature connections and O^2 object-to-object connections. Thus, SIAM and the conjunctive features model both have at least quadratic growth (the conjunctive feature model may have more accelerated growth), but their growth depends on different terms. The number of features that the conjunctive features model posits increases quadratically as F increases linearly. The number of node-to-node connections that SIAM posits increases quadratically as O increases linearly.
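The counts in this complexity analysis can be written out directly. The function names below are hypothetical; the formulas are the ones given above, evaluated for a 2-objects-by-4-features display and a 4-objects-by-2-features display:

```python
def scfm_all_features(num_objects, num_features):
    # All simple and conjunctive features: 2**F - 1 per object.
    return num_objects * (2 ** num_features - 1)

def scfm_pairwise_features(num_objects, num_features):
    # Simple features plus two-way conjunctions only: F(F + 1)/2 per object.
    return num_objects * num_features * (num_features + 1) // 2

def siam_connections(num_objects, num_features):
    # F * O^2 feature-to-feature plus O^2 object-to-object connections.
    return num_features * num_objects ** 2 + num_objects ** 2

# 2 objects x 4 features versus 4 objects x 2 features:
print(scfm_pairwise_features(2, 4), siam_connections(2, 4))  # 20 20
print(scfm_pairwise_features(4, 2), siam_connections(4, 2))  # 12 48
```

The SCFM counts fall as features are redistributed from few objects to many, while SIAM's connection count rises sharply, which is why the two models make opposite difficulty predictions for the displays discussed next.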

If we assume that the difficulty associated with making a similarity judgment is related to the number of features/nodes required, then the complexity analysis of the conjunctive features model and SIAM affords a test of the models. The top display of Figure 7 shows scenes with 2 objects, each object containing 4 shaded-square features. The bottom display shows scenes with 4 objects, each containing 2 features. Assuming a positive relation between information required and judgment difficulty, the conjunctive features model predicts the top scenes to be harder to compare than the bottom scenes. SIAM makes the opposite prediction. Results suggested that subjects found the bottom scenes to be more difficult to compare, both based on forced choice judgments ("Which comparison was more difficult for you to make?") and on the time required for subjects to make similarity assessments.

Insert Figure 7 about here

One plausible account for this finding is that objects and object roles constrain what scene parts are compared. For the 2-object/4-feature display, each square must be compared to only two squares, assuming that only squares in the same positions in their objects are compared. For the 4-object/2-feature display, each square must be compared to four squares. SIAM only constructs feature-to-feature nodes for features that are from the same dimension (here, each vertical square position is a dimension). Objects place constraints on the element correspondences that are considered. If the objects impose a large degree of structure on their elements, then the elements will be constrained in their candidate correspondences. As a result, fewer correspondences need be considered, and the comparison is easier.

Nonmonotonicities in Similarity

An assumption of monotonicity is one of the most basic assumptions of feature-based models such as the Contrast model (Tversky, 1977) and dimension-based models such as standard multidimensional scaling (Carroll & Wish, 1974). According to the assumption of monotonicity, adding an aspect shared by two items should never decrease their similarity. In the Contrast model, monotonicity is assumed because similarity is constrained to be an increasing function of the measure of the shared features between the compared objects. In multidimensional scaling formulations of similarity, decreasing the distance between objects along a dimension cannot increase their dissimilarity even if the r parameter in the distance formula is given a negative value.

Empirical violations of monotonic similarity have been found that seem to be due to an alignment process. Goldstone (1994a) reported situations in which scenes with objects in opposite positions received lower similarity ratings than scenes with objects in unrelated positions. Scenes contained two butterflies, as shown in Figure 8. Subjects were asked to rate the similarity of the entire scenes. Butterflies were placed in one of three positions: same, opposite, and unrelated. In the same condition (Figure 8A), butterflies that were featurally similar were placed in the same relative positions in their scenes. In the opposite condition (Figure 8C) the butterflies' positions were swapped going from one scene to the other. Butterflies in the unrelated condition (Figure 8B) were given new positions. Despite the fact that scenes in the opposite condition share a global feature (in this case, they share the upper-left/lower-right configuration) that unrelated scenes do not, in several experiments unrelated scenes were rated as more similar than opposite scenes.

Insert Figure 8 about here

The assumption that scenes in the opposite condition share a global feature, expressible as "backward slash arrangement of butterflies," that unrelated scenes do not was supported by other comparisons. The scenes in Figure 8D were rated as more similar than the scenes in Figure 8E. That is, when the two sets of butterflies did not have clear alignments on the basis of their featural similarities, then the feature "upper-left/lower-right diagonal" did increase similarity.

Alignment-based models of similarity such as SIAM can accommodate higher similarity for unrelated than opposite scenes. For opposite scenes, there will be pressure from higher-level nodes to place butterflies in correspondence that are not optimally aligned, because of the butterflies' spatial positions. This incorrect mapping will, to some extent (depending on parameters) inhibit the correct butterfly alignment. If the incorrect alignment receives more activation, then all of the mismatching features associated with the poorly aligned, dissimilar objects will receive substantial attention. Moreover, the correct alignment will receive less activation, thereby reducing the attention paid to properly aligned matches. By arranging butterflies in opposite positions, the true correspondences are made less conspicuous. Butterflies in unrelated positions, by contrast, do not generate higher-level nodes that strongly inhibit the correct object-to-object alignment.

SIAM's explanation, that inconsistent mappings compete for attention, is noteworthy because of its reliance on alignment. Models of similarity that posit independent sets of features, and matches that do not influence each other, do not have the notion of alignments that are consistent or inconsistent with each other. For example, in the Contrast Model, features may be said to be placed in alignment when they are placed in the shared-features component [the (A ∩ B) term]. However, this alignment of features is not consistent or inconsistent with other alignments. Feature matches are consistent or inconsistent only when scenes are described hierarchically and structurally. It seems difficult to explain the finding that unrelated scenes are often significantly more similar than opposite scenes unless the notion of inconsistent competing correspondences is invoked.

Parametric Manipulation of Nonmonotonicities. Although SIAM can predict nonmonotonicities, under most parameter values it predicts monotonicities. Goldstone (1996) conducted a series of experiments with the now familiar butterfly stimuli to test whether parametric manipulations within SIAM that yield nonmonotonicities have experimental equivalents that yield nonmonotonic similarity judgments in human subjects.

Essentially, SIAM predicts that a nonmonotonic relation between shared features in two scenes and scene similarity can arise if the shared features belong to poorly aligned objects. For example, in Figure 2, compare the scenes labeled "Starting scene" and "XY → YB." The feature Y (instantiated as a particular color, rather than a body shading, in the experiments) is shared between the two scenes, but it does not belong to optimally aligned butterflies. A nonmonotonic relation between shared features and similarity arises if these two scenes receive a lower similarity rating than do the starting scene and the scene labeled "XY → AB." Assuming proper stimulus construction and color randomization, such a result would indicate that replacing the A feature (not present in the starting scene) with the Y feature (present in the starting scene) decreases similarity.

In SIAM, replacing the A feature with the Y feature in the scene that is compared to the starting scene has two effects: one increases similarity and the other decreases it. The substitution will increase the physical match value, Mi, of the node that represents the correspondence between the colors of the butterflies with the Y feature. SIAM's similarity estimate is based on the featural similarity, denoted by Mi values, between the scenes. However, adding the Y feature match will also alter Ai (node activation) values. In particular, the shared Y feature will tend to make SIAM increase the activation of nodes that align dissimilar butterflies. In turn, as these nodes become activated, they will decrease the activation of inconsistent nodes. In the present case, the nodes that are inconsistent with placing the two Y features in correspondence are the nodes that place optimally aligned features and objects in correspondence. The influence that a Mi value has on scene similarity is weighted by its node's activation, and consequently, adding a common feature can actually decrease similarity if the addition decreases the activations of other nodes that place highly similar features in alignment.
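This weighting principle can be illustrated with a toy calculation (an illustrative sketch, not SIAM's actual equations; the weighted-average form, the function name, and all of the numbers are assumptions made for exposition). If similarity is taken as the activation-weighted average of match values, then a new feature match that shifts activation toward poorly aligned nodes can lower the overall score:

```python
def weighted_similarity(node_values):
    """Activation-weighted average of match values. Each (Mi, Ai) pair
    is one correspondence node: Mi is its physical match value and Ai
    the attention (activation) it receives."""
    total_weight = sum(a for _, a in node_values)
    return sum(m * a for m, a in node_values) / total_weight

# Hypothetical numbers for an XY -> AB comparison: two well-aligned
# matches dominate, and the poorly aligned mismatches get little attention.
xy_ab = [(1.0, 0.8), (1.0, 0.8), (0.0, 0.1), (0.0, 0.1)]

# XY -> YB: one poorly aligned mismatch becomes a match (the Y feature),
# but its node's activation rises and pulls attention toward that object
# pair's remaining mismatch while draining the well-aligned nodes.
xy_yb = [(1.0, 0.6), (1.0, 0.6), (1.0, 0.5), (0.0, 0.5)]
```

Even though xy_yb contains one more feature match than xy_ab, its attention-weighted similarity is lower (about 0.77 versus 0.89), which is the nonmonotonic pattern in miniature.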

SIAM usually predicts monotonicity because the similarity gain caused by increasing Mi values is larger than the similarity loss resulting from increasing the influence of mismatching features and decreasing the influence of matching features. However, certain Mi values can produce nonmonotonicities. Mi values denote the physical similarity of features. Goldstone (1996) explicitly manipulated the physical similarity between nonidentical features in order to explore the influence of Mi on nonmonotonicity. The six display types shown in Figure 2 were presented to subjects, and the changed dimension always involved body color. Subjects simply rated the similarity on a scale from 1-9. The Mi parameter was manipulated by changing the similarities of mismatching body colors. When Mi values were high, even mismatching bodies would have similar hues.

The results from the experiment are shown in Figure 9. These results indicate two small but significant nonmonotonicities. First, the display that involved XY → XX never received higher average similarity ratings than scenes that involved XY → XB, and in one case actually received a significantly lower similarity rating. In other words, adding a color to Scene Q that matches a color from Scene R can decrease similarity if there is already another color in Scene Q that matches the same color from Scene R. The second, and stronger, evidence for a nonmonotonicity comes from situations where displays with scenes of the type "XY → AB" are judged to be more similar than displays with "XY → YB." That is, the "Y" feature match decreases similarity when 1) mismatching features have an intermediate level of similarity to each other, and 2) the "Y" feature match belongs to poorly aligned butterflies.

Insert Figure 9 about here

The 18 displays shown in Figure 9 were also presented to SIAM. Only one parameter, Mi (the color match value), was allowed to vary; the number of cycles was arbitrarily set to 15, and the match value associated with dimensions other than body color was set at 0.45. The resulting predictions are shown in Figure 10. SIAM qualitatively captures a number of the empirical results. Most importantly, SIAM predicts both of the observed types of nonmonotonicity. For some parameter values, Scenes XY and XX are predicted to be less similar than Scenes XY and XB. In addition, Scenes XY and YB are predicted to be less similar than Scenes XY and AB. Furthermore, the parameter values that yield these two nonmonotonicities are similar, and correspond to intermediate color similarity values.

Insert Figure 10 about here

Although Figure 10 shows that SIAM can provide an account for some of the empirical nonmonotonicities, the figure does not itself provide an explanation for SIAM's behavior. Such an explanation is provided in Figure 11, which shows the match values and final node activations for the XY → YB and XY → AB displays. There is a separate cell for each object-to-object and feature-to-feature correspondence (see the figure caption for details). The only difference between the two displays is that the lowest, leftmost slot contains a feature match between poorly aligned objects for the XY → YB display, and a feature mismatch between the same objects for the XY → AB display.

Insert Figure 11 about here

When the similarity of mismatching color features is low (color mismatch value = 0.0), then optimal alignments are strongly weighted, and the activations associated with poorly aligned object features are weak. As such, a single color mismatch does not cause a large increase to the strength of poorly aligned mismatching features. When the similarity of mismatching color features is intermediate (color mismatch value = 0.40), then the weight given to mismatching color features is considerably increased because alignments based on body color are less clear. For example, the activation given to one mismatching body color feature has a four-fold increase (from 0.10 to 0.41) when color similarity increases from 0.0 to 0.40. Consequently, this mismatching feature will decrease similarity far more in the intermediate color similarity condition. Finally, when color similarity is high (color mismatch value = 0.70), activations of mismatching features increase only modestly, and the increase in activation given to mismatching features decreases similarity less drastically because the mismatch values are far closer to 1.0.

In sum, nonmonotonicities in SIAM are generated when poorly aligned matches pose significant competition to the preferred matches. Further experiments and simulations revealed that intermediate levels of display timing also produced nonmonotonicities for both human subjects (manipulating display duration) and SIAM (manipulating number of cycles). It is more than a coincidence that for both variables, an intermediate level produces the greatest degree of nonmonotonicity. When few cycles are processed or Mi values are high, then the preferred matches have only slightly more influence than the less preferred matches. When many cycles are processed, or Mi values are very low, then the poorly aligned matches do not strongly compete against the strong optimal alignments. Only at intermediate levels of these variables are both criteria (strong competition and strong preference for aligned matches) met.

General Remarks on SIAM and Connectionist Models

A three-stage modelling strategy has been adopted for testing SIAM. The first stage involved quantitative fits of SIAM to fairly large databases. Multiple versions of SIAM were developed in this first stage, and the criterion for evaluating models was overall quality of fit to human experimental results. In the second stage, predictions made by SIAM were explicitly tested in human experiments. For example, simulations revealed that SIAM predicts that shared nondiagnostic features increase mapping accuracy. This prediction was empirically tested, revealing the same trend for human subjects. Also in this stage, predictions from SIAM were generated for tasks other than similarity ratings, including same/different judgments, categorizations, perceptual match sensitivity, mapping judgments, and other indirect measures of similarity. "Critical experiments" were developed in the third stage, to pit SIAM's predictions against other models of similarity. Examples of these critical experiments include explorations of judgment difficulty and nonmonotonicities. The ordering of these stages might seem reversed: Shouldn't detailed quantitative fits be pursued only after qualitative results have been properly simulated? In practice, taking SIAM's qualitative behaviors seriously enough to treat them as predictions first required verifying that SIAM was "in the right neck of the woods" as far as modelling human behavior, and this required quantitative fits over numerous comparisons.

Shortcomings and Extensions of SIAM

SIAM's strengths over alternative models have been stressed, but the model also suffers from some architectural and conceptual limitations (for the time being, its empirical failings will be ignored).

SIAM Ignores the Binding Problem. SIAM is given as input scene descriptions that have features bound to objects, and objects bound to relational roles. These descriptions presume that some other process binds features to objects and objects to relations. It should be noted, though, that this binding operation probably occurs on a much faster time scale than SIAM's alignment process. It takes subjects about two seconds to say that two scenes have the same butterflies, but work by Treisman suggests that the process of binding a feature to an object can take place in less than 300 milliseconds (Treisman & Schmidt, 1982). Still, a more complete version of SIAM would create the structured scene descriptions that it uses. Some recent models of analogy, most notably Hummel and Holyoak's (in press) LISA model, have begun to make progress in this direction.

All Possible Object and Feature Connections Are Constructed. Before activation adjustment begins, SIAM creates an object-to-object node for every possible pair of objects, and a feature-to-feature node for every pair of features that fall on the same dimension, requiring O^2(F + 1) nodes, where O is the number of objects per scene and F is the number of features per object. It is implausible that people create all of these connections, if each connection stands for non-zero attention to a particular correspondence.
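The node count can be checked with a few lines of arithmetic (the function name is mine; the formula follows the text, with O objects per scene and F features per object):

```python
def siam_node_count(n_objects, n_features):
    """Nodes SIAM constructs before activation adjustment: one node per
    cross-scene object pair (O**2), plus one node per same-dimension
    feature pair (O**2 * F), giving O**2 * (F + 1) in total."""
    object_nodes = n_objects ** 2
    feature_nodes = n_objects ** 2 * n_features
    return object_nodes + feature_nodes

# Two butterflies per scene with four features each: 2**2 * (4 + 1) = 20 nodes.
print(siam_node_count(2, 4))
```

The count grows quadratically with the number of objects and linearly with the number of features per object, which is also the scaling behind the judgment-difficulty predictions mentioned later in the chapter.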

An alternative version of SIAM was developed to address this criticism. In this version, SIAM creates connections across time, based on previously established connections. This model would not typically develop any links between completely dissimilar objects. Links are developed only if the system "notices" that the two objects are related, and this occurs only if they share features in common. On each cycle, new feature matches and mismatches are noticed, and this builds stronger links between the relevant objects, making further (mis)matches between the objects more likely to be noticed. Despite this model's greater plausibility, it was not described more fully here because it involves many more parameters and its processing is considerably more complex. An idealized caricature of a model is often a more effective explanatory aid than a "truer" model would be.

SIAM is Not Stochastic (SIAM-I Versus SIAM-II). It is interesting to compare two models that were inspired by McClelland and Rumelhart's (1981) interactive activation model of word perception: SIAM-I (Stochastic Interactive Activation Model; Jacobs & Grainger, 1992, 1994a) and SIAM-II (Similarity, Interactive Activation, and Mapping). SIAM-I extended McClelland and Rumelhart's model by incorporating a normally distributed response criterion to generate predictions about RT distributions as well as mean RTs in lexical decision tasks. McClelland (1991) independently added a stochastic information transmission component to an interactive activation model of speech recognition.

SIAM-II, on the other hand, is currently a deterministic model. SIAM-II predicts mapping probabilities by applying a variant of Luce's choice rule to deterministic activations, rather than by actually creating different mappings on different trials. This shorthand technique for modelling mapping judgments is not in keeping with SIAM's "process model" philosophy, and has associated problems. For example, the relation between correct mappings and similarity ratings is not accommodated. An important finding from Goldstone (1994a) was that similarity ratings were higher when subjects produced the optimal object mappings. In a sense, this result is very much in keeping with the spirit of SIAM - the way in which the system aligns objects has a large influence on the resulting similarity of the scenes. However, SIAM cannot model this result because it is a deterministic model; if the same scene description is presented, the same pattern of node activations will develop. Because mapping accuracies are only descriptively modelled by Luce's choice rule, SIAM does not predict what the mapping will be on any particular trial.

For this reason, it may be desirable to convert SIAM-II from a deterministic to a stochastic model. To obtain variability in SIAM, normally distributed noise could be added to feature-to-feature and object-to-object node activations. On every trial, SIAM would output the single set of object alignments that surpassed a threshold level of cumulative activation, instead of outputting object alignment probabilities.
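The contrast between the current shorthand and the proposed stochastic version can be sketched as follows (the function names, the noise and threshold parameters, and the simple ratio form of the choice rule are all illustrative assumptions, not SIAM's exact specification):

```python
import random

def luce_mapping_probabilities(activations):
    """Current shorthand: convert deterministic object-to-object
    activations into mapping probabilities with a Luce-style ratio rule."""
    total = sum(activations.values())
    return {pair: act / total for pair, act in activations.items()}

def sample_alignment(activations, sd=0.1, threshold=0.5, rng=None):
    """Proposed stochastic version: perturb each node's activation with
    normally distributed noise, then output only the alignments whose
    noisy activation surpasses the threshold on this trial."""
    rng = rng or random.Random()
    noisy = {pair: act + rng.gauss(0.0, sd) for pair, act in activations.items()}
    return sorted(pair for pair, act in noisy.items() if act > threshold)

acts = {("b1", "b1'"): 0.7, ("b1", "b2'"): 0.3}
print(luce_mapping_probabilities(acts))
print(sample_alignment(acts))
```

Unlike the ratio rule, the stochastic version produces an actual mapping on each trial, so the trial-to-trial covariation between the chosen mapping and the resulting similarity rating could be modelled directly.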

Localist Connectionist Modelling

The strengths and weaknesses of localist connectionist models have been widely discussed elsewhere, and often boil down to increased interpretability at the expense of the model's ability to automatically generalize across patterns. For engineering uses of connectionist models, generalization may be the more important property. As long as the system functions properly, it may not be important for the engineer to know exactly how the functionality is achieved. For cognitive models, this is patently not the case, and interpretability looms larger.

SIAM provides examples of three types of interpretation and prediction with localist models. First and most basically, particular units have intrinsic definitions. A specific feature-to-feature node has an activation level that develops over time, and this activation, by definition, determines the influence of a particular feature match on similarity. This hypothesized influence can be compared to the actual influence of the feature match on similarity assessments. Second, derived interpretations of units can be posited given their role within a system of equations. For example, one may interpret the activation of a feature-to-feature unit to be proportional to the attention placed on the feature match. The link between attention and activation level, though not directly implied by the equations, is plausible, and makes further empirical predictions. By this interpretation, one would expect, correctly in this case, that people have greater perceptual sensitivity at detecting matches and mismatches between aligned than unaligned objects (Goldstone, 1994a). Third, interpretations can be given to a model's entire architecture. For example, assuming that comparison difficulty is proportional to the number of units required, SIAM predicts judgment difficulty to increase as a quadratic function of the number of objects per scene, and as a linear function of the number of features per object. In short, localist models provide many opportunities, direct and indirect, for deriving empirical predictions.

Conclusions

The current research agenda has been to develop a computational model of similarity that explicitly accommodates structured scenes. Generalizing over many specific experiments and simulations, a few general conclusions can be drawn. First, the act of comparing things naturally involves aligning the things' parts. Even when subjects are not instructed to do so, even when indirect measures of similarity are used, subjects set up correspondences between the parts of the things they are comparing. On hearing "Marilyn Monroe looks like Madonna," a person responds "Nah, Marilyn's eyes don't look anything like Madonna's ears." This is a highly anomalous remark because it departs from the natural tendency for people to place parts of compared scenes in correspondence. Mismatches between noncorresponding parts do not diminish similarity very much, or at all, if the correspondences are strong.

Second, similarity assessments are well captured by an interactive activation process between feature and object correspondences. At the core of SIAM is an interactive activation process between feature, object, and role correspondences. As was true of the original interactive activation process proposed by McClelland and Rumelhart (1981), nodes representing consistent hypotheses excite one another, and nodes representing inconsistent hypotheses inhibit each other. In SIAM, each node represents an hypothesis that two entities from two scenes correspond to each other. Feature, object, and role correspondences simultaneously constrain one another. In interactive activation models, processing of lower-level information is influenced by higher levels. It is this "top-down" scheme that allows McClelland and Rumelhart to predict effects of word context on letter identification. In SIAM, the top-down scheme allows object alignment information to guide the attention paid to feature matches and mismatches.
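The excitation/inhibition dynamics described above can be sketched in a few lines (an illustrative update rule with hypothetical weights and rate parameter, not SIAM's published equations):

```python
def run_cycles(acts, weights, n_cycles=15, rate=0.1):
    """Minimal interactive-activation sketch. weights maps
    (target, source) node pairs to signed connection strengths:
    positive between consistent correspondence hypotheses, negative
    between inconsistent ones. On each cycle, every node's activation
    is nudged toward its net input and clipped to [0, 1]."""
    acts = dict(acts)
    for _ in range(n_cycles):
        net = {node: sum(w * acts[src]
                         for (tgt, src), w in weights.items() if tgt == node)
               for node in acts}
        acts = {node: min(1.0, max(0.0, acts[node] + rate * net[node]))
                for node in acts}
    return acts

# Two inconsistent object alignments inhibit each other; a strong
# feature match supports the first one (hypothetical weights).
weights = {("o1", "o2"): -0.5, ("o2", "o1"): -0.5,
           ("o1", "f1"): 0.5, ("f1", "o1"): 0.5}
final = run_cycles({"o1": 0.5, "o2": 0.5, "f1": 0.8}, weights)
```

The well-supported alignment hypothesis (o1) gains activation at the expense of its inconsistent rival (o2), which is the basic constraint-satisfaction behavior of the model.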

Third, scene alignment is based on global consistency of correspondences. Positing that scene alignment depends on a single aspect or dimension cannot explain the empirical results. No feature, by itself, determines alignment -- not even spatial location or butterfly heads. Furthermore, locally-determined similarities do not determine alignments. An object is not always placed in alignment with the object that it is most similar to; when this alignment conflicts with the alignment that maximizes the number of feature matches between consistently drawn correspondences, then results show that the globally consistent alignment is preferred. Because the empirically supported mechanism of alignment employs global consistency, it is important to explore what global consistency requires. Most importantly, global consistency is applicable only with hierarchically or propositionally structured scenes. If scenes are described as "flat" lists of features, then no two feature alignments are inconsistent. Issues of consistency arise because objects contain features and if features match, then so should the objects that contain the features. Similar dependencies exist between features from the same dimension, between roles and objects, and between objects.

Fourth, how much a feature match counts depends on the particular things being compared. The salience of a feature match cannot be determined until the actual comparison takes place. In SIAM, the actual process of comparing objects influences feature saliences (node activations), although features also have a perceptual contribution to their salience (match values). On the one hand, similarity researchers have typically ignored the determinants of feature salience (but see Tversky, 1977, for some principles), explicitly deferring to domain experts to supply the saliences that will be plugged into similarity measures. On the other hand, psychophysicists tell us about the salience of specific features, for example, relating the illuminance of an object to its phenomenal experience. The strategy adopted here has been to address feature salience while generalizing over specific features. Domain-general principles for determining when a feature match will count as a feature match can be developed, and part of this effort will involve modelling the process of placing scenes in correspondence.

References

Attneave, F. (1950). Dimensions of similarity. American Journal of Psychology, 63, 516-556.

Beck, J. (1966). Effect of orientation and of shape similarity on perceptual grouping. Perception and Psychophysics, 1, 300-302.

Busemeyer, J. R., & Rapoport, A. (1988). Psychological models of deferred decision making. Journal of Mathematical Psychology, 32, 91-134.

Carroll, J. D., & Wish, M. (1974). Models and methods for three-way multidimensional scaling. In D. H. Krantz, R. C. Atkinson, R. D. Luce, & P. Suppes (Eds.) Contemporary developments in mathematical psychology (Vol. 2, pp. 57-105). San Francisco: Freeman.

Chalmers, D. J., French, R. M., & Hofstadter, D. R. (1992). High-level perception, representation, and analogy: A critique of artificial intelligence methodology. Journal for Experimental and Theoretical Artificial Intelligence, 4, 185-211.

Clement, C., & Gentner, D. (1991). Systematicity as a selection constraint in analogical mapping. Cognitive Science, 15, 89-132.

Corter, J. E. (1987). Similarity, confusability, and the density hypothesis. Journal of Experimental Psychology: General, 116, 238-249.

Falkenhainer, B., Forbus, K.D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and examples. Artificial Intelligence, 41, 1-63.

French, R. M. (1995). The subtlety of sameness. Cambridge, MA: MIT Press.

Gati, I., & Tversky, A. (1982). Representations of qualitative and quantitative dimensions. Journal of Experimental Psychology: Human Perception and Performance, 8, 325-340.

Gati, I., & Tversky, A. (1984). Weighting common and distinctive features in perceptual and conceptual judgments. Cognitive Psychology, 16, 341-370.

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7, 155-170.

Gentner, D., & Markman, A. B. (1994). Structural alignment in comparison: No difference without similarity. Psychological Science, 5, 152-158.

Gentner, D., & Markman, A. B. (1995). Similarity is like analogy. In C. Cacciari (Ed.), Similarity in Language, Thought, and Perception. (pp. 111-148). Brussels: BREPOL.

Gentner, D., Ratterman, M. J., & Forbus, K. D. (1993). The roles of similarity in transfer: Separating retrievability from inferential soundness. Cognitive Psychology, 25, 524-575.

Gentner, D., & Toupin, C. (1986). Systematicity and surface similarity in the development of analogy. Cognitive Science, 10(3), 277-300.

Gick, M. L., & Holyoak, K. J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15, 1-39.

Goldstone, R. L. (1991). Similarity, Interactive Activation, and Mapping. Unpublished doctoral dissertation. University of Michigan.

Goldstone, R. L. (1994a). Similarity, Interactive Activation, and Mapping. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 3-28.

Goldstone, R. L. (1994b). The role of similarity in categorization: Providing a groundwork. Cognition, 52, 125-157.

Goldstone, R. L. (1996). Alignment-based nonmonotonicities in similarity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 988-1001.

Goldstone, R.L., Gentner, D., & Medin, D.L. (1989). Relations Relating Relations. Proceedings of the Eleventh Annual Conference of the Cognitive Science Society. Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Goldstone, R. L., & Medin, D. L. (1994a). The time course of comparison. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 29-50.

Goldstone, R.L., & Medin, D.L. (1994b). Interactive activation, similarity, and mapping. In K. Holyoak & J. Barnden (Eds.), Advances in connectionist and neural computation theory, Vol. 2: Analogical connections (pp. 321-362). New Jersey: Ablex.

Goldstone, R.L., Medin, D.L., & Gentner, D. (1991). Relations, attributes, and the non-independence of features in similarity judgments. Cognitive Psychology, 23, 222-264.

Gluck, M. A. (1991). Stimulus generalization and representation in adaptive network models of category learning. Psychological Science, 2, 50-55.

Gluck, M. A., & Bower, G. H. (1988). From conditioning to category learning: An adaptive network model. Journal of Experimental Psychology: General, 117, 227-247.

Hayes-Roth, B., & Hayes-Roth, F. (1977). Concept learning and the recognition and classification of exemplars. Journal of Verbal Learning and Verbal Behavior, 16, 321-338.

Hofstadter, D. R. (1995). Fluid concepts and creative analogies. New York: Basic Books.

Holyoak, K. J., & Thagard, P. (1989). Analogical mapping by constraint satisfaction. Cognitive Science, 13, 295-355.

Hummel, J. E., & Holyoak, K. J. (in press). Distributed representations of structure: A theory of analogical access and mapping. Psychological Review.

Imai, S. (1977). Pattern similarity and cognitive transformations. Acta Psychologica, 41, 433-447.

Imai, S. (1992). Fundamentals of cognitive judgments of pattern. In H. Geissler, S. W. Link, & J. T. Townsend (Eds.), Cognition, information processing, and psychophysics: Basic issues (pp. 225-266). Hillsdale, NJ: LEA.

Jacobs, A. M., & Grainger, J. (1992). Testing a semistochastic variant of the interactive activation model in different word recognition experiments. Journal of Experimental Psychology: Human Perception and Performance, 18, 1174-1188.

Jacobs, A. M., & Grainger, J. (1994a). A dual read-out model of word context effects in letter perception: Further investigations of the word superiority effect. Journal of Experimental Psychology: Human Perception and Performance, 20, 1158-1176.

Jacobs, A. M., & Grainger, J. (1994b). Models of visual word recognition - Sampling the state of the art. Journal of Experimental Psychology: Human Perception and Performance, 20, 1311-1334.

James, W. (1890/1950). The principles of psychology: Volume I. New York: Dover.

Markman, A. B., & Gentner, D. (1993a). Structural alignment during similarity comparisons. Cognitive Psychology, 25, 431-467.

Markman, A. B., & Gentner, D. (1993b). Splitting the differences: A structural alignment view of similarity. Journal of Memory & Language, 32, 517-535.

McClelland, J. L. (1991). Stochastic interactive processes and the effect of context on perception. Cognitive Psychology, 23, 1-44.

McClelland, J. L., & Rumelhart, D.E. (1981). An interactive activation model of context effects in letter perception: Part 1. An account of basic findings. Psychological Review, 88, 375-407.

McClelland, J.L., & Elman, J.L. (1986). The TRACE model of speech perception. Cognitive Psychology, 18, 1-86.

Medin, D.L., Goldstone, R.L., & Gentner, D. (1993). Respects for similarity. Psychological Review, 100, 254-278.

Medin, D. L., & Schaffer, M. M. (1978). A context theory of classification learning. Psychological Review, 85, 207-238.

Palmer, S. E. (1978). Structural aspects of visual similarity. Memory & Cognition, 6, 91-97.

Shepard, R. N. (1962a) The analysis of proximities: Multidimensional scaling with an unknown distance function. Part I. Psychometrika, 27, 125-140.

Shepard, R. N. (1962b) The analysis of proximities: Multidimensional scaling with an unknown distance function. Part II. Psychometrika, 27, 219-246.

Treisman, A., & Schmidt, H. (1982). Illusory conjunctions in the perception of objects. Cognitive Psychology, 14, 107-141.

Tversky, A. (1977). Features of similarity. Psychological Review, 84, 327-352.

Tversky, A., & Gati, I. (1982). Similarity, separability, and the triangle inequality. Psychological Review, 89, 123-154.

Wiener-Ehrlich, W. K., & Bart, W. M. (1980). An analysis of generative representation systems. Journal of Mathematical Psychology, 21(3), 219-246.

Figure Captions

Figure 1. Materials used in a pilot experiment by Goldstone (1991). The numbers show the percentage of subjects choosing A over B as being more similar to T.

Figure 2. In several experiments, trials consisted of the starting display and one of the six changed displays. Matches In Place (MIPs) and Matches Out of Place (MOPs) are only considered for body shadings.

Figure 3. An example of processing in SIAM. The example is based on a comparison of the starting display (in Figure 2) and the display labelled "XY → YX." The 2 MOPs occur on the body dimension. The two values within a slot show node activations after 10 and 20 cycles have transpired. The values in parentheses reflect match values (1.0 = matching feature, 0.0 = mismatching feature).

Figure 4. Results from Goldstone and Medin (1994a). The number of MOPs (matches out of place) had a greater influence on the rate of incorrect "different" responses for short deadlines than for medium or long deadlines. In contrast, the number of MIPs (matches in place) had a greater influence on incorrect "different" responses for long and medium deadlines than for short deadlines.

Figure 5. In the top display, the two scenes do not share their nondiagnostic features. In the bottom display, nondiagnostic features are shared, yielding more accurate mapping judgments. The wings, bodies, and tails of the butterflies are nondiagnostic because they do not provide information about which butterflies correspond to each other.

Figure 6. Simulation from SIAM for the two displays in Figure 5. Each slot refers to a node that places objects or features into alignment. The values in each slot denote the node activations when nondiagnostic features are shared (on left) and when nondiagnostic features are not shared (on right). Match values are shown in parentheses.

Figure 7. Results from Goldstone (1991) indicate that subjects judge the top scene comparison to be easier than the bottom comparison.

Figure 8. Five types of spatial positions for butterflies in experiments by Goldstone (1994a). A nonmonotonicity is indicated by the higher similarity ratings for unrelated than opposite displays when butterflies have clear corresponding partners between the left scene and the right scene.

Figure 9. Results from Goldstone (1996). Two nonmonotonicities are apparent at the intermediate level of color similarity. The display with XY → YB is rated less similar than the display with XY → AB, and the display with XY → XX is rated less similar than the display with XY → XB.

Figure 10. Simulation results for SIAM, varying match values (Mi) for different body colors. When intermediate values for Mi are given, SIAM predicts the two nonmonotonicities shown in Figure 9.

Figure 11. Simulation results from SIAM for two displays (XY → YB and XY → AB) at three values of color mismatch. Each rectangle refers to a node that places objects or features from two scenes into alignment. Within a rectangle, each row contains the following information in order: match value for the XY → YB display, activation level for the XY → YB display, match value for the XY → AB display, and activation level for the XY → AB display. The three rows within a rectangle show the results when Mi=0.0, Mi=0.4, and Mi=0.7. The summary similarity ratings at the bottom of the figure indicate that XY → AB is rated more similar than XY → YB, but only when Mi=0.4.

Author Notes

The research reported in this chapter has benefitted greatly from comments and suggestions by Dedre Gentner, Arthur Markman, Douglas Medin, Robert Nosofsky, Richard Shiffrin, Linda Smith, and Jesse Spencer-Smith. This research was funded by National Science Foundation Grant SBR-9409232. Correspondence concerning this article should be addressed to Robert Goldstone, Psychology Department, Indiana University, Bloomington, Indiana 47405. The author can be reached by electronic mail at rgoldsto@indiana.edu, and further information about the laboratory can be found at http://cognitrn.psych.indiana.edu/.