
What did you see? A face or a word?

It turns out that the stimulus contained neither: it was just visual noise. But if you had to guess, what would you say? And what happened in your brain to make you say face or word?

We studied this question using EEG brainwave recording technology. In the field of face recognition there is a signature response associated with faces. This component of the EEG data is called the N170, because it is negative-going and occurs at about 170 ms (thousandths of a second) after the face appears. Various studies have localized the source of this component to the visual perceptual regions at the back of the brain and on the sides behind the ears. This is where complex object recognition, such as face identification, is thought to occur.
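For readers curious how an N170 measurement looks in practice, here is a minimal sketch in Python. Everything in it is a placeholder assumption (sampling rate, epoch window, and a random `epochs` array standing in for real recordings from a posterior electrode); it simply shows the idea of averaging trials and taking the most negative point near 170 ms.

```python
import numpy as np

# Illustrative placeholders: `epochs` stands in for real EEG recordings
# from a posterior electrode, arranged as trials x samples, in microvolts,
# time-locked to stimulus onset.
srate = 500                                  # assumed sampling rate (Hz)
times = np.arange(-0.2, 0.6, 1 / srate)      # epoch: -200 ms to +600 ms
epochs = np.random.randn(200, times.size)    # placeholder data

# Average across trials to obtain the event-related potential (ERP).
erp = epochs.mean(axis=0)

# The N170 is the most negative point in a window around 170 ms.
window = (times >= 0.13) & (times <= 0.21)
n170_amplitude = erp[window].min()
n170_latency_ms = times[window][np.argmin(erp[window])] * 1000
print(f"N170: {n170_amplitude:.2f} uV at {n170_latency_ms:.0f} ms")
```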

We know this area is active when you see a face. But what happens when you just think you see a face? Would that feeling be associated with greater activity in this face-processing region?

An Experiment

To test this idea, we presented faces and words in visual noise. Here are two examples:

We sometimes decreased the brightness of the faces and words to make them hard to see:

Unbeknownst to the subject, on some trials we presented just noise:

There are two important details.

First, we asked subjects to respond on each trial, even if they thought they were just guessing. In fact, because the noise-alone condition contained neither a face nor a word, they had to guess on those trials.

The second important detail is that the noise was exactly the same on each trial. This becomes important later on.

What we found

The basic question we're asking is whether the brain produces more activity in the face processing regions when subjects think they see a face. To answer this question, we looked only at the noise-alone trials. We separated the EEG data into two groups that were associated with face and word responses. We are particularly interested in the size of the N170 component, because in prior research this component responds strongly to faces and weakly to other stimuli like cars or butterflies.
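As a rough illustration of this kind of analysis (not the exact pipeline used in the paper), the sketch below splits hypothetical noise-alone trials by the response given and compares the N170 window between the two groups. All data, variable names, and the choice of statistic are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Illustrative placeholders: `noise_epochs` holds the noise-alone trials
# (trials x samples, in microvolts) and `responses` holds the guess made
# on each trial ('face' or 'word').
srate = 500
times = np.arange(-0.2, 0.6, 1 / srate)
noise_epochs = np.random.randn(120, times.size)
responses = np.random.choice(["face", "word"], size=120)

window = (times >= 0.13) & (times <= 0.21)

def n170(trials):
    """Most negative point of the trial-averaged ERP in the N170 window."""
    return trials.mean(axis=0)[window].min()

face_n170 = n170(noise_epochs[responses == "face"])
word_n170 = n170(noise_epochs[responses == "word"])
print(f"'face' responses: {face_n170:.2f} uV, 'word' responses: {word_n170:.2f} uV")

# A simple trial-level comparison of the per-trial window minima;
# the published analysis may use a different statistic.
face_mins = noise_epochs[responses == "face"][:, window].min(axis=1)
word_mins = noise_epochs[responses == "word"][:, window].min(axis=1)
result = stats.ttest_ind(face_mins, word_mins)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```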

To answer the question, we looked at the size of the N170 for trials associated with 'face' and 'word' responses when just the noise was presented. Surprisingly, we found a difference, as shown in the figure below:

The x-axis of this graph represents time since the noise-alone stimulus was presented, which is marked at time zero with a vertical dashed line. The y-axis is amplitude in microvolts, which is what is measured during EEG brainwave recording. The N170 component is the negative-going dip near the asterisk (*). The two curves correspond to the two types of responses (face and word), and the data show a significantly larger N170 when observers thought they saw a face in the noise-alone stimulus.
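To make the layout of such a figure concrete, here is a small plotting sketch with synthetic waveforms; the curves are invented for illustration and are not the published data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic ERPs standing in for the 'face' and 'word' response averages
# on noise-alone trials; shapes and amplitudes are made up for illustration.
times = np.arange(-0.2, 0.6, 0.002)                     # seconds
dip = np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))  # dip centered at 170 ms
erp_face = -3.0 * dip                                   # deeper N170
erp_word = -1.5 * dip                                   # shallower N170

plt.plot(times * 1000, erp_face, label="'face' response")
plt.plot(times * 1000, erp_word, label="'word' response")
plt.axvline(0, linestyle="--", color="k")               # stimulus onset at time zero
plt.xlabel("Time since noise-alone stimulus (ms)")
plt.ylabel("Amplitude (microvolts)")
plt.legend()
plt.show()
```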

These results are published in an article in Psychonomic Bulletin & Review, and a preprint may be found here. The article contains more information about the study, including several possible interpretations of the data that we consider.

Summary

We interpret the above results to suggest that there is ongoing activity in the perceptual regions of the brain. When this activity is high in the face-processing regions, it can make you think you see a face. While this is our leading hypothesis for these results, we are conducting further studies to determine the link between brain activity and behavior in face recognition and related tasks.