Who would've guessed...? After years of not getting round to it, then (literally) months of preparation, I've been able over the past few weeks to run a new Language and Cognition experiment. Here are some preliminary results...
The study I've set up investigates possible correlations in "perceptual narrowing", comparing participants' ability to discriminate native vs. non-native phonemic contrasts with their ability to discriminate native (Japanese) vs. non-native (Caucasian) faces (for anyone interested, there's some background information on my website). The experiment consists of two forced-choice discrimination tasks—XAB/ABX, for faces and sounds, respectively—plus a lexical familiarity pretest (to check whether participants are familiar with all of the English words carrying the phonemic contrasts).
Participants
To date, 43 Japanese undergraduate students have taken part in the first run of the experiment. The results below are based on 40 participants (including 4 left-handers: see below). Two participants, whose accuracy scores fell more than two standard deviations below the overall mean, were dropped from the analysis. One other participant (subject #3), who reported spending her early childhood in the US, was also excluded from the present analysis.
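For anyone curious, the exclusion criterion is simple enough to sketch in a few lines of Python. This is a toy illustration only—the participant IDs and accuracy scores below are made up, not the actual data:

```python
import statistics

# Hypothetical per-participant mean accuracy scores (proportion correct).
accuracy = {
    "s01": 0.82, "s02": 0.79, "s03": 0.85, "s04": 0.31,  # s04 is an outlier
    "s05": 0.77, "s06": 0.80, "s07": 0.83, "s08": 0.78,
}

mean = statistics.mean(accuracy.values())
sd = statistics.stdev(accuracy.values())

# Drop anyone whose score falls more than 2 SDs below the overall mean.
kept = {s: a for s, a in accuracy.items() if a >= mean - 2 * sd}
dropped = sorted(set(accuracy) - set(kept))
```

With these made-up numbers, only the low-scoring participant is excluded; everyone else survives the cut.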
Results—Faces
A priori, one might have expected that (for Japanese participants) Japanese faces would be easier to discriminate than Caucasian faces. Right? Except that's not what the data suggest (at first blush). See Figures 1 and 2. As these figures show, the Japanese faces elicit somewhat less accurate and slower responses, though it remains to be seen whether the differences are statistically reliable. (Of course, this might not be a fatal problem: it could just be that the 30 pairs of Western faces are easier for contingent reasons. It depends on how the Western participants judge the two sets, and I won't find this out until April. In any case, I'm mainly interested in probing cross-modal correlations within subjects, so even if the overall results don't turn out quite as expected, there may still be interesting within-subject correlations.)
Figure 1. Overall, Asian faces are hardest to discriminate (though the effect may not be robust)...
Figure 2. ...and Japanese participants take longer to judge Japanese faces than Caucasian ones (overall).
At this point in the analysis, the interesting quirks come when one looks more closely. Besides NATIVENESS, I manipulated several other independent and control variables: ISI—the time between the offset of the target face and presentation of the two possible matches (500 vs. 1000 msecs); PRESENTATION TIME of the target face (100 vs. 200 msecs); and ORIENTATION—whether the faces are presented upright or inverted (upside-down). I also recorded the between-subjects factor of HANDEDNESS, just because I have a hunch about this...
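For concreteness, the fully crossed within-subjects design just described can be sketched as a factor list. (A toy illustration under my own labelling—the actual trial lists, counterbalancing, and item sets are of course more involved.)

```python
from itertools import product

# Within-subjects factors as described in the text: nativeness of the face
# stimuli, ISI between target offset and the match pair, presentation time
# of the target face, and orientation (upright vs. inverted).
factors = {
    "nativeness": ["Japanese", "Caucasian"],
    "isi_ms": [500, 1000],
    "presentation_ms": [100, 200],
    "orientation": ["upright", "inverted"],
}

# One design cell per combination of factor levels: 2 x 2 x 2 x 2 = 16 cells.
cells = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```

Crossing the four two-level factors yields 16 design cells, before any items or repetitions are assigned to them.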
Taking Orientation first, Figure 3 shows that this matters a lot: as predicted, in every condition, upright faces are easier to identify than the same faces presented upside-down. No surprise here. What is surprising, though, is that the inversion effect is probably only significant for Asian faces (in other words, for native faces!). Curiously, inverted Caucasian faces are not much harder to discriminate than when presented upright.
Figure 3. Interactions between Nativeness and Orientation: upright faces are easier to discriminate than inverted ones, but the inversion effect appears reliable only for Asian (native) faces.
I haven't even started the inferential stats yet, so some of these initial results may turn out to be spurious, but it looks like all of these factors interact in interesting ways. Perhaps most surprisingly, handedness seems to interact with the other variables: left-handers are better than right-handers overall—much better in certain conditions—but also appreciably worse in others. See Figures 4 and 5 below. (Less surprising was the apparently significant main effect of Orientation, though it is still curious that this effect is larger for left-handers than for right-handers.)
Figure 4. As might have been predicted, it's harder—for everyone—to discriminate faces when they are upside down (INV) than the right way up; that's what we see in Figure 3. What is less expected are the sub-patterns revealed here: left-handers seem to show a huge effect of orientation for native (Japanese) faces, but not for non-native (Caucasian) ones; right-handers, by contrast, are better overall at discriminating non-native faces, but relatively poor at discriminating native ones.
Figure 5. Shorter ISI results in greater accuracy, but only for native faces (for left-handers) and non-native faces (for right-handers). Go figure!
All of which seemed very weird, until I looked again and realized that there were only 4 left-handed subjects in the analysis set of 40. So it may really be a freak result. More anon.
Sounds
As for the sounds, these have worked out almost exactly as predicted. Figures 6 and 7 show the accuracy scores and response latencies for trials involving minimal phonemic contrasts in English and Japanese, embedded in real lexical items. The blue bars show the discriminability of the control items: /t/ vs. /d/, a phonemic contrast common to both languages. The red bars show participants' performance on test items, i.e., contrasts that are not shared by both languages: /l/ vs. /r/ in the English trials, phonemically short vs. long segments in the Japanese trials. Within the test items, 'more difficult' English contrasts are those where the critical phoneme is non-initial in the syllable (e.g., prays/plays), in contrast to the more perceptible syllable-initial contrasts (low/row); for the Japanese items, differences in vowel length were predicted to be harder to detect than consonantal length contrasts.
Figures 6 and 7 clearly suggest main effects of LANGUAGE (English vs. Japanese) and TRIAL TYPE (Control vs. Test), also—possibly—a small main effect of ITEM DIFFICULTY, as well as a reliable interaction between language and trial type, as reflected in both dependent measures: the Japanese participants in the study are significantly less accurate and slower in correctly discriminating non-native phonemic contrasts—i.e., /l/ vs. /r/—than in discriminating native ones (67% vs. 98%). Notice that there is no appreciable difference in the responses to English and Japanese control items (both >95%).
Figure 6. Sounds—Accuracy across conditions.
Figure 7. Sounds—RT across conditions.
The sounds experiment also manipulated ISI, in this case, the delay between presentation of the second alternative sound and the target sound (either 2250 or 3250 msecs). Ceteris paribus, longer ISIs should increase task difficulty. However, as Figures 8 and 9 show, increasing ISI appears to have had no significant effect on overall accuracy, though there was a (possibly significant) increase in response latency in the English/Test Conditions.
Figure 9. Sounds—Effects of ISI on response latency. (Note that the RTs here are based only on correct responses.)