For Clara Cohen, language is all about patterns. The postdoctoral psychology researcher has been interested in linguistic patterns since she was an undergraduate learning Russian, and now, thanks to advances in technology, she can study patterns in language as they occur in real time.

Through the Center for Language Science and the Language and Bilingualism Lab in the Department of Spanish, Italian and Portuguese, Cohen is using an eye-tracking device to examine the subtle differences in how monolingual and bilingual speakers process singular and plural nouns.

Although only a small fraction of the larger scope of linguistics research, Cohen’s study could potentially unlock new information about how language is processed across the globe, with possible benefits for people with dyslexia and even for Apple’s computerized personal assistant, Siri.

Cohen suggests that for English speakers, determining whether or not a noun is plural might actually occur before a person even hears the “s” at the end of the word. In fact, there are subtle cues in the duration of singular and plural words that could help a listener predict plurality.

“One pattern that people will have heard throughout their lives as they speak English is that before a plural suffix, the stem of a noun is a little bit shorter,” Cohen said. “So ‘cats’ with a suffix sounds shorter than ‘cat.’”

Although English speakers have developed this fine-tuned language processing, speakers of many other languages, like Spanish, might not need to pay attention to subtle changes in duration since the article before a noun indicates whether it’s singular or plural (e.g., el for singular or los for plural).

To test how quickly English and non-English speakers process plurality, Cohen monitors study participants with an eye-tracking device. The device works by shining a harmless and invisible infrared light on the subject’s eye. Based on the position of the light’s reflection, the attached optical video sensor uses an algorithm to determine the direction and duration of a subject’s gaze.
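The core of the analysis such a device enables can be sketched in a few lines. The toy Python example below (an illustration only, not Cohen’s actual software; the screen layout and coordinates are assumptions) maps a stream of timestamped gaze coordinates onto the four on-screen image regions and tallies how long the gaze rested on each:

```python
# Toy sketch of a visual-world eye-tracking analysis (illustrative only).
# Each gaze sample is (time_ms, x, y) on an assumed 1000 x 1000 px screen
# divided into four quadrants, one per image in the four-picture display.

def region(x, y):
    """Map a gaze coordinate to one of the four image quadrants."""
    col = "left" if x < 500 else "right"
    row = "top" if y < 500 else "bottom"
    return f"{row}-{col}"

def gaze_durations(samples):
    """Accumulate total looking time (ms) per region from gaze samples."""
    totals = {}
    for (t0, x, y), (t1, _, _) in zip(samples, samples[1:]):
        totals[region(x, y)] = totals.get(region(x, y), 0) + (t1 - t0)
    return totals

# Example: the gaze lingers on the top-left image, then shifts to the
# bottom-right one partway through the sentence.
samples = [(0, 100, 100), (200, 120, 110), (400, 700, 800), (600, 710, 790)]
print(gaze_durations(samples))  # {'top-left': 400, 'bottom-right': 200}
```

Timing when the gaze shifts to the named picture, relative to when the word was heard, is what gives a moment-by-moment record of comprehension.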

For the purposes of the study, Cohen is interested in where a subject’s eye travels while listening to a speaker say a sentence with either singular or plural words.

Study participants sit in a soundproof room and watch a computer screen while wearing headphones and resting their heads on a chin rest (similar to one you’d find at the eye doctor). Four images appear on the screen — a seal, a bun, a bunny and a herd of seals. Cohen’s voice comes on through the headphones: “The man looked at the seals.”

As the participant’s eyes look toward the image of the seals, the reflection of the infrared light gives the sensor real-time information on how fast the participant processed the sentence.

According to Cohen, these precise calculations wouldn’t be possible without advances in technology.

“Before we had eye trackers, the way we would interpret how people understood a sentence is by giving them a question like, ‘Is the last noun in this sentence singular or plural?’ And then we'd play them a sentence and have them press a button for yes or no,” Cohen said. “Based on how fast they pressed the button, we could determine whether the sentence was easier or harder to process.”

Examining language through these older methods provides less informative data, as these yes-or-no questions are unable to capture information in real time.

“So by determining where someone is looking as a sentence unfolds over time, we can determine at what point they figured out what was being asked of them,” Cohen said.

The experiments conducted here at Penn State are only part of the puzzle — in May, Cohen will travel to Tarragona, Spain, on a National Science Foundation Partnerships for International Research and Education (PIRE) grant to recreate the study with monolingual and bilingual Spanish participants.

“Everyone who I'll be working with in Spain has taken English in high school, but the monolinguals haven't used it since and the bilinguals still use it,” Cohen said. “I'm going to be looking at whether the monolinguals are insensitive to this duration difference in Spanish. For the bilinguals, I'm interested in how quickly they can shift their awareness of cues.”

Although Cohen’s study may only be a small piece of linguistics research, there are a variety of benefits that could come from the results.

Aside from changing the way foreign languages are understood and taught, a possible implication is improving how automatic speech recognition systems — like Siri — process language.

“Simply knowing that you have this pattern of durational differences depending on whether or not there's a suffix can help automatic speech recognition systems say, ‘This sound is a little bit longer than I expected and there's a suffix, therefore, it must be a verb and not a noun,’” Cohen said. “So this understanding of patterns of pronunciation can help improve the accuracy of automatic speech recognition.”
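The kind of durational cue Cohen describes can be illustrated with a toy re-scoring rule. The Python sketch below is not a real speech recognition system, and the duration values are made up; it simply shows how a recognizer could favor the suffixed reading when an observed stem is shorter than its bare-form average:

```python
# Toy illustration of stem duration as a disambiguation cue (made-up data).
# Stems tend to be pronounced a little shorter when a suffix follows, so an
# unexpectedly short stem is weak evidence for a suffixed word like "cats".

MEAN_STEM_MS = {
    ("cat", False): 300,  # assumed average for "cat" spoken alone
    ("cat", True): 270,   # assumed average for "cat" as the stem of "cats"
}

def prefer_suffixed(stem, observed_ms):
    """Return True if the observed stem duration is closer to the
    shortened pre-suffix pronunciation than to the bare form."""
    bare = MEAN_STEM_MS[(stem, False)]
    suffixed = MEAN_STEM_MS[(stem, True)]
    return abs(observed_ms - suffixed) < abs(observed_ms - bare)

# A 275 ms stem sits closer to the pre-suffix average, nudging the
# recognizer toward the plural reading "cats".
print(prefer_suffixed("cat", 275))  # True
```

A production system would fold a cue like this into its acoustic-model scores rather than apply a hard rule, but the principle is the same.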

And for those with auditory processing disorder, a condition often found in people with dyslexia that hinders a person’s ability to hear subtle changes in the pitch and duration of sounds, Cohen’s results could eventually lead to changes in how the disorder is managed.

“Understanding exactly how important durational variation is in speech, whatever language you're speaking, might help people with these types of auditory processing disorders better manage the effects on their lives,” Cohen said.

But for Cohen, one of the greatest personal benefits of the study is the thrill of uncovering more patterns in language.

“Learning a foreign language has its own rules and principles,” Cohen said. “It doesn’t have to be a mystery.”

For more IT stories at Penn State, visit http://news.it.psu.edu.