Babies can recognize speech, and even distinguish between two languages, while they’re still in the womb, a new study shows.
Researchers from the University of Kansas measured fetal heart rates while playing recordings of a person speaking in English and in Japanese. They found that the babies’ heart rates changed when they heard the Japanese recording after the English one, indicating that early speech recognition was occurring.
“Research suggests that human language development may start really early — a few days after birth,” said lead author Utako Minai, associate professor of Linguistics at the University of Kansas.
“Babies a few days old have been shown to be sensitive to the rhythmic differences between languages. Previous studies have demonstrated this by measuring changes in babies’ behavior; for example, by measuring whether babies change the rate of sucking on a pacifier when the speech changes from one language to a different language with different rhythmic properties,” explained Minai.
Those prior studies had Minai and her team searching for more answers.
“This early discrimination led us to wonder when children’s sensitivity to the rhythmic properties of language emerges, including whether it may, in fact, emerge before birth,” Minai said. “Fetuses can hear things, including speech, in the womb. It’s muffled, like the adults talking in a ‘Peanuts’ cartoon, but the rhythm of the language should be preserved and available for the fetus to hear, even though the speech is muffled.”
One previous study had suggested that fetuses could distinguish between languages; however, Minai and her team saw a flaw in that study’s design that left the cause of the apparent language discrimination in question.
“The previous study used ultrasound to see whether fetuses recognized changes in language by measuring changes in fetal heart rate,” Minai said. “The speech sounds that were presented to the fetus in the two different languages were spoken by two different people in that study. They found that the fetuses were sensitive to the change in speech sounds, but it was not clear if the fetuses were sensitive to the differences in language or the differences in speaker, so we wanted to control for that factor by having the speech sounds in the two languages spoken by the same person.”
Minai and her team also used a more sensitive piece of equipment than previous studies had: a fetal biomagnetometer, which detects the magnetic fields produced by the body’s electrical currents.
“The biomagnetometer is more sensitive than ultrasound to the beat-to-beat changes in heart rate,” said Kathleen Gustafson, a research associate professor in the Department of Neurology at the University of Kansas Medical Center’s Hoglund Brain Imaging Center. “Obviously, the heart doesn’t hear, so if the baby responds to the language change by altering heart rate, the response would be directed by the brain.”
The study found just that. When the researchers compared the babies’ responses to English and Japanese, which are aurally distinct and have different rhythms, the babies’ heart rates changed when they heard a passage of Japanese after having heard English.
When babies heard consecutive passages in English, their heart rates didn’t change, indicating that those who heard Japanese had picked up on the change in language.
“The results came out nicely, with strong statistical support,” Minai said. “These results suggest that language development may indeed start in utero. Fetuses are tuning their ears to the language they are going to acquire even before they are born, based on the speech signals available to them in utero. Prenatal sensitivity to the rhythmic properties of language may provide children with one of the very first building blocks in acquiring language.”