How Babies’ Brilliant ‘Onboard Computers’ Sort Language From Sound Soup

A baby smiles while being tested in a magnetoencephalography (MEG) machine
Photo: Institute for Learning & Brain Sciences (I-LABS), UW.

When you see a baby gazing out at the world, you might imagine a little sponge passively soaking up information. Don’t let that baby face fool you. What’s actually going on is computational wizardry so sophisticated that it outpaces any known machine: sorting multiple data feeds and running statistics millisecond by millisecond to extract and analyze essential information about the baby’s environment. Those little brains are busy. And a large chunk of that analysis involves cracking the complicated code of human language — a task at which, says language expert Dr. Patricia Kuhl, babies are sheer geniuses.

As co-director of the Institute for Learning & Brain Sciences (I-LABS) at the University of Washington, Kuhl holds the Bezos Family Foundation Endowed Chair for Early Childhood Learning and is an internationally recognized authority on early language and brain development. She is also an enthusiastic believer in the brilliance of babies, with plenty of scientific data to back her up.

“No computer, no matter how sophisticated, can do what a baby can do in listening to language input and deriving the words, grammar and the sound contrasts that create language,” she says. “No artificial intelligence in the world has been able to do that so far.”

Thanks to the magnetoencephalography (MEG) brain-imaging machine at the University of Washington’s I-LABS, a one-of-a-kind mechanical tour de force, researchers have been able to scan babies’ brains in a way that is safe, noninvasive and silent, picking up with millimeter accuracy the magnetic fields generated as a baby listens and learns. It is the first machine in the world able to image babies’ brains as they learn, providing what Kuhl calls a “tsunami of data” about what’s going on inside them.

Long before babies can understand a single word, their brains are crunching statistics, tracking how frequently particular sounds occur. Babies start out as “citizens of the world,” Kuhl says, able to distinguish all the sounds associated with human language from background noise, no matter what country or what language they encounter. At six to eight months, the brain scans of babies throughout the world light up equally in response to all human language sounds. About two months later, an incredible shift occurs: babies’ brains begin to light up only for the languages around them, to the exclusion of other languages. If the language in a baby’s environment is Japanese, for example, their brain scans will show no response to English Rs and Ls. But if the language in their world is English, multiple areas of their brains will light up in response to the abundance of Rs and Ls they hear. By 10 months, she says, a trained ear can hear that Chinese babies are babbling differently from French babies, who are babbling differently from American infants.
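
What might “crunching statistics” on sound look like? One mechanism researchers in this field have described is tracking transitional probabilities: how often one syllable follows another. The Python sketch below is a toy illustration only — the three nonsense “words” and their order are invented for this example, not data or code from Kuhl’s studies. Inside a word, each syllable predicts the next perfectly; between words, the next syllable varies, and those dips in predictability mark likely word boundaries.

    from collections import Counter

    # Toy sketch of statistical learning over an unbroken syllable stream.
    # The "words" and their order are invented for illustration.
    words = ["pa bi ku", "go la tu", "da ro pi"]        # hidden nonsense words
    order = [0, 1, 2, 0, 2, 1, 0, 1, 0, 2, 1, 2]        # arbitrary word order
    stream = " ".join(words[i] for i in order).split()  # continuous stream

    # Count adjacent syllable pairs, and how often each syllable starts a pair.
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])

    for (a, b), n in sorted(pair_counts.items()):
        tp = n / first_counts[a]  # transitional probability P(b | a)
        note = "  <- likely word boundary" if tp < 1.0 else ""
        print(f"P({b} | {a}) = {tp:.2f}{note}")

In this toy stream every within-word transition comes out at a probability of 1.0, so anything lower flags a boundary. In real speech the contrast is statistical rather than absolute, which is part of what makes the infant brain’s feat so impressive.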

A Talking Timeline

Although all babies learn at different rates, language authority Dr. Patricia Kuhl says their mastery of language generally follows a dependable timeline.

  • At 12 weeks or so, babies are cooing and attempting to mimic the unique sounds of speech.
  • Around 7 months, babies begin to babble, not just producing the vowel sounds of cooing but adding consonants as their lips and tongues develop the coordination and speed to produce sounds like “muh-muh” or “ba-ba-ba.”
  • Around 10 months, babies have begun to reproduce the sounds and patterns of their culture, with Chinese babies beginning to sound Chinese and French babies beginning to make the round, pouty-lipped “eau” that is meaningful in their language.

Babies’ brains are not automatons, though, Kuhl says, and this rich learning happens only when the input comes from social interaction with other human beings.

“Our studies contrasted a foreign language tutor, who spoke either Mandarin or Spanish, with the exact same material presented over a DVD,” she says. “The babies crawl up to the television and watch it. But the tests afterwards that measure learning demonstrated that those watching TV (or listening to an audio recording) learned nothing. They were just like the control (subjects) who had only listened to English.”

The only group of babies who actually learned were listening and interacting with a person, which lit up their brain scans in ways that showed that they were “socially electrified, watching everything that was going on and learning like crazy,” Kuhl says.

The takeaway is that the brain’s statistical learning is only part of the language equation. Those statistical processes launch only when babies are engaged with a social “other,” she says. The human mind is hard-wired to pick out patterns, and babies need social interaction to put those patterns in context, to receive the smiles and nods and reinforcement that say, “Yes, you’re really onto something with that ‘ba-ba’ thing.”

Some parents think that because their babies aren’t saying words yet, the parents don’t need to talk to them, but precisely the opposite is true, Kuhl says. The brain is waiting for the back-and-forth that helps it sort language out of the sound soup surrounding the baby. And that call-and-response activates not only the auditory centers of the brain but also the areas that govern motor response. In other words, even before a baby can talk, engaging with people who speak to them primes the motor centers the child will need once the vocal mechanism develops enough to form words.

A third, equally important finding in Kuhl’s research concerns the particular way in which parents and caregivers speak with their babies.

Researchers call the speech pattern “parentese,” and it is distinctly different from baby talk — not “Um’s a widdle cutie patootie,” but “Is that a ball? Is it a red ball?” Parentese uses simple grammar and a here-and-now vocabulary, delivered slowly and deliberately with a singsong, upbeat tone that grabs the baby’s attention. As the child’s brain tries to map which sounds occur most frequently, the “linguistic units” need to be clear, and parentese makes them clearer and cleaner.

As further evidence of babies’ sophisticated computational capacity, the researchers’ data show that babies in bilingual or even multilingual environments are learning those languages, too. As long as the input is happening and social interaction is present, Kuhl says, babies can hear Mom speaking one language, Dad speaking another and even Grandma speaking a third, and their brains keep learning the sounds, words and grammar of the additional languages, with no confusion and no slowdown on the developmental timeline.

Plainly, Kuhl says, the U.S. needs to start teaching foreign languages much, much earlier — before the first birthday, if other languages are to be natural for the child.

With this incredible human gift of language comes great responsibility, Kuhl says. A baby’s mind-boggling onboard computer isn’t a turnkey operation: it needs deliberate language input, stimulation and interaction, early and often. Experience builds the brain.

“It’s not as though you just birth the baby, turn it on like a Christmas tree and it’s done,” she says. “No computer in the world can learn like a baby, but that comes with a responsibility for the adults in the culture to provide the social, cognitive and linguistic world that brain needs.”


K.C. Compton worked as a reporter, editor and columnist for newspapers throughout the Rocky Mountain region for 20 years before moving to the Kansas City area as an editor for Mother Earth News. She has been in Seattle since 2016, enjoying life as a freelance and contract writer and editor.
