When we open our mouths to speak and words come out, we don’t think too much about the mechanics of spoken language.

However, human speech is planned out, says Brad Story, professor of speech, language and hearing sciences.

“Understanding how human beings can use an air space to encode all the complexities of our language and launch it out into the atmosphere for some listener to hear is a question that’s never truly been answered,” he says.

“It is understanding the process of encoding and decoding information by transforming movement into sound that’s really one of the questions of nature that keeps us going. There are practical applications, but there’s also answering that basic question of nature.”

Story and his team are working to understand how a person figures out how to change the shape of the throat and mouth to produce the sounds of intelligible speech.

They’ll use that data to build computational models of the acoustic possibilities of speech production.

That can help researchers understand the causes of a wide variety of speech disorders. It also helps scientists better understand what children are doing with their tongue, lips and jaw as they develop the ability to speak, Story says.

Ultimately, that understanding could have clinical applications, and it may also lead to technical applications, such as speech synthesizers that re-create the speech production system rather than simply producing sounds that resemble speech.

That could be particularly important for children, whose speech doesn’t sound like an adult’s.

Story’s current research focuses on children’s speech development.

“Our model is replicating the air space of the throat and the mouth, and the vibration of the vocal folds, which are in the larynx, and all of the movements that go into changing the shape of that air space to produce speech,” he says.

“It’s a much more complete model; it’s more like a simulation than a synthesizer.”
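
To see what that distinction means, the sketch below shows the classic source-filter idea in Python: a pulse train stands in for vocal-fold vibration, and a pair of resonances stands in for the acoustic effect of the throat-and-mouth air space. This is only a rough illustration of conventional speech synthesis, not Story's model; the sample rate, pitch and formant values are assumed placeholder numbers.

```python
import numpy as np

# A minimal source-filter sketch (illustrative only, not Story's simulation).
# The "source" approximates vocal-fold pulses; the "filter" approximates the
# resonances of the throat-and-mouth air space.

fs = 16000          # sample rate in Hz (assumed)
dur = 0.5           # seconds of sound
f0 = 120            # vocal-fold vibration rate (pitch) in Hz (assumed)

# 1. Source: an impulse train, one pulse per glottal cycle.
n = int(fs * dur)
source = np.zeros(n)
source[::fs // f0] = 1.0

# 2. Filter: two second-order resonators tuned near the first two
#    formants of an "ah"-like vowel (rough textbook values).
def resonator(x, freq, bandwidth, fs):
    """Apply a two-pole resonator: y[i] = x[i] + a1*y[i-1] + a2*y[i-2]."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    a1, a2 = 2 * r * np.cos(theta), -r * r
    y = np.zeros_like(x)
    for i in range(len(x)):
        y[i] = x[i]
        if i >= 1:
            y[i] += a1 * y[i - 1]
        if i >= 2:
            y[i] += a2 * y[i - 2]
    return y

vowel = resonator(resonator(source, 700, 130, fs), 1200, 70, fs)
vowel /= np.max(np.abs(vowel))   # normalize so the waveform peaks at 1
```

A simulation in the sense Story describes would instead compute the airflow and pressure inside a moving model of the vocal tract itself; the point of the sketch is only that even the simplest synthesizer separates the vocal-fold source from the shape-dependent filtering of the air space.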