Researchers at Cornell University have reinvented the microphone, installing one inside a wearable device that can detect body sounds other than spoken words, sounds that offer clues to one's mood and eating patterns.

The market is full of electronic devices, wearables and smartphone apps that use advanced speech recognition technology to let the user issue commands and operate the device. However, researchers found that the microphones in these gadgets can be used not only to figure out what words come out of our mouths, but also how we say them, which can indicate emotion or stress. Furthermore, the microphone can detect nonverbal sounds -- laughing, chewing, yawning, grunting, heavy breathing, and other body sounds -- that can give clues to our health.

This focus on nonverbal body sounds is behind a new device invented by researchers at Cornell University. It looks simple enough, like any other headset or earpiece. But inside is a microphone unlike the usual type found in consumer devices: a prototype piezoelectric sensor that picks up nonverbal sound waves transmitted through the skull, sounds such as chewing or laughing that can indicate your mood or state of well-being. Researchers say the sensor can even track eating habits continuously, because the mic is always on.
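The Cornell paper does not spell out its detection algorithm, but one simple way to frame always-on eating detection is as energy-based onset detection over short audio frames. The sketch below is a hypothetical illustration, not the team's actual pipeline; the frame size, threshold, and function names are all invented.

```python
# Hypothetical sketch of onset-of-eating detection from an always-on mic.
# NOT the Cornell team's algorithm: frame size and thresholds are invented.

def rms(frame):
    """Root-mean-square amplitude of one audio frame."""
    return (sum(x * x for x in frame) / len(frame)) ** 0.5

def detect_onsets(samples, frame_size=160, threshold=0.1, min_frames=3):
    """Return sample indices where sustained energy first exceeds threshold."""
    onsets = []
    active = 0
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        if rms(samples[start:start + frame_size]) > threshold:
            active += 1
            if active == min_frames:  # sustained sound -> candidate chewing bout
                onsets.append(start - (min_frames - 1) * frame_size)
        else:
            active = 0  # isolated noise spike, reset
    return onsets

# Synthetic signal: silence, a loud "chewing" burst, then silence again.
signal = [0.0] * 1600 + [0.5, -0.5] * 800 + [0.0] * 1600
print(detect_onsets(signal))  # -> [1600], the sample where the burst begins
```

Requiring several consecutive loud frames (rather than one) is what makes the detector robust to brief noise spikes, a rough stand-in for the noise-cancelling the real device relies on.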

“We see ‘quantified self’ and health tracking taking off, but one unsolved problem is how to track food consumption in an automated way,” Dr. Tanzeem Choudhury, lead researcher and associate professor of information science at Cornell, told MIT Technology Review. “This can reliably detect the onset of eating and how frequently are you eating.”

That information could be combined with other nutritional data, such as calories consumed, meal types, and serving sizes, to give a holistic picture of one's eating habits.

As an additional sensor, the microphone is a good complement to those that measure blood pressure, heart rate, temperature, steps and sleep patterns. These advanced sensors have been touted as revolutionizing personalised healthcare. Yet the microphone can capture subtleties and nuances of body sounds and human language that the other sensors, given how they process signals from the body, may miss. Combining them all could give us a broader and clearer picture of our health.

The research team at Cornell wants this mic-as-biosensor technology to be used in existing, off-the-shelf smartphones. But they are also thinking of bigger things than personalized healthcare.

“This could be a bridge between tracking pollution and coughing and other respiratory sounds to get a better measure of how pollution is affecting the population,” Dr. Choudhury said, referring to how entire cities can use the technology as an epidemiological tool. The device has background noise-cancelling technology to make it more reliable in noisy environments.

Speech has long been known as an indicator of stress, and Dr. Choudhury and her team had earlier worked on an app that uses a smartphone's microphone in new ways. Their Android app, called StressSense, captures and analyses voice characteristics such as amplitude and frequency. By noting changes in how a user speaks, the app detects whether the user is stressed.
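StressSense itself uses trained classifiers, but the kind of features it tracks can be illustrated with a toy sketch: per-frame amplitude plus zero-crossing rate (a crude frequency proxy), compared against a speaker's calm baseline. The baseline-ratio rule and all names below are invented for illustration.

```python
# Toy illustration of voice-feature tracking in the spirit of StressSense.
# Real stress detection uses trained models; this ratio rule is invented.

def voice_features(frame):
    """Mean absolute amplitude and zero-crossing rate (a crude pitch proxy)."""
    amp = sum(abs(x) for x in frame) / len(frame)
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)
    return amp, zcr

def stressed(frame, baseline_amp, baseline_zcr, ratio=1.5):
    """Flag frames whose loudness AND pitch proxy rise well above baseline."""
    amp, zcr = voice_features(frame)
    return amp > ratio * baseline_amp and zcr > ratio * baseline_zcr

# Synthetic voice frames: a quiet low-frequency one and a loud high-frequency one.
calm = [0.1 * ((-1) ** (i // 4)) for i in range(400)]
tense = [0.3 * ((-1) ** i) for i in range(400)]
b_amp, b_zcr = voice_features(calm)
print(stressed(calm, b_amp, b_zcr), stressed(tense, b_amp, b_zcr))  # False True
```

Comparing against a per-user baseline, rather than a fixed threshold, matters because people differ widely in natural loudness and pitch.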

Other researchers are also using other nonverbal cues in detecting mood and well-being. A company called Affectiva has developed facial recognition technology that recognizes emotions. The company lets users share “face data” to digitize the myriad emotions that a human face can convey.