Smartphones, smart speakers could pick out drunk drivers by analyzing voice patterns

When checked against breath alcohol results, changes in the participants’ voice patterns as the experiment went on predicted alcohol intoxication with 98% accuracy.

Technology identifies drunk drivers through voice analysis (photo credit: Rutgers Center of Alcohol & Substance Use Studies/Journal of Studies on Alcohol and Drugs)

Drunk driving is an epidemic around the world. Detecting excessive alcohol consumption became possible about 70 years ago, when Robert F. Borkenstein of the Indiana State Police invented the breathalyzer, which uses chemical oxidation and photometry to determine alcohol concentrations.

The results can be confirmed in a lab by a blood-alcohol content (BAC) test, which can detect alcohol in the blood for up to 12 hours after drinking. Now, a quicker and potentially more accurate technique has been tested by researchers at Stanford University Medical School in California and the University of Toronto in Canada: sensors in smartphones and smart speakers could help determine a person’s level of alcohol intoxication based on changes in their voice.

It was tested on 18 adults – aged 21 to 62 and 72% male – who were given a weight-based dose of alcohol and randomly assigned a series of tongue twisters to recite, one before drinking and one each hour for up to seven hours after drinking. A smartphone placed on a table 1 to 2 feet away recorded their voices. The team used digital programs to isolate each speaker’s voice, broke the recordings into one-second increments, and analyzed measures such as frequency and pitch.
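The processing steps described above – splitting a recording into one-second increments and measuring pitch in each – can be illustrated with a toy sketch. The study's actual signal-processing and machine-learning code is not public, so the synthetic test tone and the simple autocorrelation pitch estimator below are illustrative stand-ins, not the researchers' method.

```python
import math

SAMPLE_RATE = 8000  # Hz; a low rate keeps this pure-Python example fast

def sine_wave(freq_hz, seconds, rate=SAMPLE_RATE):
    """Synthesize a steady tone standing in for a recorded voice."""
    n = int(seconds * rate)
    return [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]

def split_into_seconds(samples, rate=SAMPLE_RATE):
    """Break a recording into one-second increments, as in the study."""
    return [samples[i:i + rate] for i in range(0, len(samples), rate)]

def estimate_pitch(frame, rate=SAMPLE_RATE, fmin=80, fmax=400):
    """Estimate fundamental frequency by autocorrelation, searching
    lags that correspond to 80-400 Hz (a typical speech pitch range)."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(rate // fmax, rate // fmin + 1):
        corr = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return rate / best_lag

recording = sine_wave(220.0, 3.0)        # a 3-second "utterance"
frames = split_into_seconds(recording)   # three 1-second increments
pitches = [estimate_pitch(f) for f in frames]
```

Each estimated pitch comes out near the 220 Hz of the synthetic tone. A real system would replace the test tone with microphone audio and feed many such per-second features (pitch, frequency content, and others) into a trained classifier.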


What was the study about?

The study entitled “Detection of alcohol intoxication using voice features: A controlled laboratory study” has just been published in the Journal of Studies on Alcohol and Drugs. 

“The accuracy of our model genuinely took me by surprise,” said lead researcher and Stanford emergency medicine Prof. Brian Suffoletto. “While we aren’t pioneers in highlighting the changes in speech characteristics during alcohol intoxication, I firmly believe our superior accuracy stems from our application of cutting-edge advancements in signal processing, acoustic analysis, and machine learning.”

Suffoletto said the goal of such analysis is to deliver “just-in-time interventions” to prevent injury and death resulting from motor vehicle or other accidents. The best intervention tool would be easy to use and readily available, and the near-ubiquitous nature of smartphones and smart speakers makes them an obvious tool for helping alert people that they’ve become intoxicated.

“While one solution could be frequently checking in with someone to gauge their alcohol consumption, doing so could backfire by being annoying, at best, or by prompting drinking, at worst,” he said. “So, imagine if we had a tool capable of passively sampling data from an individual as they went about their daily routines and surveil for changes that could indicate a drinking episode to know when they need help.”

He predicted that surveillance tools may eventually combine several sensors for gait, voice, and texting behavior. He suggested that much larger studies on people from a wide variety of ethnic backgrounds are needed to confirm the validity of voice patterns as an indicator of intoxication. It may also be helpful to build relationships with companies that are already collecting speech samples through smart speakers.


He sees this research as a call to action, urging the National Institutes of Health to develop data repositories for these types of digital biomarkers. The ultimate goal is to develop an intervention system that people are willing to use and can help prevent injuries and save lives.