Stanford study shows smartphones could predict when a person is drunk with 98% accuracy

A recent study suggests that smartphones could detect when a person is drunk. The Stanford University research found that a smartphone-based model could identify intoxication from a person's voice patterns with 98% accuracy.

The study involved 18 individuals aged 21 and over whose voice recordings were analyzed to determine whether they were intoxicated. The research was published in the Journal of Studies on Alcohol and Drugs.

Brian Suffoletto, an associate professor of emergency medicine at Stanford University in the U.S., said he was surprised by the accuracy of the results and added that further studies are required to confirm the validity of these findings.

He suggested that, in the future, these findings could help reduce road injuries and deaths caused by drunk driving and similar alcohol-related incidents, adding:

“While we aren’t pioneers in highlighting the changes in speech characteristics during alcohol intoxication, I firmly believe our superior accuracy stems from our application of cutting-edge advancements in signal processing, acoustic analysis, and machine learning.”

In the study, the participants were given a dose of alcohol based on their body weight and had to finish their drinks within an hour.

They were then given a series of tongue twisters to repeat out loud every hour for up to seven hours, while their voices were recorded by smartphones.

Before the participants began drinking, the researchers measured their breath alcohol levels and had them record the tongue twisters as a baseline. Breath alcohol levels were then measured every 30 minutes for up to seven hours.

After this, the researchers analyzed parameters such as frequency and pitch in one-second increments, using software to isolate the speaker's voice.
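As a rough illustration of that kind of analysis (not the study's actual pipeline), the sketch below uses the open-source librosa library to estimate pitch for each one-second slice of a voice recording; the file name, sample rate, and pitch range are hypothetical choices.

```python
# Illustrative sketch only: estimate mean pitch (fundamental frequency, Hz)
# for each one-second slice of a speech recording.
import numpy as np
import librosa

def per_second_pitch(path: str):
    """Return one mean pitch value per one-second chunk of the recording."""
    y, sr = librosa.load(path, sr=16000, mono=True)   # mono speech at 16 kHz
    pitches = []
    for start in range(0, len(y) - sr + 1, sr):       # step through 1-second chunks
        chunk = y[start:start + sr]
        # Estimate fundamental frequency with the YIN algorithm,
        # limited to a typical adult speech range (65-400 Hz).
        f0 = librosa.yin(chunk, fmin=65, fmax=400, sr=sr)
        pitches.append(float(np.mean(f0)))
    return pitches

# Example (hypothetical file name):
# pitches = per_second_pitch("tongue_twister_recording.wav")
```

Per-second features like these could then be fed to a machine-learning classifier, which is the general approach the quote above alludes to.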

The researchers then compared the model's predictions with the breath alcohol results and found a close match, with the model they developed reaching 98% accuracy.
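For a sense of what that comparison step could look like, here is a minimal, hypothetical sketch: each prediction from the voice model is scored against a label derived from the matched breath alcohol reading. The 0.08 cutoff and the example data are assumptions for illustration, not figures from the paper.

```python
# Hypothetical sketch of the comparison step: score model predictions
# against labels derived from matched breath alcohol readings.
from sklearn.metrics import accuracy_score

BRAC_CUTOFF = 0.08  # assumed breath alcohol threshold for "intoxicated"

def score_against_breathalyzer(predicted_intoxicated, breath_alcohol_readings):
    """predicted_intoxicated: booleans from the voice model.
    breath_alcohol_readings: matched breath alcohol concentrations."""
    true_labels = [reading >= BRAC_CUTOFF for reading in breath_alcohol_readings]
    return accuracy_score(true_labels, predicted_intoxicated)

# Example: score_against_breathalyzer([True, False, True], [0.10, 0.02, 0.09])
# returns 1.0, since all three predictions match the breathalyzer-derived labels.
```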

According to Prof. Suffoletto, sensor data from behaviors like walking and texting, combined with voice analysis, could be used to gauge a person's level of intoxication.

He added that timing plays a vital role in identifying when a person needs help: a prompt delivered as someone starts drinking, for example, could remind them of their consumption limits and help prevent excessive intoxication.

Via The Independent
