
Even the Best Speech Recognition Systems Exhibit Bias, Study Finds

Friday April 2, 2021. 03:00 PM , from Slashdot
An anonymous reader quotes a report from VentureBeat: Even state-of-the-art automatic speech recognition (ASR) algorithms struggle to recognize the accents of people from certain regions of the world. That's the top-line finding of a new study by researchers at the University of Amsterdam, the Netherlands Cancer Institute, and the Delft University of Technology, which found that an ASR system for the Dutch language recognized speakers of certain age groups, genders, and countries of origin better than others. In a series of experiments, the coauthors examined how well the system contended with diversity in speech along the dimensions of gender, age, and accent.

The researchers began by having an ASR system ingest sample data from CGN, an annotated corpus used to train AI language models to recognize the Dutch language. When they ran the trained ASR system on a test set derived from CGN, they found that it recognized female speech more reliably than male speech regardless of speaking style. Moreover, the system struggled with speech from older people compared with younger speakers, potentially because the former articulate less clearly. And it had an easier time recognizing speech from native speakers than from non-native speakers. Indeed, the worst-recognized native speech -- that of Dutch children -- still had a word error rate around 20% better than that of the best non-native age group. In general, the results suggest that teenagers' speech was most accurately interpreted by the system, followed by that of seniors (over the age of 65) and children. This held even for non-native speakers who were highly proficient in Dutch vocabulary and grammar. One way to reduce the bias is to mitigate it at the algorithmic level. '[We recommend] framing the problem, developing the team composition and the implementation process from a point of anticipating, proactively spotting, and developing mitigation strategies for affective prejudice [to address bias in ASR systems],' the researchers wrote in a paper detailing their work.
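Word error rate (WER), the metric behind the comparisons above, is the word-level edit distance (substitutions, deletions, and insertions) between the system's transcript and a reference transcript, divided by the number of words in the reference. A minimal sketch of how it is computed; the Dutch sample phrase below is illustrative, not drawn from the study's corpus:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via word-level Levenshtein distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six reference words -> WER of 1/6
print(word_error_rate("de kat zat op de mat", "de kat zat op mat"))
```

A lower WER means better recognition, so the study's finding is that even the highest-WER native group scored roughly 20% lower (better) than the lowest-WER non-native group.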

'A direct bias mitigation strategy concerns diversifying and aiming for a balanced representation in the dataset. An indirect bias mitigation strategy deals with diverse team composition: the variety in age, regions, gender, and more provides additional lenses of spotting potential bias in design. Together, they can help ensure a more inclusive developmental environment for ASR.'

Read more of this story at Slashdot.