
Researchers find that debiasing doesn’t eliminate racism from hate speech detection models

Current AI hate speech and toxic language detection systems exhibit problematic and discriminatory behavior, research has shown. At the core of the issue are training data biases, which often arise during the dataset creation process. When trained on biased datasets, models acquire and can exacerbate those biases, for example flagging text by Black authors as more toxic than text by white authors.

Toxicity detection systems are employed by a range of online platforms, including Facebook, Twitter, YouTube, and news publications. While one of the premier providers of these systems, Alphabet-owned Jigsaw, claims it has taken pains to remove bias from its models after a study showed they fared poorly on Black-authored speech, it's unclear to what extent the same is true of other AI-powered solutions.

To see whether current model debiasing approaches can mitigate biases in toxic language detection, researchers at the Allen Institute investigated techniques to address lexical and dialectal imbalances in datasets. Lexical biases associate toxicity with the presence of certain words, like profanities, while dialectal biases correlate toxicity with “markers” of language variants like African-American English (AAE).
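To make the two kinds of bias concrete, here is a minimal sketch (not the Allen Institute's code) of how lexical and dialectal correlations in a labeled toxicity dataset might be measured. The column names ("text", "toxic", "aae_score"), the tiny word list, and the toy data are illustrative assumptions.

```python
# Minimal sketch of measuring lexical and dialectal correlations in a
# toxicity dataset. Column names, the word list, and the data are
# illustrative assumptions, not the researchers' actual setup.
import pandas as pd

PROFANITY = {"damn", "hell"}  # stand-in lexicon; real lexicons are much larger

def has_profanity(text: str) -> bool:
    return any(tok in PROFANITY for tok in text.lower().split())

df = pd.DataFrame({
    "text": ["what the hell is this", "have a nice day",
             "damn that was good", "this is awful behavior"],
    "toxic": [1, 0, 0, 1],               # human toxicity labels
    "aae_score": [0.1, 0.05, 0.7, 0.2],  # e.g., output of a dialect estimator
})

df["has_profanity"] = df["text"].apply(has_profanity)

# Lexical bias: does the mere presence of listed words track the toxicity label?
print(df["has_profanity"].astype(int).corr(df["toxic"]))

# Dialectal bias: does an AAE-likeness estimate track the toxicity label?
print(df["aae_score"].corr(df["toxic"]))
```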


In the course of their work, the researchers looked at one debiasing method designed to tackle “predefined biases” (e.g., lexical and dialectal). They also explored a process that filters “easy” training examples with correlations that might mislead a hate speech detection model.
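As a rough illustration of the second idea, the sketch below filters out examples whose labels a shallow probe can predict too reliably, on the assumption that such "easy" examples carry the spurious correlations. This is a simplified stand-in for the filtering the researchers studied, not their implementation; the probe, thresholds, and feature matrix are assumptions.

```python
# Sketch of "easy example" filtering: drop training examples whose labels a
# simple linear probe keeps predicting correctly across random splits.
# Thresholds and the choice of probe are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def filter_easy_examples(X, y, n_rounds=20, train_frac=0.8, easy_threshold=0.75, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    correct = np.zeros(n)
    counted = np.zeros(n)
    for _ in range(n_rounds):
        idx = rng.permutation(n)
        cut = int(train_frac * n)
        train, held = idx[:cut], idx[cut:]
        probe = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        preds = probe.predict(X[held])
        correct[held] += (preds == y[held])
        counted[held] += 1
    # Predictability score: fraction of held-out rounds the probe got right.
    score = np.divide(correct, counted, out=np.zeros(n), where=counted > 0)
    return score < easy_threshold  # keep only examples the probe finds hard

# Usage:
# X = np.random.rand(200, 16); y = np.random.randint(0, 2, 200)
# mask = filter_easy_examples(X, y)
# X_filtered, y_filtered = X[mask], y[mask]
```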

According to the researchers, both approaches struggle to mitigate biases in a model trained on a biased toxic language dataset. In their experiments, while filtering reduced bias in the data, models trained on the filtered datasets still picked up lexical and dialectal biases. Even "debiased" models disproportionately flagged certain snippets as toxic. Perhaps more discouragingly, mitigating dialectal bias didn't appear to change a model's propensity to label text by Black authors as more toxic than text by white authors.

In the interest of thoroughness, the researchers embarked on a proof-of-concept study involving relabeling examples of supposedly toxic text whose translations from AAE to “white-aligned English” were deemed nontoxic. They used OpenAI’s GPT-3 to perform the translations and create a synthetic dataset — a dataset, they say, that resulted in a model less prone to dialectal and racial biases.
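The relabeling logic amounts to the following highly simplified sketch. `translate_to_wae` and `score_toxicity` are stand-ins for the GPT-3 translation step and the toxicity classifier, neither of which is reproduced here, and the threshold is an illustrative assumption.

```python
# Simplified sketch of the proof-of-concept relabeling step. The two helper
# functions are placeholders, not the study's actual models.

def translate_to_wae(text: str) -> str:
    """Placeholder for the AAE -> 'white-aligned English' translation (done with GPT-3 in the study)."""
    return text  # identity stub for illustration

def score_toxicity(text: str) -> float:
    """Placeholder for a toxicity classifier returning a probability in [0, 1]."""
    return 0.2  # constant stub for illustration

def relabel(examples, threshold=0.5):
    """Flip 'toxic' labels to non-toxic when the translated text scores below the threshold."""
    relabeled = []
    for text, label in examples:
        if label == 1 and score_toxicity(translate_to_wae(text)) < threshold:
            label = 0  # the translated version reads as non-toxic, so relabel
        relabeled.append((text, label))
    return relabeled

print(relabel([("example sentence flagged as toxic", 1)]))
```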


“Overall, our findings indicate that debiasing a model already trained on biased toxic language data can be challenging,” wrote the researchers, who caution against deploying their proof-of-concept approach because of its limitations and ethical implications. “Translating” the language a Black person might use into the language a white person might use (1) robs the original language of its richness and (2) makes potentially racist assumptions about both parties. Moreover, the researchers note that GPT-3 likely hasn’t been exposed to many African American English varieties during training, making it ill-suited for this purpose.

“Our findings suggest that instead of solely relying on development of automatic debiasing for existing, imperfect datasets, future work [should] focus primarily on the quality of the underlying data for hate speech detection, such as accounting for speaker identity and dialect,” the researchers wrote. “Indeed, such efforts could act as an important step towards making systems less discriminatory, and hence safe and usable.”

By VentureBeat
