Twitter vows to fix biased image cropping issue

Image: Brett Jordan

Twitter has pledged that it will continually test its algorithms for bias and give users more choice in how images appear on its platform.

“While our analyses to date haven’t shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm,” Twitter CTO Parag Agrawal and CDO Dantley Davis wrote in a blog post.

“We should’ve done a better job of anticipating this possibility when we were first designing and building this product.

“We are currently conducting additional analysis to add further rigor to our testing, are committed to sharing our findings, and are exploring ways to open-source our analysis so that others can help keep us accountable.”

The pair added that Twitter would decrease its reliance on machine learning for image cropping by giving users greater visibility and control over how their images appear in a tweet. They did not specify exactly how the company would achieve this, but said Twitter has “started exploring different options to see what will work best”.

“We hope that giving people more choices for image cropping and previewing what they’ll look like in the tweet composer may help reduce the risk of harm,” they said.

These commitments come after users discovered that Twitter’s image preview cropping tool was automatically favouring white faces over Black ones. One user, Colin Madland, who is white, noticed the behaviour after he took to Twitter to highlight racial bias in the video conferencing software Zoom.

When Madland posted an image of himself alongside a Black colleague, whose head had been erased by Zoom’s virtual background feature because its algorithm failed to recognise his face, Twitter automatically cropped the preview to show only Madland.

Other users, such as Tony Arcieri, tested this further. Arcieri posted images containing photos of both US Senator Mitch McConnell and former President Barack Obama, and found that Twitter’s algorithm consistently cropped out the former president.

Last year, Google announced it was working to make its artificial intelligence and machine learning models more transparent to defend against bias, using a technology called TCAV.

Short for Testing with Concept Activation Vectors, TCAV is designed to show which high-level, human-defined concepts a model’s predictions rely on, signals that could, in theory, surface bias.
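At a high level, the TCAV recipe works like this: train a linear classifier to separate a model’s internal activations for examples of a human-defined concept from activations for random examples, take the resulting direction as a Concept Activation Vector, and measure how often nudging activations along that direction increases the score of a target class. The sketch below uses synthetic activations and made-up names purely for illustration; it is a minimal approximation of the idea, not Google’s implementation.

# Minimal, illustrative sketch of the TCAV idea using synthetic data.
# All names, shapes, and numbers here are assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for hidden-layer activations taken from a trained model:
# examples of a human-defined concept vs. unrelated (random) examples.
concept_acts = rng.normal(loc=1.0, size=(100, 64))
random_acts = rng.normal(loc=0.0, size=(100, 64))

# 1. Train a linear classifier to separate concept activations from random ones.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
clf = LogisticRegression(max_iter=1000).fit(X, y)

# 2. The Concept Activation Vector (CAV) is the unit normal of the decision
#    boundary, pointing towards the concept examples.
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 3. For inputs of a target class, compute the directional derivative of the
#    class score along the CAV. In a real model these gradients come from
#    backpropagation; here they are simulated as a fixed direction plus noise.
base_grad = rng.normal(size=64)
grads = base_grad + 0.5 * rng.normal(size=(50, 64))

# 4. The TCAV score is the fraction of inputs whose class score increases when
#    the activations move in the concept direction.
sensitivities = grads @ cav
tcav_score = float(np.mean(sensitivities > 0))
print(f"Toy TCAV score: {tcav_score:.2f}")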
