Impatience, insufficient training data and misguided ambitions are some of the reasons why certain AI-based applications get their decision-making catastrophically wrong.
In 2016, Microsoft unveiled Tay, a chatbot that “tweeted like a teen.” The tool, meant to engage in playful, conversational interactions with humans online, soon got a taste of the racist and misogynistic content that permeates today’s social media platforms, and its tweets quickly began to reflect the data it was being fed. The history of AI contains several similar instances of incorrect decision-making. While we can chuckle at Tay’s near-overnight transformation, the other, darker examples of AI failure deserve neither laughs nor sarcastic applause.
1. Incorrect Recommendations for Cancer Treatment
A few years after its Watson supercomputer had beaten the world’s best “Jeopardy!” players, IBM reconfigured it as a medical tool, claiming it could accurately recommend effective treatments for cancer. Those claims proved unfounded, with multiple reported instances of Watson recommending incorrect or unsafe treatments for cancer patients. As with other examples of AI failure, biased training data, insufficient development time and IBM’s lofty promises, which the system could never live up to, were the main reasons for the misfire.
Examples like these further reinforce the belief that healthcare may not be ready for extensive AI implementation just yet.
2. Discriminatory Recidivism Recommendations
COMPAS is an AI-based decision-support tool currently used by several courts and correctional agencies in the US, mainly to assess the likelihood of a convict reoffending after release. The system has made headlines for the wrong reasons due to multiple instances of discriminatory decision-making against people of African American origin: while Caucasian criminals would readily receive parole and bail, black convicts were denied these benefits because of how the system was developed and trained.
The problem of AI discrimination against black people is a long and well-documented one.
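To make the idea of discriminatory scoring concrete, here is a minimal sketch of the kind of audit researchers perform on such tools. It does not use COMPAS data, code or methodology; the groups, flags and outcomes below are made up purely to show how one can compare false positive rates, that is, how often people who never reoffended were still flagged as high risk, across demographic groups.

```python
# Hypothetical illustration only: toy data, not COMPAS or any real records.
# Shows how an audit can compare false positive rates across groups -- the
# kind of disparity reported for recidivism risk tools.
import pandas as pd

# Each row: a person's group, whether the tool flagged them as "high risk",
# and whether they actually reoffended after release.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged":    [1,   1,   0,   0,   1,   0,   0,   0],
    "reoffended": [0,   1,   0,   0,   1,   0,   0,   0],
})

def false_positive_rate(sub: pd.DataFrame) -> float:
    """Share of people who did NOT reoffend but were still flagged high risk."""
    did_not_reoffend = sub[sub["reoffended"] == 0]
    return did_not_reoffend["flagged"].mean()

# A fair tool should produce roughly equal rates; a large gap means one group
# is being wrongly labelled high risk far more often than the other.
for group, sub in df.groupby("group"):
    print(group, round(false_positive_rate(sub), 2))
```

It was disparities of exactly this kind, rather than raw accuracy, that put risk-scoring tools under scrutiny.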
3. Misogynistic and Preferential Recruitment Decisions
Amazon is known for its heavy reliance on automation tools across its eCommerce operations. In 2014, the organization began building an AI-based recruitment tool to assess applicants’ resumes and recommend the “best” candidates to its hiring team. Ideally, the system was supposed to take “100 resumes and spit out the top 5 for recruitment.” Before long, it was found that the tool preferred male candidates over female ones, reportedly downgrading resumes that so much as mentioned the word “women’s”, an unfortunate reflection of the skewed sex ratio of the IT industry baked into the historical hiring data it was trained on.
Amazon discontinued the project a few years later because of its potential impact on the firm’s reputation.
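The mechanism behind that skew is easy to reproduce in miniature. The sketch below is a hypothetical toy example, not Amazon’s tool or data: a simple classifier trained on a handful of invented resumes with deliberately skewed hiring outcomes ends up assigning negative weight to words that merely correlate with gender.

```python
# Hypothetical illustration only: invented resumes and outcomes, not Amazon's
# system. It demonstrates the general mechanism: a model trained on skewed
# historical hiring decisions learns to penalise gender-correlated words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" data: past hires (label 1) skew heavily towards one group.
resumes = [
    "captain of men's rugby team, python developer",
    "men's chess club, java engineer",
    "software engineer, men's soccer league",
    "captain of women's chess club, python developer",
    "women's coding society lead, java engineer",
    "software engineer, women's debate team",
]
hired = [1, 1, 1, 0, 0, 1]  # deliberately skewed outcomes

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: words that merely correlate with gender pick up
# weight even though they say nothing about ability to do the job.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for term in ("men", "women", "python", "engineer"):
    print(term, round(weights[term], 2))
```

The model never “decides” to discriminate; it simply reproduces whatever patterns, fair or not, exist in the labels it was trained on.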
There are certain commonalities across these AI decision-making failures: rushed training and development, a lack of inclusivity during algorithm design, and a few others. Too often, companies forget that AI models need time and patient training to “learn” their tasks before they can perform them with minimal mistakes.