Enterprises must navigate challenges to ensure responsible and ethical deployment of AI solutions, says GlobalData

Responsible AI refers to the ideal that AI projects, whether based on predictive AI or generative AI, are deployed in a manner that safeguards privacy, causes no harm, is as transparent as possible, is free from bias, and is fair to all who are affected by them. The recent lawsuit filed by the New York Times against Microsoft and OpenAI for copyright infringement highlights the challenges our society faces in implementing AI in a responsible manner.

Rena Bhattacharyya, Chief Analyst of Enterprise Technology and Services at GlobalData, comments: “Responsible AI has once again been catapulted into the headlines due to the emergence of generative AI. The ease with which consumers can access OpenAI’s ChatGPT has made the concerns posed by the new technology, such as hallucinations and data privacy, readily apparent and easily comprehensible to even casual users.”

GlobalData’s latest reports, “Generative AI Watch: Lessons Learned for Implementing Responsible AI (Part 1)” and “Generative AI Watch: Lessons Learned for Implementing Responsible AI (Part 2),” found that, in addition to concerns related to copyright protections, enterprises must contend with issues related to explainability, bias, ethics, hallucinations, toxicity and poisoning, and data privacy and leakage when implementing Responsible AI strategies.

Bhattacharyya concludes: “Challenges related to Responsible AI have existed for years, but with the launch of generative AI they have grown in number and become more pressing. Organizations deploying AI must ensure that they are using the technology in a way that is responsible and ethical; otherwise, they risk significant damage to their brand reputation, if not legal and financial repercussions. It is a highly ambitious goal – and getting there is a daunting task.”
