Racial bias in artificial intelligence: Testing Google, Meta, ChatGPT and Microsoft chatbots

Google’s public apology after its Gemini artificial intelligence (AI) tool produced historically inaccurate images and refused to generate pictures of White people has prompted questions about potential racial bias in other big tech chatbots.

Gemini, formerly known as Google Bard, is one of many multimodal large language models (LLMs) currently available to the public. The human-like responses these LLMs offer can vary from user to user. Based on contextual information, the language and tone of the prompt, and the training data used to generate the AI’s responses, each answer can be different even…
