Apple restricts employee use of ChatGPT. Here’s why

ChatGPT on a phone (Image: Getty Images/NurPhoto)

Generative AI models continually improve by training on user interactions, which means even confidential information included in your prompts could be used to further train the model.

For that reason, data privacy is one of the biggest challenges surrounding generative AI, and ChatGPT in particular.

Fears over data leaks have led companies such as Verizon, JPMorgan Chase, and Amazon to restrict employee use of ChatGPT. Now Apple has joined the list.

According to documents reviewed by The Wall Street Journal, Apple has restricted the use of ChatGPT and other external AI tools, such as Microsoft-owned GitHub Copilot, for some employees.

The concern stems from the potential for private information to be unintentionally exposed when using these models, something that has happened before.

The most recent example is ChatGPT's March 20 outage, which allowed some users to see titles from other users' chat histories. The incident prompted Italy to temporarily ban ChatGPT.

OpenAI has taken steps to address data concerns before. In late April, it released a feature that lets users turn off their chat history, giving them more control over their own data by letting them choose whether their chats can be used to train OpenAI's models.
