HomeTech PlusTECH & OTHER NEWSChatGPT can leak training data, violate privacy, says Google's DeepMind

ChatGPT can leak training data, violate privacy, says Google’s DeepMind

chatgpt-extract-fig1poem.png

By repeating a single word such as “poem” or “company” or “make”, the authors were able to prompt ChatGPT to reveal parts of its training data. Redacted items are personally identifiable information.

Google DeepMind

Scientists of artificial intelligence (AI) are increasingly finding ways to break the security of generative AI programs, such as ChatGPT, especially the process of “alignment”, in which the programs are made to stay within guardrails, acting the part of a helpful assistant without emitting objectionable output. 

One group of University of California scholars recently broke alignment by subjecting the generative programs to a barrage of objectionable question-answer pairs, as ZDNET reported

Also: Five ways to use AI responsibly

Now, researchers at Google’s DeepMind unit have found an even simpler way to break the alignment of OpenAI’s ChatGPT. By typing a command at the prompt and asking ChatGPT to repeat a word, such as “poem” endlessly, the researchers found they could force the program to spit out whole passages of literature that contained its training data, even though that kind of leakage is not supposed to happen with aligned programs. 

The program could also be manipulated to reproduce individuals’ names, phone numbers, and addresses, which is a violation of privacy with potentially serious consequences. 

Also: The best AI chatbots: ChatGPT and other noteworthy alternatives

The researchers call this phenomenon “extractable memorization”, which is an attack that forces a program to divulge the things it has stored in memory. 

“We develop a new divergence attack that causes the model to diverge from its chatbot-style generations, and emit training data at a rate 150× higher than when behaving properly,” writes lead author Milad Nasr and colleagues in the formal research paper, “Scalable Extraction of Training Data from (Production) Language Models”, which was posted on the arXiv pre-print server. There is also a more accessible blog post they have put together.  

The crux of their attack on generative AI is to make ChatGPT diverge from its programmed alignment and revert to a simpler way of operating. 

Generative AI programs, such as ChatGPT, are built by data scientists through a process called training, where the program in its initial, rather unformed state, is subjected to billions of bytes of text, some of it from public internet sources, such as Wikipedia, and some from published books.

The fundamental function of training is to make the program mirror anything that’s given to it, an act of compressing the text and then decompressing it. In theory, a program, once trained, could regurgitate the training data if just a small snippet of text from Wikipedia is submitted and prompts the mirroring response. 

Also: Today’s AI boom will amplify social problems if we don’t act now

But ChatGPT, and other programs that are aligned, receive an extra layer of training. They are tuned so that they will not simply spit out text, but will instead respond with output that’s supposed to be helpful, such as answering a question or helping to develop a book report. That helpful assistant persona, created by alignment, masks the underlying mirroring function.

“Most users do not typically interact with base models,” the researchers write. “Instead, they interact with language models that have been aligned to behave ‘better’ according to human preferences.”

To force ChatGPT to diverge from its helpful self, Nasr hit upon the strategy of asking the program to repeat certain words endlessly. “Initially, [ChatGPT] repeats the word ‘poem’ several hundred times, but eventually it diverges.” The program starts to drift into various nonsensical text snippets. “But, we show that a small fraction of generations diverge to memorizing: some generations are copied directly from the pre-training data!”

deepmind-2023-extracted-memorization-example-part-1

ChatGPT at some point stops repeating the same words and drifts into nonsense, and begins to reveal snippets of training data.

Google DeepMind

deepmind-2023-extracted-memorization-example-part-2.png

Eventually, the nonsense begins to reveal whole sections of training data (the sections highlighted in red).

Google DeepMind

Of course, the team had to have a way to figure out that the output they are seeing is training data. And so they compiled a massive data set, called AUXDataSet, which is almost 10 terabytes of training data. It is a compilation of four different training data sets that have been used by the biggest generative AI programs: The Pile, Refined Web, RedPajama, and Dolma. The researchers made this compilation searchable with an efficient indexing mechanism, so that they could then compare the output of ChatGPT against the training data to look for matches.

They then ran the experiment — repeating a word endlessly — thousands of times, and searched the output against the AUXDataSet thousands of times, as a way to “scale” their attack. 

“The longest extracted string is over 4,000 characters,” say the researchers about their recovered data. Several hundred memorized parts of training data run to over 1,000 characters. 

“In prompts that contain the word ‘book’ or ‘poem’, we obtain verbatim paragraphs from novels and complete verbatim copies of poems, e.g., The Raven,” they relate. “We recover various texts with NSFW [not safe for work] content, in particular when we prompt the model to repeat a NSFW word.”

They also found “personally identifiable information of dozens of individuals.” Out of 15,000 attempted attacks, about 17% contained “memorized personally identifiable information”, such as phone numbers.

Also: AI and advanced applications are straining current technology infrastructures

The authors seek to quantify just how much training data can leak. They found large amounts of data, but the search is limited by the fact that it costs money to keep running an experiment that could go on and on.

Through repeated attacks, they’ve found 10,000 instances of “memorized” content from the data sets that is being regurgitated. They hypothesize there’s much more to be found if the attacks were to continue. The experiment of comparing ChatGPT’s output to the AUXDataSet, they write, was run on a single machine in Google Cloud using an Intel Sapphire Rapids Xeon processor with 1.4 terabytes of DRAM. It took weeks to conduct. But access to more powerful computers could let them test ChatGPT more extensively and find even more results.

“With our limited budget of $200 USD, we extracted over 10,000 unique examples,” write Nasr and team. “However, an adversary who spends more money to query the ChatGPT API could likely extract far more data.” 

They manually checked almost 500 instances of ChatGPT output in a Google search and found about twice as many instances of memorized data from the web, suggesting there’s even more memorized data in ChatGPT than can be captured in the AUXDataSet, despite the latter’s size.

Also: Leadership alert: The dust will never settle and generative AI can help

Interestingly, some words work better when repeated than others. The word “poem” is actually one of the relatively less effective. The word “company” is the most effective, as the researchers relate in a graphic showing the relative power of the different words (some words are just letters):

deepmind-2023-repeated-tokens-that-extract-memorized-content

Google DeepMind

As for why ChatGPT reveals memorized text, the authors aren’t sure. They hypothesize that ChatGPT is trained on a greater number of “epochs” than other generative AI programs, meaning the tool passes through the same training data sets a greater number of times. “Past work has shown that this can increase memorization substantially,” they write. 

Asking the program to repeat multiple words doesn’t work as an attack, they relate — ChatGPT will usually refuse to continue. The researchers don’t know why only single-word prompts work: “While we do not have an explanation for why this is true, the effect is significant and repeatable.”

The authors disclosed their findings to OpenAI on August 30, and it appears OpenAI might have taken steps to counter the attack. When ZDNET tested the attack by asking ChatGPT to repeat the word “poem”, the program responded by repeating the word about 250 times, and then stopped, and issued a message saying, “this content may violate our content policy or terms of use.”

chatgpt-refuses-to-repeat-2023.png

Screenshot by ZDNET

One takeaway from this research is that the strategy of alignment is “promising” as a general area to explore. However, “it is becoming clear that it is insufficient to entirely resolve security, privacy, and misuse risks in the worst case.”

Also: AI ethics toolkit updated to include more assessment components

Although the approach that the researchers used with ChatGPT doesn’t seem to generalize to other bots of the same ilk, Nasr and team have a larger moral to their story for those developing generative AI: “As we have repeatedly said, models can have the ability to do something bad (e.g., memorize data) but not reveal that ability to you unless you know how to ask.”

Source Link

Technology For You
Technology For Youhttps://www.technologyforyou.org
Technology For You - One of the Leading Online TECHNOLOGY NEWS Media providing the Latest & Real-time news on Technology, Cyber Security, Smartphones/Gadgets, Apps, Startups, Careers, Tech Skills, Web Updates, Tech Industry News, Product Reviews and TechKnowledge...etc. Technology For You has always brought technology to the doorstep of the Industry through its exclusive content, updates, and expertise from industry leaders through its Online Tech News Website. Technology For You Provides Advertisers with a strong Digital Platform to reach lakhs of people in India as well as abroad.

LEAVE A REPLY

Please enter your comment!
Please enter your name here

spot_img

CYBER SECURITY NEWS

TECH NEWS

TOP NEWS