ChatGPT-assisted bots are spreading on social media

For many users, scrolling through social media feeds and notifications can feel like wading through a cesspool of spam. A new study identified 1,140 AI-assisted bot accounts spreading cryptocurrency and blockchain misinformation on X (formerly known as Twitter).

But bot accounts posting this type of content can be hard to spot, as the researchers from Indiana University found. Because the accounts used ChatGPT to generate their posts, they were difficult to distinguish from genuine accounts, which makes the scheme all the more dangerous for potential victims.

The AI-powered bot accounts had profiles that resembled those of real people, complete with profile photos and bios describing an interest in crypto and blockchain. They made regular AI-generated posts, passed off stolen images as their own, and replied to and retweeted other accounts.

The researchers discovered that the 1,140 Twitter bot accounts belonged to the same malicious social botnet, which they referred to as “fox8.” A botnet is a network of connected devices (or, in this case, accounts) that are centrally controlled by cybercriminals.

Generative AI bots have become increasingly good at mimicking human behavior, which means that traditional and even state-of-the-art bot-detection tools, like Botometer, are no longer sufficient. In the study, these tools struggled to distinguish bot-generated content from human-generated content; the one exception was OpenAI’s own AI classifier, which was able to identify some of the bot tweets.
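
For context on how such a tool is used in practice, here is a minimal sketch of querying Botometer through its Python client (the botometer package on PyPI). The credentials and the handle checked are placeholders, and the exact shape of the response depends on your API access and client version, so the sketch prints the result defensively.

```python
# Minimal sketch: querying Botometer, the bot-detection tool named in
# the study, via its Python client (pip install botometer).
# All credentials below are placeholders; response field names may
# vary by API version, so we fall back to printing the raw result.
import botometer

twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",        # placeholder
    "consumer_secret": "YOUR_CONSUMER_SECRET",  # placeholder
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="YOUR_RAPIDAPI_KEY",  # placeholder
    **twitter_app_auth,
)

# Check one account; Botometer returns per-category bot scores.
result = bom.check_account("@example_handle")  # hypothetical handle
print(result.get("display_scores", result))
```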

How can you spot bot accounts?

The bot accounts on Twitter exhibited similar behavioral patterns, like following each other, using the same links and hashtags, posting similar content, and even engaging with each other. 
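
One simple way to surface that kind of coordination is to measure how much the links and hashtags posted by two accounts overlap. Here is a minimal, self-contained sketch using Jaccard similarity; the account names, hashtags, and URLs are invented for illustration.

```python
# Minimal sketch: flagging account pairs that post suspiciously
# similar links/hashtags, via Jaccard similarity. Data is made up.
from itertools import combinations

accounts = {
    "acct_a": {"#crypto", "#blockchain", "cryptnomics.org/post1"},
    "acct_b": {"#crypto", "#blockchain", "cryptnomics.org/post1"},
    "acct_c": {"#cats", "#coffee", "example.com/article"},
}

def jaccard(x: set, y: set) -> float:
    """Overlap between two sets of posted links/hashtags."""
    return len(x & y) / len(x | y) if x | y else 0.0

# Pairs with high overlap are candidates for coordinated behavior.
THRESHOLD = 0.8
for (a, sa), (b, sb) in combinations(accounts.items(), 2):
    score = jaccard(sa, sb)
    if score >= THRESHOLD:
        print(f"possible coordination: {a} <-> {b} ({score:.2f})")
```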

Also: New ‘BeFake’ social media app encourages users to transform their photos with AI

Researchers combed through the tweets posted by the AI bot accounts and found 1,205 self-revealing tweets.

Out of this total, 81% had the same apologetic phrase: 

“I’m sorry, but I cannot comply with this request as it violates OpenAI’s Content Policy on generating harmful or inappropriate content. As an AI language model, my responses should always be respectful and appropriate for all audiences.”

The use of this phrase suggests that the bots are instructed to generate content that violates OpenAI’s policies for ChatGPT, and that they post the model’s output automatically, refusals included, which is how the telltale apology ends up in their timelines.

The remaining 19% used some variation of the phrase “As an AI language model,” with 12% specifically saying, “As an AI language model, I cannot browse Twitter or access specific tweets to provide replies.”
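
Those canned refusals are easy to search for programmatically. Here is a minimal sketch that scans a batch of tweet texts for the self-revealing phrases reported in the study; the example tweets are invented for illustration.

```python
# Minimal sketch: flagging tweets containing the self-revealing
# ChatGPT refusal phrases the researchers searched for.
# The example tweets below are invented.
SELF_REVEALING_PHRASES = (
    "as an ai language model",
    "i cannot comply with this request as it violates openai",
)

tweets = [
    "Big news for $BTC holders today!",
    "As an AI language model, I cannot browse Twitter or access "
    "specific tweets to provide replies.",
]

def is_self_revealing(text: str) -> bool:
    """True if the tweet contains a known ChatGPT refusal phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in SELF_REVEALING_PHRASES)

flagged = [t for t in tweets if is_self_revealing(t)]
print(f"{len(flagged)} of {len(tweets)} tweets look bot-generated")
```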

Another clue was that 3% of the tweets posted by these bots linked to one of three websites: cryptnomics.org, fox8.news, and globaleconomics.news.

These sites look like normal news outlets but carry notable red flags: all three were registered around the same time in February 2023, serve popups urging users to install suspicious software, appear to use the same WordPress theme, and have domains that resolve to the same IP address.
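
The shared-infrastructure check is straightforward to reproduce. Here is a minimal sketch that resolves each of the three domains named above and groups them by IP address, using only Python’s standard library; the output will reflect whatever the domains point to when you run it.

```python
# Minimal sketch: grouping the flagged domains by the IP address
# they resolve to. Standard library only; results depend on the
# current DNS records at the time you run this.
import socket
from collections import defaultdict

domains = ["cryptnomics.org", "fox8.news", "globaleconomics.news"]

by_ip = defaultdict(list)
for domain in domains:
    try:
        by_ip[socket.gethostbyname(domain)].append(domain)
    except socket.gaierror:
        by_ip["unresolved"].append(domain)

for ip, hosts in by_ip.items():
    # Multiple domains on one IP suggests shared infrastructure.
    print(f"{ip}: {', '.join(hosts)}")
```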

Malicious bot accounts can self-propagate on social media by posting links to malware or infected content, exploiting and infecting a user’s contacts, stealing session cookies from users’ browsers, and automating follow requests.
