As technology gets smarter with developments such as generative AI, so do cybersecurity attacks. Google's new cybersecurity forecast reveals that the rise of AI brings new threats you should be aware of.
On Wednesday, Google launched its Google Cloud Cybersecurity Forecast 2024, a report put together in collaboration with numerous Google Cloud security teams that takes a deep dive into the cyber threat landscape for the upcoming year.
Also: ChatGPT down for you yesterday? OpenAI says DDoS attack was to blame
The report found that generative AI and large language models (LLMs) will be used in various cyber attacks, such as phishing, SMS scams, and other social engineering operations, to make content and material, such as voice and video, appear more legitimate.
For example, dead giveaways of phishing attacks, such as misspellings, grammar errors, and a lack of cultural context, will be more challenging to spot when generative AI is employed, since it does a great job of mimicking natural language.
In other instances, attackers can feed an LLM legitimate content and have it generate a modified version that suits the attacker's goals but keeps the same style as the original input.
Also: Australia to investigate Optus outage that impacted millions
The report also predicts the continued development of LLMs and other generative AI tools offered as paid services, which help attackers deploy their campaigns more efficiently and with less effort.
However, malicious AI or LLMs won't even be strictly necessary. Using generative AI to create content, such as drafting an invoice reminder, isn't malicious in itself, but attackers can exploit that same capability to target victims for their own ends.
For instance, ZDNET has previously covered how scammers are using AI to impersonate the voice of a family member or friend in need in order to swindle money out of their targets.
Another potential generative AI threat involves information operations. With just a few prompts, attackers can use generative AI models to create fake news, fake phone calls, and deepfake photos and videos.
Also: What are passkeys? Experience the life-changing magic of going passwordless
According to the report, these operations could enter the mainstream news cycle. The scalability of such campaigns could erode public trust in news and online information, to the point where people become more skeptical of, or stop trusting, the news they consume.
“This could make it increasingly difficult for businesses and governments to engage with their audiences in the near future,” says the report.
Although attackers are using AI to make their attacks stronger, cyber defenders can also leverage the technology to counter with more advanced defenses.
Also: 3 ways Microsoft’s new Secure Future Initiative aims to tackle growing cyber threats
“AI is already providing a tremendous advantage for our cyber defenders, enabling them to improve capabilities, reduce toil, and better protect against threats,” said Phil Venables, CISO of Google Cloud. “We expect these capabilities and benefits to surge in 2024 as the defenders own the technology and thus direct its development with specific use cases in mind.”
Defenders' use cases for generative AI include synthesizing large amounts of data, producing actionable detections, and responding to threats more quickly.
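To make that idea concrete, here is a minimal, hypothetical Python sketch of the "synthesize data into actionable detections" workflow: it hands a few example alerts to an LLM and asks for a prioritized triage summary. The OpenAI client is used purely as a stand-in for whichever model a security team actually runs, and the model name, prompt, and alert format are illustrative assumptions rather than anything prescribed in Google's report.

# Hypothetical sketch only: model choice, prompt, and alert format are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Toy alert lines standing in for SIEM/EDR output.
alerts = [
    "2024-01-10 03:12 UTC  47 failed SSH logins from 203.0.113.9 on bastion-1",
    "2024-01-10 03:15 UTC  new admin account 'svc_backup' created on bastion-1",
    "2024-01-10 03:20 UTC  2.3 GB outbound transfer to unknown host 198.51.100.7",
]

prompt = (
    "Summarize the following security alerts, group related events, "
    "and recommend the single most urgent action:\n\n" + "\n".join(alerts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable LLM could be used
    messages=[
        {"role": "system", "content": "You are a concise security triage assistant."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)

In practice, a team would feed the model's summary into its existing ticketing or SOAR workflow rather than acting on it blindly; the point of the sketch is simply how raw alert volume can be condensed into something an analyst can act on faster.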