Generative AI: The New Frontier in Cybersecurity
A new level of sophistication is emerging in cyberattacks and defense using large language models
February 7, 2024
A clerk at the Hong Kong branch of a multinational company recently received a message from the CFO inviting him to a video call to discuss a confidential transaction. The clerk was at first suspicious but his concerns dissipated when he saw several other employees from the finance department including the CFO in the video meeting.
The clerk was told to wire a total of 200 million Hong Kong dollars ($25.6 million) to five bank accounts, which he did. Unfortunately, it was a scam. Everyone in the meeting except for the clerk was a deepfake, according to Hong Kong police as reported by public broadcaster RTHK.
"I believe the fraudster downloaded videos in advance and then used artificial intelligence to add fake voices to use in the video conference,” said acting senior superintendent Baron Chan to the press
Generative AI is bringing cyber attacks to a new level of sophistication and reach, enabling even low-skilled and less-funded criminals to commit such acts.
“We expect to see generative AI and LLMs being leveraged by hackers to personalize and slowly scale their campaigns,” said Phil Venables, who is the CISO of Google Cloud. “New AI capabilities will enable threat actors that once were limited by reduced resources and capabilities to further scale their campaigns.”
LLM-Enabled Threats
Large language models (LLMs) offer various ways for hackers to carry out their nefarious acts. One is in a traditional phishing campaign.
“Gone are the days of ignoring requests from the Nigerian prince looking for a banking partner,” said Christopher Cain, who is the threat research manager at OpenText Cybersecurity. “An LLM can help clean up the language or remove obvious errors that non-native speakers would make. The technology can also be used to make the content of the messages unique and more customized to relevant issues.”
Another use of an LLM is to create code for the attack. This could allow for making a realistic landing page, without having to know how to use a programming or scripting language. For example, a hacker can take a screenshot of a website of a bank and the LLM will create the code for it.
What this means is that there will need to be new approaches for dealing with social engineering attacks. “Cyber security education and awareness is now more important than ever,” said Cain. “Proper security practices and procedures along with audits and safeguards are part of addressing the issue. Internally, it is placing a phone call to confirm any emails requesting a login or financial data are, in fact, actual requests.”
Consider that LLMs themselves are becoming targets for hackers as well.
“One of greatest concerns with LLMs, especially publicly available and open source LLMs, is securing against prompt injection vulnerabilities and attacks, like jailbreaks,” said Nicole Carignan, who is the vice president of strategic cyber AI at Darktrace. “A threat actor can take control of the LLM and force it to produce malicious outputs because of the implicit confusion between the control and data planes in LLMs.”
Besides prompt injection, there are other types of attacks on LLMs.
“There is stealing a model,” said Ashvin Kamaraju, who is the global vice president of engineering and cloud at Thales. “Once a cyber criminal has access to a model’s makeup, they are able to study its abilities and actively attempt to exploit existing vulnerabilities all within a testing environment. Then there is data poisoning. This attack targets public datasets used to train deep-learning models. They can then manipulate it or corrupt it with spurious data.”
In light of this, it is critical for there to be strong security across the AI development supply chain. This means having monitoring systems in place, such as LLMOps, to help detect model drift and anomalies. There would also need to be a clear ingestion process for the data, which would include data versioning.
Generative AI Defense
Generative AI technology can be helpful in fending off cyber security attacks as well, since it is adept at detecting patterns in huge, unstructured datasets.
“Generative AI can help to expedite threat analysis, provide tighter and smarter access controls and help with troubleshooting,” said Eyal Manor, who is the vice president of product at Check Point Software Technologies.
The company recently launched Check Point Infinity AI Copilot, which is powered by generative AI. Using a chat interface, it can save IT teams up to 90% of the time needed for administrative tasks, according to the company.
To be sure, there are numerous cyber security copilots on the market. Some of them are from companies such as Microsoft, Google and Crowdstrike. While such technologies are still in the early phases, they show promise.
“As most companies are testing the business viability of AI, they too will see the benefits of Cyber-AI processes,” said Jim Guinn, who is a partner and principal of cybersecurity and critical infrastructure at EY. “My position is Cyber-AI will have a slower adoption rate than the business processes AI can help automate, but it will come.”
This article first appeared on IoT World Today's sister site, AI Business.
Read more about:
AsiaAbout the Author
You May Also Like