A new cybersecurity arms race has broken out as the world sees an explosion in the launch of generative artificial intelligence (AI) tools that can create original content from text prompts.
Readily available services such as ChatGPT and Microsoft’s Bing Image Creator have enhanced the ability of scammers to create hoax news reports, images, sounds and videos. These allow cybercriminals to steal identities to commit financial fraud or create more convincing “phishing” campaigns aimed at getting users to click on links that expose their computers or company networks to malware, ransomware attacks and data breaches.
At a conference hosted by global cybersecurity provider Kaspersky in Almaty, Kazakhstan, this week, the firm’s global research and analysis team revealed that South Africa had seen a 14% increase in users exposed to phishing attacks in the first three months of 2023, compared to the last quarter of 2022.
Nigeria had seen a 17% increase and Egypt 53%, indicating the potential scope for escalation of cybercrime across Africa.
While these attacks were not a result of generative AI, they indicated the scope of the threat.
Vladislav Tushkanov, lead data scientist at Kaspersky, told the conference that three separate categories of generative AI were already being used in scams:
• Diffusion networks, a type of AI that can “generate any kind of image from a text description”, by learning patterns from existing examples, and then using those patterns to generate similar material.
• Deepfakes, which “insert people’s faces into videos and animate still portraits”. DeepFaceLab is a leader in the field.
• Large language models, which “generate any kind of text and solve text-based problems”. ChatGPT is the best known.
Said Tushkanov: “These technologies have a bright side but also a dark side. Each technology can bring value for business, but also introduce vulnerabilities and enable cybercriminals.”
He gave examples of a British energy company being breached using voice deepfakes to impersonate employees and the US government issuing an official warning that deepfakes were being used to apply for remote jobs, well before the current generative AI explosion began.
More recently, in September 2022, deepfake videos were made of Elon Musk endorsing a cryptocurrency scam. It was so effective he had to announce on Twitter that it was “defs not me”.
“It turned out, through our research, that deepfakes were being sold on the 'darknet' (a version of the web accessible only through specialised browsers) for many use cases, including creating advertisements for crypto scams and harassment on social media,” said Tushkanov.
It is some consolation to potential victims that the technology is still rudimentary and fakes can be identified relatively simply.
“This technology is still very basic,” he told Business Times. “Right now it only generates images over which you have almost no control. But they are getting better and better because we have much more computational resources, we have faster graphics processors and better hardware, so we can generate better images and videos.
“You will be able to generate a picture of any person in any time in any environment. It can be captivating, and it can draw attention, but then it can also be used by cyber criminals.”
The warning echoed that of Microsoft chief economist Michael Schwarz, who told a World Economic Forum panel in Geneva on Wednesday that AI could help make humans more productive and revolutionise the way most businesses operate, but that “guardrails” had to be erected.
“I am confident AI will be used by bad actors, and yes it will cause real damage,” he said. “It can do a lot of damage in the hands of spammers with elections and so on.”
Craig Rosewarne, MD of South African cybersecurity consultancy Wolfpack Information Risk, told Business Times: “We’ve seen that criminals operating in South Africa are still not as tech-savvy. Generally, we see it coming more from West African regions, and some attacks coming in from Eastern Europe. We are starting to see more cybercrime-as-a-service being used, where you’ve got the whole underground economy operating together, and platforms for launching ransomware or denial-of-service attacks get rented out.
“But watch this space. We're going to start seeing a lot more of it happening. We saw IBM this week announcing a layoff of about 30% of its non-client-facing workforce. So as companies are starting to adopt and use AI, obviously cyber criminals are going to start to use it more and more.
“ChatGPT still has safeguards built into it, so if you ask it to go find security vulnerabilities on the Wolfpack website, it will say that it cannot do it. But of course, there are ways of posing questions, for example asking how one would go about doing it.
“If we compare it to the iPhone, it's at the iPhone 1 stage. Of course, we’re now sitting on iPhone 15. With this, it's going to just multiply dramatically, so big things are coming.”
That means there is still time for the business world to prepare.
Tushkanov said the answer, for now, was not technology, but awareness: “Understanding how AI changes the world and educating the public about AI is of utmost importance.”