While South Africa is still finalising a national AI policy framework, legal experts warn that businesses are already exposed to real risks in how they use AI tools — from potential loss of intellectual property (IP) ownership to the leakage of confidential data and copyright disputes.
On Wednesday the cabinet approved the draft national AI policy for public comment, a precursor to national AI legislation. The policy aims to strengthen the government’s ability to regulate and adopt AI responsibly, while encouraging local innovation, supporting job creation and improving access to AI skills, government said in a statement.
“Like most countries, South Africa is trying to understand AI’s impact before formulating policy and regulation,” said Darren Olivier, trademark attorney and partner at Adams & Adams.
“But that doesn’t mean AI is unregulated. Existing laws — including those governing copyright, patents and data protection — already apply to how AI is used and what it produces.”
For decades, businesses have relied on a stable IP framework: trademarks to protect brands, copyright for creative work and patents for inventions. Although these laws were not written with AI in mind, they remain applicable, Olivier said.
For example, an employee who inputs confidential information into an AI system may inadvertently compromise trade secret protection. AI-generated content can also infringe copyright. Olivier pointed to a case in the US in which authors sued AI developers over the alleged unauthorised use of copyright material to train their systems.
One of the biggest risks, he said, is the lack of transparency on the data used to train many AI tools. If training materials include copyrighted books, code or designs that are used without permission, the outputs may carry that infringement forward — exposing end users to legal risk.
“We do need regulation because AI is so powerful — and, in many instances, worryingly so. It’s like having a Lamborghini in your garage without a driver’s licence,” Olivier said.
Some organisations are already responding by introducing internal AI governance frameworks, treating AI not merely as an IT tool but as a business risk requiring active oversight.
“It’s not just about risk mitigation,” said Olivier. “It’s also about value creation and building trust.”
Companies with robust AI governance are likely to enjoy stronger relationships with customers, regulators and investors. “No-one has all the answers yet,” he said. “But there are basic principles and best practices that can — and should — be implemented today.”
As regulators and corporates navigate AI’s complexities, South Africa — and the rest of the continent — has an opportunity to shape its own AI future, Olivier said.
For now, much of Africa remains on the margins of the global AI value chain. Most leading models are developed in the US, Europe and China, often trained on data that does not reflect local languages or contexts.
This provides an opportunity for African innovation, particularly given the continent’s strong culture of entrepreneurship and problem-solving under difficult conditions.
An innovative mindset, combined with comparatively lighter regulation than in some Western markets, could make it easier for African organisations to develop AI tools tailored to local needs — much as the continent leapfrogged traditional banking through mobile technology, said Olivier.
“Africa has innovation in its DNA. We’re entrepreneurial by nature, and that makes the continent a fertile ground for AI development.”