South Africa withdraws national AI policy after at least 6 of 67 academic citations found to be AI-generated hallucinations



TL;DR

South Africa’s Communications Minister Solly Malatsi withdrew the country’s draft national AI policy after News24 discovered that at least 6 of its 67 academic citations were AI-generated hallucinations: fake articles attributed to real journals. The policy had been approved by Cabinet in March and published for public comment. Malatsi called it an “unacceptable lapse” and promised consequence management. The scandal leaves South Africa without an AI governance framework and raises questions about institutional capacity to regulate the technology.

South Africa’s Department of Communications and Digital Technologies spent months drafting a national artificial intelligence policy. It proposed a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, a National AI Safety Institute, and an AI Insurance Superfund. It outlined five pillars of AI governance: skills capacity, responsible governance, ethical and inclusive AI, cultural preservation, and human-centred deployment. It adopted a risk-based approach modelled on the EU AI Act. Cabinet approved the draft on 25 March. The Government Gazette published it on 10 April for public comment.

And then News24, the South African news outlet, checked the bibliography and discovered that at least six of the document’s 67 academic citations did not exist. The journals were real. The articles were not. The authors credited with foundational research on AI governance had never written the papers attributed to them. Editors at the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy independently confirmed to News24 that the cited articles had never been published in their pages.

The most plausible explanation, according to Communications Minister Solly Malatsi, is that the drafters used a generative AI tool and published the output without verifying a single reference. A government policy designed to govern artificial intelligence was undermined by the artificial intelligence it failed to govern.

The withdrawal

Malatsi announced the withdrawal on 27 April, calling the fictitious citations an “unacceptable lapse” that “compromised the integrity and credibility of the draft policy.” He said consequence management would follow for those responsible for drafting and quality assurance. “This failure is not a mere technical issue,” the minister said. The parliamentary portfolio committee chair offered a more concise assessment, suggesting the department “skip using ChatGPT this time” when redrafting. The document will be revised before being reissued for public comment, but no timeline has been given. South Africa is now without a formal AI governance framework at a time when governments worldwide are grappling with how to regulate AI, and the country’s credibility as a serious participant in that conversation has taken a blow that will outlast the policy revision.

The scandal is not simply that fake citations appeared in a government document. It is that they appeared in a government document about artificial intelligence, written by the department responsible for the country’s digital technology strategy, during the exact period when the world’s most consequential AI governance debates are being fought in Brussels, Washington, and Beijing. The EU AI Act, the most ambitious regulatory framework for artificial intelligence, is grappling with delayed standards and an implementation timeline that has been pushed back to 2027 for high-risk systems. The United States has no federal AI legislation and is watching states legislate independently while the White House attempts to preempt their efforts. China has enacted AI regulations but applies them selectively. Into this landscape, South Africa offered a policy that could not survive a bibliography check.

The pattern

South Africa’s hallucinated citations are an extreme case of a problem that is quietly spreading across institutions that use generative AI for research and drafting. A study published in Nature found that 2.6 per cent of academic papers published in 2025 contained at least one potentially hallucinated citation, up from 0.3 per cent in 2024. If that rate holds across the roughly seven million scholarly publications from 2025, more than 110,000 papers contain invalid references. GPTZero, a Canadian detection startup, analysed more than 4,000 research papers accepted at NeurIPS 2025, one of the world’s premier AI conferences, and found over 100 hallucinated citations across at least 53 papers. In a separate multi-model study, only 26.5 per cent of AI-generated bibliographic references were entirely correct. The problem is structural: large language models generate citations through probabilistic token prediction rather than information retrieval. They do not look up papers. They predict what a citation should look like based on the patterns in their training data, and when the prediction is confident enough, they produce a reference that reads as authoritative but points to nothing.
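Catching a hallucinated reference ultimately comes down to the check News24 performed: does the cited article actually appear in the journal it is attributed to? A minimal sketch of that verification step, with a local index of published titles standing in for a real lookup service such as a journal archive or the CrossRef API (the function, threshold, and all bibliographic data below are hypothetical, for illustration only):

```python
import difflib

def verify_citations(references, index, threshold=0.9):
    """Flag references whose titles cannot be matched against a trusted index.

    references: list of (author, title, journal) tuples.
    index: dict mapping journal name -> set of titles actually published there.
    Returns the references that could not be verified.
    """
    unverified = []
    for author, title, journal in references:
        published = index.get(journal, set())
        # Fuzzy matching tolerates minor formatting differences, but a
        # fabricated title scores well below the cutoff against real ones.
        match = difflib.get_close_matches(title, published, n=1, cutoff=threshold)
        if not match:
            unverified.append((author, title, journal))
    return unverified

# Toy index standing in for a real archive; titles are invented.
index = {
    "AI & Society": {"Machine ethics and the governance gap"},
}
refs = [
    ("A. Author", "Machine ethics and the governance gap", "AI & Society"),
    ("B. Author", "Foundations of algorithmic oversight", "AI & Society"),
]
print(verify_citations(refs, index))
# Only the second, fabricated entry is flagged as unverified.
```

The point of the sketch is how mechanical the check is: nothing about it requires AI expertise, only the willingness to look each reference up before publishing.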

The South African case is distinctive not because the technology hallucinated, which is a well-documented and inherent limitation of generative AI, but because the hallucinations were published in an official government policy document that passed through Cabinet approval without anyone verifying the references. The drafting process included civil servants, subject matter consultations, and ministerial review. Dumisani Sondlo, the department’s AI policy lead, had previously described the policy development as “an act of acknowledging that we don’t know enough.” That acknowledgment did not extend to acknowledging that the tool being used to help draft the policy was itself unreliable. The six fake citations that News24 identified are the ones that were caught. Whether additional citations in the document’s 67 references are genuine has not been publicly confirmed. The entire bibliography is now under suspicion, and by extension, so is the analytical foundation on which the policy’s proposals were built.

The implications

The immediate consequence is that South Africa’s AI governance timeline has been reset. The draft policy, which was intended to position the country as a leader in responsible AI adoption on the African continent, will need to be redrafted, reconsulted, and resubmitted. The institutional credibility damage extends beyond the policy itself. If the department responsible for governing AI cannot verify whether the sources in its own policy document are real, the question becomes whether it has the capacity to evaluate the AI systems it proposes to regulate. The policy envisioned a multi-regulator model in which AI governance and human oversight would be embedded within existing supervisory frameworks rather than centralised under a single authority. That model requires each participating regulator to have sufficient technical understanding to assess AI systems in their sector. The hallucination scandal does not inspire confidence that the coordinating department meets that threshold.

The broader lesson is not that governments should avoid using AI in policy development. It is that the failure mode of AI is not dramatic. It does not crash. It does not display an error message. It produces fluent, formatted, confident text that looks exactly like the output of a competent researcher. The fake citations in South Africa’s AI policy were not obviously wrong. They were plausible. They cited real journals. They attributed work to real people. They followed the formatting conventions of academic references. The only way to catch them was to check whether each one actually existed, a task that requires exactly the kind of methodical human verification that AI is supposed to make unnecessary.

Growing public distrust of AI is not irrational. It is a response to a technology that is simultaneously powerful enough to draft a national policy and unreliable enough to fabricate the evidence that policy rests on. South Africa’s embarrassment is singular, but the underlying failure is not: using AI without the capacity to verify its output is happening in universities, law firms, newsrooms, and government departments around the world. South Africa is simply the first government to publish the receipts. The challenges of implementing AI regulation are real, but they begin with a prerequisite that South Africa’s department did not meet: understanding what the technology does before trying to write the rules for it.






Researchers at the University of Washington have developed a new prototype system that could change how people interact with artificial intelligence in daily life. Called VueBuds, the system integrates tiny cameras into standard wireless earbuds, allowing users to ask an AI model questions about the world around them in near real time.

The concept is simple but powerful. A user can look at an object, such as a food package in a foreign language, and ask the AI to translate it. Within about a second, the system responds with an answer through the earbuds, creating a seamless, hands-free interaction.

A Different Approach To AI Wearables

Unlike smart glasses, which have struggled with adoption due to privacy concerns and design limitations, VueBuds takes a more subtle approach. The system uses low-resolution, black-and-white cameras embedded in earbuds to capture still images rather than continuous video.

These images are transmitted via Bluetooth to a connected device, where a small AI model processes them locally. This on-device processing ensures that data does not need to be sent to the cloud, addressing one of the biggest concerns around wearable cameras.

To further enhance privacy, the earbuds include a visible indicator light when recording and allow users to delete captured images instantly.

Engineering Around Power And Performance Limits

One of the biggest challenges the research team faced was power consumption. Cameras require significantly more energy than microphones, making it impractical to use high-resolution sensors like those found in smart glasses.

To solve this, the team used a camera roughly the size of a grain of rice, capturing low-resolution grayscale images. This approach reduces battery usage and allows efficient Bluetooth transmission without compromising responsiveness.

Placement was another key consideration. By angling the cameras slightly outward, the system achieves a field of view between 98 and 108 degrees. While there is a small blind spot for objects held extremely close, researchers found this does not affect typical usage.

The system also combines images from both earbuds into a single frame, improving processing speed. This allows VueBuds to respond in about one second, compared to two seconds when handling images separately.
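The published VueBuds pipeline is not reproduced here, but the core idea of the compositing step, placing both camera views into one frame so the model runs a single inference pass instead of two, can be sketched in a few lines (the frame dimensions are assumed for illustration):

```python
import numpy as np

def combine_frames(left, right):
    """Place two equal-sized grayscale frames side by side so a single
    model inference covers both earbud views (a sketch of the idea,
    not the published VueBuds pipeline)."""
    if left.shape != right.shape:
        raise ValueError("frames must share dimensions")
    return np.hstack((left, right))

# Hypothetical low-resolution grayscale frames from each earbud.
left = np.zeros((240, 320), dtype=np.uint8)
right = np.full((240, 320), 255, dtype=np.uint8)
combined = combine_frames(left, right)
print(combined.shape)  # (240, 640): one frame, one inference pass
```

Halving the number of inference passes is a plausible source of the roughly one-second response time the researchers report, versus two seconds when each image is processed separately.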

Performance Compared To Smart Glasses

In testing, 74 participants compared VueBuds with smart glasses such as Meta’s Ray-Ban models. Despite using lower-resolution images and local processing, VueBuds performed similarly overall.

The study found that participants preferred VueBuds for translation tasks, while smart glasses performed better at counting objects. In separate trials, VueBuds achieved accuracy of around 83–84% for translation and object identification, and up to 93% for identifying book titles and authors.

Why This Matters And What Comes Next

The research highlights a potential shift in how AI-powered wearables are designed. By embedding visual intelligence into a device people already use, the system avoids many of the barriers faced by smart glasses.

However, limitations remain. The current system cannot interpret color, and its capabilities are still in early stages. The team plans to explore adding color sensors and developing specialised AI models for tasks like translation and accessibility support.

The researchers will present their findings at the ACM Conference on Human Factors in Computing Systems (CHI) in Barcelona, offering a glimpse into a future where everyday devices quietly become intelligent assistants.
