We spend much of our time discussing the relevance of artificial intelligence in the workplace, yet it is apparent that not enough is being said about the limitations of AI. Deloitte’s recent AI error not only resulted in a partial refund to the Australian government, but it also made it evident that the technology cannot be trusted without supervision. From the look of things, the hallucinations found in the Deloitte report took the shape of citations to non-existent academic papers and fabricated quotes attributed to a federal judge.
While this degree of error might be overlooked in a high school paper, where no one is affected by the inaccurate data, on a governmental scale, Deloitte’s AI-based errors could be genuinely harmful. The incident is a good reminder that however advanced AI may be, blind reliance on the technology can hurt an organization. AI literacy and human oversight are indispensable for any organization determined to operate alongside AI.

Deloitte’s recent AI blunder in a report for the Australian government reminds us of the importance of AI literacy and human oversight. (Image: Pexels)
Understanding Deloitte’s Botched AI Report for the Australian Government
Let’s first understand the botched AI report from Deloitte. The Australian Department of Employment and Workplace Relations had commissioned an AU$439,000 (approximately USD $290,000) “independent assurance review” from the consulting firm, which was completed and uploaded to the department’s website. Soon after, several sources, including the Australian Financial Review, began to point out errors in the report, finding it littered with references to research papers that did not exist.
A corrected version of the report was later uploaded to the website, but Deloitte’s AI hallucinations made it clear that trusting a report is no longer as simple as trusting its source. The firm subsequently agreed to return the final installment of the payment made to it under the contract.
Deloitte’s AI Investment Ramp-up Despite Errors
Ironically, just as reports of Deloitte’s AI error emerged, the company announced fresh investments in AI technology. It is set to introduce the Anthropic Claude chatbot to nearly 500,000 employees next week. The two companies are also co-creating a certification program to train over 15,000 professionals on Claude, who can then help integrate the AI further across Deloitte’s operations.
The partnership between the two companies is not a new one, and there are plans for the pair to offer combined products and services to clients in the coming months. To support this, Deloitte will establish a Claude Center of Excellence staffed with trained specialists.
Just as we’ve seen from Goldman Sachs and BNY Mellon, Deloitte is also rumored to be exploring AI agent “personas” that would operate within and represent the workforce. These changes indicate a continued trust in AI, despite its limitations.
“Anthropic shares our passion for safety and reliability, along with our assessment that enterprise AI should be both powerful and principled,” Ranjit Bawa, Global Technology and Ecosystems and Alliances Leader at Deloitte Global, said in a post regarding the collaboration.
What the Deloitte Controversy Teaches Us About AI Use
Deloitte’s AI error underscores the importance of caution and regulation around AI use. While the technology is business-altering in its own right, it is still limited in its capabilities. An employee who faked data or botched an important report would not find themselves welcome back to the job the next day, but canceling AI is not an option for most businesses that have invested heavily in it.
There is also the matter of accountability to be addressed. If AI “did the work” and conducted the research, but employees signed off on the report, who is held accountable for the error? There is little that can be done to reprimand AI, but there are ways to make employees more aware of their role in fact-checking the data it generates. This may double the work for the team, but deadlines and deliverables need to be planned with this added verification step in mind.
The ethical aspect is also important to note. Deloitte may have been able to dodge a full scandal with its quiet revision and partial refund, but a smaller business may not be as lucky. Just as employees lose faith in managers who send AI-generated emails, customers and clients are likely to lose trust in a business that doesn’t do its due diligence and verify the data it shares.
Addressing Training and Workforce Planning in the Era of AI
The problem of hallucinations makes AI literacy and close human supervision essential wherever AI is used. Unchecked AI outputs can skew the results of the most well-meaning projects and cause considerable embarrassment for organizations when discovered. Deloitte is making its own attempts to train and equip its workforce with advanced AI tools, but other businesses also need to take AI-focused employee training more seriously.
Does Deloitte’s experience with AI hallucinations mean the technology is no longer going to be used? Quite the opposite: the company continues to ramp up its investments in the tool. If AI is truly here to stay, it is essential to set up regulations and checks surrounding its use rather than let errors from the secretive, unchecked use of this tech take over.
As governments start shifting their attention to the regulation of AI, HR’s strategic edge lies in agile planning and foresight, using every opportunity to improve existing systems of operation.
Subscribe to The HR Digest for more insights on workplace trends, layoffs, and what to expect with the advent of AI.