AI incident sends broader warning on workplace culture

Source: HR Daily (subscription service) - 8.10.2025

A consulting firm's "disastrous" use of generative AI in a government-commissioned report demonstrates a failure of alignment between technology and culture, and highlights a risk many organisations are yet to address, a culture expert says.

The question for today's leaders is not whether their organisations can use AI, but whether their culture makes them "trustworthy custodians of AI", because when it comes to the workplace, culture is "the true operating system", says Culture Plus Consulting founder Felicity Menzies.

A culture that "prizes speed and efficiency above all else will use AI to move faster – sometimes at the expense of quality or ethics", she tells HR Daily.

Conversely, one that is grounded in reflection, integrity, and curiosity will use AI to deepen insight and broaden participation.

"If we want trustworthy AI, we must first build trustworthy cultures that balance ambition with ethics, innovation with introspection, and speed with accountability."

– Felicity Menzies

When AI is deployed without the right cultural foundations – including transparency, accountability, and ethical awareness – trust collapses. But by treating culture as a strategic asset, organisations can "turn AI from a reputational risk into a trusted ally".

The challenge is to build trustworthy cultures that balance "ambition with ethics, innovation with introspection, and speed with accountability", Menzies says. It's why AI implementation is never just a technical project; it's also a cultural one.

Avoiding high-profile incidents

The discovery of AI-generated errors in a government-commissioned report, which will reportedly see Deloitte refund part of its $440k fee, has garnered public attention because fees in the consulting industry are high, and firms rely on being seen as trusted experts, Menzies says.

(The Department of Employment and Workplace Relations has posted a reference to the errors on its website, stating, "Deloitte conducted this independent assurance review and has confirmed some footnotes and references were incorrect", along with corrected versions of the documents, which ironically pertain to the Government's much-criticised and technology-based Robodebt scheme.)

It doesn't help that the professional consulting sector is already under scrutiny following the PwC tax scandal, or that Deloitte professes expertise on AI adoption, Menzies notes.

"I think we can assume that all consulting firms will be using AI to generate material, to organise material, to write reports," she says. But just as someone would oversee a junior employee doing client work, employers need to ensure humans with "expertise and experience and authority" are checking AI's work.

She notes that if a human writing a report decided to "just make stuff up", they'd likely lose their job. "So why would we then have a different tolerance level for the user of the system?"

A contributing factor is that, unlike in Europe, where legislation is compelling employers to adhere to ethical principles when using AI, Australia is still relying on voluntary adherence. Sooner or later Australia is likely to follow Europe's lead, but Menzies sees this incident as "the first of many" and advises employers to take action now.

"I think that needs to be a big education piece," she says, starting at board level, and flowing down through leadership. She suspects there's a lack of skills and expertise in the market here, not so much around the technology, but in terms of ethics and governance to mitigate risk.

It's not a tech issue, Menzies stresses, but rather a leadership, governance and risk-management issue.

Oversight of output

"It is important to say, 'We have used AI in this', but more important is having that human oversight of the AI system and its output," Menzies says.

To mitigate the risk of AI hallucinations and to maintain trust, there needs to be a human in the loop to scrutinise output. That person should be an expert in the content, not the technology, she notes. It's a reason to keep training people – if too many jobs are outsourced to AI, organisations might be left without the expertise required to adequately fact-check its output.

When disaster strikes, Menzies says it's important not only to admit failure, but to restore client confidence by implementing safety nets and guardrails, in much the same way employers do in the work health and safety space.

"If there's a case of sexual harassment or other harm in the workplace, you want the organisation not just to say, 'Oh, we're sorry'. You want the organisation to say, 'We've acknowledged that there's a gap here, and there's a risk, and we're closing the gap and this is how'."

An organisation might have a policy that addresses AI and ethics, but as with those designed to reduce the risk of bullying and harassment, it needs to be more than just a piece of paper.

"This is why it becomes cultural," Menzies says. Leaders need to model compliance, and employees need to understand what it looks like at different levels of the organisation and in different scenarios. In the case of drafting reports, for example, it should be clear what needs to be checked by who and at what stage.

Menzies notes there are levels and types of risk that might warrant different risk assessments and controls. "For a consulting firm whose whole business is the expertise that they're selling, I would assume that they would [have] partner oversight, where [a major report] needs to be signed off and scrutinised."

Five essential principles

To navigate the intersection of AI and culture, organisations need environments that shape everyday choices, not just compliance, Menzies says.

She says five essential "cultural principles" are:

  • transparency – disclose when and how AI is used, and be open about its limitations;
  • accountability – maintain human oversight at every decision point;
  • ethical literacy – equip teams to understand how bias, hallucination, and automation can distort outcomes;
  • inclusion – involve diverse voices in AI design and review to surface unseen risks; and
  • learning orientation – treat missteps as opportunities to refine systems and strengthen governance.

"These principles ensure AI serves the organisation's values – not the other way around," Menzies says.