How to stop AI from ‘hallucinating’ on you
While HR teams often carry the expectation of setting the example for the rest of the organisation, successfully embedding AI literacy and similar changes requires a joint venture driven by executives across the entire business, writes Anshu Arora.
Since the rapid adoption of generative AI tools like ChatGPT and Gemini, the corporate world has been experiencing a productivity boon and a quality chokehold all at once.
On the one hand, employees are freeing up time to focus on more valuable work; on the other, they’re relying on AI output to the detriment of quality. Some 66 per cent of workers aren’t double-checking the work AI produces for them and, as a result, might be overlooking wildly inaccurate or even fully fabricated outputs, known as AI hallucinations.
The common denominator often overlooked here is the lack of understanding and training behind AI itself. Only 24 per cent of employees have engaged in formal or informal AI training, so most of the workforce isn’t ready to effectively use and critically evaluate these powerful tools.
This lack of AI literacy creates a barrier to productivity, innovation, and ethical deployment. If left unaddressed, all the progress our workplaces have made will be meaningless. If businesses can’t trust the results, or in the worst case the work itself, is AI really adding value?
The ‘garbage in, garbage out’ HR crisis
While the HR sector is undoubtedly going through a tech overhaul, including the use of AI, the training provided for these tools remains limited: 44 per cent of Australian HR professionals lack confidence in their AI skills.
What’s more, we are facing a “garbage in, garbage out” crisis, where the speed of AI output simply magnifies the consequences of bad data input. The most immediate danger therefore lies in the quality of the information feeding AI tools. This is why businesses need to treat their internal knowledge base as a critical asset, ensuring it is audited, categorised, and up to date before it is integrated with any AI model.
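As a concrete illustration, a knowledge-base audit can start as simply as flagging documents that haven’t been reviewed in a while. The sketch below is a minimal, hypothetical example in Python; the “policies” folder, the PDF-only file layout, and the 12-month threshold are all assumptions for illustration, not a prescribed process.

```python
from datetime import datetime, timedelta
from pathlib import Path

# Hypothetical example: flag HR policy documents that have not been
# modified in the last 12 months, so they can be reviewed before being
# fed into any AI tool. Folder name and threshold are assumptions.
STALE_AFTER = timedelta(days=365)

def find_stale_documents(folder: str) -> list[Path]:
    """Return documents whose last-modified date exceeds the threshold."""
    now = datetime.now()
    stale = []
    for doc in Path(folder).glob("**/*.pdf"):
        modified = datetime.fromtimestamp(doc.stat().st_mtime)
        if now - modified > STALE_AFTER:
            stale.append(doc)
    return stale

if __name__ == "__main__":
    for doc in find_stale_documents("policies"):
        print(f"Review before AI integration: {doc}")
```

A real audit would also check ownership, categorisation, and whether the content still matches current policy, but a last-reviewed date is an easy first filter.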
To execute this properly, HR departments should not only be across AI but also lead it. Employee reliance on poor-quality training data or outdated internal documents directly leads to unreliable outputs and creates significant liability risks. For example, if an employee uses an AI tool referencing an old, inaccurate HR policy to draft advice for a manager, that incorrect output is no longer just a minor mistake, but a corporate liability potentially affecting the business from top to bottom.
The hidden cost of bad prompts
The hidden cost of bad prompts isn’t just a slow output but a fundamentally unusable or misleading one, which directly translates into lost productivity, flawed decision-making, and significant organisational liability. The risk is amplified by a widespread lack of transparency, with 57 per cent of workers in Australia saying they hide their use of AI from managers. This means prompts entering the system are never vetted against business protocols.
For example, HR deals with sensitive employee data daily. A prompt that doesn’t specify appropriate confidentiality boundaries or data handling protocols can lead to privacy breaches, regulatory violations, and erosion of employee trust. In Australia alone, with the Privacy Act reforms and workplace relations complexities, the margin for error is razor-thin.
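One practical safeguard is to redact obvious personal identifiers before any text reaches an external AI tool. The Python sketch below is an illustration only, not a complete privacy control: the patterns catch email addresses and phone-like numbers, and genuine Privacy Act compliance needs far more than a regular expression.

```python
import re

# Hypothetical example: strip obvious personal identifiers from text
# before it is pasted into an external AI tool. These patterns are
# illustrative only; they miss names, addresses and other identifiers.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.\w{2,}\b")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    text = PHONE.sub("[PHONE REDACTED]", text)
    return text

print(redact("Contact Jane on 0412 345 678 or jane@example.com."))
# -> Contact Jane on [PHONE REDACTED] or [EMAIL REDACTED].
# Note the name "Jane" slips through, which is exactly why regex
# alone is not a confidentiality protocol.
```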
This level of precision doesn’t happen by accident. It requires specific training that HR teams need to mandate across the organisation. This training, which is crucial for accelerating growth and adoption, needs to cover the prompting fundamentals: instructing AI on its role and purpose, defining the end goal, specifying the exact outputs needed, and identifying appropriate reference sources, as the sketch below pulls together. Once that foundation is set, both businesses and employees will reap the benefits: prompts will reflect up-to-date processes, and AI can become a trusted tool.
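To make those fundamentals concrete, the sketch below assembles a prompt from the four elements named above. It is a hypothetical Python template; the wording, policy name, and output format are placeholders, not an endorsed house style.

```python
# Hypothetical template: assemble a prompt from the four prompting
# fundamentals. Every value here is a placeholder for illustration.
def build_prompt(role: str, goal: str, outputs: str, sources: str) -> str:
    return (
        f"Role: {role}\n"        # instruct the AI on its role and purpose
        f"Goal: {goal}\n"        # define the end goal
        f"Outputs: {outputs}\n"  # specify the exact outputs needed
        f"Sources: {sources}\n"  # identify appropriate reference sources
        "If the sources do not cover the question, say so rather than guessing."
    )

prompt = build_prompt(
    role="You are an HR policy assistant for internal staff queries.",
    goal="Answer a manager's question about parental leave entitlements.",
    outputs="A three-bullet summary plus the relevant policy clause numbers.",
    sources="Use only the attached 2025 Parental Leave Policy; cite clauses.",
)
print(prompt)
```

A shared template like this is one way a prompt library can encode business protocols, so that accuracy no longer depends on each employee improvising.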
Moving past ‘play’ to ‘profit’
For AI to become a reliable co-pilot driving genuine business value, business leaders must prioritise education in how to use it properly. This requires more than a one-off workshop. It demands establishing a culture of continuous AI proficiency, defining use-case champions, mandating workflow integrations, and establishing robust prompt libraries. This is why leaders need to view their AI setup as the very foundation of their business.
Beyond simply choosing the ‘right’ AI model, investing in data integrity and expert prompting skills is essential. This focus mitigates the risk of hallucinations, builds organisational trust in AI, and transforms the technology into a high-value asset.
While HR teams often carry the expectation of setting the example for the rest of the organisation, the successful implementation of this and similar changes requires a joint venture driven by executives across the entire business. Those teams that successfully embed this holistic strategy will ultimately shape how their companies grow, hire, and lead in the AI era.
Anshu Arora is the director of customer success and growth at RMIT Online.