How Australian organisations are dealing with workforce AI adoption
As employees take AI adoption into their own hands, the efficacy of internal HR, workplace, and educational policies and systems is being tested, writes Peter Philipp.
It’s often said that real transformation begins when those closest to everyday challenges devise the solutions. This is ringing true in the AI space, but many Australian organisations are underprepared for what it means.
The way AI is introduced into workplaces is changing. Rigid and formalised approaches to AI adoption are being replaced by individuals experimenting with various AI tools as part of their daily workflows.
Whether this experimentation is permitted is often unclear, because in many cases there is no internal policy governing it at all. One survey last year found that only 18 per cent of Australian organisations that adopted AI had a written policy governing its use within six months of adoption, rising to just 21 per cent within a year. More than one-third had no plans to create a policy at all.
Where ‘AI in the workplace’ policies do exist, the point isn’t that they should be more strictly enforced. Rather, these policies need to be adjusted and new protections put in place to ensure there is transparency about the extent to which AI is being used and how it interacts with an organisation’s data.
The rise of AI in the workplace
The absence of – or lack of uniformity around – workplace policies for AI use, combined with organisations’ slow adoption of AI and the growing number of tools available, means that organisations may not be as much in control of AI use as they thought.
Staff in roles ranging from marketing to sales to operations are taking the initiative to introduce AI tools into their work, streamlining everyday tasks and automating time-consuming processes.
Organisations have the same ambition as individuals – to improve workplace productivity – but are moving at a slower pace, adopting only a small handful of the vast number of AI tools available.
It’s clear that employees have circumvented this bottleneck and, in doing so, ushered in the practice of bringing their own AI into the workplace.
Case in point: a recent study by McKinsey found that “three times more employees are using generative AI for a third or more of their work than their leaders imagine,” with millennial staff driving that uptake and pace of change. Staff prefer internally sanctioned AI tools where available, but aren’t willing to let the slow pace of change internally become a blocker to their AI experimentation and use.
This is supported by Microsoft and LinkedIn’s Work Trend Index, which found that “without guidance or clearance from the top, employees are taking things into their own hands and keeping AI use under wraps”, with 78 per cent of AI users bringing their own AI tools to work.
There are challenges in allowing this situation to continue. Inconsistent usage across internal functions or departments may create data silos and security risks.
In addition, a hands-off approach to these tools may lead to confusion or even disillusionment if teams are not adequately guided or supported. Research consistently shows that employees want to be trained in how to use AI to benefit their work and their organisations, but that training and support are lagging behind adoption.
Workplace policies and systems, along with learning and development (L&D) programs, now need to evolve to catch up.
One employee, one idea, and one experiment at a time
When done well, grassroots AI adoption can evolve from small pilot projects into strategic assets that redefine entire organisations. Once organisations recognise this, they can embed the benefits of AI into their culture.
Clear frameworks, policies that protect data integrity and security, upskilling programs to boost AI fluency, and the necessary technology infrastructure all play a part in equipping organisations for adopting AI.
As the use of AI tools evolves from simple tasks into more complex, multi-step, collaborative processes, maintaining comprehensive context becomes vital for quality and consistency. Without a unified view of data and its connections, businesses risk AI tools working in isolation, either overcomplicating tasks to fill context gaps or settling for weaker outcomes.
Fintech innovator Klarna is one company that has brought together information across disparate, siloed systems and improved the quality of that information. Employees can ask an AI-powered assistant called Kiki, built on graph technology, about anything from resource needs to internal processes to how teams should work. When an employee needs to understand how teams are structured or wants to find a specific process document, Kiki doesn’t just return isolated data points. It shows how information connects to everything around it – linking people to projects, documents to teams, and processes to outcomes. The approach means employees can see not just what they asked for, but also the surrounding context that makes the information useful.
This makes it easier for employees to experiment with new ideas since the data they need is accessible and reliable. In this way, graph technology provides freedom for grassroots experimentation and guardrails for sustainable, enterprise-wide AI adoption.
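To make the idea concrete, here is a minimal sketch of how a graph query can return connected context rather than isolated records. It uses the official Neo4j Python driver; the graph schema – Person, Team, Project, and Document nodes and the relationships between them – is hypothetical and for illustration only, not a description of Klarna’s actual implementation.

```python
# A minimal sketch of querying connected context from a graph,
# using the official Neo4j Python driver. The schema below
# (Person, Team, Project, Document and their relationships)
# is hypothetical and for illustration only.
from neo4j import GraphDatabase

URI = "neo4j://localhost:7687"   # assumed local instance
AUTH = ("neo4j", "password")     # placeholder credentials

# Rather than fetching a single document, traverse one hop in each
# direction so the answer carries its surrounding context: who wrote
# it, which teams they sit in, and which projects it describes.
QUERY = """
MATCH (d:Document {title: $title})
OPTIONAL MATCH (d)<-[:AUTHORED]-(p:Person)-[:MEMBER_OF]->(t:Team)
OPTIONAL MATCH (d)-[:DESCRIBES]->(proj:Project)
RETURN d.title AS document,
       collect(DISTINCT p.name) AS authors,
       collect(DISTINCT t.name) AS teams,
       collect(DISTINCT proj.name) AS projects
"""

def connected_context(title: str) -> list[dict]:
    """Return a document plus the people, teams, and projects linked to it."""
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        records, _, _ = driver.execute_query(QUERY, title=title)
        return [record.data() for record in records]

if __name__ == "__main__":
    for row in connected_context("Onboarding process"):
        print(row)
```

A single query like this surfaces a document together with the people, teams, and projects around it – the ‘surrounding context’ described above.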
Ultimately, meaningful AI implementation does not need to be dictated from above; it flourishes when workers are empowered to learn, experiment, and integrate AI where it genuinely adds value. It is a lesson in trust, agility, and recognising that technology’s most profound impact emerges from the bottom up: one employee, one idea, and one experiment at a time.
Peter Philipp is the general manager for ANZ at Neo4j.