The time paradox: Why AI upskilling must be built into the workday
We are standing on the cusp of an AI-enabled era, and yet, one of the biggest barriers to meaningful adoption isn’t technology. It’s time, writes Steve Bennetts.
More specifically, it’s the lack of structured, protected time during working hours for employees to actually learn how to use AI in ways that improve performance, drive value, and reduce inequity.
Crucially, conversations with chief human resources officers (CHROs) across the country reveal a shared and pressing anxiety: their internal teams are already further behind on the AI journey than anticipated. These leaders recognise they must shift their strategy from passive encouragement to active design. They need to make time for upskilling in the workday, not simply leave it to the individual employee’s discretion.
As HR leaders, we’re being asked to steward our organisations through rapid digital transformation. But if we don’t address what I call the “time paradox” (expecting already time-poor employees to “find time” for AI upskilling outside their day-to-day responsibilities), we risk undermining the very success of that transformation.
AI adoption won’t work if it’s optional homework
It’s no secret that AI will define the next chapter of work. According to the 2025 Qualtrics Employee Experience Trends Report, nearly half of all employees are already using AI tools weekly, yet only 52 per cent say they’ve received training from their employer, and just 59 per cent feel involved in deciding how AI will reshape their job design.
Too often, the narrative from leadership is: “Go learn AI. You’ve got our blessing.” But when? Between back-to-back meetings? After a 12-hour shift? During their lunch break?
Let’s be clear: telling people to figure out AI in their own time, especially in workplaces already grappling with burnout and productivity pressures, is a strategy that will only benefit a privileged few. Knowledge workers with discretionary time, no caring responsibilities, and access to education may well manage. But the vast majority won’t: frontline staff, part-time workers, shift-based employees, carers, and those living with disability.
This is how we unintentionally create a new layer of digital inequity in the workforce.
Without structural time investment, we leave people behind
The current approach is too reactive. Companies introduce AI tools, then expect employees to keep up. Where is the structured learning pathway? Where are the clear, protected time blocks to explore these tools, test them, understand their implications, and receive coaching?
When training is left to discretionary effort, only those with surplus time can participate, creating a skills gap along socioeconomic, gendered, and demographic lines. AI adoption becomes not just a technological challenge, but a human equity issue.
Imagine asking a nurse, teacher, or customer service rep to take time out of their day, or worse, their evening, to complete an AI course. For many, that time simply doesn’t exist, and without thoughtful design, those who most need AI to support their roles are the least likely to benefit from it.
The risk isn’t just exclusion – it’s exposure
When employees lack AI training and are left to self-navigate, they inevitably turn to whatever tools are accessible. Often, that means using public large language models or apps with unclear data privacy standards, inadvertently exposing sensitive customer or employee data, breaching compliance rules, or making biased decisions without understanding how AI-generated outputs are produced.
Untrained employees using powerful AI tools is like handing someone a car without teaching them to drive. They might get to the destination, but the risks along the way are enormous. From legal liability to reputational harm, the cost of underinvesting in enablement is steep.
HR must co-lead AI transformation, not just communicate it
HR leaders should not wait to be invited to the AI table. This is our domain, because successful AI adoption isn’t just about tools. It’s about job design, change management, workforce planning, equity, and culture.
If we are introducing AI into customer service, for example, what does that mean for KPIs? Will we expect agents to handle double the volume? How do we measure quality versus quantity? What skills need to be developed, and how will we support employees through that change?
This is why we’re seeing more organisations bake AI education into everyday workflows, not as optional learning, but as core capability development. They understand that successful transformation requires shared accountability between technology and people leaders.
A 3-step framework for baking AI learning into the workday
For HR leaders ready to resolve the time paradox, the solution requires a design overhaul, not just a policy announcement. It needs a strategic commitment, which may include structural mandates. Consider championing this three-pronged approach:
1. Allocate and protect time (The 10 per cent rule):
Mandate a minimum percentage of the work week (e.g., 10 per cent of an employee’s time, or four hours in a 40-hour week) explicitly for future-skilling, inclusive of AI enablement. This time must be non-negotiable and protected from meetings. HR must work with line managers to adjust KPIs and workloads to reflect this capacity shift, treating it as a core part of the job description, not a bonus task.
2. Integrate learning into workflow (The ‘just-in-time’ model):
Move beyond generic courses. Utilise AI-powered tools that offer micro-learning prompts and usage tutorials within the work application itself. For example, when a customer service agent receives an AI-generated summary, a quick module should appear, coaching them on how to refine the prompt for a better output. This makes learning immediately relevant and productive.
3. Establish ‘AI coaching pods’ (The social equity model):
Formalise internal coaching where early AI adopters mentor smaller groups within their team during protected work time. This peer-to-peer, social learning model fosters cultural change, democratises knowledge, and allows employees to safely test tools and share ethical boundary guidelines within a trusted, monitored environment.
A call to action for HR leaders
We rightly worry about bias being built into AI. But we should be just as concerned about the bias and inequity that may emerge if only a small segment of our workforce is equipped to use it.
The time paradox is real: if we don’t bake AI learning into the job, we’re effectively saying it’s only for those who can afford to learn it in their own time. That’s not just a talent development failure, it’s a competitive, cultural, and ethical one.
It’s time to treat AI enablement as a strategic investment, not a side hustle, and make learning intentional, inclusive, and in-built. After all, when we design for equity, we will build stronger, safer, and more successful organisations.
Steve Bennetts is the head of growth and strategy for employee experience in Asia-Pacific and Japan at Qualtrics.