AI won’t solve every mental health challenge. But when done right, it can make support more accessible, less stigmatised, and available when it’s needed most, writes Johnathon Petersen.
In 1966, a chatbot called ELIZA was created to mimic the responses of a psychotherapist – an early, rudimentary use of artificial intelligence in mental health. Fast forward to today, and AI tools have evolved significantly; we’re now seeing real, scalable potential to support wellbeing in the workplace – if designed responsibly.
The Avanade Trendlines: AI Value research found that while 80 per cent of healthcare organisations plan to move AI into full production by the end of 2025, global trust in AI has declined by 22 per cent. The opportunity is there – but confidence isn’t keeping pace. That’s why trust, ethics, and transparency must be built into every stage of AI development, especially in a highly individualised and sensitive space like workplace mental health.
Avanade, in collaboration with the Corporate Mental Health Alliance Australia (CMHAA), recently had the opportunity to explore this challenge firsthand.
Guiding employees to the right support – when they need it most
According to CMHAA’s Leading Mentally Healthy Workplaces Survey, almost half (46 per cent) of employees reported experiencing burnout, and there has been a 26 per cent increase in poor or worsening mental health over the past year.
While AI has the potential to reduce barriers to mental health support, the challenge in many workplaces isn’t a lack of resources; it’s knowing where to begin.
Using Microsoft Copilot Studio, we built a self-service AI-powered chatbot that answers questions and helps users find trusted mental health resources more effectively. Designed to be private, secure, and approachable, the chatbot enables users to explore support resources in a way that feels empowering rather than overwhelming. When sensitive topics arise, the chatbot directs the individual to qualified professionals.
This approach keeps the tool helpful without overstepping its purpose. It’s about enabling employees to take the first step when they don’t know exactly how or where to ask for help – guiding them through a conversational approach rather than leaving them to a search engine that can’t grasp the nuance of what they’re asking.
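To make the escalation pattern concrete, here is a minimal sketch in Python of the routing logic described above. This is an illustration only, not Avanade’s Copilot Studio implementation; the keywords, resource catalogue, and message wording are all hypothetical placeholders.

```python
# Illustrative sketch of the chatbot's routing pattern: answer general
# wellbeing questions from a vetted resource catalogue, but always
# escalate sensitive topics to qualified professionals.
# All keywords and resources below are hypothetical placeholders.

from dataclasses import dataclass

# Hypothetical triggers that must always bypass generated answers.
SENSITIVE_KEYWORDS = {"self-harm", "suicide", "crisis", "abuse"}

# Hypothetical catalogue of vetted resources, keyed by topic.
RESOURCE_CATALOGUE = {
    "burnout": "Guide: recognising and recovering from burnout (intranet link)",
    "stress": "Toolkit: everyday stress-management techniques (intranet link)",
    "sleep": "Article: sleep hygiene basics (intranet link)",
}

ESCALATION_MESSAGE = (
    "This sounds like something a qualified professional should support "
    "you with. Please contact your Employee Assistance Program or a "
    "crisis support line."
)


@dataclass
class BotReply:
    text: str
    escalated: bool  # True when the bot deferred to human professionals


def reply(message: str) -> BotReply:
    """Route a user message: escalate sensitive topics first, otherwise
    point the user at a trusted resource or ask a clarifying question."""
    lowered = message.lower()

    # Safety gate runs before anything else: sensitive topics never
    # receive a bot-generated answer.
    if any(keyword in lowered for keyword in SENSITIVE_KEYWORDS):
        return BotReply(ESCALATION_MESSAGE, escalated=True)

    # Otherwise, match against the vetted resource catalogue.
    for topic, resource in RESOURCE_CATALOGUE.items():
        if topic in lowered:
            return BotReply(f"You might find this helpful: {resource}",
                            escalated=False)

    # Fallback: invite the user to say more rather than guessing.
    return BotReply(
        "Could you tell me a little more about what kind of support "
        "you're looking for?",
        escalated=False,
    )


if __name__ == "__main__":
    print(reply("I think I'm hitting burnout and can't focus").text)
    print(reply("I've been having thoughts of self-harm").text)
```

The design choice worth noting is the ordering: the safety gate sits ahead of any answer generation, so the tool can never talk past its remit – mirroring the principle that the chatbot signposts support rather than attempting to provide it.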
Importantly, this isn’t about replacing human connection – it’s about complementing existing wellbeing programs with tools that are empathetic, ethical, and easy to access.
Responsible AI starts with real-world collaboration
This chatbot is just one example of how AI can support workplace mental health, and it highlights a broader imperative. As AI becomes embedded across the workforce, from summarising meetings to supporting wellbeing, the demand for clear governance, ethical design, and real-world testing is only growing.
That’s why collaboration matters. Tools dealing with sensitive issues like mental health must be designed with a people-first mindset, built with security and trust at their core.
AI will never replace human connection, nor should it. But when developed with care, it can lower barriers to support and complement the wellbeing strategies that human resources (HR) teams already have in place.
AI can play a meaningful role in your workplace wellbeing strategy – but only if implemented responsibly. Here are three guiding principles:
- Start with trust: Use tools that prioritise privacy, transparency, and evidence-based content.
- Augment to amplify impact: Let AI enhance existing support systems, not substitute for the human element.
- Design with empathy: Involve your people. Co-create and test solutions with the employees who will use them.
AI won’t solve every mental health challenge. But when done right, it can make support more accessible, less stigmatised, and available when it’s needed most.
Johnathon Petersen is a modern workplace architect at Avanade Australia.