Today’s HR leaders are sitting on a goldmine of data, trust, and influence. This position also makes them prime targets for increasingly sophisticated cyber attacks, writes Erich Kron.
We all trust HR, especially when an email lands in our inbox with their name on it.
Topics like payroll updates, policy changes, and benefit reminders are routine, often time-sensitive, and inherently trusted. That trust is exactly what makes HR the perfect disguise for cyber criminals.
And the data proves it. Of the top 10 phishing email templates people clicked on, a staggering 98.4 per cent had subject lines tied to internal topics, and nearly half (45.2 per cent) of those were tied directly to HR. These messages don’t just look official; they feel personal, which makes them far harder to flag as suspicious.
And the risk is growing alongside the rise of shadow AI, as more than half of Australians admit they’re not upfront about using AI at work. When sensitive information is fed into unsecured tools, it can inadvertently leak, giving cyber criminals even more context to craft realistic, highly targeted attacks.
The psychology behind HR-themed phishing
If you got an email titled “URGENT: Payroll Issue Detected” from your HR team, would you open it? Chances are, you would. And that’s exactly the trap.
Year after year, Australians are also caught up in scams that imitate recruitment or HR processes, from fake job ads to onboarding portals and payroll messages. When people are already expecting employment-related communication, they’re far less likely to question what lands in their inbox. That same trust is what criminals exploit in phishing campaigns.
HR emails tend to be more trusted. They involve sensitive topics that trigger an emotional reaction. The message feels urgent. It feels important. And it feels personal. These scams bypass our logical filters and exploit the “mental shortcuts” (or heuristics) we use to make snap judgements.
One of the biggest is authority bias. We tend to trust messages that appear to come from formal figures of power, with HR acting as the organisation’s official internal voice. Over time, we learn to trust what they say and become familiar with receiving updates from HR, which lowers our guard when an attacker sneaks in pretending to be them.
Then there’s representativeness. If an email looks like what we expect from HR (correctly branded, well written, and using a familiar tone and template), we assume it’s legitimate. Our brains are wired to take shortcuts, and when something fits neatly into an existing, familiar template, we’re less likely to question it.
Lastly, cyber criminals often exploit social proof, tapping into the fear of missing out and nudging us to comply, fast. With phishing tactics becoming smarter, even the most well-trained employees can be caught off guard, especially when a message convincingly mirrors internal communication styles.
Timing is everything
Phishing attempts are no longer simple, opportunistic attacks. Cyber criminals are now deploying advanced, sector-specific campaigns designed to exploit industry workflows and pressure points at exactly the right moment.
Take the ongoing Dropbox-themed phishing campaigns targeting Australian healthcare entities, where employees are receiving emails claiming to share urgent documents. While subtle doubts may occasionally arise about their legitimacy, the sense of urgency and timing are so precisely calculated that most recipients do not pause to verify before clicking.
Attackers know that during peak periods, like payroll cycles, performance reviews, or tax season, everyone’s inboxes are overflowing. And in these high-pressure moments, overwhelmed employees are far more likely to click without thinking.
Reduce shadow AI before it’s used against you
Over a third of Australian professionals are uploading sensitive company information to AI platforms. While it may seem harmless, this opens the door for criminals to exploit unsecured data and potentially use it against you.
The real issue is that employees often don’t know or understand the risks of these actions. And while building visibility and guardrails around AI use might feel complex or expensive, a simple awareness effort can go a long way.
By continuously educating and training employees to pause before pasting sensitive information into a chatbot, organisations can drastically reduce the risk before it ever becomes a threat or is used against them.
Be more aware and always be sceptical
HR leaders sit on a goldmine of data, trust, and influence, and that same position makes them prime targets for increasingly sophisticated cyber attacks. With phishing emails becoming smarter, more personalised, and often AI-assisted, employee trust in email is more fragile than ever.
That’s why security awareness is critical. The golden rule is simple: never trust by default, always be sceptical, and verify. HR can play a powerful role in driving change and building a strong security culture through internal communications. By combining targeted awareness training, practical verification steps, and clear AI guidelines, organisations can empower HR and all employees to spot and stop phishing attacks before they cause harm.
Erich Kron is a CISO adviser at KnowBe4.