
Use of AI criticised by FWC

By Michael Byrnes | 7 minute read

A recent decision by the Fair Work Commission highlights various considerations for HR professionals arising from the use of artificial intelligence models or apps in FWC matters, writes Michael Byrnes.


The use of artificial intelligence in a general protections application brought by a former employee has been criticised by Deputy President Tony Slevin of the Fair Work Commission in the decision of Mr Branden Deysel v Electra Lift Co. [2025] FWC 2289.

While the use of ChatGPT by the applicant did not prove determinative, as his case had a range of other problems (including being brought almost 900 days after the 21-day time limit, with no “exceptional circumstances” to justify the delay, as required by the Fair Work Act), it was apparent that the AI model may have led the applicant to bring the claim, or at least encouraged him to do so, by giving poor “advice” as to its merits.

Deputy President Slevin observed (at paragraph 6 of the decision):

“As to the merits of the claim, Mr Deysel confirmed during the conference that he had used an artificial intelligence large language model, ChatGPT, in preparing his application. So much was clear from the deficiencies in the application which failed to address the matters required to make good a claim that Part 3-1 of the Fair Work Act had been contravened. The application also included an extract from advice given by ChatGPT which was that various employment and other statutory obligations had been contravened by the Respondent. The advice suggested that Mr Deysel commence various legal actions against the Respondent, including making application under s. 365 of the Act. I can see no basis for this advice.”

The deputy president went on to note the danger of relying solely on artificial intelligence for legal advice, observing that even ChatGPT appreciated its own limits by cautioning that an appropriate professional should be consulted before proceedings were initiated. Unfortunately, that admonition seems to have been ignored in this matter. As the deputy president observed (at paragraph 7):

“ChatGPT also advised Mr Deysel to consult a legal professional or union representative to determine the appropriate course of action. He did not do so. Mr Deysel simply followed the suggestion made by ChatGPT and commenced the proceedings. The circumstances highlight the obvious danger of relying on artificial intelligence for legal advice. The result has been Mr Deysel commencing proceedings that are best described as hopeless and unnecessarily wasting the resources of the Commission and the Respondent in doing so.”

The decision highlights various considerations for HR professionals arising from the use of artificial intelligence models or apps in Fair Work Commission matters:

  1. The availability and use of AI might encourage more aggrieved former employees to bring applications before the FWC, as they may believe that they do not need professional assistance to initiate and successfully prosecute such a claim.
  2. HR professionals need to be wary of relying upon AI to prepare responses to FWC applications – it may prove a false economy, with the desire to save money on professional assistance leading to the defence of a claim being compromised or undermined. The current limitations of AI are not confined to the employee side of the record.
  3. While AI may have a role to play in either the preparation for or conduct of FWC proceedings, in its current form, it needs to be used with care. It appears the FWC is alive to the possibility of it being used and will not hesitate to be critical of that use if it leads to unmeritorious claims or arguments being made.
  4. The issue of the use of AI in proceedings is not confined to the FWC. In a recent NSW Supreme Court decision, the judges commented on the need for judicial vigilance in the use of AI, especially, but not only, by unrepresented litigants. While this observation was not made in the context of an employment law case, the same caution would apply to such cases in various courts, including the Federal Court.

Michael Byrnes is a partner in employment law at Swaab.