KPMG tightens AI monitoring after internal probe

KPMG has uncovered 28 instances of staff using AI to cheat on internal exams since July, including a partner and registered company auditor who uploaded training materials to an AI tool to answer test questions.
The case has prompted the firm to strengthen its AI detection systems and commit to disclosing AI-related misconduct in its annual results, setting a new transparency benchmark across the industry.
The partner completed mandatory AI training in July but breached company policy by uploading a recommended reference manual to an AI platform to assist with an exam response. KPMG’s internal AI detection tools flagged the activity in August.
The partner has self-reported to Chartered Accountants ANZ, which is now investigating the matter. The remaining 27 cases involve staff at the manager level or below.
KPMG Australia chief executive Andrew Yates said the incidents reflect broader challenges businesses face as AI adoption accelerates.
“Like most organisations, we have been grappling with the role and use of AI as it relates to internal training and testing. It’s a very hard thing to get on top of, given how quickly society has embraced it,” Yates told The Australian Financial Review.
“As soon as we introduced monitoring for AI in internal testing in 2024, we found instances of people using AI outside our policy. We followed with a significant firm-wide education campaign and have continued to introduce new technologies to block access to AI during testing.”
KPMG said it will now publicly disclose AI-related cheating cases in its annual results and ensure staff meet self-reporting obligations in misconduct cases. The move increases pressure on rival accounting firms to adopt similar transparency standards.
Current regulations do not require accounting firms to notify the Australian Securities and Investments Commission of misconduct, such as exam cheating, unless there is a disciplinary finding. KPMG said it voluntarily informed the regulator during discussions about the matter.
In a separate case in February, a group of lawyers was fined $12,000 after a federal judge found they had submitted “hallucinated” AI-generated material in court documents.