AI in accountancy: why vague communications create real liability 

March 30, 2026

Accountancy firms are not short of AI announcements. Thousands of press releases have promised revolutionary workflows, faster audit delivery and reduced errors. Where firms fall short is in their AI communications: credibly explaining to clients, regulators and prospective talent what their tools actually do, where human judgement still applies, and what happens when something goes wrong. That gap is widening faster than most firms have noticed.

The issue is so acute partly because of the sheer speed of adoption in an industry not known for acceleration. According to Wolters Kluwer’s 2025 Future Ready Accountant report, AI usage across accountancy firms has more than quadrupled in a single year, rising from 9% to 41%. Investment is accelerating: 77% of firms plan to increase AI spend over the next three years. The profession is moving fast. What it is not doing, with the same urgency, is measuring what that speed costs. 

In June 2025, the Financial Reporting Council published its first guidance on AI in audit alongside a thematic review of the six largest UK firms. What it found should concentrate minds. The Big Four — alongside BDO and Forvis Mazars — had embedded AI tools into audit processes without formally assessing the effect on audit quality. Five of the six firms had no KPIs for the tools they were using. Monitoring existed primarily to track usage for licensing purposes. The FRC’s language was unambiguous: “There is no formal monitoring performed by the firms to quantify the audit quality impact.” 

Why AI errors in audit are different from other industries  

This is where accountancy’s AI problem diverges from almost every other sector. A hallucination in a customer service chatbot surfaces within hours. A pricing error in an AI-generated financial model gets caught in the next review cycle. But an error embedded in an audit of a company’s 2024 accounts may not surface until a restatement, a regulatory investigation, or litigation years from now. By that point, the staff who ran the engagement have moved on, the audit file reflects decisions that were made but not fully explained, and the firm is defending work it can no longer reconstruct.  

The risk is asynchronous. The consequences arrive long after the cause, in a profession where the whole purpose is to be the reliable record. 

What vague AI communications actually risk for accountancy firms

This is also why communication matters beyond optics. When a firm describes an “AI-powered audit” or claims “strategic differentiation” through AI without explaining what the tools do, how outputs are reviewed, or where human judgement intervenes, it is not just missing a marketing opportunity. It is creating an accountability gap.

Richard Moriarty, CEO of the Financial Reporting Council, has said firms must “reinvent or reimagine the parts of the job that add real value” and develop a compelling vision for the profession’s future. The tension he is describing is real. But many AI announcements do the opposite — highlighting capability while obscuring where human judgement actually intervenes. ICAS guidance is explicit: professional judgement cannot be automated. 

What responsible AI communication in accountancy looks like

Dext’s AI Assist offers a working model of what responsible communication looks like. The tool tracks how accountants interact with the platform, recommends ways to automate repetitive tasks, and keeps every recommendation visible for human review. Crucially, Dext explains this clearly — what the tool does, how it works, where the human remains in the loop. That transparency is not just good marketing. It is the audit trail that protects the firm if a decision is ever questioned. 

Specificity, then, represents a new form of risk management. The firms that document what their AI does, where it is used, and how human judgement is applied will be better positioned when errors surface — and some will. The firms that have stuck to abstractions will find, when that moment comes, that their own words have made the situation worse. 

Greg Noble is an account manager in London.
