
Research Brief · Academic · 5 min read
On the paper
Human + AI in Accounting: Early Evidence from the Field
Jung Ho Choi (Stanford GSB) · Chloe Xie (MIT Sloan)
One of the clearest empirical studies to date on what happens when working accountants actually use generative AI in production. The findings complicate the replacement narrative considerably.
For most of the past decade, accounting has occupied an unflattering position at the top of every list of professions most exposed to automation. The reasoning has usually been simple to the point of being lazy: accounting involves a lot of repetitive, rules-based work, therefore computers will do it, therefore accountants are in trouble. The narrative has been widespread enough to affect enrolment in accounting programmes and to raise genuine anxiety across the profession.
A recent working paper by Jung Ho Choi, an assistant professor of accounting at Stanford Graduate School of Business, and Chloe Xie of MIT Sloan, provides some of the first serious empirical evidence on whether that narrative holds up. It does not, at least in the form it has most often been told. What the data actually shows is more interesting — and more useful — than the simple replacement story.
The study combined survey responses from 277 accountants with task-level operational data from 79 small and mid-sized firms actively using AI-powered accounting tools. The analysis examined how the introduction of AI affected both the volume of work an individual accountant could handle and the quality of the work they produced. What emerges is a picture of genuine augmentation rather than substitution — with important caveats about who benefits, and by how much.
— The Four Headline Findings —
7.5 days
Faster monthly close for accountants using AI tools vs. traditional methods
8.5%
Less time spent on routine back-office processing
+12%
Increase in reporting granularity at firms using generative AI
62%
Of surveyed accountants are concerned about AI-generated errors
FINDING 01
AI-using accountants serve more clients, not fewer.
The strongest single result in the paper is capacity expansion rather than labour displacement. Accountants equipped with generative AI tools supported more clients per week than peers using traditional methods, while completing monthly statement preparation roughly 7.5 days faster. Routine back-office processing time fell by 8.5%. The time recovered by those efficiency gains did not disappear; it was redirected.
The redirection itself is what matters. Accountants did not use the recovered hours to process more transactions faster. They used them for the parts of the job that had previously been squeezed: business communication, quality assurance, client-facing advisory conversations. In the authors’ framing, AI helped with the “setup” — pulling information, connecting bank transactions, tracking vendor data — and that setup work was precisely the bottleneck preventing accountants from operating at higher client volumes.
FINDING 02
Quality did not fall. It rose.
The most counter-intuitive result in the study is that the throughput gains did not come at the usual cost of quality. If anything, the opposite. Firms using generative AI tools posted a 12% increase in reporting granularity — the level of detail captured in their bookkeeping records. Rather than grouping expenses into broad catch-all buckets such as “payroll,” AI-assisted firms broke payroll down into specific sub-categories: bonuses, benefits, meals, and similar. The resulting financial reports are more informative, easier to audit, and more useful for the underlying business to act on.
This matters more than it first appears. Most adoption studies of new technology in knowledge work show a short-term quantity-for-quality trade-off as practitioners climb the learning curve. The Choi and Xie findings suggest that, in bookkeeping specifically, the technology is well-matched enough to the work that the trade-off either does not appear or is smaller than expected. The authors attribute this to the augmentation model: the AI handles the tedious categorisation work that accountants were rushing through, freeing the human to apply judgement at a higher resolution.
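The granularity shift is easiest to see in a chart-of-accounts fragment. The paper reports the aggregate 12% figure, not individual ledgers; the account names and amounts below are invented purely to illustrate what "breaking a catch-all bucket into sub-categories" means in practice.

```python
# Hypothetical chart-of-accounts fragment; amounts are illustrative only.
before = {"Payroll": 48_000}  # one broad catch-all bucket

after = {  # the same spend, sub-categorised in an AI-assisted ledger
    "Payroll:Salaries": 39_500,
    "Payroll:Bonuses": 4_000,
    "Payroll:Benefits": 3_200,
    "Payroll:Meals": 1_300,
}

# Granularity rises, but the totals must still reconcile.
assert sum(after.values()) == before["Payroll"]
print(f"{len(after)} line items where there was {len(before)}")
```

The reconciliation check at the end is the point: finer categories add information for auditors and owners without changing the underlying totals.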
The technology is not here to replace the human being — it’s here to augment the experts who are already in place.
Chloe Xie · MIT Sloan School of Management
FINDING 03
Seniority determines how much you gain.
One of the study’s more consequential findings for how firms structure AI deployment is that gains are not evenly distributed across experience levels. Senior accountants extracted substantially more value from AI tools than junior staff did — despite being, on many metrics, the group less in need of efficiency help.
The mechanism is straightforward. Senior accountants treated the AI as a collaborator whose outputs required verification. They noticed when the model’s confidence dropped on a particular suggestion and stepped in to apply human judgement where it was most needed. Junior staff, lacking the pattern recognition that comes with years of experience, were more likely to accept AI-generated outputs at face value — including outputs that the system itself had flagged as uncertain. The net result: senior accountants extracted both the efficiency and the quality gains; junior accountants captured efficiency but with a higher residual error rate. The research offers an early empirical look at how AI-generated errors can propagate through human-in-the-loop accounting systems, and where they are most likely to slip through.
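The paper does not publish the internals of the tools its subjects used, but the verification workflow the senior accountants followed can be sketched as confidence-threshold triage: suggestions the model is sure about flow through, and flagged ones are routed to a human reviewer. Every name and the threshold value below are hypothetical, not drawn from the study.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """An AI-proposed transaction category with the model's reported confidence."""
    transaction_id: str
    category: str
    confidence: float  # 0.0 to 1.0

# Hypothetical cutoff; real tools expose their own flagging rules.
REVIEW_THRESHOLD = 0.85

def triage(suggestions):
    """Split suggestions into an auto-accept queue and a human-review queue."""
    accepted, needs_review = [], []
    for s in suggestions:
        if s.confidence >= REVIEW_THRESHOLD:
            accepted.append(s)
        else:
            needs_review.append(s)  # this is where senior judgement is applied
    return accepted, needs_review

batch = [
    Suggestion("txn-001", "Payroll:Bonuses", 0.97),
    Suggestion("txn-002", "Meals & Entertainment", 0.61),
]
accepted, needs_review = triage(batch)
print(len(accepted), len(needs_review))  # 1 1
```

The study's junior-versus-senior gap maps onto what happens to the `needs_review` queue: accepting it at face value reproduces the AI's uncertain calls, while reviewing it is where the quality gains were captured.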
FINDING 04
Accountants themselves are cautiously positive.
The survey side of the study captured what working accountants actually think about the tools they are now using. The answer is not uniformly enthusiastic, but it is more constructive than the popular narrative would suggest. Sixty-two percent expressed concern about AI-generated errors. Forty-three percent worried about data security implications. Thirty-seven percent reported anxiety specifically about the effect on job stability.
Against those concerns, though, nearly half of surveyed accountants reported that generative AI tools helped them meet deadlines more reliably and improved the accuracy of their work. Almost two-thirds identified the automation of routine tasks as the single biggest benefit of adopting AI. The picture is of a profession that is neither naively enthusiastic nor reflexively defensive — accountants see real benefits from the technology and also see real risks, and they are engaging with both.
— What the Findings Do Not Yet Tell Us —
The next frontier is audit, tax, and valuation.
The study’s sample is concentrated on bookkeeping activities — the foundational recording and organisation of day-to-day financial transactions. That is where the tools are currently mature and where the operational data exists to study. More complex and higher-stakes domains of accounting work — external audit, tax strategy, business valuation, complex advisory — remain largely untouched by the research, for the simple reason that they remain largely untouched by the technology.
That gap will close over the next survey cycle. Audit platforms are already beginning to incorporate AI for document review and anomaly detection. Tax tools are integrating LLM-based research assistants. Valuation work, with its heavy reliance on contextual judgement, is furthest behind. Whether the augmentation finding generalises to those higher-stakes domains — where the cost of an undetected error is much larger — is the empirical question the profession needs answered next.
— Reader Questions —
Eight questions on the study.
What is the main finding of the Choi and Xie study?
That generative AI acts as an augmentation tool rather than a replacement for accountants. AI-using accountants supported more clients, closed the books faster, and produced more granular reports — without a measurable trade-off in quality.
How large was the sample?
Survey responses from 277 accountants combined with task-level operational data from 79 small and mid-sized accounting firms that use AI-powered tools.
How much faster do AI-using accountants close the books?
Approximately 7.5 days faster than accountants using traditional methods, with an 8.5% reduction in time spent on routine back-office processing.
Does report quality suffer when AI is used?
No. Firms using generative AI tools posted a 12% increase in reporting granularity — breaking expenses into more specific categories rather than grouping them into broad buckets. Quality rose rather than fell.
Who benefits most from AI tools in accounting?
Senior accountants. They treat the AI as a collaborator, verify its outputs, and step in where model confidence is low. Junior staff see smaller gains because they are more likely to accept AI outputs at face value, including uncertain ones.
What are accountants most worried about?
AI-generated errors were the top concern, cited by 62% of respondents, followed by data security (43%) and job stability (37%). These concerns coexist with positive views of the efficiency and accuracy benefits.
Which accounting tasks is AI actually being used for today?
Primarily bookkeeping — transaction classification, reconciliation, and routine record-keeping. More complex domains such as external audit, tax strategy, and valuation remain largely untouched by the current generation of tools.
What should firm leaders take from this?
Three things. Deploy AI to expand capacity rather than to cut headcount. Pair AI tools with senior reviewers, not junior ones, where output quality matters. And treat the current bookkeeping-focused rollout as the first frontier, not the last — the higher-value domains are coming next.
