"It Depends": Why Finance AI Still Needs Humans
The marketing genius and technical reality of human-in-the-loop systems
The potential for AI to automate functions like finance is touted by legacy platforms, emerging technology solutions, investors and even some finance leaders. But it's clear to me that the solutions most likely to succeed all feature "Human in the Loop" (HITL).
The amount of marketing collateral and online discourse around AI in finance is bewildering. Yes, there will unquestionably be huge benefits over time, but it can be very difficult to separate the wheat from the chaff: what do these platforms currently use AI to do, where do they actually use it (you are perfectly within your rights to ask this of vendors and require a detailed response), and what does that mean for the user?
Focusing on an HITL approach in a product is pretty sensible–done right, it serves both as a practical necessity for accuracy and a psychological bridge for user adoption. It has struck me recently, though: is this singularly a technical requirement for success, or just a clever strategy to better ensure adoption by users?
Is HITL playing it safe or a perfectly honest marketing message?
Walk through any finance technology conference or browse vendor websites, and you'll notice a pattern. Nearly every AI solution emphasizes how it "empowers finance professionals" rather than replaces them. The message is carefully crafted: AI handles the mundane while humans focus on "strategic work." It's a compelling narrative that addresses the elephant in every room–will AI take my job?
This positioning is brilliant from a go-to-market perspective. Finance teams are naturally conservative and risk-averse. They're also the ones who, importantly for the vendor, control the purchasing decisions. By promoting HITL, vendors transform from threats into partners. The AI becomes a helpful copilot or assistant rather than a replacement. The fear factor disappears, replaced by curiosity about efficiency gains.
Perhaps there's even a pricing element: paradoxically, it may be possible to charge more for a platform that requires human oversight than for one without. That plays to the conspiracy-minded view that HITL is just a marketing ploy.
Or maybe this messaging is actually honest.
When humans actually matter: connecting the dots
Working in finance, whether in accounting, FP&A or at the leadership level, frequently requires judgment and subjective analysis. So much of what makes up finance is based on assumptions around known unknowns (eek), topics lacking clarity, or the application of accounting guidelines to situations that don't fit neatly into one box or the other.
Take, for example, an unusual or limited-information transaction in the month-end close process–should it be expensed, accrued, or flagged as an issue? A reliable answer is usually guided by accounting knowledge, years of finance experience, and contextual information perhaps only available at the business level.
What's also interesting here is how platforms will likely evolve to include a greater degree of business activity and data for AI to review. It's compelling to think of AI accessing vendor contracts, email threads about ongoing negotiations, Slack messages about project status, even calendar entries about customer meetings–all to better determine the nature of a transaction and how to handle it in accounting. The AI could theoretically triangulate from multiple data sources to understand context the way a senior accountant does through experience and conversation–and with much greater speed.
But until we're there–and we're definitely not there yet–it's really only feasible for a human to step in and close the loop. The gap between available data and required judgment remains too wide for AI to bridge alone.
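To make that concrete, here is a minimal sketch in Python of what a human-in-the-loop classification step could look like. Everything in it is an assumption for illustration–the TransactionContext fields, the propose_treatment stub, and the confidence threshold are invented, not a description of how any vendor's product actually works.

```python
from dataclasses import dataclass

# Hypothetical context gathered from business systems (contracts, email
# threads, Slack, calendars). In a real product each field would come from
# an integration; here they are just placeholders.
@dataclass
class TransactionContext:
    description: str
    amount: float
    vendor_contract_excerpt: str | None = None
    related_emails: list[str] | None = None

@dataclass
class Proposal:
    treatment: str      # e.g. "expense", "accrue", or "flag_for_review"
    confidence: float   # the model's self-reported confidence, 0.0 to 1.0
    rationale: str

# Assumed cut-off; choosing this number is itself a judgment call.
CONFIDENCE_THRESHOLD = 0.85

def propose_treatment(ctx: TransactionContext) -> Proposal:
    """Placeholder for the AI step: triangulate across available context
    and suggest an accounting treatment. Stubbed out for illustration."""
    if ctx.vendor_contract_excerpt is None and not ctx.related_emails:
        # Thin context: the model should not pretend to certainty.
        return Proposal("flag_for_review", 0.40, "Insufficient supporting context")
    return Proposal("accrue", 0.91, "Contract indicates services delivered pre-close")

def route(ctx: TransactionContext) -> str:
    """Apply the HITL rule: low confidence goes to an accountant."""
    proposal = propose_treatment(ctx)
    if proposal.confidence < CONFIDENCE_THRESHOLD:
        return f"QUEUE FOR HUMAN REVIEW: {proposal.rationale}"
    return f"AUTO-APPLY ({proposal.treatment}): {proposal.rationale}"

if __name__ == "__main__":
    print(route(TransactionContext("Consulting invoice, no PO", 12_500.0)))
```

The point of the sketch is the routing rule, not the model: anything below the threshold lands with an accountant, which is exactly where the judgment calls described above belong.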
The adoption game hinges on understanding accountants (not just accounting)
But, being honest, the HITL emphasis is also shrewd business strategy. I've seen enough technology implementations to know that the human element is often the determining factor between success and failure. Sometimes there are change-resisters who just throw up roadblocks, or, at the extreme, people who feel their job is at risk and deliberately sabotage the project. You'd be amazed at what can happen.
HITL flips this dynamic. When the finance manager can review and override AI suggestions, they feel in control. When they see their corrections training the system, they become invested in its success. Each month, as they trust the AI with more routine decisions, they're actively participating in their own role evolution rather than being dragged along and eventually snuffed out.
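If you wanted to picture how those corrections feed back in, it could be as simple as logging each override as a labeled example for a later retraining or prompt-tuning step. Again, a hypothetical sketch: the record_override function, its fields, and the JSONL log are invented for illustration, not any vendor's actual mechanism.

```python
import json
from datetime import datetime, timezone

def record_override(txn_id: str, ai_suggestion: str, human_decision: str,
                    reviewer: str, note: str, path: str = "feedback_log.jsonl") -> None:
    """Append the reviewer's decision as a labeled example that a future
    improvement step could consume. Purely illustrative."""
    entry = {
        "txn_id": txn_id,
        "ai_suggestion": ai_suggestion,
        "human_decision": human_decision,
        "agrees": ai_suggestion == human_decision,
        "reviewer": reviewer,
        "note": note,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: the AI proposed accruing, the manager chose to expense instead.
record_override("TXN-0042", "accrue", "expense", "finance.manager",
                "One-off spend, contract not yet signed")
```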
This psychological element might be even more important than the technical requirements. A solution that's 90% accurate with enthusiastic users will succeed where a 95% accurate solution with resistant users fails. Vendors who understand this position HITL not as a limitation but as a genuine differentiating feature.
Finance as judgment, not calculation
Still, it's tempting to view HITL as primarily a marketing strategy–a temporary comfort blanket until AI gets good enough to fully automate finance functions. There's certainly some truth to this. Basic transaction coding and invoice matching will likely need less human oversight over time, and the "human in the loop" for these tasks is partly about building trust during the transition.
But this cynical view misses something important. The aspects of finance that require judgment, context, and accountability aren't edge cases–they're core to the function. Financial reporting isn't just about processing transactions; it's about telling the business story accurately and defensibly. FP&A isn't just about financial metrics and drawing lines on a chart; it's about understanding strategy and market dynamics.
The vendors emphasizing HITL might be solving for adoption, but they're also acknowledging a fundamental truth: finance is as much about judgment as calculation. It follows that the most successful AI implementations will be those that enhance human judgment rather than attempt to replace it.
So where does this leave us…
I can't see any conclusion other than this (partial as it is): HITL serves a dual purpose, as technical necessity and adoption catalyst. Smart vendors recognize both aspects and design their solutions accordingly. They're not being dishonest when they emphasize human empowerment–they're being realistic about both current AI capabilities and the nature of finance work.
I think that for finance leaders evaluating AI solutions, understanding this duality is really helpful. Yes, some vendors probably do oversell HITL to quell fears about job displacement. But I think the best solutions genuinely need human expertise to function effectively in the complex, judgment-heavy world of finance. Going beyond that statement right now risks getting drowned in untethered hyperbole about AI taking over the world and replacing everyone except plumbers (who are, of course, separately at risk from the robots).
Whether HITL is technical necessity or just clever marketing isn't really the interesting question–it's both. After all, in a field where "it depends" is often a critical part of reaching an accurate answer, keeping humans in the loop ensures a better outcome for the user and the customer.
And AI’s contribution:
I chucked the title and subtitle into ChatGPT 5 and asked it to generate a radical image. Not bad, and the words are all correct.
I am a big fan of HITL, especially with something as important as money. The combination is a really solid set of checks and balances. The human brain is limited in what it can recall or reference, which is where AI brings a lot of power, but it is much superior as far as reasoning goes. When it comes to money and finances, I'd want the powerful data of AI with some analysis, but would rely on the HITL to provide the deeper, real-life experience and wisdom that AI lacks.