Artificial intelligence has attracted some of the heaviest technology investment in modern finance, as organizations race to improve productivity and decision-making in an increasingly volatile global environment. Yet despite years of experimentation, many large enterprises report that AI’s impact at scale has been far more limited than expected.

According to Werner van Rossum, a finance and analytics transformation leader who has directed large-scale, enterprise-wide initiatives in capital-intensive global organizations, the problem is rarely the algorithms themselves.

“There’s a widespread assumption that AI will fix decision-making,” van Rossum says. “In reality, it often exposes problems that were already there.”

Van Rossum’s perspective is shaped by direct experience designing and leading enterprise-wide finance and analytics transformations, where AI initiatives were deployed alongside fundamental changes to data architecture, performance measurement, and decision governance. Most recently, his work focused on unifying corporate performance metrics, finance data, and analytics into a harmonized foundation intended to support consistent decision-making across functions and regions.

That effort forms part of a broader, enterprise-wide transformation delivered through staged releases over several years, with the first major deployment completed earlier this year. Senior leadership has publicly described the overall program as one of the most comprehensive finance and data transformations undertaken at scale, while industry observers have characterized it not as a systems upgrade, but as a rethinking of how information, analytics, and decision-making are connected across a large organization.

That experience has given van Rossum a clear view of why AI initiatives often succeed in pilots but struggle to scale.

Werner van Rossum. (credit: Philip Piletic)

The pilot illusion

“AI pilots usually work,” van Rossum explains. “They’re controlled, localized, and designed around a narrow use case. The moment you try to scale them across an enterprise, all the underlying inconsistencies surface.”

Those inconsistencies are rarely technical. They are structural.

In large organizations, the same performance indicator may be defined differently across regions, business units, or systems. Data is often reconciled after the fact rather than designed for consistency upfront. Governance models evolve to manage risk and compliance, but not necessarily to support timely, confident decisions.

When AI is layered on top of that environment, it does not resolve these issues. It exposes them faster.

“You end up with faster answers to questions leaders don’t fully trust,” van Rossum says. “And when trust is missing, decisions slow down instead of speeding up.”

Why data foundations matter more than algorithms

Much of the public discussion around enterprise AI focuses on model accuracy, automation potential, and advanced techniques. Van Rossum argues that this emphasis misses the real constraint.

In globally scaled organizations, particularly in finance-intensive environments, the limiting factor is almost always the data foundation.

“Harmonized data isn’t a technical nice-to-have,” he says. “It’s a prerequisite for any form of automation that leaders are willing to rely on.”

Organizations that struggle to scale AI, he notes, tend to share common characteristics: fragmented data architectures, inconsistent definitions of performance, and analytics environments built around bespoke or localized solutions rather than shared foundations. AI does not resolve these conditions. It magnifies them.

“If leaders didn’t trust the numbers before AI,” van Rossum says, “they won’t trust them after.”

This dynamic helps explain why many AI initiatives remain stuck in experimentation. Without a coherent semantic layer and clearly defined ownership of core metrics, AI outputs remain advisory. When decisions carry real financial or operational consequences, executives revert to manual analysis, parallel models, or judgment calls.

Finance’s role in enterprise AI readiness

While AI initiatives are often led by technology teams, van Rossum believes organizations are rediscovering the central role finance plays in determining whether AI can scale responsibly.

“Finance shouldn’t be trying to become an AI lab,” he says. “Its real contribution is creating clarity, consistency, and trust in the data that decisions are built on.”

As the function responsible for defining performance, maintaining controls, and supporting executive decision forums, finance sits at the intersection of data, governance, and accountability. That position makes it critical to AI readiness, even if finance teams are not developing models themselves.

In the transformation work van Rossum has led, progress came less from introducing new tools than from simplifying and harmonizing performance measures. Reducing bespoke reporting, aligning analytics explicitly to decision forums, and stabilizing metric definitions proved more impactful than adding analytical sophistication.

“What surprised many people,” he recalls, “was that reducing complexity actually improved decision-making. Once we stopped trying to explain everything, discussions became more focused, and decisions moved faster.”

A design problem, not a technology one

The broader lesson emerging from enterprise experience, van Rossum argues, is that AI success depends less on technological capability than on organizational design.

“Technology can accelerate insight,” he says. “But it can’t compensate for unclear decision rights, fragmented governance, or inconsistent definitions.”

Large organizations are continuously changing. Systems are modernized in phases. Legacy and new platforms coexist. Data sources multiply. In that environment, AI systems trained on unstable foundations struggle to deliver consistent value.

As investment in AI continues to grow, the gap between expectation and outcome may widen for organizations that neglect these fundamentals. Conversely, those that treat data harmonization, analytics design, and governance as strategic capabilities are better positioned to scale automation effectively.

“AI will absolutely reshape finance,” van Rossum says. “But not because the models get smarter. It will matter because organizations finally design their data and analytics foundations to work at enterprise scale.”

In an environment where capital allocation, supply chains, and market conditions can shift rapidly, organizations that cannot trust their data struggle to act decisively when it matters most. For leaders frustrated by slow or inconclusive decision-making, the implication is straightforward: before asking what AI can do, they need to ask whether their organizations are ready to trust the answers.

This article was written in cooperation with Philip Piletic.