Ground Realities: Leadership Insights on AI ROI

Over the past year of moderating AI roundtables, I’ve had a front-row seat to how the conversation has evolved. Early discussions often centred on identifying promising use cases and grappling with the foundational work, particularly around data readiness. More recently, attention has shifted to emerging capabilities like Agentic AI and what they mean for enterprise workflows. The pace of change has been rapid, but one theme has remained consistent throughout: ROI.

What’s changed is the depth and nuance of that conversation. As AI moves from pilot projects to core business functions, the question is no longer just if it delivers value, but how to measure it in a way that captures its true impact. Traditional ROI frameworks, focused on immediate, measurable returns, are proving inadequate when applied to AI initiatives that reshape processes, unlock new capabilities, and require long-term investment.

To navigate this complexity, organisations need a more grounded, forward-looking approach that considers not only direct gains but also enablement, scalability, and strategic relevance. Getting this right is key to both validating today’s investments and setting the stage for meaningful, sustained transformation.

Here is a summary of the key insights on AI ROI, drawn from multiple conversations across the Asia Pacific region.

1. Redefining ROI Beyond Short-Term Wins

A common mistake when adopting AI is using traditional ROI models that expect quick, obvious wins like cutting costs or boosting revenue right away. But AI works differently. Its real value often shows up slowly, through better decision-making, greater agility, and preparing the organisation to compete long-term.

AI projects require significant upfront investment in areas like data quality, infrastructure upgrades, and change management. These costs are visible from day one, while the bigger benefits – smarter predictions, faster processes, and a stronger competitive edge – usually take years to pay off and are hard to capture with conventional metrics.

Ecosystm research finds that 60% of organisations in Asia Pacific expect to see AI ROI over two to five years, not immediately.

The most successful AI adopters get this and have started changing how they measure ROI. They look beyond just money and track things like explainability (which builds trust and helps with regulations), compliance improvements, how AI helps employees work better, and how it sparks new products or business models. These less obvious benefits are actually key to building strong, AI-ready organisations that can keep innovating and growing over time.

Head of Digital Innovation
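
To make the multi-year framing concrete, here is a minimal sketch of how a cumulative, multi-year view of ROI differs from a year-one snapshot. The figures are purely illustrative assumptions, not Ecosystm data or benchmarks.

```python
# Illustrative only: hypothetical costs and benefits for a single AI initiative.
# Upfront investment (data readiness, infrastructure, change management) lands
# early, while benefits compound in later years.

upfront_investment = 2_000_000
annual_running_cost = 300_000   # model maintenance, monitoring, support
annual_benefits = [200_000, 600_000, 1_200_000, 1_800_000, 2_200_000]  # years 1-5

cumulative_cost = float(upfront_investment)
cumulative_benefit = 0.0

for year, benefit in enumerate(annual_benefits, start=1):
    cumulative_cost += annual_running_cost
    cumulative_benefit += benefit
    roi = (cumulative_benefit - cumulative_cost) / cumulative_cost
    print(f"Year {year}: cumulative ROI = {roi:.0%}")
```

On these assumptions, ROI is deeply negative in year one and only turns positive around year four – the same two-to-five-year horizon most Asia Pacific organisations report expecting.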

2. Linking AI to High-Impact KPIs: Problem First, Not Tech First

Successful AI initiatives always start with a clearly defined business problem or opportunity, not the technology itself. When a precise pain point is identified upfront, AI shifts from a vague concept to a powerful solution.

An industrial firm in Asia Pacific reduced production lead time by 40% by applying AI to optimise inspection and scheduling. This result was concrete, measurable, and directly tied to business goals.

This problem-first approach ensures every AI use case links to high-impact KPIs – whether reducing downtime, improving product quality, or boosting customer satisfaction. While this short-to-medium-term focus on results might seem at odds with the long-term ROI perspective, the two are complementary. Early wins secure executive buy-in and funding, giving AI initiatives the runway needed to mature and scale for sustained strategic impact.

Together, these perspectives build a foundation for scalable AI value that balances immediate relevance with future resilience.

CIO

3. Tracking ROI Across the Lifecycle

A costly misconception is treating pilot projects as the final success marker. While pilots validate concepts, true ROI only begins once AI is integrated into operations, scaled organisation-wide, and sustained over time.

Ecosystm research reveals that only about 32% of organisations rigorously track AI outcomes with defined success metrics; most rely on ad-hoc or incomplete measures.

To capture real value, ROI must be measured across the full AI lifecycle. This includes infrastructure upgrades needed for scaling, ongoing model maintenance (retraining and tuning), strict data governance to ensure quality and compliance, and operational support to monitor and optimise deployed AI systems.

A lifecycle perspective acknowledges that the real value – and hidden costs – emerge beyond pilots, ensuring organisations understand both the total cost of ownership and the sustained benefits.

Director of Data & AI Strategy
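
As a rough illustration of the lifecycle view, the sketch below rolls pilot, scaling, maintenance, governance, and operational support costs into a single total cost of ownership, so that benefits are judged against the full lifecycle rather than the pilot alone. The cost categories and figures are hypothetical assumptions for illustration.

```python
# Hypothetical lifecycle cost model: the pilot is only a small slice of the
# total cost of ownership (TCO) once scaling and ongoing operations are included.
lifecycle_costs = {
    "pilot": 150_000,
    "infrastructure_for_scaling": 500_000,
    "model_maintenance_per_year": 200_000,    # retraining and tuning
    "data_governance_per_year": 120_000,      # quality and compliance
    "operational_support_per_year": 180_000,  # monitoring and optimisation
}

years_in_production = 3
one_off = lifecycle_costs["pilot"] + lifecycle_costs["infrastructure_for_scaling"]
recurring = sum(v for k, v in lifecycle_costs.items() if k.endswith("_per_year"))
total_cost_of_ownership = one_off + recurring * years_in_production

annual_benefit_at_scale = 900_000
lifecycle_roi = (annual_benefit_at_scale * years_in_production
                 - total_cost_of_ownership) / total_cost_of_ownership

print(f"Pilot as share of TCO: {lifecycle_costs['pilot'] / total_cost_of_ownership:.0%}")
print(f"Lifecycle ROI over {years_in_production} years in production: {lifecycle_roi:.0%}")
```

In this example the pilot accounts for well under a tenth of total spend – a reminder of why pilot-stage metrics alone say little about real ROI.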

4. Strengthening the Foundations: Talent, Data, and Strategy

AI success hinges on strong foundations, not just models. Many projects fail due to gaps in skills, data quality, or strategic focus – directly blocking positive ROI and wasting resources.

Top organisations invest early in three pillars:

  • Data Infrastructure. Reliable, scalable data pipelines and quality controls are vital. Poor data leads to delays, errors, higher costs, and compliance risks, hurting ROI.
  • Skilled Talent. Cross-functional teams combining technical and domain expertise speed deployment, improve quality, reduce errors, and drive ongoing innovation – boosting ROI.
  • Strategic Roadmap. Clear alignment with business goals ensures resources focus on high-impact projects, secures executive support, fosters collaboration, and enables measurable outcomes through KPIs.

Strengthening these fundamentals turns AI investments into consistent growth and competitive advantage.

CTO

5. Navigating Tool Complexity: Toward Integrated AI Lifecycle Management

One of the biggest challenges in measuring AI ROI is tool fragmentation. The AI lifecycle spans multiple stages – data preparation, model development, deployment, monitoring, and impact tracking – and organisations often rely on different tools for each. MLOps platforms track model performance, BI tools measure KPIs, and governance tools ensure compliance, but these systems rarely connect seamlessly.

This disconnect creates blind spots. Metrics sit in silos, handoffs across teams become inefficient, and linking model performance to business outcomes over time becomes manual and error prone. As AI becomes more embedded in core operations, the need for integration is becoming clear.

To close this gap, organisations are adopting unified AI lifecycle management platforms. These solutions provide a centralised view of model health, usage, and business impact, enriched with governance and collaboration features. By aligning technical and business metrics, they enable faster iteration, responsible scaling, and clearer ROI across the lifecycle.

AI Strategy Lead
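
One way to picture what aligning technical and business metrics can look like is a single scorecard per use case that joins model health (typically surfaced by MLOps tooling) with the business KPI the model is meant to move (typically tracked in BI). The sketch below is a simplified, hypothetical illustration – not the data model of any particular platform – reusing the lead-time example from earlier.

```python
from dataclasses import dataclass

@dataclass
class UseCaseScorecard:
    """Joins model health signals with the business KPI a use case is meant to move."""
    use_case: str
    model_version: str
    accuracy: float        # technical health metric (from model monitoring)
    drift_alert: bool      # governance signal (from monitoring / MLOps tooling)
    kpi_name: str
    kpi_baseline: float
    kpi_current: float

    def kpi_change(self) -> float:
        """Relative change in the business KPI versus its baseline."""
        return (self.kpi_current - self.kpi_baseline) / self.kpi_baseline

scorecard = UseCaseScorecard(
    use_case="inspection and scheduling optimisation",
    model_version="v3.2",
    accuracy=0.94,
    drift_alert=False,
    kpi_name="production lead time (days)",
    kpi_baseline=10.0,
    kpi_current=6.0,
)

print(f"{scorecard.use_case}: {scorecard.kpi_name} changed by {scorecard.kpi_change():.0%}")
```

Held in one place, a record like this makes it possible to ask whether a drifting model is also eroding the KPI it was deployed to improve – the link that fragmented tooling tends to lose.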

Final Thoughts: The Cost of Inaction

Measuring AI ROI isn’t just about proving cost savings; it’s a shift in how organisations think about value. AI delivers long-term gains through better decision-making, improved compliance, more empowered employees, and the capacity to innovate continuously.

Yet too often, the cost of doing nothing is overlooked. Failing to invest in AI leads to slower adaptation, inefficient processes, and lost competitive ground. Traditional ROI models, built for short-term, linear investments, don’t account for the strategic upside of early adoption or the risks of falling behind.

That’s why leading organisations are reframing the ROI conversation. They’re looking beyond isolated productivity metrics to focus on lasting outcomes: scalable governance, adaptable talent, and future-ready business models. In a fast-evolving environment, inaction carries its own cost – one that may not appear in today’s spreadsheet but will shape tomorrow’s performance.

7 AI Myths in Financial Services

Large organisations in the banking and financial services industry have come a long way over the past two decades in cutting costs, restructuring IT systems, and redefining customer relationship management. And, as if that were not enough, they now face the challenge of adapting to ongoing global technological shifts – of having to “do something with AI” – without being AI-ready in terms of strategy, skills, and culture.

Most organisations in the industry have approached AI implementation in a conventional way, based on how they have historically managed IT initiatives. Their first attempts at experimenting with AI have led to hasty conclusions that have hardened into seven common myths. However, as experience with AI grows, these myths are gradually being debunked. Let us put them to a reality check.

1. We can rely solely on external tech companies

Even in a highly regulated industry like banking and financial services, internal processes and data management practices can vary significantly from one institution to another. Experience shows that while external providers – many of whom lack direct industry experience – can offer solutions tailored to the more obvious use cases and provide customisation, they fall short when it comes to identifying less apparent opportunities and driving fundamental changes in workflows. No one understands an institution’s data better than its own employees. Therefore, a key success factor in AI implementation is active internal ownership, involving employees directly rather than delegating the task entirely to external parties. While technology providers are essential partners, organisations must also cultivate their own internal understanding of AI to ensure successful implementation.

2. AI is here to be applied to single use cases  

In the early stages of experimenting with AI, many financial institutions treated it as a side project, focusing on developing minimum viable products and solving isolated problems to explore what worked and what didn’t. Given their inherently risk-averse nature, organisations often approached AI cautiously, addressing one use case at a time to avoid disrupting their broader IT landscape or core business. However, with AI’s potential for deep transformation, the financial services industry has an opportunity not only to address inefficiencies caused by manual, time-consuming tasks but also to question how data is created, captured, and used from the outset. This requires an ecosystem of visionary minds in the industry who join forces and see beyond deal generation. 

3. We can staff AI projects with our highly motivated junior employees and let our senior staff focus on what they do best – managing the business 

Financial institutions that still view AI as a side hustle, secondary to their day-to-day operations, often assign junior employees to handle AI implementation. However, this can be a mistake. AI projects involve numerous small yet critical decisions, and team members need the authority and experience to make informed judgments that align with the organisation’s goals. Also, resistance to change often comes from those who were not involved in shaping or developing the initiative. Experience shows that project teams with a balanced mix of seniority and diversity in perspectives tend to deliver the best results, ensuring both strategic insight and operational engagement. 

4. AI projects do not pay off 

Compared to conventional IT projects, the business cases for AI implementation – especially when limited to solving a few specific use cases – often do not pay off over a period of two to three years. Traditional IT projects can usually be executed with minimal involvement of subject matter experts, and their costs are easier to estimate based on reference projects. In contrast, AI projects are highly experimental, requiring multiple iterations, significant involvement from experts, and often lacking comparable reference projects. When AI solutions address only small parts of a process, the benefits may not be immediately apparent. However, if AI is viewed as part of a long-term transformational journey, gradually integrating into all areas of the organisation and unlocking new business opportunities over the next five to ten years, the true value of AI becomes clear. A conventional business case model cannot fully capture this long-term payoff. 

5. We are on track with AI if we have several initiatives ongoing 

Many financial institutions have begun their AI journey by launching multiple, often unrelated, use case-based projects. The sheer number of initiatives can give top management a false sense of progress, as if they are fully engaged in AI. However, investors and project teams often ask key questions: Where are these initiatives leading? How do they contribute? What is the AI vision and strategy, and how does it align with the business strategy? If these answers remain unclear, it’s difficult to claim that the organisation is truly on track with AI. To ensure that AI initiatives are impactful and aligned with business objectives, organisations must have a clear AI vision and strategy – and not rely on the number of initiatives to measure progress.

6. AI implementation projects always exceed their deadlines 

AI solutions in the banking and financial services industry are rarely off-the-shelf products. In cases of customisation or in-house development, particularly when multiple model-building iterations and user tests are required, project delays of three to nine months can occur. This is largely because organisations want to avoid rolling out solutions that do not perform reliably. The goal is to ensure that users have a positive experience with AI and embrace the change. Over time, as an organisation becomes more familiar with AI implementation, the process will become faster. 

7. We upskill our people by giving them access to AI training  

Learning by doing has always been and will remain the most effective way to learn, especially with technology. Research has shown that 90% of knowledge acquired in training is forgotten after a week if it is not applied. For organisations, the best way to digitally upskill employees is to involve them in AI implementation projects, even if it’s just a few hours per week. To evaluate their AI readiness or engagement, organisations could develop new KPIs, such as the average number of hours an employee actively engages in AI implementation or the percentage of employees serving as subject matter experts in AI projects. 
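
The engagement KPIs suggested above are straightforward to compute once participation is logged. Here is a minimal sketch using made-up employee records; the field names and figures are illustrative assumptions.

```python
# Hypothetical participation log: hours per week each employee spends on AI
# implementation work, and whether they act as a subject matter expert (SME).
employees = [
    {"name": "A", "ai_hours_per_week": 4, "is_ai_sme": True},
    {"name": "B", "ai_hours_per_week": 0, "is_ai_sme": False},
    {"name": "C", "ai_hours_per_week": 2, "is_ai_sme": False},
    {"name": "D", "ai_hours_per_week": 6, "is_ai_sme": True},
]

avg_hours = sum(e["ai_hours_per_week"] for e in employees) / len(employees)
sme_share = sum(e["is_ai_sme"] for e in employees) / len(employees)

print(f"Average hours of active AI involvement per week: {avg_hours:.1f}")
print(f"Share of employees acting as AI subject matter experts: {sme_share:.0%}")
```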

Which of these myths have you believed, and where do you already see changes?  
