Over the past year, Ecosystm has conducted extensive research, including surveys and in-depth conversations with industry leaders, to uncover the most pressing topics and trends. And unsurprisingly, AI emerged as the dominant theme.
Here are some insights from our research.
From personalised recommendations to streamlined operations, AI is transforming the products, services and processes in the BFSI industries. While leaders realise that AI holds significant potential, turning that potential into reality is often tough. Many BFSI organisations struggle to move beyond AI pilots because of some key barriers.
Despite the challenges, BFSI organisations are witnessing early AI success in these 3 areas:
1. Customer Service & Engagement
2. Risk Management & Fraud Detection
3. Process Automation & Efficiency
Customer Service & Engagement Use Cases
- Virtual Assistants and Chatbots. Delivering real-time product information and customer support
- Customer Experience Analysis. Analysing data to uncover trends and improve user experiences
- Personalised Recommendations. Providing tailored financial products based on user behaviour and preferences
“While we remain cautious about customer-facing applications, many of our AI use cases provide valuable customer insights to our employees. Human-in-the-loop is still a critical consideration.” – INSURANCE CX LEADER
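As a rough illustration of the ‘Personalised Recommendations’ use case above, product suggestions are often driven by a similarity score between a customer profile and product attributes. The minimal Python sketch below is hypothetical: the product names, feature dimensions, and weights are assumptions for illustration, not drawn from any actual deployment.

```python
# Minimal, illustrative content-based recommender for financial products.
# Feature dimensions and values are hypothetical: [risk_appetite, liquidity_need, digital_affinity]
import numpy as np

PRODUCT_FEATURES = {
    "fixed_deposit":    np.array([0.1, 0.2, 0.4]),
    "index_fund":       np.array([0.6, 0.5, 0.7]),
    "robo_advisory":    np.array([0.7, 0.6, 0.9]),
    "travel_insurance": np.array([0.3, 0.8, 0.6]),
}

def recommend(customer_profile: np.ndarray, top_n: int = 2) -> list[str]:
    """Rank products by cosine similarity to the customer's profile vector."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    scores = {name: cosine(customer_profile, feats)
              for name, feats in PRODUCT_FEATURES.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Example: a digitally engaged customer with moderate risk appetite.
print(recommend(np.array([0.6, 0.4, 0.9])))
```

In practice, such scoring would sit behind richer models and, as the quote above suggests, a human-in-the-loop review before anything reaches the customer.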
Risk Management & Fraud Detection Use Cases
- Enhanced Credit Scoring. Improved assessment of creditworthiness and risks
- Advanced Fraud Detection. Easier detection and prevention of fraudulent activities
- Comprehensive Risk Strategy. Assessment of risk factors to develop effective strategies
“We deployed enterprise-grade AI models that are making a significant impact in specialised areas like credit decisioning and risk modelling.” – BANKING DATA LEADER
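To make the ‘Advanced Fraud Detection’ use case above a little more concrete, a common building block is unsupervised anomaly detection over transaction features. The sketch below is a minimal illustration using scikit-learn’s IsolationForest on synthetic data; the features, values, and thresholds are assumptions, not a description of any institution’s production model.

```python
# Illustrative anomaly-based fraud screening with scikit-learn's IsolationForest.
# Synthetic features (amount, hour of day, merchant risk score) are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" transactions: modest amounts, daytime hours, low-risk merchants.
normal_txns = np.column_stack([
    rng.normal(80, 30, 5000),     # amount
    rng.normal(14, 4, 5000),      # hour of day
    rng.uniform(0.0, 0.3, 5000),  # merchant risk score
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_txns)

# Score new transactions: -1 flags an anomaly, 1 means "looks normal".
new_txns = np.array([
    [75, 13, 0.1],    # typical daytime purchase
    [4200, 3, 0.9],   # large amount, 3 a.m., risky merchant
])
print(model.predict(new_txns))  # typically prints [ 1 -1 ]
```

Flagged transactions would typically feed a human review queue rather than trigger automatic blocking.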
Process Automation & Efficiency Use Cases
- Backend Process Streamlining. Automating workflows and processes to boost efficiency
- Loan & Claims Processing. Speeding up application and approval processes
- Invoice Processing. Automating invoice management to minimise errors
“Our focus is on creating a mindset where employees see AI as a tool that can augment their capabilities rather than replace them.” – BANKING COO
Large organisations in the banking and financial services industry have come a long way over the past two decades in cutting costs, restructuring IT systems and redefining customer relationship management. And, as if that were not enough, they now face the challenge of adapting to ongoing global technological shifts – of having to “do something with AI” – without being AI-ready in terms of strategy, skills and culture.
Most organisations in the industry have approached AI implementation in a conventional way, based on how they have historically managed IT initiatives. Their first attempts at experimenting with AI have led to hasty conclusions that have hardened into seven common myths. As experience with AI grows, however, these myths are gradually being debunked. Let us put them through a reality check.
1. We can rely solely on external tech companies
Even in a highly regulated industry like banking and financial services, internal processes and data management practices can vary significantly from one institution to another. Experience shows that while external providers – many of whom lack direct industry experience – can offer solutions tailored to the more obvious use cases and provide customisation, they fall short when it comes to identifying less apparent opportunities and driving fundamental changes in workflows. No one understands an institution’s data better than its own employees. Therefore, a key success factor in AI implementation is active internal ownership, involving employees directly rather than delegating the task entirely to external parties. While technology providers are essential partners, organisations must also cultivate their own internal understanding of AI to ensure successful implementation.
2. AI is here to be applied to single use cases
In the early stages of experimenting with AI, many financial institutions treated it as a side project, focusing on developing minimum viable products and solving isolated problems to explore what worked and what didn’t. Given their inherently risk-averse nature, organisations often approached AI cautiously, addressing one use case at a time to avoid disrupting their broader IT landscape or core business. However, with AI’s potential for deep transformation, the financial services industry has an opportunity not only to address inefficiencies caused by manual, time-consuming tasks but also to question how data is created, captured, and used from the outset. This requires an ecosystem of visionary minds in the industry who join forces and see beyond deal generation.
3. We can staff AI projects with our highly motivated junior employees and let our senior staff focus on what they do best – managing the business
Financial institutions that still view AI as a side hustle, secondary to their day-to-day operations, often assign junior employees to handle AI implementation. However, this can be a mistake. AI projects involve numerous small yet critical decisions, and team members need the authority and experience to make informed judgments that align with the organisation’s goals. Also, resistance to change often comes from those who were not involved in shaping or developing the initiative. Experience shows that project teams with a balanced mix of seniority and diversity in perspectives tend to deliver the best results, ensuring both strategic insight and operational engagement.
4. AI projects do not pay off
Compared to conventional IT projects, the business cases for AI implementation – especially when limited to solving a few specific use cases – often do not pay off over a period of two to three years. Traditional IT projects can usually be executed with minimal involvement of subject matter experts, and their costs are easier to estimate based on reference projects. In contrast, AI projects are highly experimental, requiring multiple iterations, significant involvement from experts, and often lacking comparable reference projects. When AI solutions address only small parts of a process, the benefits may not be immediately apparent. However, if AI is viewed as part of a long-term transformational journey, gradually integrating into all areas of the organisation and unlocking new business opportunities over the next five to ten years, the true value of AI becomes clear. A conventional business case model cannot fully capture this long-term payoff.
5. We are on track with AI if we have several initiatives ongoing
Many financial institutions have begun their AI journey by launching multiple, often unrelated, use case-based projects. The sheer number of initiatives can give top management a false sense of progress, as if the organisation were fully engaged with AI. However, investors and project teams often ask key questions: Where are these initiatives leading? How do they contribute? What is the AI vision and strategy, and how does it align with the business strategy? If these answers remain unclear, it is difficult to claim that the organisation is truly on track with AI. To ensure that AI initiatives are genuinely impactful and aligned with business objectives, organisations must have a clear AI vision and strategy – and not rely on the number of initiatives as a measure of progress.
6. AI implementation projects always exceed their deadlines
AI solutions in the banking and financial services industry are rarely off-the-shelf products. In cases of customisation or in-house development, particularly when multiple model-building iterations and user tests are required, project delays of three to nine months can occur. This is largely because organisations want to avoid rolling out solutions that do not perform reliably. The goal is to ensure that users have a positive experience with AI and embrace the change. Over time, as an organisation becomes more familiar with AI implementation, the process will become faster.
7. We upskill our people by giving them access to AI training
Learning by doing has always been and will remain the most effective way to learn, especially with technology. Research has shown that 90% of knowledge acquired in training is forgotten after a week if it is not applied. For organisations, the best way to digitally upskill employees is to involve them in AI implementation projects, even if it’s just a few hours per week. To evaluate their AI readiness or engagement, organisations could develop new KPIs, such as the average number of hours an employee actively engages in AI implementation or the percentage of employees serving as subject matter experts in AI projects.
Which of these myths have you believed, and where do you already see changes?
The White House has mandated federal agencies to conduct risk assessments on AI tools and appoint officers, including Chief Artificial Intelligence Officers (CAIOs), for oversight. This directive, led by the Office of Management and Budget (OMB), aims to modernise government AI adoption and promote responsible use. Agencies must integrate AI oversight into their core functions, ensuring safety, security, and ethical use. CAIOs will be tasked with assessing AI’s impact on civil rights and market competition. Agencies have until December 1, 2024, to address non-compliant AI uses, emphasising swift implementation.
How will this impact global AI adoption? Ecosystm analysts share their views.
The Larger Impact: Setting a Global Benchmark
This sets a potential global benchmark for AI governance, with the U.S. leading the way in responsible AI use, inspiring other nations to follow suit. The emphasis on transparency and accountability could boost public trust in AI applications worldwide.
The appointment of CAIOs across U.S. federal agencies marks a significant shift towards ethical AI development and application. Through mandated risk management practices, such as independent evaluations and real-world testing, the government recognises AI’s profound impact on rights, safety, and societal norms.
This isn’t merely a regulatory action; it’s a foundational shift towards embedding ethical and responsible AI at the heart of government operations. The balance struck between fostering innovation and ensuring public safety and rights protection is particularly noteworthy.
This initiative reflects a deep understanding of AI’s dual-edged nature – the potential to significantly benefit society, countered by its risks.
The Larger Impact: Blueprint for Risk Management
In what is likely a world first, AI has brought technology, legal, and policy leaders together in a concerted effort to put guardrails around a new technology before a major disaster materialises. In just the past few months, these efforts have ranged from technology firms providing a form of legal assurance for the use of their products (for example, Microsoft’s Customer Copyright Commitment), to parliaments ratifying AI regulatory laws (such as the EU AI Act), to the current directive installing AI accountability in US federal agencies.
It is universally accepted that AI needs risk management to be responsible and acceptable – installing an accountable C-suite role is another major step in AI risk mitigation.
This is an interesting move for three reasons:
- The balance of innovation versus governance and risk management.
- Accountability mandates for each agency’s use of AI in a public and transparent manner.
- Transparency mandates regarding AI use cases and technologies, including those that may impact safety or rights.
Impact on the Private Sector: Greater Accountability
AI Governance is one of the rare occasions where government action moves faster than that of the private sector. While the immediate pressure is now on US federal agencies (and there are 438 of them) to identify and appoint CAIOs, the announcement sends a clear signal to the private sector.
Following hot on the heels of recent AI legislation, it puts AI governance straight into the Boardroom. The air is getting very thin for enterprises still in denial that AI governance has become a matter of strategic importance. And unlike the CFC ban in the Eighties (the Montreal Protocol likely set the record for concerted global action), this time the technology providers are fully on board.
There’s no excuse for delaying the acceleration of AI governance and establishing accountability for AI within organisations.
Impact on Tech Providers: More Engagement Opportunities
Technology vendors are poised to benefit from the medium to long-term acceleration of AI investment, especially those based in the U.S., given government agencies’ preferences for local sourcing.
In the short term, our advice to technology vendors and service partners is to actively engage with CAIOs in client agencies to identify existing AI usage in their tools and platforms, as well as algorithms implemented by consultants and service partners.
Once AI guardrails are established within agencies, tech providers and service partners can expedite investments by determining which of their platforms, tools, or capabilities comply with specific guardrails and which do not.
Impact on SE Asia: Promoting a Digital Innovation Hub
By 2030, Southeast Asia is poised to emerge as the world’s fourth-largest economy – much of that growth will be propelled by the adoption of AI and other emerging technologies.
The projected economic growth presents both challenges and opportunities, emphasising the urgency for regional nations to strengthen their AI governance frameworks and keep pace with international standards. This initiative highlights the critical role of AI integration for private sector businesses in Southeast Asia, urging organisations to proactively address AI’s regulatory and ethical complexities. Furthermore, it has the potential to stimulate cross-border collaboration on AI governance and innovation, bridging the U.S., Southeast Asian nations, and the private sector.
It underscores the global interconnectedness of AI policy and its impact on regional economies and business practices.
By leading with a strategic approach to AI, the U.S. sets an example for Southeast Asia and the global business community to reevaluate their AI strategies, fostering a more unified and responsible global AI ecosystem.
The Risks
U.S. government agencies face the challenge of sourcing experts in technology, legal frameworks, risk management, privacy regulations, civil rights, and security, while also identifying ongoing AI initiatives. Establishing a unified definition of AI and cataloguing processes involving ML, algorithms, or GenAI is essential, given AI’s integral role in organisational processes over the past two decades.
However, there’s a risk that focusing on AI governance may hinder adoption.
The role should prioritise establishing AI guardrails to expedite compliant initiatives while flagging those needing oversight. While these guardrails will facilitate “safe AI” investments, the documentation process could potentially delay progress.
The initiative also echoes a 20th-century mindset applied to a 21st-century dilemma. Hiring leaders and forming teams feels like a traditional approach; today, organisations can increase productivity by considering AI and automation as the initial solutions. Investing more time upfront to discover initiatives, set guardrails, and implement AI decision-making processes could significantly improve CAIO effectiveness from the outset.
In 2024, business and technology leaders will leverage the attention that Generative AI engines are receiving to test and integrate AI comprehensively across the business. Many organisations will prioritise aligning their initial Generative AI initiatives with broader AI strategies, establishing distinct short-term and long-term goals for their AI investments.
AI adoption will influence business processes, technology skills, and, in turn, reshape the product/service offerings of AI providers.
Ecosystm analysts Achim Granzen, Peter Carr, Richard Wilkins, Tim Sheedy, and Ullrich Loeffler present the top 5 AI trends in 2024.
#1 By the End of 2024, Gen AI Will Become a ‘Hygiene Factor’ for Tech Providers
AI has been widely hailed as the ‘game changer’ that will create and widen the divide between adopters and laggards, and be the deciding factor between success and failure.
Cutting through the hype, strategic adoption of AI is still at a nascent stage and 2024 will be another year where companies identify use cases, experiment with POCs, and commit renewed efforts to get their data assets in order.
The biggest impact of AI will come from AI capabilities integrated into standard packaged software and products – and this will include Generative AI. We will see a plethora of product releases that seamlessly weave Generative AI into everyday tools, generating new value through increased efficiency and user-friendliness.
Technology will be the first industry where AI becomes the deciding factor between success and failure; tech providers will be forced to deliver on their AI promises or be left behind.
#2 Gen AI Will Disrupt the Role of IT Architects
Traditionally, IT has relied on three-tier application architectures, which faced limitations in scalability and real-time responsiveness. The emergence of microservices, containerisation, and serverless computing has paved the way for event-driven designs – a paradigm shift that decouples components and uses events, such as user actions or data updates, as triggers for actions across distributed services. This approach enhances agility, scalability, and flexibility.
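As a deliberately simplified illustration of that decoupling, the Python sketch below implements a tiny in-process event bus: a publisher emits an event, such as a loan application being submitted, and independent handlers react to it without knowing about one another. The event names and handlers are hypothetical, and a real event-driven system would use a message broker or cloud event service rather than an in-memory dictionary.

```python
# Minimal in-process event bus illustrating event-driven decoupling.
# A real system would use a message broker; this is only a conceptual sketch.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Each handler reacts independently; the publisher knows none of them.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
bus.subscribe("loan.application.submitted",
              lambda e: print("Run credit check for", e["customer_id"]))
bus.subscribe("loan.application.submitted",
              lambda e: print("Notify relationship manager for", e["customer_id"]))

bus.publish("loan.application.submitted", {"customer_id": "C-1024"})
```

Adding a new reaction to an event means adding a subscriber rather than modifying the publisher – which is precisely the agility and flexibility the paradigm promises.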
The shift towards event-driven designs and advanced architectural patterns presents a compelling challenge for IT Architects, as traditionally their role revolved around designing, planning and overseeing complex systems.
Generative AI is progressively demonstrating capabilities in architectural design through pattern recognition, predictive analytics, and automated decision-making.
With the adoption of Generative AI, the IT Architect’s role will evolve into a symbiotic one, in which human expertise collaborates with AI-generated insights.
#3 Gen AI Adoption Will be Confined to Specific Use Cases
A little over a year ago, a new era in AI began with the initial release of OpenAI’s ChatGPT. Since then, many organisations have launched Generative AI pilots.
In its second year, enterprises will start adopting it – but only in strictly defined and limited use cases. Examples such as Microsoft Copilot demonstrate an early-adopter route. While the productivity gains for individuals can be significant, the enterprise-level impact is unclear (at this time).
But there are impactful use cases in enterprise knowledge and document management. Organisations across industries have decades (or even a century) of information, including digitised documents and staff expertise. That treasure trove of information can be made accessible through cognitive search and semantic answering, driven by Generative AI.
Generative AI will provide organisations with a way to access, distill, and create value out of that data – a task that may well be impossible to achieve in any other way.
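As a conceptual sketch of that cognitive search pattern, the snippet below embeds documents and a query into vectors and returns the closest match. The embed() function is a crude stand-in (a character trigram hash) used purely so the example runs; in practice the vectors would come from an embedding model, and a Generative AI model would compose the answer from the retrieved passages. The document names and contents are invented for illustration.

```python
# Conceptual sketch of embedding-based semantic retrieval over an internal document store.
# embed() is a placeholder; a real system would call an embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash character trigrams into a fixed-size vector."""
    vec = np.zeros(256)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 256] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

documents = {
    "claims_manual_2017.txt": "Procedure for assessing motor claims and fraud referrals.",
    "kyc_policy.txt": "Customer due diligence and KYC documentation requirements.",
    "mortgage_faq.txt": "Frequently asked questions on mortgage approval timelines.",
}
doc_vectors = {name: embed(text) for name, text in documents.items()}

def search(query: str, top_n: int = 1) -> list[str]:
    """Return the names of the documents most similar to the query."""
    q = embed(query)
    scores = {name: float(q @ v) for name, v in doc_vectors.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(search("how long does mortgage approval take"))
```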
#4 Gen AI Will Get Press Inches; ‘Traditional’ AI Will Do the Hard Work
While the use cases for Generative AI will continue to expand, the deployment models and architectures for enterprise Generative AI do not add up – yet.
Running Generative AI in organisations’ data centres is costly and using public models for all but the most obvious use cases is too risky. Most organisations opt for a “small target” strategy, implementing Generative AI in isolated use cases within specific processes, teams, or functions. Justifying investment in hardware, software, and services for an internal AI platform is challenging when the payback for each AI initiative is not substantial.
“Traditional AI/ML” will remain the workhorse, with a significant rise in use cases and deployments. Organisations are used to investing in AI use case by use case. Managing process change and training is also more straightforward with traditional AI, as the changes are implemented in a system or platform, eliminating the need to retrain large numbers of knowledge workers.
#5 AI Will Pioneer a 21st Century BPM Renaissance
As we near the 25-year milestone of the 21st century, it becomes clear that many businesses are still operating with 20th-century practices and philosophies.
AI, however, represents more than a technological breakthrough; it offers a new perspective on how businesses operate and is akin to a modern interpretation of Business Process Management (BPM). This development carries substantial consequences for digital transformation strategies. To fully exploit the potential of AI, organisations need to commit to an extensive and ongoing effort that spans the collection, organisation, and expansion of data through to the integration of these insights at the application and workflow level.
The role of AI will transcend technological innovation, becoming a driving force for substantial business transformation. Sectors that specialise in workflow, data management, and organisational transformation are poised to see the most growth in 2024 because of this shift.