Organisations aiming to improve customer experience are seeing diminishing returns: the significant gains made before and during the pandemic have given way to incremental improvements. Many organisations report stagnant or declining CX and NPS scores as they prioritise profit over customer growth, while their digital experiences converge into undifferentiated sameness. The evolving digital landscape has also raised baseline customer expectations.
In 2024, CX programs will be focused and measurable – with greater involvement of Sales, Marketing, Brand, and Customer Service to ensure CX initiatives are unified across the entire customer journey.
Organisations will reassess CX strategies, choosing impactful initiatives and aligning with brand values. This recalibration, unique to each organisation, may include reinvesting in human channels, improving digital experiences, or reimagining customer ecosystems.
#2 Sentiment Analysis Will Fuel CX Improvement
Organisations strive to design seamless customer journeys – yet they often miss the mark in crafting truly memorable experiences that forge emotional connections and turn customers into brand advocates.
Customers want on-demand information and service; failure to meet these expectations often leads to discontent and frustration. This is further heightened when organisations fail to recognise and respond to these emotions.
Sentiment analysis will shape CX improvements – and technological advancements, such as neural networks, promise higher accuracy in sentiment analysis by detecting intricate relationships between emotions, phrases, and words.
These models explore multiple permutations, delving deeper to interpret the meaning behind different sentiment clusters.
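To make the idea concrete, here is a deliberately simple sketch of word-level sentiment scoring. It uses a hypothetical mini-lexicon rather than the neural models described above – those learn word-emotion relationships from data instead of relying on a fixed word list – but it illustrates the basic task of turning customer text into a sentiment signal.

```python
# Illustrative only: a tiny lexicon-based sentiment scorer. Neural
# approaches learn these word-emotion weights from data; here a
# hypothetical fixed lexicon stands in for that learned knowledge.
LEXICON = {
    "great": 1.0, "love": 1.0, "helpful": 0.8,
    "slow": -0.6, "frustrating": -1.0, "broken": -0.9,
}

NEGATORS = {"not", "never", "no"}

def sentiment_score(text: str) -> float:
    """Average the lexicon weights of known words, flipping the sign
    of a word when it is immediately preceded by a negator."""
    words = text.lower().split()
    scores = []
    for i, word in enumerate(words):
        word = word.strip(".,!?")
        if word in LEXICON:
            weight = LEXICON[word]
            if i > 0 and words[i - 1] in NEGATORS:
                weight = -weight
            scores.append(weight)
    return sum(scores) / len(scores) if scores else 0.0
```

Even this toy version shows why context matters: "helpful" and "not helpful" score in opposite directions, the kind of relationship neural models capture at far greater depth.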
#3 AI Will Elevate VoC from Surveys to Experience Improvement
In 2024, AI technologies will transform Voice of Customer (VoC) programs from measurement practices into the engine room of the experience improvement function.
The focus will move from measurement to action – backed by AI. AI is already playing a pivotal role in analysing vast volumes of data, including unstructured and unsolicited feedback. In 2024, VoC programs will shift gear to focus on driving a customer-centric culture and business change. AI will augment insight interpretation, recommend actions, and predict customer behaviour, sentiment, and churn to elevate customer experiences.
Organisations that don’t embrace an AI-driven paradigm will get left behind as they fail to showcase and deliver ROI to the business.
#4 Generative AI Platforms Will Replace Knowledge Management Tools
Most organisations have more customer knowledge management tools and platforms than they should. They exist in the contact centre, on the website, the mobile app, in-store, at branches, and within customer service. There are two challenges that this creates:
Inconsistent knowledge. The information in the different knowledge bases is different and sometimes conflicting.
Difficult to extract answers. The knowledge contained in these platforms is often in PDFs and long form documents.
Generative AI tools will consolidate organisational knowledge, enhancing searchability.
Customers and contact centre agents will be able to get actual answers to their questions – answers that are consistent across touchpoints (assuming the initiatives are comprehensive, customer-journey and organisation-wide).
#5 Experience Orchestration Will Accelerate
Despite the ongoing effort to streamline and simplify the CX, organisations often implement new technologies, such as conversational AI, digital and social channels, as independent projects. This fragmented approach, driven by the desire for quick wins using best-in-class point solutions, results in a complex CX technology architecture.
With the proliferation of point solution vendors, it is becoming critical to eliminate the silos. The fragmentation hampers CX teams from achieving their goals, leading to increased costs, limited insights, a weak understanding of customer journeys, and inconsistent services.
Embracing CX unification through an orchestration platform enables organisations to enhance the CX rapidly, with reduced concerns about tech debt and legacy issues.
#1 State-sponsored Attacks Will Alter the Nature Of Security Threats
It is becoming clearer that the post-Cold-War era is over, and we are transitioning to a multi-polar world. In this new age, malevolent governments will become increasingly emboldened to carry out cyber and physical attacks without fear of sanction.
Unlike most malicious actors driven by profit today, state adversaries will be motivated to maximise disruption.
Rather than encrypting valuable data with ransomware, state actors will deploy wiper malware. State-sponsored attacks against critical infrastructure, such as transportation, energy, and undersea cables, will be designed to inflict irreversible damage. The recent 23andMe breach is an example of how ethnically targeted attacks could be designed to sow fear and distrust. Additionally, even the threat of spyware and phishing will cause some activists, journalists, and politicians to self-censor.
#2 AI Legislation Breaches Will Occur, But Will Go Unpunished
With US President Biden’s recently published “Executive order on Safe, Secure and Trustworthy AI” and the European Union’s “AI Act” set for adoption by the European Parliament in mid-2024, codified and enforceable AI legislation is on the verge of becoming reality. However, oversight structures with powers to enforce the rules are currently not in place for either initiative and will take time to build out.
In 2024, the first instances of AI legislation violations will surface – potentially revealed by whistleblowers or significant public AI failures – but no legal action will be taken yet.
#3 AI Will Increase Net-New Carbon Emissions
In an age focused on reducing carbon and greenhouse gas emissions, AI is contributing to the opposite. Organisations often fail to track these emissions under the broader “Scope 3” category. Researchers at the University of Massachusetts, Amherst, found that training a single AI model can emit over 283 tonnes of carbon dioxide – equal to the annual emissions of 62.6 gasoline-powered vehicles.
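A quick back-of-envelope check shows where the vehicle equivalence comes from, assuming roughly 4.5 tonnes of CO2 per typical gasoline passenger vehicle per year (close to the EPA's estimate; the exact divisor implied by the figures above is ~4.52 tonnes):

```python
# Back-of-envelope check of the cited equivalence. The per-vehicle
# figure is an assumption (~EPA's annual estimate for a typical
# gasoline passenger car), not taken from the study itself.
training_emissions_t = 283           # tonnes CO2 for one model training run
vehicle_emissions_t_per_year = 4.52  # assumed annual emissions per vehicle

equivalent_vehicles = training_emissions_t / vehicle_emissions_t_per_year
print(f"{equivalent_vehicles:.1f} vehicle-years")  # ~62.6
```

The point of the arithmetic is less the exact number than its scale: a single training run can rival the annual footprint of a small vehicle fleet, and repeated retraining multiplies it.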
Organisations rely on cloud providers for carbon emission reduction – Amazon targets net-zero by 2040, while Microsoft and Google aim for 2030 – yet transparency on AI greenhouse gas emissions is limited. The diverse routes these providers take to net-zero will determine the actual level of greenhouse gas emissions.
Some argue that AI can help in better mapping a path to net-zero, but there is concern about whether the damage caused in the process will outweigh the benefits.
#4 ESG Will Transform into GSE to Become the Future of GRC
Previously viewed as a standalone concept, ESG will be increasingly recognised as integral to Governance, Risk, and Compliance (GRC) practices. The ‘E’ in ESG, representing environmental concerns, is becoming synonymous with compliance due to growing environmental regulations. The ‘S’, or social aspect, is merging with risk management, addressing contemporary issues such as ethical supply chains, workplace equity, and modern slavery, which traditional GRC models often overlook. Governance continues to be a crucial component.
The key to organisational adoption and transformation will be understanding that ESG is not an isolated function but is intricately linked with existing GRC capabilities.
This will present opportunities for GRC and Risk Management providers to adapt their current solutions, already deployed within organisations, to enhance ESG effectiveness. This strategy promises mutual benefits, improving compliance and risk management while simultaneously advancing ESG initiatives.
#5 Productivity Will Dominate Workforce Conversations
The skills discussions have shifted significantly over 2023. At the start of the year, HR leaders were still dealing with the ‘productivity conundrum’ – balancing employee flexibility and productivity in a hybrid work setting. There were also concerns about skills shortage, particularly in IT, as organisations prioritised tech-driven transformation and innovation.
Now, the focus is on assessing the pros and cons (mainly ROI) of providing employees with advanced productivity tools. For example, early studies on Microsoft Copilot showed that 70% of users experienced increased productivity. Discussions, including Narayana Murthy’s remarks on 70-hour work weeks, have re-ignited conversations about employee well-being and the impact of technology in enabling employees to achieve more in less time.
Against the backdrop of skills shortages and the need for better employee experience to retain talent, organisations are increasingly adopting/upgrading their productivity tools – starting with their Sales & Marketing functions.
Earlier in the year, Microsoft unveiled its vision for Copilot, a digital companion that aims to provide a unified user experience across Bing, Edge, Microsoft 365, and Windows. The rollout began with Windows in September and expanded to Microsoft 365 Copilot for enterprise customers this month.
Many organisations across Asia Pacific will soon face the question of whether to invest in Microsoft 365 Copilot – despite its current limitations in supporting all regional languages. Copilot is currently supported in English (US, GB, AU, CA, IN), Japanese, and Chinese Simplified. Microsoft plans to support more languages, such as Arabic, Chinese Traditional, Korean, and Thai, over the first half of 2024. Several languages used across Asia Pacific will not be supported until at least the second half of 2024 or later.
Access to Microsoft 365 Copilot comes with prerequisites. Organisations need either a Microsoft 365 E3 or E5 license and an Azure Active Directory account; F3 licenses do not currently have access. For E3 license holders, adding Copilot would nearly double the cost per user – a significant extra spend that will need to deliver measurable, tangible benefits and a strong business case. It is doubtful whether most organisations will be able to justify it.
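The scale of that uplift is easy to sketch, assuming the 2023 US list prices (Microsoft 365 E3 at roughly US$36 and the Copilot add-on at US$30 per user per month; actual pricing varies by region and agreement):

```python
# Rough licensing-uplift illustration. Prices are assumed 2023 US
# list prices, not a quote: E3 ~US$36/user/month, Copilot add-on
# US$30/user/month on top of the base license.
e3_per_user_month = 36.00
copilot_per_user_month = 30.00

uplift_pct = copilot_per_user_month / e3_per_user_month * 100
annual_cost_1000_users = copilot_per_user_month * 12 * 1000

print(f"Per-user uplift on E3: {uplift_pct:.0f}%")   # ~83% - nearly double
print(f"Annual add-on cost, 1,000 users: ${annual_cost_1000_users:,.0f}")
```

For a 1,000-seat deployment, the add-on alone approaches US$360,000 a year – the kind of figure the business case will have to recover in saved knowledge-worker hours.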
However, Copilot has the potential to significantly enhance the productivity of knowledge workers, saving them many hours each week, with hundreds of use cases already emerging for different industries and user profiles. Microsoft is offering a plethora of information on how to best adopt, deploy, and use Copilot. The key focus when building a business case should revolve around how knowledge workers will use this extra time.
Maximising Copilot Integration: Steps to Drive Adoption and Enhance Productivity
Identifying use cases, building the business proposal, and securing funding for Copilot is only half the battle. Driving the change and ensuring all relevant employees use the new processes will be significantly harder. Consider how employees currently use their productivity tools compared to 15 years ago, with many still relying on the same features and capabilities in their Office suites as they did in earlier versions. In cases where new features were embraced, it typically occurred because knowledge workers didn’t have to make any additional efforts to incorporate them, such as the auto-type ahead functions in email or the seamless integration of Teams calls.
The ability of your organisation to seamlessly integrate Copilot into daily workflows, optimising productivity and efficiency while harnessing AI-generated data and insights for decision-making will be of paramount importance. It will be equally important to be watchful to mitigate potential risks associated with an over-reliance on AI without sufficient oversight.
Implementing Copilot will require some essential steps:
Training and onboarding. Provide comprehensive training to employees on how to use Copilot’s features within Microsoft 365 applications.
Integration into daily tasks. Encourage employees to use Copilot for drafting emails, documents, and generating meeting notes to familiarise them with its capabilities.
Customisation. Tailor Copilot’s settings and suggestions to align with company-specific needs and workflows.
Automation. Create bots, templates, integrations, and other automation functions for multiple use cases. For example, when users first log onto their PC, they could get a summary of missed emails and chats – without needing to request it.
Feedback loop. Implement a feedback mechanism to monitor how Copilot is used and to make adjustments based on user experiences.
Evaluating effectiveness. Regularly gauge how Copilot’s features are enhancing productivity and adjust usage strategies accordingly. Focus on the increased productivity – what knowledge workers now achieve with the time Copilot frees up.
Changing the behaviours of knowledge workers can be challenging – particularly for basic processes that they have been using for years or even decades. Knowledge of use cases and opportunities for Copilot will not just filter across the organisation. Implementing formal training and educational programs and backing them up with refresher courses is important to ensure compliance and productivity gains.
The European Commission (EC) published an initial legislative proposal in 2021, and the European Parliament adopted a revised version as its official position on AI in June 2023, moving the legislative process into its final phase.
This proposed EU AI Act takes a risk management approach to regulating AI. Organisations looking to employ AI must take note: an internal risk management approach to deploying AI would essentially be mandated by the Act. It is likely that other legislative initiatives will follow a similar approach, making the AI Act a potential role model for global legislations (following the trail blazed by the General Data Protection Regulation). The “G7 Hiroshima AI Process”, established at the G7 summit in Japan in May 2023, is a key example of international discussion and collaboration on the topic (with a focus on Generative AI).
Risk Classification and Regulations in the EU AI Act
At the heart of the AI Act is a system to assess the risk level of AI technology, classify the technology (or its use case) into one of four tiers – unacceptable, high, limited, and minimal risk – and prescribe appropriate regulations for each risk class.
For each of these four risk levels, the AI Act proposes a set of rules and regulations. Evidently, the regulatory focus is on High-Risk AI systems.
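As a simplified summary, the four tiers and their broad regulatory treatment can be sketched as a lookup table (illustrative only, condensed from the Act's structure, and not legal guidance):

```python
# Simplified summary of the EU AI Act's four risk tiers and their
# broad regulatory treatment. Illustrative condensation only - the
# Act itself defines the tiers and obligations in far more detail.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring by governments)",
    "high":         "permitted under strict obligations: risk management, "
                    "data governance, documentation, human oversight",
    "limited":      "transparency obligations (e.g. disclosing chatbot use)",
    "minimal":      "no additional obligations beyond existing law",
}

def obligations_for(tier: str) -> str:
    """Look up the broad regulatory treatment for a given risk tier."""
    return RISK_TIERS[tier.lower()]
```

The asymmetry is visible even at this level of abstraction: almost all of the Act's compliance machinery attaches to the high-risk tier, which is why that is where the regulatory debate concentrates.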
Contrasting Approaches: EU AI Act vs. UK’s Pro-Innovation Regulatory Approach
The AI Act has received its share of criticism, and somewhat different approaches are being considered, notably in the UK. One line of criticism concerns the lack of clarity and vagueness of its concepts (particularly around person-related data and systems). Another concerns its strong focus on protecting rights and individuals, highlighting the potential negative economic impact for EU organisations looking to leverage AI and for EU tech companies developing AI systems.
A white paper published by the UK government in March 2023, perhaps tellingly named “A pro-innovation approach to AI regulation”, emphasises a “pragmatic, proportionate regulatory approach … to provide a clear, pro-innovation regulatory environment”. The paper describes an approach aiming to balance the protection of individuals with economic advancement as the UK seeks to become an “AI superpower”.
Further aspects of the EU AI Act are currently being critically discussed. For example, the current text exempts all open-source AI components not part of a medium or higher risk system from regulation but lacks definition and considerations for proliferation.
Adopting AI Risk Management in Organisations: The Singapore Approach
Regardless of how exactly AI regulations will turn out around the world, organisations must start today to adopt AI risk management practices. There is an added complexity: while the EU AI Act does clearly identify high-risk AI systems and example use cases, the realisation of regulatory practices must be tackled with an industry-focused approach.
The approach taken by the Monetary Authority of Singapore (MAS) is a prime example of an industry-focused approach to AI risk management. The Veritas Consortium, led by MAS, is a public-private-tech partnership aiming to guide the financial services sector on the responsible use of AI. As there is no AI legislation in Singapore to date, the consortium currently builds on Singapore’s “Model Artificial Intelligence Governance Framework”. Additional initiatives are already underway to focus specifically on Generative AI for financial services, and to build a globally aligned framework.
To Comply with Upcoming AI Regulations, Risk Management is the Path Forward
As AI regulation initiatives move from voluntary recommendation to legislation globally, a risk management approach is at the core of all of them. Adding risk management capabilities for AI is the path forward for organisations looking to deploy AI-enhanced solutions and applications. As that task can be daunting, an industry consortium approach can help navigate the challenges and align on implementation strategies for AI risk management across the industry. Until AI legislation is in place, such industry consortia can chart the way for their industry – organisations should seek to participate now to gain a head start with AI.
Digital Workplace. As with other industries with a high percentage of knowledge workers, BFSI organisations are grappling with granting remote access to staff. Cloud-based collaboration and Fintech tools, BYOD policies, and sensitive data traversing home networks are all creating new challenges for cyber teams. Modern approaches, such as zero trust network access, privilege management, and network segmentation are necessary to ensure workers can seamlessly but securely perform their roles remotely.
Looking Beyond Technology: Evaluating the Adequacy of Compliance-Centric Cyber Strategies
The BFSI industry stands among the most rigorously regulated industries, with scrutiny intensifying after every collapse or notable breach. Cyber and data protection teams shoulder the responsibility of understanding and adhering to emerging data protection regulations and standards such as GDPR, PCI-DSS, SOC 2, and PSD2. Automating compliance procedures emerges as a compelling solution to streamline processes, mitigate risks, and curtail expenses. Technologies such as robotic process automation (RPA), low-code development, and continuous compliance monitoring are gaining prominence.
The adoption of AI to enhance security is still emerging but will accelerate rapidly. Ecosystm research shows that within the next two years, nearly 70% of BFSI organisations will have invested in SecOps. AI can help Security Operations Centres (SOCs) prioritise alerts and respond to threats faster than could be performed manually. Additionally, the expanding variety of network endpoints, including customer devices, ATMs, and tools used by frontline employees, can embrace AI-enhanced protection without introducing additional onboarding friction.
However, there is a need for BFSI organisations to look beyond compliance checklists to a more holistic cyber approach that can prioritise cyber measures continually based on the risk to the organisations. And this is one of the biggest challenges that BFSI CISOs face. Ecosystm research finds that 72% of cyber and technology leaders in the industry feel that there is limited understanding of cyber risk and governance in their organisations.
In fact, BFSI organisations must look at the interconnectedness of an intelligence-led and risk-based strategy. Thorough risk assessments let organisations prioritise vulnerability mitigation effectively. This targeted approach optimises security initiatives by focusing on high-risk areas, reducing security debt. To adapt to evolving threats, intelligence should inform risk assessment. Intelligence-led strategies empower cybersecurity leaders with real-time threat insights for proactive measures, actively tackling emerging threats and vulnerabilities – and definitely moving beyond compliance-focused strategies.