2023 has been an eventful year. In May, the WHO announced that the pandemic was no longer a global public health emergency. However, other events of 2023 will continue to shape the market, well into 2024 and beyond.
Global Conflicts. The Russian invasion of Ukraine persisted; the Israeli-Palestinian conflict escalated into war; African nations continued to see armed conflicts and political crises – all driving significant population displacement.
Banking Crisis. American regional banks collapsed – the failures of First Republic Bank and Silicon Valley Bank rank as the second and third-largest banking collapses in US history; Credit Suisse was acquired by UBS in Switzerland.
Climate Emergency. The UN’s synthesis report found that there is still a chance to limit global temperature increases to 1.5°C; Loss and Damage conversations continued without significant impact.
Power of AI. The interest in generative AI models heated up; tech vendors incorporated foundational models in their enterprise offerings – Microsoft Copilot was launched; awareness of AI risks strengthened calls for Ethical/Responsible AI.
Click below to find out what Ecosystm analysts Achim Granzen, Darian Bird, Peter Carr, Sash Mukherjee and Tim Sheedy consider the top 5 tech market forces that will impact organisations in 2024.
Click here to download ‘Ecosystm Predicts: Tech Market Dynamics 2024’ as a PDF
#1 State-sponsored Attacks Will Alter the Nature Of Security Threats
It is becoming clearer that the post-Cold-War era is over, and we are transitioning to a multi-polar world. In this new age, malevolent governments will become increasingly emboldened to carry out cyber and physical attacks without fear of sanction.
Unlike most malicious actors driven by profit today, state adversaries will be motivated to maximise disruption.
Rather than encrypting valuable data with ransomware, wiper malware will be deployed. State-sponsored attacks against critical infrastructure, such as transportation, energy, and undersea cables, will be designed to inflict irreversible damage. The recent 23andMe breach is an example of how ethnically targeted attacks could be designed to sow fear and distrust. Additionally, even the threat of spyware and phishing will cause some activists, journalists, and politicians to self-censor.

#2 AI Legislation Breaches Will Occur, But Will Go Unpunished
With US President Biden’s recently published “Executive order on Safe, Secure and Trustworthy AI” and the European Union’s “AI Act” set for adoption by the European Parliament in mid-2024, codified and enforceable AI legislation is on the verge of becoming reality. However, oversight structures with powers to enforce the rules are currently not in place for either initiative and will take time to build out.
In 2024, the first instances of AI legislation violations will surface – potentially revealed by whistleblowers or significant public AI failures – but no legal action will be taken yet.

#3 AI Will Increase Net-New Carbon Emissions
In an age focused on reducing carbon and greenhouse gas emissions, AI is contributing to the opposite. Organisations often fail to track these emissions, which fall under the broader “Scope 3” category. Researchers at the University of Massachusetts, Amherst, found that training a single AI model can emit over 283 tonnes of carbon dioxide – roughly the annual emissions of 62.6 gasoline-powered vehicles.
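As a back-of-envelope check on that comparison, here is a quick sketch. The per-vehicle figure of roughly 4.6 tonnes of CO2 per year is an assumption (the EPA’s widely cited average), not a number stated in the study itself:

```python
# Rough sanity check on the training-emissions comparison.
# Assumption: ~4.6 tonnes CO2 per average gasoline vehicle per year (EPA estimate).
TRAINING_EMISSIONS_T = 283.0        # tonnes CO2 for one large training run
VEHICLE_EMISSIONS_T_PER_YEAR = 4.6  # assumed per-vehicle annual emissions

vehicle_equivalents = TRAINING_EMISSIONS_T / VEHICLE_EMISSIONS_T_PER_YEAR
print(f"{vehicle_equivalents:.1f} vehicle-years")  # prints "61.5 vehicle-years"
```

The result lands close to the 62.6 figure cited, with the small gap explained by a slightly different per-vehicle assumption in the original comparison.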
Organisations rely on cloud providers to reduce their carbon emissions (Amazon targets net-zero by 2040; Microsoft and Google aim for 2030), yet transparency on the greenhouse gas emissions of AI workloads is limited, and the providers’ differing routes to net-zero will determine how much is actually emitted along the way.
Some argue that AI can help in better mapping a path to net-zero, but there is concern about whether the damage caused in the process will outweigh the benefits.

#4 ESG Will Transform into GSE to Become the Future of GRC
Previously viewed as a standalone concept, ESG will be increasingly recognised as integral to Governance, Risk, and Compliance (GRC) practices. The ‘E’ in ESG, representing environmental concerns, is becoming synonymous with compliance due to growing environmental regulations. The ‘S’, or social aspect, is merging with risk management, addressing contemporary issues such as ethical supply chains, workplace equity, and modern slavery, which traditional GRC models often overlook. Governance continues to be a crucial component.
The key to organisational adoption and transformation will be understanding that ESG is not an isolated function but is intricately linked with existing GRC capabilities.
This will present opportunities for GRC and Risk Management providers to adapt their current solutions, already deployed within organisations, to enhance ESG effectiveness. This strategy promises mutual benefits, improving compliance and risk management while simultaneously advancing ESG initiatives.

#5 Productivity Will Dominate Workforce Conversations
The skills discussions have shifted significantly over 2023. At the start of the year, HR leaders were still dealing with the ‘productivity conundrum’ – balancing employee flexibility and productivity in a hybrid work setting. There were also concerns about skills shortage, particularly in IT, as organisations prioritised tech-driven transformation and innovation.
Now, the focus is on assessing the pros and cons (mainly ROI) of providing employees with advanced productivity tools. For example, early studies on Microsoft Copilot showed that 70% of users experienced increased productivity. Comments such as Narayana Murthy’s remarks on 70-hour work weeks have re-ignited conversations about employee well-being and the role of technology in enabling employees to achieve more in less time.
Against the backdrop of skills shortages and the need for better employee experience to retain talent, organisations are increasingly adopting/upgrading their productivity tools – starting with their Sales & Marketing functions.


Earlier in the year, Microsoft unveiled its vision for Copilot, a digital companion that aims to provide a consistent user experience across Bing, Edge, Microsoft 365, and Windows. The rollout began with Windows in September and expanded to Microsoft 365 Copilot for enterprise customers this month.
Many organisations across Asia Pacific will soon face the question of whether to invest in Microsoft 365 Copilot – despite its current limitations in supporting all regional languages. Copilot is currently supported in English (US, GB, AU, CA, IN), Japanese, and Chinese Simplified. Microsoft plans to support more languages such as Arabic, Chinese Traditional, Korean, and Thai over the first half of 2024. Several languages used across Asia Pacific will not be supported until at least the second half of 2024 or later.
Access to Microsoft 365 Copilot comes with certain prerequisites. Organisations need a Microsoft 365 E3 or E5 license and an Azure Active Directory account; F3 licenses do not currently have access. For E3 license holders, adding Copilot would nearly double the cost per user – a significant extra spend that will need to deliver measurable, tangible benefits and a strong business case. It is doubtful whether most organisations will be able to justify it.
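To illustrate the “nearly double” point, a simple sketch follows. The list prices are assumptions based on launch pricing (around USD 36/user/month for Microsoft 365 E3 and USD 30/user/month for Copilot); actual pricing varies by region and agreement:

```python
# Illustrative per-user cost impact of adding Copilot to an E3 license.
# Assumed launch list prices (USD/user/month); real contracts will differ.
E3_PRICE = 36.00
COPILOT_PRICE = 30.00

uplift = COPILOT_PRICE / E3_PRICE
print(f"Adding Copilot raises per-user spend by {uplift:.0%}")  # prints "... by 83%"
```

At those assumed prices, the per-user uplift is roughly 83% – close enough to double that the business case needs to be explicit about where the recovered hours go.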
However, Copilot has the potential to significantly enhance the productivity of knowledge workers, saving them many hours each week, with hundreds of use cases already emerging for different industries and user profiles. Microsoft is offering a plethora of information on how to best adopt, deploy, and use Copilot. The key focus when building a business case should revolve around how knowledge workers will use this extra time.
Maximising Copilot Integration: Steps to Drive Adoption and Enhance Productivity
Identifying use cases, building the business proposal, and securing funding for Copilot is only half the battle. Driving the change and ensuring all relevant employees adopt the new processes will be significantly harder. Consider how little employees’ use of productivity tools has changed in 15 years, with many still relying on the same features and capabilities in their Office suites as they did in earlier versions. Where new features were embraced, it was typically because knowledge workers didn’t have to make any additional effort to incorporate them, such as type-ahead suggestions in email or the seamless integration of Teams calls.
The ability of your organisation to seamlessly integrate Copilot into daily workflows, optimising productivity and efficiency while harnessing AI-generated data and insights for decision-making will be of paramount importance. It will be equally important to be watchful to mitigate potential risks associated with an over-reliance on AI without sufficient oversight.
Implementing Copilot will require some essential steps:
- Training and onboarding. Provide comprehensive training to employees on how to use Copilot’s features within Microsoft 365 applications.
- Integration into daily tasks. Encourage employees to use Copilot for drafting emails, documents, and generating meeting notes to familiarise them with its capabilities.
- Customisation. Tailor Copilot’s settings and suggestions to align with company-specific needs and workflows.
- Automation. Create bots, templates, integrations, and other automation functions for multiple use cases. For example, when users first log onto their PC, they could get a summary of missed emails and chats – without needing to request it.
- Feedback loop. Implement a feedback mechanism to monitor how Copilot is used and to make adjustments based on user experiences.
- Evaluating effectiveness. Gauge how Copilot’s features are enhancing productivity regularly and adjust usage strategies accordingly. Focus on the increased productivity – what knowledge workers now achieve with the time made available by Copilot.
Changing the behaviours of knowledge workers can be challenging – particularly for basic processes that they have been using for years or even decades. Knowledge of use cases and opportunities for Copilot will not just filter across the organisation. Implementing formal training and educational programs and backing them up with refresher courses is important to ensure compliance and productivity gains.

The impact of AI on Customer Experience (CX) has been profound and continues to expand. AI delivers a range of advantages, including improved operational efficiency, cost savings, and enhanced experiences for both customers and employees.
AI-powered solutions have the capability to analyse vast volumes of customer data in real-time, providing organisations with invaluable insights into individual preferences and behaviour. When executed effectively, the ability to capture, analyse, and leverage customer data at scale gives organisations significant competitive edge. Most importantly, AI unlocks opportunities for innovation.
Read on to discover the transformative impact of AI on customer experiences.
Click here to download ‘Customer Experience Redefined: The Role of AI’ as a PDF

Generative AI has stolen the limelight in 2023 from nearly every other technology – and for good reason. The advances made by Generative AI providers have been incredible, with many human “thinking” processes now in line to be automated.
But before we had Generative AI, there was the run-of-the-mill “traditional AI”. However, despite the traditional tag, these capabilities have a long way to run within your organisation. In fact, they are often easier to implement, have less risk (and more predictability) and are easier to generate business cases for. Traditional AI systems are often already embedded in many applications, systems, and processes, and can easily be purchased as-a-service from many providers.

Unlocking the Potential of AI Across Industries
Many organisations around the world are exploring AI solutions today, and the opportunities for improvement are significant:
- Manufacturers are designing, developing and testing in digital environments, relying on AI to predict product responses to stress and environments. In the future, Generative AI will be called upon to suggest improvements.
- Retailers are using AI to monitor customer behaviours and predict next steps. Algorithms are being used to drive the best outcome for the customer and the retailer, based on previous behaviours and trained outcomes.
- Transport and logistics businesses are using AI to minimise fuel usage and driver expenses while maximising delivery loads. Smart route planning and scheduling is ensuring timely deliveries while reducing costs and saving on vehicle maintenance.
- Warehouses are enhancing the safety of their environments and efficiently moving goods with AI. Through a combination of video analytics, connected IoT devices, and logistical software, they are maximising the potential of their limited space.
- Public infrastructure providers (such as shopping centres and public transport operators) are using AI to monitor public safety. Video analytics and sensors are helping safety and security teams take public safety beyond traditional human monitoring.
AI Impacts Multiple Roles
Even within the organisation, different lines of business expect different outcomes for AI implementations.
- IT teams are monitoring infrastructure, applications, and transactions – to better understand root-cause analysis and predict upcoming failures – using AI. In fact, AIOps, one of the fastest-growing areas of AI, yields substantial productivity gains for tech teams and boosts reliability for both customers and employees.
- Finance teams are leveraging AI to understand customer payment patterns and automate the issuance of invoices and reminders, a capability increasingly being integrated into modern finance systems.
- Sales teams are using AI to discover the best prospects to target and what offers they are most likely to respond to.
- Contact centres are monitoring calls, automating suggestions, summarising records, and scheduling follow-up actions through conversational AI. This allows agents to get up to speed in a shorter period, ensuring greater customer satisfaction and increased brand loyalty.
Transitioning from Low-Risk to AI-Infused Growth
These are just a tiny selection of the opportunities for AI. And few of these need testing or business cases – many of these capabilities are available out-of-the-box or out of the cloud. They don’t need deep analysis by risk, legal, or cybersecurity teams. They just need a champion to make the call and switch them on.
One potential downside of Generative AI is that it is drawing unwarranted scrutiny onto well-established, low-risk AI applications. Many of these do not require much time from data scientists – and where they do, the challenge is often finding the data and creating the algorithm. Humans can typically understand the logic and rules that these models create – unlike Generative AI, where the outcome cannot be reverse-engineered.
The opportunity today is to take advantage of the attention that LLMs and other Generative AI engines are getting to incorporate AI into every conceivable aspect of a business. When organisations understand the opportunities for productivity improvements, speed enhancement, better customer outcomes and improved business performance, the spend on AI capabilities will skyrocket. Ecosystm estimates that for most organisations, AI spend will be less than 5% of their total tech spend in 2024 – but it is likely to grow to over 20% within the next 4-5 years.
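The spending trajectory in the paragraph above implies a steep compound growth rate for AI’s share of tech budgets. A quick sketch, assuming for illustration that the shift from 5% to 20% plays out over five years against a flat total budget:

```python
# CAGR implied by AI's share of tech spend rising from 5% to 20% over ~5 years.
# The five-year horizon and flat total budget are simplifying assumptions.
start_share, end_share, years = 0.05, 0.20, 5

cagr = (end_share / start_share) ** (1 / years) - 1
print(f"Implied CAGR of AI share: {cagr:.0%}")  # prints "Implied CAGR of AI share: 32%"
```

A quadrupling of budget share in five years implies roughly 32% compound annual growth – a useful yardstick for planning vendor and skills investments.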

It’s been barely one year since we entered the Generative AI Age. On November 30, 2022, OpenAI launched ChatGPT, with no fanfare or promotion. Since then, Generative AI has become arguably the most talked-about tech topic, both in terms of opportunities it may bring and risks that it may carry.
The landslide success of ChatGPT and other Generative AI applications with consumers and businesses has put a renewed and strengthened focus on the potential risks associated with the technology – and how best to regulate and manage these. Government bodies and agencies have published voluntary guidelines for the use of AI for a number of years now (Singapore’s Model AI Governance Framework, for example, was launched in 2019).
There is no active legislation on the development and use of AI yet. Crucially, however, a number of legislative initiatives are currently making their way through lawmaking processes globally.
EU’s Landmark AI Act: A Step Towards Global AI Regulation
The European Union’s “Artificial Intelligence Act” is a leading example. The European Commission (EC) started examining AI legislation in 2020 with a focus on
- Protecting consumers
- Safeguarding fundamental rights, and
- Avoiding unlawful discrimination or bias
The EC published an initial legislative proposal in 2021, and the European Parliament adopted a revised version as their official position on AI in June 2023, moving the legislation process to its final phase.
This proposed EU AI Act takes a risk management approach to regulating AI. Organisations looking to employ AI must take note: an internal risk management approach to deploying AI would essentially be mandated by the Act. It is likely that other legislative initiatives will follow a similar approach, making the AI Act a potential role model for global legislations (following the trail blazed by the General Data Protection Regulation). The “G7 Hiroshima AI Process”, established at the G7 summit in Japan in May 2023, is a key example of international discussion and collaboration on the topic (with a focus on Generative AI).
Risk Classification and Regulations in the EU AI Act
At the heart of the AI Act is a system to assess the risk level of AI technology, classify the technology (or its use case), and prescribe appropriate regulations to each risk class.

For each of these four risk levels, the AI Act proposes a set of rules and regulations. Evidently, the regulatory focus is on High-Risk AI systems.
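The Act’s tiered logic can be sketched as a lookup from risk class to regulatory consequence. The four tier names below (unacceptable, high, limited, minimal) follow the EC proposal’s public descriptions; the example obligations and use cases are simplified summaries for illustration, not legal guidance:

```python
# Illustrative mapping of the EU AI Act's four risk tiers to obligations.
# Tier names follow the EC proposal; the summaries are simplifications,
# not legal advice.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring by governments)",
    "high": "strict obligations: risk management, logging, human oversight",
    "limited": "transparency obligations (e.g. disclosing chatbot interactions)",
    "minimal": "no additional obligations",
}

def obligations(tier: str) -> str:
    """Return the regulatory consequence for a given risk tier."""
    return RISK_TIERS[tier.lower()]

print(obligations("High"))
```

The point for practitioners is that classification drives everything downstream: an internal AI risk register effectively has to reproduce this lookup for every deployed system.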

Contrasting Approaches: EU AI Act vs. UK’s Pro-Innovation Regulatory Approach
The AI Act has received its share of criticism, and somewhat different approaches are being considered, notably in the UK. One line of criticism concerns the lack of clarity and vagueness of key concepts (particularly around person-related data and systems). Another targets the strong focus on protecting rights and individuals, highlighting the potential negative economic impact on EU organisations looking to leverage AI and on EU tech companies developing AI systems.
A white paper published by the UK government in March 2023 – perhaps tellingly named “A pro-innovation approach to AI regulation” – emphasises a “pragmatic, proportionate regulatory approach … to provide a clear, pro-innovation regulatory environment”. The paper describes an approach that aims to balance the protection of individuals with economic advancement as the UK seeks to become an “AI superpower”.
Further aspects of the EU AI Act are currently being critically discussed. For example, the current text exempts all open-source AI components not part of a medium or higher risk system from regulation but lacks definition and considerations for proliferation.
Adopting AI Risk Management in Organisations: The Singapore Approach
Regardless of how exactly AI regulations will turn out around the world, organisations must start today to adopt AI risk management practices. There is an added complexity: while the EU AI Act does clearly identify high-risk AI systems and example use cases, the realisation of regulatory practices must be tackled with an industry-focused approach.
The approach taken by the Monetary Authority of Singapore (MAS) is a primary example of an industry-focused approach to AI risk management. The Veritas Consortium, led by MAS, is a public-private-tech partnership consortium aiming to guide the financial services sector on the responsible use of AI. As there is no AI legislation in Singapore to date, the consortium currently builds on Singapore’s aforementioned “Model Artificial Intelligence Governance Framework”. Additional initiatives are already underway to focus specifically on Generative AI for financial services, and to build a globally aligned framework.
To Comply with Upcoming AI Regulations, Risk Management is the Path Forward
As AI regulation initiatives move from voluntary recommendation to legislation globally, a risk management approach is at the core of all of them. Adding risk management capabilities for AI is the path forward for organisations looking to deploy AI-enhanced solutions and applications. As that task can be daunting, an industry consortium approach can help circumnavigate challenges and align on implementation and realisation strategies for AI risk management across the industry. Until AI legislations are in place, such industry consortia can chart the way for their industry – organisations should seek to participate now to gain a head start with AI.

The Banking, Financial Services, and Insurance (BFSI) industry, known for its cautious stance on technology, is swiftly undergoing a transformational modernisation journey. Areas such as digital customer experiences, automated fraud detection, and real-time risk assessment are all part of a technology-led roadmap. This shift is transforming the cybersecurity stance of BFSI organisations, which have conventionally favoured centralising everything within a data centre behind a firewall.
Ecosystm research finds that 75% of BFSI technology leaders believe that a data breach is inevitable. This requires taking a new cyber approach to detect threats early, reduce the impact of an attack, and avoid lateral movement across the network.
BFSI organisations will boost investments in two main areas over the next year: updating infrastructure and software, and exploring innovative domains like digital workplaces and automation. Cybersecurity investments are crucial in both of these areas.

As a regulated industry, breaches come with significant cost implications, underscoring the need to prioritise cybersecurity. BFSI cybersecurity and risk teams need to constantly reassess their strategies for safeguarding data and fulfilling compliance obligations, as they explore ways to facilitate new services for customers, partners, and employees.

The primary concerns of BFSI CISOs can be categorised into two distinct groups:
- Expanding Technology Use. This includes the proliferation of applications and devices, as well as data access beyond the network perimeter.
- Employee-Related Vulnerabilities. This involves responses to phishing and malware attempts, as well as intentional and unintentional misuse of technology.
Vulnerabilities Arising from Employee Actions
Security vulnerabilities arising from employee actions and unawareness represent a significant and ongoing concern for businesses of all sizes and industries – the risks are just much bigger for BFSI. These vulnerabilities can lead to data breaches, financial losses, damage to reputation, and legal ramifications. A multi-pronged approach is needed that combines technology, training, policies, and a culture of security consciousness.
Training and Culture. BFSI organisations prioritise comprehensive training and awareness programs, educating employees about common threats like phishing and best practices for safeguarding sensitive data. While these programs are often ongoing and adaptable to new threats, they can sometimes become mere compliance checklists, raising questions about their true effectiveness. Conducting simulated phishing attacks and security quizzes to assess employee awareness and identify areas where further training is required, can be effective.
To truly educate employees on risks, it’s essential to move beyond compliance and build a cybersecurity culture throughout the organisation. This can involve setting organisation-wide security KPIs that cascade from the CEO down to every employee, promoting accountability and transparency. Creating an environment where employees feel comfortable reporting security concerns is critical for early threat detection and mitigation.
Policies. Clear security policies and enforcement are essential for ensuring that employees understand their roles within the broader security framework, including responsibilities around strong password use, secure data handling, and prompt incident reporting. Implementing the principle of least privilege, which restricts access based on specific roles, mitigates potential harm from insider threats and inadvertent data exposure. Policies should evolve through routine security audits – including technical assessments and evaluations of employee protocol adherence – which help organisations identify vulnerabilities sooner and take the necessary corrective actions.
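The principle of least privilege described above can be sketched as a deny-by-default, role-based access check. The roles and permissions below are invented for illustration, not drawn from any real BFSI access model:

```python
# Minimal least-privilege check: each role grants only the permissions it
# strictly needs; any action not explicitly granted is denied by default.
# Roles and permission names are illustrative only.
ROLE_PERMISSIONS = {
    "teller": {"view_account", "process_deposit"},
    "analyst": {"view_account", "run_reports"},
    "auditor": {"view_account", "view_audit_log"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default; allow only explicitly granted actions."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("teller", "process_deposit"))  # prints "True"
print(is_allowed("teller", "view_audit_log"))   # prints "False"
```

The deny-by-default shape is the essential design choice: widening access requires an explicit grant, so accidental over-permissioning becomes visible in the policy itself.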
However, despite the best efforts, breaches do happen – and this is where a well-defined incident response plan, that is regularly tested and updated, is crucial to minimise the damage. This requires every employee to know their roles and responsibilities during a security incident.
Tech Expansion Leading to Cyber Complexity
Cloud. Initially hesitant to transition essential workloads to the cloud, the BFSI industry has experienced a shift in perspective as inventive SaaS-based Fintech tools and hybrid cloud solutions have created new impetus for change. This new distributed architecture requires a fresh look at cyber measures. Secure Access Service Edge (SASE) providers are integrating a range of cloud-delivered safeguards, such as FWaaS, CASB, and ZTNA, with SD-WAN to ensure organisations can securely access the cloud without compromising on performance.
Data & AI. Data holds paramount importance in the BFSI industry for informed decision-making, personalised customer experiences, risk assessment, fraud prevention, and regulatory compliance. AI applications are being used to tailor products and services, optimise operational efficiency, and stay competitive in an evolving market. As part of their technology modernisation efforts, 47% of BFSI institutions are refining their data and AI strategies. They also acknowledge the associated challenges – and satisfying risk, regulatory, and compliance requirements is one of the biggest facing BFSI organisations in their AI deployments.

The rush to experiment with Generative AI and foundation models to assist customers and employees is only heightening these concerns. There is an urgent need for policies around the use of these emerging technologies. Initiatives such as the Monetary Authority of Singapore’s Veritas that aim to enable financial institutions to evaluate their AI and data analytics solutions against the principles of fairness, ethics, accountability, and transparency (FEAT) are expected to provide the much-needed guidance to the industry.
Digital Workplace. As with other industries with a high percentage of knowledge workers, BFSI organisations are grappling with granting remote access to staff. Cloud-based collaboration and Fintech tools, BYOD policies, and sensitive data traversing home networks are all creating new challenges for cyber teams. Modern approaches, such as zero trust network access, privilege management, and network segmentation are necessary to ensure workers can seamlessly but securely perform their roles remotely.

Looking Beyond Technology: Evaluating the Adequacy of Compliance-Centric Cyber Strategies
The BFSI industry stands among the most rigorously regulated, with scrutiny intensifying after every collapse or notable breach. Cyber and data protection teams shoulder the responsibility of understanding and adhering to regulations and standards such as GDPR, PCI DSS, SOC 2, and PSD2. Automating compliance procedures emerges as a compelling way to streamline processes, mitigate risks, and curtail expenses. Technologies such as robotic process automation (RPA), low-code development, and continuous compliance monitoring are gaining prominence.
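Continuous compliance monitoring, mentioned above, amounts to repeatedly evaluating controls against a policy baseline and flagging deviations. A minimal sketch, with control names and thresholds invented for illustration:

```python
# Minimal continuous-compliance pass: evaluate each control against its
# required state and collect violations. Control names and thresholds are
# illustrative, not taken from any specific standard.
controls = {
    "encryption_at_rest": True,
    "mfa_enforced": True,
    "log_retention_days": 365,
}
policy = {
    "encryption_at_rest": lambda v: v is True,
    "mfa_enforced": lambda v: v is True,
    "log_retention_days": lambda v: v >= 90,
}

violations = [name for name, check in policy.items() if not check(controls[name])]
print("compliant" if not violations else f"violations: {violations}")  # prints "compliant"
```

In practice such checks run on a schedule or on every configuration change, so drift is surfaced in hours rather than at the next annual audit.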
The adoption of AI to enhance security is still emerging but will accelerate rapidly. Ecosystm research shows that within the next two years, nearly 70% of BFSI organisations will have invested in SecOps. AI can help Security Operations Centres (SOCs) prioritise alerts and respond to threats faster than would be possible manually. Additionally, the expanding variety of network endpoints – including customer devices, ATMs, and tools used by frontline employees – can be covered by AI-enhanced protection without introducing additional onboarding friction.
However, there is a need for BFSI organisations to look beyond compliance checklists to a more holistic cyber approach that can prioritise cyber measures continually based on the risk to the organisations. And this is one of the biggest challenges that BFSI CISOs face. Ecosystm research finds that 72% of cyber and technology leaders in the industry feel that there is limited understanding of cyber risk and governance in their organisations.
In fact, BFSI organisations must look at the interconnectedness of an intelligence-led and risk-based strategy. Thorough risk assessments let organisations prioritise vulnerability mitigation effectively. This targeted approach optimises security initiatives by focusing on high-risk areas, reducing security debt. To adapt to evolving threats, intelligence should inform risk assessment. Intelligence-led strategies empower cybersecurity leaders with real-time threat insights for proactive measures, actively tackling emerging threats and vulnerabilities – and definitely moving beyond compliance-focused strategies.

Traditional network architectures are inherently fragile, often relying on a single transport type to connect branches, production facilities, and data centres. The imperative for networks to maintain resilience has grown significantly, particularly due to the delivery of customer-facing services at branches and the increasing reliance on interconnected machines in operational environments. The cost of network downtime can now be quantified in terms of both lost customers and reduced production.
Distributed Enterprises Face New Challenges
As the importance of maintaining resiliency grows, so does the complexity of network management. Distributed enterprises must provide connectivity under challenging conditions, such as:
- Remote access for employees using video conferencing
- Local breakout for cloud services to avoid backhauling
- IoT devices left unattended in public places
- Customers accessing digital services at the branch or home
- Sites in remote areas requiring the same quality of service
Network managers require intelligent tools to remain in control without adding any unnecessary burden to end users. The number of endpoints and speed of change has made it impossible for human operators to manage without assistance from AI.

AI-Enhanced Network Management
Modern network operations centres are enhancing their visibility by aggregating data from diverse systems and consolidating them within a unified management platform. Machine learning (ML) and AI are employed to analyse data originating from enterprise networks, telecom Points of Presence (PoPs), IoT devices, cloud service providers, and user experience monitoring. These technologies enable the early identification of network issues before they reach critical levels. Intelligent networks can suggest strategies to enhance network resilience, forecast how modifications may impact performance, and are increasingly capable of autonomous responses to evolving conditions.
Here are some critical ways that AI/ML can help build resilient networks.
- Alert Noise Reduction. Network operations centres face thousands of alerts each day. As a result, operators battle alert fatigue and struggle to identify critical issues. Through the application of ML, contemporary monitoring tools can mitigate false positives, group interconnected alerts, and assist operators in prioritising the most pressing concerns. An operations team augmented with AI capabilities could potentially de-prioritise up to 90% of alerts, allowing a concentrated focus on the factors that impact network performance and resilience.
- Data Lakes. Networking vendors are building proprietary data lakes from the telemetry data generated by the infrastructure they have deployed at customer sites. This vast volume of data allows them to use ML to create a tailored baseline for each customer and to recommend actions to optimise the environment.
- Root Cause Analysis. To assist network operators in diagnosing an issue, AIOps can sift through thousands of data points and correlate them to identify a root cause. Through the integration of alerts with change feeds, operators can understand the underlying causes of network problems or outages. By using ML to understand the customer’s unique environment, AIOps can progressively accelerate time to resolution.
- Proactive Response. As management layers become capable of recommending corrective action, proactive response also becomes possible, leading to self-healing networks. With early identification of sub-optimal conditions, intelligent systems can conduct load balancing, redirect traffic to higher performing SaaS regions, auto-scale cloud instances, or terminate selected connections.
- Device Profiling. In a BYOD environment, network managers require enhanced visibility to discover devices and enforce appropriate policies on them. Automated profiling against a validated database ensures guest access can be granted without adding friction to the onboarding process. With deep packet inspection, devices can be precisely classified based on behaviour patterns.
- Dynamic Bandwidth Aggregation. A key feature of SD-WAN is that it can incorporate diverse transport types, such as fibre, 5G, and low earth orbit (LEO) satellite connectivity. Rather than using a simple primary-and-redundant architecture, bandwidth aggregation allows all circuits to be used simultaneously. By infusing intelligence into the SD-WAN layer, path selection can dynamically prioritise traffic, directing it over higher-quality links or across multiple links at once. This approach helps maintain optimal performance, even in the face of network degradation.
- Generative AI for Process Efficiency. Every tech company is trying to understand how it can leverage the power of Generative AI, and networking providers are no different. The most immediate use case will be to improve satisfaction and scalability for level 1 and level 2 support. A Generative AI-enabled service desk could provide uninterrupted support during high-volume periods, such as network outages, as well as during off-peak hours.
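The alert noise reduction described above boils down to collapsing repeated alerts into single incidents before a human sees them. The sketch below shows the simplest form of this idea – time-window deduplication; the alert fields and the 60-second window are illustrative assumptions, and real AIOps tools layer ML-based correlation on top of this kind of grouping.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical alert stream: (timestamp, device, alert_type)
alerts = [
    (datetime(2024, 1, 10, 9, 0, 0), "edge-router-1", "link-flap"),
    (datetime(2024, 1, 10, 9, 0, 5), "edge-router-1", "link-flap"),
    (datetime(2024, 1, 10, 9, 0, 9), "edge-router-1", "link-flap"),
    (datetime(2024, 1, 10, 9, 2, 0), "core-switch-3", "high-cpu"),
]

def deduplicate(alerts, window=timedelta(seconds=60)):
    """Collapse repeated (device, type) alerts inside a time window
    into a single incident carrying an occurrence count."""
    incidents = {}
    for ts, device, kind in sorted(alerts):
        key = (device, kind)
        if key in incidents and ts - incidents[key]["last_seen"] <= window:
            incidents[key]["count"] += 1
            incidents[key]["last_seen"] = ts
        else:
            incidents[key] = {"first_seen": ts, "last_seen": ts, "count": 1}
    return incidents

for (device, kind), info in deduplicate(alerts).items():
    print(f"{device} {kind}: {info['count']} occurrence(s)")
```

Here four raw alerts become two incidents – a small-scale version of how an operations team can safely de-prioritise the bulk of its alert volume.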
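The dynamic path selection mentioned in the list can also be sketched as a scoring function over per-path telemetry. The metrics, weights, and path names below are purely illustrative assumptions; a real SD-WAN controller would tune weights per traffic class (voice, video, bulk transfer) and re-evaluate continuously as link conditions change.

```python
# Hypothetical per-path telemetry for an SD-WAN with three transports
paths = {
    "fibre": {"latency_ms": 12, "loss_pct": 0.1, "jitter_ms": 2},
    "5g":    {"latency_ms": 35, "loss_pct": 0.8, "jitter_ms": 9},
    "leo":   {"latency_ms": 55, "loss_pct": 1.5, "jitter_ms": 14},
}

def path_score(m, w_latency=1.0, w_loss=50.0, w_jitter=2.0):
    """Lower is better: a weighted penalty over latency, loss, and jitter.
    Loss is weighted heavily because latency-sensitive traffic such as
    voice degrades sharply with packet loss."""
    return (w_latency * m["latency_ms"]
            + w_loss * m["loss_pct"]
            + w_jitter * m["jitter_ms"])

def select_path(paths):
    """Steer traffic to the path with the lowest penalty score."""
    return min(paths, key=lambda name: path_score(paths[name]))

print(f"Steering voice traffic over: {select_path(paths)}")
```

If the fibre circuit degrades, its penalty rises and the same function shifts traffic to 5G or LEO without operator intervention – the essence of using all circuits simultaneously rather than as a primary/backup pair.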
Initiating an AI-Driven Network Management Journey
Network managers who take advantage of AI can build highly resilient networks that maximise uptime, deliver consistently high performance, and remain secure. Some important considerations when getting started include:
- Data Catalogue. Take stock of the data sources that are available to you, whether they come from network equipment telemetry, applications, or the data lake of a managed services provider. Understand how they can be integrated into an AIOps solution.
- Start Small. Begin with a pilot in an area where good data sources are available. This will help you assess the impact that AI could have on reducing alerts, improving mean time to repair (MTTR), increasing uptime, or addressing the skills gap.
- Develop an SD-WAN/SASE Roadmap. Many advanced AI benefits are built into SD-WAN and SASE offerings. Most organisations have already adopted or will soon adopt SD-WAN; begin assessing the SASE framework now to decide whether it is suitable for your organisation.

Generative AI is seeing strong enterprise interest and early adoption, enhancing efficiency, fostering innovation, and pushing the boundaries of what is possible. It has the potential to reshape industries – and fast.
However, alongside its immense potential, Generative AI also raises concerns. Ethical considerations surrounding data privacy and security come to the forefront, as powerful AI systems handle vast amounts of sensitive information.
Addressing these concerns through responsible AI development and thoughtful regulation will be crucial to harnessing the full transformative power of Generative AI.
Read on to find out the key challenges faced in implementing Generative AI and explore emerging use cases in industries such as Financial Services, Retail, Manufacturing, and Healthcare.
Download ‘Generative AI: Industry Adoption’ as a PDF

I have spent many years analysing the mobile and end-user computing markets – going all the way back to 1995, when I was part of a Desktop PC research team, then running the European wireless and mobile comms practice, through my time at 3 Mobile in Australia, and many years after that helping clients with their end-user computing strategies. I have followed the birth of mobile data services (GPRS, WAP, and so on, through 3G, 4G, and 5G), the evolution from simple phones to powerful foldable devices, and the shift from desktop computers to a complex array of mobile computing devices that meet many and varied employee needs. I am always looking for the “next big thing” – and there have been some significant milestones: Palm devices, Blackberries, the iPhone, Android, foldables, wearables, and smaller, thinner, faster, more powerful laptops.
But over the past few years, innovation in this space has tailed off. Outside of the foldable space (which is already four years old), the major benefits of new devices are faster processors, brighter screens, and better cameras. I review a lot of great computers too (like many of the recent Surface devices) – and while they are continuously improving, not much has got my clients or me “excited” over the past few years (outside of some of the very cool accessibility initiatives).
The Force of AI
But this is all about to change. Devices are going to get smarter based on their data ecosystem, the cloud, and AI-specific local processing power. To be honest, this has been happening for some time – but most of the “magic” has been invisible to us. It happened when cameras took multiple shots and selected the best one; it happened when pixels were sharpened and images got brighter, better, and more attractive; it happened when digital assistants were called upon to answer questions and provide context.
Microsoft, among others, is about to make AI smarts more front and centre of the experience – Windows Copilot will add a smart assistant that can not only advise but also execute on that advice. It will help employees improve their focus and productivity, summarise documents and long chat threads, select music, distribute content to the right audience, and find connections. Together with Microsoft 365 Copilot, it will help knowledge workers spend less time searching and reading – and more time doing and improving.
The greater integration of public and personal data with “intent insights” will also play out on our mobile devices. We are likely to see the emergence of the much-promised “integrated app” – one that can take on many of the tasks we currently undertake across multiple applications, mobile websites, and sometimes even multiple devices. This will initially happen through the use of public LLMs like Bard and ChatGPT, but as more custom, private models emerge, they will serve very specific functions.
Focused AI Chips will Drive New Device Wars
In parallel to these developments, we expect the emergence of very specific AI processors paired to very specific AI capabilities. As local processing power becomes a necessity for some AI algorithms, broad CPUs – and even AI-focused chips (like Google’s Tensor) – will need to be complemented by specific chips that serve specific AI functions. These chips will perform the processing more efficiently, preserving the battery and improving the user experience.
While this will be a longer-term trend, it is likely to significantly change the game for what can be achieved locally on a device – enabling capabilities that are barely imaginable today. They will also spur a new wave of device competition and innovation, with a greater desire to be on the “latest and greatest” devices than we see today.
So, while the levels of device innovation have flattened, AI-driven software and chipset innovation will see current and future devices enable new levels of employee productivity and consumer capability. The focus in 2023 and beyond needs to be less on the hardware announcements and more on the platforms and tools. End-user computing strategies need to be refreshed with a new perspective around intent and intelligence. The persona-based strategies of the past have to be changed in a world where form factors and processing power are less relevant than outcomes and insights.
