All growth must end eventually. But it is a brave person who will predict the end of growth for the public cloud hyperscalers. Hyperscaler cloud revenues have been growing at between 25% and 60% over the past few years (off very different bases – and often counting different revenue streams). Even the current softening of spend we are seeing across many economies is causing only a slight slowdown.

Looking forward, we expect growth in public cloud infrastructure and platform spend to continue to slow in 2024, but to accelerate in 2025 and 2026 as businesses take advantage of new cloud services and capabilities. The sheer size of the market means that percentage growth rates will decline over time – but we forecast 2026 to deliver the largest absolute revenue growth of any year since public cloud services launched.
The factors driving this growth include:
- Acceleration of digital intensity. As countries come out of their economic slowdowns and economic activity increases, so too will digital activity. And greater volumes of digital activity will require an increase in the capacity of cloud environments on which the applications and processes are hosted.
- Increased use of AI services. Businesses and AI service providers will need access to GPUs – and eventually, specialised AI chipsets – which will see cloud bills increase significantly. The extra data storage to drive the algorithms – and the increase in CPU required to deliver customised or personalised experiences that these algorithms will direct will also drive increased cloud usage.
- Further movement of applications from on-premises to cloud. Many organisations – particularly those in the Asia Pacific region – still have the majority of their applications and tech systems sitting in data centre environments. Over the next few years, more of these applications will move to hyperscalers.
- Edge applications moving to the cloud. As the public cloud giants improve their edge computing capabilities – in partnership with hardware providers, telcos, and a broader expansion of their own networks – there will be greater opportunity to move edge applications to public cloud environments.
- Increasing number of ISVs hosting on these platforms. The move from on-premises to cloud will drive some growth in hyperscaler revenues and activities – but ISVs born in the cloud will also drive significant growth. SaaS and PaaS are typically growing faster than IaaS – but they are also drivers of the growth of cloud infrastructure services.
- Improving cloud marketplaces. Continuing on the topic of ISV partners, as the cloud hyperscalers make it easier and faster to find, buy, and integrate new services from their cloud marketplace, the adoption of cloud infrastructure services will continue to grow.
- New cloud services. No one has a crystal ball, and few people know what is being developed by Microsoft, AWS, Google, and the other cloud providers. New services will exist in the next few years that aren’t even being considered today. Perhaps Quantum Computing will start to see real business adoption? But these new services will help to drive growth – even if “legacy” cloud service adoption slows down or services are retired.

Hybrid Cloud Will Play an Important Role for Many Businesses
Growth in hyperscalers doesn’t mean that the hybrid cloud will disappear. Many organisations will hit a natural “ceiling” for their public cloud services. Regulations, proximity, cost, volumes of data, and “gravity” will see some applications remain in data centres. However, businesses will want to manage, secure, transform, and modernise these applications at the same rate and with the same tools as their public cloud environments. Therefore, hybrid and private cloud will remain important elements of the overall cloud market. Their success will depend on their ability to integrate with and support public cloud environments.
The future of cloud is big – but like all infrastructure and platforms, cloud is not a goal in itself. It is what cloud enables – and will further enable – for businesses and customers that is exciting. As the rates of digitisation and digital intensity increase, the opportunities for cloud infrastructure and platform providers will blossom. Sometimes they will be the drivers of growth, and other times they will just be supporting actors. But either way, in 2026 – 20 years after the birth of AWS – the growth in cloud services will be bigger than ever.

Organisations are moving beyond digitalisation to a focus on building market differentiation. It is widely acknowledged that customer-centric strategies lead to better business outcomes, including increased customer satisfaction, loyalty, competitiveness, growth, and profitability.
AI is the key enabler driving personalisation at scale. It has also become key to improving employee productivity, empowering them to focus on high-value tasks and deepening customer engagements.
Over the last month – at the Salesforce World Tour and over multiple analyst briefings – Salesforce has showcased their desire to solve customer challenges using AI innovations. They have announced a range of new AI innovations across Data Cloud, their integrated CRM platform.
Ecosystm Advisors Kaushik Ghatak, Niloy Mukherjee, Peter Carr, and Sash Mukherjee comment on Salesforce’s recent announcements and messaging.
Read on to find out more.
Download Ecosystm VendorSphere: Salesforce AI Innovations Transforming CRM as a PDF

It is not hyperbole to state that AI is on the cusp of having significant implications on society, business, economies, governments, individuals, cultures, politics, the arts, manufacturing, customer experience… I think you get the idea! We cannot overstate the impact that AI will have on society. In times gone by, businesses tested ideas, new products, or services with small customer segments before they went live. But with AI, we are all part of the experiment on its impacts on society – its benefits, use cases, weaknesses, and threats.
What seemed preposterous just six months ago is not only possible but EASY! Do you want a virtual version of yourself, a friend, your CEO, or your deceased family member? Sure – just feed the data. Will succession planning be more about recording all conversations and interactions with an executive so their avatar can make the decisions when they leave? Why not? How about you turn the thousands of hours of recorded customer conversations with your contact centre team into a virtual contact centre team? Your head of product can present in multiple countries in multiple languages, tailored to the customer segments, industries, geographies, or business needs at the same moment.
AI has the potential to create digital clones of your employees, it can spread fake news as easily as real news, it can be used for deception as easily as for benefit. Is your organisation prepared for the social, personal, cultural, and emotional impacts of AI? Do you know how AI will evolve in your organisation?
When we focus on the future of AI, we often interview AI leaders, business leaders, futurists, and analysts. I haven’t seen enough focus on psychologists, sociologists, historians, academics, counsellors, or even regulators! The Internet and social media changed the world more than we ever imagined – at this stage, it looks like they were just a rehearsal for the real show: Artificial Intelligence.
Lack of Government or Industry Regulation Means You Need to Self-Regulate
These rapid developments – and the notable silence from governments, lawmakers, and regulators – make the requirement for an AI Ethics Policy for your organisation urgent! Even if you have one, it probably needs updating, as the scenarios that AI can operate within are growing and changing literally every day.
- For example, your customer service team might want to create a virtual customer service agent from a real person. What is the policy on this? How will it impact the person?
- Your marketing team might be using ChatGPT or Bard for content creation. Do you have a policy specifically for the creation and use of content using assets your business does not own?
- What data is acceptable to be ingested by a public Large Language Model (LLM)? And are you governing data at creation and publishing to ensure these policies are met?
- With the impending public launch of Microsoft’s Co-Pilot AI service, what data can be ingested by Co-Pilot? How are you governing the distribution of the insights that come out of that capability?
If policies are not put in place – data tagged, staff trained – before using a tool such as Co-Pilot, your business is likely to break some privacy or employment laws on the very first day!
What do the LLMs Say About AI Ethics Policies?
So where do you go when looking for an AI Ethics policy? ChatGPT and Bard of course! I asked the two for a modern AI Ethics policy.
You can read what they generated in the graphic below.
I personally prefer the ChatGPT4 version as it is more prescriptive. At the same time, I would argue that MOST of the AI tools that your business has access to today don’t meet all of these principles. And while they are tools and the ethics should dictate the way the tools are used, with AI you cannot always separate the process and outcome from the tool.
For example, a tool that is inherently designed to learn an employee’s character, style, or mannerisms cannot be unbiased if it is based on a biased opinion (and humans have biases!).
LLMs take data, content, and insights created by others, and give it to their customers to reuse. Are you happy with your website being used as a tool to train a startup on the opportunities in the markets and customers you serve?
By making content public, you acknowledge the risk of others using it. But at least they visited your website or app to consume it. Not anymore…
A Policy is Useless if it Sits on a Shelf
Your AI ethics policy needs to be more than a published document. It should be the beginning of a conversation across the entire organisation about the use of AI. Your employees need to be trained in the policy. It needs to be part of the culture of the business – particularly as low and no-code capabilities push these AI tools, practices, and capabilities into the hands of many of your employees.
Nearly every business leader I interview mentions that their organisation is an “intelligent, data-led, business.” What is the role of AI in driving this intelligent business? If being data-driven and analytical is in the DNA of your organisation, soon AI will also be at the heart of your business. You might think you can delay your investments to get it right – but your competitors may be ahead of you.
So, as you jump head-first into the AI pool, start to create, improve and/or socialise your AI Ethics Policy. It should guide your investments, protect your brand, empower your employees, and keep your business resilient and compliant with legacy and new legislation and regulations.

Organisations have had to transform and innovate to survive over the last two years. However, now when they look at their competitors, they see that everyone has innovated at about the same pace. The 7-year innovation cycle is history in today’s world – organisations need the right strategy and technologies to bring the time to market for innovations down to 1-2 years.
As they continue to innovate to stay ahead of the competition, here are 5 things organisations in India should keep in mind:
- The drivers of innovation will shift rapidly and industry trends need to be monitored continually to adapt to these shifts.
- Their biggest challenge in deploying Data & AI solutions will be identification of the right data for the right purpose – this will require a robust data architecture.
- While customer experience gives them immediate and tangible benefits, employee experience is almost equally – if not more – important.
- Cloud investments have helped build distributed enterprises – but streamlining investments needs a lot of focus now.
- There is a misalignment between organisations’ overall awareness of growing cyber threats and risks and their responses to them. A new cyber approach is urgently needed.
More insights into the India tech market are below.
Click here to download The Future of the Digital Enterprise – India as a PDF

Southeast Asia has evolved into an innovation hub with Singapore at the centre. The entrepreneurial and startup ecosystem has grown significantly across the region – for example, Indonesia now has the 5th largest number of startups in the world.
Organisations in the region are demonstrating a strong desire for tech-led innovation, innovation in experience delivery, and in evolving their business models to bring innovative products and services to market.
Here are 5 insights on the patterns of technology adoption in Southeast Asia, based on the findings of the Ecosystm Digital Enterprise Study, 2022.
- Data and AI investments are closely linked to business outcomes. There is a clear alignment between technology and business.
- Technology teams want better control of their infrastructure. Technology modernisation also focuses on data centre consolidation and cloud strategy.
- Organisations are opting for a hybrid multicloud approach. They are not necessarily doing away with a ‘cloud first’ approach – but they have become more agnostic to where data is hosted.
- Cybersecurity underpins tech investments. Many organisations in the region do not have the maturity to handle the evolving threat landscape – and they are aware of it.
- Sustainability is an emerging focus area. While more effort needs to go into formalising these initiatives, organisations are responding to market drivers.
More insights into the Southeast Asia tech market below.
Click here to download The Future of the Digital Enterprise – Southeast Asia as a PDF

When non-organic (man-made) fabric was introduced into fashion, there were a number of harsh warnings about using polyester and man-made synthetic fibres, including their flammability.
In creating non-organic data sets, should we also be creating warnings about their use and flammability? Let’s look at why synthetic data is used in industries such as Financial Services and Automotive, as well as for new product development in Manufacturing.
Synthetic Data Defined
Synthetic data can be defined as data that is artificially developed rather than being generated by actual interactions. It is often created with the help of algorithms and is used for a wide range of activities, including as test data for new products and tools, for model validation, and in AI model training. Synthetic data is a type of data augmentation which involves creating new and representative data.
Why is it used?
The main reasons why synthetic data is used instead of real data are cost, privacy, and testing. Let’s look at more specifics on this:
- Data privacy. When privacy requirements limit data availability or how it can be used. For example, in Financial Services where restrictions around data usage and customer privacy are particularly limiting, companies are starting to use synthetic data to help them identify and eliminate bias in how they treat customers – without contravening data privacy regulations.
- Data availability. When the data needed for testing a product does not exist or is not available to the testers. This is often the case for new releases.
- Data for training. When training data is needed for machine learning algorithms but is expensive to generate in real life – as in the case of autonomous vehicles.
- Training across third parties using cloud. When moving private data to cloud infrastructures involves security and compliance risks. Moving synthetic versions of sensitive data to the cloud can enable organisations to share data sets with third parties for training across cloud infrastructures.
- Data cost. Producing synthetic data through a generative model is significantly more cost-effective and efficient than collecting real-world data. With synthetic data, it becomes cheaper and faster to produce new data once the generative model is set up.
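The cost argument above rests on a simple idea: once a generative model has been fitted to the statistical shape of a real dataset, new records can be sampled cheaply and without exposing any individual row. A minimal sketch of that idea, using hypothetical customer numbers and a plain Gaussian model (real projects would use richer generators such as copulas or GANs):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical "real" customer data: two correlated columns
# (e.g. annual income and monthly spend). In practice this
# would be a sensitive production dataset.
real = rng.multivariate_normal(
    mean=[60_000, 1_200],
    cov=[[1.5e8, 2.0e6], [2.0e6, 9.0e4]],
    size=1_000,
)

# Fit a simple generative model: estimate the mean and covariance,
# then sample fresh records that preserve the joint distribution
# without reproducing any individual customer.
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mu, sigma, size=1_000)

# Sanity check: the synthetic set should track the real set's
# statistics, e.g. the correlation between the two columns.
real_corr = np.corrcoef(real, rowvar=False)[0, 1]
synth_corr = np.corrcoef(synthetic, rowvar=False)[0, 1]
print(f"real corr: {real_corr:.2f}, synthetic corr: {synth_corr:.2f}")
```

Once the model (here, just `mu` and `sigma`) is set up, producing another million rows costs a single sampling call – which is the efficiency point made above.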

Why should it cause concern?
If the real dataset contains biases, data augmented from it will contain those biases too. So, identifying the optimal data augmentation strategy is important.
If the synthetic set doesn’t truly represent the original customer data set, it might contain the wrong buying signals regarding what customers are interested in or are inclined to buy.
Synthetic data also requires some form of output/quality control and internal regulation, specifically in highly regulated industries such as Financial Services.
Creating incorrect synthetic data also can get a company in hot water with external regulators. For example, if a company created a product that harmed someone or didn’t work as advertised, it could lead to substantial financial penalties and, possibly, closer scrutiny in the future.
Conclusion
Synthetic data allows us to continue developing new and innovative products and solutions when the data necessary to do so would not otherwise be present or available due to volume, data sensitivity, or user privacy challenges. Generating synthetic data also brings the flexibility to adjust its nature and environment as and when required, in order to improve model performance and create opportunities to check for outliers and extreme conditions.
In this Ecosystm Insight, our guest author Randeep Sudan shares his views on how Cities of the Future can leverage technology for future resilience and sustainability. “Technology is not the only aspect of Smart City initiatives. Besides technology, we need to revisit organisational and institutional structures, prioritise goals, and design and deploy an architecture with data as its foundation.”

Earlier this year, Sudan participated in a panel discussion organised by Microsoft where he shared his views on building resilient and sustainable Cities of the Future. Here are his key messages for policymakers and funding agencies that he shared in that session.
“Think ahead, Think across, and Think again! Strategic futures and predictive analytics is essential for cities and is critical for thinking ahead. It is also important to think across through data unification and creating data platforms. And the whole paradigm of innovation is thinking again.”

The process of developing advertising campaigns is evolving with the increasing use of artificial intelligence (AI). Advertisers want to optimise the amount of data at their disposal to craft better campaigns and drive more impact. Since early 2020, there has been a real push to integrate AI to help measure the effectiveness of campaigns and where to allocate ad spend. This now goes beyond media targeting and includes planning, analytics and creative. AI can assist in pattern matching, tailoring messages through AI-enabled hyper-personalisation, and analysing traffic to communicate through pattern identification of best times and means of communication. AI is being used to create ad copy; and social media and online advertising platforms are starting to roll out tools that help advertisers create better ads.
Ecosystm research shows that Media companies report that AI aids optimisation, targeting, and administrative functions such as billing (Figure 1). However, the trend of Media companies leveraging AI for content design and media analysis is growing.

WPP Strengthening Tech Capabilities
This week, WPP announced the acquisition of Satalia, a UK-based company that will consult with all WPP agencies globally to promote AI capabilities across the company and help shape the company’s AI strategy, including research and development, AI ethics, partnerships, talent, and products.
It was announced that Satalia, whose clients include BT, DFS, DS Smith, PwC, Gigaclear, Tesco and Unilever, will join Wunderman Thompson Commerce to work on the technology division of their global eCommerce consultancy. Prior to the acquisition, Satalia had launched tools such as Satalia Workforce to automate work assignments; and Satalia Delivery, for automated delivery routes and schedules. The tools have been adopted by companies including PwC, DFS, Selecta and Australian supermarket chain Woolworths.
Like other global advertising organisations, WPP has been focused on expanding the experience, commerce and technology parts of the business, most recently acquiring Brazilian software engineering company DTI Digital in February. WPP also launched their own global data consultancy, Choreograph, in April. Choreograph is WPP’s newly formed global data products and technology company focused on helping brands activate new customer experiences by turning data into intelligence. This article from last year from the WPP CTO is an interesting read on their technology strategy, especially their move to cloud to enable their strategy.

Ethics & AI – The Right Focus
The acquisition of Satalia will give WPP an opportunity to evaluate important areas such as AI ethics, partnerships, and talent, which will be significantly important in the medium term. AI ethics in advertising is also a longer-term discussion. With AI and machine learning, the system learns patterns that help steer targeting towards audiences that are more likely to convert and identify the best places to get your message in front of these buyers. Done responsibly, it should provide consumers with the ability to learn about and purchase relevant products and services. However, as we have recently discussed, AI has two main forms of bias – underrepresented data and developer bias – that also need to be looked into.
Summary
The role of AI in the orchestration of the advertising process is developing rapidly. Media firms are adopting cloud platforms, making IP investments, and developing partnerships to build the support they can offer with their advertising services. The use of AI in advertising will help mature and season the process to be even more tailored to customer preferences.

The rollout of 5G combined with edge computing in remote locations will change the way maintenance is carried out in the field. Historically, service teams performed maintenance either in a reactive fashion – fixing equipment when it broke – or using a preventative calendar-based approach. Neither of these methods is satisfactory, with the former being too late and resulting in failure while the latter is necessarily too early, resulting in excessive expenditure and downtime. The availability of connected sensors has allowed service teams to shift to condition monitoring without the need for taking equipment offline for inspections. The advent of analytics takes this approach further and has given us optimised scheduling in the form of predictive maintenance.
The next step is prescriptive maintenance in which AI can recommend action based on current and predicted condition according to expected usage or environmental circumstances. This could be as simple as alerting an operator to automatically ordering parts and scheduling multiple servicing tasks depending on forecasted production needs in the short term.

Prescriptive maintenance has only become possible with the advancement of AI and digital twin technology, but imminent improvements in connectivity and computing will take servicing to a new level. The rollout of 5G will give a boost to bandwidth, reduce latency, and increase the number of connections possible. Equipment in remote locations – such as transmission lines or machinery in resource industries – will benefit from the higher throughput of 5G connectivity, either as part of an operator’s network rollout or a private on-site deployment. Mobile machinery, particularly vehicles, which can include hundreds of sensors, will no longer need to wait until arrival before their condition can be assessed. Furthermore, vehicles equipped with external sensors can inspect stationary infrastructure as they pass by.
Edge computing – either carried out by miniature onboard devices or at smaller-scale data centres embedded in 5G networks – ensures that intensive processing can be carried out closer to equipment than in a typical cloud environment. Bandwidth-hungry applications, such as video and time series analysis, can be conducted with only metadata transmitted immediately and full archives uploaded with less urgency.
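The "summarise locally, upload later" pattern can be sketched simply. Below is an illustrative edge-side filter (the sensor name, window values, and threshold are all hypothetical): each window of readings is reduced to a small metadata record for immediate transmission, while the raw samples are queued for a deferred bulk upload.

```python
import statistics
from collections import deque

# Raw windows are held here for a later, low-priority bulk upload
# rather than being streamed in full over the 5G link.
raw_upload_queue = deque()

def process_window(sensor_id: str, samples: list[float], threshold: float = 2.0):
    """Summarise one window of readings on the edge device.

    Only the returned summary (a few bytes of metadata) is sent
    immediately; the full window stays queued on the device.
    """
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    peak = max(abs(s - mean) for s in samples)
    # Crude spike check: flag the window if the largest deviation
    # is well outside the window's own spread.
    anomalous = stdev > 0 and peak / stdev > threshold
    raw_upload_queue.append((sensor_id, samples))
    return {
        "sensor": sensor_id,
        "mean": round(mean, 3),
        "stdev": round(stdev, 3),
        "anomalous": anomalous,
    }

# One window of hypothetical vibration readings with a single spike.
window = [0.1, 0.12, 0.09, 0.11, 0.10, 0.95, 0.10, 0.11]
summary = process_window("pump-07", window)
print(summary)
```

The summary travels straight away; the contents of `raw_upload_queue` can be shipped overnight or over a cheaper link to refine cloud-side inference models.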
Prescriptive Maintenance with 5G and the Edge – Use Cases
- Transportation. Bridges built over railway lines equipped with high-speed cameras can monitor passing trains to inspect for damage. Data-intensive video analysis can be conducted on local devices for a rapid response while selected raw data can be uploaded to the cloud over 5G to improve inference models.
- Mining. Private 5G networks built in remote sites can provide connectivity between fixed equipment, vehicles, drones, robotic dogs, workers, and remote operations centres. Autonomous haulage trucks can be monitored remotely and, in the event of a breakdown, other vehicles can be automatically redirected to prevent dumping queues.
- Utilities. Emergency maintenance needs can be prioritised before extreme weather events based on meteorological forecasts and their impact on ageing parts. Machine learning can be used to understand location-specific effects of, for example, salt content in off-shore wind turbine cables. Early detection of turbine rotor cracks can recommend shutdown during high-load periods.

Data as an Asset
Effective prescriptive maintenance only becomes possible after the accumulation and integration of multiple data sources over an extended period. Inference models should understand both normal and abnormal equipment performance in various conditions, such as extreme weather, during incorrect operation, or when adjacent parts are degraded. For many smaller organisations or those deploying new equipment, the necessary volume of data will not be available without the assistance of equipment manufacturers. Moreover, even manufacturers will not have sufficient data on interaction with complementary equipment. This provides an opportunity for large operators to sell their own inference models as a new revenue stream. For example, an electrical grid operator in North America can partner with a similar, but smaller organisation in Europe to provide operational data and maintenance recommendations. Similarly, telecom providers, regional transportation providers, logistics companies, and smart cities will find industry players in other geographies that they do not naturally compete with.
Recommendations
- Employing multiple sensors. Baseline conditions and failure signatures are improved using machine learning based on feeds from multiple sensors, such as those that monitor vibration, sound, temperature, pressure, and humidity. The use of multiple sensors makes it possible to not only identify potential failure but also the reason for it and can therefore more accurately prescribe a solution to prevent an outage.
- Data assessment and integration. Prescriptive maintenance is most effective when multiple data sources are unified as inputs. Identify the location of these sources, such as ERP systems, time series on site, environmental data provided externally, or even in emails or on paper. A data fabric should be considered to ensure insights can be extracted from data no matter the environment it resides in.
- Automated action. Reduce the potential for human error or delay by automatically generating alerts and work orders for resource managers and service staff in the event of anomaly detection. Criticality measures should be adopted to help prioritise maintenance tasks and reduce alert noise.
