All growth must end eventually. But it is a brave person who will predict the end of growth for the public cloud hyperscalers. Hyperscaler cloud revenues have been growing at between 25% and 60% annually over the past few years (off very different bases, and often counting different revenue streams). Even the current softening of spend across many economies is causing only a slight slowdown.
Looking forward, we expect growth in public cloud infrastructure and platform spend to continue to slow in 2024, but to accelerate in 2025 and 2026 as businesses take advantage of new cloud services and capabilities. The sheer size of the market means percentage growth rates will be lower than in the past, but we forecast 2026 to deliver the largest absolute revenue growth of any year since public cloud services launched.
The factors driving this growth include:
- Acceleration of digital intensity. As countries come out of their economic slowdowns and economic activity increases, so too will digital activity. And greater volumes of digital activity will require an increase in the capacity of cloud environments on which the applications and processes are hosted.
- Increased use of AI services. Businesses and AI service providers will need access to GPUs, and eventually specialised AI chipsets, which will see cloud bills increase significantly. The extra data storage needed to feed these algorithms, and the additional compute required to deliver the customised or personalised experiences they direct, will also drive increased cloud usage.
- Further movement of applications from on-premises to cloud. Many organisations – particularly those in the Asia Pacific region – still have the majority of their applications and tech systems sitting in data centre environments. Over the next few years, more of these applications will move to hyperscalers.
- Edge applications moving to the cloud. As the public cloud giants improve their edge computing capabilities – in partnership with hardware providers, telcos, and a broader expansion of their own networks – there will be greater opportunity to move edge applications to public cloud environments.
- Increasing number of ISVs hosting on these platforms. The move from on-premises to cloud will drive some growth in hyperscaler revenues and activities, but ISVs born in the cloud will also drive significant growth. SaaS and PaaS are typically growing faster than IaaS, but they are also drivers of growth in cloud infrastructure services.
- Improving cloud marketplaces. Continuing on the topic of ISV partners, as the cloud hyperscalers make it easier and faster to find, buy, and integrate new services from their cloud marketplace, the adoption of cloud infrastructure services will continue to grow.
- New cloud services. No one has a crystal ball, and few people know what is being developed by Microsoft, AWS, Google, and the other cloud providers. New services will exist in the next few years that aren’t even being considered today. Perhaps Quantum Computing will start to see real business adoption? But these new services will help to drive growth – even if “legacy” cloud service adoption slows down or services are retired.
Hybrid Cloud Will Play an Important Role for Many Businesses
Growth in hyperscalers doesn’t mean that the hybrid cloud will disappear. Many organisations will hit a natural “ceiling” for their public cloud services. Regulations, proximity, cost, volumes of data, and data “gravity” will see some applications remain in data centres. However, businesses will want to manage, secure, transform, and modernise these applications at the same rate, and with the same tools, as their public cloud environments. Therefore, hybrid and private cloud will remain important elements of the overall cloud market. Their success will depend on their ability to integrate with and support public cloud environments.
The future of cloud is big, but like all infrastructure and platforms, it is not a goal in itself. What is exciting is what cloud enables, and will further enable, for businesses and customers. As the rates of digitisation and digital intensity increase, the opportunities for cloud infrastructure and platform providers will blossom. Sometimes they will be the driver of growth, and other times they will just be supporting actors. Either way, in 2026, 20 years after the birth of AWS, the growth in cloud services will be bigger than ever.
Google recently extended its Generative AI chatbot, Bard, to include coding in more than 20 programming languages, including C++, Go, Java, JavaScript, and Python. The search giant has been eager to respond to last year’s launch of ChatGPT but, as the trusted incumbent, it has naturally been hesitant to move too quickly. The tendency for large language models (LLMs) to produce controversial and erroneous outputs has the potential to tarnish established brands. Google Bard was released in March in the US and the UK but lacked the coding ability of OpenAI’s ChatGPT and Microsoft’s Bing Chat.
Bard’s new features include code generation, optimisation, debugging, and explanation. Using natural language, users can explain their requirements to the AI and ask it to generate code that can then be exported to an integrated development environment (IDE) or executed directly in the browser with Google Colab. Similarly, users can ask Bard to debug existing code, explain code snippets, or optimise code to improve performance.
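As a purely illustrative sketch of the optimise-and-explain workflow described above (the functions and the suggested rewrite are hypothetical examples, not actual Bard output), a developer might paste a slow recursive function into the chat and ask for a faster version:

```python
# Hypothetical illustration only: the kind of exchange described above,
# not actual Bard output. A developer submits a slow recursive function
# and asks the assistant to optimise it and explain the change.
from functools import lru_cache


def fibonacci_naive(n: int) -> int:
    """The code a developer might paste in: exponential-time recursion."""
    if n < 2:
        return n
    return fibonacci_naive(n - 1) + fibonacci_naive(n - 2)


@lru_cache(maxsize=None)
def fibonacci_optimised(n: int) -> int:
    """The kind of rewrite an assistant could suggest: memoisation makes it linear-time."""
    if n < 2:
        return n
    return fibonacci_optimised(n - 1) + fibonacci_optimised(n - 2)


if __name__ == "__main__":
    # Both versions agree; only the running time differs.
    assert fibonacci_naive(20) == fibonacci_optimised(20) == 6765
```

In Bard’s case, the explanation of the change would accompany the rewrite in the chat, and the result could then be exported to an IDE or run in Colab, as described above.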
Google continues to refer to Bard as an experiment and highlights that, as is the case with generated text, code produced by the AI may not function as expected. Regardless, the new functionality will be useful for both beginner and experienced developers. Those learning to code can use Generative AI to debug and explain their mistakes or write simple programs. More experienced developers can use the tool to perform lower-value work, such as commenting code or scaffolding, or to identify potential problems.
GitHub Copilot X to Face Competition
While the ability of Bard, Bing Chat, and ChatGPT to generate code is one of their most important use cases, developers are now demanding AI directly in their IDEs.
In March, Microsoft made one of its most significant announcements of the year when it demonstrated GitHub Copilot X, which embeds GPT-4 in the development environment. Earlier this year, Microsoft invested $10 billion into OpenAI to add to the $1 billion from 2019, cementing the partnership between the two AI heavyweights. Among other benefits, this agreement makes Azure the exclusive cloud provider to OpenAI and provides Microsoft with the opportunity to enhance its software with AI co-pilots.
Currently in technical preview, Copilot X will integrate into Visual Studio, Microsoft’s IDE, when it eventually launches. Presented as a sidebar or chat directly in the IDE, Copilot X will be able to generate, explain, and comment on code, debug, write unit tests, and identify vulnerabilities. The “Hey, GitHub” functionality will allow users to chat using voice, suitable for mobile users or for more natural interaction on a desktop.
Not to be outdone by its cloud rivals, in April AWS announced the general availability of what it describes as a real-time AI coding companion. Amazon CodeWhisperer integrates with a range of IDEs, namely Visual Studio Code, IntelliJ IDEA, CLion, GoLand, WebStorm, Rider, PhpStorm, PyCharm, RubyMine, and DataGrip, and works natively in AWS Cloud9 and the AWS Lambda console. While the preview worked for Python, Java, JavaScript, TypeScript, and C#, the general release extends support to most languages. Amazon’s key differentiation is that CodeWhisperer is free for individual users, while GitHub Copilot is currently subscription-based, with exceptions only for teachers, students, and maintainers of open-source projects.
The Next Step: Generative AI in Security
The next battleground for Generative AI will be assisting overworked security analysts. Currently, some of the greatest challenges that Security Operations Centres (SOCs) face are understaffing and an overwhelming volume of alerts. Security vendors, such as IBM and Securonix, have already deployed automation to reduce alert noise and help analysts prioritise tasks so they avoid chasing false threats.
Google recently introduced Sec-PaLM and Microsoft announced Security Copilot, bringing the power of Generative AI to the SOC. These tools will help analysts interact conversationally with their threat management systems and will explain alerts in natural language. How effective these tools will be remains to be seen, considering that hallucinations in a security context are far riskier than in an essay written with ChatGPT.
The Future of AI Code Generators
Although GitHub Copilot and Amazon CodeWhisperer had already launched with limited feature sets, it was the release of ChatGPT last year that ushered in a new era in AI code generation. There is now a race between the cloud hyperscalers to win over developers and to provide AI that supports other functions, such as security.
Despite fears that AI will replace humans, in its current state it is more likely to be used as a tool to augment developers. Although AI and automated testing reduce the burden on an already stretched workforce, humans will continue to be in demand to ensure code is secure and satisfies requirements. A likely scenario is that as coding becomes simpler, rather than the number of developers shrinking, the volume and quality of code written will increase. AI will generate a new wave of citizen developers able to work on projects that would previously have been impossible to start. This may, in turn, increase demand for developers to build on these proofs-of-concept.
How the Generative AI landscape evolves over the next year will be interesting. In a recent interview, OpenAI’s co-founder and CEO, Sam Altman, explained that the non-profit model it initially pursued is not feasible, necessitating the launch of a capped-profit subsidiary. The company retains its values, however, focusing on advancing AI responsibly and transparently with public consultation. The entry of Microsoft, Google, and AWS will undoubtedly change the market dynamics and may force OpenAI to at least reconsider its approach once again.
Last week, Kyndryl became a Premier Global Alliance Partner for AWS. This follows other recent similar partnerships for Kyndryl with Google and Microsoft. This now gives Kyndryl premier or similar partner status at the big three hyperscalers.
The Partnership
This new partnership was essential for Kyndryl, lending legitimacy to their independent reputation and global presence. And in many respects, it is a partnership that AWS needs as much as Kyndryl does. As one of the largest global managed services providers, Kyndryl manages a huge amount of infrastructure and thousands of applications. Today, most of these applications sit outside public cloud environments, but many of them will move to the public cloud at some stage. AWS has positioned itself to benefit from this transition, as Kyndryl will be advising clients on which cloud environment best suits their needs, and in many cases will also run the application migration and manage the application once it resides in the cloud. To that end, the further investment in developing an accelerator for VMware Cloud on AWS will also help to differentiate Kyndryl on AWS. With a high proportion of Kyndryl customers running VMware, this capability will help those users migrate their workloads to the cloud and run core business services on AWS.
The Future
Beyond the typical partnership activities, Kyndryl will build out its own internal infrastructure in the cloud, leveraging AWS as its preferred cloud provider. This experience will mean that Kyndryl “drinks its own champagne”. Many other managed services providers have not yet moved the majority of their infrastructure to the cloud, so this experience will help set Kyndryl apart from their competitors, while also generating valuable lessons and best practices.
By the end of 2022, Kyndryl expects to have trained more than 10,000 professionals on AWS. Assuming the company hits this target, they will be one of AWS’s largest partners. However, experience trumps training, and their relatively recent entry into the broader cloud ecosystem space (after coming out from under IBM’s wing at the end of 2021) means they have some way to go to match the depth and breadth of experience that other Premier Alliance Partners have today.
Ecosystm Opinion
In my recent interactions with Kyndryl, what sets them apart is that they are completely customer-focused. They start with a client problem and find the best solution for that problem. Yes, some of the “best solutions” will be partner-specific (such as SAP on Azure or VMware on AWS), but they aren’t pushing every customer down a specific path. They are not just an AWS partner where every solution to every problem starts and ends with AWS. The importance of this new partnership is that it expands Kyndryl’s capabilities, and hence the possibilities and opportunities for Kyndryl clients to benefit from the best solutions in the market, regardless of whether those solutions are on-premises or in one of the big three hyperscalers.
Internet of Things (IoT) solutions require data integration capabilities to help business leaders solve real problems. The problem, Ecosystm research finds, is that more than half of all organisations consider integration a key challenge, right behind security (Figure 1). So, chances are, you are facing similar challenges.
This should not be taken as a criticism of IoT; it is just a wake-up call for all those seeking to bring what has long been test-lab technology into an enterprise environment. I love absolutely everything about IoT. It is an essential technology. Contemporary sensor technologies are at the core of everything. It’s just that there are a lot of organisations not doing it right.
Like many technologists, I have been hooked on IoT since I first sat in a breakout session at AWS re:Invent in Las Vegas in 2015 and learned about the MQTT protocol applied to any little thing, and how I could re-order laundry detergent or beer with an AWS button, that clumsy precursor to Alexa.
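For readers who have not worked with it, the sketch below shows the essence of that MQTT pattern: a device publishing a small telemetry message to a broker topic. It assumes the open-source paho-mqtt client library, and the broker hostname, topic, and payload fields are placeholders; it illustrates the protocol in general rather than any specific AWS service.

```python
# Minimal sketch of an IoT device publishing telemetry over MQTT.
# Assumes the open-source paho-mqtt client library (pip install paho-mqtt);
# the broker hostname, topic, and payload fields are placeholders.
import json
import time

import paho.mqtt.publish as publish

BROKER_HOST = "broker.example.com"     # placeholder MQTT broker
TOPIC = "site-42/washer-07/telemetry"  # placeholder topic hierarchy

reading = {
    "device_id": "washer-07",
    "detergent_level_pct": 12,         # a low level could trigger a re-order
    "timestamp": int(time.time()),
}

# Publish a single retained message at QoS 1 (at-least-once delivery).
publish.single(
    TOPIC,
    payload=json.dumps(reading),
    qos=1,
    retain=True,
    hostname=BROKER_HOST,
    port=1883,
)
```

A backend subscribed to the same topic hierarchy can then predict and act on those readings, which is exactly the promise that session was selling.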
Parts of that presentation have stayed with me to this day. Predict and act. What business doesn’t want to be able to do that better? I can still see the room. I still have those notes. And I’m still working to help others embrace the full potential of this must-have enterprise capability.
There is no doubt that IoT is the Cinderella of smart cities, and even of digital twinning. Without it, there is no story. It is critical to contemporary organisations because of the real-time decision-making data it can provide on significant (Industry 4.0) infrastructure and service investments. That’s worth repeating: it is critical to supporting large-scale capital investments, and anyone who has been in IT for any length of time knows that justifying new IT investments to capital holders is the most elusive of business demands.
But it is also a bottom-up technology that requires a top-down business case (a challenge also faced by around 40% of organisations in the Ecosystm study) and a number of other architectural components to realise its full cost-benefit or capital growth potential. Let’s not quibble: IoT is fundamental to both operational and strategic data insights, but it is not the full story.
If IoT is the belle of the smart cities ball, then integration is the glass slipper that ties the whole story together. After four years as head of technology for a capital city deeply committed to the Smart City vision, if there was one area of IoT investment I was constantly wishing I had more of, it was integration. We were drowning in data but starved of the skills and technology to deliver true strategic insights outside of single-function domains.
This reality in no way diminishes the value of IoT. Nor is it either a binary or chicken-and-egg question of whether to invest in IoT or integration. In fact, the symbiotic market potential for both IoT and integration solutions in asset-intensive businesses is not only huge but necessary.
IoT solutions are fundamental contemporary technologies that provide the opportunity for many businesses to do well in areas where they would otherwise continue to do very poorly. They provide a foundation for digital enablement and a critical gateway to analytics for real-time and predictive decision-making.
When applied strategically and at scale, IoT provides a magical technology capability. But the bottom line is that even magic technology can never carry the day when left to do the work of other solutions. If you have already plunged into IoT, chances are it has already become your next data silo. The question now is: what are you going to do about it?
Oracle is clearly prioritising a rapid expansion across the globe. The company is in a race to catch up with the big 3 (AWS, Google, and Microsoft), and recognises that many of its customers are eager to migrate to the cloud and that they have other options. Oracle’s strategy appears to be to rely on third-party colocation providers for most of its data centres and to build a single availability zone per region, at least to start.
Oracle Cloud Rollout Ramps Up
Let us consider the following:
- Oracle’s network spending level puts it in the range of other webscalers. Focusing only on the Network and IT portion of its CapEx, Oracle has now passed Alibaba. Oracle is also ahead of both IBM and Baidu, which are included in the “All others” category in Figure 1.
- The coverage of the Oracle Cloud Infrastructure (OCI) is impressive. It has 36 regions today (some dedicated for government use), with a plan to reach 44 by year-end 2022. That compares to 27 overall for AWS, 65 for Azure, 29 for GCP; regional competitors Tencent and Huawei have 27 regions each, and Alibaba 25 regions. The downside is that Oracle has only one availability zone in most of its regions, while the Big 3 usually have 2 or 3 per region. Oracle needs to build out its local resiliency rapidly over the next year or two or risk losing business to the big 3, especially to AWS; but the company knows this and is budgeting CapEx aggressively to address the problem.
- Oracle’s initial reliance on leased facilities may be an interim step. The rapid growth of AWS, Azure, and GCP in the late 2010s was a surprise, and Oracle started to see serious risks of losing customers to these cloud platforms. Building its own cloud on new data centres would have taken years and cost it business. So, Oracle did the smart thing and leaped into the cloud as fast as possible with the resources and time available. The company has scaled its OCI operations at an impressive rate. It expects capital expenditure to double YoY for the fiscal year ending May 2022, as it increases “data centre capacities and geographic locations to meet current and expected customer demand” for OCI.
- Finally, Oracle has invested heavily in designing the servers to be installed in its data centres (even if most of them are leased). Oracle was an early investor in Ampere Computing, which makes Arm-based processors, sidestepping the Intel ecosystem. In May 2021, Oracle rolled out its first Arm-based compute offering, OCI Ampere A1 Compute, based on the Ampere Altra processor. Oracle says this allows OCI customers to run “cloud-native and general-purpose workloads on Arm-based instances with significant price-performance benefits.” Microsoft and Tencent also deploy the Ampere Altra in some locations.
Reaching Global Scale
Once Oracle decided to launch into the cloud, its goal was to both grow revenues and protect its legacy base from slipping away to the Big 3, which already had a growing global footprint. Oracle chose to quickly build cloud regions in its key markets, with the understanding that it would have to fill out individual regions as time passed. This is not that different from the big 3, in fact, but Oracle started its buildout much later. It also has fewer availability zones per region.
Oracle has not ignored this disparity. It recognises that reliability is key to clients trusting OCI. For example, the company emphasises that:
- Each Oracle Cloud region contains at least three fault domains, which are “groupings of hardware that form logical data centers for high availability and resilience to hardware and network failures.” Fault domains allow a customer to distribute instances so “the instances are not on the same physical hardware within a single availability domain.” (A minimal placement sketch follows this list.)
- OCI has a network of 70 “FastConnect” partners which offer dedicated connectivity to OCI regions and services (comparable to AWS Direct Connect).
- OCI and Microsoft Azure have a partnership allowing “joint customers” to run workloads across the two clouds, providing low latency, cross-cloud interconnect between OCI and Azure in eight specific regions. Customers can migrate existing applications or develop cloud native applications using a mix of OCI and Azure.
- Oracle allows customers to deploy OCI completely within their own data centres with Dedicated Region and Exadata Cloud@Customer, deploy cloud services locally with public cloud-based management, or deploy cloud services remotely at the edge with Roving Edge Infrastructure.
- Further, Oracle clearly tries to differentiate around its Arm-based Ampere processors. Reliability is not necessarily the emphasis here, though; the main focus is contrasting Ampere with the x86 ecosystem on overall price-performance, highlighting power efficiency, scalability, and ease of development.
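As a rough illustration of the fault domain placement described in the first point above, the sketch below uses the OCI Python SDK to launch two compute instances pinned to different fault domains within a single availability domain. All OCIDs, the availability domain name, and the shape are placeholders, and the exact parameters should be checked against Oracle’s current SDK documentation.

```python
# Rough sketch: spreading two compute instances across OCI fault domains so
# they do not share physical hardware within one availability domain.
# Assumes the OCI Python SDK (pip install oci) and a configured ~/.oci/config;
# all OCIDs, the availability domain, and the shape are placeholders.
import oci

config = oci.config.from_file()          # default profile in ~/.oci/config
compute = oci.core.ComputeClient(config)

COMPARTMENT_ID = "ocid1.compartment.oc1..example"
AVAILABILITY_DOMAIN = "Uocm:PHX-AD-1"    # placeholder availability domain
SUBNET_ID = "ocid1.subnet.oc1..example"
IMAGE_ID = "ocid1.image.oc1..example"

for i, fault_domain in enumerate(["FAULT-DOMAIN-1", "FAULT-DOMAIN-2"], start=1):
    details = oci.core.models.LaunchInstanceDetails(
        compartment_id=COMPARTMENT_ID,
        availability_domain=AVAILABILITY_DOMAIN,
        fault_domain=fault_domain,       # pin each instance to a different fault domain
        shape="VM.Standard.A1.Flex",     # Ampere A1 flexible shape (placeholder)
        shape_config=oci.core.models.LaunchInstanceShapeConfigDetails(
            ocpus=1, memory_in_gbs=6
        ),
        display_name=f"app-node-{i}",
        create_vnic_details=oci.core.models.CreateVnicDetails(subnet_id=SUBNET_ID),
        source_details=oci.core.models.InstanceSourceViaImageDetails(image_id=IMAGE_ID),
    )
    compute.launch_instance(details)
```

Spreading instances this way is what lets a customer ride out a hardware or maintenance failure in one fault domain, even where a region has only a single availability zone.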
Ultimately the market will decide whether Oracle’s approach makes it truly competitive with the big 3. The company continues to announce some big wins, including with Deutsche Bank, FedEx, NEC, Toyota, and Zoom. The latter is probably the company’s biggest cloud win given Zoom’s rise to prominence amidst the pandemic. Not surprisingly, Oracle’s recent Singapore cloud region launch was hosted by Zoom.
Conclusion
Over the long run, the webscale market is getting more concentrated in the hands of a few players; some companies tracked as webscalers, such as HPE and SAP, will fall by the wayside as they can’t keep up with the infrastructure spending requirements of being a top player. Oracle, however, is aiming to remain in the race. CEO Larry Ellison addressed this in an earnings call, arguing that the global cloud market is not just the “big 3” (AWS, Azure, and GCP), but a “big 4”, due in part to Oracle’s database strengths. Ellison also argued that OCI is “much better for security, for performance, for reliability” and cost: “we’re cheaper.” The market will ultimately decide these things, but Oracle is off to a strong start. Its asset-light approach to network buildout and its limited depth within regions clearly have drawbacks. But the company has a deep roster of long-term customers across many regions, and it is moving fast to secure their business as they migrate operations to the cloud.
Last week AWS announced plans to invest USD 5.3 billion to launch new data centres in New Zealand’s Auckland region by 2024. Apart from New Zealand, AWS has recently added new regions in Beijing, Hong Kong, Mumbai, Ningxia, Seoul, Singapore, Sydney, and Tokyo, and is set to expand into Indonesia, Israel, the UAE, and Spain.
In a bid to deliver secure, low-latency data centre capabilities, the infrastructure hub will comprise three Availability Zones (AZs) and will be owned and operated by the local AWS entity in New Zealand. The new region will enable local businesses and government entities to run workloads and store data in line with their data residency preferences.
It is estimated that the new cloud region will create nearly 1,000 jobs over the next 15 years. AWS will continue to train and upskill local developers, students, and next-gen leaders through the AWS re/Start, AWS Academy, and AWS Educate programs. To support startups launching and building new businesses, the AWS Activate program will provide web-based training, cloud computing credits, and business mentorship.
New Zealand is becoming attractive to cloud and data centre providers. Last year, Microsoft also announced its Azure data centre investments and skills development programs in New Zealand. To support the future of cloud services and meet growing data centre demand, Datagrid and Meridian Energy partnered last year to build the country’s first hyperscale data centre. Similarly, CDC Data Centres plans to develop two new hyperscale data centres in Auckland.
An Opportunity for New Zealand to Punch Above its Weight as the New Data Economy Hub
“The flurry of data centre-related activity in New Zealand is not just a reflection of the local opportunity, given that the overall IT market of a sub-5-million population will always be modest, even if disproportionately large for its size. Trust, governance, and transparency are hallmarks of the data centre business. Consider this: New Zealand ranks #1 on the Ease of Doing Business rankings globally and #1 on the Corruption Perceptions Index, not as a one-off but consistently over the years.
Layered on this is a highly innovative business environment, a cluster of high-quality data science skills and an immense appetite to overcome the tyranny of distance through a strong digital economy. New Zealand has the opportunity to become a Data Economy hub as geographic proximity will become less relevant in the new digital economy paradigm.
New Zealand is strategically located between Latin America and Asia, so it could act as a data hub for both regions, leveraging undersea cables. The recently signed Digital Economy Partnership Agreement (DEPA) between Singapore and New Zealand, with Chile as the third signatory, is testimony to New Zealand’s ambition to be at the core of a digital and data economy. The DEPA is a template other countries are likely to sign up to, and it should enhance New Zealand’s standing as a trusted custodian of data.
Given the country’s excellent data governance practices, access to clean energy, a conducive climate for data centres, plenty of land, and an exceptional innovation mindset, this is an opportunity for global businesses to leverage New Zealand as a Data Economy hub.”
New Zealand’s Data Centre Market is Becoming Attractive
“The hyperscale cloud organisations investing in New Zealand-based data centres present both a great opportunity and a significant challenge for local data centre providers and the local digital industry. With AWS and Microsoft making significant investments in the Auckland region, the new facilities will improve access to the extensive services provided by Azure and AWS, with reduced latency.
To date, there have not been significant barriers for most non-government organisations to access any of the hyperscalers, with trans-Tasman latency already reasonably low. However, large organisations concerned about data sovereignty, particularly government departments, are going to welcome this announcement.
With fibre to the premises available in significant parts of New Zealand, cost-effective 1 Gbps+ symmetrical services, and hyperscalers onshore, the pressure to grow New Zealand’s constrained skilled workforce can only increase. Skills development has to be a top priority for the country to take advantage of this infrastructure. While immigration can address part of the challenge, what is really needed is an increase in the number of skilled citizens. It is good to see the commitment that AWS is making with the availability of training options. Now we need to encourage people to take advantage of these options!”
Top Cloud Providers Continue to Drive Data Centre Investment
“Capital investments in data centres have soared in recent quarters. For the webscale sector, spending on data centres and related network technology accounts for over 40% of total CapEx. The webscale sector’s big cloud providers have accounted for much of the recent CapEx surge. AWS, Google, and Microsoft have been building larger facilities, expanding existing campuses and clusters, and broadening their cloud region footprint into smaller markets. These three account for just under 60% of global webscale tech CapEx over the last four quarters. The facilities these webscale players are building can be immense.
The largest webscalers – Google, AWS, Facebook and Microsoft – clearly prefer to design and operate their own facilities. Each of them spends heavily on both external procurement and internal design for the technology that goes into their data centres. Custom silicon and the highest speed, most advanced optical interconnect solutions are key. As utility costs are a huge element of running a data centre, webscalers also seek out the lowest cost (and, increasingly, greenest) power solutions, often investing in new power sources directly. Webscalers aim to deploy facilities which are on the bleeding edge of technology.
An important part of the growth in cloud adoption is the construction of infrastructure closer to the end user. AWS’s investment in New Zealand will benefit its positioning and should help deliver more responsive and resilient services to New Zealand’s enterprise market.”