Bard’s new features include code generation, optimisation, debugging, and explanation. Using natural language processing (NLP), users can explain their requirements to the AI and ask it to generate code that can then be exported to an integrated development environment (IDE) or executed directly in the browser with Google Colab. Similarly, users can request Bard to debug already existing code, explain code snippets, or optimise code to improve performance.
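To illustrate the optimisation workflow, the snippet below is a hypothetical before-and-after: a user pastes a slow recursive function and asks the assistant to speed it up. The code is illustrative only, not actual Bard output.

```python
from functools import lru_cache

def fib_naive(n: int) -> int:
    """Exponential-time recursion: the kind of snippet a user might paste in."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_optimised(n: int) -> int:
    """Memoised rewrite an assistant might suggest: linear time via caching."""
    if n < 2:
        return n
    return fib_optimised(n - 1) + fib_optimised(n - 2)

# Both versions agree on the result; only the running time differs.
assert fib_naive(10) == fib_optimised(10) == 55
```

The same round trip works for debugging or explanation requests: the user supplies the code, and the assistant returns an annotated or corrected version that still needs human review before it ships.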
Google continues to refer to Bard as an experiment and highlights that, as with generated text, code produced by the AI may not function as expected. Regardless, the new functionality will be useful for both beginner and experienced developers. Those learning to code can use Generative AI to write simple programs or to debug and explain their mistakes. More experienced developers can offload lower-value work, such as commenting or scaffolding code, freeing them to focus on identifying potential problems.
GitHub Copilot X to Face Competition
While the ability of Bard, Bing, and ChatGPT to generate code is one of their most important use cases, developers are now demanding AI directly in their IDEs.
In March, Microsoft made one of its most significant announcements of the year when it demonstrated GitHub Copilot X, which embeds GPT-4 in the development environment. Earlier this year, Microsoft invested $10 billion into OpenAI to add to the $1 billion from 2019, cementing the partnership between the two AI heavyweights. Among other benefits, this agreement makes Azure the exclusive cloud provider to OpenAI and provides Microsoft with the opportunity to enhance its software with AI co-pilots.
Copilot X is currently in technical preview; when it eventually launches, it will integrate into Visual Studio, Microsoft’s IDE. Presented as a sidebar or chat window directly in the IDE, Copilot X will be able to generate, explain, and comment on code, debug, write unit tests, and identify vulnerabilities. The “Hey, GitHub” functionality will allow users to interact by voice, suited to mobile users or to more natural interaction on a desktop.
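As an illustration of the unit-test generation mentioned above, the sketch below shows the kind of tests an assistant might scaffold for a small helper function. Both the `slugify` function and its tests are hypothetical, not actual Copilot X output.

```python
import re

def slugify(title: str) -> str:
    """Hypothetical function under test: lowercase, hyphen-separated slug."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# The kind of unit tests an AI assistant might scaffold from the function body:
def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_punctuation():
    assert slugify("AI -- in IDEs?") == "ai-in-ides"

def test_slugify_empty():
    assert slugify("") == ""
```

Generated tests like these give developers a starting point, but a human still needs to check that the assertions capture the intended behaviour rather than merely the current behaviour.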
The Next Step: Generative AI in Security
The next battleground for Generative AI will be assisting overworked security analysts. Two of the greatest challenges Security Operations Centres (SOCs) currently face are understaffing and an overwhelming volume of alerts. Security vendors, such as IBM and Securonix, have already deployed automation to reduce alert noise and help analysts prioritise tasks so they do not waste time responding to false positives.
Google recently introduced Sec-PaLM and Microsoft announced Security Copilot, bringing the power of Generative AI to the SOC. These tools will help analysts interact conversationally with their threat management systems and will explain alerts in natural language. How effective they will be remains to be seen: a hallucination in a security context is far riskier than one in an essay drafted with ChatGPT.
The Future of AI Code Generators
Although GitHub Copilot and Amazon CodeWhisperer had already launched with limited feature sets, it was the release of ChatGPT last year that ushered in a new era in AI code generation. There is now a race between the cloud hyperscalers to win over developers and to provide AI that supports other functions, such as security.
Despite fears that AI will replace humans, in its current state it is more likely to serve as a tool that augments developers. Although AI and automated testing reduce the burden on an already stretched workforce, humans will remain in demand to ensure code is secure and satisfies requirements. A likely scenario is that, as coding becomes simpler, the volume and quality of code written will increase rather than the number of developers shrinking. AI will generate a new wave of citizen developers able to work on projects that would previously have been impossible to start. This may, in turn, increase demand for developers to build on these proofs-of-concept.
How the Generative AI landscape evolves over the next year will be interesting to watch. In a recent interview, OpenAI co-founder Sam Altman explained that the non-profit model the company initially pursued is not feasible, necessitating the launch of a capped-profit subsidiary. The company retains its values, however, focusing on advancing AI responsibly and transparently with public consultation. The entry of Microsoft, Google, and AWS will undoubtedly change the market dynamics and may force OpenAI to reconsider its approach once again.
This new partnership was essential for Kyndryl, lending legitimacy to its newly independent status and its global presence. And in many respects, it is a partnership that AWS needs as much as Kyndryl does. As one of the largest global managed services providers, Kyndryl manages a huge amount of infrastructure and thousands of applications. Today, most of these applications sit outside public cloud environments, but many will move to the public cloud at some stage. AWS has positioned itself to benefit from this transition – Kyndryl will be advising clients on which cloud environment best suits their needs, and in many cases will also run the application migration and manage the application once it resides in the cloud. To that end, the further investment in developing an accelerator for VMware Cloud on AWS will also help to differentiate Kyndryl on AWS. With a high proportion of Kyndryl customers running VMware, this capability will help VMware users migrate these workloads to the cloud and run core business services on AWS.
Beyond the typical partnership activities, Kyndryl will build out its own internal infrastructure in the cloud, leveraging AWS as its preferred cloud provider. This experience will mean that Kyndryl “drinks its own champagne” – many other managed services providers have not yet taken the majority of their infrastructure to the cloud, so this experience will help to set Kyndryl apart from their competitors, along with providing deep learning and best practices.
By the end of 2022, Kyndryl expects to have trained more than 10,000 professionals on AWS. Assuming the company hits these targets, they will be one of AWS’s largest partners. However, experience trumps training, and their relatively recent entry into the broader cloud ecosystem space (after coming out from under IBM’s wing at the end of 2021) means they have some way to go to have the depth and breadth of experience that other Premier Alliance Partners have today.
In my recent interactions with Kyndryl, what sets them apart is that they are completely customer-focused. They start with a client problem and find the best solution for that problem. Yes – some of the “best solutions” will be partner-specific (such as SAP on Azure or VMware on AWS), but they are not pushing every customer down a specific path. They are not just an AWS partner, where every solution to every problem starts and ends with AWS. The importance of this new partnership is that it expands Kyndryl’s capabilities and hence expands the possibilities and opportunities for Kyndryl clients to benefit from the best solutions in the market – regardless of whether they are on-premises or in one of the big three hyperscalers.
Oracle’s initial reliance on leased facilities may be an interim step. The rapid growth of AWS, Azure, and GCP in the late 2010s caught Oracle by surprise, and the company started to see serious risks of losing customers to these cloud platforms. Building out its own cloud on newly constructed data centres would have taken years and cost it business. So, Oracle did the smart thing and leaped into the cloud as fast as possible with the resources and time available. The company has scaled its OCI operations at an impressive rate. It expects capital expenditures to double YoY for the fiscal year ending May 2022, as it increases “data centre capacities and geographic locations to meet current and expected customer demand” for OCI.
Finally, Oracle has invested heavily in designing the servers to be installed in its data centres (even if most of them are leased). Oracle was an early investor in Ampere Computing, which makes Arm-based processors, sidestepping the Intel ecosystem. In May 2021, Oracle rolled out its first Arm-based compute offering, OCI Ampere A1 Compute, based on the Ampere Altra processor. Oracle says this allows OCI customers to run “cloud-native and general-purpose workloads on Arm-based instances with significant price-performance benefits.” Microsoft and Tencent also deploy the Ampere Altra in some locations.
Reaching Global Scale
Once Oracle decided to launch into the cloud, its goal was both to grow revenues and to protect its legacy base from slipping away to the Big 3, which already had a growing global footprint. Oracle chose to quickly build cloud regions in its key markets, understanding that it would have to fill out individual regions over time. In fact, this is not that different from the Big 3’s approach, but Oracle started its buildout much later and has fewer availability zones per region.
Oracle has not ignored this disparity. It recognises that reliability is key for its clients in trusting OCI. For example, the company emphasises that:
Each Oracle Cloud region contains at least three fault domains, which are “groupings of hardware that form logical data centers for high availability and resilience to hardware and network failures.” Fault domains allow a customer to distribute instances so “the instances are not on the same physical hardware within a single availability domain.”
OCI has a network of 70 “FastConnect” partners that offer dedicated connectivity to OCI regions and services (comparable to AWS Direct Connect).
OCI and Microsoft Azure have a partnership allowing “joint customers” to run workloads across the two clouds, providing a low-latency, cross-cloud interconnect between OCI and Azure in eight specific regions. Customers can migrate existing applications or develop cloud-native applications using a mix of OCI and Azure.
Oracle allows customers to deploy OCI completely within their own data centers, with Dedicated Region and Exadata Cloud@Customer, deploy cloud services locally with public cloud-based management, or deploy cloud services remotely on the edge with Roving Edge Infrastructure.
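The fault-domain placement described above can be sketched in a few lines. The fault-domain names below mirror OCI's documented FAULT-DOMAIN-1/2/3 naming, but the round-robin helper is a hypothetical illustration; in practice, the fault domain is specified per instance at launch time.

```python
from itertools import cycle

# Within an OCI availability domain, fault domains are named FAULT-DOMAIN-1..3.
# This helper is an illustrative sketch of spreading instances round-robin so
# that no single grouping of hardware hosts more than its share.
FAULT_DOMAINS = ("FAULT-DOMAIN-1", "FAULT-DOMAIN-2", "FAULT-DOMAIN-3")

def assign_fault_domains(instance_names, fault_domains=FAULT_DOMAINS):
    """Return an {instance: fault_domain} placement, cycling through domains."""
    rotation = cycle(fault_domains)
    return {name: next(rotation) for name in instance_names}

placement = assign_fault_domains(["web-1", "web-2", "web-3", "web-4"])
# web-1 and web-4 share FAULT-DOMAIN-1; web-2 and web-3 land in the other two,
# so the loss of any one hardware grouping takes down at most two of the four.
```

The payoff of this spread is that a hardware or network failure confined to one fault domain degrades capacity rather than causing a full outage.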
Further, Oracle clearly tries to differentiate around its Arm-based Ampere processors. Reliability is not the emphasis here, though: the company mainly contrasts Ampere with the x86 ecosystem on overall price-performance, highlighting power efficiency, scalability, and ease of development.
Ultimately the market will decide whether Oracle’s approach makes it truly competitive with the big 3. The company continues to announce some big wins, including with Deutsche Bank, FedEx, NEC, Toyota, and Zoom. The latter is probably the company’s biggest cloud win given Zoom’s rise to prominence amidst the pandemic. Not surprisingly, Oracle’s recent Singapore cloud region launch was hosted by Zoom.
Over the long run, the webscale market is becoming concentrated in the hands of a few players; some companies tracked as webscalers, such as HPE and SAP, will fall by the wayside because they cannot keep up with the infrastructure spending required of a top player. Oracle is aiming to remain in the race, however. Oracle Chairman and CTO Larry Ellison addressed this on an earnings call, arguing that the global cloud market is not just the “big 3” (AWS, Azure, and GCP) but a “big 4”, due in part to Oracle’s database strengths. Ellison also argued that OCI is “much better for security, for performance, for reliability” and cost: “we’re cheaper.” The market will ultimately decide these things, but Oracle is off to a strong start. Its asset-light approach to network buildout and limited depth within regions clearly have drawbacks. But the company has a deep roster of long-term customers across many regions, and it is moving fast to secure their business as they migrate operations to the cloud.
Last week AWS announced plans to invest USD 5.3 billion to launch new data centres in New Zealand’s Auckland region by 2024. Apart from New Zealand, AWS has recently added new regions in Beijing, Hong Kong, Mumbai, Ningxia, Seoul, Singapore, Sydney, and Tokyo, and is set to expand into Indonesia, Israel, the UAE, and Spain.
In a bid to deliver secure, low-latency data centre capabilities, the infrastructure hub will comprise three Availability Zones (AZs) and will be owned and operated by the local AWS entity in New Zealand. The new region will enable local businesses and government entities to run workloads and store data in line with their data residency preferences.
It is estimated that the new cloud region will create nearly 1,000 jobs over the next 15 years. AWS will continue to train and upskill local developers, students, and next-generation leaders through the AWS re/Start, AWS Academy, and AWS Educate programs. To support the launch and help build new businesses, the AWS Activate program will provide web-based training, cloud computing credits, and business mentorship.
New Zealand is becoming attractive to cloud and data centre providers. Last year, Microsoft also announced Azure data centre investments and skill development programs in New Zealand. To support the future of cloud services and meet growing data centre demand, Datagrid and Meridian Energy partnered last year to build the country’s first hyperscale data centre. Similarly, CDC Data Centres plans to develop two new hyperscale data centres in Auckland.
An Opportunity for New Zealand to Punch Above its Weight as the New Data Economy Hub
“The flurry of data centre-related activity in New Zealand is not just a reflection of the local opportunity: the overall IT market of a sub-5-million population will always be modest, even if disproportionately large for its size. Trust, governance, and transparency are hallmarks of the data centre business. Consider this – New Zealand ranks #1 globally on the Ease of Doing Business rankings and #1 on the Corruption Perceptions Index – not as a one-off but consistently over the years.
Layered on this is a highly innovative business environment, a cluster of high-quality data science skills and an immense appetite to overcome the tyranny of distance through a strong digital economy. New Zealand has the opportunity to become a Data Economy hub as geographic proximity will become less relevant in the new digital economy paradigm.
New Zealand is strategically located between Latin America and Asia, so it could act as a data hub for both regions, leveraging undersea cables. The recently signed Digital Economy Partnership Agreement (DEPA) between Singapore, New Zealand, and Chile is a testament to New Zealand’s ambition to be at the core of a digital and data economy. The DEPA is a template other countries are likely to sign up to and should enhance New Zealand’s ability to be a trusted custodian of data.
Given the country’s excellent data governance practices, access to clean energy, a climate conducive to data centres, plenty of land, and an exceptional innovation mindset, this is an opportunity for global businesses to leverage New Zealand as a Data Economy hub.”
New Zealand’s Data Centre Market is Becoming Attractive
“The hyperscale cloud organisations investing in New Zealand-based data centres present both a great opportunity and a significant challenge for local data centre providers and the local digital industry. With AWS and Microsoft making significant investments in the Auckland region, the new facilities will improve access to the extensive services provided by Azure and AWS with reduced latency.
To date, there have not been significant barriers for most non-government organisations to access any of the hyperscalers, with trans-Tasman latency already reasonably low. However, large organisations, particularly government departments, concerned about data sovereignty will welcome this announcement.
With fibre to the premises available in significant parts of New Zealand, cost-effective 1 Gbps+ symmetrical services on offer, and hyperscalers on-shore, the pressure to grow New Zealand’s constrained skilled workforce can only increase. Skills development has to be a top priority for the country to take advantage of this infrastructure. While immigration can address part of the challenge, what is really needed is an increase in the number of skilled citizens. It is good to see the commitment AWS is making with the availability of training options. Now we need to encourage people to take advantage of them!”
Top Cloud Providers Continue to Drive Data Centre Investment
“Capital investments in data centres have soared in recent quarters. For the webscale sector, spending on data centres and related network technology accounts for over 40% of total CapEx. The webscale sector’s big cloud providers have accounted for much of the recent CapEx surge. AWS, Google, and Microsoft have been building larger facilities, expanding existing campuses and clusters, and broadening their cloud region footprint into smaller markets. These three account for just under 60% of global webscale tech CapEx over the last four quarters. The facilities these webscale players are building can be immense.
The largest webscalers – Google, AWS, Facebook and Microsoft – clearly prefer to design and operate their own facilities. Each of them spends heavily on both external procurement and internal design for the technology that goes into their data centres. Custom silicon and the highest speed, most advanced optical interconnect solutions are key. As utility costs are a huge element of running a data centre, webscalers also seek out the lowest cost (and, increasingly, greenest) power solutions, often investing in new power sources directly. Webscalers aim to deploy facilities which are on the bleeding edge of technology.
An important part of the growth in cloud adoption is the construction of infrastructure closer to the end-user. AWS’s investment in New Zealand will benefit their positioning and should help deliver more responsive and resilient services to New Zealand’s enterprise market.”