OpenAI’s tools – such as ChatGPT and the image-generation engine DALL-E 2 – require significant processing power to operate, particularly as they move beyond beta programs and offer services at scale. In a single week in December, ChatGPT alone passed 1 million users. The company must be burning through cash at a significant rate, which means it needs substantial funding to keep the lights on – particularly as the capability of the product continues to improve and the volume of data, images, and content it trawls continues to expand. ChatGPT is being talked about as one of the most revolutionary tech capabilities of the decade – but that will count for nothing if the company lacks the resources to keep operating!
This is huge for Microsoft! Much has already been discussed about the opportunity for Microsoft to compete with Google more effectively for search-related advertising dollars. But every product and service that Microsoft develops can be enriched and improved by ChatGPT:
A spreadsheet tool that automatically categorises data and extracts insights
A word processing tool that creates content automatically
A CRM that creates custom offers for every individual customer based on their current circumstances
A collaboration tool that answers questions before they are even asked, and acts on insights and analytics to drive the right customer and business outcomes
A presentation tool that creates slides with compelling storylines based on the needs of specific audiences
LinkedIn providing the insights users need to achieve their outcomes
A cloud-based AI engine that can be embedded into any process or application through a simple API call (this already exists!)
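That last point is worth making concrete. The sketch below shows how any business process step can become a single API call to a hosted AI engine. The endpoint URL, API key, and payload fields are all hypothetical placeholders – real providers differ in URL structure, request schema, and authentication – so treat this as an illustration of the pattern, not a working client.

```python
import json
import urllib.request

# Hypothetical endpoint and key -- substitute your provider's real values.
API_URL = "https://api.example.com/v1/completions"
API_KEY = "YOUR_API_KEY"

def build_request(prompt: str, max_tokens: int = 100) -> urllib.request.Request:
    """Package a prompt as a JSON POST request to a text-generation API."""
    payload = json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

def summarise(text: str) -> urllib.request.Request:
    """Any process step -- here, summarisation -- reduces to one API call."""
    return build_request(f"Summarise for an executive audience:\n{text}")

# To actually send the request (needs a real endpoint and key):
# with urllib.request.urlopen(summarise("Q3 sales grew...")) as resp:
#     print(resp.read())
```

The point of the pattern is that embedding intelligence into a spreadsheet, CRM, or workflow requires no ML expertise on the caller’s side – only the ability to make an HTTP request.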
How Microsoft chooses to monetise these opportunities is up to the company – but the investment certainly puts Microsoft in the box seat to monetise AI services through its own products, while also taking a cut of the other ways OpenAI monetises its services.
Impact on Microsoft’s competitors
Microsoft’s investment in OpenAI will accelerate the rate of AI development and adoption. As we move into the AI era, everything will change. New business opportunities will emerge, and traditional ones will disappear. Markets will be created and destroyed. Microsoft’s investment is an attempt to end up on the right side of this equation. But other existing (and yet-to-be-created) AI businesses won’t just give up. The Microsoft investment will create greater urgency for Google, Apple, and others to accelerate their AI capabilities and investments. And we will see investments in OpenAI’s competitors, such as Stability AI (which raised USD 101M in October 2022).
What will change for enterprises?
Too many businesses have put “the cloud” at the centre of their transformation strategies – as if being in the cloud were an achievement in itself. While the cloud makes applications and processes easier to transform (and sometimes cheaper to deploy and run), many businesses have simply modernised their legacy end-to-end business processes on a better platform. True transformation happens when businesses realise that those processes only existed because of a lack of human or technological capacity to treat every customer and employee as an individual, to determine their specific needs, and to deliver a custom solution for them – not to mention the huge cost of creating unique processes for every customer. AI removes that constraint.
AI engines can push businesses to completely rethink their entire application stack, because they can deliver unique outcomes for every customer. Businesses need to make AI their transformation goal – when they put intelligence at the centre of every transformation, they will make different decisions and drive better customer and business outcomes. Once again, though, delivering this will take significant processing power and access to huge amounts of content and data.
The Burning Question: Who owns the outcome of AI?
In the end, ChatGPT only knows what it knows – and the content it learns from is likely to have been created by someone (ideally – as we don’t want AI to learn from bad AI!). What we don’t really understand are the unintended consequences of commercialising AI. Will content creators be less willing to share their content? Will we see the emergence of many more walled content gardens? Will blockchain and even NFTs emerge as a way of protecting and proving origin? Will legislation protect content creators or AI engines? If everyone is using AI to create content, will all content start to look more similar (as AI will increasingly be learning from content created by AI)? And perhaps the biggest question of all – where does the human stop and the machine start?
These questions will need answers and they are not going to be answered in advance. Whatever the answers might be, we are definitely at the beginning of the next big shift in human-technology relations. Microsoft wants to accelerate this shift. As a technology analyst, 2023 just got a lot more interesting!
Anirban Mukherjee has more than 25 years of experience in operations excellence and technology consulting across the globe, having led transformations in Energy, Engineering, and Automotive majors. Over the last decade, he has focused on Smart Manufacturing/Industry 4.0 solutions that integrate cutting-edge digital into existing operations.
Effective prescriptive maintenance only becomes possible after the accumulation and integration of multiple data sources over an extended period. Inference models must learn both normal and abnormal equipment performance in various conditions, such as extreme weather, incorrect operation, or when adjacent parts are degraded. For many smaller organisations, or those deploying new equipment, the necessary volume of data will not be available without the assistance of equipment manufacturers. Moreover, even manufacturers will not have sufficient data on interactions with complementary equipment. This gives large operators an opportunity to sell their own inference models as a new revenue stream. For example, an electrical grid operator in North America could partner with a similar but smaller organisation in Europe to provide operational data and maintenance recommendations. Similarly, telecom providers, regional transportation providers, logistics companies, and smart cities will find industry players in other geographies with whom they do not naturally compete.
Employing multiple sensors. Baseline conditions and failure signatures are improved using machine learning based on feeds from multiple sensors, such as those that monitor vibration, sound, temperature, pressure, and humidity. Using multiple sensors makes it possible to identify not only a potential failure but also its cause, and therefore to prescribe a solution that prevents an outage more accurately.
Data assessment and integration. Prescriptive maintenance is most effective when multiple data sources are unified as inputs. Identify where these sources live – ERP systems, on-site time-series databases, externally provided environmental data, or even emails and paper records. A data fabric should be considered to ensure insights can be extracted from data no matter which environment it resides in.
Automated action. Reduce the potential for human error or delay by automatically generating alerts and work orders for resource managers and service staff in the event of anomaly detection. Criticality measures should be adopted to help prioritise maintenance tasks and reduce alert noise.
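The three steps above – multi-sensor baselines, integrated data, and automated, criticality-ranked alerts – can be sketched end to end. This is a minimal illustration under simplifying assumptions (a per-sensor z-score baseline standing in for learned failure signatures; all names are invented), not a production system.

```python
import statistics
from dataclasses import dataclass

@dataclass
class Alert:
    sensor: str
    value: float
    zscore: float
    priority: str  # criticality measure used to rank work orders

def detect_anomalies(baseline: dict, reading: dict, criticality: dict,
                     threshold: float = 3.0) -> list:
    """Flag sensor readings that deviate from learned baselines.

    baseline    -- per-sensor history of normal operation
    reading     -- latest value from each sensor (vibration, temperature, ...)
    criticality -- per-sensor priority, used to reduce alert noise
    """
    alerts = []
    for sensor, history in baseline.items():
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        z = abs(reading[sensor] - mean) / stdev
        if z > threshold:
            alerts.append(Alert(sensor, reading[sensor], round(z, 1),
                                criticality.get(sensor, "low")))
    # High-criticality alerts first, so work orders are raised in priority order.
    return sorted(alerts, key=lambda a: a.priority != "high")

# Unified inputs: vibration and temperature feeds plus a criticality register.
baseline = {"vibration_hz": [50.0, 50.5, 49.8, 50.2, 50.1],
            "temperature_c": [70.0, 71.0, 69.5, 70.5, 70.2]}
reading = {"vibration_hz": 58.0, "temperature_c": 70.3}
alerts = detect_anomalies(baseline, reading, {"vibration_hz": "high"})
```

In this example the vibration reading is far outside its baseline while the temperature is normal, so one high-priority alert is generated – the point at which a real system would automatically create a work order.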
There are significant considerations for banks in offering these types of capabilities, such as:
Privacy. While the technology operates on non-identifiable information, the perception of clients being ‘stalked’ by their bank in order to drive business to a merchant is one that would need to be managed carefully.
Consumer opt-out. The ability for customers to opt out of this type of service is critical.
Consumer financial wellbeing. It may be in the best interests of some consumers not to receive merchant offers, for instance where they are managing a strict budget. These considerations can be baked into the overall customer journey (e.g. prompts when the consumer is nearing their self-imposed monthly budget for a category), but care will need to be taken to keep customers’ best interests at heart.
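Guardrails like these can be encoded directly into the offer pipeline rather than bolted on afterwards. The sketch below is purely illustrative – the field names, the opt-in flag, and the 90% budget threshold are all assumptions, not any bank’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    opted_in: bool        # explicit consent to receive merchant offers
    monthly_budget: dict  # category -> self-imposed monthly cap
    spend_to_date: dict   # category -> spend so far this month

def eligible_offers(customer: Customer, offers: list) -> list:
    """Filter merchant offers so personalisation respects the guardrails:
    opt-out is absolute, and offers are suppressed in categories where the
    customer is near a self-imposed budget (illustrative 90% threshold)."""
    if not customer.opted_in:
        return []
    result = []
    for offer in offers:
        cat = offer["category"]
        cap = customer.monthly_budget.get(cat)
        if cap is not None and customer.spend_to_date.get(cat, 0) >= 0.9 * cap:
            continue  # nearing budget: suppress the offer (or prompt instead)
        result.append(offer)
    return result
```

For a customer with a USD 200 dining budget who has already spent USD 190, dining offers are suppressed while travel offers still flow – and a customer who never opted in receives nothing at all.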
While there are multiple challenges to overcome, the fact remains that personalisation is quickly becoming a core expectation for consumers. How will banks respond, and will we see AI use cases like Crayon Data become more prominent?
Artificial Intelligence is real and has started becoming mainstream – chatbots using AI to answer queries are everywhere. AI is being used in stock trades, contact centre applications, bank loans processing, crop harvests, self-driving vehicles, and streaming entertainment. It is now part of boardroom discussions and strategic initiatives of CEOs. McKinsey predicts AI will add USD 13 trillion to the global economy by 2030.
Hype vs Reality
So much to like – but why then do we often find leaders shrugging their shoulders? Despite all the good news above, there is also another side to AI. For all the green indicators, there are also some red flags (Figure 1). In fact, if one googles “Hype vs reality”, the majority of the results returned are to do with AI!
Our experience shows that broad swathes of executives are sceptical of AI. Leaders across a variety of businesses – from large multinational banks and consumer packaged goods companies to appliance makers – have privately expressed their disappointment at not being able to make AI work for them. They cannot bridge the gap between the AI hype and the reality in their businesses.
The available data bears this out – VentureBeat estimates that 87% of ML projects never make it into production. Ecosystm research suggests that only 7% of organisations have an AI centre of excellence (CoE), while the rest depend on ad-hoc implementations. Organisations face several challenges – both technological and business – in procuring and implementing a successful AI solution (Figure 2).
Visible Patterns Emerge from Successful AI Use Cases
If we look back at Figure 2 and analyse the challenges, certain common themes emerge – many of which are now commonplace wisdom, if not trite. Leadership alignment around AI strategy is the most common one. Getting clean data, aligning strategy with execution, and building the capabilities to use AI are all touted as critical requirements for successful execution. These themes all point to one insight: it is the human element that is more critical – not the technology.
As practitioners, we have come across numerous examples of AI projects that go off-track because of human issues. Take the example of an organisation that had a key business mandate of enhancing call centre capability and capacity using RPA tools. There was strong leadership support and enthusiasm. It was clear that a large number of basic-level tickets raised through the centre could be resolved using digital agents. This would deliver substantial gains in customer experience, through faster ticket resolution, and in employee productivity – estimated at above 30%. However, two months after launching the pilot, only a very small percentage of cases had been identified for migration to digital agents.
Very soon, it became clear that these tools were being perceived as a replacement for human skills, rather than a way to augment them. The most vocal proponent of the initiative – the head of the customer experience team – became its critic, as he felt the small savings were not worth the risk of higher agent turnover due to perceived job insecurity.
This was turned around by a three-day workshop focused on demonstrating how agents’ job responsibilities could be enhanced as portions of their jobs were automated. The processes were redesigned to isolate the parts that could be fully automated and to group the non-automated components together, giving agents more responsibility and discretion. Once the enhanced responsibilities of the call centre staff were identified, managers felt more comfortable and were willing to support the initiative. In the end, all the goals set at the start of the project were met.
In my next blog I will share with you what we consider the winning formula for a successful AI deployment. In the meantime, share with us your AI stories – both of your challenges and successes.
Reconciling these seemingly conflicting requirements is possible. But it requires serious commitment from business and data/analytics leaders – not (just) because regulators demand it, but because it is good for their customers and their business, and the only way to start capturing the full value of AI/ML.
1. ‘Heart’, not just ‘Head’
It is relatively easy to get people excited about experimenting with AI/ML. But when it comes to actually trusting the model to make decisions for us, we humans are likely to put up our defences. Convincing a loan approver, insurance under-writer, medical doctor or front-line sales-person to trust an AI/ML model – over their own knowledge or intuition – is as much about the ‘heart’ as the ‘head’. Helping them understand, on their own terms, how the alternative is at least as good as their current way of doing things, is crucial.
2. A Broad Church
Even in industries and organisations that recognise the importance of governing AI/ML, there is a tendency to define it narrowly. For example, in Financial Services, one might argue that “an ML model is just another model” and expect existing Model Risk teams to deal with any incremental risks from AI/ML.
There are two issues with this approach:
First, AI/ML models tend to require a greater focus on model quality (e.g., with respect to stability, overfitting and unjust bias) than their traditional alternatives. The pace at which such models are expected to be introduced and re-calibrated is also much higher, stretching traditional model risk management approaches.
Second, poorly designed AI/ML models create second order risks. While not unique to AI/ML, these risks become accentuated due to model complexity, greater dependence on (high-volume, often non-traditional) data and ubiquitous adoption. One example is poor customer experience (e.g., badly communicated decisions) and unfair treatment (e.g., unfair denial of service, discrimination, misselling, inappropriate investment recommendations). Another is around the stability, integrity and competitiveness of financial markets (e.g., unintended collusion with other market players). Obligations under data privacy, sovereignty and security requirements could also become more challenging.
The only way to respond holistically is to bring together a broad coalition – of data managers and scientists, technologists, specialists from risk, compliance, operations and cyber-security, and business leaders.
3. Automate, Automate, Automate
A key driver for the adoption and effectiveness of AI/ML is scalability. The techniques used to manage traditional models are often inadequate in the face of more data-hungry, widely used and rapidly refreshed AI/ML models. Whether it is during the development and testing phase, formal assessment/validation or ongoing post-production monitoring, it is impossible to govern AI/ML at scale using manual processes alone.
So, somewhat counter-intuitively, we need more automation if we are to build and sustain trust in AI/ML. As humans are accountable for the outcomes of AI/ML models, we can only be ‘in charge’ if we have the tools to provide us with reliable intelligence on them – before and after they go into production. As the recent experience with model performance during COVID-19 suggests, maintaining trust in AI/ML models is an ongoing task.
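One concrete building block for automated, ongoing monitoring is the Population Stability Index (PSI), a widely used drift metric that compares a model’s live input or score distribution against its training baseline. The sketch below is minimal and makes simplifying assumptions: equal-width bins derived from the baseline, and a 0.2 alert threshold, which is a common rule of thumb rather than a standard.

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a baseline ('expected') sample
    and a live ('actual') sample: sum of (a% - e%) * ln(a% / e%) per bin."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Small floor avoids log(0) for empty buckets.
        return [max(c / len(values), 1e-4) for c in counts]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def check_model(name: str, expected: list, actual: list,
                threshold: float = 0.2) -> dict:
    """Automated gate: flag drift instead of relying on manual review."""
    score = psi(expected, actual)
    return {"model": name, "psi": round(score, 3), "drifted": score > threshold}
```

Run on every refresh of production data, a check like this turns monitoring into an automated gate: identical distributions score near zero, while a shifted population trips the alert and triggers investigation or re-calibration.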
I have heard people say, “AI is too important to be left to the experts”. Perhaps. But I have yet to come across an AI/ML practitioner who is not keenly aware of the importance of making their models reliable and safe. What I have noticed is that they often lack suitable tools – to support them in analysing and monitoring models, and to enable the conversations that build trust with stakeholders. If AI is to be adopted at scale, that must change.
Shameek Kundu is Chief Strategy Officer and Head of Financial Services at TruEra Inc. TruEra helps enterprises analyse, improve, and monitor the quality of machine learning models.