It’s been barely one year since we entered the Generative AI Age. On November 30, 2022, OpenAI launched ChatGPT, with no fanfare or promotion. Since then, Generative AI has become arguably the most talked-about tech topic, both in terms of opportunities it may bring and risks that it may carry.
The landslide success of ChatGPT and other Generative AI applications with consumers and businesses has put a renewed and strengthened focus on the potential risks associated with the technology – and how best to regulate and manage these. Government bodies and agencies have created voluntary guidelines for the use of AI for a number of years now (the Singapore Framework, for example, was launched in 2019).
There is as yet no AI-specific legislation in force. Crucially, however, a number of such initiatives are currently making their way through legislative processes globally.
EU’s Landmark AI Act: A Step Towards Global AI Regulation
The European Union’s “Artificial Intelligence Act” is a leading example. The European Commission (EC) started examining AI legislation in 2020 with a focus on
- Protecting consumers
- Safeguarding fundamental rights, and
- Avoiding unlawful discrimination or bias
The EC published an initial legislative proposal in 2021, and the European Parliament adopted a revised version as their official position on AI in June 2023, moving the legislation process to its final phase.
This proposed EU AI Act takes a risk management approach to regulating AI. Organisations looking to employ AI must take note: an internal risk management approach to deploying AI would essentially be mandated by the Act. Other legislative initiatives are likely to follow a similar approach, making the AI Act a potential role model for legislation globally (following the trail blazed by the General Data Protection Regulation). The “G7 Hiroshima AI Process”, established at the G7 summit in Japan in May 2023, is a key example of international discussion and collaboration on the topic (with a focus on Generative AI).
Risk Classification and Regulations in the EU AI Act
At the heart of the AI Act is a system to assess the risk level of an AI technology, classify the technology (or its use case), and prescribe appropriate regulations for each risk class. The Act distinguishes four risk levels: unacceptable risk (practices that are banned outright), high risk, limited risk, and minimal risk.
For each of these four risk levels, the AI Act proposes a set of rules and regulations. Evidently, the regulatory focus is on High-Risk AI systems.
Contrasting Approaches: EU AI Act vs. UK’s Pro-Innovation Regulatory Approach
The AI Act has received its share of criticism, and somewhat different approaches are being considered, notably in the UK. One set of criticism revolves around the vagueness of key concepts (particularly around person-related data and systems). Another revolves around the strong focus on the protection of rights and individuals, highlighting the potential negative economic impact for EU organisations looking to leverage AI and for EU tech companies developing AI systems.
A white paper published by the UK government in March 2023 – perhaps tellingly named “A pro-innovation approach to AI regulation” – emphasises a “pragmatic, proportionate regulatory approach … to provide a clear, pro-innovation regulatory environment”. The paper describes an approach that aims to balance the protection of individuals with economic advancement as the UK works to become an “AI superpower”.
Further aspects of the EU AI Act are being critically discussed. For example, the current text exempts from regulation all open-source AI components that are not part of a medium or higher risk system, but it neither defines these components clearly nor considers how they might proliferate.
Adopting AI Risk Management in Organisations: The Singapore Approach
Regardless of how exactly AI regulations will turn out around the world, organisations must start today to adopt AI risk management practices. There is an added complexity: while the EU AI Act does clearly identify high-risk AI systems and example use cases, the realisation of regulatory practices must be tackled with an industry-focused approach.
The approach taken by the Monetary Authority of Singapore (MAS) is a primary example of an industry-focused approach to AI risk management. The Veritas Consortium, led by MAS, is a public-private-tech partnership consortium aiming to guide the financial services sector on the responsible use of AI. As there is no AI legislation in Singapore to date, the consortium currently builds on Singapore’s aforementioned “Model Artificial Intelligence Governance Framework”. Additional initiatives are already underway to focus specifically on Generative AI for financial services, and to build a globally aligned framework.
To Comply with Upcoming AI Regulations, Risk Management is the Path Forward
As AI regulation initiatives move from voluntary recommendation to legislation globally, a risk management approach is at the core of all of them. Adding risk management capabilities for AI is the path forward for organisations looking to deploy AI-enhanced solutions and applications. As that task can be daunting, an industry consortium approach can help navigate the challenges and align implementation and realisation strategies for AI risk management across the industry. Until AI legislation is in place, such industry consortia can chart the way for their industry – organisations should seek to participate now to gain a head start with AI.
Organisations are moving beyond digitalisation to a focus on building market differentiation. It is widely acknowledged that customer-centric strategies lead to better business outcomes, including increased customer satisfaction, loyalty, competitiveness, growth, and profitability.
AI is the key enabler driving personalisation at scale. It has also become key to improving employee productivity, empowering them to focus on high-value tasks and deepening customer engagements.
Over the last month – at the Salesforce World Tour and across multiple analyst briefings – Salesforce has showcased their commitment to solving customer challenges through AI, announcing a range of new innovations across Data Cloud, their integrated CRM platform.
Ecosystm Advisors Kaushik Ghatak, Niloy Mukherjee, Peter Carr, and Sash Mukherjee comment on Salesforce’s recent announcements and messaging.
Read on to find out more.
Download Ecosystm VendorSphere: Salesforce AI Innovations Transforming CRM as a PDF
The Financial Services industry can benefit greatly from leveraging Data and AI technologies to enhance client value and innovation. BFSI organisations want to deliver AI-driven outcomes.
However, many AI projects fail to deliver long-term business value. Leaders in the industry must overcome challenges such as
- Converting proofs of concept to scalable implementations
- Deploying end-to-end AI and Data strategies
- Evolving business requirements
- Responding to emerging trends such as Generative AI.
As a technology leader in BFSI, here are 5 ways you can help deliver on your organisation’s AI ambitions.
- Think in terms of outcomes – not use cases
- Identify and eliminate digital debt
- Build the right data platform architecture
- Adopt a dual AI strategy
- Be part of an innovation ecosystem
Read on to find out more.
Click here to download ‘5 Actions to Achieve Your AI Ambitions’ as a PDF
The Retail industry has faced significant challenges in recent times. Retailers have had to build digital experiences and new delivery models; navigate global supply chain disruptions; accommodate the remote work needs of their employees; and keep up with rapidly changing customer expectations. To remain competitive, many retailers have made significant investments in technology.
However, despite these investments, many retailers have struggled to create market differentiation. The need for innovation and constant evolution remains.
As retailers cope with hyper-personalisation trends, supply chain vulnerabilities, and the rise of ESG consciousness, the industry is seeing several instances of innovation.
Read on to find out how brands such as Clinique, Gucci, Tommy Hilfiger, Nike, Woolworths, Prada, Levi Strauss, Mahsenei Hashuk and Instacart are using emerging technologies such as the Metaverse and Generative AI to create the much-needed market edge.
Download “The Future of Retail” as a PDF
Microsoft’s intention to invest a further USD 10B in OpenAI – the owner of ChatGPT and DALL-E 2 – confirms what we said in Ecosystm Predicts: Cloud will be replaced by AI as the right transformation goal. Microsoft has already invested an estimated USD 3B in the company since 2019. Let’s take a look at what this means for the tech industry.
Implications for OpenAI & Microsoft
OpenAI’s tools – such as ChatGPT and the image engine DALL-E 2 – require significant processing power to operate, particularly as they move beyond beta programs and offer services at scale. In a single week in December, the company moved past 1 million users for ChatGPT alone. The company must be burning through cash at a significant rate. This means they need significant funding to keep the lights on, particularly as the capability of the product continues to improve and the amount of data, images, and content it trawls continues to expand. ChatGPT is being talked about as one of the most revolutionary tech capabilities of the decade – but it will be all for nothing if the company doesn’t have the resources to continue to operate!
This is huge for Microsoft! Much has already been discussed about the opportunity for Microsoft to compete with Google more effectively for search-related advertising dollars. But every product and service that Microsoft develops can be enriched and improved by ChatGPT:
- A spreadsheet tool that automatically categorises data and extracts insights
- A word processing tool that creates content automatically
- A CRM that creates custom offers for every individual customer based on their current circumstances
- A collaboration tool that gets answers to questions before they are even asked and acts on the insights and analytics that it needs to drive the right customer and business outcomes
- A presentation tool that creates slides with compelling storylines based on the needs of specific audiences
- LinkedIn providing the insights users need to achieve their outcomes
- A cloud-based AI engine that can be embedded into any process or application through a simple API call (this already exists – see the sketch below!)
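To illustrate that last point, here is a minimal sketch of embedding a hosted AI engine into a business process through a simple REST call. It assumes OpenAI’s hosted completions endpoint; the model name and request fields are illustrative and change over time.

```python
# Minimal sketch: calling a hosted AI engine from a business process
# via a simple REST call. Assumes OpenAI's completions endpoint; the
# model name and request fields are illustrative and change over time.
import os
import requests

def summarise_ticket(ticket_text: str) -> str:
    """Ask a hosted language model to summarise a support ticket."""
    response = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "text-davinci-003",  # illustrative model name
            "prompt": f"Summarise this support ticket in one sentence:\n{ticket_text}",
            "max_tokens": 60,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(summarise_ticket("Customer cannot log in after resetting their password."))
```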
How Microsoft chooses to monetise these opportunities is up to the company – but the investment certainly puts Microsoft in the box seat to monetise the AI services through their own products while also taking a cut from other ways that OpenAI monetises their services.
Impact on Microsoft’s competitors
Microsoft’s investment in OpenAI will accelerate the rate of AI development and adoption. As we move into the AI era, everything will change. New business opportunities will emerge, and traditional ones will disappear. Markets will be created and destroyed. Microsoft’s investment is an attempt for the company to end up on the right side of this equation. But the other existing (and yet to be created) AI businesses won’t just give up. The Microsoft investment will create a greater urgency for Google, Apple, and others to accelerate their AI capabilities and investments. And we will see investments in OpenAI’s competitors, such as Stability AI (which raised USD 101M in October 2022).
What will change for enterprises?
Too many businesses have put “the cloud” at the centre of their transformation strategies – as if being in the cloud is an achievement in itself. While the cloud makes applications and processes easier to transform (and sometimes cheaper to deploy and run), many businesses have simply modernised their legacy end-to-end business processes on a better platform. True transformation happens when businesses realise that their processes only existed because of a lack of human or technology capacity to treat every customer and employee as an individual, to determine their specific needs, and to deliver a custom solution for them – not to mention the huge cost of creating unique processes for every customer. AI changes that.
AI engines can make businesses completely rethink their entire application stack. They have the ability to deliver unique outcomes for every customer. Businesses need to have AI as their transformation goal – when they put intelligence at the centre of every transformation, they will make different decisions and drive better customer and business outcomes. But once again, delivering this will take significant processing power and access to huge amounts of content and data.
The Burning Question: Who owns the outcome of AI?
In the end, ChatGPT only knows what it knows – and the content that it learns from is likely to have been created by someone (ideally – as we don’t want AI to learn from bad AI!). What we don’t really understand is the unintended consequences of commercialising AI. Will content creators be less willing to share their content? Will we see the emergence of many more walled content gardens? Will blockchain and even NFTs emerge as a way of protecting and proving origin? Will legislation protect content creators or AI engines? If everyone is using AI to create content, will all content start to look more similar (as by that stage the AI will be learning from content created by AI)? And perhaps the biggest question of all – where does the human stop and the machine start?
These questions will need answers and they are not going to be answered in advance. Whatever the answers might be, we are definitely at the beginning of the next big shift in human-technology relations. Microsoft wants to accelerate this shift. As a technology analyst, 2023 just got a lot more interesting!
In this Insight, guest author Anirban Mukherjee lists out the key challenges of AI adoption in traditional organisations – and how best to mitigate these challenges. “I am by no means suggesting that traditional companies avoid or delay adopting AI. That would be akin to asking a factory to keep using only steam as power, even as electrification came in during early 20th century! But organisations need to have a pragmatic strategy around what will undoubtedly be a big, but necessary, transition.”
After years of evangelising digital adoption, I have a more nuanced stance today – supporting a prudent strategy, especially where the organisation’s internal capabilities/technology maturity is in question. I still see many traditional organisations burning budgets in AI adoption programs with low success rates, simply because of poor choices driven by misplaced expectations. Without going into the obvious reasons for over-exuberance (media hype, mis-selling, FOMO, irrational valuations – the list goes on), here are a few patterns that can be detected in those organisations that have succeeded in getting value – and gloriously so!
Data-driven decision-making is a cultural change. Most traditional organisations have a point person/role accountable for any important decision, whose “neck is on the line”. For these organisations, changing over to trusting AI decisions (with their characteristic opacity and the stochastic nature of their recommendations) is often a leap too far.
Work on your change management but, more crucially, strategically choose which business/process decision points (aka use cases) to AI-enable.
Technical choice of ML modelling needs business judgement too. The more flexible non-linear models that increase prediction accuracy invariably suffer from lower interpretability – and may be a poor choice in many business contexts. Depending upon business data volumes and accuracy requirements, model bias-variance trade-offs need to be made. Assessing model accuracy and its thresholds (false-positive/false-negative trade-offs) is similarly nuanced. All this implies that the organisation’s domain knowledge needs to merge well with data science design. A pragmatic approach is to not try to be cutting-edge.
Look to use proven foundational model platforms – such as those for NLP or visual analytics – for first use cases. Also note that not every problem needs AI; a lot can be sorted through traditional programming (“if-then automation”) and should be. The dirty secret of the industry is that the power of a lot of products marketed as “AI-powered” is mostly traditional logic under the hood!
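To make the false-positive/false-negative point concrete, here is a minimal sketch of choosing a decision threshold based on business costs rather than raw accuracy. The data, model, and cost figures are entirely illustrative.

```python
# Minimal sketch: the decision threshold is a business choice, not just a
# data science one. Data, model, and cost figures are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

cost_false_positive = 10    # e.g. cost of an unnecessary manual review
cost_false_negative = 200   # e.g. cost of a missed fraud case

best_threshold, best_cost = None, float("inf")
for threshold in np.linspace(0.05, 0.95, 19):
    predicted_positive = scores >= threshold
    false_positives = np.sum(predicted_positive & (y_test == 0))
    false_negatives = np.sum(~predicted_positive & (y_test == 1))
    total_cost = (false_positives * cost_false_positive
                  + false_negatives * cost_false_negative)
    if total_cost < best_cost:
        best_threshold, best_cost = threshold, total_cost

print(f"Business-optimal threshold: {best_threshold:.2f} (expected cost {best_cost})")
```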
In getting results from AI, most often “better data trumps better models”. Practically, this means that organisations need to spend more on data engineering effort than on data science effort. The CDO/CIO organisation needs to build the right balance of data competencies and tools.
Get the data readiness programs started – yesterday! While the focus of data scientists is often on training an AI model, deployment of the trained model online is a whole other level of technical challenge (particularly when it comes to IT-OT and real-time integrations).
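To give a sense of why online deployment is its own challenge, here is a minimal sketch of serving a trained model behind an HTTP endpoint. The model artefact name is hypothetical, and real deployments add monitoring, authentication, versioning, and IT-OT integration on top.

```python
# Minimal sketch of serving a trained model online. The "model.pkl"
# artefact is hypothetical; production deployments also need monitoring,
# authentication, versioning, and integration with operational systems.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.pkl")  # hypothetical pre-trained model artefact

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [[1.2, 3.4, 5.6]]}
    features = request.get_json()["features"]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```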
It takes time to adopt AI in traditional organisations. Building up training data and model accuracy is a slow process. Organisational changes take time – and then you have to add considerations such as data standardisation; hygiene and integration programs; and the new attention required to build capabilities in AIOps, AI adoption and governance.
Typically plan for 3 years – monitor progress and steer every 6 months. Be ready to kill “zombie” projects along the way. Train the executive team – not to code, but to understand the technology’s capabilities and limitations. This will ensure better informed buyers/consumers and help drive adoption within the organisation.
I am by no means suggesting that traditional companies avoid or delay adopting AI. That would be akin to asking a factory to keep using only steam as power, even as electrification came in during early 20th century! But organisations need to have a pragmatic strategy around what will undoubtedly be a big, but necessary, transition.
These opinions are personal (and may change with time), but definitely informed through a decade of involvement in such journeys. It is not too early for any organisation to start – results are beginning to show for those who started earlier, and we know what they got right (and wrong).
I would love to hear your views, or even engage with you on your journey!
The views and opinions mentioned in the article are personal.
Anirban Mukherjee has more than 25 years of experience in operations excellence and technology consulting across the globe, having led transformations in Energy, Engineering, and Automotive majors. Over the last decade, he has focused on Smart Manufacturing/Industry 4.0 solutions that integrate cutting-edge digital into existing operations.
The rollout of 5G combined with edge computing in remote locations will change the way maintenance is carried out in the field. Historically, service teams performed maintenance either in a reactive fashion – fixing equipment when it broke – or using a preventative calendar-based approach. Neither of these methods is satisfactory, with the former being too late and resulting in failure while the latter is necessarily too early, resulting in excessive expenditure and downtime. The availability of connected sensors has allowed service teams to shift to condition monitoring without the need for taking equipment offline for inspections. The advent of analytics takes this approach further and has given us optimised scheduling in the form of predictive maintenance.
The next step is prescriptive maintenance, in which AI can recommend action based on current and predicted condition, according to expected usage or environmental circumstances. This could range from simply alerting an operator to automatically ordering parts and scheduling multiple servicing tasks based on forecasted production needs in the short term.
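A highly simplified, hypothetical sketch of such prescriptive logic is shown below; the thresholds, field names, and actions are assumptions for illustration only.

```python
# Hypothetical sketch of prescriptive-maintenance logic: turn a predicted
# failure probability and forecast load into a recommended action.
# Thresholds, field names, and actions are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AssetState:
    asset_id: str
    failure_probability: float  # output of a predictive model (0..1)
    forecast_load: float        # expected utilisation next week (0..1)

def prescribe(state: AssetState) -> str:
    if state.failure_probability > 0.8:
        return f"{state.asset_id}: order spare parts and schedule immediate servicing"
    if state.failure_probability > 0.5 and state.forecast_load > 0.7:
        return f"{state.asset_id}: bring the maintenance window forward ahead of peak load"
    if state.failure_probability > 0.5:
        return f"{state.asset_id}: alert operator and increase monitoring frequency"
    return f"{state.asset_id}: no action required"

print(prescribe(AssetState("pump-07", failure_probability=0.86, forecast_load=0.4)))
```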
Prescriptive maintenance has only become possible with the advancement of AI and digital twin technology, but imminent improvements in connectivity and computing will take servicing to a new level. The rollout of 5G will give a boost to bandwidth, reduce latency, and increase the number of connections possible. Equipment in remote locations – such as transmission lines or machinery in resource industries – will benefit from the higher throughput of 5G connectivity, either as part of an operator’s network rollout or a private on-site deployment. Mobile machinery, particularly vehicles, which can include hundreds of sensors, will no longer need to wait until arrival before their condition can be assessed. Furthermore, vehicles equipped with external sensors can inspect stationary infrastructure as they pass by.
Edge computing – either carried out by miniature onboard devices or at smaller-scale data centres embedded in 5G networks – ensures that intensive processing can be carried out closer to equipment than in a typical cloud environment. Bandwidth-hungry applications, such as video and time series analysis, can be conducted with only metadata transmitted immediately and full archives uploaded with less urgency.
Prescriptive Maintenance with 5G and the Edge – Use Cases
- Transportation. Bridges built over railway lines equipped with high-speed cameras can monitor passing trains to inspect for damage. Data-intensive video analysis can be conducted on local devices for a rapid response while selected raw data can be uploaded to the cloud over 5G to improve inference models.
- Mining. Private 5G networks built at remote sites can provide connectivity between fixed equipment, vehicles, drones, robotic dogs, workers, and remote operations centres. Autonomous haulage trucks can be monitored remotely and, in the event of a breakdown, other vehicles can be automatically redirected to prevent dumping queues.
- Utilities. Emergency maintenance needs can be prioritised before extreme weather events based on meteorological forecasts and their impact on ageing parts. Machine learning can be used to understand location-specific effects of, for example, salt content on offshore wind turbine cables. Early detection of turbine rotor cracks can trigger a recommendation to shut down during high-load periods.
Data as an Asset
Effective prescriptive maintenance only becomes possible after the accumulation and integration of multiple data sources over an extended period. Inference models should understand both normal and abnormal equipment performance in various conditions, such as extreme weather, during incorrect operation, or when adjacent parts are degraded. For many smaller organisations or those deploying new equipment, the necessary volume of data will not be available without the assistance of equipment manufacturers. Moreover, even manufacturers will not have sufficient data on interaction with complementary equipment. This provides an opportunity for large operators to sell their own inference models as a new revenue stream. For example, an electrical grid operator in North America can partner with a similar, but smaller organisation in Europe to provide operational data and maintenance recommendations. Similarly, telecom providers, regional transportation providers, logistics companies, and smart cities will find industry players in other geographies that they do not naturally compete with.
Recommendations
- Employing multiple sensors. Baseline conditions and failure signatures are improved using machine learning based on feeds from multiple sensors, such as those that monitor vibration, sound, temperature, pressure, and humidity. The use of multiple sensors makes it possible to identify not only potential failure but also the reason for it, and can therefore more accurately prescribe a solution to prevent an outage.
- Data assessment and integration. Prescriptive maintenance is most effective when multiple data sources are unified as inputs. Identify the location of these sources, such as ERP systems, time series on site, environmental data provided externally, or even in emails or on paper. A data fabric should be considered to ensure insights can be extracted from data no matter the environment it resides in.
- Automated action. Reduce the potential for human error or delay by automatically generating alerts and work orders for resource managers and service staff in the event of anomaly detection (a simplified sketch follows below). Criticality measures should be adopted to help prioritise maintenance tasks and reduce alert noise.
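The sketch below illustrates the multi-sensor and automated-action recommendations in combination: readings from several sensors are compared against learned baselines, and an anomaly automatically raises a prioritised work order. The sensor names, baseline values, and work-order format are assumptions for illustration.

```python
# Illustrative sketch: combine readings from multiple sensors, flag an
# anomaly against learned baselines, and raise a work order automatically.
# Sensor names, baselines, and the work-order format are assumptions.

BASELINES = {  # mean and standard deviation learned from historical data
    "vibration_mm_s": (2.0, 0.4),
    "temperature_c": (65.0, 5.0),
    "pressure_bar": (8.0, 0.6),
}

def detect_anomalies(readings: dict, z_limit: float = 3.0) -> list:
    """Return sensors whose readings deviate beyond z_limit standard deviations."""
    flagged = []
    for sensor, value in readings.items():
        mean, std = BASELINES[sensor]
        if abs(value - mean) / std > z_limit:
            flagged.append(sensor)
    return flagged

def raise_work_order(asset_id: str, flagged: list) -> dict:
    criticality = "high" if "vibration_mm_s" in flagged else "medium"
    return {"asset": asset_id, "sensors": flagged,
            "criticality": criticality, "action": "inspect and service"}

readings = {"vibration_mm_s": 4.1, "temperature_c": 66.0, "pressure_bar": 8.1}
flagged = detect_anomalies(readings)
if flagged:
    print(raise_work_order("conveyor-12", flagged))
```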
If you are a digital leader in the Financial Services industry (FSI), you have already heard this from your customers: ‘Why is it that Netflix and Amazon can make more relevant and personalised offers than my bank or wealth manager?’ Digital-first players are obsessed with using data to understand their customers’ commercial and consumer behaviour. Financial Services will need to become just as obsessed with personalisation of offerings and services if they want to remain relevant to their customers. Ecosystm research finds that leveraging data to offer personalised services and product offerings to clients is the leading digital priority in more than 50% of FSI organisations.
Banks, particularly, are both in a strong position and have a strong incentive to offer this personalisation. Their retail customers’ expectations are now shaped by the experience they have received from their favourite digital-first firms, and they are making it increasingly clear that they expect personalised offerings from their banks. Furthermore, banks are well positioned as facilitators of commercial relationships between two segments of customers – consumers and merchants. The amount of data they hold on consumer interactions is comprehensive – and, more importantly, they are trusted custodians of their customers’ data and privacy.
The Barriers to Personalisation
So, what is stopping them? Here are three insights from over 12 years of experience driving digitisation of Financial Services:
- Systems Legacy. Often the data and core banking systems do not allow for easy access and analysis across the required data sets (e.g. consumers and merchants).
- Investment Priorities. There is still significant investment happening in compliance and the modernisation of core banking systems. Too often the focus of these programs is myopic – driven by overly narrow problem statements, banks miss the opportunity to solve multiple pain points with their investments.
- Culture and Purpose. Are banks stuck in a paradigm of their own making – defining their business models by what has served them well in the past? Will Amazon think about its provision of working capital to their small and medium business partners the same way as a bank does?
Vendor Focus – Crayon Data
Thankfully, there is a new breed of tech vendors making it easier for banks to drive personalisation of their offerings and connect customers across segments. Crayon Data is a good example, with their maya.ai engine unearthing the preferences of customers and matching them to offerings from qualified merchants. It benefits all parties:
- The Consumer receives relevant offers, is served from discovery to fulfilment on a single platform, and has all personal data and information guarded by their bank.
- For Merchants, it allows them to reach the right customers at the right moment and develop valuable marketing insights – all directly from their bank partner’s platform.
- For Banks, it provides a scalable model for offer acquisition and easily configurable and measurable consumer engagement.
maya.ai leverages patented AI to create a powerful profile of each customer based on their buying habits, comparing these with millions of other consumers using its unstructured data sets and graph-based methodology. These algorithms then assist Financial Services clients in making relevant offers from qualified merchants to consumers in the right channel, at the right moment. All of this is done without exposing personal client information, as the data sets are based on behaviour rather than identity.
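As a highly simplified illustration of behaviour-based matching (and not Crayon Data’s actual method), the sketch below represents a customer and a set of merchant offers as spend-category vectors and ranks the offers by similarity – note that it operates on behaviour, not identity.

```python
# Highly simplified, hypothetical illustration of behaviour-based offer
# matching (not Crayon Data's actual method): customers and offers are
# spend-category vectors, and offers are ranked by cosine similarity.
import numpy as np

# Vector dimensions: dining, travel, groceries, electronics
customer_spend = np.array([0.5, 0.3, 0.2, 0.0])  # normalised spend by category
offers = {
    "Restaurant weekend deal": np.array([1.0, 0.0, 0.0, 0.0]),
    "Airline miles bonus":     np.array([0.0, 1.0, 0.0, 0.0]),
    "Supermarket cashback":    np.array([0.0, 0.0, 1.0, 0.0]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

ranked = sorted(offers, key=lambda name: cosine(customer_spend, offers[name]),
                reverse=True)
print("Offers ranked by relevance:", ranked)
```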
Conclusion
There are significant considerations for banks in offering these types of capabilities, such as:
- Privacy. While the technology operates on non-identifiable information, the perception of clients being ‘stalked’ by their bank in order to drive business to a merchant is one that would need to be managed carefully.
- Consumer opt-out. The ability for customers to opt out of this type of service is critical.
- Consumer financial wellbeing. It may be in the best interests of some consumers not to receive merchant offers, for instance where they are managing to a strict budget. These considerations can be baked into the overall customer journey (e.g. prompts when the consumer is nearing their self-imposed monthly budget for a category), but care will need to be taken to keep customers’ best interests at heart.
While there are multiple challenges to overcome, the fact remains that personalisation is quickly becoming a core expectation for consumers. How will banks respond, and will we see AI use cases like Crayon Data become more prominent?
You know AI is the next big thing. You know it is going to change our world! It is the little technology trick start-ups use to disrupt industries. It enables crazy applications we have never thought of before! A few days ago, we were dazzled to learn of an AI app that promises to generate a credit score by reading your face – from just a photograph, it claims to tell a prospective financier how likely you are to pay back a loan!
Artificial Intelligence is real and has started becoming mainstream – chatbots using AI to answer queries are everywhere. AI is being used in stock trades, contact centre applications, bank loans processing, crop harvests, self-driving vehicles, and streaming entertainment. It is now part of boardroom discussions and strategic initiatives of CEOs. McKinsey predicts AI will add USD 13 trillion to the global economy by 2030.
Hype vs Reality
So much to like – but why then do we often find leaders shrugging their shoulders? Despite all the good news above, there is also another side to AI. For all the green indicators, there are also some red flags (Figure 1). In fact, if one googles “Hype vs reality”, the majority of the results returned are about AI!
Our experience shows that broad swaths of executives are sceptical of AI. Leaders in a variety of businesses – from large multinational banks and consumer packaged goods companies to appliance makers – have privately expressed their disappointment at not being able to make AI work for them. They cannot bridge the gap between the AI hype and the reality in their businesses.
The data available also bears this out – VentureBeat estimates that 87% of ML projects never make it into production. Ecosystm research suggests that only 7% of organisations have an AI centre of excellence (CoE), while the rest depend on ad hoc implementations. There are several challenges that organisations face in procuring and implementing a successful AI solution – both technological and business-related (Figure 2).
Visible Patterns Emerge from Successful AI Use Cases
This brings us to an interesting dichotomy – the reality of failed implementations versus the hype surrounding AI. Digital-native companies and early adopters of AI account for most of the success stories. Traditional companies find it tougher to embark on a successful AI journey. Studies show a staggering gap in the ROI of AI projects between early adopters and others, and a gulf between the high performers and the rest when using AI.
If we look back at Figure 2 and analyse the challenges, we see certain common themes – many of which are now commonplace wisdom, if not trite. Leadership alignment around AI strategy is the most common one. Getting clean data, aligning strategy with execution, and building the capabilities to use AI are all touted as critical requirements for successful execution. These themes all point to one insight: it is the human element that is more critical – not the technology.
As practitioners, we have come across numerous examples of AI projects that go off-track because of human issues. Let’s take the example of an organisation that had a key business mandate to enhance call centre capabilities and capacity using RPA tools. There was strong leadership support and enthusiasm. It was clear that a large number of basic-level tickets raised by the centre could be resolved by digital agents. This would result in substantial gains in customer experience, through faster ticket resolution, and in employee productivity – estimated at above 30%. However, two months after launching the pilot, only a very small percentage of cases had been identified for migration to digital agents.
Very soon, it became clear that these tools were being perceived as a replacement for human skills, rather than to augment their capabilities. The most vocal proponent of the initiative – the head of the customer experience team – became its critic, as he felt that the small savings were not worth the risk of higher agent turnover rates due to perceived job insecurity.
This was turned around by a three-day workshop focused on demonstrating how the job responsibility of agents could be enhanced as portions of their job were automated. The processes were redesigned to isolate parts that could be fully automated and to group the non-automated components together, driving more responsibility and discretion for agents. Once the enhanced responsibility of the call centre staff was identified, managers felt more comfortable and were willing to support the initiative. In the end, the goals set at the start of the project were all met.
In my next blog I will share with you what we consider the winning formula for a successful AI deployment. In the meantime, share with us your AI stories – both of your challenges and successes.
Written with contributions from Ravi Pattamatta and Ratnesh Prasad