Exiting North-South Highway 101 into Mountain View, California, reveals how mundane innovation can appear in person. This Silicon Valley town, home to some of the most prominent tech giants, reveals little more than a few sprawling corporate campuses of glass and steel. As the industry evolves, its architecture naturally grows less inspiring. The most imposing structures, our modern-day coliseums, are massive, energy-hungry data centres, continuously training LLMs among other technologies. Yet, just as the unassuming exterior of the Googleplex conceals a maze of shiny new software, GenAI harbours immense untapped potential. And people are slowly realising that.
It has been over a year since GenAI burst onto the scene, hastening AI implementations and making AI benefits easier to identify. Today, we see successful use cases and collaborations all the time.
Finding Where Expectations Meet Reality
While the data centres of Mountain View thrum with the promise of a new era, it is crucial to have a quick reality check.
Just as the promise around dot-com startups reached a fever pitch before crashing, so too might the excitement surrounding AI be entering a period of adjustment. Every organisation appears to be looking for ways to turn the hype into something tangible.
All eyes (including those of 15 million tourists) will be on Paris as it hosts the 2024 Olympic Games. The International Olympic Committee (IOC) recently introduced an AI-powered monitoring system to protect athletes from online abuse. This system demonstrates AI’s practical application, monitoring social media in real time, flagging abusive content, and safeguarding athletes’ mental well-being. Online abuse is a critical issue in the 21st century. The IOC chose the right time, cause, and setting. All that is left is implementation. That’s where reality is met.
While the Googleplex doesn’t emanate the same futuristic aura as whatever is brewing within its walls, Google’s AI prowess is set to take centre stage as they partner with NBCUniversal as the official search AI partner of Team USA. By harnessing the power of their GenAI chatbot Gemini, NBCUniversal will create engaging and informative content that seamlessly integrates with their broadcasts. This will enhance viewership, making the Games more accessible and enjoyable for fans across various platforms and demographics. The move is part of NBCUniversal’s effort to modernise its coverage and attract a wider audience, including those who don’t watch live television and younger viewers who prefer online content.
From Silicon Valley to Main Street
While tech giants invest heavily in GenAI-driven product strategies, retailers and distributors must adapt to this new sales landscape.
Perhaps the promise of GenAI lies in the simple storefronts where it meets the everyday consumer. Just a short drive down the road from the Googleplex, one of many 37,000-square-foot Best Buys is preparing for a launch that could redefine how AI is sold.
In the most digitally savvy style possible, the chain retailer is rolling out Microsoft’s flagship AI-enabled PCs, training over 30,000 employees to sell and repair them and equipping over 1,000 store employees with AI skillsets. Best Buy is positioning itself to revitalise sales, which have been declining for the past ten quarters. The company anticipates that building AI skills across its workforce will drive future growth.
The Next Generation of User-Software Interaction
We are slowly evolving from seeking solutions to seamless integration, marking a new era of User-Centric AI.
The dynamic between humans and software has mostly been transactional: a question for an answer, or a command for execution. GenAI, however, is poised to reshape this. Apple, renowned for their intuitive, user-centric ecosystem, is forging a deeper and more personalised relationship between humans and their digital tools.
Apple recently announced a collaboration with OpenAI at its WWDC, integrating ChatGPT into Siri (their digital assistant) in its new iOS 18 and macOS Sequoia rollout. According to Tim Cook, CEO, they aim to “combine generative AI with a user’s personal context to deliver truly helpful intelligence”.
Apple aims to prioritise user personalisation and control. Operating directly on the user’s device, it ensures their data remains secure while assimilating AI into their daily lives. For example, Siri now leverages “on-screen awareness” to understand both voice commands and the context of the user’s screen, enhancing its ability to assist with any task. This marks a new era of personalised GenAI, where technology understands and caters to individual needs.
We are beginning to embrace a future where LLMs assume customer-facing roles. The reality is, however, that we still live in a world where complex issues are escalated to humans.
The digital enterprise landscape is evolving. Examples such as the Salesforce Einstein Service Agent, its first fully autonomous AI agent, aim to revolutionise chatbot experiences. Built on the Einstein 1 Platform, it uses LLMs to understand context and generate conversational responses grounded in trusted business data. It offers 24/7 service, can be deployed quickly with pre-built templates, and handles simple tasks autonomously.
The technology does show promise, but it is important to acknowledge that GenAI is not yet fully equipped to handle the nuanced and complex scenarios that full customer-facing roles need. As technology progresses in the background, companies are beginning to adopt a hybrid approach, combining AI capabilities with human expertise.
AI for All: Democratising Innovation
The transformations happening inside the Googleplex and its neighbouring giants are undeniable. The collaborative efforts of Google, SAP, Microsoft, Apple, and Salesforce, amongst many other companies, leverage GenAI in unique ways and paint a picture of a rapidly evolving tech ecosystem. It’s a landscape where AI is no longer confined to research labs or data centres, but is permeating our everyday lives, from Olympic broadcasts to customer service interactions, and even our personal devices.
The accessibility of AI is increasing, thanks to efforts like Best Buy’s employee training and Apple’s on-device AI models. Microsoft’s Copilot and Power Apps empower individuals without technical expertise to harness AI’s capabilities. Tools like Canva and Uizard put UI/UX design within anybody’s reach. Platforms like Coursera offer certifications in AI. It’s never been easier to self-teach and apply such important skills. While the technology continues to mature, it’s clear that the future of AI isn’t just about what the machines can do for us – it’s about what we can do with them. The on-ramp to technological discovery is no longer North-South Highway 101 or the Googleplex that lies within it, but rather a network of tools and resources that’s rapidly expanding, inviting everyone to participate in the next wave of technological transformation.
Historically, data scientists have been the linchpins in the world of AI and machine learning, responsible for everything from data collection and curation to model training and validation. However, as the field matures, we’re witnessing a significant shift towards specialisation, particularly in data engineering and the strategic role of Large Language Models (LLMs) in data curation and labelling. The integration of AI into applications is also reshaping the landscape of software development and application design.
The Growth of Embedded AI
AI is being embedded into applications to enhance user experience, optimise operations, and provide insights that were previously inaccessible. For example, natural language processing (NLP) models are being used to power conversational chatbots for customer service, while machine learning algorithms are analysing user behaviour to customise content feeds on social media platforms. These applications leverage AI to perform complex tasks, such as understanding user intent, predicting future actions, or automating decision-making processes, making AI integration a critical component of modern software development.
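As an illustration of what embedding such a capability can look like in practice, here is a minimal sketch of a customer-service routing function built around a pre-trained NLP model. It assumes the Hugging Face transformers library is available; the model name, intent labels, and function names are illustrative rather than a reference to any specific product mentioned above.

```python
# Minimal sketch: embedding a pre-trained NLP model in an application
# to route customer-service messages to the right queue.
from transformers import pipeline

# Load a general-purpose zero-shot classifier once at application start-up.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def route_message(message: str) -> str:
    """Classify a support message into a queue using the embedded model."""
    labels = ["billing", "technical issue", "account access", "general enquiry"]
    result = classifier(message, candidate_labels=labels)
    return result["labels"][0]  # highest-scoring intent

if __name__ == "__main__":
    print(route_message("I was charged twice for my subscription last month"))
```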
This shift towards AI-embedded applications is not only changing the nature of the products and services offered but is also transforming the roles of those who build them. Since the traditional developer may not possess extensive AI skills, the role of data scientists is evolving, moving away from data engineering tasks and increasingly towards direct involvement in development processes.
The Role of LLMs in Data Curation
The emergence of LLMs has introduced a novel approach to handling data curation and processing tasks traditionally performed by data scientists. LLMs, with their profound understanding of natural language and ability to generate human-like text, are increasingly being used to automate aspects of data labelling and curation. This not only speeds up the process but also allows data scientists to focus more on strategic tasks such as model architecture design and hyperparameter tuning.
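As a hedged illustration of how this can work, the sketch below asks an LLM to pre-label free-text records, leaving anything it cannot confidently place for human review. It assumes the OpenAI Python SDK (v1+) and an API key in the environment; the model name, label set, and function names are assumptions for illustration only.

```python
# Illustrative sketch: using an LLM to pre-label free-text records
# for a downstream training set, with a human-review fallback.
from openai import OpenAI

client = OpenAI()
LABELS = ["complaint", "enquiry", "praise"]

def label_record(text: str) -> str:
    """Ask the LLM for exactly one label; anything else goes to human review."""
    prompt = (
        f"Classify the following customer message as one of {LABELS}. "
        "Reply with the label only.\n\n"
        f"Message: {text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "needs_human_review"

# Example usage: labels = [label_record(t) for t in unlabelled_texts]
```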
The accuracy of AI models is directly tied to the quality of the data they’re trained on. Incorrectly labelled data or poorly curated datasets can lead to biased outcomes, mispredictions, and ultimately, the failure of AI projects. The role of data engineers and the use of advanced tools like LLMs in ensuring the integrity of data cannot be overstated.
The Impact on Traditional Developers
Traditional software developers have primarily focused on writing code, debugging, and software maintenance, with a clear emphasis on programming languages, algorithms, and software architecture. However, as applications become more AI-driven, there is a growing need for developers to understand and integrate AI models and algorithms into their applications. This requirement presents a challenge for developers who may not have specialised training in AI or data science. It is driving increasing demand for upskilling and cross-disciplinary collaboration to bridge the gap between traditional software development and AI integration.
Clear Role Differentiation: Data Engineering and Data Science
In response to this shift, the role of data scientists is expanding beyond the confines of traditional data engineering and data science, to include more direct involvement in the development of applications and the embedding of AI features and functions.
Data engineering has always been a foundational element of the data scientist’s role, and its importance has increased with the surge in data volume, variety, and velocity. Integrating LLMs into the data collection process represents a cutting-edge approach to automating the curation and labelling of data, streamlining the data management process, and significantly enhancing the efficiency of data utilisation for AI and ML projects.
Accurate data labelling and meticulous curation are paramount to developing models that are both reliable and unbiased. Errors in data labelling or poorly curated datasets can lead to models that make inaccurate predictions or, worse, perpetuate biases. The integration of LLMs into data engineering tasks is facilitating a transformation, freeing data scientists from the burdens of manual data labelling and curation. This has led to a more specialised data scientist role that allocates more time and resources to areas that can create greater impact.
The Evolving Role of Data Scientists
Data scientists, with their deep understanding of AI models and algorithms, are increasingly working alongside developers to embed AI capabilities into applications. This collaboration is essential for ensuring that AI models are effectively integrated, optimised for performance, and aligned with the application’s objectives.
- Model Development and Innovation. With the groundwork of data preparation laid by LLMs, data scientists can focus on developing more sophisticated and accurate AI models, exploring new algorithms, and innovating in AI and ML technologies.
- Strategic Insights and Decision Making. Data scientists can spend more time analysing data and extracting valuable insights that can inform business strategies and decision-making processes.
- Cross-disciplinary Collaboration. This shift also enables data scientists to engage more deeply in interdisciplinary collaboration, working closely with other departments to ensure that AI and ML technologies are effectively integrated into broader business processes and objectives.
- AI Feature Design. Data scientists are playing a crucial role in designing AI-driven features of applications, ensuring that the use of AI adds tangible value to the user experience.
- Model Integration and Optimisation. Data scientists are also involved in integrating AI models into the application architecture, optimising them for efficiency and scalability, and ensuring that they perform effectively in production environments.
- Monitoring and Iteration. Once AI models are deployed, data scientists work on monitoring their performance, interpreting outcomes, and making necessary adjustments. This iterative process ensures that AI functionalities continue to meet user needs and adapt to changing data landscapes.
- Research and Continued Learning. Finally, the transformation allows data scientists to dedicate more time to research and continued learning, staying ahead of the rapidly evolving field of AI and ensuring that their skills and knowledge remain cutting-edge.
Conclusion
The integration of AI into applications is leading to a transformation in the roles within the software development ecosystem. As applications become increasingly AI-driven, the distinction between software development and AI model development is blurring. This convergence needs a more collaborative approach, where traditional developers gain AI literacy and data scientists take on more active roles in application development. The evolution of these roles highlights the interdisciplinary nature of building modern AI-embedded applications and underscores the importance of continuous learning and adaptation in the rapidly advancing field of AI.
Over the past year, many organisations have explored Generative AI and LLMs, with some successfully identifying, piloting, and integrating suitable use cases. As business leaders push tech teams to implement additional use cases, the repercussions on their roles will become more pronounced. Embracing GenAI will require a mindset reorientation, and tech leaders will see substantial impact across various ‘traditional’ domains.
AIOps and GenAI Synergy: Shaping the Future of IT Operations
When discussing AIOps adoption, there are commonly two responses: “Show me what you’ve got” or “We already have a team of Data Scientists building models”. The former usually demonstrates executive sponsorship without a specific business case, resulting in a lukewarm response to many pre-built AIOps solutions due to their lack of a defined business problem. On the other hand, organisations with dedicated Data Scientist teams face a different challenge. While these teams can create impressive models, they often face pushback from the business as the solutions frequently fail to address operational or business needs. The challenge arises from Data Scientists’ limited understanding of the data, hindering the development of use cases that effectively align with business needs.
The most effective approach lies in adopting an AIOps Framework. Incorporating GenAI into AIOps frameworks can enhance their effectiveness, enabling improved automation, intelligent decision-making, and streamlined operational processes within IT operations.
This allows active business involvement in defining and validating use cases, while enabling Data Scientists to focus on model building. It bridges the gap between technical expertise and business requirements, ensuring AIOps initiatives are informed by the capabilities of GenAI, address specific operational challenges, and resonate with the organisation’s goals.
The Next Frontier of IT Infrastructure
Many companies adopting GenAI are openly evaluating public cloud-based solutions like ChatGPT or Microsoft Copilot against on-premises alternatives, grappling with the trade-offs between scalability and convenience versus control and data security.
Cloud-based GenAI offers easy access to computing resources without substantial upfront investments. However, companies face challenges in relinquishing control over training data, potentially leading to inaccurate results or “AI hallucinations,” and concerns about exposing confidential data. On-premises GenAI solutions provide greater control, customisation, and enhanced data security, ensuring data privacy, but require significant hardware investments due to unexpectedly high GPU demands during both the training and inferencing stages of AI models.
Hardware companies are focusing on innovating and enhancing their offerings to meet the increasing demands of GenAI. The evolution and availability of powerful and scalable GPU-centric hardware solutions are essential for organisations to effectively adopt on-premises deployments, enabling them to access the necessary computational resources to fully unleash the potential of GenAI. Collaboration between hardware development and AI innovation is crucial for maximising the benefits of GenAI and ensuring that the hardware infrastructure can adequately support the computational demands required for widespread adoption across diverse industries. Innovations in hardware architecture, such as neuromorphic computing and quantum computing, hold promise in addressing the complex computing requirements of advanced AI models.
The synchronisation between hardware innovation and GenAI demands will require technology leaders to re-skill themselves on what they have done for years – infrastructure management.
The Rise of Event-Driven Designs in IT Architecture
IT leaders traditionally relied on three-tier architectures – presentation for user interface, application for logic and processing, and data for storage. Despite their structured approach, these architectures often lacked scalability and real-time responsiveness. The advent of microservices, containerisation, and serverless computing facilitated event-driven designs, enabling dynamic responses to real-time events and enhancing agility and scalability. Event-driven designs are a paradigm shift away from traditional approaches, decoupling components and using events as the central communication mechanism. User actions, system notifications, or data updates trigger actions across distributed services, adding flexibility to the system.
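To make the decoupling concrete, here is a minimal in-process sketch of the publish/subscribe pattern at the heart of event-driven designs. In production this role is normally played by a message broker or cloud event bus (Kafka, RabbitMQ, and the like); the event names and handlers below are illustrative.

```python
# Minimal in-process sketch of an event-driven design: producers publish
# events, and decoupled consumers subscribe to and react to them.
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Each subscriber reacts independently; the publisher knows nothing about them.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
bus.subscribe("order.created", lambda e: print("Billing service invoices order", e["order_id"]))
bus.subscribe("order.created", lambda e: print("Notification service emails", e["customer"]))
bus.publish("order.created", {"order_id": 42, "customer": "alice@example.com"})
```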
However, adopting event-driven designs presents challenges, particularly in higher transaction-driven workloads where the speed of serverless function calls can significantly impact architectural design. While serverless computing offers scalability and flexibility, the latency introduced by initiating and executing serverless functions may pose challenges for systems that demand rapid, real-time responses. Increasing reliance on event-driven architectures underscores the need for advancements in hardware and compute power. Transitioning from legacy architectures can also be complex and may require a phased approach, with cultural shifts demanding adjustments and comprehensive training initiatives.
The shift to event-driven designs challenges IT Architects, whose traditional roles involved designing, planning, and overseeing complex systems. With GenAI and automation enhancing design tasks, Architects will need to transition to more strategic and visionary roles. GenAI showcases capabilities in pattern recognition, predictive analytics, and automated decision-making, promoting a symbiotic relationship with human expertise. This evolution doesn’t replace Architects but signifies a shift toward collaboration with AI-driven insights.
IT Architects need to evolve their skill set, blending technical expertise with strategic thinking and collaboration. This changing role will drive innovation, creating resilient, scalable, and responsive systems to meet the dynamic demands of the digital age.
Whether your organisation is evaluating or implementing GenAI, the need to upskill your tech team remains imperative. The evolution of AI technologies has disrupted the tech industry, impacting people in tech. Now is the opportune moment to acquire new skills and adapt tech roles to leverage the potential of GenAI rather than being disrupted by it.
“AI Guardrails” are often used as a method not only to keep AI programs on track, but also to accelerate AI investments. Projects and programs that fall within the guardrails should be easy to approve, govern, and manage – whereas those outside of the guardrails require further review by a governance team or approval body. The concept of guardrails is familiar to many tech businesses and is often applied in areas such as cybersecurity, digital initiatives, data analytics, governance, and management.
While guidance on implementing guardrails is common, organisations often leave the task of defining their specifics, including their components and functionalities, to their AI and data teams. To assist with this, Ecosystm has surveyed some leading AI users among our customers to get their insights on the guardrails that can provide added value.
Data Security, Governance, and Bias
- Data Assurance. Has the organisation implemented robust data collection and processing procedures to ensure data accuracy, completeness, and relevance for the purpose of the AI model? This includes addressing issues like missing values, inconsistencies, and outliers.
- Bias Analysis. Does the organisation analyse training data for potential biases – demographic, cultural, and so on – that could lead to unfair or discriminatory outputs? (A minimal sketch of such a check appears after this list.)
- Bias Mitigation. Is the organisation implementing techniques like debiasing algorithms and diverse data augmentation to mitigate bias in model training?
- Data Security. Does the organisation use strong data security measures to protect sensitive information used in training and running AI models?
- Privacy Compliance. Is the AI opportunity compliant with relevant data privacy regulations (country and industry-specific as well as international standards) when collecting, storing, and utilising data?
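As referenced in the Bias Analysis item above, the following is a minimal sketch of one such check: comparing positive-outcome rates across demographic groups and flagging those that fall below the commonly used four-fifths threshold. The column names, sample data, and threshold are assumptions to be adapted to the organisation’s own datasets.

```python
# Minimal sketch of a bias-analysis check: selection rates by group
# and a disparate-impact flag using the "four-fifths" rule of thumb.
import pandas as pd

def selection_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (1) per demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_flags(rates: pd.Series, threshold: float = 0.8) -> pd.Series:
    """Flag groups whose selection rate is below `threshold` x the best group's rate."""
    return rates < threshold * rates.max()

# Illustrative data: loan approvals by gender (columns are assumptions).
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F", "M", "F"],
    "approved": [1,   0,   1,   1,   1,   0,   1,   0],
})
rates = selection_rate_by_group(df, "gender", "approved")
print(rates)                          # selection rate per group
print(disparate_impact_flags(rates))  # True where the four-fifths rule is breached
```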
Model Development and Explainability
- Explainable AI. Does the organisation use explainable AI (XAI) techniques to understand and explain how its AI models reach their decisions, fostering trust and transparency? (See the sketch after this list.)
- Fair Algorithms. Are algorithms and models designed with fairness in mind, considering factors like equal opportunity and non-discrimination?
- Rigorous Testing. Does the organisation conduct thorough testing and validation of AI models before deployment, ensuring they perform as intended, are robust to unexpected inputs, and avoid generating harmful outputs?
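As a companion to the Explainable AI item above, this is a minimal sketch of one widely used XAI technique: computing SHAP values for a tree-based model to see which features drive its predictions. It assumes scikit-learn and the shap package are installed; the dataset is a bundled scikit-learn example used purely for illustration.

```python
# Minimal sketch of an explainability check using SHAP values.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

# Train a simple model on a bundled example dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Compute per-feature contributions (SHAP values) for a sample of predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])

# Global view: which features drive the model's predictions overall?
shap.summary_plot(shap_values, X.iloc[:200])
```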
AI Deployment and Monitoring
- Oversight Accountability. Has the organisation established clear roles and responsibilities for human oversight throughout the AI lifecycle, ensuring human control over critical decisions and mitigation of potential harm?
- Continuous Monitoring. Are there mechanisms to continuously monitor AI systems for performance, bias drift, and unintended consequences, addressing any issues promptly? (A drift-monitoring sketch follows this list.)
- Robust Safety. Can the organisation ensure AI systems are robust and safe, able to handle errors or unexpected situations without causing harm? This includes thorough testing and validation of AI models under diverse conditions before deployment.
- Transparency Disclosure. Is the organisation transparent with stakeholders about AI use, including its limitations, potential risks, and how decisions made by the system are reached?
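As flagged in the Continuous Monitoring item above, here is a minimal sketch of one common drift check: the Population Stability Index (PSI), comparing a feature’s training-time distribution with its production distribution. The 0.2 threshold is a widely quoted rule of thumb rather than a mandated standard, and the simulated data is illustrative.

```python
# Minimal sketch of a continuous-monitoring check: PSI-based drift detection.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (training) distribution and a production sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) for empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)         # training-time feature distribution
production = rng.normal(0.3, 1.1, 10_000)   # drifted production distribution
psi = population_stability_index(baseline, production)
print(f"PSI = {psi:.3f} -> {'investigate drift' if psi > 0.2 else 'stable'}")
```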
Other AI Considerations
- Ethical Guidelines. Has the organisation developed and adhered to ethical principles for AI development and use, considering areas like privacy, fairness, accountability, and transparency?
- Legal Compliance. Has the organisation created mechanisms to stay updated on and compliant with relevant legal and regulatory frameworks governing AI development and deployment?
- Public Engagement. What mechanisms are there in place to encourage open discussion and engage with the public regarding the use of AI, addressing concerns and building trust?
- Social Responsibility. Has the organisation considered the environmental and social impact of AI systems, including energy consumption, ecological footprint, and potential societal consequences?
Implementing these guardrails requires a comprehensive approach that includes policy formulation, technical measures, and ongoing oversight. It might take a little longer to set up this capability, but in the mid to longer term, it will allow organisations to accelerate AI implementations and drive a culture of responsible AI use and deployment.
When non-organic (man-made) fabric was introduced into fashion, there were a number of harsh warnings about using polyester and other synthetic fibres, including about their flammability.
In creating non-organic data sets, should we also be creating warnings on their use and flammability? Let’s look at why synthetic data is used in industries such as Financial Services and Automotive, as well as for new product development in Manufacturing.
Synthetic Data Defined
Synthetic data can be defined as data that is artificially developed rather than being generated by actual interactions. It is often created with the help of algorithms and is used for a wide range of activities, including as test data for new products and tools, for model validation, and in AI model training. Synthetic data is a type of data augmentation which involves creating new and representative data.
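To ground the definition, here is a deliberately simple sketch of one way synthetic tabular data can be produced: fitting a multivariate normal distribution to real numeric data and sampling new records from it. Real programmes typically use richer generators (copulas, GANs, or dedicated tools such as SDV); the column names and figures below are illustrative.

```python
# Simple sketch: generate synthetic tabular data by fitting a multivariate
# normal to real numeric columns and sampling new, artificial records.
import numpy as np
import pandas as pd

# Stand-in for a real customer dataset (columns are assumptions).
real = pd.DataFrame({
    "income":  np.random.default_rng(1).normal(60_000, 15_000, 500),
    "age":     np.random.default_rng(2).normal(40, 12, 500),
    "balance": np.random.default_rng(3).normal(8_000, 3_000, 500),
})

# Estimate the joint distribution of the real data...
mean = real.mean().to_numpy()
cov = np.cov(real.to_numpy(), rowvar=False)

# ...and draw synthetic records that preserve its correlations
# without reproducing any individual customer's row.
rng = np.random.default_rng(42)
synthetic = pd.DataFrame(rng.multivariate_normal(mean, cov, size=500), columns=real.columns)

print(real.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])
```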
Why is it used?
The main reasons why synthetic data is used instead of real data are cost, privacy, and testing. Let’s look at more specifics on this:
- Data privacy. When privacy requirements limit data availability or how it can be used. For example, in Financial Services where restrictions around data usage and customer privacy are particularly limiting, companies are starting to use synthetic data to help them identify and eliminate bias in how they treat customers – without contravening data privacy regulations.
- Data availability. When the data needed for testing a product does not exist or is not available to the testers. This is often the case for new releases.
- Data for testing. When training data is needed for machine learning algorithms. However, in many instances, such as in the case of autonomous vehicles, the data is expensive to generate in real life.
- Training across third parties using cloud. When moving private data to cloud infrastructures involves security and compliance risks. Moving synthetic versions of sensitive data to the cloud can enable organisations to share data sets with third parties for training across cloud infrastructures.
- Data cost. Producing synthetic data through a generative model is significantly more cost-effective and efficient than collecting real-world data. With synthetic data, it becomes cheaper and faster to produce new data once the generative model is set up.
Why should it cause concern?
If the real dataset contains biases, data augmented from it will contain them too, so identifying an optimal data augmentation strategy is important.
If the synthetic set doesn’t truly represent the original customer data set, it might contain the wrong buying signals regarding what customers are interested in or are inclined to buy.
Synthetic data also requires some form of output/quality control and internal regulation, specifically in highly regulated industries such as Financial Services.
Creating incorrect synthetic data also can get a company in hot water with external regulators. For example, if a company created a product that harmed someone or didn’t work as advertised, it could lead to substantial financial penalties and, possibly, closer scrutiny in the future.
Conclusion
Synthetic data allows us to continue developing new and innovative products and solutions when the data necessary to do so wouldn’t otherwise be present or available due to volume, data sensitivity, or user privacy challenges. Generating synthetic data also offers the flexibility to adjust its nature and environment as and when required, improving model performance and creating opportunities to check for outliers and extreme conditions.
AI has become intrinsic to our personal lives – we are often completely unaware of technology’s influence on our daily lives. For enterprises too, tech solutions often come embedded with AI capabilities. Today, an organisation’s ability to automate processes and decisions is often dependent more on their desire and appetite for tech adoption, than the technology itself.
In 2022 the key focus for enterprises will be on being able to trust their Data & AI solutions. This will include trust in their IT infrastructure, architecture and AI services; and stretch to being able to participate in trusted data sharing models. Technology vendors will lead this discussion and showcase their solutions in the light of trust.
Read what Ecosystm analysts Darian Bird, Niloy Mukherjee, Peter Carr, and Tim Sheedy think will be the leading Data & AI trends in 2022.
You know AI is the absolute next biggest thing. You know it is going to change our world! It is the little technology trick start-ups use to disrupt industries. It enables crazy applications we have never thought of before! A few days ago, we were dazzled to learn of an AI app that promises to generate a credit rating score by reading your face – essentially, from just a photograph it can tell a prospective financier how likely you are to pay back a loan!
Artificial Intelligence is real and has started becoming mainstream – chatbots using AI to answer queries are everywhere. AI is being used in stock trades, contact centre applications, bank loans processing, crop harvests, self-driving vehicles, and streaming entertainment. It is now part of boardroom discussions and strategic initiatives of CEOs. McKinsey predicts AI will add USD 13 trillion to the global economy by 2030.
Hype vs Reality
So much to like – but why then do we often find leaders shrugging their shoulders? Despite all the good news above, there is also another side to AI. For all the green indicators, there are also some red flags (Figure 1). In fact, if one googles “Hype vs reality”, the majority of the results returned are to do with AI!
Our experience shows that broad swathes of executives are sceptical of AI. Leaders in a variety of businesses, from large multinational banks and consumer packaged goods companies to appliance makers, have privately expressed their disappointment at not being able to make AI work for them. They cannot bridge the gap between the AI hype and reality in their businesses.
The data available also bears this out – VentureBeat estimates that 87% of ML projects never make it into production. Ecosystm research suggests that only 7% of organisations have an AI centre of excellence (CoE) – while the remaining depend on ad-hoc implementations. There are several challenges that organisations face in procuring and implementing a successful AI solution – both technology and business (Figure 2).
Visible Patterns Emerge from Successful AI Use Cases
This brings us to an interesting dichotomy – the reality of failed implementations versus the hype surrounding AI. Digital native companies or early adopters of AI form most of the success stories. Traditional companies find it tougher to embark on a successful AI journey. There have been studies that show a staggering gap in the ROI of AI projects between early adopters versus others, and the gulf between the high performers and the rest when using AI.
If we look back at Figure 2 and analyse the challenges, we will see certain common themes – many of which are now commonplace wisdom, if not trite. Leadership alignment around AI strategy is the most common one. Getting clean data, aligning strategy with execution, and building the capabilities to use AI are all touted as critical requirements for successful execution. These themes all point to the insight that it is the human element that is more critical – not the technology.
As practitioners we have come across numerous examples of AI projects that go off-track because of human issues. Take the example of an organisation whose key business mandate was enhancing call centre capabilities and capacity using RPA tools. There was strong leadership support and enthusiasm. It was clear that a large number of basic-level tickets raised by the centre could be resolved using digital agents. This would result in substantial gains in customer experience, through faster ticket resolution and higher employee productivity – estimated to be above 30%. However, two months after launching the pilot, only a very small percentage of cases had been identified for migration to digital agents.
Very soon, it became clear that these tools were being perceived as a replacement for human skills, rather than as a way to augment them. The most vocal proponent of the initiative – the head of the customer experience team – became its critic, as he felt that the small savings were not worth the risk of higher agent turnover rates due to perceived job insecurity.
This was turned around by a three-day workshop focused on demonstrating how the job responsibility of agents could be enhanced as portions of their job were automated. The processes were redesigned to isolate parts that could be fully automated and to group non-automated components together, driving more responsibility and discretion for agents. Once the enhanced responsibility of the call centre staff was identified, managers felt more comfortable and were willing to support the initiative. In the end, the goals set at the start of the project were all met.
In my next blog I will share with you what we consider the winning formula for a successful AI deployment. In the meantime, share with us your AI stories – both of your challenges and successes.
Written with contributions from Ravi Pattamatta and Ratnesh Prasad
In this blog, our guest author Shameek Kundu talks about the importance of making AI/ machine learning models reliable and safe. “Getting data and algorithms right has always been important, particularly in regulated industries such as banking, insurance, life sciences and healthcare. But the bar is much higher now: more data, from more sources, in more formats, feeding more algorithms, with higher stakes.”
Building trust in algorithms is essential. Not (just) because regulators want it, but because it is good for customers and business. The good news is that with the right approach and tooling, it is also achievable.
Getting data and algorithms right has always been important, particularly in regulated industries such as banking, insurance, life sciences and healthcare. But the bar is much higher now: more data, from more sources, in more formats, feeding more algorithms, with higher stakes. With the increased use of Artificial Intelligence/ Machine Learning (AI/ML), today’s algorithms are also more powerful and difficult to understand.
A false dichotomy
At this point in the conversation, I get one of two reactions. One is of distrust in AI/ML and a belief that it should have little role to play in regulated industries. Another is of nonchalance; after all, most of us feel comfortable using ‘black-boxes’ (e.g., airplanes, smartphones) in our daily lives without being able to explain how they work. Why hold AI/ML to special standards?
Both make valid points. But the skeptics miss out on the very real opportunity cost of not using AI/ML – whether it is living with historical biases in human decision-making or simply not being able to do things that are too complex for a human to do, at scale. For example, the use of alternative data and AI/ML has helped bring financial services to many who have never had access before.
On the other hand, cheerleaders for unfettered use of AI/ML might be overlooking the fact that a human being (often with a limited understanding of AI/ML) is always accountable for and/ or impacted by the algorithm. And fairly or otherwise, AI/ML models do elicit concerns around their opacity – among regulators, senior managers, customers and the broader society. In many situations, ensuring that the human can understand the basis of algorithmic decisions is a necessity, not a luxury.
A way forward
Reconciling these seemingly conflicting requirements is possible. But it requires serious commitment from business and data/ analytics leaders – not (just) because regulators demand it, but because it is good for their customers and their business, and the only way to start capturing the full value from AI/ML.
1. ‘Heart’, not just ‘Head’
It is relatively easy to get people excited about experimenting with AI/ML. But when it comes to actually trusting the model to make decisions for us, we humans are likely to put up our defences. Convincing a loan approver, insurance underwriter, medical doctor or front-line salesperson to trust an AI/ML model – over their own knowledge or intuition – is as much about the ‘heart’ as the ‘head’. Helping them understand, on their own terms, how the alternative is at least as good as their current way of doing things, is crucial.
2. A Broad Church
Even in industries/ organisations that recognise the importance of governing AI/ML, there is a tendency to define it narrowly. For example, in Financial Services, one might argue that “an ML model is just another model” and expect existing Model Risk teams to deal with any incremental risks from AI/ML.
There are two issues with this approach:
First, AI/ML models tend to require a greater focus on model quality (e.g., with respect to stability, overfitting and unjust bias) than their traditional alternatives. The pace at which such models are expected to be introduced and re-calibrated is also much higher, stretching traditional model risk management approaches.
Second, poorly designed AI/ML models create second order risks. While not unique to AI/ML, these risks become accentuated due to model complexity, greater dependence on (high-volume, often non-traditional) data and ubiquitous adoption. One example is poor customer experience (e.g., badly communicated decisions) and unfair treatment (e.g., unfair denial of service, discrimination, misselling, inappropriate investment recommendations). Another is around the stability, integrity and competitiveness of financial markets (e.g., unintended collusion with other market players). Obligations under data privacy, sovereignty and security requirements could also become more challenging.
The only way to respond holistically is to bring together a broad coalition – of data managers and scientists, technologists, specialists from risk, compliance, operations and cyber-security, and business leaders.
3. Automate, Automate, Automate
A key driver for the adoption and effectiveness of AI/ ML is scalability. The techniques used to manage traditional models are often inadequate in the face of more data-hungry, widely used and rapidly refreshed AI/ML models. Whether it is during the development and testing phase, formal assessment/ validation or ongoing post-production monitoring, it is impossible to govern AI/ML at scale using manual processes alone.
So, somewhat counter-intuitively, we need more automation if we are to build and sustain trust in AI/ML. As humans are accountable for the outcomes of AI/ML models, we can only be ‘in charge’ if we have the tools to provide us with reliable intelligence on them – before and after they go into production. As the recent experience with model performance during COVID-19 suggests, maintaining trust in AI/ML models is an ongoing task.
***
I have heard people say “AI is too important to be left to the experts”. Perhaps. But I am yet to come across an AI/ML practitioner who is not keenly aware of the importance of making their models reliable and safe. What I have noticed is that they often lack suitable tools – to support them in analysing and monitoring models, and to enable conversations to build trust with stakeholders. If AI is to be adopted at scale, that must change.
Shameek Kundu is Chief Strategy Officer and Head of Financial Services at TruEra Inc. TruEra helps enterprises analyse, improve, and monitor the quality of machine learning models.