Building Trust in your AI Solutions

In this blog, our guest author Shameek Kundu talks about the importance of making AI/machine learning models reliable and safe. “Getting data and algorithms right has always been important, particularly in regulated industries such as banking, insurance, life sciences and healthcare. But the bar is much higher now: more data, from more sources, in more formats, feeding more algorithms, with higher stakes.”

Building trust in algorithms is essential. Not (just) because regulators want it, but because it is good for customers and business. The good news is that with the right approach and tooling, it is also achievable.

Getting data and algorithms right has always been important, particularly in regulated industries such as banking, insurance, life sciences and healthcare. But the bar is much higher now: more data, from more sources, in more formats, feeding more algorithms, with higher stakes. With the increased use of Artificial Intelligence/Machine Learning (AI/ML), today’s algorithms are also more powerful and harder to understand.

A false dichotomy

At this point in the conversation, I get one of two reactions. One is of distrust in AI/ML and a belief that it should have little role to play in regulated industries. Another is of nonchalance; after all, most of us feel comfortable using ‘black-boxes’ (e.g., airplanes, smartphones) in our daily lives without being able to explain how they work. Why hold AI/ML to special standards?

Both make valid points. But the skeptics miss out on the very real opportunity cost of not using AI/ML – whether it is living with historical biases in human decision-making or simply not being able to do things that are too complex for a human to do, at scale. For example, the use of alternative data and AI/ML has helped bring financial services to many who have never had access before.

On the other hand, cheerleaders for unfettered use of AI/ML might be overlooking the fact that a human being (often with a limited understanding of AI/ML) is always accountable for and/or impacted by the algorithm. And fairly or otherwise, AI/ML models do elicit concerns around their opacity – among regulators, senior managers, customers and the broader society. In many situations, ensuring that the human can understand the basis of algorithmic decisions is a necessity, not a luxury.
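To make that less abstract, here is a minimal sketch of one widely used way to give humans a window into an otherwise opaque model: permutation importance, which measures how much a model’s performance degrades when each input is shuffled. The dataset, feature names and model choice below are hypothetical stand-ins for illustration, not a recommendation of any particular technique or tool.

```python
# A minimal sketch: surfacing which inputs drive a black-box model's decisions.
# The data, feature names and model are hypothetical stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["income", "tenure", "utilisation", "age", "channel", "region"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt held-out performance?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:12s} importance: {score:.3f}")
```

A ranked list like this will not satisfy every stakeholder, but it gives a loan approver or a validator a concrete starting point for a conversation about whether the model’s drivers make business sense.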

A way forward

Reconciling these seemingly conflicting requirements is possible. But it requires serious commitment from business and data/analytics leaders – not (just) because regulators demand it, but because it is good for their customers and their business, and the only way to start capturing the full value from AI/ML.

1. ‘Heart’, not just ‘Head’

It is relatively easy to get people excited about experimenting with AI/ML. But when it comes to actually trusting the model to make decisions for us, we humans are likely to put up our defences. Convincing a loan approver, insurance underwriter, medical doctor or front-line salesperson to trust an AI/ML model – over their own knowledge or intuition – is as much about the ‘heart’ as the ‘head’. Helping them understand, on their own terms, how the alternative is at least as good as their current way of doing things, is crucial.

2. A Broad Church

Even in industries/organisations that recognise the importance of governing AI/ML, there is a tendency to define it narrowly. For example, in Financial Services, one might argue that “an ML model is just another model” and expect existing Model Risk teams to deal with any incremental risks from AI/ML.

There are two issues with this approach:

First, AI/ML models tend to require a greater focus on model quality (e.g., with respect to stability, overfitting and unjust bias; see the sketch after this list) than their traditional alternatives. The pace at which such models are expected to be introduced and recalibrated is also much higher, stretching traditional model risk management approaches.

Second, poorly designed AI/ML models create second-order risks. While not unique to AI/ML, these risks become accentuated due to model complexity, greater dependence on (high-volume, often non-traditional) data and ubiquitous adoption. One example is poor customer experience (e.g., badly communicated decisions) and unfair treatment (e.g., unfair denial of service, discrimination, mis-selling, inappropriate investment recommendations). Another is around the stability, integrity and competitiveness of financial markets (e.g., unintended collusion with other market players). Obligations under data privacy, sovereignty and security requirements could also become more challenging.
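As an illustration of the quality checks mentioned above, here is a minimal sketch measuring two of them: an overfitting gap (train vs. held-out AUC) and a simple demographic parity difference across a protected group. The data, the protected attribute and any thresholds a team might apply to these numbers are illustrative assumptions, not regulatory standards.

```python
# A minimal sketch of two model-quality checks: overfitting and group fairness.
# The dataset and the protected attribute are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, random_state=1)
group = np.random.default_rng(1).integers(0, 2, size=len(y))  # hypothetical protected attribute

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)

# Overfitting check: a large gap between train and held-out AUC is a red flag.
gap = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1]) - \
      roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"train-test AUC gap: {gap:.3f}")

# Demographic parity difference: gap in positive-outcome rates between groups.
approvals = model.predict(X_te)
dpd = abs(approvals[g_te == 0].mean() - approvals[g_te == 1].mean())
print(f"demographic parity difference: {dpd:.3f}")
```

Checks like these are deliberately simple; the point is that they can be codified and run on every model version, rather than argued over once a year in a validation report.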

The only way to respond holistically is to bring together a broad coalition – of data managers and scientists, technologists, specialists from risk, compliance, operations and cyber-security, and business leaders.

3. Automate, Automate, Automate

A key driver for the adoption and effectiveness of AI/ML is scalability. The techniques used to manage traditional models are often inadequate in the face of more data-hungry, widely used and rapidly refreshed AI/ML models. Whether it is during the development and testing phase, formal assessment/validation or ongoing post-production monitoring, it is impossible to govern AI/ML at scale using manual processes alone.

So, somewhat counter-intuitively, we need more automation if we are to build and sustain trust in AI/ML. As humans are accountable for the outcomes of AI/ML models, we can only be ‘in charge’ if we have the tools to provide us reliable intelligence on them – before and after they go into production. As the recent experience with model performance during COVID-19 suggests, maintaining trust in AI/ML models is an ongoing task.
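Here is a minimal sketch of what one such automated post-production check can look like: a Population Stability Index (PSI) comparison of a model input’s live distribution against its training baseline. The feature, the simulated ‘post-COVID’ shift and the 0.2 alert threshold are all illustrative assumptions; the threshold is a common rule of thumb, not a universal standard.

```python
# A minimal sketch of automated drift monitoring via Population Stability Index.
# The baseline/live distributions and the alert threshold are illustrative.
import numpy as np

def psi(baseline, live, bins=10):
    """Population Stability Index between two samples of one feature."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)  # live values outside the
    actual, _ = np.histogram(live, bins=edges)        # baseline range are dropped
    # Convert counts to proportions; a small epsilon avoids division by zero.
    expected = expected / expected.sum() + 1e-6
    actual = actual / actual.sum() + 1e-6
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(42)
training_income = rng.normal(50_000, 10_000, 10_000)  # baseline at model build
live_income = rng.normal(55_000, 12_000, 1_000)       # hypothetical shifted live population

score = psi(training_income, live_income)
print(f"PSI = {score:.3f}" + ("  -> ALERT: investigate drift" if score > 0.2 else ""))
```

Run on a schedule across every input and output of every production model, a check like this turns “is the model still behaving?” from a periodic manual exercise into a continuous, auditable signal.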

***

I have heard people say “AI is too important to be left to the experts”. Perhaps. But I am yet to come across an AI/ML practitioner who is not keenly aware of the importance of making their models reliable and safe. What I have noticed is that they often lack suitable tools – to support them in analysing and monitoring models, and to enable conversations to build trust with stakeholders. If AI is to be adopted at scale, that must change.

Shameek Kundu is Chief Strategy Officer and Head of Financial Services at TruEra Inc. TruEra helps enterprises analyse, improve and monitor the quality of machine learning models.


New Zealand’s First Hyperscale Data Centre

New Zealand’s cloud landscape is set to change drastically with the arrival of its first hyperscale data centre platform in Invercargill, a city at the southern tip of New Zealand’s South Island. The focus on digitalisation (in both the private and public sectors), growing data localisation mandates, the need for big data storage, and the demand for scalable apps and innovations are driving the demand for hyperscale data centres and more undersea cables in the region.

In December 2020, Meridian Energy, New Zealand’s fourth-largest electricity retailer, and Datagrid New Zealand, run by Hawaiki Cable, announced the launch of a project to develop a 60MW, 25,000 sqm facility near the town of Makarewa at a cost of nearly USD 500 million. The project is due for a commercial launch in 2023, and a major part of the investment will involve the laying of two new submarine cables. The first subsea cable will connect Invercargill to Sydney and Melbourne in Australia, and the second will connect with Hawaiki Cable’s landing point at Mangawhai Heads, north of Auckland, extending further to Auckland, Wellington, and Christchurch.

The country clearly recognises the need for a robust infrastructure to accelerate innovation – Microsoft also received approval from the New Zealand government in September last year to open a data centre region. To support the future of cloud services and to fulfil New Zealand’s growing data centre demands, CDC Data Centres also plans to develop two new hyperscale data centres in Auckland.

Ecosystm Comments

The introduction of the Datagrid New Zealand data centre in Invercargill will be a welcome asset for the Southland region of New Zealand. With farming, fishing and forestry as its primary industries, the region has done well throughout the COVID-19 pandemic. This initiative will further benefit the local economy by creating opportunities for local businesses.

With long-term growth at the data centre expected to consume up to 100MW of renewable energy, Meridian Energy is well equipped to supply renewable energy generated at the Manapouri hydroelectric power station, which is capable of generating 850MW. The potential closure of the 550MW Tiwai Point Aluminium Smelter is also expected to put the country in a position of electricity oversupply.

The data centre will be a critical piece of New Zealand’s infrastructure, supporting the roll-out of 5G networks by telecom providers and the need for low-latency cloud compute and data storage. Datagrid will provide a competitive alternative to the likes of Microsoft’s new data centre.

While the construction and opening of the data centre may add more stress to New Zealand’s under-resourced construction sector, it will also create tech jobs in the Southland region in the long term. Unique to the region is the Southern Institute of Technology, whose Zero Fees Scheme has been confirmed until the end of 2022. The data centre will help to keep skilled tech workers in the region.



Hitachi Acquires GlobalLogic

Hitachi announced their plans to acquire US-based software development company GlobalLogic for an estimated USD 9.6 billion, including debt repayment. The transaction is expected to close by the end of July, after which GlobalLogic will function under Hitachi’s Global Digital Holdings.

GlobalLogic was founded in 2000; the Canada Pension Plan Investment Board and Swiss investment firm Partners Group each hold a 45% stake, with the remainder owned by the company’s management.

Hitachi’s Business Portfolio Expansion

The acquisition of GlobalLogic is part of Hitachi’s move to sharpen the focus of, and extend the range of, their digital services business. As Hitachi aims to expand beyond electronics hardware and concentrate on digital services, they are looking to benefit from GlobalLogic’s range of expertise – from chips to cloud services. Silicon Valley-based GlobalLogic has a presence in 14 countries, with more than 20,000 employees and 400 active clients in industries including telecommunications, healthcare, technology, finance and automotive. The acquisition will also expand Hitachi’s network outside Japan by providing access to a global customer base, and will boost their software and solutions platforms, including Hitachi’s IoT portfolio and data analytics.

The GlobalLogic deal follows another big Hitachi acquisition – ABB’s power grid business in July 2020 – aimed at clean energy and distributed energy frontiers. That deal made Hitachi one of the largest global grid equipment and service providers across all regions.

Hitachi is also planning to divest parts of their portfolio such as Hitachi Metals, their chemical unit and their medical equipment business.

Ecosystm Comments

Hitachi’s move to acquire GlobalLogic is very interesting and is in line with the growing trend of global Operational Technology (OT) vendors riding the wave of Industry 4.0 and ‘Product as a Service’ models – essentially, moving up the margin ladder with more digital services added on to their already established equipment business. Siemens, Schneider Electric, Panasonic, ABB, Hitachi and Johnson Controls are some of the prominent vendors who have taken pole positions in their respective industry domains, in this race to digitally transform their businesses and business models. Last year, Panasonic made a very similar move, taking a 20% equity stake in Blue Yonder, a leading supply chain software provider.

With rapid advancements in computing and communications (5G), it is now possible to converge the IT (Information Technology supporting enterprise information flows), OT (Operational Technology – machine-level control of physical equipment) and ET (Engineering Technology in the product design and development space, such as CAD, CAM and PDM) domains – three worlds that were separate till now. The convergence of these three worlds enables high-impact use cases in automation, product, process and business model innovation in almost all sectors, such as autonomous vehicles, energy-efficient buildings, asset tracking and monitoring, and predictive and prescriptive maintenance. For OT vendors, therefore, it becomes critical to acquire IT and ET capabilities to succeed in the new cyber-physical world. Most OT vendors are choosing to acquire these capabilities through strategic partnerships (such as Siemens with Atos and SAP; Panasonic with Blue Yonder) or acquisitions (such as Hitachi and GlobalLogic) rather than develop them organically in completely new domains.



AT&T & Fortinet Partner for a Managed SASE Solution

Last week AT&T announced a partnership with Fortinet to expand their managed security services portfolio. The partnership provides global managed Secure Access Service Edge (SASE) solutions at scale. The solution uses Fortinet’s SASE stack, which unifies software-defined wide-area network (SD-WAN) and network security capabilities within AT&T’s managed cybersecurity framework. Additionally, the solution will integrate with the AT&T Alien Labs Threat Intelligence platform to enhance detection and response. AT&T plans to update its managed SASE service during the year and will continue to add more options.

Talking about the AT&T-Fortinet partnership, Ecosystm Principal Advisor Ashok Kumar says, “This move continues the trend of the convergence of networking and security solutions. AT&T is positioning themselves well with their integrated offer of network and security services to address the needs of global enterprises.”

Convergence of Network & Security

AT&T’s improved global managed security service includes features such as secure web gateway, firewall-as-a-service, cloud access security broker (CASB) and zero-trust access, which provide security teams and analysts with unified capabilities across the cloud, networks and endpoints. The solution aims to enable enterprises to create a more resilient network, bringing together the core capabilities of the two companies to reduce operational costs and deliver a unified offering.

Last year AT&T also partnered with Cisco to expand its SD-WAN solution and to support AT&T Managed Services using Cisco’s vManage controller through a single management interface. Over the past few years, multiple vendors, including Fortinet, have developed comprehensive SASE solution capabilities through partnerships or acquisitions to provide a unified offering. Last year Fortinet acquired OPAQ, a SASE cloud provider, to bolster their security capabilities with OPAQ’s patented Zero Trust Network Access (ZTNA) cloud solution and to strengthen their SD-WAN, security and edge package.

The Push Towards Flexible Networking

Kumar says, “The pandemic has created a higher demand and value for secure networking services. Enterprises experienced a greater number of phishing and malware attacks last year with the sudden increase in work-from-home users. The big question enterprises need to ask themselves is whether legacy networks can support their evolving business priorities.”

“As global economies look to recover, securing remote users working from anywhere, with full mobility, will be a high priority for all enterprises. Enterprises need to evaluate mobile SASE services that provide frictionless identity management with seamless user experiences, and that are compatible with the growing adoption of 5G services in 2021 and beyond.”

