Traditional network architectures are inherently fragile, often relying on a single transport type to connect branches, production facilities, and data centres. The imperative for networks to maintain resilience has grown significantly, particularly due to the delivery of customer-facing services at branches and the increasing reliance on interconnected machines in operational environments. The cost of network downtime can now be quantified in terms of both lost customers and reduced production.
Distributed Enterprises Face New Challenges
As the importance of maintaining resiliency grows, so does the complexity of network management. Distributed enterprises must provide connectivity under challenging conditions, such as:
- Remote access for employees using video conferencing
- Local breakout for cloud services to avoid backhauling
- IoT devices left unattended in public places
- Customers accessing digital services at the branch or home
- Sites in remote areas requiring the same quality of service
Network managers require intelligent tools to remain in control without adding unnecessary burden to end users. The number of endpoints and the speed of change have made it impossible for human operators to manage networks without assistance from AI.

AI-Enhanced Network Management
Modern network operations centres are enhancing their visibility by aggregating data from diverse systems and consolidating them within a unified management platform. Machine learning (ML) and AI are employed to analyse data originating from enterprise networks, telecom Points of Presence (PoPs), IoT devices, cloud service providers, and user experience monitoring. These technologies enable the early identification of network issues before they reach critical levels. Intelligent networks can suggest strategies to enhance network resilience, forecast how modifications may impact performance, and are increasingly capable of autonomous responses to evolving conditions.
Here are some critical ways that AI/ML can help build resilient networks.
- Alert Noise Reduction. Network operations centres face thousands of alerts each day. As a result, operators battle alert fatigue and struggle to identify critical issues. Through the application of ML, contemporary monitoring tools can mitigate false positives, group interconnected alerts, and assist operators in prioritising the most pressing concerns. An operations team augmented with AI capabilities could potentially de-prioritise up to 90% of alerts, allowing a concentrated focus on the factors that impact network performance and resilience.
- Data Lakes. Networking vendors are building proprietary data lakes from the telemetry generated by the infrastructure they have deployed at customer sites. This vast volume of data allows them to use ML to create a tailored baseline for each customer and to recommend actions to optimise the environment.
- Root Cause Analysis. To assist network operators in diagnosing an issue, AIOps can sift through thousands of data points and correlate them to identify a root cause. Through the integration of alerts with change feeds, operators can understand the underlying causes of network problems or outages. By using ML to understand the customer’s unique environment, AIOps can progressively accelerate time to resolution.
- Proactive Response. As management layers become capable of recommending corrective action, proactive response also becomes possible, leading to self-healing networks. With early identification of sub-optimal conditions, intelligent systems can conduct load balancing, redirect traffic to higher performing SaaS regions, auto-scale cloud instances, or terminate selected connections.
- Device Profiling. In a BYOD environment, network managers require enhanced visibility to discover devices and enforce appropriate policies on them. Automated profiling against a validated database ensures guest access can be granted without adding friction to the onboarding process. With deep packet inspection, devices can be precisely classified based on behaviour patterns.
- Dynamic Bandwidth Aggregation. A key feature of an SD-WAN is that it can incorporate diverse transport types, such as fibre, 5G, and low earth orbit (LEO) satellite connectivity. Rather than using a simple primary and redundant architecture, bandwidth aggregation allows all circuits to be used simultaneously. By infusing intelligence into the SD-WAN layer, path selection can dynamically prioritise traffic by directing it over higher-quality links or across multiple links at once (a simple scoring sketch follows this list). This approach helps maintain performance even in the face of network degradation.
- Generative AI for Process Efficiency. Every tech company is trying to understand how it can leverage the power of Generative AI, and networking providers are no different. The most immediate use case will be to improve satisfaction and scalability for level 1 and level 2 support. A Generative AI-enabled service desk could provide uninterrupted support during high-volume periods, such as network outages, or during off-peak hours.
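To illustrate the Dynamic Bandwidth Aggregation point above, here is a minimal sketch of score-based path selection across aggregated links. The link metrics, per-class weights, and traffic classes are illustrative assumptions, not any vendor's algorithm.

```python
# Minimal sketch: score-based path selection across aggregated SD-WAN links.
# Link metrics, weights, and traffic classes are illustrative assumptions only.

LINKS = {
    "fibre": {"latency_ms": 12, "jitter_ms": 2, "loss_pct": 0.1},
    "5g":    {"latency_ms": 35, "jitter_ms": 8, "loss_pct": 0.5},
    "leo":   {"latency_ms": 55, "jitter_ms": 15, "loss_pct": 1.2},
}

# Hypothetical per-class weights: real-time traffic cares most about jitter and loss.
CLASS_WEIGHTS = {
    "video_call":  {"latency_ms": 1.0, "jitter_ms": 3.0, "loss_pct": 5.0},
    "bulk_backup": {"latency_ms": 0.2, "jitter_ms": 0.1, "loss_pct": 1.0},
}

def score(link_metrics, weights):
    """Lower score = better path for this traffic class."""
    return sum(weights[m] * link_metrics[m] for m in weights)

def best_path(traffic_class):
    weights = CLASS_WEIGHTS[traffic_class]
    return min(LINKS, key=lambda name: score(LINKS[name], weights))

if __name__ == "__main__":
    for cls in CLASS_WEIGHTS:
        print(cls, "->", best_path(cls))
```

In a real SD-WAN, the metrics would be measured continuously on each circuit and the decision re-evaluated per flow or per packet, which is what allows traffic to shift as link quality degrades.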
Initiating an AI-Driven Network Management Journey
Network managers who take advantage of AI can build highly resilient networks that maximise uptime, deliver consistently high performance, and remain secure. Some important considerations when getting started include:
- Data Catalogue. Take stock of the data sources that are available to you, whether they come from network equipment telemetry, applications, or the data lake of a managed services provider. Understand how they can be integrated into an AIOps solution.
- Start Small. Begin with a pilot in an area where good data sources are available. This will help you assess the impact that AI could have on reducing alerts, improving mean time to repair (MTTR), increasing uptime, or addressing the skills gap.
- Develop an SD-WAN/SASE Roadmap. Many advanced AI benefits are built into SD-WAN and SASE offerings. Most organisations have already adopted SD-WAN or will soon; begin assessing the SASE framework now to decide whether it is suitable for your organisation.

In this Insight, guest author Anirban Mukherjee lists the key challenges of AI adoption in traditional organisations – and how best to mitigate them. “I am by no means suggesting that traditional companies avoid or delay adopting AI. That would be akin to asking a factory to keep using only steam as power, even as electrification came in during early 20th century! But organisations need to have a pragmatic strategy around what will undoubtedly be a big, but necessary, transition.”

After years of evangelising digital adoption, I have a more nuanced stance today – supporting a prudent strategy, especially where the organisation’s internal capabilities and technology maturity are in question. I still see many traditional organisations burning budgets on AI adoption programs with low success rates, simply because of poor choices driven by misplaced expectations. Without going into the obvious reasons for over-exuberance (media hype, mis-selling, FOMO, irrational valuations – the list goes on), here are a few patterns that can be detected in the organisations that have succeeded in getting value – and gloriously so!
Data-driven decision-making is a cultural change. Most traditional organisations have a point person or role accountable for any important decision – someone whose “neck is on the line”. For these organisations, switching to trusting AI decisions (with their characteristic opacity and the stochastic nature of their recommendations) is often a leap too far.
Work on your change management but, more crucially, strategically choose the business/process decision points (aka use cases) that can acceptably be AI-enabled.
Technical choices in ML modelling need business judgement too. The more flexible non-linear models that increase prediction accuracy invariably suffer from lower interpretability – and may be a poor choice in many business contexts. Depending on business data volumes and accuracy requirements, model bias-variance trade-offs need to be made. Assessing model accuracy and its thresholds (false-positive/false-negative trade-offs) is similarly nuanced – a small sketch of this follows the next paragraph. All this implies that the organisation’s domain knowledge needs to merge well with data science design. A pragmatic approach is to not try to be cutting-edge.
Look to use proven foundational models and platforms – such as those for NLP or visual analytics – for first use cases. Also note that not every problem needs AI; a lot can be solved through traditional programming (“if-then automation”) and should be. The dirty secret of the industry is that much of the power of products marketed as “AI-powered” is, under the hood, mostly traditional logic!
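As a concrete illustration of the accuracy-threshold point above, here is a minimal sketch that picks a classification threshold by minimising expected business cost rather than maximising raw accuracy. The scores and cost figures are simulated and hypothetical.

```python
# Minimal sketch: choose a classification threshold by expected business cost,
# not raw accuracy. Scores and costs below are simulated/hypothetical.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=5000)                       # 1 = e.g. fraudulent case
scores = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, 5000), 0, 1)

COST_FALSE_POSITIVE = 5       # e.g. manually reviewing a legitimate transaction
COST_FALSE_NEGATIVE = 200     # e.g. missing a fraudulent one

def expected_cost(threshold):
    pred = scores >= threshold
    fp = np.sum(pred & (y_true == 0))
    fn = np.sum(~pred & (y_true == 1))
    return fp * COST_FALSE_POSITIVE + fn * COST_FALSE_NEGATIVE

thresholds = np.linspace(0.05, 0.95, 19)
best = min(thresholds, key=expected_cost)
print(f"lowest-cost threshold: {best:.2f}, expected cost: {expected_cost(best)}")
```

The point is that the “right” threshold depends on which error is more expensive for the business – which is exactly where domain knowledge has to meet data science design.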
In getting results from AI, “better data trumps better models” more often than not. Practically, this means that organisations need to spend more on data engineering effort than on data science effort. The CDO/CIO organisation needs to build the right balance of data competencies and tools.
Get the data readiness programs started – yesterday! While data scientists often focus on training an AI model, deploying the trained model online is a whole other level of technical challenge (particularly when it comes to IT-OT and real-time integrations).
It takes time to adopt AI in traditional organisations. Building up training data and model accuracy is a slow process. Organisational changes take time – and then you have to add considerations such as data standardisation; hygiene and integration programs; and the new attention required to build capabilities in AIOps, AI adoption and governance.
Typically plan for 3 years – monitor progress and steer every 6 months. Be ready to kill “zombie” projects along the way. Train the executive team – not to code, but to understand the technology’s capabilities and limitations. This will ensure better informed buyers/consumers and help drive adoption within the organisation.
I am by no means suggesting that traditional companies avoid or delay adopting AI. That would be akin to asking a factory to keep using only steam as power, even as electrification came in during early 20th century! But organisations need to have a pragmatic strategy around what will undoubtedly be a big, but necessary, transition.
These opinions are personal (and may change with time), but definitely informed through a decade of involvement in such journeys. It is not too early for any organisation to start – results are beginning to show for those who started earlier, and we know what they got right (and wrong).
I would love to hear your views, or even engage with you on your journey!
The views and opinions mentioned in the article are personal.
Anirban Mukherjee has more than 25 years of experience in operations excellence and technology consulting across the globe, having led transformations in Energy, Engineering, and Automotive majors. Over the last decade, he has focused on Smart Manufacturing/Industry 4.0 solutions that integrate cutting-edge digital into existing operations.

When non-organic (man-made) fabric was introduced into fashion, there were a number of harsh warnings about using polyester and man-made synthetic fibres, including their flammability.
In creating non-organic data sets, should we also be attaching warnings about their use and flammability? Let’s look at why synthetic data is used in industries such as Financial Services and Automotive, as well as for new product development in Manufacturing.
Synthetic Data Defined
Synthetic data can be defined as data that is artificially generated rather than produced by actual interactions. It is often created with the help of algorithms and is used for a wide range of activities, including as test data for new products and tools, for model validation, and in AI model training. Synthetic data is a form of data augmentation, which involves creating new, representative data.
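As a minimal illustration of the idea (not a production-grade generator), the sketch below fits a simple multivariate Gaussian to a small simulated “real” dataset and samples synthetic rows that preserve its overall statistics. The columns and values are invented for the example; real-world generators (GANs, copulas, simulators) are considerably more sophisticated.

```python
# Minimal sketch: generate synthetic tabular data by fitting a multivariate
# Gaussian to real numeric data and sampling from it. Columns are invented.
import numpy as np

rng = np.random.default_rng(42)

# Pretend this is real customer data: [age, monthly_spend, tenure_months]
real = np.column_stack([
    rng.normal(40, 10, 1000),
    rng.normal(250, 60, 1000),
    rng.normal(36, 12, 1000),
])

mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Synthetic rows share the means and correlation structure of the real data,
# but no row maps back to an individual real customer.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("real means:     ", np.round(mean, 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
```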
Why is it used?
The main reasons why synthetic data is used instead of real data are cost, privacy, and testing. Let’s look at more specifics on this:
- Data privacy. When privacy requirements limit data availability or how it can be used. For example, in Financial Services where restrictions around data usage and customer privacy are particularly limiting, companies are starting to use synthetic data to help them identify and eliminate bias in how they treat customers – without contravening data privacy regulations.
- Data availability. When the data needed for testing a product does not exist or is not available to the testers. This is often the case for new releases.
- Data for training. When training data is needed for machine learning algorithms but, as in the case of autonomous vehicles, is expensive to generate in real life.
- Training across third parties using cloud. When moving private data to cloud infrastructures involves security and compliance risks. Moving synthetic versions of sensitive data to the cloud can enable organisations to share data sets with third parties for training across cloud infrastructures.
- Data cost. Producing synthetic data through a generative model is significantly more cost-effective and efficient than collecting real-world data. With synthetic data, it becomes cheaper and faster to produce new data once the generative model is set up.

Why should it cause concern?
If the real dataset contains biases, data augmented from it will contain those biases too. So identifying an optimal data augmentation strategy is important.
If the synthetic set doesn’t truly represent the original customer data set, it might contain the wrong buying signals regarding what customers are interested in or are inclined to buy.
Synthetic data also requires some form of output/quality control and internal regulation, specifically in highly regulated industries such as Financial Services.
Creating incorrect synthetic data also can get a company in hot water with external regulators. For example, if a company created a product that harmed someone or didn’t work as advertised, it could lead to substantial financial penalties and, possibly, closer scrutiny in the future.
Conclusion
Synthetic data allows us to continue developing new and innovative products and solutions when the data necessary to do so would not otherwise be present or available due to volume, data sensitivity, or user privacy challenges. Generating synthetic data also brings the flexibility to adjust its nature and environment as and when required, both to improve the performance of the model and to create opportunities to check for outliers and extreme conditions.
The rollout of 5G combined with edge computing in remote locations will change the way maintenance is carried out in the field. Historically, service teams performed maintenance either in a reactive fashion – fixing equipment when it broke – or using a preventative calendar-based approach. Neither of these methods is satisfactory, with the former being too late and resulting in failure while the latter is necessarily too early, resulting in excessive expenditure and downtime. The availability of connected sensors has allowed service teams to shift to condition monitoring without the need for taking equipment offline for inspections. The advent of analytics takes this approach further and has given us optimised scheduling in the form of predictive maintenance.
The next step is prescriptive maintenance, in which AI can recommend action based on current and predicted condition, according to expected usage or environmental circumstances. This could range from simply alerting an operator to automatically ordering parts and scheduling multiple servicing tasks, depending on forecasted production needs in the short term.
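The sketch below illustrates that escalation logic with a toy prescriptive rule: it combines a hypothetical predicted remaining useful life (RUL) with a short-term production forecast to decide between monitoring, alerting, and automatically ordering parts. The thresholds, inputs, and actions are assumptions for illustration only.

```python
# Minimal sketch: a toy prescriptive-maintenance rule combining predicted
# remaining useful life (RUL) with a short-term production forecast.
# Thresholds, inputs, and actions are illustrative assumptions.

def prescribe(predicted_rul_hours: float, forecast_utilisation: float) -> list[str]:
    actions = []
    if predicted_rul_hours < 72:
        actions.append("auto-order replacement parts")
        actions.append("schedule service in next low-utilisation window")
    elif predicted_rul_hours < 240 and forecast_utilisation > 0.85:
        # High expected load will accelerate wear: act earlier than usual.
        actions.append("alert operator and pre-book maintenance crew")
    else:
        actions.append("continue condition monitoring")
    return actions

if __name__ == "__main__":
    print(prescribe(predicted_rul_hours=60, forecast_utilisation=0.9))
    print(prescribe(predicted_rul_hours=200, forecast_utilisation=0.95))
    print(prescribe(predicted_rul_hours=500, forecast_utilisation=0.4))
```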

Prescriptive maintenance has only become possible with the advancement of AI and digital twin technology, but imminent improvements in connectivity and computing will take servicing to a new level. The rollout of 5G will boost bandwidth, reduce latency, and increase the number of possible connections. Equipment in remote locations – such as transmission lines or machinery in resource industries – will benefit from the higher throughput of 5G connectivity, either as part of an operator’s network rollout or a private on-site deployment. Mobile machinery, particularly vehicles, which can include hundreds of sensors, will no longer need to wait until arrival before their condition can be assessed. Furthermore, vehicles equipped with external sensors can inspect stationary infrastructure as they pass by.
Edge computing – whether carried out by miniature onboard devices or at smaller-scale data centres embedded in 5G networks – ensures that intensive processing can be carried out closer to the equipment than in a typical cloud environment. Bandwidth-hungry applications, such as video and time-series analysis, can be run locally, with only metadata transmitted immediately and full archives uploaded with less urgency.
Prescriptive Maintenance with 5G and the Edge – Use Cases
- Transportation. Bridges built over railway lines equipped with high-speed cameras can monitor passing trains to inspect for damage. Data-intensive video analysis can be conducted on local devices for a rapid response while selected raw data can be uploaded to the cloud over 5G to improve inference models.
- Mining. Private 5G networks built in remote sites can provide connectivity between fixed equipment, vehicles, drones, robotic dogs, workers, and remote operations centres. Autonomous haulage trucks can be monitored remotely, and in the event of a breakdown, other vehicles can be automatically redirected to prevent dumping queues.
- Utilities. Emergency maintenance needs can be prioritised before extreme weather events based on meteorological forecasts and their impact on ageing parts. Machine learning can be used to understand location-specific effects of, for example, salt content in off-shore wind turbine cables. Early detection of turbine rotor cracks can recommend shutdown during high-load periods.

Data as an Asset
Effective prescriptive maintenance only becomes possible after the accumulation and integration of multiple data sources over an extended period. Inference models should understand both normal and abnormal equipment performance in various conditions, such as extreme weather, during incorrect operation, or when adjacent parts are degraded. For many smaller organisations or those deploying new equipment, the necessary volume of data will not be available without the assistance of equipment manufacturers. Moreover, even manufacturers will not have sufficient data on interaction with complementary equipment. This provides an opportunity for large operators to sell their own inference models as a new revenue stream. For example, an electrical grid operator in North America can partner with a similar, but smaller organisation in Europe to provide operational data and maintenance recommendations. Similarly, telecom providers, regional transportation providers, logistics companies, and smart cities will find industry players in other geographies that they do not naturally compete with.
Recommendations
- Employing multiple sensors. Baseline conditions and failure signatures are improved using machine learning based on feeds from multiple sensors, such as those that monitor vibration, sound, temperature, pressure, and humidity. The use of multiple sensors makes it possible to identify not only a potential failure but also its cause, and can therefore more accurately prescribe a solution to prevent an outage (see the sketch after this list).
- Data assessment and integration. Prescriptive maintenance is most effective when multiple data sources are unified as inputs. Identify the location of these sources, such as ERP systems, time series on site, environmental data provided externally, or even in emails or on paper. A data fabric should be considered to ensure insights can be extracted from data no matter the environment it resides in.
- Automated action. Reduce the potential for human error or delay by automatically generating alerts and work orders for resource managers and service staff in the event of anomaly detection. Criticality measures should be adopted to help prioritise maintenance tasks and reduce alert noise.
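As a minimal sketch of the multiple-sensor recommendation above, the example below trains an unsupervised anomaly detector on combined vibration, temperature, and pressure readings using scikit-learn’s IsolationForest. The sensor values are simulated and the contamination rate is an assumption; a real deployment would use historical telemetry and tuned criticality thresholds.

```python
# Minimal sketch: unsupervised anomaly detection over multiple sensor feeds.
# Sensor values are simulated; features and contamination rate are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Simulated healthy baseline: [vibration_mm_s, temperature_c, pressure_bar]
baseline = np.column_stack([
    rng.normal(2.0, 0.3, 5000),
    rng.normal(65.0, 3.0, 5000),
    rng.normal(8.0, 0.5, 5000),
])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New readings: one normal, one with high vibration AND temperature together,
# a combined signature a single-sensor threshold might miss.
new_readings = np.array([
    [2.1, 66.0, 8.1],
    [3.4, 78.0, 8.2],
])
print(model.predict(new_readings))   # 1 = normal, -1 = anomalous
```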

Many years ago – back in 2003 – I spent some quality time with BMC at their global analyst event in Phoenix, Arizona, where they introduced the concept of “Business Service Management” (BSM). I was immediately a convert to the idea that businesses can focus their IT Service Management initiatives on the business and customer services that the technology supports. Businesses that use BSM can understand the impact and importance of technology systems and assets because there is a direct link between these assets and the services they support. A router that supports a customer payment platform suddenly becomes a much higher priority than one that supports an employee expense platform.
But for most businesses, this promise was never delivered. Creating a BSM solution became a highly manual process of mapping processes, assets, and applications. Many businesses that undertook this challenge reported that by the time they had mapped their processes, the map was out of date – processes had changed; assets had been retired, replaced, or upgraded; software had moved to the cloud or new modules had been implemented; and architectures had changed. Effectively, their BSM mapping was often a pointless task – sometimes only delivering value in slow-to-change systems: back-end applications and infrastructure that deliver limited value and have a defined retirement date.
The Growth of Digital Business Strategies
Our technology systems are becoming more important than ever as digital business strategies are realised and digital interactions with customers, employees, and partners significantly increase. Many businesses expect their digital investments to remain strong well into 2022 (Figure 1). More than ever, we need to understand the link between our tech systems and the business and customer services they support.

I recently had the opportunity to attend a briefing by ServiceNow regarding their new “AI-Powered Service Operations” that highlighted their service-aware CMDB – adding machine learning to their service mapping capabilities. The upgraded offering has the ability to map entire environments in hours or minutes – not months or weeks. And as a machine learning capability, it is only likely to get smarter – to learn from their customers’ use of the service and begin to recognise what applications, systems, and infrastructure are likely to be supporting each business service.
This heralds a new era in service management – one where the actual business and customer impact of outages is known immediately; where the decision to delay an upgrade or fix to a known problem can be made with a full understanding of the impacts. At one of my previous employers, email went down for about a week. It was finally attributed to an upgrade to network equipment that sat between the email system and the corporate network and the internet. The tech teams were scratching their heads for days as there was no documented link between this piece of hardware and the email system. The impact of the outage was certainly felt by the business – but had it happened at the end of the financial year, it could have impacted perhaps 10-20% of the business bookings as many deals came in at that time.
Being able to understand the link between infrastructure, cloud services, applications, databases, middleware and business processes and services is of huge value to every business – particularly as the percentage of business through digital channels and touchpoints continues to accelerate.
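To make that dependency idea concrete, here is a minimal sketch (using the networkx library) that models a service map as a directed graph, where the business impact of a component outage is everything downstream of it. The components and services named are invented for illustration and are not ServiceNow’s data model.

```python
# Minimal sketch: model a service map as a directed graph and ask which business
# services are impacted when a component fails. Names are invented examples.
import networkx as nx

G = nx.DiGraph()
# Edge A -> B means "B depends on A".
G.add_edge("core-router-01", "payment-platform")
G.add_edge("core-router-01", "expense-platform")
G.add_edge("network-gateway-02", "email-service")
G.add_edge("payment-platform", "customer-checkout")   # customer-facing service

def impacted_services(failed_component: str):
    """Everything downstream of the failed component is potentially impacted."""
    return sorted(nx.descendants(G, failed_component))

print(impacted_services("core-router-01"))
# ['customer-checkout', 'expense-platform', 'payment-platform']
print(impacted_services("network-gateway-02"))
# ['email-service']
```

With a map like this kept up to date automatically, the relative priority of the router supporting the payment platform versus the one supporting the expense platform falls straight out of the graph.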

BHP – the multinational mining giant – has signed agreements with AWS and Microsoft Azure as their long-term cloud providers to support their digital transformation journey. This move is expected to accelerate BHP’s cloud journey, helping them deploy and scale their digital operations to the workforce quickly while reducing the need for on-premises infrastructure.
Ecosystm research has consistently shown that many large organisations are using the learnings from how the COVID-19 pandemic impacted their business to re-evaluate their Digital Transformation strategy – leveraging next generation cloud, machine learning and data analytics capabilities.
BHP’s Dual Cloud Strategy
BHP is set to use AWS’s analytics, machine learning, storage, and compute platform to deploy digital services and improve operational performance. They will also launch an AWS Cloud Academy Program to train and upskill their employees in AWS cloud skills – joining other Australian companies, such as National Australia Bank, Telstra, and Kmart Group, that support their digital workforce by forming cloud guilds.
Meanwhile, BHP will use Microsoft’s Azure cloud platform to host their global applications portfolio including SAP S/4 HANA environment. This is expected to enable BHP to reduce their reliance on regional data centres and leverage Microsoft’s cloud environment, licenses and SAP applications. The deal extends their existing relationship with Microsoft where BHP is using Office 365, Dynamics 365 and HoloLens 2 platforms to support their productivity and remote operations.
Ecosystm Principal Advisor Alan Hesketh says, “This dual sourcing is likely to achieve cost benefits for BHP from a competitive negotiation standpoint, and positions BHP well to negotiate further improvements in the future. With their scale, BHP has negotiating power that most cloud service customers cannot achieve – although an effective competitive process is likely to offer tech buyers some improvements in pricing.”

Can this Strategy Work for You?
Hesketh thinks that the split between Microsoft for Operations and AWS for Analytics will provide some interesting challenges for BHP. “It is likely that high volumes of data will need to be moved between the two platforms, particularly from Operations to Analytics and AI. The trend is to run time-critical analytics directly from the operational systems using the power of in-memory databases and the scalable cloud platform.”
“As BHP states, using the cloud reduces the need to put hardware on-premises, and allows the faster deployment of digital innovations from these cloud platforms. While achieving technical and cost improvements in their Operations and Analytics domains, it may compromise the user experience (UX). The UX delivered by the two clouds is quite different – so delivering an integrated experience is likely to require an additional layer that is capable of delivering a consistent UX. BHP already has a strong network infrastructure in place, so they are likely to achieve this within their existing platforms. If there is a need to build this UX layer, it is likely to reduce the speed of deployment that BHP is targeting with the dual cloud procurement approach.”
Many businesses that have previously preferred a single cloud vendor will increasingly evaluate multiple cloud environments in the future. The adoption of modern development environments and architectures – such as containers, microservices, open source, and DevOps – will help them run their applications and processes on the most suitable cloud option.
While this strategy may well work for BHP, Hesketh adds, “Tech buyers considering a hybrid approach to cloud deployment need to have robust enterprise and technology architectures in place to make sure the users get the experience they need to support their roles.”

In this Insight, our guest author Anupam Verma talks about how the Global Capability Centres (GCCs) in India are poised to become Global Transformation Centres. “In the post-COVID world, industry boundaries are blurring, and business models are being transformed for the digital age. While traditional functions of GCCs will continue to be providing efficiencies, GCCs will be ‘Digital Transformation Centres’ for global businesses.”

India has a lot to offer to the world of technology and transformation. Attracted by the talent pool, enabling policies, digital infrastructure, and competitive cost structure, MNCs have long embraced India as a preferred destination for Global Capability Centres (GCCs). It has been reported that India has more than 1,700 GCCs with an estimated global market share of over 50%.
GCCs employ around 1 million Indian professionals and have an immense impact on the economy, contributing an estimated USD 30 billion. US MNCs have the largest presence in the market, and the dominant industries are BFSI, Engineering & Manufacturing, and Tech & Consulting.
GCC capabilities have always been evolving
The journey began with MNCs setting up captives for cost optimisation & operational excellence. GCCs started handling operations (such as back-office and business support functions), IT support (such as app development and maintenance, remote IT infrastructure, and help desk) and customer service contact centres for the parent organisation.
In the second phase, MNCs started leveraging GCCs as centres of excellence (CoEs). The focus then was product innovation, Engineering Design, and R&D. BFSI and Professional Services firms started expanding the scope to cover research, underwriting, consulting, and more. Some global MNCs that have large GCCs in India are Apple, Microsoft, Google, Nissan, Ford, Qualcomm, Cisco, Wells Fargo, Bank of America, Barclays, Standard Chartered, and KPMG.
In the post-COVID world, industry boundaries are blurring, and business models are being transformed for the digital age. While traditional functions of GCCs will continue to be providing efficiencies, GCCs will be “Digital Transformation Centres” for global businesses.
The New Age GCC in the post-COVID world
On one hand, the pandemic broke through cultural barriers that had prevented remote operations and work. The world became remote everything! On the other hand, it accelerated digital adoption in organisations. Businesses are re-imagining customer experiences and fast-tracking digital transformation enabled by technology (Figure 1). High digital adoption and rising customer expectations will also be a big catalyst for change.

In the last few years, India has seen a surge in the talent pool in emerging technologies such as data analytics, experience design, AI/ML, robotic process automation, IoT, cloud, blockchain, and cybersecurity. GCCs in India will leverage this talent pool and play a pivotal role in enabling digital transformation at a global scale. GCCs will have a direct and significant impact on global business performance and top-line growth, creating long-term stakeholder value – and will no longer be only about cost optimisation.
GCCs in India will also play an important role in digitisation and automation of existing processes, risk management and fraud prevention using data analytics and managing new risks like cybersecurity.
More and more MNCs in traditional businesses will add GCCs in India over the next decade, and the existing 1,700-plus GCCs will grow in scale and scope, focusing on innovation. The shift of supply chains to India will also be supported by Engineering R&D Centres. GCCs passed the pandemic test with flying colours when an exceptionally large workforce transitioned to the Work from Home model. Within weeks, the resilience, continuity, and efficiency of GCCs returned to pre-pandemic levels with a distributed and remote workforce.
A Final Take
Having said that, I believe the growth spurt in GCCs in India will come from new-age businesses. Consumer-facing platforms (eCommerce marketplaces, Healthtechs, Edtechs, and Fintechs) are creating digital native businesses. As of June 2021, there are more than 700 unicorns trying to solve different problems using technology and data. Currently, very few unicorns have GCCs in India (notable names being Uber, Grab, Gojek). However, this segment will be one of the biggest growth drivers.
Currently, only 10% of the GCCs in India are from Asia Pacific organisations; some of the prominent names are Hitachi, Rakuten, Panasonic, Samsung, LG, and Foxconn. Asian MNCs have an opportunity to move fast and stay relevant. This segment is also expected to grow disproportionately.
New age GCCs in India have the potential to be the crown jewel for global MNCs. For India, this has a huge potential for job creation and development of Smart City ecosystems. In this decade, growth of GCCs will be one of the core pillars of India’s journey to a USD 5 trillion economy.
The views and opinions mentioned in the article are personal.
Anupam Verma is part of the Senior Leadership team at ICICI Bank and his responsibilities have included leading the Bank’s strategy in South East Asia to play a significant role in capturing Investment, NRI remittance, and trade flows between SEA and India.

Organisations have found that it is not always desirable to send data to the cloud due to concerns about latency, connectivity, energy, privacy and security. So why not create learning processes at the Edge?
What challenges does IoT bring?
Sensors are now generating such an increasing volume of data that it is not practical for all of it to be sent to the cloud for processing. From a data privacy perspective, some sensor data is sensitive, and sending data and images to the cloud will be subject to privacy and security constraints.
Regardless of the speed of communications, there will always be a demand for more data from more sensors – along with more security checks and higher levels of encryption – causing the potential for communication bottlenecks.
As the network hardware itself consumes power, sending a constant stream of data to the cloud can be taxing for sensor devices. The lag caused by the roundtrip to the cloud can be prohibitive in applications that require real-time response inputs.
Machine learning (ML) at the Edge should be prioritised to leverage that constant flow of data and address the requirement for real-time responses based on that data. This should be aided by both new types of ML algorithms and by visual processing units (VPUs) being added to the network.
By leveraging ML on Edge networks in production facilities, for example, companies can look out for potential warning signs and schedule maintenance to avoid any nasty surprises. Remember that many sensors are linked intrinsically to public safety concerns such as water processing, the supply of gas or oil, and public transportation such as metros or trains.
Ecosystm research shows that deploying IoT has its set of challenges (Figure 1) – many of these challenges can be mitigated by processing data at the Edge.

Predictive analytics is a fundamental value proposition for IoT, where responding faster to issues or taking action before issues occur, is key to a high return on investment. So, using edge computing for machine learning located within or close to the point of data gathering can in some cases be a more practical or socially beneficial approach.
In IoT, the role of an edge computer is to pre-process data and act before the data is passed on to the main server. This allows a faster, low-latency response and minimal traffic between the Edge and cloud server processing. However, organisations need a better understanding of the benefits of edge computing if it is to deliver on a range of outcomes.
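A minimal sketch of that pre-process-and-act pattern is shown below: the edge device evaluates each reading locally, acts immediately on anomalies, and forwards only compact metadata to the cloud rather than the raw stream. The thresholds and payloads are illustrative assumptions.

```python
# Minimal sketch: edge pre-processing. Act locally on each reading and send
# only compact metadata upstream, not the raw stream. Thresholds are assumptions.
import json
import random
import statistics
from collections import deque

WINDOW = deque(maxlen=100)   # rolling window of recent readings
_count = 0

def actuate_locally():
    # Placeholder for a local, low-latency response (e.g. raise a local alarm).
    print("local action taken")

def handle_reading(value: float):
    """Return a small JSON payload for the cloud, or None to send nothing."""
    global _count
    _count += 1
    WINDOW.append(value)
    if len(WINDOW) < 30:
        return None                              # not enough history yet
    mean = statistics.mean(WINDOW)
    stdev = statistics.pstdev(WINDOW) or 1e-9
    zscore = (value - mean) / stdev
    if abs(zscore) > 3:
        actuate_locally()
        return json.dumps({"event": "anomaly", "value": value, "z": round(zscore, 2)})
    if _count % 50 == 0:                         # periodic summary, not raw data
        return json.dumps({"event": "summary", "mean": round(mean, 2)})
    return None

if __name__ == "__main__":
    random.seed(1)
    for i in range(200):
        reading = random.gauss(10, 1) if i != 150 else 25.0   # inject one spike
        payload = handle_reading(reading)
        if payload:
            print(payload)
```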


If we can get machine learning happening in the field, at the Edge, then we reduce the time lag and also create an extra trusted layer in unmanned production or automated utilities situations. This can create more trusted environments in terms of possible threats to public services.
What kind of examples of machine learning in the field can we see?
Healthcare
Health systems can improve hospital patient flow through machine learning (ML) at the Edge. ML offers predictive models to assist decision-makers with complex hospital patient flow information based on near real-time data.
For example, an academic medical centre created an ML pipeline that leveraged all its data – patient administration, EHR, clinical, and claims data – to create learnings that could predict length of stay, emergency department (ED) arrivals, ED admissions, aggregate discharges, and total bed census. These predictive models proved effective as the medical centre reduced patient wait times and staff overtime and was able to demonstrate improved patient outcomes. And for a medical centre that uses sensors to monitor patients and gather requests for medicine or assistance, Edge processing means keeping private healthcare data in-house rather than sending it off to cloud servers.
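A minimal sketch of the length-of-stay idea, assuming a simple simulated tabular feature set (the columns, values, and model choice are illustrative, not the medical centre’s actual pipeline):

```python
# Minimal sketch: predict hospital length of stay from simulated admission data.
# Features, values, and model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000

# Simulated features: [age, prior_admissions, emergency_admission (0/1)]
X = np.column_stack([
    rng.integers(18, 90, n),
    rng.poisson(1.5, n),
    rng.integers(0, 2, n),
])
# Simulated target: length of stay in days, loosely driven by the features
y = 2 + 0.03 * X[:, 0] + 0.8 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("MAE (days):", round(mean_absolute_error(y_test, model.predict(X_test)), 2))
```

Running this kind of model on near real-time feeds at the Edge is what keeps predictions current without moving sensitive records off-site.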
Retail
A retail store could use numerous cameras for self-checkout and inventory management and to monitor foot traffic. Such specific interaction details could slow down a network and can be replaced by an on-site Edge server with lower latency and a lower total cost. This is useful for standalone grocery pop-up sites such as in Sweden and Germany.
In Retail, k-nearest neighbours is often used for abnormal activity analysis – the same learning algorithm can also be applied to visual pattern recognition as part of retailers’ loss prevention tactics.
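A minimal sketch of the k-nearest-neighbours idea for abnormal activity analysis, using scikit-learn’s NearestNeighbors to flag checkout events whose distance to known-normal behaviour is unusually large. The features and threshold are invented for illustration; in practice the features would be standardised and the threshold tuned against labelled incidents.

```python
# Minimal sketch: k-nearest-neighbour abnormal-activity scoring.
# Features (basket value, item count, seconds at self-checkout) and the
# threshold are invented for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(11)

# Historical "normal" checkout events: [basket_value, item_count, seconds_taken]
normal = np.column_stack([
    rng.normal(45, 15, 3000),
    rng.normal(12, 4, 3000),
    rng.normal(90, 25, 3000),
])

knn = NearestNeighbors(n_neighbors=5).fit(normal)

def abnormality_score(event):
    """Mean distance to the 5 most similar historical events."""
    distances, _ = knn.kneighbors([event])
    return float(distances.mean())

typical = [50, 13, 95]
suspicious = [400, 2, 15]        # very high value, few items, very fast
THRESHOLD = 60                   # hypothetical, tuned on historical data

for event in (typical, suspicious):
    score = abnormality_score(event)
    print(event, round(score, 1), "FLAG" if score > THRESHOLD else "ok")
```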
Summary
Working with data locally at the Edge reduces latency, cloud usage and costs, and dependence on a network connection, while keeping data more secure and increasing data privacy.
Cloud and Edge computing that use machine learning can together provide the best of both worlds: decentralised local storage, processing, and reaction, followed by uploading to the cloud, enabling additional insights, data backups (redundancy), and remote access.

Last week I wrote about the need to remove hype from reality when it comes to AI. But what will ensure that your AI projects succeed?
It is quite obvious that success is determined by human aspects rather than technological factors. We have identified four key organisational actions that enable successful AI implementation at scale (Figure 1).

#1 Establish a Data Culture
The traditional focus for companies has been on ensuring access to good, clean data sets and the proper use of that data. Ecosystm research shows that only 28% of organisations focused on customer service also focus on creating a data-driven organisational culture. But our experience has shown that culture is more critical than having the data. Does the organisation have a culture of using data to drive decisions? Does every level of the organisation understand and use data insights to do their day-to-day jobs? Is decision-making data-driven and decentralised, escalated only when there is ambiguity or a need for strategic clarity? Do business teams push for new data sources when they are not able to get the insights they need?
Without this kind of culture, it may be possible to implement individual pieces of automation in a specific area or process, applying brute force to see it through. But to transform the business and truly extract the power of AI, we advise organisations to build a culture of data-driven decision-making first. That organisational mindset will make you capable of implementing AI at scale. Focusing on changing the organisational culture will deliver greater returns than trying to implement piecemeal AI projects – even in the short to mid-term.
#2 Ingrain a Digital-First Mindset
Assuming a firm has passed the data culture hurdle, it needs to consider whether it has adopted a digital-first mindset. AI is one of many technologies that impact businesses, along with AR/VR, IoT, 5G, cloud and Blockchain to name a few. Today’s environment requires firms to be capable of utilising a variety of these technologies – often together – and possessing a workforce capable of using these digital tools.
A workforce with the digital-first mindset looks for a digital solution to problems wherever appropriate. They have a good understanding of digital technologies relevant to their space and understand key digital methodologies – such as Customer 360 to deliver a truly superior customer experience or Agile methodologies to successfully manage AI at scale.
AI needs business managers at the operational level to work with IT or AI tech teams to pinpoint the processes that are right for AI. They need to assess, based on historical data, which specific problems require an AI solution. This is enabled by the digital-first mindset.
#3 Demystify AI
The next step is to get business leaders, functional leaders, and business operational teams – not just those who work with AI – to acquire a basic understanding of AI.
They do not need to learn the intricacies of programming or how to create neural networks or anything nearly as technical. However, everyone from the leadership down should have a solid understanding of what AI can do, the basics of how it works, how training on data results in improved outcomes, and so on. They need to understand the continuous learning nature of AI solutions, which get better over time. While AI tools may recommend an answer, human insight is often needed to make the right decision based on that recommendation.

#4 Drive Implementation Bottom-Up
AI projects need alignment, objectives, strategy – and leadership and executive buy-in. But a very important aspect of an AI-driven organisation that is able to build scalable AI is letting projects run bottom-up.
As an example, a reputed Life Sciences company embarked on a multi-year AI project to improve productivity. It wanted to use NLP, Discovery, Cognitive Assist, and ML to augment the clinical proficiency of doctors, and expected significant benefits in drug discovery and clinical trials by leveraging the immense dataset built over the previous 20 years.
The company ran this like any other transformation project, with a central program management team taking the lead with the help of an AI Centre of Competency. These two teams developed a compelling business case and identified initial pilots aligned with the long-term objectives of the program. However, after 18 months, they had very few tangible outcomes. Everyone who participated in the program – doctors, research scientists, technicians, and administrators – had their own interpretation of what AI was not able to do.
Discussion revealed that the doctors and researchers felt they were training AI to replace themselves. Seeing a tool trying to mimic their own access to and understanding of numerous documents baffled them at best. They were not ready to work with the AI programs step by step to help the tools learn and discover new insights.
At this point, we suggested approaching the project bottom-up – wherein the participating teams would decide specific projects to take up. This developed a culture where teams collaborated as well as competed with each other, to find new ways to use AI. Employees were shown a roadmap of how their jobs would be enhanced by offloading routine decisions to AI. They were shown that AI tools augment the employees’ cognitive capabilities and made them more effective.
The team working on clinical trials found these tools extremely useful and were able to collaborate with other organisations specialising in similar trials. They created the metadata and used ML algorithms to discover new insights. Working bottom-up led to a very successful AI deployment.
We have seen time and again that while leadership may set the strategy and objectives, it is best to let the teams work bottom-up to come up with the projects to implement.
#5 Invest in Upskilling
The four “keys” are important to build an AI-powered, future-proof enterprise. They are all human-related – and they come together as a winning formula when organisations invest in upskilling. Upskilling is the common glue, and each factor requires specific kinds of upskilling (Figure 2).

Upskilling needs vary by organisational level and the key being addressed. The bottom line is that upskilling is a universal requirement for driving AI at scale successfully. And many organisations are realising this fast – Bosch and DBS Bank are among the notable examples.
How much is your organisation invested in upskilling for AI implementation at scale? Share your stories in the comment box below.
Written with contributions from Ravi Pattamatta and Ratnesh Prasad
