Web3 Evolution: From Speculation to Real-World Applications


2024 was a pivotal year for cryptocurrency, driven by substantial institutional adoption. The approval and launch of spot Bitcoin and Ethereum ETFs marked a turning point, solidifying digital assets as institutional-grade. Bitcoin has evolved into a macro asset, and the ecosystem’s outlook remains robust, with signs of regulatory clarity in the US and broadening adoption. High-quality research from firms like VanEck, Messari, Pantera, Galaxy, and a16z has further strengthened my conviction.

As a “normie in web3,” my perspective comes from connecting the dots through research, not from early airdrops or token swaps. While the speculative frenzy, rug pulls, and scams at the “casino” end are off-putting, the real potential on the “computer” side of blockchains is thrilling. Events like TOKEN2049 in Dubai and Singapore highlight the ecosystem’s energy, with hundreds of side events now central to the experience.

As the web3 ecosystem evolves, new blockchains, roll-ups, and protocols vie for attention. With around 60 million unique wallets in the on-chain economy, adoption is set to expand well beyond this base. DeFi transaction volumes have surpassed USD 200B a month, yet DeFi itself remains in its early stages, with only around 10 million users.

Despite current fragmentation, the future looks promising. Themes like tokenising real-world assets, decentralised physical infrastructure, stablecoins for instant payments, and the convergence of AI and blockchain could reshape finance, identity, infrastructure, and computing. Web3 holds transformative potential, even if sceptics dismiss the space with quips about “unstable” coins and “unreal-world” assets.

The Decentralisation Paradox of Web3

Decentralisation may have been a core tenet of web3 from the outset, but in certain instances it is also seen as a constraint on scaling or improving user experience. I have always seen decentralisation as a progressive spectrum rather than a binary. It is, however, a difficult north star to maintain, as scaling becomes a genuine human coordination challenge.

In Blockchains. We have seen this phenomenon manifest in the Ethereum ecosystem in particular. Of the fifty-plus roll-ups listed on L2BEAT, only Arbitrum and OP Mainnet have progressed beyond Stage 0, with many still not posting fraud proofs to L1. Some high-performance L1s and L2s have deprioritised decentralisation in favour of scaling and UX. Whether this trade-off leads to greater vulnerability or stronger product-market fit remains to be seen – most users care more about performance than about the underlying technology. In 2025, we’ll likely witness the quiet demise of as many blockchains as new ones emerge.

In Finance. On the institutional side, some aspects of high-value transactions in traditional finance (TradFi), such as custody, need trusted intermediaries to minimise counterparty risk. For web3 to scale beyond the 60-million-odd wallets that participate in the on-chain economy today, we need protocols that marry blockchains’ efficiency, composability, and programmability with the trusted identity and verifiability of regulated financial systems. While “CeDeFi”, or Centralised Decentralised Finance, might sound ironic to crypto natives, I expect far more convergence, with institutions launching tokenisation projects on public blockchains including Ethereum and Solana. I like pilots already underway, such as Chainlink’s work with SWIFT to facilitate off-chain cash settlement for tokenised funds. In 2025, some of these projects will find strong traction and scale, coupled with regulatory blessings in progressive jurisdictions.

In Infrastructure. While decentralised compute clusters for post-training and inference from the likes of io.net can lower the cost of computing for start-ups, scaling decentralised LLMs to compete with those from centralised entities like OpenAI is a tall order. New metas such as decentralised science (DeSci) are exciting because they open the possibility of fast-tracking fundamental research and drug discovery.

Looking Back at 2024: What I Found Exciting

ETFs. BlackRock’s IBIT became the fastest ETF to reach USD 3 billion in AUM, doing so within 30 days, and scaled to USD 40 billion in 200 days. The institutional landscape now goes beyond traditional ETFs, with major financial institutions expanding digital asset capabilities across custody, market access, and retail integration. These include institutional-grade custody from Standard Chartered and Nomura, market access from Goldman Sachs, and retail integration from fintechs such as Revolut.

Stablecoins. Stablecoin usage beyond trading has continued to grow at a healthy clip, emerging as a real killer use case in payments. Transaction volumes rose from USD 10T to USD 20T in a year – and yes, that is a trillion with a “t”! The current market capitalisation of stablecoins is approximately USD 201.5 billion, slated to triple in 2025, with Tether’s USDT holding over 67% market share. We might see new stablecoins launched this year, including yield-bearing designs such as Ethena’s, but I don’t expect USDT’s dominance to change.

RWAs. Stablecoins still represent 97% of real-world assets on-chain, and the dollar value of other tokenised asset types remains insignificant. Yet the potential market for asset tokenisation is a staggering USD 1.4T. Even with regulatory clarity, if non-stablecoin RWAs on-chain were to quadruple, the resulting ~USD 50B would still be a sliver of the overall opportunity. We can expect more projects in asset classes such as private credit – rwa.xyz is a great dashboard to watch this space.
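
To make the back-of-the-envelope maths concrete, here is a minimal sketch of the quadrupling scenario; the ~USD 12.5B starting figure for non-stablecoin RWAs is an assumption implied by the numbers above rather than a quoted data point.

```python
# Back-of-the-envelope sizing of the tokenised RWA opportunity.
# Assumption: roughly USD 12.5B of non-stablecoin RWAs on-chain today,
# the baseline implied by "quadruple -> ~USD 50B" above.
current_rwa_b = 12.5              # USD billions, non-stablecoin RWAs on-chain
quadrupled_b = current_rwa_b * 4  # the "quadruple" scenario
opportunity_t = 1.4               # USD trillions, potential tokenisation market

share = quadrupled_b / (opportunity_t * 1000)
print(f"Quadrupled on-chain RWAs: ~USD {quadrupled_b:.0f}B")
print(f"Share of the USD {opportunity_t}T opportunity: {share:.1%}")  # ~3.6%
```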

DePIN. Decentralised physical infrastructure networks across wireless, energy, compute, sensors, identity, and logistics reached a USD 50B market cap and USD 500M in ARR. Key developments include the emergence of AI as a major driver of DePIN adoption, the maturation of supply-side growth playbooks, and a shift in focus toward demand-side monetisation. More than 13 million devices globally contribute to DePINs daily, demonstrating successful supply-side scaling. Notable projects include:

  • Helium Mobile: Adding 100k+ subscribers and diversifying revenue streams.
  • AI Integration: Bittensor leading decentralised AI with successful subnets.
  • Energy DePINs: Glow and Daylight addressing challenges in distributed energy systems.
  • Identity Verification: World (formerly Worldcoin) achieving 20 million verified identities.

These trends indicate significant advancements in the web3 ecosystem. The continued evolution of blockchain technologies – and of their applications in finance, infrastructure, and beyond – holds immense promise for 2025.

In my next Ecosystm Insights, I’ll present the trends in 2025 that I am excited about. Watch this space!

AI’s Unintended Consequences: The Automation Paradox


Automation and AI hold immense promise for accelerating productivity, reducing errors, and streamlining tasks across virtually every industry. From manufacturing plants that operate robotic arms to software-driven solutions that analyse millions of data points in seconds, these technological advancements are revolutionising how we work. However, AI has already led to, and will continue to bring about, many unintended consequences.

One that has been discussed for nearly a decade but is starting to impact employees and brand experiences is the “automation paradox”. As AI and automation take on more routine tasks, employees find themselves tackling the complex exceptions and making high-stakes decisions.

What is the Automation Paradox?

1. The Shifting Burden from Low- to High-Value Tasks

When AI systems handle mundane or repetitive tasks, human employees can direct their efforts toward higher-value activities. At first glance, this shift seems purely beneficial. AI filters out extraneous work, enabling humans to focus on the tasks that require creativity, empathy, or nuanced judgment. By design, however, these remaining tasks often carry greater responsibility. For instance, in a retail environment with automated checkout systems, a human staff member is more likely to deal with complex refund disputes or tense customer interactions. In a warehouse where many processes are automated by AI and robots, humans are left with the oversight of, and responsibility for, entire processes. Over time, handling primarily high-pressure situations can become mentally exhausting, contributing to job stress and potential burnout.

2. Increased Reliance on Human Judgment in Edge Cases

AI excels at pattern recognition and data processing at scale, but unusual or unprecedented scenarios can stump even the best-trained models. The human workforce is left to solve these complex, context-dependent challenges. Take self-driving cars as an example. While most day-to-day driving can be safely automated, human oversight is essential for unpredictable events – like sudden weather changes or unexpected road hazards.

Human intervention can be a critical, life-or-death matter, amplifying the pressure and stakes for those still in the loop.
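
One common way to operationalise this division of labour is confidence-based escalation: the system acts on cases it is confident about and routes everything else to a person. The sketch below is a minimal illustration of that pattern; the stub model, threshold, and review queue are hypothetical placeholders, not any specific product’s API.

```python
# Illustrative human-in-the-loop routing: automate the routine,
# escalate the uncertain edge cases to a human reviewer.

CONFIDENCE_THRESHOLD = 0.90  # hypothetical cut-off; tuned per use case in practice


class StubModel:
    """Stand-in for a trained classifier (an assumption for this sketch)."""

    def predict(self, case: dict) -> tuple[str, float]:
        # Routine cases get a confident answer; unusual ones do not.
        return ("approve", 0.97) if case.get("routine") else ("review", 0.55)


def route_case(case: dict, model: StubModel) -> dict:
    """Let the model decide when it is confident; otherwise escalate to a human."""
    label, confidence = model.predict(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"decision": label, "handled_by": "model"}
    # Low-confidence edge cases land with a person - exactly the high-stakes
    # residue the automation paradox describes.
    return {"decision": None, "handled_by": "human_review_queue", "case": case}


print(route_case({"id": 1, "routine": True}, StubModel()))
print(route_case({"id": 2, "routine": False}, StubModel()))
```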

3. The Fallibility Factor of AI

Ironically, as AI becomes more capable, humans may trust it too much. When systems make mistakes, it is the human operator who must detect and rectify them. But the further removed people are from the routine checks and balances – since “the system” seems to handle things so competently – the greater the chance that an error goes unnoticed until it has grown into a major problem. For instance, in the aviation industry, pilots who rely heavily on autopilot systems must remain vigilant for rare but critical emergency scenarios, which can be more taxing due to limited practice in handling manual controls.

Add to These the Known Challenges of AI!

Bias in Data and Algorithms. AI systems learn from historical data, which can carry societal and organisational biases. If left unchecked, these algorithms can perpetuate or even amplify unfairness. For instance, an AI-driven hiring platform trained on past decisions might favour candidates from certain backgrounds, unintentionally excluding qualified applicants from underrepresented groups.
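
A simple, widely used sanity check here is the “four-fifths rule” from employment-selection practice: compare selection rates across candidate groups and flag any group whose rate falls below 80% of the highest. A minimal sketch follows, using made-up illustrative counts rather than real hiring data.

```python
# Minimal adverse-impact check (the "four-fifths rule") on screening outcomes.
# The groups and counts below are made-up for illustration only.

outcomes = {
    # group: (candidates screened in, candidates assessed)
    "group_a": (120, 400),
    "group_b": (45, 300),
}

rates = {group: selected / total for group, (selected, total) in outcomes.items()}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, ratio to best {ratio:.2f} -> {flag}")
```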

Privacy and Data Security Concerns. The power of AI often comes from massive data collection, whether for predicting consumer trends or personalising user experiences. This accumulation of personal and sensitive information raises complex legal and ethical questions. Leaks, hacks, or improper data sharing can cause reputational damage and legal repercussions.

Skills Gap and Workforce Displacement. While AI can eliminate the need for certain manual tasks, it creates a demand for specialised skills, such as data science, machine learning operations, and AI ethics oversight. If an organisation fails to provide employees with retraining opportunities, it risks exacerbating skill gaps and losing valuable institutional knowledge.

Ethical and Social Implications. AI-driven decision-making can have profound impacts on communities. For example, a predictive policing system might inadvertently target specific neighbourhoods based on historical arrest data. When these systems lack transparency or accountability, public trust erodes, and social unrest can follow.

How Can We Mitigate the Known and Unknown Consequences of AI?

While some of the unintended consequences of AI and automation won’t be known until systems are deployed and processes are in practice, there are some basic hygiene approaches that technology leaders and their organisational peers can take to minimise these impacts.

  1. Human-Centric Design. Incorporate user feedback into AI system development. Tools should be designed to complement human skills, not overshadow them.
  2. Comprehensive Training. Provide ongoing education for employees expected to handle advanced AI or edge-case scenarios, ensuring they remain engaged and confident when high-stakes decisions arise.
  3. Robust Governance. Develop clear policies and frameworks that address bias, privacy, and security. Assign accountability to leaders who understand both technology and organisational ethics.
  4. Transparent Communication. Maintain clear channels of communication regarding what AI can and cannot do. Openness fosters trust, both internally and externally.
  5. Increase your organisational AIQ (AI Quotient). Most employees are not fully aware of the potential of AI and its opportunity to improve – or change – their roles. Conduct regular upskilling and knowledge sharing activities to improve the AIQ of your employees so they start to understand how people, plus data and technology, will drive their organisation forward.

Let me know your thoughts on the Automation Paradox, and stay tuned for my next blog on redefining employee skill pathways to tackle its challenges.

AI Stakeholders: The HR Perspective


AI has broken free from the IT department. It’s no longer a futuristic concept but a present-day reality transforming every facet of business. Departments across the enterprise are now empowered to harness AI directly, fuelling innovation and efficiency without waiting for IT’s stamp of approval. The result? A more agile, data-driven organisation where AI unlocks value and drives competitive advantage.

Ecosystm’s research over the past two years, including surveys and in-depth conversations with business and technology leaders, confirms this trend: AI is the dominant theme. And while the potential is clear, the journey is just beginning.

Here are key AI insights for HR Leaders from our research.

[Slideshow: AI Stakeholders – The HR Perspective (8 slides)]

Click here to download “AI Stakeholders: The HR Perspective” as a PDF.

HR: Leading the Charge (or Should Be)

Our research reveals a fascinating dynamic in HR. While 54% of HR leaders currently use AI for recruitment (scanning resumes, etc.), their vision extends far beyond. A striking majority plan to expand AI’s reach into crucial areas: 74% for workforce planning, 68% for talent development and training, and 62% for streamlining employee onboarding.

The impact is tangible, with organisations already seeing significant benefits. GenAI has streamlined presentation creation for bank employees, allowing them to focus on content rather than formatting and improving efficiency. Integrating GenAI into knowledge bases has simplified access to internal information, making it quicker and easier for employees to find answers. AI-driven recruitment screening is accelerating hiring in the insurance sector by analysing resumes and applications to identify top candidates efficiently. Meanwhile, AI-powered workforce management systems are transforming field worker management by optimising job assignments, enabling real-time tracking, and ensuring quick responses to changes.

The Roadblocks and the Opportunity

Despite this promising outlook, HR leaders face significant hurdles. Limited exploration of use cases, the absence of a unified organisational AI strategy, and ethical concerns are among the key barriers to wider AI deployments.

Perhaps most concerning is the limited role HR plays in shaping AI strategy. While 57% of tech and business leaders cite increased productivity as the main driver for AI investments, HR’s influence is surprisingly weak. Only 20% of HR leaders define AI use cases, manage implementation, or are involved in governance and ownership. A mere 8% primarily manage AI solutions.

This disconnect represents a massive opportunity.

2025 and Beyond: A Call to Action for HR

Despite these challenges, our research indicates HR leaders are prioritising AI for 2025. Increased productivity is the top expected outcome, while three in ten will focus on identifying better HR use cases as part of a broader data-centric approach.

The message is clear: HR needs to step up and claim its seat at the AI table. By proactively defining use cases, championing ethical considerations, and collaborating closely with tech teams, HR can transform itself into a strategic driver of AI adoption, unlocking the full potential of this transformative technology for the entire organisation. The future of HR is intelligent, and it’s time for HR leaders to embrace it.

Can We Afford AI? The Cost Debate Heats Up 


Welcome to 2025, the Year of the Snake – now enhanced, of course, with AI-powered features! While 2023 and 2024 saw a surprising global consensus on the potential risks of AI and the need for careful management (think AI legislation), the opening weeks of 2025 have thrown a new, and perhaps more pressing, concern into the spotlight: cost. 

The recent unveiling of Project Stargate sent ripples throughout the tech world, not just for its ambitious goals, but for its staggering price tag: a cool USD 500B over four years. Let that sink in. That’s roughly the equivalent of Singapore’s entire GDP in 2023. For context, that kind of money could fund the entire Apollo program and build two International Space Stations, with some spending money left over. It’s a figure that underscores the sheer scale of investment required to push the boundaries of AI. 

But then, the plot thickened. A relatively unknown Chinese company, DeepSeek, seemingly out of nowhere, launched its R1 large language model (LLM). Not only does R1 appear to be a direct competitor to OpenAI’s latest offerings, but DeepSeek also claims to have achieved this feat at a fraction of the cost, and using fewer (and potentially less powerful) GPUs. This announcement sent shockwaves through the stock market on January 27th, impacting nearly every stock associated with AI chip manufacturing. Nvidia (NVDA), a key player in the AI hardware space, suffered one of the biggest single-day losses in US stock market history, with nearly USD 600B wiped off its market capitalisation. Ironically, that’s more than Project Stargate’s entire budget plus the cost of an ISS. 

This dramatic market reaction highlights several critical trends emerging in 2025. The previously observed consensus on AI risks and legislation is already beginning to fracture (witness the recent back-and-forth on AI regulation). Meanwhile, the exorbitant cost of AI development is becoming increasingly apparent. We’re also seeing a renewed West versus (Far) East rivalry playing out in the AI arena, extending beyond just technological competition. And finally, the age-old debate between open-source and proprietary software is back, with some LLMs, like DeepSeek’s R1, leaning more towards open access than others. 

For organisations considering investing in AI, and indeed for all of us whose lives are increasingly touched by AI developments, it’s crucial to keep a close watch on these powerful trends. The risks, the investments, and the potential benefits of AI must be carefully scrutinised and potentially reassessed. The recent stock market correction suggests a necessary pushback against the over-confidence and over-spending that has characterised some areas of AI development. As DeepSeek’s R1 has shown, sometimes it doesn’t take much to disrupt the party.  

The question now is: how will the landscape shift, and who will emerge as the true leaders in this expensive, yet potentially transformative, race? 

Building the AI Future: Top 5 Infra Trends for 2025


AI is reshaping the tech infrastructure landscape, demanding a fundamental rethinking of organisational infrastructure strategies. Traditional infrastructure, once sufficient, now struggles to keep pace with the immense scale and complexity of AI workloads. To meet these demands, organisations are turning to high-performance computing (HPC) solutions, leveraging powerful GPUs and specialised accelerators to handle the computationally intensive nature of AI algorithms.

Real-time AI applications, from fraud detection to autonomous vehicles, require lightning-fast processing speeds and low latency. This is driving the adoption of high-speed networks and edge computing, enabling data processing closer to the source and reducing response times. AI-driven automation is also streamlining infrastructure management, automating tasks like network provisioning, security monitoring, and capacity planning. This not only reduces operational overhead but also improves efficiency and frees up valuable resources.

Ecosystm analysts Darian Bird, Peter Carr, Simona Dimovski, and Tim Sheedy present the key trends shaping the tech infrastructure market in 2025.

Click here to download ‘Building the AI Future: Top 5 Infra Trends for 2025’ as a PDF

1. The AI Buildout Will Accelerate; China Will Emerge as a Winner

In 2025, the race for AI dominance will intensify, with Nvidia emerging as the big winner despite an impending AI crash. Many over-invested companies will fold, flooding the market with high-quality gear at bargain prices. Meanwhile, surging demand for AI infrastructure – spanning storage, servers, GPUs, networking, and software like observability, hybrid cloud tools, and cybersecurity – will make it a strong year for the tech infrastructure sector.

Ironically, China’s exclusion from US tech deals has spurred its rise as a global tech giant. Forced to develop its own solutions, China is now exporting its technologies to friendly nations worldwide.

By 2025, Chinese chipmakers are expected to rival international peers, with some reaching parity.

2. AI-Optimised Cloud Platforms Will Dominate Infrastructure Investments

AI-optimised cloud platforms will become the go-to infrastructure for organisations, enabling seamless integration of machine learning capabilities, scalable compute power, and efficient deployment tools.

As regulatory demands grow and AI workloads become more complex, these platforms will provide localised, compliant solutions that meet data privacy laws while delivering superior performance.

This shift will allow businesses to overcome the limitations of traditional infrastructure, democratising access to high-performance AI resources and lowering entry barriers for smaller organisations. AI-optimised cloud platforms will drive operational efficiencies, foster innovation, and help businesses maintain compliance, particularly in highly regulated industries.

3. PaaS Architecture, Not Data Cleanup, Will Define AI Success

By 2025, as AI adoption reaches new heights, organisations will face an urgent need for AI-ready data, spurring significant investments in data infrastructure. However, the approach taken will be pivotal.

A stark divide will arise between businesses fixated on isolated data-cleaning initiatives and those embracing a Platform-as-a-Service (PaaS) architecture.

The former will struggle, often unintentionally creating more fragmented systems that increase complexity and cybersecurity risks. While data cleansing is important, focusing exclusively on it without a broader architectural vision leads to diminishing returns. On the other hand, organisations adopting PaaS architectures from the start will gain a distinct advantage through seamless integration, centralised data management, and large-scale automation, all critical for AI.

4. Small Language Models Will Push AI to the Edge

While LLMs have captured most of the headlines, small language models (SLMs) will soon help to drive AI use at the edge. These compact but powerful models are designed to operate efficiently on limited hardware, like AI PCs, wearables, vehicles, and robots. Their small size translates into energy efficiency, making them particularly useful in mobile applications. They also help to mitigate the alarming electricity consumption forecasts that could make widespread AI adoption unsustainable.

Self-contained SLMs can function independently of the cloud, allowing them to perform tasks that require low latency or that must run without Internet access.

Connected machines in factories, warehouses, and other industrial environments will have the benefit of AI without the burden of a continuous link to the cloud.
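
As a rough illustration of how lightweight this can be, the sketch below runs a small open-weight model entirely on a local machine using the Hugging Face transformers pipeline; the specific model ID is an assumption chosen for its size, and any comparable SLM would do.

```python
# Minimal on-device text generation with a small language model (SLM).
# Assumes `pip install transformers torch`; runs on CPU by default, so no
# cloud round-trip is involved. The model ID below is an illustrative choice only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # assumed example of a sub-1B-parameter SLM
)

prompt = "Summarise the last 24 hours of sensor alerts for the night-shift engineer:"
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])
```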

5. The Impact of AI PCs Will Remain Limited

AI PCs were a key trend in 2024, with most brands launching AI-enabled laptops. However, enterprise feedback has been tepid, as user experiences remain largely unchanged. Most AI use cases still rely on the public cloud, and applications have yet to be re-architected to fully leverage NPUs. Where optimisation exists, it mainly improves graphics efficiency rather than delivering smarter capabilities. Currently, the main benefit is extended battery life, which explains why AI features are largely absent from desktop PCs, where batteries aren’t a factor.

The market for AI PCs will grow as organisations and consumers adopt them, creating incentives for developers to re-architect software to leverage NPUs.

This evolution will enable better data access, storage, security, and new user-centric capabilities. However, meaningful AI benefits from these devices are still several years away.

Securing the AI Frontier: Top 5 Cyber Trends for 2025


Ecosystm research shows that cybersecurity is the most discussed technology at the Board and Management level, driven by the increasing sophistication of cyber threats and the rapid adoption of AI. While AI enhances security, it also introduces new vulnerabilities. As organisations face an evolving threat landscape, they are adopting a more holistic approach to cybersecurity, covering prevention, detection, response, and recovery.

In 2025, cybersecurity leaders will continue to navigate a complex mix of technological advancements, regulatory pressures, and changing business needs. To stay ahead, organisations will prioritise robust security solutions, skilled professionals, and strategic partnerships.

Ecosystm analysts Darian Bird, Sash Mukherjee, and Simona Dimovski present the key cybersecurity trends for 2025.

Click here to download ‘Securing the AI Frontier: Top 5 Cyber Trends for 2025’ as a PDF

1. Cybersecurity Will Be a Critical Differentiator in Corporate Strategy

The convergence of geopolitical instability, cyber weaponisation, and an interconnected digital economy will make cybersecurity a cornerstone of corporate strategy. State-sponsored cyberattacks targeting critical infrastructure, supply chains, and sensitive data have turned cyber warfare into an operational reality, forcing businesses to prioritise security.

Regulatory pressures are driving this shift, mandating breach reporting, data sovereignty, and significant penalties, while international cybersecurity norms compel companies to align with evolving standards to remain competitive.

The stakes are high. Stakeholders now see cybersecurity as a proxy for trust and resilience, scrutinising both internal measures and ecosystem vulnerabilities.

2. Zero Trust Architectures Will Anchor AI-Driven Environments

The future of cybersecurity lies in never trusting, always verifying – especially where AI is involved.

In 2025, the rise of AI-driven systems will make Zero Trust architectures vital for cybersecurity. Unlike traditional networks with implicit trust, AI environments demand stricter scrutiny due to their reliance on sensitive data, autonomous decisions, and interconnected systems. The growing threat of adversarial attacks – data poisoning, model inversion, and algorithmic manipulation – highlights the urgency of continuous verification.
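
As a stripped-down illustration of “never trust, always verify” around an AI endpoint, the sketch below checks identity and policy on every single request, regardless of where it originates. The names, token format, and policy table are all hypothetical; a real deployment would delegate these checks to an identity provider and a policy engine.

```python
# Conceptual zero-trust gate for an internal AI inference endpoint:
# every call is verified on identity and policy, never on network location.

ALLOWED = {
    # (caller identity, action) pairs permitted by policy - illustrative only
    ("svc-fraud-scoring", "model:invoke"),
    ("analyst-team", "model:invoke"),
}


def verify_token(token: str):
    """Stand-in for verifying a credential against an identity provider."""
    return token.removeprefix("valid:") if token.startswith("valid:") else None


def handle_request(token: str, action: str, payload: dict) -> dict:
    identity = verify_token(token)
    if identity is None:
        return {"status": 401, "error": "unverified identity"}
    if (identity, action) not in ALLOWED:
        return {"status": 403, "error": "policy denies this action"}
    # Only a verified, authorised request ever reaches the model.
    return {"status": 200, "result": f"{action} accepted for {identity}"}


print(handle_request("valid:svc-fraud-scoring", "model:invoke", {"tx": 42}))
print(handle_request("valid:unknown-service", "model:invoke", {"tx": 43}))
```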

Global forces are driving this shift. Regulatory mandates like the EU’s DORA, the US Cybersecurity Executive Order, and the NIST Zero Trust framework call for robust safeguards for critical systems. These measures align with the growing reliance on AI in high-stakes sectors like Finance, Healthcare, and National Security.

3. Organisations Will Proactively Focus on AI Governance & Data Privacy

Organisations are caught between excitement and uncertainty regarding AI. While the benefits are immense, businesses struggle with the complexities of governing AI. The EU AI Act looms large, pushing global organisations to brace for stricter regulations, while a rise in shadow IT sees business units bypassing traditional IT to deploy AI independently.

In this environment of regulatory ambiguity and organisational flux, CISOs and CIOs will prioritise data privacy and governance, proactively securing organisations with strong data frameworks and advanced security solutions to stay ahead of emerging regulations.

Recognising that AI will be multi-modal, multi-vendor, and hybrid, organisations will invest in model orchestration and integration platforms to simplify management and ensure smoother compliance.

4. Network & Security Stacks Will Streamline Through Converged Platforms

This shift stems from the need for unified management, cost efficiency, and the recognition that standardisation enhances security posture.

Tech providers are racing to deliver comprehensive network and security platforms.

Recent M&A moves by HPE (Juniper), Palo Alto Networks (QRadar SaaS), Fortinet (Lacework), and LogRhythm (Exabeam) highlight this trend. Rising player Cato Networks is capitalising on mid-market demand for single-provider solutions, with many customers planning to consolidate vendors in their favour. Meanwhile, telecoms are expanding their SASE offerings to support organisations adapting to remote work and growing cloud adoption.

5. AI Will Be Widely Used to Combat AI-Powered Threats in Real-time

By 2025, the rise of AI-powered cyber threats will demand equally advanced AI-driven defences.

Threat actors are using AI to launch adaptive attacks like deepfake fraud, automated phishing, and adversarial machine learning, operating at a speed and scale beyond traditional defences.

Real-time AI solutions will be essential for detection and response.

Nation-state-backed advanced persistent threat (APT) groups and GenAI misuse are intensifying these challenges, exploiting vulnerabilities in critical infrastructure and supply chains. Mandatory reporting and threat intelligence sharing will strengthen AI defences, enabling real-time adaptation to emerging threats.
