Ground Realities: Australia’s Tech Pulse

Australia is making meaningful progress on its digital journey, driven by a vibrant tech sector, widespread technology adoption, and rising momentum in AI. But realising its full potential as a leading digital economy will depend on bridging the skills gap, moving beyond surface-level AI applications, accelerating SME digital transformation, and navigating ongoing economic uncertainty. For many enterprises, the focus is shifting from experimentation to execution, using technology to drive efficiency, resilience, and measurable outcomes.

Increasingly, leaders are asking not just how fast Australia can innovate, but how wisely. Strategic choices made now will shape a digital future grounded in national values, where technology fuels both economic growth and public good.

These five key realities capture the current state of Australia’s technology landscape, based on insights from Ecosystm’s industry conversations and research.

1. Responsible by Design: Australia’s Path to Trusted AI

AI in Australia is progressing with a strong focus on ethics and public trust. Regulators like ASIC and the OAIC (Office of the Australian Information Commissioner) have made it clear that AI systems, especially in banking, insurance, and healthcare, must be transparent and fair. Banks like ANZ and Commonwealth Bank have developed responsible AI frameworks to ensure their algorithms don’t unintentionally discriminate or mislead customers.

Yet a clear gap remains between ambition and readiness. Ecosystm research shows that 77% of Australian organisations acknowledge progress in piloting real-world use cases but worry they’re falling behind due to weak governance and poor-quality data.

The conversation around AI in Australia is evolving beyond productivity to include building trust. Success is now measured by the confidence regulators, customers, and communities have in AI systems. The path forward is clear: AI must drive innovation while upholding principles of fairness, transparency, and accountability.

2. The New AI Skillset: Where Data Science Meets Compliance and Context

Australia is on track to face a shortfall of 250,000 skilled workers in tech and business by 2030, according to the Future Skills Organisation. But the gap isn’t just in coders or engineers; it’s in hybrid talent: professionals who can connect AI development with regulatory, ethical, and commercial understanding.

In sectors like finance, AI adoption has stalled not due to lack of tools, but due to a lack of people who can interpret financial regulations and translate them into data science requirements. The same challenge affects healthcare, where digital transformation projects often slow down because technical teams lack domain-specific compliance and risk expertise.

While skilled migration has rebounded post-pandemic, the domestic pipeline remains limited. In response, organisations like Microsoft and Commonwealth Bank are investing in cross-skilling employees in AI, cloud, and risk management. Government initiatives such as CSIRO’s Responsible AI program and UNSW’s AI education efforts are also working to build talent fluent in both technology and ethics.

Despite these efforts, Australia’s shortage of hybrid talent remains a critical bottleneck, shaping not just how fast AI is adopted, but how responsibly and effectively it is deployed.

3. Beyond Coverage: Closing the Digital Gap for Regional Australia

Australia’s vast geography creates a uniquely local digital divide. Despite the National Broadband Network (NBN) rollout, many regional areas still face slow speeds and outages. The 2023 Regional Telecommunications Review found that over 2.8 million Australians remain without reliable internet access.

Industries suffer tangible impacts. GrainCorp, a major agribusiness, uses AI to communicate with workers during the harvest season, but regional connectivity gaps hinder real-time monitoring and analytics. In healthcare, the Royal Flying Doctor Service reports that poor internet reliability in remote areas undermines telehealth consultations, particularly crucial for Indigenous communities.

Efforts to address these gaps are underway. Telstra launched satellite services through partnerships with Starlink and OneWeb to cover remote zones. However, these solutions often come with prohibitive costs, particularly for smaller businesses, farms, and community organisations that cannot afford private network infrastructure.

The implications are clear: without reliable and affordable internet, regional enterprises will struggle to adopt AI, cloud-based systems, and digital tools that drive efficiency and equity. The next step must be a coordinated approach involving government, telecom providers, and industry, focused not just on coverage, but on quality, affordability, and support for local innovation. Bridging this digital divide is not simply about infrastructure; it’s about ensuring inclusive access to the tools that power modern business and essential services.

4. Resilience Over Defence: Australia’s Evolving Cybersecurity Focus

Australia’s cyber landscape has shifted sharply following major breaches like Optus, Medibank, and Latitude Financial, which pushed cybersecurity to the top of national agendas. In response, regulators and organisations have adopted a more urgent, coordinated stance. Under the Security of Critical Infrastructure (SOCI) Act, critical sectors must now report serious incidents within hours, enabling faster, government-led responses and stronger collective resilience.

Organisations across sectors are stepping up their defences, moving from reactive measures to proactive preparedness. NAB confirmed that it spends over USD 150M annually on cybersecurity, focusing on real-time threat hunting, simulation exercises, and red teaming. Telstra continues to run annual “cyber war games” involving IT, legal, and crisis communications teams to prepare for worst-case scenarios.

This collective focus signals a broader shift across Australian industries: cybersecurity maturity is no longer judged by perimeter defence alone. Instead, resilience – an organisation’s ability to detect, respond, and recover swiftly – is now the benchmark for protecting critical assets in an increasingly complex threat landscape.

5. Designing for the Long Term: Sustainability as a Core Capability

Organisations across Australia are under growing pressure – not only from regulators, but also from investors, customers, and communities – to demonstrate that their digital strategies are delivering real environmental and social outcomes. The bar has shifted from ESG disclosure to ESG performance. Technology is no longer just an efficiency lever; it’s expected to be a catalyst for sustainability transformation.

This expectation is especially acute in Australia’s core industries, where environmental impact is both material and highly scrutinised. In mining, for example, Rio Tinto’s 20-year renewable energy deal with Edify Energy aims to cut emissions by up to 70% at its Queensland aluminium operations by 2028. But the focus on transition is not limited to high-emission sectors. In financial services, institutions are actively supporting the shift to a low-carbon economy, from setting long-term net-zero targets to aligning lending practices with climate goals, including phasing out support for high-emission assets.

Yet for many, the path forward is still fragmented. ESG data often sits in silos, legacy systems constrain visibility, and ownership of sustainability metrics is scattered. Digital transformation efforts that treat ESG as an add-on, rather than embedding it into the foundations of data, governance, and decision-making, risk missing the mark. Australia’s next digital frontier will be measured not just by innovation, but by how effectively it enables a low-carbon, inclusive, and resilient economy.

Shaping Australia’s Digital Future

Australia’s technology journey is accelerating, but significant challenges must be addressed to unlock its full potential. Moving beyond basic digitalisation, the country is embracing advanced technologies as essential drivers of economic growth and productivity. Strong government initiatives and investments are creating a foundation for innovation and building a highly skilled digital workforce. However, overcoming barriers such as talent shortages, infrastructure gaps, and governance complexities is critical. Only by tackling these obstacles head-on and embedding technology deeply across organisations of all sizes can Australia transform automation into true data-driven autonomy and new business models, securing its position as a global digital leader.

Responsible AI, Competitive Advantage: A Guide to Global Regulation 

AI can no longer be treated as a side experiment; it is often embedded in core decisions, customer experiences, operations, and innovation. And as adoption accelerates, so does regulatory scrutiny. Around the world, governments are moving quickly to set rules on how AI can be used, what risks must be controlled, and who is held accountable when harm occurs. 

This shift makes Responsible AI a strategic imperative – not just a compliance checkbox. It’s about reducing reputational risk, protecting customers and IP, and earning the trust needed to scale AI responsibly. Embedding transparency, fairness, and accountability into AI systems isn’t just ethical; it’s smart business. 

Understanding the regulatory landscape is a key part of that responsibility. As frameworks evolve, organisations must stay ahead of the rules shaping AI and ensure leadership is asking the right questions.  

EU AI Act: Setting the Standard for Responsible AI  

The EU AI Act is the world’s first comprehensive legislative framework for AI. It introduces a risk-based classification system: minimal, limited, high, and unacceptable. High-risk applications, including those used in HR, healthcare, finance, law enforcement, and critical infrastructure, must comply with strict requirements around transparency, data governance, ongoing monitoring, and human oversight. Generative AI models above certain thresholds are also subject to obligations such as disclosing training data sources and ensuring content integrity. 

Although an EU regulation, the Act has global relevance. Organisations outside the EU may fall within its scope if their AI systems impact EU citizens or markets. And just as the GDPR became a de facto global standard for data protection, the EU AI Act is expected to create a ripple effect, shaping how other countries approach AI regulation. It sets a clear precedent for embedding safety, accountability, and human-centric principles into AI governance. As a result, it is one of the most closely tracked developments by compliance teams, risk officers, and AI governance leads worldwide.  

However, as AI governance firms up worldwide, Asia Pacific organisations must look beyond Europe. From Washington to Beijing, several regulatory frameworks are rapidly influencing global norms. Whether organisations are building, deploying, or partnering on AI, these five are shaping the rules of the game.  

AI Regulations Asia Pacific Organisations Must Track 

1. United States: Setting the Tone for Global AI Risk Management 

The U.S. Executive Order on AI (2023) signals a major policy shift in federal oversight. It mandates agencies to establish AI safety standards, governance protocols, and risk assessment practices, with an emphasis on fairness, explainability, and security, especially in sensitive domains like healthcare, employment, and finance. Central to this effort is the NIST AI Risk Management Framework (AI RMF), quickly emerging as a global touchstone. 

Though designed as domestic policy, the Order’s influence is global. It sets a high bar for what constitutes responsible AI and is already shaping procurement norms and international expectations. For Asia Pacific organisations, early alignment isn’t just about accessing the U.S. market; it’s about maintaining credibility and competitiveness in a global AI landscape that is rapidly converging around these standards. 

Why it matters to Asia Pacific organisations 

  • Global Supply Chains Depend on It. U.S.-linked firms must meet stringent AI safety and procurement standards to stay viable. Falling short could mean loss of market and partnership access. 
  • NIST Is the New Global Benchmark. Aligning with AI RMF enables consistent risk management and builds confidence with global regulators and clients. 
  • Explainability Is Essential. AI systems must provide auditable, transparent decisions to satisfy legal and market expectations. 
  • Security Isn’t Optional. Preventing misuse and securing models is a non-negotiable baseline for participation in global AI ecosystems. 

2. China: Leading with Strict GenAI Regulation 

China’s 2023 Generative AI Measures impose clear rules on public-facing GenAI services. Providers must align content with “core socialist values,” prevent harmful bias, and ensure outputs are traceable and verifiable. Additionally, algorithms must be registered with regulators, with re-approval required for significant changes. These measures embed accountability and auditability into AI development and signal a new standard for regulatory oversight. 

For Asia Pacific organisations, this is more than compliance with local laws; it’s a harbinger of global trends. As major economies adopt similar rules, embracing traceability, algorithmic governance, and content controls now offers a competitive edge. It also demonstrates a commitment to trustworthy AI, positioning firms as serious players in the future global AI market. 

Why it matters to Asia Pacific organisations 

  • Regulatory Access and Avoiding Risk. Operating in or reaching Chinese users means strict content and traceability compliance is mandatory. 
  • Global Trend Toward Algorithm Governance. Requirements like algorithm registration are becoming regional norms and early adoption builds readiness. 
  • Transparency and Documentation. Rules align with global moves toward auditability and explainability. 
  • Content and Data Localisation. Businesses must invest in moderation and rethink infrastructure to comply with China’s standards. 

3. Singapore: A Practical Model for Responsible AI 

Singapore’s Model AI Governance Framework, developed by IMDA and PDPC, offers a pragmatic and principles-led path to ethical AI. Centred on transparency, human oversight, robustness, fairness, and explainability, the framework is accompanied by a detailed implementation toolkit, including use-case templates and risk-based guidance. It’s a practical playbook for firms looking to embed responsibility into their AI systems from the start. 

For Asia Pacific organisations, Singapore’s approach serves as both a local standard and a launchpad for global alignment. Adopting it enables responsible innovation, prepares teams for tighter compliance regimes, and builds trust with stakeholders at home and abroad. It’s a smart move for firms seeking to lead responsibly in the region’s growing AI economy. 

Why it matters to Asia Pacific organisations 

  • Regionally Rooted, Globally Relevant. Widely adopted across Southeast Asia, the framework suits industries from finance to logistics. 
  • Actionable Tools for Teams. Templates and checklists make responsible AI real and repeatable at scale. 
  • Future Compliance-Ready. Even if voluntary now, it positions firms to meet tomorrow’s regulations with ease. 
  • Trust as a Strategic Asset. Emphasising fairness and oversight boosts buy-in from regulators, partners, and users. 
  • Global Standards Alignment. Harmonises with the NIST RMF and G7 guidance, easing cross-border operations. 

4. OECD & G7: The Foundations of Global AI Trust 

The OECD AI Principles, adopted by over 40 countries, and the G7 Hiroshima Process establish a high-level consensus on what trustworthy AI should look like. They champion values such as transparency, accountability, robustness, and human-centricity. The G7 further introduced voluntary codes for foundation model developers, encouraging practices like documenting limitations, continuous risk testing, and setting up incident reporting channels. 

For Asia Pacific organisations, these frameworks are early indicators of where global regulation is heading. Aligning now sends a strong signal of governance maturity, supports safer AI deployment, and strengthens relationships with investors and international partners. They also help firms build scalable practices that can evolve alongside regulatory expectations. 

Why it matters to Asia Pacific organisations 

  • Blueprint for Trustworthy AI. Principles translate to real-world safeguards like explainability and continuous testing. 
  • Regulatory Foreshadowing. Many Asia Pacific countries cite these frameworks in shaping their own AI policies. 
  • Investor and Partner Signal. Compliance demonstrates maturity to stakeholders, aiding capital access and deals. 
  • Safety Protocols for Scale. G7 recommendations help prevent AI failures and harmful outcomes. 
  • Enabler of Cross-Border Collaboration. Global standards support smoother AI export, adoption, and partnership. 

5. Japan: Balancing Innovation and Governance 

Japan’s AI governance, guided by its 2022 strategy and active role in the G7 Hiroshima Process, follows a soft law approach that encourages voluntary adoption of ethical principles. The focus is on human-centric, transparent, and safe AI, allowing companies to experiment within defined ethical boundaries without heavy-handed mandates. 

For Asia Pacific organisations, Japan offers a compelling governance model that supports responsible innovation. By following its approach, firms can scale AI while staying aligned with international norms and anticipating formal regulations. It’s a flexible yet credible roadmap for building internal AI governance today. 

Why it matters to Asia Pacific organisations 

  • Room to Innovate with Guardrails. Voluntary guidelines support agile experimentation without losing ethical direction. 
  • Emphasis on Human-Centred AI. Design principles prioritise user rights and build long-term trust. 
  • G7-Driven Interoperability. As a G7 leader, Japan’s standards help companies align with broader international norms. 
  • Transparency and Safety Matter. Promoting explainability and security sets firms apart in global markets. 
  • Blueprint for Internal Governance. Useful for creating internal policies that are regulation-ready. 

Why This Matters: Beyond Compliance 

The global regulatory patchwork is quickly evolving into a complex landscape of overlapping expectations. For multinational companies, this creates three clear implications: 

  • Compliance is no longer optional. With enforcement kicking in (especially under the EU AI Act), failure to comply could mean fines, blocked products, or reputational damage. 
  • Enterprise AI needs guardrails. Businesses must build not just AI products, but AI governance, covering model explainability, data quality, access control, bias mitigation, and audit readiness. 
  • Trust drives adoption. As AI systems touch more customer and employee experiences, being able to explain and defend AI decisions becomes essential for maintaining stakeholder trust. 

AI regulation is not a brake on innovation; it’s the foundation for sustainable, scalable growth. For forward-thinking businesses, aligning with emerging standards today will not only reduce risk but also increase competitive advantage tomorrow. The organisations that win in the AI age will be those that combine speed with responsibility, and governance with ambition. 

Ground Realities: Banking AI Pulse 

Consider the sheer volume of information flowing through today’s financial systems: every QR payment, e-KYC onboarding, credit card swipe, and cross-border transfer captures a data point. With digital banking and Open Banking, financial institutions are sitting on a goldmine of insights. But this isn’t just about data collection; it’s about converting that data into strategic advantage in a fast-moving, customer-driven landscape. 

With digital banks gaining traction and regulators around the world pushing bold reforms, the industry is entering a new phase of financial innovation powered by data and accelerated by AI.  

Ecosystm gathered insights and identified key challenges from senior banking leaders during a series of roundtables we moderated across Asia Pacific. The conversations revealed a clear picture of where momentum is building – and where obstacles continue to slow progress. From these discussions, several key themes emerged that highlight both opportunities and ongoing barriers in the Banking sector.  

1. AI is Leading to End-to-End Transformation 

Banks are moving beyond generic digital offerings to deliver hyper-personalised, data-driven experiences that build loyalty and drive engagement. AI is driving this shift by helping institutions anticipate customer needs through real-time analysis of behavioural, transactional, and demographic data. From pre-approved credit offers and contextual investment nudges to app interfaces that adapt to individual financial habits, personalisation is becoming a core strategy, not just a feature. This is a huge departure from reactive service models, positioning data as a long-term strategic asset. 

But the impact of AI isn’t limited to customer-facing experiences. It’s also driving innovation deep within the banking stack, from fraud detection and SME loan processing to intelligent chatbots that scale customer support. On the infrastructure side, banks are investing in agile, AI-ready platforms to support automation, model training, and advanced analytics at scale. These shifts are redefining how banks operate, make decisions, and deliver value. Institutions that integrate AI across both front-end journeys and back-end processes are setting a new benchmark for agility, efficiency, and competitiveness in a fast-changing financial landscape. 

2. Regulatory Shifts are Redrawing the Competitive Landscape 

Regulators are moving quickly in Asia Pacific by introducing frameworks for Open Banking, real-time payments, and even AI-specific standards like Singapore’s AI Verify. But the challenge for banks isn’t just keeping up with evolving external mandates. Internally, many are navigating a complicated mix of overlapping policies, built up over years of compliance with local, regional, and global rules. This often slows down innovation and makes it harder to implement AI and automation consistently across the organisation. 

As banks double down on AI, it is clear that governance can’t be an afterthought. Many are still dealing with fragmented ownership of AI systems, inconsistent oversight, and unclear rules around things like model fairness and explainability. The more progressive ones are starting to fix this by setting up centralised governance frameworks, investing in risk-based controls, and putting processes in place to monitor things like bias and model drift from day one. They are not just trying to stay compliant; they are preparing for what’s coming next. In this landscape, the ability to manage regulatory complexity with speed and clarity, both internally and externally, is quickly becoming a competitive edge. 

3. Success Depends on Strategy, Not Just Tech 

While enthusiasm for AI is high, sustainable success hinges on a clear, aligned strategy that connects technology to business outcomes. Many banks struggle with fragmented initiatives because they lack a unified roadmap that prioritises high-impact use cases. Without clear goals, AI projects often fail to deliver meaningful value, becoming isolated pilots with limited scalability. 

To avoid this, banks need to develop robust return-on-investment (ROI) models tailored to their context — measuring benefits like faster credit decisioning, reduced fraud losses, or increased cross-selling effectiveness. These models must consider not only the upfront costs of infrastructure and talent, but also ongoing expenses such as model retraining, governance, and integration with existing systems. 
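
To make this concrete, here is a minimal sketch of such an ROI model in Python. All figures, and the cost and benefit categories, are hypothetical illustrations rather than Ecosystm benchmarks:

```python
# Illustrative AI ROI model for a bank use case (all figures hypothetical).
from dataclasses import dataclass

@dataclass
class AIUseCaseROI:
    build_cost: float       # upfront infrastructure and talent
    annual_run_cost: float  # model retraining, governance, integration
    annual_benefit: float   # e.g. reduced fraud losses, faster credit decisions
    years: int = 3

    def total_cost(self) -> float:
        return self.build_cost + self.annual_run_cost * self.years

    def roi(self) -> float:
        net_value = self.annual_benefit * self.years - self.total_cost()
        return net_value / self.total_cost()

# Hypothetical fraud-detection initiative.
fraud_ai = AIUseCaseROI(build_cost=2_000_000, annual_run_cost=500_000,
                        annual_benefit=1_800_000)
print(f"3-year ROI: {fraud_ai.roi():.0%}")  # roughly 54% on these assumptions
```

Even a simple model like this forces the conversation the paragraph above describes: ongoing run costs sit alongside upfront build costs, and benefits must be named and quantified per use case.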

Ethical AI governance is another essential pillar. With growing regulatory scrutiny and public concern about opaque “black box” models, banks must embed transparency, fairness, and accountability into their AI frameworks from the outset. This goes beyond compliance; strong governance builds trust and is key to responsible, long-term use of AI in sensitive, high-stakes financial environments. 

4. Legacy Challenges Still Hold Banks Back 

Despite strong momentum, many banks face foundational barriers that hinder effective AI deployment. Chief among these is data fragmentation. Core customer, transaction, compliance, and risk data are often scattered across legacy systems and third-party platforms, making it difficult to access the integrated, high-quality data that AI models require. 

This limits the development of comprehensive solutions and makes AI implementations slower, costlier, and less effective. Instead of waiting for full system replacements, banks need to invest in integration layers and modern data platforms that unify data sources and make them AI-ready. These platforms can connect siloed systems – such as CRM, payments, and core banking – to deliver a consolidated view, which is crucial for accurate credit scoring, personalised offers, and effective risk management. 

Banks must also address talent gaps. The shortage of in-house AI expertise means many institutions rely on external consultants, which increases costs and reduces knowledge transfer. Without building internal capabilities and adjusting existing processes to accommodate AI, even sophisticated models may end up underused or misapplied. 

5. Collaboration and Capability Building are Key Enablers 

AI transformation isn’t just a technology project – it’s an organisation-wide shift that requires new capabilities, ways of working, and strategic partnerships. Success depends on more than just hiring data scientists. Relationship managers, credit officers, compliance teams, and frontline staff all need to be trained to understand and act on AI-driven insights. Processes such as loan approvals, fraud escalations, and customer engagement must be redesigned to integrate AI outputs seamlessly. 

To drive continuous innovation, banks should establish internal Centres of Excellence for AI. These hubs can lead experimentation with high-value use cases like predictive credit scoring or real-time fraud detection, while ensuring that learnings are shared across business units. They also help avoid duplication and promote strategic alignment. 

Partnerships with fintechs, technology providers, and academic institutions play a vital role as well. These collaborations offer access to cutting-edge tools, niche expertise, and locally relevant AI models that reflect the regulatory, cultural, and linguistic contexts banks operate in. In a fast-moving and increasingly competitive space, this combination of internal capability building and external collaboration gives banks the agility and foresight to lead. 

Ground Realities: Indonesia Tech Pulse

Indonesia’s vast, diverse population and scattered islands create a unique landscape for AI adoption. Across sectors – from healthcare to logistics and banking to public services – leaders view AI not just as a tool for efficiency but as a means to expand reach, build resilience, and elevate citizen experience. With AI expected to contribute up to 12% of Indonesia’s GDP by 2030, it’s poised to be a core engine of growth.

Yet, ambition isn’t enough. While AI interest is high, execution is patchy. Many organisations remain stuck in isolated pilots or siloed experiments. Those scaling quickly face familiar hurdles: fragmented infrastructure, talent gaps, integration issues, and a lack of unified strategy and governance.

Ecosystm gathered insights and identified key challenges from senior tech leaders during a series of roundtables we moderated in Jakarta. The conversations revealed a clear picture of where momentum is building – and where obstacles continue to slow progress. From these discussions, several key themes emerged that highlight both opportunities and ongoing barriers in the country’s digital journey.

Theme 1. Digital Natives are Accelerating Innovation; But Need Scalable Guardrails

Indonesia’s digital-first companies – especially in fintech, logistics tech, and media streaming – are rapidly building on AI and cloud-native foundations. Players like GoTo, Dana, Jenius, and Vidio are raising the bar not only in customer experience but also in scaling technology across a mobile-first nation. Their use of AI for customer support, real-time fraud detection, biometric eKYC, and smart content delivery highlights the agility of digital-native models. This innovation is particularly concentrated in Jakarta and Bandung, where vibrant startup ecosystems and rich talent pools drive fast iteration.

Yet this momentum brings new risks. Deepfake attacks during onboarding, unsecured APIs, and content piracy pose real threats. Without the layered controls and regulatory frameworks typical of banks or telecom providers, many startups are navigating high-stakes digital terrain without a safety net.

As these companies become pillars of Indonesia’s digital economy, a new kind of guardrail is essential: flexible enough to support rapid growth, yet robust enough to mitigate systemic risk.

A sector-wide governance playbook, grounded in local realities and aligned with global standards, could provide the balance needed to scale both quickly and securely.

Theme 2. Scaling AI in Indonesia: Why Infrastructure Investment Matters

Indonesia’s ambition for AI is high, and while digital infrastructure still faces challenges, significant opportunities lie ahead. Although telecom investment has slowed and state funding tightened, growing momentum from global cloud players is beginning to reshape the landscape. AWS’s commitment to building cloud zones and edge locations beyond Java is a major step forward.

For AI to scale effectively across Indonesia’s diverse archipelago, the next wave of progress will depend on stronger investment incentives for data centres, cloud interconnects, and edge computing.

A proactive government role – through updated telecom regulations, streamlined permitting, and public-private partnerships – can unlock this potential.

Infrastructure isn’t just the backbone of digital growth; it’s a powerful lever for inclusion, enabling remote health services, quality education, and SME empowerment across even the most distant regions.

Theme 3. Cyber Resilience Gains Momentum; But Needs to Be More Holistic

Indonesian organisations are facing an evolving wave of cyber threats – from sophisticated ransomware to DDoS attacks targeting critical services. This expanding threat landscape has elevated cyber resilience from a technical concern to a strategic imperative embraced by CISOs, boards, and risk committees alike. While many organisations invest heavily in security tools, the challenge remains in moving beyond fragmented solutions toward a truly resilient operating model that emphasises integration, simulation, and rapid response.

The shift from simply being “secure” to becoming genuinely “resilient” is gaining momentum. Resilience – captured by the Bahasa Indonesia term “ulet” – is now recognised as the ability not just to defend, but to endure disruption and bounce back stronger. Regulatory steps like OJK’s cyber stress testing and continuity planning requirements are encouraging organisations to go beyond mere compliance.

Organisations will now need to operationalise resilience by embedding it into culture through cross-functional drills, transparent crisis playbooks, and agile response practices – so when attacks strike, business impact is minimised and trust remains intact.

For many firms, especially in finance and logistics, this mindset and operational shift will be crucial to sustaining growth and confidence in a rapidly evolving digital landscape.

Theme 4. Organisations Need a Roadmap for Legacy System Transformation

Legacy systems continue to slow modernisation efforts in traditional sectors such as banking, insurance, and logistics by creating both technical and organisational hurdles that limit innovation and scalability. These outdated IT environments are deeply woven into daily operations, making integration complex, increasing downtime risks, and frustrating cross-functional teams striving to deliver digital value swiftly. The challenge goes beyond technology – there’s often a disconnect between new digital initiatives and existing workflows, which leads to bottlenecks and slows progress.

Recognising these challenges, many organisations are now investing in middleware solutions, automation, and phased modernisation plans that focus on upgrading key components gradually. This approach helps bridge the gap between legacy infrastructure and new digital capabilities, reducing the risk of enterprise-wide disruption while enabling continuous innovation.

The crucial next step is to develop and commit to a clear, incremental roadmap that balances risk with progress – ensuring legacy systems evolve in step with digital ambitions and unlock the full potential of transformation.

Theme 5. AI Journey Must Be Rooted in Local Talent and Use Cases

Ecosystm research reveals that only 13% of Indonesian organisations have experimented with AI, with most yet to integrate it into their core strategies.

While Indonesia’s AI maturity remains uneven, there is a broad recognition of AI’s potential as a powerful equaliser – enhancing public service delivery across 17,000 islands, democratising diagnostics in rural healthcare, and improving disaster prediction for flood-prone Jakarta.

The government’s 2045 vision emphasises inclusive growth and differentiated human capital, but achieving these goals requires more than just infrastructure investment. Building local talent pipelines is critical. Initiatives like IBM’s AI Academy in Batam, which has trained over 2,000 AI practitioners, are promising early steps. However, scaling this impact means embedding AI education into national curricula, funding interdisciplinary research, and supporting SMEs with practical adoption toolkits.

The opportunity is clear: GenAI can act as a multiplier, empowering even resource-constrained sectors to enhance reach, personalisation, and citizen engagement.

To truly unlock AI’s potential, Indonesia must move beyond imported templates and focus on developing grounded, context-aware AI solutions tailored to its unique landscape.

From Innovation to Impact

Indonesia’s tech journey is at a pivotal inflection point – where ambition must transform into alignment, and isolated pilots must scale into robust platforms. Success will depend not only on technology itself but on purpose-driven strategy, resilient infrastructure, cultural readiness, and shared accountability across industries. The future won’t be shaped by standalone innovations, but by coordinated efforts that convert experimentation into lasting, systemic impact.

AI’s Unintended Consequences: The Automation Paradox

Automation and AI hold immense promise for accelerating productivity, reducing errors, and streamlining tasks across virtually every industry. From manufacturing plants that operate robotic arms to software-driven solutions that analyse millions of data points in seconds, these technological advancements are revolutionising how we work. However, AI has already led to, and will continue to bring about, many unintended consequences.

One consequence, discussed for nearly a decade but now starting to impact employees and brand experiences, is the “automation paradox”. As AI and automation take on more routine tasks, employees find themselves tackling the complex exceptions and making high-stakes decisions.

What is the Automation Paradox?

1. The Shifting Burden from Low to High Value Tasks

When AI systems handle mundane or repetitive tasks, ‘human’ employees can direct their efforts toward higher-value activities. At first glance, this shift seems purely beneficial. AI helps filter out extraneous work, enabling humans to focus on the tasks that require creativity, empathy, or nuanced judgment. However, by design, these remaining tasks often carry greater responsibility. For instance, in a retail environment with automated checkout systems, a human staff member is more likely to deal with complex refund disputes or tense customer interactions. Or in a warehouse, as many processes are automated by AI and robots, humans are left with the oversight of, and responsibility for, entire processes. Over time, handling primarily high-pressure situations can become mentally exhausting, contributing to job stress and potential burnout.

2. Increased Reliance on Human Judgment in Edge Cases

AI excels at pattern recognition and data processing at scale, but unusual or unprecedented scenarios can stump even the best-trained models. The human workforce is left to solve these complex, context-dependent challenges. Take self-driving cars as an example. While most day-to-day driving can be safely automated, human oversight is essential for unpredictable events – like sudden weather changes or unexpected road hazards.

Human intervention can be a critical, life-or-death matter, amplifying the pressure and stakes for those still in the loop.

3. The Fallibility Factor of AI

Ironically, as AI becomes more capable, humans may trust it too much. When systems make mistakes, it is the human operator who must detect and rectify them. But the further removed people are from the routine checks and balances – since “the system” seems to handle things so competently – the greater the chance that an error goes unnoticed until it has grown into a major problem. For instance, in the aviation industry, pilots who rely heavily on autopilot systems must remain vigilant for rare but critical emergency scenarios, which can be more taxing due to limited practice in handling manual controls.

Add to These the Known Challenges of AI!

Bias in Data and Algorithms. AI systems learn from historical data, which can carry societal and organisational biases. If left unchecked, these algorithms can perpetuate or even amplify unfairness. For instance, an AI-driven hiring platform trained on past decisions might favour candidates from certain backgrounds, unintentionally excluding qualified applicants from underrepresented groups.
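
As a hedged illustration of how such a bias check might be automated, the sketch below computes a demographic-parity ratio on synthetic hiring decisions. The data is invented, and the 0.8 threshold follows the common “four-fifths” rule of thumb rather than any single regulation:

```python
# Demographic-parity check on synthetic hiring outcomes (illustrative only).
from collections import defaultdict

decisions = [  # (group, hired) pairs - invented data for illustration
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, hires = defaultdict(int), defaultdict(int)
for group, hired in decisions:
    totals[group] += 1
    hires[group] += hired  # True counts as 1

rates = {g: hires[g] / totals[g] for g in totals}
parity_ratio = min(rates.values()) / max(rates.values())
print(rates, f"parity ratio: {parity_ratio:.2f}")

# The 'four-fifths' rule of thumb flags ratios below 0.8 for review.
if parity_ratio < 0.8:
    print("Potential disparate impact - review training data and features.")
```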

Privacy and Data Security Concerns. The power of AI often comes from massive data collection, whether for predicting consumer trends or personalising user experiences. This accumulation of personal and sensitive information raises complex legal and ethical questions. Leaks, hacks, or improper data sharing can cause reputational damage and legal repercussions.

Skills Gap and Workforce Displacement. While AI can eliminate the need for certain manual tasks, it creates a demand for specialised skills, such as data science, machine learning operations, and AI ethics oversight. If an organisation fails to provide employees with retraining opportunities, it risks exacerbating skill gaps and losing valuable institutional knowledge.

Ethical and Social Implications. AI-driven decision-making can have profound impacts on communities. For example, a predictive policing system might inadvertently target specific neighbourhoods based on historical arrest data. When these systems lack transparency or accountability, public trust erodes, and social unrest can follow.

How Can We Mitigate the Known and Unknown Consequences of AI?

While some of the unintended consequences of AI and automation won’t be known until systems are deployed and processes are put into practice, there are some basic hygiene approaches that technology leaders and their organisational peers can take to minimise these impacts.

  1. Human-Centric Design. Incorporate user feedback into AI system development. Tools should be designed to complement human skills, not overshadow them.
  2. Comprehensive Training. Provide ongoing education for employees expected to handle advanced AI or edge-case scenarios, ensuring they remain engaged and confident when high-stakes decisions arise.
  3. Robust Governance. Develop clear policies and frameworks that address bias, privacy, and security. Assign accountability to leaders who understand both technology and organisational ethics.
  4. Transparent Communication. Maintain clear channels of communication regarding what AI can and cannot do. Openness fosters trust, both internally and externally.
  5. Increase your organisational AIQ (AI Quotient). Most employees are not fully aware of the potential of AI and its opportunity to improve – or change – their roles. Conduct regular upskilling and knowledge sharing activities to improve the AIQ of your employees so they start to understand how people, plus data and technology, will drive their organisation forward.

Let me know your thoughts on the Automation Paradox, and stay tuned for my next blog on redefining employee skill pathways to tackle its challenges.

AI Stakeholders: The HR Perspective

AI has broken free from the IT department. It’s no longer a futuristic concept but a present-day reality transforming every facet of business. Departments across the enterprise are now empowered to harness AI directly, fuelling innovation and efficiency without waiting for IT’s stamp of approval. The result? A more agile, data-driven organisation where AI unlocks value and drives competitive advantage.

Ecosystm’s research over the past two years, including surveys and in-depth conversations with business and technology leaders, confirms this trend: AI is the dominant theme. And while the potential is clear, the journey is just beginning.

Here are key AI insights for HR Leaders from our research.

Click here to download “AI Stakeholders: The HR Perspective” as a PDF.

HR: Leading the Charge (or Should Be)

Our research reveals a fascinating dynamic in HR. While 54% of HR leaders currently use AI for recruitment (scanning resumes, etc.), their vision extends far beyond. A striking majority plan to expand AI’s reach into crucial areas: 74% for workforce planning, 68% for talent development and training, and 62% for streamlining employee onboarding.

The impact is tangible, with organisations already seeing significant benefits. GenAI has streamlined presentation creation for bank employees, allowing them to focus on content rather than formatting and improving efficiency. Integrating GenAI into knowledge bases has simplified access to internal information, making it quicker and easier for employees to find answers. AI-driven recruitment screening is accelerating hiring in the insurance sector by analysing resumes and applications to identify top candidates efficiently. Meanwhile, AI-powered workforce management systems are transforming field worker management by optimising job assignments, enabling real-time tracking, and ensuring quick responses to changes.

The Roadblocks and the Opportunity

Despite this promising outlook, HR leaders face significant hurdles. Limited exploration of use cases, the absence of a unified organisational AI strategy, and ethical concerns are among the key barriers to wider AI deployments.

Perhaps most concerning is the limited role HR plays in shaping AI strategy. While 57% of tech and business leaders cite increased productivity as the main driver for AI investments, HR’s influence is surprisingly weak. Only 20% of HR leaders define AI use cases, manage implementation, or are involved in governance and ownership. A mere 8% primarily manage AI solutions.

This disconnect represents a massive opportunity.

2025 and Beyond: A Call to Action for HR

Despite these challenges, our research indicates HR leaders are prioritising AI for 2025. Increased productivity is the top expected outcome, while three in ten will focus on identifying better HR use cases as part of a broader data-centric approach.

The message is clear: HR needs to step up and claim its seat at the AI table. By proactively defining use cases, championing ethical considerations, and collaborating closely with tech teams, HR can transform itself into a strategic driver of AI adoption, unlocking the full potential of this transformative technology for the entire organisation. The future of HR is intelligent, and it’s time for HR leaders to embrace it.

Securing the AI Frontier: Top 5 Cyber Trends for 2025

Ecosystm research shows that cybersecurity is the most discussed technology at the Board and Management level, driven by the increasing sophistication of cyber threats and the rapid adoption of AI. While AI enhances security, it also introduces new vulnerabilities. As organisations face an evolving threat landscape, they are adopting a more holistic approach to cybersecurity, covering prevention, detection, response, and recovery.

In 2025, cybersecurity leaders will continue to navigate a complex mix of technological advancements, regulatory pressures, and changing business needs. To stay ahead, organisations will prioritise robust security solutions, skilled professionals, and strategic partnerships.

Ecosystm analysts Darian Bird, Sash Mukherjee, and Simona Dimovski present the key cybersecurity trends for 2025.

Click here to download ‘Securing the AI Frontier: Top 5 Cyber Trends for 2025’ as a PDF

1. Cybersecurity Will Be a Critical Differentiator in Corporate Strategy

The convergence of geopolitical instability, cyber weaponisation, and an interconnected digital economy will make cybersecurity a cornerstone of corporate strategy. State-sponsored cyberattacks targeting critical infrastructure, supply chains, and sensitive data have turned cyber warfare into an operational reality, forcing businesses to prioritise security.

Regulatory pressures are driving this shift, mandating breach reporting, data sovereignty, and significant penalties, while international cybersecurity norms compel companies to align with evolving standards to remain competitive.

The stakes are high. Stakeholders now see cybersecurity as a proxy for trust and resilience, scrutinising both internal measures and ecosystem vulnerabilities.

2. Zero Trust Architectures Will Anchor AI-Driven Environments

The future of cybersecurity lies in never trusting, always verifying – especially where AI is involved.

In 2025, the rise of AI-driven systems will make Zero Trust architectures vital for cybersecurity. Unlike traditional networks with implicit trust, AI environments demand stricter scrutiny due to their reliance on sensitive data, autonomous decisions, and interconnected systems. The growing threat of adversarial attacks – data poisoning, model inversion, and algorithmic manipulation – highlights the urgency of continuous verification.

Global forces are driving this shift. Regulatory mandates like the EU’s DORA, the US Cybersecurity Executive Order, and the NIST Zero Trust framework call for robust safeguards for critical systems. These measures align with the growing reliance on AI in high-stakes sectors like Finance, Healthcare, and National Security.

3. Organisations Will Proactively Focus on AI Governance & Data Privacy

Organisations are caught between excitement and uncertainty regarding AI. While the benefits are immense, businesses struggle with the complexities of governing AI. The EU AI Act looms large, pushing global organisations to brace for stricter regulations, while a rise in shadow IT sees business units bypassing traditional IT to deploy AI independently.

In this environment of regulatory ambiguity and organisational flux, CISOs and CIOs will prioritise data privacy and governance, proactively securing organisations with strong data frameworks and advanced security solutions to stay ahead of emerging regulations.

Recognising that AI will be multi-modal, multi-vendor, and hybrid, organisations will invest in model orchestration and integration platforms to simplify management and ensure smoother compliance.

4. Network & Security Stacks Will Streamline Through Converged Platforms

Tech providers are racing to deliver comprehensive network and security platforms.

This shift stems from the need for unified management, cost efficiency, and the recognition that standardisation enhances security posture.

Recent M&A moves by HPE (Juniper), Palo Alto Networks (QRadar SaaS), Fortinet (Lacework), and LogRhythm (Exabeam) highlight this trend. Rising player Cato Networks is capitalising on mid-market demand for single-provider solutions, with many customers planning to consolidate vendors in their favour. Meanwhile, telecoms are expanding their SASE offerings to support organisations adapting to remote work and growing cloud adoption.

5. AI Will Be Widely Used to Combat AI-Powered Threats in Real-time

By 2025, the rise of AI-powered cyber threats will demand equally advanced AI-driven defences.

Threat actors are using AI to launch adaptive attacks like deepfake fraud, automated phishing, and adversarial machine learning, operating at a speed and scale beyond traditional defences.

Real-time AI solutions will be essential for detection and response.

Nation-state-backed advanced persistent threat (APT) groups and GenAI misuse are intensifying these challenges, exploiting vulnerabilities in critical infrastructure and supply chains. Mandatory reporting and threat intelligence sharing will strengthen AI defences, enabling real-time adaptation to emerging threats.

Building Trust in Data: Strategic Imperatives for India’s Leaders

At a recently held Ecosystm roundtable, in partnership with Qlik and 121Connects, Ecosystm Principal Advisor Manoj Chugh, moderated a conversation where Indian tech and data leaders discussed building trust in data strategies. They explored ways to automate data pipelines and improve governance to drive better decisions and business outcomes. Here are the key takeaways from the session.

Manoj Chugh, Principal Advisor, Ecosystm

Data isn’t just a byproduct anymore; it’s the lifeblood of modern businesses, fuelling informed decisions and strategic growth. But with vast amounts of data, the challenge isn’t just managing it; it’s building trust. AI, once a beacon of hope, is now at risk without a reliable data foundation. Ecosystm research reveals that a staggering 66% of Indian tech leaders doubt their organisation’s data quality, and the problem of data silos is exacerbating this trust crisis.

At the Leaders Roundtable in Mumbai, I had the opportunity to moderate a discussion among data and digital leaders on the critical components of building trust in data and leveraging it to drive business value. The consensus was that building trust requires a comprehensive strategy that addresses the complexities of data management and positions the organisation for future success. Here are the key strategies that are essential for achieving these goals.

1. Adopting a Unified Data Approach

Organisations are facing a growing wave of complex workloads and business initiatives. To manage this expansion, IT teams are turning to multi-cloud, SaaS, and hybrid environments. However, this diverse landscape introduces new challenges, such as data silos, security vulnerabilities, and difficulties in ensuring interoperability between systems.

67% of organisations in India struggle with using their data due to complexities such as data silos and integration challenges.

A unified data strategy is crucial to overcome these challenges. By ensuring platform consistency, robust security, and seamless data integration, organisations can simplify data management, enhance security, and align with business goals – driving informed decisions, innovation, and long-term success.

Real-time data integration is essential for timely data availability, enabling organisations to make data-driven decisions quickly and effectively. By integrating data from various sources in real-time, businesses can gain valuable insights into their operations, identify trends, and respond to changing market conditions.

Organisations that are able to integrate their IT and operational technology (OT) systems find their data accuracy increasing. By combining IT’s digital data management expertise with OT’s real-time operational insights, organisations can ensure more accurate, timely, and actionable data. This integration enables continuous monitoring and analysis of operational data, leading to faster identification of errors, more precise decision-making, and optimised processes.

2. Enhancing Data Quality with Automation and Collaboration

As the volume and complexity of data continue to grow, ensuring high data quality is essential for organisations to make accurate decisions and to drive trust in data-driven solutions. Automated data quality tools are useful for cleansing and standardising data to eliminate errors and inconsistencies.

When you have the right tools in place, it becomes easier to classify data correctly and implement frameworks for governance. Automated tools can help identify sensitive data, control access, and standardise definitions across departments.
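
A minimal sketch of what such automated checks could look like, using pandas; the rules and the sample data are illustrative assumptions, not any specific vendor’s toolset:

```python
# Illustrative automated data-quality checks (rules are assumptions).
import pandas as pd

df = pd.DataFrame({
    "customer_id": [101, 102, 102, 104],
    "email": ["a@x.com", None, "b@x.com", "not-an-email"],
    "balance": [250.0, -10.0, 1200.0, 99.0],
})

issues = []
if df["customer_id"].duplicated().any():
    issues.append("duplicate customer_id values")
if df["email"].isna().any():
    issues.append("missing emails")
if (~df["email"].dropna().str.contains("@")).any():
    issues.append("malformed emails")
if (df["balance"] < 0).any():
    issues.append("negative balances (possible outliers)")

print("Data quality issues:", issues or "none found")
```

Checks like these, run automatically on every data load, surface errors and inconsistencies before they reach downstream models and reports.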

As mentioned earlier, integrating IT and OT systems can help organisations improve operational efficiency and resilience. By leveraging data-driven insights, businesses can identify bottlenecks, optimise workflows, and proactively address potential issues before they escalate. This can lead to cost savings, increased productivity, and improved customer satisfaction.

However, while automation technologies can help, organisations must also invest in training employees in data management, data visualisation, and data governance.

3. Modernising Data Infrastructure for Agility and Innovation

In today’s fast-paced business landscape, agility is paramount. Modernising data infrastructure is essential to remain competitive – the right digital infrastructure focuses on optimising costs, boosting capacity and agility, and maximising data leverage, all while safeguarding the organisation from cyber threats. This involves migrating data lakes and warehouses to cloud platforms and adopting advanced analytics tools. However, modernisation efforts must be aligned with specific business goals, such as enhancing customer experiences, optimising operations, or driving innovation. A well-modernised data environment not only improves agility but also lays the foundation for future innovations.

43% of organisations in India face obstacles in AI implementation due to unclear data governance and ethical guidelines.

Technology leaders must assess whether their data architecture supports the organisation’s evolving data requirements, considering factors such as data flows, necessary management systems, processing operations, and AI applications. The ideal data architecture should be tailored to the organisation’s specific needs, considering current and future data demands, available skills, costs, and scalability.

4. Strengthening Data Governance with a Structured Approach

Data governance is crucial for establishing trust in data, and providing a framework to manage its quality, integrity, and security throughout its lifecycle. By setting clear policies and processes, organisations can build confidence in their data, support informed decision-making, and foster stakeholder trust.

A key component of data governance is data lineage – the ability to trace the history and transformation of data from its source to its final use. Understanding this journey helps organisations verify data accuracy and integrity, ensure compliance with regulatory requirements and internal policies, improve data quality by proactively addressing issues, and enhance decision-making through context and transparency.
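
To make the idea concrete, here is a minimal, assumed sketch of recording lineage metadata as data moves through a pipeline; production data platforms capture this automatically, but the underlying structure is similar:

```python
# Minimal lineage log for a data pipeline (structure is illustrative).
from datetime import datetime, timezone

lineage = []

def record_step(dataset: str, source: str, transformation: str) -> None:
    """Append one lineage entry: where the data came from and what was done."""
    lineage.append({
        "dataset": dataset,
        "source": source,
        "transformation": transformation,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

record_step("sales_clean", source="crm_export.csv",
            transformation="dropped rows with null customer_id")
record_step("sales_monthly", source="sales_clean",
            transformation="aggregated revenue by month")

# Trace the history from final dataset back to its source.
for step in lineage:
    print(f'{step["dataset"]} <- {step["source"]}: {step["transformation"]}')
```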

A tiered data governance structure, with strategic oversight at the executive level and operational tasks managed by dedicated data governance councils, ensures that data governance aligns with broader organisational goals and is implemented effectively.

Are You Ready for the Future of AI?

The ultimate goal of your data management and discovery mechanisms is to ensure that you are advancing at pace with the industry. The analytics landscape is undergoing a profound transformation, promising to revolutionise how organisations interact with data. A key innovation, the data fabric, is enabling organisations to analyse unstructured data, where the true value often lies, resulting in cleaner and more reliable data models.

GenAI has emerged as another game-changer, empowering employees across the organisation to become citizen data scientists. This democratisation of data analytics allows for a broader range of insights and fosters a more data-driven culture. Organisations can leverage GenAI to automate tasks, generate new ideas, and uncover hidden patterns in their data.

The shift from traditional dashboards to real-time conversational tools is also reshaping how data insights are delivered and acted upon. These tools enable users to ask questions in natural language, receiving immediate and relevant answers based on the underlying data. This conversational approach makes data more accessible and actionable, empowering employees to make data-driven decisions at all levels of the organisation.

To fully capitalise on these advancements, organisations need to reassess their AI/ML strategies. By ensuring that their tech initiatives align with their broader business objectives and deliver tangible returns on investment, organisations can unlock the full potential of data-driven insights and gain a competitive edge. It is equally important to build trust in AI initiatives, through a strong data foundation. This involves ensuring data quality, accuracy, and consistency, as well as implementing robust data governance practices. A solid data foundation provides the necessary groundwork for AI and GenAI models to deliver reliable and valuable insights.

Accelerate AI Adoption: Guardrails for Effective Use

“AI Guardrails” are often used not only as a method to keep AI programs on track, but also as a way to accelerate AI investments. Projects and programs that fall within the guardrails should be easy to approve, govern, and manage – whereas those outside the guardrails require further review by a governance team or approval body. The concept of guardrails is familiar to many tech businesses and is often applied in areas such as cybersecurity, digital initiatives, data analytics, governance, and management.

While guidance on implementing guardrails is common, organisations often leave the task of defining their specifics, including their components and functionalities, to their AI and data teams. To assist with this, Ecosystm has surveyed some leading AI users among our customers to get their insights on the guardrails that can provide added value.

Data Security, Governance, and Bias

  • Data Assurance. Has the organisation implemented robust data collection and processing procedures to ensure data accuracy, completeness, and relevance for the purpose of the AI model? This includes addressing issues like missing values, inconsistencies, and outliers.
  • Bias Analysis. Does the organisation analyse training data for potential biases – demographic, cultural and so on – that could lead to unfair or discriminatory outputs?
  • Bias Mitigation. Is the organisation implementing techniques like debiasing algorithms and diverse data augmentation to mitigate bias in model training?
  • Data Security. Does the organisation use strong data security measures to protect sensitive information used in training and running AI models? (See the sketch after this list.) 
  • Privacy Compliance. Is the AI opportunity compliant with relevant data privacy regulations (country and industry-specific as well as international standards) when collecting, storing, and utilising data?
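
As one hedged illustration of the Data Security item above, the sketch below scans free-text fields for patterns that resemble sensitive identifiers before the data reaches model training. The regular expressions are deliberately simplistic assumptions; real deployments would rely on dedicated PII-detection tooling:

```python
# Naive PII scan before data enters an AI training pipeline (patterns are
# illustrative assumptions; production systems use dedicated PII detectors).
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(text: str) -> list[str]:
    """Return the names of PII patterns found in a text field."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

record = "Contact john.doe@example.com or +61 400 123 456 about the claim."
found = scan_for_pii(record)
if found:
    print("Sensitive data detected:", found)  # -> ['email', 'phone']
```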

Model Development and Explainability

  • Explainable AI. Does the model use explainable AI (XAI) techniques to understand and explain how AI models reach their decisions, fostering trust and transparency? (See the sketch after this list.) 
  • Fair Algorithms. Are algorithms and models designed with fairness in mind, considering factors like equal opportunity and non-discrimination?
  • Rigorous Testing. Does the organisation conduct thorough testing and validation of AI models before deployment, ensuring they perform as intended, are robust to unexpected inputs, and avoid generating harmful outputs?
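
A minimal sketch of one common XAI technique, permutation importance, which estimates how much each input feature drives a model’s predictions; the model and data here are synthetic placeholders:

```python
# Permutation importance on a synthetic model (illustrative sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # three synthetic features
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)  # driven by feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")  # feature_0 should dominate
```

Techniques like this make a model’s behaviour auditable: if an unexpected feature dominates, that is a prompt to investigate before deployment.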

AI Deployment and Monitoring

  • Oversight Accountability. Has the organisation established clear roles and responsibilities for human oversight throughout the AI lifecycle, ensuring human control over critical decisions and mitigation of potential harm?
  • Continuous Monitoring. Are there mechanisms to continuously monitor AI systems for performance, bias drift, and unintended consequences, addressing any issues promptly? (See the sketch after this list.) 
  • Robust Safety. Can the organisation ensure AI systems are robust and safe, able to handle errors or unexpected situations without causing harm? This includes thorough testing and validation of AI models under diverse conditions before deployment.
  • Transparency Disclosure. Is the organisation transparent with stakeholders about AI use, including its limitations, potential risks, and how decisions made by the system are reached?
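
As a hedged example of the Continuous Monitoring item, the sketch below computes the Population Stability Index (PSI), a widely used measure of input drift that compares live data against the training baseline; the 0.2 alert threshold is a common convention, not a universal standard:

```python
# Population Stability Index (PSI) to monitor input drift (illustrative).
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Compare the live data distribution against the training baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    l_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Avoid division by zero / log(0) in sparse bins.
    b_pct, l_pct = np.clip(b_pct, 1e-6, None), np.clip(l_pct, 1e-6, None)
    return float(np.sum((l_pct - b_pct) * np.log(l_pct / b_pct)))

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
prod = rng.normal(0.4, 1.0, 10_000)   # shifted production data

score = psi(train, prod)
print(f"PSI: {score:.3f}")  # a common convention flags PSI > 0.2 for review
```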

Other AI Considerations

  • Ethical Guidelines. Has the organisation developed and adhered to ethical principles for AI development and use, considering areas like privacy, fairness, accountability, and transparency?
  • Legal Compliance. Has the organisation created mechanisms to stay updated on and compliant with relevant legal and regulatory frameworks governing AI development and deployment?
  • Public Engagement. What mechanisms are in place to encourage open discussion and engage with the public regarding the use of AI, addressing concerns and building trust?
  • Social Responsibility. Has the organisation considered the environmental and social impact of AI systems, including energy consumption, ecological footprint, and potential societal consequences?

Implementing these guardrails requires a comprehensive approach that includes policy formulation, technical measures, and ongoing oversight. It might take a little longer to set up this capability, but in the mid to longer term, it will allow organisations to accelerate AI implementations and drive a culture of responsible AI use and deployment.
