2023 has been an eventful year. In May, the WHO announced that the COVID-19 pandemic was no longer a global public health emergency. However, other forces that shaped 2023 will continue to impact the market well into 2024 and beyond.
Global Conflicts. The Russian invasion of Ukraine persisted; the Israeli-Palestinian conflict escalated into war; African nations continued to see armed conflicts and political crises; and all of this drove significant population displacement.
Banking Crisis. US regional banks collapsed – the failures of First Republic Bank and Silicon Valley Bank rank as the second and third-largest bank failures in US history; in Switzerland, Credit Suisse was acquired by UBS.
Climate Emergency. The UN’s synthesis report found that there is still a chance to limit global temperature increases to 1.5°C; Loss and Damage conversations continued without significant impact.
Power of AI. Interest in generative AI models heated up; tech vendors incorporated foundation models into their enterprise offerings – Microsoft Copilot was launched; and growing awareness of AI risks strengthened calls for Ethical/Responsible AI.
Read on to find out what Ecosystm analysts Achim Granzen, Darian Bird, Peter Carr, Sash Mukherjee and Tim Sheedy consider the top 5 tech market forces that will impact organisations in 2024.
#1 State-sponsored Attacks Will Alter the Nature of Security Threats
It is becoming clearer that the post-Cold-War era is over and that we are transitioning to a multipolar world. In this new age, malevolent governments will be increasingly emboldened to carry out cyber and physical attacks without fear of sanctions.
Unlike most malicious actors driven by profit today, state adversaries will be motivated to maximise disruption.
Rather than encrypting valuable data with ransomware, these attackers will deploy wiper malware. State-sponsored attacks against critical infrastructure, such as transportation, energy, and undersea cables, will be designed to inflict irreversible damage. The recent 23andMe breach is an example of how ethnically targeted attacks could be designed to sow fear and distrust. Additionally, even the threat of spyware and phishing will cause some activists, journalists, and politicians to self-censor.

#2 AI Legislation Breaches Will Occur, But Will Go Unpunished
With US President Biden’s recently issued “Executive Order on Safe, Secure, and Trustworthy AI” and the European Union’s “AI Act” set for adoption by the European Parliament in mid-2024, codified and enforceable AI legislation is on the verge of becoming reality. However, oversight structures with the powers to enforce the rules are not yet in place for either initiative and will take time to build out.
In 2024, the first instances of AI legislation violations will surface – potentially revealed by whistleblowers or significant public AI failures – but no legal action will be taken yet.

#3 AI Will Increase Net-New Carbon Emissions
In an age focused on reducing carbon and greenhouse gas emissions, AI is pushing in the opposite direction – and organisations often fail to track these emissions, which fall under the broader “Scope 3” category. Researchers at the University of Massachusetts, Amherst, found that training a single AI model can emit more than 283 tonnes of carbon dioxide – equivalent to the annual emissions of 62.6 gasoline-powered vehicles.
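As a rough sanity check on that equivalence, the short calculation below redoes the arithmetic. The per-vehicle figure of roughly 4.6 tonnes of CO2 per year is our assumption (in line with common EPA estimates), not a number from the study, so the result is indicative only.

```python
# Sanity check on the emissions equivalence cited above.
# Assumption (not from the cited study): a typical gasoline-powered passenger
# vehicle emits roughly 4.6 tonnes of CO2 per year (common EPA estimate).

TRAINING_EMISSIONS_T = 283   # tonnes of CO2 from training a single large model
CO2_PER_VEHICLE_T = 4.6      # assumed tonnes of CO2 per vehicle per year

vehicle_years = TRAINING_EMISSIONS_T / CO2_PER_VEHICLE_T
print(f"{TRAINING_EMISSIONS_T} t CO2 ≈ {vehicle_years:.1f} vehicle-years of driving")
# Prints roughly 61.5 vehicle-years – in the same range as the ~62.6 figure
# cited above; the exact number depends on the per-vehicle assumption.
```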
Organisations rely on their cloud providers to reduce these emissions – Amazon targets net-zero by 2040, while Microsoft and Google aim for 2030 – yet transparency on the greenhouse gas emissions of AI workloads remains limited, and the providers’ differing routes to net-zero will largely determine how much AI ultimately adds to global emissions.
Some argue that AI can help in better mapping a path to net-zero, but there is concern about whether the damage caused in the process will outweigh the benefits.

#4 ESG Will Transform into GSE to Become the Future of GRC
Previously viewed as a standalone concept, ESG will be increasingly recognised as integral to Governance, Risk, and Compliance (GRC) practices. The ‘E’ in ESG, representing environmental concerns, is becoming synonymous with compliance due to growing environmental regulations. The ‘S’, or social aspect, is merging with risk management, addressing contemporary issues such as ethical supply chains, workplace equity, and modern slavery, which traditional GRC models often overlook. Governance continues to be a crucial component.
The key to organisational adoption and transformation will be understanding that ESG is not an isolated function but is intricately linked with existing GRC capabilities.
This will present opportunities for GRC and Risk Management providers to adapt their current solutions, already deployed within organisations, to enhance ESG effectiveness. This strategy promises mutual benefits, improving compliance and risk management while simultaneously advancing ESG initiatives.

#5 Productivity Will Dominate Workforce Conversations
The skills discussion shifted significantly over 2023. At the start of the year, HR leaders were still dealing with the ‘productivity conundrum’ – balancing employee flexibility and productivity in a hybrid work setting. There were also concerns about a skills shortage, particularly in IT, as organisations prioritised tech-driven transformation and innovation.
Now, the focus is on assessing the pros and cons (mainly the ROI) of providing employees with advanced productivity tools. Early studies on Microsoft Copilot, for example, showed that 70% of users experienced increased productivity. Comments such as Narayana Murthy’s remarks on 70-hour work weeks have re-ignited conversations about employee well-being and the role of technology in enabling employees to achieve more in less time.
Against the backdrop of skills shortages and the need for better employee experience to retain talent, organisations are increasingly adopting/upgrading their productivity tools – starting with their Sales & Marketing functions.


The challenge of AI is that it is hard to build a business case when the outcomes are inherently uncertain. Unlike a traditional process improvement initiative, there are few guarantees that AI will solve the problem it is meant to solve. Organisations that have been experimenting with AI for some time are aware of this and have begun to formalise their Proof of Concept (PoC) process to make it easily repeatable by anyone in the organisation who has a use case for AI. PoCs can validate assumptions, demonstrate the feasibility of an idea, and rally stakeholders behind the project.
PoCs are particularly useful at a time when AI is experiencing both heightened visibility and increased scrutiny. Boards, senior management, risk, legal and cybersecurity professionals are all scrutinising AI initiatives more closely to ensure they do not put the organisation at risk of breaking laws and regulations or damaging customer or supplier relationships.
13 Steps to Building an AI PoC
Despite seeming lightweight and easy to implement, a good PoC is methodologically sound and consistent in its approach. To implement a PoC for AI initiatives, organisations need to work through the following steps (a minimal sketch of how several of them fit together appears after the list):
- Clearly define the problem. Businesses need to understand and clearly articulate the problem they want AI to solve. Is it about improving customer service, automating manual processes, enhancing product recommendations, or predicting machinery failure?
- Set clear objectives. What will success look like for the PoC? Is it about demonstrating technical feasibility, showing business value, or both? Set tangible metrics to evaluate the success of the PoC.
- Limit the scope. PoCs should be time-bound and narrow in scope. Instead of trying to tackle a broad problem, focus on a specific use case or a subset of data.
- Choose the right data. AI is heavily dependent on data. For a PoC, select a representative dataset that’s large enough to provide meaningful results but manageable within the constraints of the PoC.
- Build a multidisciplinary team. Involve team members from IT, data science, business units, and other relevant stakeholders. Their combined perspectives will ensure both technical and business feasibility.
- Prioritise speed over perfection. Use available tools and platforms to expedite the development process. It’s more important to quickly test assumptions than to build a highly polished solution.
- Document assumptions and limitations. Clearly state any assumptions made during the PoC, as well as known limitations. This helps set expectations and can guide future work.
- Present results clearly. Once the PoC is complete, create a clear and concise report or presentation that showcases the results, methodologies, and potential implications for the business.
- Get feedback. Allow stakeholders to provide feedback on the PoC. This includes end-users, technical teams, and business leaders. Their insights will help refine the approach and guide future iterations.
- Plan for the next steps. What actions need to follow a successful PoC demonstration? This might involve a pilot project with a larger scope, integrating the AI solution into existing systems, or scaling the solution across the organisation.
- Assess costs and ROI. Evaluate the costs associated with scaling the solution and compare them with the anticipated ROI. This will be crucial for securing budget and support for further expansion.
- Continually learn and iterate. AI is an evolving field. Use the PoC as a learning experience and be prepared to continually iterate on your solutions as technologies and business needs evolve.
- Consider ethical and social implications. Ensure that the AI initiative respects privacy, reduces bias, and upholds the ethical standards of the organisation. This is critical for building trust and ensuring long-term success.
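To make these steps concrete, here is a minimal sketch of how a team might wire several of them together for a hypothetical customer-churn use case: a clearly defined success metric, a deliberately narrow scope and small dataset, speed over perfection, and documented assumptions reported against the agreed threshold. The dataset, threshold, and model choice below are illustrative placeholders rather than a prescription.

```python
# Illustrative AI PoC scaffold: predicting customer churn (hypothetical use case).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Set clear objectives: a tangible, pre-agreed success metric and target
SUCCESS_METRIC = "ROC AUC"
SUCCESS_THRESHOLD = 0.75  # assumed target agreed with business stakeholders

# Limit the scope / choose the right data: one use case, a small representative
# dataset (synthetic stand-in here; a real PoC would use a vetted data sample)
X, y = make_classification(n_samples=2_000, n_features=20, weights=[0.85], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# Prioritise speed over perfection: a standard baseline model, no tuning
model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# Document assumptions and present results clearly against the agreed threshold
assumptions = [
    "Synthetic data approximates real churn behaviour (placeholder only)",
    "A 30% hold-out split is representative of future customers",
]
print(f"{SUCCESS_METRIC}: {auc:.3f} (target >= {SUCCESS_THRESHOLD})")
print("PoC outcome:", "proceed to pilot" if auc >= SUCCESS_THRESHOLD else "revisit approach")
for a in assumptions:
    print(" - assumption:", a)
```

The same scaffold can be reused for other use cases by swapping in the relevant dataset and success metric – which is what makes a formalised PoC process repeatable across the organisation.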
Customising AI for Your Business
The primary purpose of a PoC is to validate an idea quickly and with minimal risk. It should provide a clear path for decision-makers to either proceed with a more comprehensive implementation or to pivot and explore alternative solutions. It is important for the legal, risk and cybersecurity teams to be aware of the outcomes and support further implementation.
AI initiatives will inevitably drive significant productivity and customer experience improvements – but not every solution will be right for the business. At Ecosystm, we have come across organisations that have employed conversational AI in their contact centres to achieve entirely distinct results – so the AI experience of peers and competitors may not be relevant. A consistent PoC process that trains business and technology teams across the organisation and encourages experimentation at every possible opportunity would be far more useful.

It’s been barely one year since we entered the Generative AI Age. On November 30, 2022, OpenAI launched ChatGPT, with no fanfare or promotion. Since then, Generative AI has become arguably the most talked-about tech topic, both in terms of opportunities it may bring and risks that it may carry.
The landslide success of ChatGPT and other Generative AI applications with consumers and businesses has put a renewed and strengthened focus on the potential risks associated with the technology – and on how best to regulate and manage these. Government bodies and agencies have been creating voluntary guidelines for the use of AI for a number of years now (Singapore’s Model AI Governance Framework, for example, was launched in 2019).
There is no active legislation on the development and use of AI yet. Crucially, however, a number of such initiatives are currently on their way through legislative processes globally.
EU’s Landmark AI Act: A Step Towards Global AI Regulation
The European Union’s “Artificial Intelligence Act” is a leading example. The European Commission (EC) started examining AI legislation in 2020 with a focus on
- Protecting consumers
- Safeguarding fundamental rights, and
- Avoiding unlawful discrimination or bias
The EC published an initial legislative proposal in 2021, and the European Parliament adopted a revised version as its official position on AI in June 2023, moving the legislative process into its final phase.
This proposed EU AI Act takes a risk management approach to regulating AI. Organisations looking to employ AI must take note: an internal risk management approach to deploying AI would essentially be mandated by the Act. It is likely that other legislative initiatives will follow a similar approach, making the AI Act a potential model for AI legislation globally (following the trail blazed by the General Data Protection Regulation). The “G7 Hiroshima AI Process”, established at the G7 summit in Japan in May 2023, is a key example of international discussion and collaboration on the topic (with a focus on Generative AI).
Risk Classification and Regulations in the EU AI Act
At the heart of the AI Act is a system to assess the risk level of AI technology, classify the technology (or its use case), and prescribe appropriate regulations to each risk class.

For each of these four risk levels – unacceptable, high, limited, and minimal risk – the AI Act proposes a set of rules and regulations. Evidently, the regulatory focus is on high-risk AI systems.
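As a rough illustration of how an organisation might begin to operationalise this tiered approach internally, the sketch below maps the four risk levels to indicative obligations and classifies a few hypothetical use cases. The tier assignments and obligation lists are simplified assumptions for illustration only – they are not the Act’s text and not legal guidance.

```python
# Simplified, illustrative mapping of AI Act-style risk tiers to indicative controls.
# Tier assignments and obligations are assumptions for illustration, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g. social scoring)
    HIGH = "high"                  # strict obligations before and after deployment
    LIMITED = "limited"            # transparency obligations (e.g. disclose chatbots)
    MINIMAL = "minimal"            # no additional obligations

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["risk management system", "data governance", "logging",
                    "human oversight", "conformity assessment"],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

# Hypothetical internal classification of candidate AI use cases
use_cases = {
    "CV screening for hiring": RiskTier.HIGH,
    "customer-support chatbot": RiskTier.LIMITED,
    "internal spam filtering": RiskTier.MINIMAL,
}

for name, tier in use_cases.items():
    controls = OBLIGATIONS[tier] or ["no extra controls"]
    print(f"{name}: {tier.value} risk -> {controls}")
```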

Contrasting Approaches: EU AI Act vs. UK’s Pro-Innovation Regulatory Approach
The AI Act has received its share of criticism, and somewhat different approaches are being considered, notably in the UK. One line of criticism concerns the lack of clarity and the vagueness of concepts (particularly around person-related data and systems). Another concerns the strong focus on the protection of rights and individuals, and highlights the potential negative economic impact for EU organisations looking to leverage AI and for EU tech companies developing AI systems.
A UK government white paper published in March 2023, perhaps tellingly named “A pro-innovation approach to AI regulation”, emphasises a “pragmatic, proportionate regulatory approach … to provide a clear, pro-innovation regulatory environment”. The paper describes an approach that aims to balance the protection of individuals with economic advancement as the UK works to become an “AI superpower”.
Further aspects of the EU AI Act remain under critical discussion. For example, the current text exempts from regulation all open-source AI components that are not part of a medium- or higher-risk system, but it lacks clear definitions and does not consider how such components might proliferate.
Adopting AI Risk Management in Organisations: The Singapore Approach
Regardless of how exactly AI regulations turn out around the world, organisations must start adopting AI risk management practices today. There is an added complexity: while the EU AI Act clearly identifies high-risk AI systems and example use cases, the realisation of regulatory practices must be tackled with an industry-focused approach.
The approach taken by the Monetary Authority of Singapore (MAS) is a primary example of an industry-focused approach to AI risk management. The Veritas Consortium, led by MAS, is a public-private-tech partnership consortium aiming to guide the financial services sector on the responsible use of AI. As there is no AI legislation in Singapore to date, the consortium currently builds on Singapore’s aforementioned “Model Artificial Intelligence Governance Framework”. Additional initiatives are already underway to focus specifically on Generative AI for financial services, and to build a globally aligned framework.
To Comply with Upcoming AI Regulations, Risk Management is the Path Forward
As AI regulation initiatives move from voluntary recommendation to legislation globally, a risk management approach is at the core of all of them. Adding risk management capabilities for AI is the path forward for organisations looking to deploy AI-enhanced solutions and applications. As that task can be daunting, an industry consortium approach can help organisations navigate the challenges and align on implementation and realisation strategies for AI risk management across their industry. Until AI legislation is in place, such industry consortia can chart the way for their industries – organisations should seek to participate now to gain a head start with AI.
