A lot has been written and said about DeepSeek since the release of its R1 model in January. Soon after, Alibaba, Mistral AI, and Ai2 released updated models of their own, and Manus AI is now being touted as the next big thing.
DeepSeek’s lower-cost approach to building its model – reinforcement learning, a mixture-of-experts architecture, multi-token prediction, group relative policy optimisation, and other innovations – has driven down the cost of LLM development. These techniques are already being picked up by other model developers and are likely to become standard practice.
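Of those innovations, group relative policy optimisation (GRPO) is the easiest to unpack: as described in DeepSeek’s research, it drops the separate value (“critic”) model used in PPO-style reinforcement learning and instead scores each sampled response against the other responses generated for the same prompt. The Python sketch below is a minimal illustration of that one idea; the function name and example rewards are hypothetical, and this is not DeepSeek’s implementation.

```python
# Illustrative sketch only: GRPO-style advantages come from normalising rewards
# within a group of responses to the same prompt, so no critic model is needed.
import statistics


def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Return each response's advantage relative to its own sampling group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]


# Hypothetical example: four candidate answers to one prompt, scored by a reward model.
rewards = [0.2, 0.9, 0.4, 0.5]
print(group_relative_advantages(rewards))
# Responses above the group average get positive advantages and are reinforced;
# the rest are discouraged, all without training a separate value network.
```

Because the baseline comes from the sampled group itself, the large critic network that PPO-style training normally maintains can be skipped, which is one of the reasons the training bill shrinks.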
While the cost of AI is a challenge, it’s not the biggest for most organisations. In fact, few GenAI initiatives fail solely due to cost.
The reality is that many hurdles still stand in the way of organisations’ GenAI initiatives, and these need to be addressed before the business case – and the cost – of a GenAI model is even considered.
Real Barriers to GenAI
• Data. The lifeblood of any AI model is the data it’s fed. Clean, well-managed data yields great results, while dirty, incomplete data leads to poor outcomes. Even with RAG, the quality of input data dictates the quality of results. Many organisations I work with are still discovering what data they have – let alone cleaning and classifying it. Only a handful in Australia can confidently say their data is fully managed, governed, and AI-ready. This doesn’t mean GenAI initiatives must wait for perfect data, but it does explain why Agentic AI – focused on single applications and well-defined datasets – is set to boom.
• Infrastructure. Not every business can or will move data to the public cloud – many still require on-premises infrastructure optimised for AI. Some companies are building their own environments, but this often adds significant complexity. To address this, system manufacturers are offering easy-to-manage, pre-built private cloud AI solutions that reduce the effort of in-house AI infrastructure development. However, adoption will take time, and some solutions will need to be scaled down in cost and capacity to be viable for smaller enterprises in Asia Pacific.
• Process Change. AI algorithms are designed to improve business outcomes – whether by increasing profitability, reducing customer churn, streamlining processes, cutting costs, or enhancing insights. However, once an algorithm is implemented, the surrounding business processes usually need to change. These changes can range from minor contact centre adjustments to major warehouse overhauls. Change is challenging – especially when pre-coded ERP or CRM processes need modification, which can take years. Companies like ServiceNow and SS&C Blue Prism are simplifying AI-driven process changes, but these updates still require documentation and training.
• AI Skills. While IT teams are actively upskilling in data, analytics, development, security, and governance, AI opportunities are often identified by business units outside of IT. Organisations must improve their “AI Quotient” – a core understanding of AI’s benefits, opportunities, and best applications. Broad upskilling across leadership and the wider business will accelerate AI adoption and increase the success rate of AI pilots, ensuring the right people guide investments from the start.
• AI Governance. Trust is the key to long-term AI adoption and success. Being able to use AI to do the “right things” for customers, employees, and the organisation will ultimately drive the success of GenAI initiatives. Many AI pilots fail due to user distrust – whether in the quality of the initial data or in AI-driven outcomes they perceive as unethical for certain stakeholders. For example, an AI model that pushes customers toward higher-priced products or services, regardless of their actual needs, may yield short-term financial gains but will ultimately lose to ethical competitors who prioritise customer trust and satisfaction. Some AI providers, like IBM and Microsoft, are prioritising AI ethics by offering tools and platforms that embed ethical principles into AI operations, ensuring long-term success for customers who adopt responsible AI practices.
GenAI and Agentic AI initiatives are far from becoming standard business practice. Given the current economic and political uncertainty, many organisations will limit unbudgeted spending until markets stabilise. However, technology and business leaders should proactively address the key barriers slowing AI adoption within their organisations. As more AI platforms adopt the innovations that helped DeepSeek reduce model development costs, the economic hurdles to GenAI will become easier to overcome.

Welcome to 2025, the Year of the Snake – now enhanced, of course, with AI-powered features! While 2023 and 2024 saw a surprising global consensus on the potential risks of AI and the need for careful management (think AI legislation), the opening weeks of 2025 have thrown a new, and perhaps more pressing, concern into the spotlight: cost.
The recent unveiling of Project Stargate sent ripples throughout the tech world, not just for its ambitious goals, but for its staggering price tag: a cool USD 500B over four years. Let that sink in. That’s roughly the equivalent of Singapore’s entire GDP in 2023. For context, that kind of money could fund the entire Apollo program and build two International Space Stations, with some spending money left over. It’s a figure that underscores the sheer scale of investment required to push the boundaries of AI.
But then, the plot thickened. A relatively unknown Chinese company, DeepSeek, seemingly out of nowhere, launched its R1 large language model (LLM). Not only does R1 appear to be a direct competitor to OpenAI’s latest offerings, but DeepSeek also claims to have achieved this feat at a fraction of the cost, and using fewer (and potentially less powerful) GPUs. This announcement sent shockwaves through the stock market on January 27th, impacting nearly every stock associated with AI chip manufacturing. Nvidia (NVDA), a key player in the AI hardware space, suffered one of the biggest single-day losses in US stock market history, with nearly USD 600B wiped off its market capitalisation. Ironically, that’s more than Project Stargate’s entire budget plus the cost of an ISS.
This dramatic market reaction highlights several critical trends emerging in 2025. The previously observed consensus on AI risks and legislation is already beginning to fracture (witness the recent back-and-forth on AI regulation). Meanwhile, the exorbitant cost of AI development is becoming increasingly apparent. We’re also seeing a renewed West versus (Far) East rivalry playing out in the AI arena, extending beyond just technological competition. And finally, the age-old debate between open-source and proprietary software is back, with some LLMs, like DeepSeek’s R1, leaning more towards open access than others.
For organisations considering investing in AI, and indeed for all of us whose lives are increasingly touched by AI developments, it’s crucial to keep a close watch on these powerful trends. The risks, the investments, and the potential benefits of AI must be carefully scrutinised and potentially reassessed. The recent stock market correction suggests a necessary pushback against the over-confidence and over-spending that have characterised some areas of AI development. As DeepSeek’s R1 has shown, sometimes it doesn’t take much to disrupt the party.
The question now is: how will the landscape shift, and who will emerge as the true leaders in this expensive, yet potentially transformative, race?
