Embedding Sustainability in Corporate Strategy and Operations


In our previous Ecosystm Insights, Ecosystm Principal Advisor Gerald Mackenzie highlighted the key drivers for boosting ESG maturity and the need to transition from standalone ESG projects to integrating ESG goals into organisational strategy and operations.

This shift can be difficult, requiring alignment of ESG objectives with broader strategic aims and effective use of organisational capabilities. The solution involves prioritising essential goals, knitting them into the overall business strategy, quantifying success metrics, and establishing incentives and governance for effective execution.

The benefits are proven and significant: stronger customer and employee value propositions, a better bottom line, an improved risk profile, and more attractive enterprise valuations for investors and lenders.

According to Gerald, here are 5 things to keep in mind when starting on an ESG journey. 

[Slideshow: Embedding Sustainability in Corporate Strategy and Operations – 9 slides]

Download ‘Embedding Sustainability in Corporate Strategy and Operations’ as a PDF

Access More Insights Here

Putting Customers at the Heart of Sustainability: A New Path for ESG Strategy


In an era marked by heightened global awareness of environmental, social, and governance (ESG) issues, organisations find themselves at a crossroads where profitability converges with responsibility. The imperative to take resolute action on ESG fronts is underscored by a compelling array of statistics and evidence that highlight the profound impact of these considerations on long-term success.

A 2020 McKinsey report revealed that executives and investors value companies with robust ESG performance around 10% higher in valuations than laggards. Equally pivotal, workplace diversity is now recognised as a strategic advantage; a study in the Harvard Business Review finds that companies with above-average total diversity had both 19% higher innovation revenues and 9% higher EBIT margins, on average. Against this backdrop, organisations must recognise that embracing ESG principles is not merely an ethical gesture but a strategic imperative that safeguards resilience, reputation, and enduring financial prosperity. 

Data from the ongoing Ecosystm State of ESG Adoption study was used to evaluate the status and maturity of organisations’ ESG strategy and implementation progress. A diverse representation across industries such as Financial Services, Manufacturing, and Retail & eCommerce, as well as roles across the organisation, has given us insight into where organisations stand in the maturity of their ESG strategy and implementation efforts.

A Tailored Approach to Improve ESG Maturity 

Ecosystm assists clients in driving greater impact through their ESG adoption. Our tools evaluate an organisation’s aspirations and roadmaps using a maturity model, along with a series of practical drivers that enhance ESG response maturity. The maturity of an organisation’s approach to ESG tends to progress from a reactive, risk/compliance-based focus, to a customer- or opportunity-driven approach, to a purpose-led approach focused on embedding ESG into the core culture of the organisation. Our advisory, research, and consulting offerings are customised to the transitions organisations are seeking to make in their maturity levels over time.

ESG Maturity Defined

Within the maturity framework outlined above, Ecosystm has identified the key organisational drivers to improve maturity and adoption. The Ecosystm ESG Consulting offerings are configured to support both the development of ESG strategy and the delivery and storytelling around ESG programs, based on the goals of the customer (maturity aspiration) and the gaps they need to close to deliver on that aspiration.

What ESG Maturity Looks Like

Key Findings of the Ecosystm State of ESG Study 

89% of respondents self-reported that their organisation had an ESG strategy; however, a notable 60% also identified that a lack of alignment of sustainability goals to enterprise strategy was a key issue in implementation. This reflects many of the client discussions we’ve had, where customers share that ESG goals that have not been fully tested against other organisational priorities can create tensions and make it difficult to solve trade-offs across the organisation during implementation.  


People & Leadership/Execution & Governance 

Capabilities are still emerging. 40% of respondents said that the lack of a governance framework for ESG was a barrier to adoption, and 56% said that immature metrics and reporting slowed adoption. 64% also cited a lack of specialised resources as a key barrier to ESG adoption.

In our discussions with customers, we understand that there is good support for ESG across organisations, but there needs to be a simple narrative compelling people to act on a few clearly articulated priorities, a clear mandate from senior leadership, and credible resourcing and governance to ensure follow-through.


Data and Technology Enablement 

There is a strong opportunity for improvement. “We can’t manage what we cannot measure” has been the common refrain from the clients we have spoken to and the survey reflected this. Only 47% of respondents say that preparing data, analytics, reporting, and metrics for internal consumption is a priority for their tech teams.   
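The “can’t manage what we cannot measure” gap often starts with basic internal roll-ups. As a minimal, hypothetical sketch of the kind of metric preparation the survey refers to – all site names, activity volumes, and emission factors below are illustrative assumptions, not real figures:

```python
# Minimal sketch: rolling activity data up into a reportable emissions metric.
# Site names, quantities, and emission factors are illustrative assumptions.

# Activity data per site: (activity, quantity), e.g. kWh of electricity used
activity_data = {
    "plant_a": [("electricity_kwh", 120_000), ("diesel_litres", 3_000)],
    "office_b": [("electricity_kwh", 45_000)],
}

# Emission factors in kg CO2e per unit (assumed values for illustration)
emission_factors = {
    "electricity_kwh": 0.4,
    "diesel_litres": 2.7,
}

def site_emissions(records):
    """Sum kg CO2e for one site's activity records."""
    return sum(qty * emission_factors[activity] for activity, qty in records)

totals = {site: site_emissions(records) for site, records in activity_data.items()}
group_total = sum(totals.values())

print(totals)        # per-site kg CO2e
print(group_total)   # group-wide kg CO2e
```

Even a toy roll-up like this forces the governance questions the survey surfaces: who owns the activity data, which emission factors are authoritative, and how often the metric is refreshed.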


ESG is rapidly emerging as a key priority for customers, investors, talent, and other stakeholders who seek a comprehensive and genuine commitment from the organisations they interact with. Successfully determining the right priorities and effectively mobilising your organisation and external collaborators for implementation are pivotal. It’s crucial to acknowledge the intricacy and extent of effort needed for this endeavour. 

With our timely research findings complementing our ESG maturity and implementation frameworks, analyst insights and consulting support, Ecosystm is well-positioned to help you to navigate your journey to ESG maturity. 

Ecosystm Consulting
Sustainability is About MUCH More than Green Credentials


As an industry, the tech sector tends to jump on keywords and terms – and sometimes reshapes their meaning and intention. “Sustainable” is one of those terms. Technology vendors are selling (allegedly!) “sustainable software/hardware/services/solutions” – in fact, the focus on “green” or “zero carbon” or “recycled” or “circular economy” is increasing exponentially at the moment. And that is good news – as I mentioned in my previous post, we need to significantly reduce greenhouse gas emissions if we want a future for our kids. But there is a significant disconnect between the way tech vendors use the word “sustainable” and the way it is used in boardrooms and senior management teams of their clients.

Defining Sustainability

For organisations, Sustainability is a broad business goal – in fact for many, it is the over-arching goal. A sustainable organisation operates in a way that balances economic, social, and environmental (ESG) considerations. Rather than focusing solely on profits, a sustainable organisation aims to meet the needs of the present without compromising the ability of future generations to meet their own needs.

This is what building a “Sustainable Organisation” typically involves:

Economic Sustainability. The organisation must be financially stable and operate in a manner that ensures long-term economic viability. It doesn’t just focus on short-term profits but invests in long-term growth and resilience.

Social Sustainability. This involves the organisation’s responsibility to its employees, stakeholders, and the wider community. A sustainable organisation will promote fair labour practices, invest in employee well-being, foster diversity and inclusion, and engage in ethical decision-making. It often involves community engagement and initiatives that support societal growth and well-being.

Environmental Sustainability. This facet includes the responsible use of natural resources and minimising negative impacts on the environment. A sustainable organisation seeks to reduce its carbon footprint, minimise waste, enhance energy efficiency, and often supports or initiates activities that promote environmental conservation.

Governance and Ethical Considerations. Sustainable organisations tend to have transparent and responsible governance. They follow ethical business practices, comply with laws and regulations, and foster a culture of integrity and accountability.

Security and Resilience. Sustainable organisations can thwart bad actors – and, when they are breached, recover quickly and safely, continuing to operate with minimal impact.

Long-Term Focus. Sustainability often requires a long-term perspective. By looking beyond immediate gains and considering the long-term impact of decisions, a sustainable organisation can better align its strategies with broader societal goals.

Stakeholder Engagement. Understanding and addressing the needs and concerns of different stakeholders (including employees, customers, suppliers, communities, and shareholders) is key to sustainability. This includes open communication and collaboration with these groups to foster relationships based on trust and mutual benefit.

Adaptation and Innovation. The organisation is not static and recognises the need for continual improvement and adaptation. This might include innovation in products, services, or processes to meet evolving sustainability standards and societal expectations.

Alignment with the United Nations’ Sustainable Development Goals (UNSDGs). Many sustainable organisations align their strategies and operations with the UNSDGs which provide a global framework for addressing sustainability challenges.

Organisations Appreciate Precise Messaging

A sustainable organisation is one that integrates economic, social, and environmental considerations into all aspects of its operations. It goes beyond mere compliance with laws to actively pursue positive impacts on people and the planet, maintaining a balance that ensures long-term success and resilience.

These factors are all top of mind when business leaders, boards, and government agencies use the word “sustainable”. Helping organisations meet their emission reduction targets is a good starting point – but it is a long way from all that businesses need to do to become sustainable organisations.

Tech providers need to reconsider their use of the term “sustainable” – unless their solution or service helps organisations meet all of the features outlined above. Most customers favour specific language: telling them how the solution will help them reduce greenhouse gas emissions, meet compliance requirements for CO2 and/or waste reduction, and save money on electricity and/or management costs is likely to get the sale over the line faster than broad “sustainability” messaging.

Access More Insights Here
Your Organisation Needs an AI Ethics Policy TODAY!


It is not hyperbole to state that AI is on the cusp of having significant implications on society, business, economies, governments, individuals, cultures, politics, the arts, manufacturing, customer experience… I think you get the idea! We cannot overstate the impact that AI will have on society. In times gone by, businesses tested ideas, new products, or services with small customer segments before they went live. But with AI we are all part of this experiment on the impacts of AI on society – its benefits, use cases, weaknesses, and threats.

What seemed preposterous just six months ago is not only possible but EASY! Do you want a virtual version of yourself, a friend, your CEO, or your deceased family member? Sure – just feed the data. Will succession planning be more about recording all conversations and interactions with an executive so their avatar can make the decisions when they leave? Why not? How about you turn the thousands of hours of recorded customer conversations with your contact centre team into a virtual contact centre team? Your head of product can present in multiple countries in multiple languages, tailored to the customer segments, industries, geographies, or business needs at the same moment.  

AI has the potential to create digital clones of your employees, it can spread fake news as easily as real news, it can be used for deception as easily as for benefit. Is your organisation prepared for the social, personal, cultural, and emotional impacts of AI? Do you know how AI will evolve in your organisation?  

When we focus on the future of AI, we often interview AI leaders, business leaders, futurists, and analysts. I haven’t seen enough focus on psychologists, sociologists, historians, academics, counsellors, or even regulators! The Internet and social media changed the world more than we ever imagined – at this stage, it looks like these two were just a rehearsal for the real show – Artificial Intelligence.

Lack of Government or Industry Regulation Means You Need to Self-Regulate 

These rapid developments – and the notable silence from governments, lawmakers, and regulators – make the requirement for an AI Ethics Policy for your organisation urgent! Even if you have one, it probably needs updating, as the scenarios that AI can operate within are growing and changing literally every day.  

  • For example, your customer service team might want to create a virtual customer service agent from a real person. What is the policy on this? How will it impact the person? 
  • Your marketing team might be using ChatGPT or Bard for content creation. Do you have a policy specifically for the creation and use of content using assets your business does not own?  
  • What data is acceptable to be ingested by a public Large Language Model (LLM)? Are you governing data at creation and publishing to ensure these policies are met?  
  • With the impending public launch of Microsoft’s Co-Pilot AI service, what data can be ingested by Co-Pilot? How are you governing the distribution of the insights that come out of that capability? 

If policies are not put in place, data tagged, and staff trained before using a tool such as Co-Pilot, your business is likely to break some privacy or employment laws – on the very first day! 
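One concrete element of that preparation is machine-readable data tagging, so an ingestion pipeline can refuse material that has not been cleared for an AI tool. A hypothetical sketch – the tag names and allow-list are illustrative policy choices, not a real Co-Pilot API:

```python
# Hypothetical sketch: gate documents by sensitivity tag before AI ingestion.
# Tag names and the allow-list are illustrative policy choices, not a real API.

ALLOWED_FOR_AI = {"public", "internal"}   # tags cleared for AI tools

documents = [
    {"name": "press_release.docx", "tag": "public"},
    {"name": "salary_review.xlsx", "tag": "personal_data"},
    {"name": "team_wiki.md", "tag": "internal"},
]

def ingestible(docs):
    """Return the names of documents whose tag is cleared for AI tools."""
    return [d["name"] for d in docs if d["tag"] in ALLOWED_FOR_AI]

print(ingestible(documents))  # ['press_release.docx', 'team_wiki.md']
```

The design point is that the gate runs before ingestion: untagged or sensitive material never reaches the tool, rather than being filtered after the fact.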

What do the LLMs Say About AI Ethics Policies? 

So where do you go when looking for an AI Ethics policy? ChatGPT and Bard of course! I asked the two for a modern AI Ethics policy. 

You can read what they generated in the graphic below.

[Slideshow: AI Ethics Policy drafts generated by ChatGPT and Bard – 3 slides]

I personally prefer the ChatGPT-4 version as it is more prescriptive. At the same time, I would argue that MOST of the AI tools your business has access to today don’t meet all of these principles. And while they are tools, and ethics should dictate the way tools are used, with AI you cannot always separate the process and outcome from the tool.  

For example, a tool that is inherently designed to learn an employee’s character, style, or mannerisms cannot be unbiased if it is based on a biased opinion (and humans have biases!).  

LLMs take data, content, and insights created by others, and give it to their customers to reuse. Are you happy with your website being used as a tool to train a startup on the opportunities in the markets and customers you serve?  

By making content public, you acknowledge the risk of others using it. But at least they visited your website or app to consume it. Not anymore… 

A Policy is Useless if it Sits on a Shelf 

Your AI ethics policy needs to be more than a published document. It should be the beginning of a conversation across the entire organisation about the use of AI. Your employees need to be trained in the policy. It needs to be part of the culture of the business – particularly as low and no-code capabilities push these AI tools, practices, and capabilities into the hands of many of your employees.  

Nearly every business leader I interview mentions that their organisation is an “intelligent, data-led, business.” What is the role of AI in driving this intelligent business? If being data-driven and analytical is in the DNA of your organisation, soon AI will also be at the heart of your business. You might think you can delay your investments to get it right – but your competitors may be ahead of you.  

So, as you jump head-first into the AI pool, start to create, improve and/or socialise your AI Ethics Policy. It should guide your investments, protect your brand, empower your employees, and keep your business resilient and compliant with legacy and new legislation and regulations. 

AI Research and Reports
5 Insights to Help Organisations Build Scalable AI – An ASEAN View


Data & AI initiatives are firmly at the core of any organisation’s tech-led transformation efforts. Businesses today realise the value of real-time data insights to deliver the agility that is required to succeed in today’s competitive, and often volatile, market.

But organisations continue to struggle with their data & AI initiatives for a variety of reasons. Organisations in ASEAN report some common challenges in implementing successful data & AI initiatives.

Here are 5 insights to build scalable AI.

  1. Data Access a Key Stumbling Block. Many organisations find that they no longer need to rely on centralised data repositories.
  2. Organisations Need Data Creativity. A true data-first organisation derives value from their data & AI investments across the entire organisation, cross-leveraging data.
  3. Governance Not Built into Organisational Psyche. A data-first organisation needs all employees to have a data-driven mindset. This can only be driven by clear guidelines that are laid out early on and adhered to by data generators, managers, and consumers.
  4. Lack of End-to-End Data Lifecycle Management. It is critical to have observability, intelligence, and automation built into the entire data lifecycle.
  5. Democratisation of Data & AI Should Be the Goal. The true value of data & AI solutions will be fully realised when the people who benefit from the solutions are the ones managing the solutions and running the queries that will help them deliver better value to the business.
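Several of the points above – particularly governance guidelines for data generators (point 3) and end-to-end lifecycle management (point 4) – can be enforced mechanically at the point of data creation. A minimal, hypothetical sketch; the required fields and rules are illustrative, not drawn from any specific framework:

```python
# Hypothetical sketch: enforce basic governance rules when data is created.
# Required fields and validation rules are illustrative assumptions.

REQUIRED_FIELDS = {"owner", "source_system", "created_at"}

def validate_record(record):
    """Return a list of governance violations for one record (empty = pass)."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if "owner" in record and not record["owner"]:
        problems.append("owner must not be empty")
    return problems

good = {"owner": "finance-team", "source_system": "erp", "created_at": "2023-08-01"}
bad = {"owner": ""}

print(validate_record(good))  # []
print(validate_record(bad))
```

Running checks like this at creation time, rather than during downstream consumption, is what keeps guidelines from becoming shelfware for data generators.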

Read below to find out more.

[Slideshow: 5 Insights to Help Organisations Build Scalable AI – An ASEAN View – 11 slides]

Download 5 Insights to Help Organisations Build Scalable AI – An ASEAN View as a PDF

Artificial Intelligence Insights
Global Initiatives to Support AI Governance and Ethics

Any new technology that changes our businesses or society for the better often has a potential dark side that is viewed with suspicion and mistrust. The media, especially on the Internet, is eager to prey on our fears and invoke a dystopian future where technology has gotten out of control or is used for nefarious purposes. For examples of how technology can be used in unexpected and unethical ways, one can look at science fiction movies, AI-vs-AI chatbot conversations, autonomous killer robots, facial recognition for mass surveillance, or the writings of sci-fi authors such as Isaac Asimov and Iain M. Banks that portray a grim use of technology.

This situation is only exacerbated by social media and the prevalence of “fake news” that can quickly propagate incorrect, unscientific or unsubstantiated rumours.

As AI evolves, it is raising new ethical and legal questions. AI works by analysing the data that is fed into it and drawing conclusions based on what it has learned or been trained to do. Though it has many benefits, it may pose threats to people and data privacy, and raise concerns about the outcomes of its decisions. To curb the chances of such outcomes, organisations and policymakers are crafting recommendations to ensure the responsible and ethical use of AI. Governments are taking it a step further, developing principles and drafting laws and regulations. Tech developers are also trying to self-regulate their AI capabilities.

Amit Gupta, CEO of Ecosystm, interviewed Matt Pollins, Partner at law firm CMS, where they discussed the implementation of regulations for AI.

To maximise the benefits of science and technology for society, in May 2019 the World Economic Forum (WEF) – an independent international organisation for public-private cooperation – announced the formation of six separate Fourth Industrial Revolution councils in San Francisco.

The goal of the councils is to work at a global level on new technology policy guidance, best policy practices, and strategic guidelines, and to help regulate technology across six domains – AI, precision medicine, autonomous driving, mobility, IoT, and blockchain. Over 200 industry leaders from organisations such as Microsoft, Qualcomm, Uber, Dana-Farber, the European Union, the Chinese Academy of Medical Sciences, and the World Bank are participating, to address concerns around the absence of clear, unified guidelines.

Similarly, the Organization for Economic Co-operation and Development (OECD) created a global reference point for AI adoption: principles and recommendations for governments across the world. The OECD AI principles are called “values-based principles” and are clearly envisioned to endorse AI “that is innovative and trustworthy and that respects human rights and democratic values.”

Likewise, in April, the European Union published a set of guidelines on how companies and governments should develop ethical applications of AI to address the issues that might affect society as we integrate AI into sectors like healthcare, education, and consumer technology.

The Personal Data Protection Commission (PDPC) in Singapore presented the first edition of a Proposed Model AI Governance Framework (Model Framework) – an accountability-based framework to help chart the language and frame the discussions around harnessing AI in a responsible way. We can see several organisations coming forward on AI governance. As examples, NEC released the “NEC Group AI and Human Rights Principles“, Google has created AI rules and objectives, and the Partnership on AI was established to study and plan best practices for AI technologies.

 

What could be the real-world challenges around the ethical use of AI?

Progress in the adoption of AI has shown some incredible cases benefitting various industries – commerce, transportation, healthcare, agriculture, education – and offering efficiency and savings. However, AI developments are also anticipated to disrupt several legal frameworks owing to the concerns of AI implementation in high-risk areas. The challenge today is that several AI applications have been used by consumers or organisations only for them to later realise that the project was not ethically fit. An example is the development of a fully autonomous AI-controlled weapon system which is drawing criticism from various nations across the globe and the UN itself.

“Before an organisation embarks on a project, it is vital for regulation to be in place right from the beginning. This enables the vendor and the organisation to reach a common goal and understanding of what is ethical and right. With such practices in place, bias, breaches of confidentiality, and ethical lapses can be avoided,” says Ecosystm Analyst, Audrey William. “Apart from working with the AI vendor and a service provider or systems integrator, it is highly recommended that the organisation consult a specialist – such as the Foundation for Responsible Robotics, Data & Society, or the AI Ethics Lab – to help examine the parameters of ethics and bias before project deployment.”

Another challenge arises from a data protection perspective because AI models are fed with data sets for their training and learning. This data is often obtained from usage history and data tracking that may compromise an individual’s identity. The use of this information may lead to a breach of user rights and privacy which may leave an organisation facing consequences around legal prosecutions, governance, and ethics.

Another area that is often overlooked is racial and gender bias. Phone manufacturers have been criticised in the past on matters of racial and gender bias, when facial identification produced the fewest errors for light-skinned males. This opened conversations about how the technology works on people of different races and genders.

San Francisco recently banned the use of facial recognition by the police and other agencies, proposing that the technology may pose a serious threat to civil liberties. “Implementing AI technologies such as facial recognition solution means organisations have to ensure that there are no racial bias and discrimination issues. Any inaccuracy or glitches in the data may tend to make the machines untrustworthy” says William.

Given what we know about existing AI systems, we should be very concerned that the possibilities of technology breaching humanitarian laws, are more likely than not.

Could strong governance restrict the development and implementation of AI?

The disruptive potential of AI poses looming risks around ethics, transparency, and security – hence the need for greater governance. AI will be used safely only once governance and policies mandating its responsible use have been framed.

William thinks that, “AI deployments have positive implications for creating better applications in health, autonomous driving, smart cities, and eventually a better society. Worrying too much about regulations will impede the development of AI. A fine line has to be drawn between the development of AI and ensuring that the development does not cross the boundaries of ethics, transparency, and fairness.”

 

While AI as a technology has a way to go before it matures, at the moment it is the responsibility of both organisations and governments to strike a balance between technology development and use, and regulations and frameworks in the best interest of citizens and civil liberties.
