In my last Ecosystm Insight, I spoke about the importance of data architecture in defining the data flow, data management systems required, the data processing operations, and AI applications. Data Mesh and Data Fabric are both modern architectural approaches designed to address the complexities of managing and accessing data across a large organisation. While they share some commonalities, such as improving data accessibility and governance, they differ significantly in their methodologies and focal points.
Data Mesh
- Philosophy and Focus. Data Mesh is primarily focused on the organisational and architectural approach to decentralise data ownership and governance. It treats data as a product, emphasising the importance of domain-oriented decentralised data ownership and architecture. The core principles of Data Mesh include domain-oriented decentralised data ownership, data as a product, self-serve data infrastructure as a platform, and federated computational governance.
- Implementation. In a Data Mesh, data is managed and owned by domain-specific teams who are responsible for their data products from end to end. This includes ensuring data quality, accessibility, and security. The aim is to enable these teams to provide and consume data as products, improving agility and innovation.
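To make the idea of “data as a product” concrete, here is a minimal, illustrative Python sketch of the kind of contract a domain team might publish for a data product it owns end to end. The `DataProduct` structure, its fields, and the sales-domain example are hypothetical, not drawn from any specific Data Mesh tool.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A hypothetical contract a domain team publishes for its data product."""
    name: str                 # e.g. "orders.daily_summary"
    owner_domain: str         # the team accountable for the product end to end
    output_port: str          # where consumers read it (a URI or table name)
    schema: dict              # column name -> type, published up front
    freshness_sla_hours: int  # the quality promise the owning team commits to
    tags: list = field(default_factory=list)

# The sales domain registers and serves the product it owns; any other
# domain can now discover and consume it like a product.
orders_summary = DataProduct(
    name="orders.daily_summary",
    owner_domain="sales",
    output_port="s3://sales-domain/orders/daily_summary/",
    schema={"order_date": "date", "region": "string", "revenue": "decimal"},
    freshness_sla_hours=24,
    tags=["finance-approved"],
)
print(orders_summary.owner_domain, "->", orders_summary.output_port)
```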
- Use Cases. Data Mesh is particularly effective in large, complex organisations with many independent teams and departments. It’s beneficial when there’s a need for agility and rapid innovation within specific domains or when the centralisation of data management has become a bottleneck.
Data Fabric
- Philosophy and Focus. Data Fabric focuses on creating a unified, integrated layer of data and connectivity across an organisation. It leverages metadata, advanced analytics, and AI to improve data discovery, governance, and integration. Data Fabric aims to provide a comprehensive and coherent data environment that supports a wide range of data management tasks across various platforms and locations.
- Implementation. Data Fabric typically uses advanced tools to automate data discovery, governance, and integration tasks. It creates a seamless environment where data can be easily accessed and shared, regardless of where it resides or what format it is in. This approach relies heavily on metadata to enable intelligent and automated data management practices.
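To illustrate the metadata-driven idea in miniature, the hypothetical Python sketch below shows catalogue-based discovery: a consumer finds datasets by tag without knowing where they live or what format they are in. Real Data Fabric platforms populate and exploit such catalogues automatically, with far richer metadata than this toy structure.

```python
# A toy metadata catalogue; in a real Data Fabric, tooling builds this
# automatically by scanning databases, lakes, and SaaS sources.
CATALOG = [
    {"dataset": "crm.customers", "location": "postgres://crm/customers",
     "format": "table", "tags": ["pii", "customer"]},
    {"dataset": "web.clickstream", "location": "s3://logs/clicks/",
     "format": "parquet", "tags": ["behavioural", "customer"]},
]

def discover(tag: str) -> list:
    """Return every catalogued dataset carrying a tag, wherever it resides."""
    return [entry for entry in CATALOG if tag in entry["tags"]]

# A consumer asks for all customer data, with no knowledge of location or format.
for entry in discover("customer"):
    print(entry["dataset"], "->", entry["location"])
```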
- Use Cases. Data Fabric is ideal for organisations that need to manage large volumes of data across multiple systems and platforms. It is particularly useful for enhancing data accessibility, reducing integration complexity, and supporting data governance at scale. Data Fabric can benefit environments where there’s a need for real-time data access and analysis across diverse data sources.
Both approaches aim to overcome the challenges of data silos and improve data accessibility, but they do so through different methodologies and with different priorities.
Data Mesh and Data Fabric Vendors
The concepts of Data Mesh and Data Fabric are supported by various vendors, each offering tools and platforms designed to facilitate the implementation of these architectures. Here’s an overview of some key players in both spaces:
Data Mesh Vendors
Data Mesh is more of a conceptual approach than a product-specific solution, focusing on organisational structure and data decentralisation. However, several vendors offer tools and platforms that support the principles of Data Mesh, such as domain-driven design, product thinking for data, and self-serve data infrastructure:
- Thoughtworks. As the originator of the Data Mesh concept, Thoughtworks provides consultancy and implementation services to help organisations adopt Data Mesh principles.
- Starburst. Starburst offers a distributed SQL query engine (Starburst Galaxy) that allows querying data across various sources, aligning with the Data Mesh principle of domain-oriented, decentralised data ownership.
- Databricks. Databricks provides a unified analytics platform that supports collaborative data science and analytics, which can be leveraged to build domain-oriented data products in a Data Mesh architecture.
- Snowflake. With its Data Cloud, Snowflake facilitates data sharing and collaboration across organisational boundaries, supporting the Data Mesh approach to data product thinking.
- Collibra. Collibra provides a data intelligence cloud that offers data governance, cataloguing, and privacy management tools essential for the Data Mesh approach. By enabling better data discovery, quality, and policy management, Collibra supports the governance aspect of Data Mesh.
Data Fabric Vendors
Data Fabric solutions often come as more integrated products or platforms, focusing on data integration, management, and governance across a diverse set of systems and environments:
- Informatica. The Informatica Intelligent Data Management Cloud includes features for data integration, quality, governance, and metadata management that are core to a Data Fabric strategy.
- Talend. Talend provides data integration and integrity solutions with strong capabilities in real-time data collection and governance, supporting the automated and comprehensive approach of Data Fabric.
- IBM. IBM’s watsonx.data is a fully integrated data and AI platform that automates the lifecycle of data across multiple clouds and systems, embodying the Data Fabric approach to making data easily accessible and governed.
- TIBCO. TIBCO offers a range of products, including TIBCO Data Virtualization and TIBCO EBX, that support the creation of a Data Fabric by enabling comprehensive data management, integration, and governance.
- NetApp. NetApp has a suite of cloud data services that provide a simple and consistent way to integrate and deliver data across cloud and on-premises environments. NetApp’s Data Fabric is designed to enhance data control, protection, and freedom.
The choice of vendor or tool for either Data Mesh or Data Fabric should be guided by the specific needs, existing technology stack, and strategic goals of the organisation. Many vendors provide a range of capabilities that can support different aspects of both architectures, and the best solution often involves a combination of tools and platforms. Additionally, the technology landscape is rapidly evolving, so it’s wise to stay updated on the latest offerings and how they align with the organisation’s data strategy.
The data architecture outlines how data is managed in an organisation and is crucial for defining the data flow, data management systems required, the data processing operations, and AI applications. Data architects and engineers define data models and structures based on these requirements, supporting initiatives like data science. Before we delve into the right data architecture for your AI journey, let’s talk about the data management options. Technology leaders have the challenge of deciding on a data management system that takes into consideration factors such as current and future data needs, available skills, costs, and scalability. As data strategies become vital to business success, selecting the right data management system is crucial for enabling data-driven decisions and innovation.
Data Warehouse
A Data Warehouse is a centralised repository that stores vast amounts of data from diverse sources within an organisation. Its main function is to support reporting and data analysis, aiding businesses in making informed decisions. This concept encompasses both data storage and the consolidation and management of data from various sources to offer valuable business insights. Data Warehousing evolves alongside technological advancements, with trends like cloud-based solutions, real-time capabilities, and the integration of AI and machine learning for predictive analytics shaping its future.
Core Characteristics
- Integrated. It integrates data from multiple sources, ensuring consistent definitions and formats; this often includes data cleansing and transformation to make the data suitable for analysis.
- Subject-Oriented. Unlike operational databases, which prioritise transaction processing, it is structured around key business subjects like customers, products, and sales. This organisation facilitates complex queries and analysis.
- Non-Volatile. Data in a Data Warehouse is stable; once entered, it is not deleted. Historical data is retained for analysis, allowing for trend identification over time.
- Time-Variant. It retains historical data across various time periods; each entry is time-stamped, enabling change tracking and trend analysis over time.
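A small sketch makes the non-volatile and time-variant properties concrete. The illustrative Python example below uses an in-memory SQLite database as a stand-in warehouse (table and column names are hypothetical): snapshots are appended with timestamps rather than overwritten, so history remains available for trend analysis.

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")  # stand-in for the warehouse
conn.execute("""
    CREATE TABLE sales_fact (
        snapshot_date TEXT,   -- every row is time-stamped (time-variant)
        product       TEXT,
        units_sold    INTEGER
    )
""")

# Non-volatile: new snapshots are appended; history is never overwritten.
conn.executemany(
    "INSERT INTO sales_fact VALUES (?, ?, ?)",
    [(date(2024, 1, 31).isoformat(), "widget", 120),
     (date(2024, 2, 29).isoformat(), "widget", 150)],
)

# Retained history supports trend analysis over time.
for row in conn.execute(
    "SELECT snapshot_date, units_sold FROM sales_fact ORDER BY snapshot_date"
):
    print(row)
```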
Benefits
- Better Decision Making. Data Warehouses consolidate data from multiple sources, offering a comprehensive business view for improved decision-making.
- Enhanced Data Quality. The ETL (extract, transform, load) process ensures data is cleansed and consistent before it enters the warehouse, which is crucial for accurate analysis (see the sketch after this list).
- Historical Analysis. Storing historical data enables trend analysis over time, informing future strategies.
- Improved Efficiency. Data Warehouses enable swift access and analysis of relevant data, enhancing efficiency and productivity.
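As promised above, here is a minimal ETL sketch in Python; the feed, field names, and cleansing rules are hypothetical stand-ins for a real pipeline, but they show how transformation enforces clean, consistent data before loading.

```python
import csv
import io

# Extract: a raw feed from a source system (an in-memory CSV for illustration).
raw_feed = io.StringIO("customer, revenue\nAlice , 1200\nbob, 950\n")

def transform(row: dict) -> dict:
    """Cleanse and standardise: trim whitespace, fix casing, enforce types."""
    return {
        "customer": row["customer"].strip().title(),
        "revenue": float(row["revenue"]),
    }

# Load: cleaned, consistent rows land in the warehouse staging area.
warehouse_staging = [
    transform(row) for row in csv.DictReader(raw_feed, skipinitialspace=True)
]
print(warehouse_staging)  # [{'customer': 'Alice', 'revenue': 1200.0}, ...]
```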
Challenges
- Complexity. Designing and implementing a Data Warehouse can be complex and time-consuming.
- Cost. The cost of hardware, software, and specialised personnel can be significant.
- Data Security. Storing large amounts of sensitive data in one place poses security risks, requiring robust security measures.
Data Lake
A Data Lake is a centralised repository for storing, processing, and securing large volumes of structured and unstructured data. Unlike traditional Data Warehouses, which are structured and optimised for analytics with predefined schemas, Data Lakes retain raw data in its native format. This flexibility in data usage and analysis makes them crucial in modern data architecture, particularly in the age of big data and cloud.
Core Characteristics
- Schema-on-Read Approach. The data structure is not defined until the data is read for analysis, offering more flexible data storage than the schema-on-write approach of Data Warehouses (illustrated in the sketch after this list).
- Support for Multiple Data Types. Data Lakes accommodate diverse data types, including structured (like databases), semi-structured (like JSON, XML files), unstructured (like text and multimedia files), and binary data.
- Scalability. Designed to handle vast amounts of data, Data Lakes can easily scale up or down based on storage needs and computational demands, making them ideal for big data applications.
- Versatility. Data Lakes support various data operations, including batch processing, real-time analytics, machine learning, and data visualisation, providing a versatile platform for data science and analytics.
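The schema-on-read approach noted at the top of this list is easy to demonstrate. In this hypothetical Python sketch, raw JSON events sit in the lake exactly as they arrived, and two different analyses impose two different schemas only at read time.

```python
import json

# Raw events land in the lake as-is; no schema is enforced at write time.
lake = [
    '{"user": "a1", "action": "click", "ts": "2024-03-01T10:00:00"}',
    '{"user": "b2", "action": "purchase", "amount": 49.99}',
]

def read_with_schema(raw: str, fields: list) -> dict:
    """Schema-on-read: structure is imposed only when the data is queried."""
    record = json.loads(raw)
    return {f: record.get(f) for f in fields}

# Two analyses apply two different schemas to the same raw data.
clicks = [read_with_schema(r, ["user", "action"]) for r in lake]
revenue = [read_with_schema(r, ["user", "amount"]) for r in lake]
print(clicks)
print(revenue)
```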
Benefits
- Flexibility. Data Lakes offer diverse storage formats and a schema-on-read approach for flexible analysis.
- Cost-Effectiveness. Cloud-hosted Data Lakes are cost-effective with scalable storage solutions.
- Advanced Analytics Capabilities. The raw, granular data in Data Lakes is ideal for advanced analytics, machine learning, and AI applications, providing deeper insights than traditional Data Warehouses.
Challenges
- Complexity and Management. Without proper management, a Data Lake can quickly become a “Data Swamp” where data is disorganised and unusable.
- Data Quality and Governance. Ensuring the quality and governance of data within a Data Lake can be challenging, requiring robust processes and tools.
- Security. Protecting sensitive data within a Data Lake is crucial, requiring comprehensive security measures.
Data Lakehouse
A Data Lakehouse is an innovative data management system that merges the strengths of Data Lakes and Data Warehouses. This hybrid approach strives to offer the adaptability and expansiveness of a Data Lake for housing extensive volumes of raw, unstructured data, while also providing the structured, refined data functionalities typical of a Data Warehouse. By bridging the gap between these two traditional data storage paradigms, Lakehouses enable more efficient data analytics, machine learning, and business intelligence operations across diverse data types and use cases.
Core Characteristics
- Unified Data Management. A Lakehouse streamlines data governance and security by managing both structured and unstructured data on one platform, reducing data silos across the organisation.
- Schema Flexibility. It supports both schema-on-read and schema-on-write, allowing data to be stored and analysed flexibly: data can be ingested in raw form and structured later, or structured at ingestion (see the sketch after this list).
- Scalability and Performance. Lakehouses scale storage and compute resources independently, handling large data volumes and complex analytics without performance compromise.
- Advanced Analytics and Machine Learning Integration. By providing direct access to both raw and processed data on a unified platform, Lakehouses facilitate advanced analytics, real-time analytics, and machine learning.
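To illustrate the schema flexibility described in this list, the hypothetical Python sketch below uses an in-memory SQLite database as a stand-in for the single Lakehouse platform: one curated, schema-on-write table serves BI, while raw events are ingested as-is and structured at read time.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the unified Lakehouse platform

# Schema-on-write: a curated, structured table with a declared schema.
conn.execute("CREATE TABLE curated_sales (region TEXT, revenue REAL)")
conn.execute("INSERT INTO curated_sales VALUES ('APAC', 1200.0)")

# Schema-on-read: raw events are ingested as-is and structured later.
conn.execute("CREATE TABLE raw_events (payload TEXT)")
conn.execute(
    "INSERT INTO raw_events VALUES (?)",
    (json.dumps({"region": "APAC", "clicks": 42}),),
)

# The same platform serves BI on curated data ...
print(conn.execute("SELECT * FROM curated_sales").fetchall())

# ... and exploratory analysis that imposes structure at read time.
for (payload,) in conn.execute("SELECT payload FROM raw_events"):
    event = json.loads(payload)
    print(event["region"], event["clicks"])
```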
Benefits
- Versatility in Data Analysis. Lakehouses support diverse data analytics, spanning from traditional BI to advanced machine learning, all within one platform.
- Cost-Effective Scalability. The ability to scale storage and compute independently, often in a cloud environment, makes Lakehouses cost-effective for growing data needs.
- Improved Data Governance. Centralising data management enhances governance, security, and quality across all types of data.
Challenges
- Complexity in Implementation. Designing and implementing a Lakehouse architecture can be complex, requiring expertise in both Data Lakes and Data Warehouses.
- Data Consistency and Quality. Though crucial for reliable analytics, ensuring data consistency and quality across diverse data types and sources can be challenging.
- Governance and Security. Comprehensive data governance and security strategies are required to protect sensitive information and comply with regulations.
The choice between Data Warehouse, Data Lake, or Lakehouse systems is pivotal for businesses in harnessing the power of their data. Each option offers distinct advantages and challenges, requiring careful consideration of organisational needs and goals. By embracing the right data management system, organisations can pave the way for informed decision-making, operational efficiency, and innovation in the digital age.
2024 and 2025 are looking good for IT services providers – particularly in Asia Pacific. All types of providers – from IT consultants to managed services VARs and systems integrators – will benefit from a few converging events.
However, amidst increasing demand, service providers are also challenged with cost control measures imposed in organisations – and this is heightened by the challenge of finding and retaining their best people as competition for skills intensifies. Providers that service mid-market clients might find it hard to compete and grow without significant process automation to compensate for the higher employee costs.
Why Organisations are Opting for IT Services
- Organisations are seeking further cost reductions. Managed services providers will see more opportunities to take cost and complexity out of organisations’ IT functions. The focus in 2024 will be less on “managing” services and more on “transforming” them using ML, AI, and automation to reduce cost and improve value.
- Big app upgrades are back on the agenda. SAP is going above and beyond to incentivise its customers and partners to migrate their on-premises and hyperscaler-hosted instances to true cloud ERP. Initiatives such as RISE with SAP have been further expanded and improved to accelerate the transition. Salesforce customers are also looking to streamline their deployments while taking advantage of new AI and data capabilities. But many of these projects will still be complex and time-consuming.
- Cloud deployments are getting more complex. For many organisations, the simple cloud migrations are done. This is the stage of replatforming, retiring, and refactoring applications to take advantage of public and hybrid cloud capabilities. These are not simple lift and shift – or switch to SaaS – engagements.
- AI will drive a greater need for process improvement and transformation, along with associated change management and training programs. While it is still early days for GenAI, before the end of 2024 many organisations will move beyond experimentation to department- or enterprise-wide GenAI initiatives.
- Increasing cybersecurity and data governance demands will prolong the security skill shortage. More organisations will turn to managed security services providers and cybersecurity consultants to help them develop their strategy and response to the rising threat levels.
Choosing the Right Cost Model for IT Services
Buyers of IT services must implement strict cost-control measures and consider various approaches to align costs with business and customer outcomes, including different cost models:
Fixed-Price Contracts. These contracts set a firm price for the entire project or specific deliverables. Ideal when project scope is clear, they offer budget certainty upfront but demand detailed specifications, potentially leading to higher initial quotes due to the provider assuming more risk.
Time and Materials (T&M) Contracts with Caps. Payment is based on actual time and materials used, with negotiated caps to prevent budget overruns. Combining flexibility with cost predictability, this model offers some control over total expenses (a worked example follows these models).
Performance-Based Pricing. Fees are tied to service provider performance, incentivising achievement of specific KPIs or milestones. This aligns provider interests with client goals, potentially resulting in cost savings and improved service quality.
Retainer Agreements with Scope Limits. Recurring fees are paid for ongoing services, with defined limits on work scope or hours within a given period. This arrangement ensures resource availability while containing expenses, particularly suitable for ongoing support services.
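As a worked example of the T&M-with-cap arithmetic mentioned above (the figures are illustrative only), a short Python function captures the model: the client pays for actual time and materials, but never more than the negotiated cap.

```python
def tm_invoice(hours: float, rate: float, materials: float, cap: float) -> float:
    """Time-and-materials billing with a negotiated cap on total spend."""
    return min(hours * rate + materials, cap)

# 120 hours at $150/hr plus $4,000 in materials would cost $22,000,
# but the negotiated cap limits the invoice to $20,000.
print(tm_invoice(hours=120, rate=150.0, materials=4000.0, cap=20000.0))  # 20000.0
```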
Other Strategies for Cost Efficiency and Effective Management
Technology leaders should also consider implementing some of the following strategies:
Phased Payments. Structuring payments in phases, tied to the completion of project milestones, helps manage cash flow and provides a financial incentive for the service provider to meet deadlines and deliverables. It also allows for regular financial reviews and adjustments if the project scope changes.
Cost Transparency and Itemisation. Detailed billing that itemises the costs of labour, materials, and other expenses provides transparency to verify charges, track spending against the budget, and identify areas for potential savings.
Volume Discounts and Negotiated Rates. Negotiating volume discounts or preferential rates for long-term or large-scale engagements encourages providers to offer reduced rates in exchange for a commitment to a certain volume of work or an extended contract duration.
Utilisation of Shared Services or Cloud Solutions. Opting for shared or cloud-based solutions where feasible offers economies of scale and reduces the need for expensive, dedicated infrastructure and resources.
Regular Review and Adjustment. Conducting regular reviews of services and expenses with the provider to ensure alignment with the budget and objectives prepares organisations to adjust the scope, renegotiate terms, or implement cost-saving measures as needed.
Exit Strategy. Planning an exit strategy that includes provisions for contract termination and transition services protects an organisation in case the partnership needs to be dissolved.
Conclusion
Many businesses swing between insourcing and outsourcing technology capabilities – with the recent trend moving towards insourcing development and outsourcing infrastructure to the public cloud. But 2024 will see demand for all types of IT services across nearly every geography and industry. Tech services providers can bring significant value to your business – but improved management, monitoring, and governance will ensure that this value is delivered at a fair cost.
Zurich will be the centre of attention for the Financial and Regulatory industries from June 26th to 28th as it hosts the second edition of the Point Zero Forum. Organised by Elevandi and the Swiss State Secretariat for International Finance, this event serves as a platform to encourage dialogue on policy and technology in Financial Services, with a particular emphasis on adopting transformative technologies and establishing the necessary governance and risk frameworks.
As a knowledge partner, Ecosystm is deeply involved in the Point Zero Forum. Throughout the event, we will actively engage in discussions and closely monitor three key areas: ESG, digital assets, and Responsible AI.
Read on to find out what our leaders — Amit Gupta (CEO, Ecosystm Group), Ullrich Loeffler (CEO and Co-Founder, Ecosystm), and Anubhav Nayyar (Chief Growth Advisor, Ecosystm) — say about why this will be core to building a sustainable and innovative future.
Download ‘Building Synergy Between Policy & Technology’ as a PDF
In my last Ecosystm Insight, I spoke about the 5 strategies that leading CX leaders follow to stay ahead of the curve. Data is at the core of these CX strategies. But a customer data breach can have an enormous financial and reputational impact on a brand.
Here are 12 essential steps to effective governance that will help you unlock the power of customer data.
- Understand data protection laws and regulations
- Create a data governance framework
- Establish data privacy and security policies
- Implement data minimisation
- Ensure data accuracy
- Obtain explicit consent
- Mask, anonymise, and pseudonymise data (see the sketch after this list)
- Implement strong access controls
- Train employees
- Conduct risk assessments and audits
- Develop a data breach response plan
- Monitor and review
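To make step 7 concrete, here is a minimal Python sketch of masking and pseudonymisation; the key handling is deliberately simplified for illustration, and in practice the secret would live in a vault or secrets manager.

```python
import hashlib
import hmac

SECRET_KEY = b"example-only-store-real-keys-in-a-vault"  # illustrative key handling

def mask_card(card_number: str) -> str:
    """Masking: hide all but the last four digits for display purposes."""
    return "*" * (len(card_number) - 4) + card_number[-4:]

def pseudonymise(email: str) -> str:
    """Pseudonymisation: a keyed hash replaces the identifier, so records can
    still be joined for analysis without exposing the raw value."""
    return hmac.new(SECRET_KEY, email.lower().encode(), hashlib.sha256).hexdigest()

print(mask_card("4111111111111111"))     # ************1111
print(pseudonymise("jane@example.com"))  # a stable token, not the raw email
```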
Read on to find out more.
Download ‘A 12-Step Plan for Governance of Customer Data’ as a PDF
In good times and in bad, a great customer experience (CX) differentiates your company from your competitors and creates happy customers who turn into brand advocates. While some organisations in Asia Pacific are just starting out on their CX journey, many have made deep investments. But in the fast-paced world of digital, physical and omnichannel experience improvement, if you stand still, you fall behind.
We interviewed CX leaders across the region, and here are the top 5 actions that they are taking to stay ahead of the curve.
#1 Better Governance of Customer Data
Most businesses accelerate their CX journeys by collecting and analysing data. They copy data from one channel to another, share data across touchpoints, create data silos to better understand data, and attempt to create a single view of the customer. Without effective governance, every time copies of customer data are created, moved, or shared with partners, the attack surface of the business grows. And there is nothing worse than telling customers that their data was accessed, stolen, or compromised – and that they need to get a new credit card, driver’s licence, or passport.
To govern customer data effectively, it is essential to bring different stakeholders – legal, risk, IT, and CX leaders, along with data owners, data managers, data consumers, and analytics leaders – into the strategy discussions.
#2 Creating Human Experiences
To create a human-centric experience, it is important to understand what humans want. However, given that each brand has different values, the expectations of customers may not always be consistent.
Much of the investment in CX by Asian companies over the past five years has been focused on making transactions easy and effective – but ultimately it is the emotional attachment that brings customers back repeatedly. To create human experiences, brands need a brand voice that is authentic, relatable, empathetic, and consistent across all channels.
Humanising the experience and brand requires:
- Hyperpersonalisation of customer interactions. Through efforts such as using names, understanding location requirements, remembering past purchases, and providing tailored recommendations, businesses can make customers feel valued and understood. Understanding the weather, knowing whether the customer’s favourite team won or lost on the weekend, mentioning an important birthday, and so on can all drive real, human experiences – with or without an actual human involved in the process!
- Transparency. Honesty and transparency can go a long way in building trust with customers. Businesses should be open about their processes, pricing, and policies. Organisations should be transparent about mistakes and what they are doing to fix the problem.
#3 Building Co-creation Opportunities
Co-creation is a collaborative approach where organisations involve their customers in the development and improvement of products, services, and experiences. This process can foster innovation, enhance customer satisfaction, and contribute to long-term business success. Co-creation can increase customer satisfaction and loyalty, drive innovation, enhance brand reputation, boost market relevance, and reduce risks and costs.
Strategies for co-creation include:
- Creating open innovation platforms where customers can submit ideas, feedback, and suggestions
- Organising workshops or focus groups that bring together customers, designers, and developers to brainstorm and generate new ideas
- Running contests or crowdsourcing initiatives to engage customers in problem-solving and idea generation
- Establishing feedback loops and engaging customers in the iterative development process
- Partnering with customers or external stakeholders, such as suppliers or distributors, to co-create new products or services
#4 Collecting Data – But Telling Stories
Organisations use storytelling as a powerful CX tool to connect with their customers, convey their brand values, and build trust.
Here are some ways organisations share stories with their customers:
- Brand storytelling. Creating narratives around their brand that showcase their mission, vision, and values
- Customer testimonials and case studies. Sharing real-life experiences of satisfied customers to showcase the value of a product or service
- Content marketing. Creating engaging content in the form of blog posts, articles, videos, podcasts, and more to educate, entertain, and inform their customers
- Social media. Posting photos, videos, or updates that showcase the brand’s personality, to strengthen relationships with the audience
- Packaging and in-store experiences. Creative packaging and well-designed in-store experiences to tell a brand story and create memorable customer interactions
- Corporate social responsibility (CSR) initiatives. Helping customers understand the values the organisation stands for and build trust
#5 Finally – Not Telling Just Positive Stories!
Many companies focus on telling the good stories: “Here’s what happens when you use our products”; “Our customers are super-successful” and; “Don’t just take it from us, listen to what our customers say.”
But memorable stories are created with contrast – like telling the story of what happened when someone didn’t use the product or service. Successful brands don’t just leave the audience with a vision of what could be possible; they also show what is likely if the audience doesn’t invest. Advertisers have understood this for years, but customers don’t hear stories only through advertisements – they hear them through social media, word of mouth, traditional media, and from sales and account executives.
Data & AI initiatives are firmly at the core of any organisation’s tech-led transformation efforts. Businesses today realise the value of real-time data insights to deliver the agility that is required to succeed in today’s competitive, and often volatile, market.
But organisations continue to struggle with their data & AI initiatives for a variety of reasons, and organisations in ASEAN report some common challenges in implementing them successfully.
Here are 5 insights to build scalable AI.
- Data Access a Key Stumbling Block. Many organisations find that they no longer need to rely on centralised data repositories.
- Organisations Need Data Creativity. A true data-first organisation derives value from their data & AI investments across the entire organisation, cross-leveraging data.
- Governance Not Built into Organisational Psyche. A data-first organisation needs all employees to have a data-driven mindset. This can only be driven by clear guidelines that are laid out early on and adhered to by data generators, managers, and consumers.
- Lack of End-to-End Data Lifecycle Management. It is critical to have observability, intelligence, and automation built into the entire data lifecycle.
- Democratisation of Data & AI Should Be the Goal. The true value of data & AI solutions will be fully realised when the people who benefit from the solutions are the ones managing the solutions and running the queries that will help them deliver better value to the business.
Read below to find out more.
Download ‘5 Insights to Help Organisations Build Scalable AI – An ASEAN View’ as a PDF