Building a Data-Driven Foundation to Super Charge Your AI Journey


AI has become a business necessity today, catalysing innovation, efficiency, and growth by transforming extensive data into actionable insights, automating tasks, improving decision-making, boosting productivity, and enabling the creation of new products and services.

Generative AI stole the limelight in 2023 given its remarkable advancements and potential to automate various cognitive processes. However, now the real opportunity lies in leveraging this increased focus and attention to shine the AI lens on all business processes and capabilities. As organisations grasp the potential for productivity enhancements, accelerated operations, improved customer outcomes, and enhanced business performance, investment in AI capabilities is expected to surge.

In this eBook, Ecosystm VP Research Tim Sheedy, along with Vinod Bijlani and Aman Deep from HPE APAC, share their insights on why it is crucial to establish tailored AI capabilities within the organisation.


Click here to download the eBook “AI-Powered Enterprise: Building a Data Driven Foundation To Super Charge Your AI Journey”

Accelerate AI Adoption: Guardrails for Effective Use


“AI Guardrails” are often used not only as a method to keep AI programs on track, but also as a way to accelerate AI investments. Projects and programs that fall within the guardrails should be easy to approve, govern, and manage – whereas those outside of the guardrails require further review by a governance team or approval body. The concept of guardrails is familiar to many tech businesses and is often applied in areas such as cybersecurity, digital initiatives, data analytics, governance, and management.

While guidance on implementing guardrails is common, organisations often leave the task of defining their specifics, including their components and functionalities, to their AI and data teams. To assist with this, Ecosystm has surveyed some leading AI users among our customers to get their insights on the guardrails that can provide added value.

Data Security, Governance, and Bias

  • Data Assurance. Has the organisation implemented robust data collection and processing procedures to ensure data accuracy, completeness, and relevance for the purpose of the AI model? This includes addressing issues like missing values, inconsistencies, and outliers.
  • Bias Analysis. Does the organisation analyse training data for potential biases – demographic, cultural, and so on – that could lead to unfair or discriminatory outputs? (A minimal check of this kind is sketched after this list.)
  • Bias Mitigation. Is the organisation implementing techniques like debiasing algorithms and diverse data augmentation to mitigate bias in model training?
  • Data Security. Does the organisation use strong data security measures to protect sensitive information used in training and running AI models?
  • Privacy Compliance. Is the AI opportunity compliant with relevant data privacy regulations (country and industry-specific as well as international standards) when collecting, storing, and utilising data?
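The bias analysis question above can be made concrete with a simple statistical screen. The sketch below is illustrative only: the column names, the loan-approval scenario, and the four-fifths threshold are assumptions, and a real guardrail would cover many more attributes and fairness metrics.

```python
# Minimal sketch of a demographic-parity check on model outputs or labelled data.
# Column names ("gender", "approved") and the threshold are illustrative assumptions.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return rates.min() / rates.max()

# Hypothetical loan-approval outputs
data = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "approved": [1, 0, 1, 1, 1, 1, 0, 0],
})

rates = selection_rates(data, "gender", "approved")
ratio = disparate_impact_ratio(rates)
print(rates, f"\nDisparate impact ratio: {ratio:.2f}")
if ratio < 0.8:   # the commonly cited 'four-fifths' screening rule
    print("Flag for bias review before this data or model proceeds.")
```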

Model Development and Explainability

  • Explainable AI. Does the model use explainable AI (XAI) techniques to understand and explain how AI models reach their decisions, fostering trust and transparency? (One such technique is sketched after this list.)
  • Fair Algorithms. Are algorithms and models designed with fairness in mind, considering factors like equal opportunity and non-discrimination?
  • Rigorous Testing. Does the organisation conduct thorough testing and validation of AI models before deployment, ensuring they perform as intended, are robust to unexpected inputs, and avoid generating harmful outputs?
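As an illustration of the explainability point above, the sketch below uses permutation importance, one widely available XAI technique, to show which input features a trained model leans on. The synthetic dataset and model choice are assumptions made purely for demonstration.

```python
# Hedged sketch of one explainability technique: permutation importance estimates
# how much each feature contributes to a trained model's held-out performance.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: importance {result.importances_mean[idx]:.3f}")
```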

AI Deployment and Monitoring

  • Oversight Accountability. Has the organisation established clear roles and responsibilities for human oversight throughout the AI lifecycle, ensuring human control over critical decisions and mitigation of potential harm?
  • Continuous Monitoring. Are there mechanisms to continuously monitor AI systems for performance, bias drift, and unintended consequences, addressing any issues promptly? (A simple drift check is sketched after this list.)
  • Robust Safety. Can the organisation ensure AI systems are robust and safe, able to handle errors or unexpected situations without causing harm? This includes thorough testing and validation of AI models under diverse conditions before deployment.
  • Transparency Disclosure. Is the organisation transparent with stakeholders about AI use, including its limitations, potential risks, and how decisions made by the system are reached?
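One way to make the continuous-monitoring guardrail tangible is a periodic distribution check on model inputs or scores. The sketch below compares a production sample against the training-time baseline with a two-sample KS test; the data and the alert threshold are illustrative assumptions.

```python
# Minimal drift-monitoring sketch: compare this week's production score
# distribution against the validation-time baseline.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
baseline_scores = rng.normal(0.55, 0.10, 5000)      # scores at validation time
production_scores = rng.normal(0.62, 0.12, 5000)    # scores observed this week

statistic, p_value = ks_2samp(baseline_scores, production_scores)
print(f"KS statistic={statistic:.3f}, p-value={p_value:.3g}")
if p_value < 0.01:
    # Trigger the guardrail: notify the model owner, open a review ticket, etc.
    print("Significant drift detected - route to the AI governance team.")
```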

Other AI Considerations

  • Ethical Guidelines. Has the organisation developed and adhered to ethical principles for AI development and use, considering areas like privacy, fairness, accountability, and transparency?
  • Legal Compliance. Has the organisation created mechanisms to stay updated on and compliant with relevant legal and regulatory frameworks governing AI development and deployment?
  • Public Engagement. What mechanisms are in place to encourage open discussion and engage with the public regarding the use of AI, addressing concerns and building trust?
  • Social Responsibility. Has the organisation considered the environmental and social impact of AI systems, including energy consumption, ecological footprint, and potential societal consequences?

Implementing these guardrails requires a comprehensive approach that includes policy formulation, technical measures, and ongoing oversight. It might take a little longer to set up this capability, but in the mid to longer term, it will allow organisations to accelerate AI implementations and drive a culture of responsible AI use and deployment.

How Green is Your Cloud?


For many organisations migrating to cloud, the opportunity to run workloads from energy-efficient cloud data centres is a significant advantage. However, carbon emissions can vary from one country to another and, if left unmonitored, will gradually increase over time as cloud use grows. This issue will become increasingly important as we move into the era of compute-intensive AI, and the burden of cloud on natural resources will shift further into the spotlight.

The International Energy Agency (IEA) estimates that data centres are responsible for up to 1.5% of global electricity use and 1% of GHG emissions. Cloud providers have recognised this and are committed to change. Between 2025 and 2030, all hyperscalers – AWS, Azure, Google, and Oracle included – expect to power their global cloud operations entirely with renewable sources.

Chasing the Sun

Cloud providers are shifting their sights from simply matching electricity use with renewable power purchase agreements (PPA) to the more ambitious goal of operating 24/7 on carbon-free sources. A defining characteristic of renewables though is intermittency, with production levels fluctuating based on the availability of sunlight and wind. Leading cloud providers are using AI to dynamically distribute compute workloads throughout the day to regions with lower carbon intensity. Workloads that are processed with solar power during daylight can be shifted to nearby regions with abundant wind energy at night.
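A minimal sketch of what such carbon-aware placement could look like is shown below. The region names and intensity figures are hypothetical; a production scheduler would query a live grid-intensity feed and weigh latency, data residency, and cost alongside carbon.

```python
# Illustrative carbon-aware workload placement: dispatch a deferrable batch job
# to the candidate region with the lowest current grid carbon intensity.
from datetime import datetime, timezone

def current_intensity(region: str) -> float:
    """Stand-in for a live lookup; returns gCO2-eq per kWh (hypothetical values)."""
    sample = {"ap-sunny-1": 120.0, "ap-windy-2": 95.0, "ap-mixed-3": 310.0}
    return sample[region]

def pick_region(candidates: list[str]) -> str:
    return min(candidates, key=current_intensity)

regions = ["ap-sunny-1", "ap-windy-2", "ap-mixed-3"]
target = pick_region(regions)
print(f"{datetime.now(timezone.utc).isoformat()}: dispatching batch job to {target}")
```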

Addressing Water Scarcity

Many of the largest cloud data centres are situated in sunny locations to take advantage of solar power and proximity to population centres. Unfortunately, this often means that they are also in areas where water is scarce. While liquid-cooled facilities are energy efficient, local communities are concerned about the strain on water sources. Data centre operators are now committing to reduce consumption and restore water supplies. Simple measures, such as widening humidity (below 20% RH) and temperature (above 30°C) tolerances in server rooms, have helped companies like Meta to cut wastage. Similarly, Google has increased their reliance on non-potable sources, such as grey water and sea water.

From Waste to Worth

Data centre operators have identified innovative ways to reuse the excess heat generated by their computing equipment. Some have used it to heat adjacent swimming pools while others have warmed rooms that house vertical farms. Although these initiatives currently make only a small dent in the environmental impact of cloud, they suggest a future where waste is significantly reduced.

Greening the Grid

The giant facilities that cloud providers use to house their computing infrastructure are also set to change. Building materials and construction account for an astonishing 11% of global carbon emissions. Using recycled materials in concrete and investing in greener methods of manufacturing steel are approaches the construction industry is taking to lessen its impact. Smaller data centres have been 3D printed to accelerate construction and use recyclable printing concrete. While this approach may not be suitable for hyperscale facilities, it holds potential for smaller edge locations.

Rethinking Hardware Management

Cloud providers rely on their scale to provide fast, resilient, and cost-effective computing. In many cases, simply replacing malfunctioning or obsolete equipment would achieve these goals better than performing maintenance. However, the relentless growth of e-waste is putting pressure on cloud providers to participate in the circular economy. Microsoft, for example, has launched three Circular Centres to repurpose cloud equipment. During the pilot of their Amsterdam centre, Microsoft achieved 83% reuse and 17% recycling of critical parts. The lifecycle of equipment in the cloud is largely hidden, but environmentally conscious users will start demanding greater transparency.

Recommendations

Organisations should be aware of their cloud-derived scope 3 emissions and consider broader environmental issues around water use and recycling. Here are the steps that can be taken immediately:

  1. Monitor GreenOps. Cloud providers are adding GreenOps tools, such as the AWS Customer Carbon Footprint Tool, to help organisations measure the environmental impact of their cloud operations. Understanding the relationship between cloud use and emissions is the first step towards sustainable cloud operations (a simple estimation sketch follows this list).
  2. Adopt Cloud FinOps for Quick ROI. Eliminating wasted cloud resources not only cuts costs but also reduces electricity-related emissions. Tools such as CloudVerse provide visibility into cloud spend, identify unused instances, and help to optimise cloud operations.
  3. Take a Holistic View. Cloud providers are being forced to improve transparency and reduce their environmental impact by their biggest customers. Getting educated on the actions that cloud partners are taking to minimise emissions, water use, and waste to landfill is crucial. In most cases, dedicated cloud providers should reduce waste rather than offset it.
  4. Enable Remote Workforce. Cloud-enabled security and networking solutions, such as SASE, allow employees to work securely from remote locations and reduce their transportation emissions. With a SASE deployed in the cloud, routine management tasks can be performed by IT remotely rather than at the branch, further reducing transportation emissions.
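The estimation sketch referenced in the first recommendation: emissions can be approximated as energy consumed multiplied by the grid carbon intensity of the hosting region. All figures below are illustrative assumptions; provider tools such as the AWS Customer Carbon Footprint Tool report actual values.

```python
# Back-of-the-envelope GreenOps arithmetic: emissions ~ energy used x grid carbon intensity.
vm_hours_per_month = 4_000            # total instance-hours across the estate (assumed)
avg_power_draw_kw = 0.15              # assumed average draw per instance
pue = 1.2                             # assumed data centre power usage effectiveness
grid_intensity_kg_per_kwh = 0.45      # assumed regional grid carbon intensity

energy_kwh = vm_hours_per_month * avg_power_draw_kw * pue
emissions_kg = energy_kwh * grid_intensity_kg_per_kwh
print(f"~{energy_kwh:,.0f} kWh -> ~{emissions_kg:,.0f} kg CO2-eq per month")

# Rightsizing scenario: eliminating 20% of idle instance-hours (a FinOps win)
savings_kg = emissions_kg * 0.20
print(f"Removing 20% idle capacity avoids ~{savings_kg:,.0f} kg CO2-eq per month")
```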
Beyond Reality: The Rise of Deepfakes


In the Ecosystm Predicts: Building an Agile & Resilient Organisation: Top 5 Trends in 2024​, Principal Advisor Darian Bird said, “The emergence of Generative AI combined with the maturing of deepfake technology will make it possible for malicious agents to create personalised voice and video attacks.” Darian highlighted that this democratisation of phishing, facilitated by professional-sounding prose in various languages and tones, poses a significant threat to potential victims who rely on misspellings or oddly worded appeals to detect fraud. As we see more of these attacks and social engineering attempts, it is important to improve defence mechanisms and increase awareness. 

Understanding Deepfake Technology 

The term Deepfake is a combination of the words ‘deep learning’ and ‘fake’. Deepfakes are AI-generated media, typically in the form of images, videos, or audio recordings. These synthetic content pieces are designed to appear genuine, often leading to the manipulation of faces and voices in a highly realistic manner. Deepfake technology has gained the spotlight due to its potential for creating convincing yet fraudulent content that blurs the line between reality and fabrication.

Deepfake algorithms are powered by Generative Adversarial Networks (GANs) and continuously enhance synthetic content to closely resemble real data. Through iterative training on extensive datasets, these algorithms refine features such as facial expressions and voice inflections, ensuring a seamless emulation of authentic characteristics.  
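For readers who want to see the mechanics, the sketch below shows the adversarial loop in its simplest form, assuming PyTorch is available and using toy two-dimensional data in place of images or audio. It is a conceptual illustration of how a generator and discriminator train against each other, not a working deepfake generator.

```python
# Minimal sketch of the adversarial training loop behind GANs.
import torch
import torch.nn as nn

# Generator maps random noise to synthetic samples; discriminator scores realism.
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real_data = torch.randn(256, 2) * 0.5 + 2.0   # stand-in for "authentic" samples

for step in range(1000):
    # Discriminator: learn to separate real from generated samples.
    noise = torch.randn(64, 8)
    fake = generator(noise).detach()
    real = real_data[torch.randint(0, 256, (64,))]
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to fool the discriminator.
    noise = torch.randn(64, 8)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```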

Deepfakes Becoming Increasingly Convincing 

Hyper-realistic deepfakes, undetectable to the human eye and ear, have become a huge threat to the financial and technology sectors. Deepfake technology has become highly convincing, blurring the line between real and fake content. One of the early examples of a successful deepfake fraud was when a UK-based energy company lost USD 243k through a deepfake audio scam in 2019, where scammers mimicked the voice of their CEO to authorise an illegal fund transfer.  

Deepfakes have evolved from audio simulations to highly convincing video manipulations where faces and expressions are altered in real-time, making it hard to distinguish between real and fake content. In 2022, for instance, a deepfake video of Elon Musk was used in a crypto scam that resulted in a loss of about USD 2 million for US consumers. This year, a multinational company in Hong Kong lost over USD 25 million when an employee was tricked into sending money to fraudulent accounts after a deepfake video call by what appeared to be his colleagues. 

Regulatory Responses to Deepfakes 

Countries worldwide are responding to the challenges posed by deepfake technology through regulations and awareness campaigns. 

  • Singapore’s Online Criminal Harms Act, which will come into effect in 2024, will empower authorities to order individuals and Internet service providers to remove or block criminal content, including deepfakes used for malicious purposes.
  • The UAE National Programme for Artificial Intelligence released a deepfake guide to educate the public about both harmful and beneficial applications of this technology. The guide categorises fake content into shallow and deep fakes, providing methods to detect deepfakes using AI-based tools, with a focus on promoting positive uses of advanced technologies. 
  • The proposed EU AI Act aims to regulate deepfakes by imposing transparency requirements on creators, mandating them to disclose when content has been artificially generated or manipulated.
  • South Korea passed a law in 2020 banning the distribution of harmful deepfakes. Offenders could be sentenced to up to five years in prison or fined up to USD 43k. 
  • In the US, states like California and Virginia have passed laws against deepfake pornography, while federal bills like the DEEP FAKES Accountability Act aim to mandate disclosure and counter malicious use, highlighting the diverse global efforts to address the multifaceted challenges of deepfake regulation. 

Detecting and Protecting Against Deepfakes 

Detecting deepfakes becomes increasingly challenging as the technology advances. Several methods, sometimes used in conjunction, are needed to detect a convincing deepfake. These include visual inspection that focuses on anomalies, metadata analysis to examine clues about authenticity, forensic analysis for pattern and audio examination, and machine learning that uses algorithms trained on real and fake video datasets to classify new videos.
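The machine-learning approach mentioned above can be sketched as a supervised classifier trained on features extracted from known-real and known-fake clips. In the illustration below the features are random placeholders; in practice they would come from forensic signals such as blink patterns or compression artefacts.

```python
# Hedged sketch of a deepfake classifier; feature extraction is assumed done already.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_real = rng.normal(0.0, 1.0, size=(500, 16))   # placeholder features from authentic clips
X_fake = rng.normal(0.4, 1.2, size=(500, 16))   # placeholder features from synthetic clips
X = np.vstack([X_real, X_fake])
y = np.array([0] * 500 + [1] * 500)              # 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("AUC on held-out clips:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```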

However, identifying deepfakes requires sophisticated technology that many organisations may not have access to. This heightens the need for robust cybersecurity measures. Deepfakes have driven an increase in convincing and successful phishing – and spear phishing – attacks, and cyber leaders need to double down on cyber practices.

Defences can no longer depend on spotting these attacks alone. A multi-pronged approach is required, combining cyber technologies, incident response, and user education.

Preventing access to users. By employing anti-spoofing measures, organisations can safeguard their email addresses from exploitation by fraudulent actors. Simultaneously, minimising access to readily available information, particularly on websites and social media, reduces the chance of spear-phishing attempts. This includes educating employees about the implications of sharing personal information and clear digital footprint policies. Implementing email filtering mechanisms, whether at the server or device level, helps intercept suspicious emails; the filtering rules need to be constantly evaluated using techniques such as IP filtering and attachment analysis.
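As an illustration of the filtering idea, the sketch below scores an incoming message with a few simple rules using only Python's standard library. The header checks, phrases, and thresholds are assumptions; a production gateway would add SPF/DKIM/DMARC validation, IP reputation, and attachment sandboxing.

```python
# Minimal, illustrative email-filtering rule set for messages arriving as raw text.
from email import message_from_string
from email.message import Message

SUSPICIOUS_EXTENSIONS = {".exe", ".js", ".scr", ".vbs", ".iso"}
URGENCY_PHRASES = ("urgent wire transfer", "act immediately", "confidential payment")

def score_message(raw: str) -> int:
    msg: Message = message_from_string(raw)
    score = 0
    # Spoofing hint: visible From domain differs from the Return-Path domain.
    from_domain = (msg.get("From") or "").split("@")[-1].strip(">").lower()
    return_domain = (msg.get("Return-Path") or "").split("@")[-1].strip(">").lower()
    if from_domain and return_domain and from_domain != return_domain:
        score += 2
    # Social-engineering hint: urgency language in the subject line.
    subject = (msg.get("Subject") or "").lower()
    score += sum(1 for phrase in URGENCY_PHRASES if phrase in subject)
    # Risky attachment types.
    for part in msg.walk():
        filename = part.get_filename() or ""
        if any(filename.lower().endswith(ext) for ext in SUSPICIOUS_EXTENSIONS):
            score += 3
    return score  # route to quarantine above a tuned threshold
```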

Employee awareness and reporting. There are many ways that organisations can increase employee awareness, from regular training sessions to attack simulations. The usefulness of these sessions is often questioned, as sometimes they are merely aimed at ticking off a compliance box. Security leaders should aim to make it easier for employees to recognise these attacks by familiarising them with standard processes and implementing verification measures for important email requests. This should be strengthened by a culture of reporting without any individual blame.

Securing against malware. Malware is often distributed through these attacks, making it crucial to ensure devices are well-configured and equipped with effective endpoint defences to prevent malware installation, even if users inadvertently click on suspicious links. Specific defences may include disabling macros and limiting administrator privileges to prevent accidental malware installation. Strengthening authentication and authorisation processes is also important, with measures such as multi-factor authentication, password managers, and alternative authentication methods like biometrics or smart cards. Zero trust and least privilege policies help protect organisation data and assets.   

Detection and Response. A robust security logging system is crucial, whether through off-the-shelf monitoring tools, managed services, or dedicated monitoring teams. What is more important is that the monitoring capabilities are regularly updated. Additionally, a well-defined incident response plan can swiftly mitigate harm after an incident. This requires clear procedures for various incident types and designated personnel for executing them, such as initiating password resets or removing malware. Organisations should ensure that users are informed about reporting procedures, considering potential communication challenges in the event of device compromise.

Conclusion 

The rise of deepfakes has brought forward the need for a collaborative approach. Policymakers, technology companies, and the public must work together to address the challenges posed by deepfakes. This collaboration is crucial for developing better detection technologies, establishing stronger laws, and raising awareness of media literacy.

Prepare for an Explosion in IT Services Spend


2024 and 2025 are looking good for IT services providers – particularly in Asia Pacific. All types of providers – from IT consultants to managed services VARs and systems integrators – will benefit from a few converging events.

However, amidst increasing demand, service providers are also challenged by cost-control measures imposed within client organisations – and this is heightened by the challenge of finding and retaining their best people as competition for skills intensifies. Providers that service mid-market clients might find it hard to compete and grow without significant process automation to compensate for the higher employee costs.


Choosing the Right Cost Model for IT Services

Buyers of IT services must implement strict cost-control measures and consider various approaches to align costs with business and customer outcomes, including different cost models:

Fixed-Price Contracts. These contracts set a firm price for the entire project or specific deliverables. Ideal when project scope is clear, they offer budget certainty upfront but demand detailed specifications, potentially leading to higher initial quotes due to the provider assuming more risk.

Time and Materials (T&M) Contracts with Caps. Payment is based on actual time and materials used, with negotiated caps to prevent budget overruns. Combining flexibility with cost predictability, this model offers some control over total expenses.

Performance-Based Pricing. Fees are tied to service provider performance, incentivising achievement of specific KPIs or milestones. This aligns provider interests with client goals, potentially resulting in cost savings and improved service quality.

Retainer Agreements with Scope Limits. Recurring fees are paid for ongoing services, with defined limits on work scope or hours within a given period. This arrangement ensures resource availability while containing expenses, particularly suitable for ongoing support services.
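A quick, hypothetical comparison shows how these models behave when actual effort deviates from the estimate. The day rate, effort figures, risk premium, and cap below are assumptions chosen purely to illustrate the trade-off.

```python
# Illustrative comparison of fixed price versus T&M with a cap for one engagement.
day_rate = 1200          # assumed blended daily rate (currency units)
estimated_days = 110     # provider's effort estimate
risk_premium = 0.15      # assumed provider margin for absorbing fixed-price risk
cap_multiplier = 1.10    # negotiated ceiling over the estimate for T&M

fixed_price = estimated_days * day_rate * (1 + risk_premium)
tm_cap = estimated_days * day_rate * cap_multiplier

for actual_days in (95, 110, 125):
    tm_cost = min(actual_days * day_rate, tm_cap)   # the cap prevents overruns
    print(f"{actual_days} days -> fixed: {fixed_price:,.0f}, T&M with cap: {tm_cost:,.0f}")
```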

Other Strategies for Cost Efficiency and Effective Management

Technology leaders should also consider implementing some of the following strategies:

Phased Payments. Structuring payments in phases, tied to the completion of project milestones, helps manage cash flow and provides a financial incentive for the service provider to meet deadlines and deliverables. It also allows for regular financial reviews and adjustments if the project scope changes.

Cost Transparency and Itemisation. Detailed billing that itemises the costs of labour, materials, and other expenses provides transparency to verify charges, track spending against the budget, and identify areas for potential savings.

Volume Discounts and Negotiated Rates. Negotiating volume discounts or preferential rates for long-term or large-scale engagements encourages providers to offer reduced rates in exchange for a commitment to a certain volume of work or an extended contract duration.

Utilisation of Shared Services or Cloud Solutions. Opting for shared or cloud-based solutions, where feasible, offers economies of scale and reduces the need for expensive, dedicated infrastructure and resources.

Regular Review and Adjustment. Conducting regular reviews of services and expenses with the provider to ensure alignment with budget and objectives prepares organisations to adjust the scope, renegotiate terms, or implement cost-saving measures as needed.

Exit Strategy. Planning an exit strategy that includes provisions for contract termination and transition services protects an organisation in case the partnership needs to be dissolved.

Conclusion

Many businesses swing between insourcing and outsourcing technology capabilities – with the recent trend moving towards insourcing development and outsourcing infrastructure to the public cloud. But 2024 will see demand for all types of IT services across nearly every geography and industry. Tech services providers can bring significant value to your business – but improved management, monitoring, and governance will ensure that this value is delivered at a fair cost.

Anticipating Tech Advances and Disruptions​: Strategic Guidance for Technology Leaders


2024 will be another crucial year for tech leaders – through the continuing economic uncertainties, they will have to embrace transformative technologies and keep an eye on market disruptors such as infrastructure providers and AI startups. Ecosystm analysts outline the key considerations for leaders shaping their organisations’ tech landscape in 2024.​

Navigating Market Dynamics

Market Trends that will impact organisations' tech investments and roadmap in 2024 - Sash Mukherjee

Continuing Economic Uncertainties​. Organisations will focus on ongoing projects and consider expanding initiatives in the latter part of the year.​

Popularity of Generative AI​. This will be the time to go beyond the novelty factor and assess practical business outcomes, allied costs, and change management.​

Infrastructure Market Disruption​. Keeping an eye out for advancements and disruptions in the market (likely to originate from the semiconductor sector)​ will define vendor conversations.

Need for New Tech Skills​. Generative AI will influence multiple tech roles, including AIOps and IT Architecture. Retaining talent will depend on upskilling and reskilling. ​

Increased Focus on Governance. Tech vendors will guide tech leaders on how to implement safeguards for data usage, sharing, and cybersecurity.

5 Key Considerations for Tech Leaders​


Click here to download ‘Anticipating Tech Advances and Disruptions: Strategic Guidance for Technology Leaders’ as a PDF.

#1 Accelerate and Adapt: Streamline IT with a DevOps Culture 

Over the next 12-18 months, advancements in AI, machine learning, automation, and cloud-native technologies will be vital in leveraging scalability and efficiency. Modernisation is imperative to boost responsiveness, efficiency, and competitiveness in today’s dynamic business landscape.​

The continued pace of disruption demands that organisations modernise their applications portfolios with agility and purpose. Legacy systems constrained by technical debt drag down velocity, impairing the ability to deliver new innovative offerings and experiences customers have grown to expect. ​

Prioritising modernisation initiatives that align with key value drivers is critical. Technology leaders should empower development teams to move beyond outdated constraints and swiftly deploy enhanced applications, microservices, and platforms. ​

Accelerate and Adapt: Streamline IT with a DevOps Culture - Clay Miller

#2 Empowering Tomorrow: Spring Clean Your Tech Legacy for New Leaders

Modernising legacy systems is a strategic and inter-generational shift that goes beyond simple technical upgrades. It requires transformation through the process of decomposing and replatforming systems – developed by previous generations – into contemporary services and signifies a fundamental realignment of your business with the evolving digital landscape of the 21st century.​

The essence of this modernisation effort is multifaceted. It not only facilitates the integration of advanced technologies but also significantly enhances business agility and drives innovation. It is an approach that prepares your organisation for impending skill gaps, particularly as the older workforce begins to retire over the next decade. Additionally, it provides a valuable opportunity to thoroughly document, reevaluate, and improve business processes. This ensures that operations are not only efficient but also aligned with current market demands, contemporary regulatory standards, and the changing expectations of customers.​

Empowering Tomorrow: Spring Clean Your Tech Legacy for New Leaders - Peter Carr

#3 Employee Retention: Consider the Strategic Role of Skills Acquisition

The agile, resilient organisation needs to be able to respond at pace to any threat or opportunity it faces. Some of this ability to respond will be related to technology platforms and architectures, but it will be the skills of employees that dictate the pace of reform. Employee attrition rates will continue to decline in 2024 – but attrition will be driven by skills acquisition, not location of work.

Organisations that offer ongoing staff training – recognising that their business needs new skills to become a 21st century organisation – are the ones that will see increasing rates of employee retention and happier employees. They will also be the ones that offer better customer experiences, driven by motivated employees who are committed to their personal success, knowing that the organisation values their performance and achievements.

Employee Retention: Consider the Strategic Role of Skills Acquisition - Tim Sheedy

#4 Next-Gen IT Operations: Explore Gen AI for Incident Avoidance and Predictive Analysis

The integration of Generative AI in IT Operations signifies a transformative shift from the automation of basic tasks to advanced functions like incident avoidance and predictive analysis. Having initially automated routine tasks, Generative AI has evolved to proactively avoid incidents by analysing historical data and current metrics. This shift from reactive to proactive management will be crucial for maintaining uninterrupted business operations and enhancing application reliability.

Predictive analysis provides insight into system performance and user interaction patterns, empowering IT teams to optimise applications pre-emptively, enhancing efficiency and user experience. This also helps organisations meet sustainability goals through accurate capacity planning and resource allocation, while ensuring effective scaling of business applications to meet demand.
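A toy example of this kind of predictive analysis: fit a trend to recent utilisation samples and raise a capacity warning before the threshold is breached. The data, horizon, and threshold below are hypothetical.

```python
# Illustrative predictive-analysis sketch for IT operations: flag capacity risk early.
import numpy as np

history = np.array([41, 43, 45, 44, 48, 51, 53, 55, 58, 60], dtype=float)  # % CPU, hourly samples
hours = np.arange(len(history))

slope, intercept = np.polyfit(hours, history, deg=1)        # simple linear trend
forecast_horizon = np.arange(len(history), len(history) + 12)
forecast = slope * forecast_horizon + intercept

CAPACITY_THRESHOLD = 80.0   # assumed alerting threshold (% CPU)
breach = forecast_horizon[forecast >= CAPACITY_THRESHOLD]
if breach.size:
    print(f"Predicted to exceed {CAPACITY_THRESHOLD}% CPU in ~{breach[0] - len(history) + 1} hours; scale out now.")
else:
    print("No capacity breach predicted in the next 12 hours.")
```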

Next-Gen IT Operations: Explore Gen AI for Incident Avoidance and Predictive Analysis - Richard Wilkins

#5 Expanding Possibilities: Incorporate AI Startups into Your Portfolio

While many of the AI startups have been around for over five years, this will be the year they come into your consciousness and emerge as legitimate solutions providers to your organisation. And it comes at a difficult time for you! ​

Most tech leaders are looking to reduce technical debt – looking to consolidate their suppliers and simplify their tech architecture. Considering AI startups will mean a shift back to more rather than fewer tech suppliers; a different sourcing strategy; more focus on integration and ongoing management of the solutions; and a more complex tech architecture. ​

Meeting business requirements will mean that business cases need to be watertight – often the value will need to be delivered before a contract has been signed.

Expanding Possibilities: Incorporate AI Startups into Your Portfolio - Tim Sheedy
Transformative Integration: HPE’s Acquisition of Juniper Networks


Hewlett Packard Enterprise (HPE) has entered into a definitive agreement to acquire Juniper Networks for USD 40 per share, totalling an equity value of about USD 14 billion. This strategic move is aimed at enhancing HPE’s portfolio by focusing on higher-growth solutions and reinforcing their high-margin networking business. HPE expects to double their networking business, positioning the combined entity as a leader in networking solutions. With the growing demand for secure, unified technology driven by AI and hybrid cloud trends, HPE aims to offer comprehensive, disruptive solutions that connect, protect, and analyse data from edge to cloud.

This would also be the organisation’s largest deal since becoming an independent company in 2015. The acquisition is expected to be completed by late 2024 or early 2025.

Ecosystm analysts Darian Bird and Richard Wilkins provide their insights on the HPE acquisition and its implications for the tech market.

Converging Networking and Security

One of the big drawcards for HPE is Juniper’s Mist AI. The networking vendors have been racing to catch up – both in capabilities and in marketing. The acquisition though will give HPE a leadership position in network visibility and manageability. With GreenLake and soon Mist AI, HPE will have a solid AIOps story across the entire infrastructure.

HPE has been working steadily towards becoming a player in the converged networking-security space. They integrated Silver Peak well to make a name for themselves in SD-WAN, and last year’s acquisition of Axis Security gave them the Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG), and Cloud Access Security Broker (CASB) modules in the Secure Service Edge (SSE) stack. Bringing all of this to the market with Juniper’s networking prowess positions HPE as a formidable player, especially as the Secure Access Service Edge (SASE) market gains momentum.

As the market shifts towards converged SASE, there will only be more interest in the SD-WAN and SSE vendors. In just over one year, Cato Networks and Netskope have raised funds, Check Point acquired Perimeter 81, and Versa Networks has made noises about an IPO. The networking and security players are all figuring out how they can deliver a single-vendor SASE.

Although HPE’s strategic initiatives signal a robust market position, potential challenges arise from the overlap between Aruba and Juniper. However, the distinct focus on the edge and data centre, respectively, may help alleviate these concerns. The acquisition also marks HPE’s foray into the telecom space, leveraging its earlier acquisition of Athonet and establishing a significant presence among service providers. This expansion enhances HPE’s overall market influence, posing a challenge to the long-standing dominance of Cisco.

The strategic acquisition of Juniper Networks by HPE could mark a transformative leap in AIOps and Software-Defined Networking (SDN). There is potential for this to establish a new benchmark in IT management.

AI in IT Operations Transformation

The integration of Mist’s AI-driven wireless solutions and HPE’s SDN is a paradigm shift in IT operations management and will help organisations transition from a reactive to a predictive and proactive model. Mist’s predictive analytics, coupled with HPE’s SDN capabilities, empower networks to dynamically adjust to user demands and environmental changes, ensuring optimal performance and user experience. Marvis, Mist’s Virtual Network Assistant (VNA), adds conversational troubleshooting capabilities, enhancing HPE’s network solutions. The integration envisions an IT ecosystem where Juniper’s AI augments HPE’s InfoSight, providing deeper insights into network behaviour, preemptive security measures, and more autonomous IT operations.

Transforming Cloud and Edge Computing

The incorporation of Juniper’s AI into HPE’s cloud and edge computing solutions promises a significant improvement in data processing and management. AI-driven load balancing and resource allocation mechanisms will significantly enhance multi-cloud environment efficiency, ensuring robust and seamless cloud services, particularly vital in IoT applications where real-time data processing is critical. This integration not only optimises cloud operations but also has the potential to align with HPE’s commitment to sustainability, showcasing how AI advancements can contribute to energy conservation.

In summary, HPE’s acquisition of Juniper Networks, and specifically the integration of the Mist AI platform, is a pivotal step towards an AI-driven, efficient, and predictive IT infrastructure. This can redefine the standards in AIOps and SDN, creating a future where IT systems are not only reactive but also intuitively adaptive to the evolving demands of the digital landscape.

COP28: Progress, Challenges, and Next Steps


The 28th United Nations Climate Change Conference (or COP28) took place at the end of 2023 in one of the most climate-vulnerable countries in the world – the UAE. The event brought together nations, leaders, and climate experts to unite around tangible climate action and deliver realistic solutions.

COP28 marked a watershed moment in the global effort to fight climate change because it concluded the first Global Stocktake – a routine assessment of progress under the Paris Agreement that occurs every five years. It is clear that we are not on track to meet the agreement’s goals, but the decisions and actions taken during COP28 can redefine the trajectory of climate action.

COP27: Laying the Foundation

COP27 laid the groundwork for this year’s conference. The summit focused on mitigation, adaptation, finance, and collaboration. The key outcomes of COP27 included the creation of the loss and damage fund, new pledges to the Adaptation Fund, and advancements in the Santiago Network focused on technical support for climate-affected regions. The conference also saw progress on the Global Stocktake and formal recognition of new issues such as water, food security, and forests within climate deliberations.

However, there was widespread criticism for failing to live up to the urgency of the impending climate crisis. Despite being called the “implementation COP”, nothing decisive was done to ensure global warming is limited to 1.5 degrees Celsius.

COP28: Milestones

Launching the first-ever Global Stocktake. The Global Stocktake was the spotlight of this year’s event and covered various climate issues, including energy, transport, and nature. Despite strong opposition from Oil & Gas interests, negotiators secured an agreement indicating the start of the end of the fossil fuel era – a much-needed conclusion to the hottest year in history. The next global assessment of Paris Agreement targets is expected to take place at COP33 in 2028.

Supporting sustainable agriculture. A landmark declaration on sustainable agriculture was adopted to address climate-related threats to global food systems. Signed by 160 countries, the declaration pledged a collective commitment by participating nations to expedite the integration of agriculture and food systems into national climate actions by 2025. For the first time ever, the summit also featured an entire day devoted to food and agriculture and saw a food systems roadmap laid out by the Food and Agriculture Organisation (FAO).

Operationalising the “Loss and Damage” fund. The conference saw the approval of the “loss and damage fund” that was first tabled at COP27 last year. The fund has been a long-requested support for developing nations facing the impact of climate change.

Tripling renewables and doubling energy efficiency by 2030. 118 countries signed a renewable energy pledge to triple the world’s green energy capacity to 11,000 GW by 2030, reducing the reliance on fossil fuels in generating energy. The pledge is expected to lift the global average annual rate of energy efficiency improvements from around 2% to over 4% every year until 2030. While the pledge spearheaded by the EU, the US, and the UAE is not legally binding, it is a step in the right direction.

Adapting to a warmer world. COP28 provided a framework for the ‘Global Goal on Adaptation’ to guide countries in their efforts to protect their people and ecosystems from climate change. An explicit 2030 date has been integrated into the text for targets on water security, ecosystem restoration, health, climate-resilient food systems, resilient human settlements, reduction of poverty, and protection of tangible cultural heritage.

Addressing methane. Methane took centre stage at COP28, reflecting its significant role in current global warming. The US, Canada, Brazil, and Egypt announced more than USD 1 billion in funding to reduce methane emissions. Despite facing political challenges, these measures signify a shift towards concrete regulatory and pricing tools, marking a step forward in addressing methane’s impact on climate protection.

How COP28 Could Have Been More Impactful

Better funding allocations. Although the “loss and damage” funding agreement seems like a major outcome, the actual financial commitments fell far short. The US and China, despite being the world’s largest emitters, extended only USD 17.5 million and USD 10 million to the fund, respectively. There is also debate about how funds should be distributed, with mature countries favouring aid allocation based on vulnerability. This approach might exclude middle-income countries that have suffered significant climate-related damage recently.

More focus on AI. While COP28 tackled critical climate issues, it overlooked a significant concern – the environmental impact of AI. While AI holds promise for improved sustainability, it is important to address the environmental consequences of AI model training and deployment. The absence of scrutiny on the ecological impact of AI represents a missed early opportunity, considering the widespread hype and significant investments in the technology.

Recognising climate refugees. The increase in climate-related displacement is a growing concern, with millions already affected and predictions of a significant rise by 2050. International law does not recognise those displaced by climate events as refugees. Despite this, the topic wasn’t adequately explored at COP28, highlighting the need for inclusive discussions and solutions for safe migration pathways.

A Call for Unified Action

While COP28 and similar forums highlight the severity of the climate crisis, the real power lies in continuous collective conversations that identify gaps, strive to bridge them, and drive meaningful change. Ecosystm, in collaboration with partners Kyndryl and Microsoft, conducted a Global Sustainability Barometer study, which reveals that while 85% of global organisations acknowledge the strategic importance of sustainability goals, only 16% have successfully integrated sustainability into their corporate and transformation strategies with tangible data.

While governments and policymakers continue to focus on building a sustainable future for the planet, a shift in mindset and action – from organisations and individuals alike – is pivotal for a unified global effort in addressing climate challenges and building a sustainable future.

Microsoft Copilot’s Real Battle: Going Beyond Business Proposals and Use Cases


Earlier in the year, Microsoft unveiled its vision for Copilot, a digital companion that aims to provide a consistent, unified user experience across Bing, Edge, Microsoft 365, and Windows. The rollout began with Windows in September and expanded to Microsoft 365 Copilot for enterprise customers this month.

Many organisations across Asia Pacific will soon face the question of whether to invest in Microsoft 365 Copilot – despite its current limitations in supporting all regional languages. Copilot is currently supported in English (US, GB, AU, CA, IN), Japanese, and Chinese Simplified. Microsoft plans to support more languages, such as Arabic, Chinese Traditional, Korean, and Thai, over the first half of 2024. There are still several languages used across Asia Pacific that will not be supported until at least the second half of 2024 or later.

Access to Microsoft 365 Copilot comes with certain prerequisites. Organisations need to have either a Microsoft 365 E3 or E5 license and an Azure Active Directory account. F3 licenses do not currently have access to Microsoft 365 Copilot. For E3 license holders, the cost per user would nearly double when adding Copilot – so it is a significant extra spend that will need to deliver measurable and tangible benefits and be backed by a strong business case. It is doubtful whether most organisations will be able to justify this extra spend.
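One way to frame that business case is simple break-even arithmetic: how much knowledge-worker time must Copilot free up to cover the additional licence cost? The figures below are assumptions for illustration, not Microsoft list prices.

```python
# Illustrative break-even arithmetic for a Copilot business case (all figures assumed).
e3_per_user_month = 36.0        # assumed E3 licence cost (USD)
copilot_per_user_month = 30.0   # assumed Copilot add-on cost (USD)
loaded_hourly_cost = 45.0       # assumed fully loaded cost of a knowledge worker (USD/hour)
working_hours_month = 160

pct_increase = copilot_per_user_month / e3_per_user_month
break_even_hours = copilot_per_user_month / loaded_hourly_cost

print(f"Licence spend per E3 user rises by ~{pct_increase:.0%} with Copilot added.")
print(f"Copilot roughly pays for itself if it frees ~{break_even_hours:.1f} hours "
      f"per user per month ({break_even_hours / working_hours_month:.1%} of working time), "
      f"provided that time is redirected to higher-value work.")
```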

However, Copilot has the potential to significantly enhance the productivity of knowledge workers, saving them many hours each week, with hundreds of use cases already emerging for different industries and user profiles. Microsoft is offering a plethora of information on how to best adopt, deploy, and use Copilot. The key focus when building a business case should revolve around how knowledge workers will use this extra time.

Maximising Copilot Integration: Steps to Drive Adoption and Enhance Productivity

Identifying use cases, building the business proposal, and securing funding for Copilot is only half the battle. Driving the change and ensuring all relevant employees use the new processes will be significantly harder. Consider how employees currently use their productivity tools compared to 15 years ago, with many still relying on the same features and capabilities in their Office suites as they did in earlier versions. In cases where new features were embraced, it typically occurred because knowledge workers didn’t have to make any additional efforts to incorporate them, such as the auto-type ahead functions in email or the seamless integration of Teams calls.

The ability of your organisation to seamlessly integrate Copilot into daily workflows, optimising productivity and efficiency while harnessing AI-generated data and insights for decision-making will be of paramount importance. It will be equally important to be watchful to mitigate potential risks associated with an over-reliance on AI without sufficient oversight.

Implementing Copilot will require some essential steps:

  • Training and onboarding. Provide comprehensive training to employees on how to use Copilot’s features within Microsoft 365 applications.
  • Integration into daily tasks. Encourage employees to use Copilot for drafting emails, documents, and generating meeting notes to familiarise them with its capabilities.
  • Customisation. Tailor Copilot’s settings and suggestions to align with company-specific needs and workflows.
  • Automation. Create bots, templates, integrations, and other automation functions for multiple use cases. For example, when users first log onto their PC, they could get a summary of missed emails and chats – without the need to request it.
  • Feedback loop. Implement a feedback mechanism to monitor how Copilot is used and to make adjustments based on user experiences.
  • Evaluating effectiveness. Regularly gauge how Copilot’s features are enhancing productivity and adjust usage strategies accordingly. Focus on the increased productivity – what knowledge workers now achieve with the time made available by Copilot.

Changing the behaviours of knowledge workers can be challenging – particularly for basic processes that they have been using for years or even decades. Knowledge of use cases and opportunities for Copilot will not just filter across the organisation. Implementing formal training and educational programs and backing them up with refresher courses is important to ensure compliance and productivity gains.
