Microsoft Copilot’s Real Battle: Going Beyond Business Proposals and Use Cases

Earlier in the year, Microsoft unveiled its vision for Copilot: a digital companion intended to provide a consistent, unified user experience across Bing, Edge, Microsoft 365, and Windows. The rollout began with Windows in September and expanded to Microsoft 365 Copilot for enterprise customers this month.

Many organisations across Asia Pacific will soon face the question of whether to invest in Microsoft 365 Copilot – despite its current limitations in supporting regional languages. Copilot is currently supported in English (US, GB, AU, CA, IN), Japanese, and Simplified Chinese. Microsoft plans to add languages such as Arabic, Traditional Chinese, Korean, and Thai over the first half of 2024, but several languages used across Asia Pacific will not be supported until at least the second half of 2024 or later.

Access to Microsoft 365 Copilot comes with prerequisites. Organisations need a Microsoft 365 E3 or E5 license and an Azure Active Directory account; F3 licenses do not currently have access. For E3 license holders, adding Copilot nearly doubles the per-user cost (indicatively, USD 30 per user per month for Copilot on top of roughly USD 36 for an E3 license, at announced pricing) – a significant extra spend that will need to deliver measurable, tangible benefits backed by a strong business case. It is doubtful whether most organisations will be able to justify it.

However, Copilot has the potential to significantly enhance the productivity of knowledge workers, saving them many hours each week, and hundreds of use cases are already emerging across industries and user profiles. Microsoft offers extensive guidance on how best to adopt, deploy, and use Copilot. The key question when building a business case should be how knowledge workers will use the time Copilot frees up.

Maximising Copilot Integration: Steps to Drive Adoption and Enhance Productivity

Identifying use cases, building the business proposal, and securing funding for Copilot is only half the battle. Driving the change and ensuring all relevant employees adopt the new processes will be significantly harder. Consider how employees use their productivity tools today compared to 15 years ago: many still rely on the same features and capabilities in their Office suites as they did in earlier versions. Where new features were embraced, it was typically because knowledge workers did not have to make any extra effort to use them – think of auto-complete in email or the seamless integration of Teams calls.

Your organisation's ability to integrate Copilot seamlessly into daily workflows – optimising productivity and efficiency while harnessing AI-generated data and insights for decision-making – will be of paramount importance. It will be equally important to stay watchful and mitigate the risks of over-reliance on AI without sufficient oversight.

Implementing Copilot will require some essential steps:

  • Training and onboarding. Provide comprehensive training to employees on how to use Copilot’s features within Microsoft 365 applications.
  • Integration into daily tasks. Encourage employees to use Copilot for drafting emails, documents, and generating meeting notes to familiarise them with its capabilities.
  • Customisation. Tailor Copilot’s settings and suggestions to align with company-specific needs and workflows.
  • Automation. Create bots, templates, integrations, and other automation functions for multiple use cases. For example, when users first log onto their PC, they could get a summary of missed emails and chats – without needing to request it (a minimal sketch of this idea follows this list).
  • Feedback loop. Implement a feedback mechanism to monitor how Copilot is used and to make adjustments based on user experiences.
  • Evaluating effectiveness. Gauge how Copilot’s features are enhancing productivity regularly and adjust usage strategies accordingly. Focus on the increased productivity – what knowledge workers now achieve with the time made available by Copilot.
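
To make the automation example concrete, here is a minimal sketch of a login-time "missed items" digest built on the Microsoft Graph API. It assumes an Azure AD app registration with delegated Mail.Read permission and an already-acquired access token; the helper name and parameters are illustrative, and the actual Copilot experience may deliver this differently.

```python
# Minimal sketch: fetch unread mail for a login-time digest via Microsoft Graph.
# Assumes a valid access token with Mail.Read permission; paging and error
# handling are omitted for brevity.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def unread_mail_digest(access_token: str, top: int = 10) -> list[str]:
    """Return one-line summaries of the user's most recent unread emails."""
    resp = requests.get(
        f"{GRAPH}/me/messages",
        headers={"Authorization": f"Bearer {access_token}"},
        params={
            "$filter": "isRead eq false",
            "$select": "subject,from,receivedDateTime",
            "$top": top,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return [
        f'{m["receivedDateTime"]}  {m["from"]["emailAddress"]["name"]}: {m["subject"]}'
        for m in resp.json().get("value", [])
    ]

# Example: print(unread_mail_digest(token)) could run at user logon via a
# scheduled task; a fuller version might also query /me/chats for missed chats.
```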

Changing the behaviours of knowledge workers can be challenging – particularly for basic processes they have used for years or even decades. Knowledge of Copilot's use cases and opportunities will not simply filter across the organisation on its own. Implementing formal training and education programs, backed by refresher courses, is important to ensure compliance and to realise the productivity gains.

Starting Strong: Successful AI Projects Start with a Proof of Concept

The challenge of AI is that it is hard to build a business case when the outcomes are inherently uncertain. Unlike a traditional process improvement procedure, there are few guarantees that AI will solve the problem it is meant to solve. Organisations that have been experimenting with AI for some time are aware of this, and have begun to formalise their Proof of Concept (PoC) process to make it easily repeatable by anyone in the organisation who has a use case for AI. PoCs can validate assumptions, demonstrate the feasibility of an idea, and rally stakeholders behind the project.

PoCs are particularly useful at a time when AI is experiencing both heightened visibility and increased scrutiny. Boards, senior management, risk, legal and cybersecurity professionals are all scrutinising AI initiatives more closely to ensure they do not put the organisation at risk of breaking laws and regulations or damaging customer or supplier relationships.

13 Steps to Building an AI PoC

Despite seeming lightweight and easy to implement, a good PoC is methodologically sound and consistent in its approach. To implement a PoC for AI initiatives, organisations need to:

  • Clearly define the problem. Businesses need to understand and clearly articulate the problem they want AI to solve. Is it about improving customer service, automating manual processes, enhancing product recommendations, or predicting machinery failure?
  • Set clear objectives. What will success look like for the PoC? Is it about demonstrating technical feasibility, showing business value, or both? Set tangible metrics to evaluate the success of the PoC (a minimal sketch of such a success gate follows this list).
  • Limit the scope. PoCs should be time-bound and narrow in scope. Instead of trying to tackle a broad problem, focus on a specific use case or a subset of data.
  • Choose the right data. AI is heavily dependent on data. For a PoC, select a representative dataset that’s large enough to provide meaningful results but manageable within the constraints of the PoC.
  • Build a multidisciplinary team. Involve team members from IT, data science, business units, and other relevant stakeholders. Their combined perspectives will ensure both technical and business feasibility.
  • Prioritise speed over perfection. Use available tools and platforms to expedite the development process. It’s more important to quickly test assumptions than to build a highly polished solution.
  • Document assumptions and limitations. Clearly state any assumptions made during the PoC, as well as known limitations. This helps set expectations and can guide future work.
  • Present results clearly. Once the PoC is complete, create a clear and concise report or presentation that showcases the results, methodologies, and potential implications for the business.
  • Get feedback. Allow stakeholders to provide feedback on the PoC. This includes end-users, technical teams, and business leaders. Their insights will help refine the approach and guide future iterations.
  • Plan for the next steps. What actions need to follow a successful PoC demonstration? This might involve a pilot project with a larger scope, integrating the AI solution into existing systems, or scaling the solution across the organisation.
  • Assess costs and ROI. Evaluate the costs associated with scaling the solution and compare it with the anticipated ROI. This will be crucial for securing budget and support for further expansion.
  • Continually learn and iterate. AI is an evolving field. Use the PoC as a learning experience and be prepared to continually iterate on your solutions as technologies and business needs evolve.
  • Consider ethical and social implications. Ensure that the AI initiative respects privacy, reduces bias, and upholds the ethical standards of the organisation. This is critical for building trust and ensuring long-term success.
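
As an illustration of the "clear objectives" and "evaluating effectiveness" steps, below is a minimal sketch of a PoC success gate. The metric names and thresholds are invented for the example; the point is that pass/fail criteria are agreed before the PoC starts.

```python
# Minimal sketch of a PoC success gate, assuming the team has agreed target
# metrics up front. Names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    target: float
    higher_is_better: bool = True

    def met(self, observed: float) -> bool:
        return observed >= self.target if self.higher_is_better else observed <= self.target

def evaluate_poc(results: dict[str, float], criteria: list[Criterion]) -> bool:
    """Print a pass/fail line per criterion and return overall success."""
    all_met = True
    for c in criteria:
        observed = results[c.name]
        ok = c.met(observed)
        all_met &= ok
        print(f"{c.name}: observed={observed} target={c.target} -> {'PASS' if ok else 'FAIL'}")
    return all_met

# Example: a churn-prediction PoC judged on model quality and serving latency.
criteria = [
    Criterion("recall", 0.80),
    Criterion("precision", 0.60),
    Criterion("p95_latency_ms", 200, higher_is_better=False),
]
print("PoC success:", evaluate_poc(
    {"recall": 0.84, "precision": 0.71, "p95_latency_ms": 143}, criteria))
```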

Customising AI for Your Business

The primary purpose of a PoC is to validate an idea quickly and with minimal risk. It should provide a clear path for decision-makers to either proceed with a more comprehensive implementation or to pivot and explore alternative solutions. It is important for the legal, risk and cybersecurity teams to be aware of the outcomes and support further implementation.

AI initiatives will inevitably drive significant productivity and customer experience improvements – but not every solution will be right for the business. At Ecosystm, we have come across organisations that have employed conversational AI in their contact centres to achieve entirely distinct results – so the AI experience of peers and competitors may not be relevant. A consistent PoC process that trains business and technology teams across the organisation and encourages experimentation at every possible opportunity would be far more useful.

Building a Successful Fintech Business​

Fintechs have carved out a niche both in their customer-centric approach and in crafting solutions for underserved communities without access to traditional financial services. Whatever their objective, fintechs rely immensely on innovation to deliver lower-cost, personalised, and more convenient services.

However, a staggering 75% of venture-backed startups fail to scale and grow – and fintechs are no exception.

Here are the five areas that fintechs need to focus on to succeed in a competitive market.

[Infographic: Building a Successful Fintech Business – slides 1 to 8]

Download ‘Building a Successful Fintech Business​’ as a PDF

Expanding AI Applications: From Generative AI to Business Transformation

Generative AI has stolen the limelight in 2023 from nearly every other technology – and for good reason. The advances made by Generative AI providers have been incredible, with many human “thinking” processes now in line to be automated.  

But before Generative AI, there was run-of-the-mill “traditional AI”. Despite the tag, these capabilities still have a long way to run within your organisation. In fact, they are often easier to implement, carry less risk (and more predictability), and are easier to build business cases for. Traditional AI systems are often already embedded in applications, systems, and processes, and can readily be purchased as-a-service from many providers.

Traditional vs Generative AI

Unlocking the Potential of AI Across Industries 

Many organisations around the world are exploring AI solutions today, and the opportunities for improvement are significant: 

  • Manufacturers are designing, developing, and testing in digital environments, relying on AI to predict how products will respond to stress and environmental conditions. In the future, Generative AI will be called upon to suggest improvements.
  • Retailers are using AI to monitor customer behaviours and predict next steps. Algorithms are being used to drive the best outcome for the customer and the retailer, based on previous behaviours and trained outcomes. 
  • Transport and logistics businesses are using AI to minimise fuel usage and driver expenses while maximising delivery loads. Smart route planning and scheduling is ensuring timely deliveries while reducing costs and saving on vehicle maintenance. 
  • Warehouses are enhancing the safety of their environments and efficiently moving goods with AI. Through a combination of video analytics, connected IoT devices, and logistical software, they are maximising the potential of their limited space. 
  • Public infrastructure providers (such as shopping centres and public transport operators) are using AI to monitor public safety. Video analytics and sensors are helping safety and security teams take public safety beyond traditional human monitoring.

AI Impacts Multiple Roles 

Even within the organisation, different lines of business expect different outcomes for AI implementations. 

  • IT teams are using AI to monitor infrastructure, applications, and transactions – improving root-cause analysis and predicting upcoming failures. In fact, AIOps, one of the fastest-growing areas of AI, yields substantial productivity gains for tech teams and boosts reliability for both customers and employees.
  • Finance teams are leveraging AI to understand customer payment patterns and automate the issuance of invoices and reminders, a capability increasingly being integrated into modern finance systems. 
  • Sales teams are using AI to discover the best prospects to target and what offers they are most likely to respond to.  
  • Contact centres are monitoring calls, automating suggestions, summarising records, and scheduling follow-up actions through conversational AI. This gets agents up to speed in a shorter period, ensuring greater customer satisfaction and increased brand loyalty.

Transitioning from Low-Risk to AI-Infused Growth 

These are just a tiny selection of the opportunities for AI. And few of these need testing or business cases – many of these capabilities are available out-of-the-box or out of the cloud. They don’t need deep analysis by risk, legal, or cybersecurity teams. They just need a champion to make the call and switch them on.  

One potential downside of Generative AI is that it is drawing unwarranted scrutiny to well-established, low-risk AI applications. Many of these do not require much time from data scientists – and where they do, the challenge is usually finding the data and creating the algorithm. Humans can typically understand the logic and rules that these models create – unlike Generative AI, where the outcome cannot be reverse-engineered.

The opportunity today is to take advantage of the attention that LLMs and other Generative AI engines are getting to incorporate AI into every conceivable aspect of a business. When organisations understand the opportunities for productivity improvements, speed enhancement, better customer outcomes and improved business performance, the spend on AI capabilities will skyrocket. Ecosystm estimates that for most organisations, AI spend will be less than 5% of their total tech spend in 2024 – but it is likely to grow to over 20% within the next 4-5 years. 

AI Legislations Gain Traction: What Does it Mean for AI Risk Management?

It’s been barely one year since we entered the Generative AI Age. On November 30, 2022, OpenAI launched ChatGPT, with no fanfare or promotion. Since then, Generative AI has become arguably the most talked-about tech topic, both in terms of opportunities it may bring and risks that it may carry.

The landslide success of ChatGPT and other Generative AI applications with consumers and businesses has put a renewed and strengthened focus on the potential risks associated with the technology – and how best to regulate and manage these. Government bodies and agencies have created voluntary guidelines for the use of AI for a number of years now (the Singapore Framework, for example, was launched in 2019).

There is no active legislation on the development and use of AI yet. Crucially, however, a number of such initiatives are currently on their way through legislative processes globally.

EU’s Landmark AI Act: A Step Towards Global AI Regulation

The European Union’s “Artificial Intelligence Act” is a leading example. The European Commission (EC) started examining AI legislation in 2020 with a focus on

  • Protecting consumers
  • Safeguarding fundamental rights, and
  • Avoiding unlawful discrimination or bias

The EC published an initial legislative proposal in 2021, and the European Parliament adopted a revised version as their official position on AI in June 2023, moving the legislation process to its final phase.

This proposed EU AI Act takes a risk management approach to regulating AI. Organisations looking to employ AI must take note: an internal risk management approach to deploying AI would essentially be mandated by the Act. Other legislative initiatives are likely to follow a similar approach, making the AI Act a potential role model for global legislation (following the trail blazed by the General Data Protection Regulation). The “G7 Hiroshima AI Process”, established at the G7 summit in Japan in May 2023, is a key example of international discussion and collaboration on the topic (with a focus on Generative AI).

Risk Classification and Regulations in the EU AI Act

At the heart of the AI Act is a system to assess the risk level of AI technology, classify the technology (or its use case), and prescribe appropriate regulations to each risk class.

Risk levels of proposed EU AI Act

For each of these four risk levels, the AI Act proposes a set of rules and regulations. Evidently, the regulatory focus is on High-Risk AI systems.

Four risk levels of the AI Act
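
As a rough illustration of this classify-then-regulate structure (not legal guidance), the sketch below maps the four proposed risk tiers to the broad treatment each receives under the draft Act; the example use cases in the comments follow the Act's own illustrations.

```python
# Illustrative only: the four risk tiers of the proposed EU AI Act mapped to
# the broad treatment each tier receives. Not legal guidance.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # e.g. social scoring by public authorities
    HIGH = "high"                   # e.g. AI in recruitment or credit scoring
    LIMITED = "limited"             # e.g. chatbots
    MINIMAL = "minimal"             # e.g. spam filters, AI in video games

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited outright",
    RiskTier.HIGH: "Conformity assessment, risk management system, logging, human oversight",
    RiskTier.LIMITED: "Transparency obligations (users must know they interact with AI)",
    RiskTier.MINIMAL: "No new obligations; voluntary codes of conduct encouraged",
}

def treatment(tier: RiskTier) -> str:
    """Look up the regulatory treatment prescribed for a given risk tier."""
    return OBLIGATIONS[tier]

print(treatment(RiskTier.HIGH))
```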

Contrasting Approaches: EU AI Act vs. UK’s Pro-Innovation Regulatory Approach

The AI Act has received its share of criticism, and somewhat different approaches are being considered, notably in the UK. One criticism concerns the lack of clarity and the vagueness of concepts (particularly around person-related data and systems). Another concerns the strong focus on protecting rights and individuals, highlighting the potential negative economic impact for EU organisations looking to leverage AI and for EU tech companies developing AI systems.

A white paper published by the UK government in March 2023 – perhaps tellingly titled “A pro-innovation approach to AI regulation” – emphasises a “pragmatic, proportionate regulatory approach … to provide a clear, pro-innovation regulatory environment”. The paper describes an approach that aims to balance the protection of individuals with economic advancement as the UK works to become an “AI superpower”.

Further aspects of the EU AI Act are still being debated. For example, the current text exempts open-source AI components that are not part of a medium- or higher-risk system from regulation, but it lacks clear definitions and does not address the risks of proliferation.

Adopting AI Risk Management in Organisations: The Singapore Approach

Regardless of how exactly AI regulations will turn out around the world, organisations must start today to adopt AI risk management practices. There is an added complexity: while the EU AI Act does clearly identify high-risk AI systems and example use cases, the realisation of regulatory practices must be tackled with an industry-focused approach.

The approach taken by the Monetary Authority of Singapore (MAS) is a primary example of an industry-focused approach to AI risk management. The Veritas Consortium, led by MAS, is a public-private-tech partnership consortium aiming to guide the financial services sector on the responsible use of AI. As there is no AI legislation in Singapore to date, the consortium currently builds on Singapore’s aforementioned “Model Artificial Intelligence Governance Framework”. Additional initiatives are already underway to focus specifically on Generative AI for financial services, and to build a globally aligned framework.

To Comply with Upcoming AI Regulations, Risk Management is the Path Forward

As AI regulation initiatives move from voluntary recommendation to legislation globally, a risk management approach sits at the core of all of them. Adding risk management capabilities for AI is the path forward for organisations looking to deploy AI-enhanced solutions and applications. As that task can be daunting, an industry consortium approach can help navigate challenges and align implementation and realisation strategies for AI risk management across the industry. Until AI legislation is in place, such industry consortia can chart the way for their industry – organisations should seek to participate now to gain a head start with AI.

Meeting Emerging Threats with Intelligent Strategies in BFSI

Trust in the Banking, Financial Services, and Insurance (BFSI) industry is critical – and this amplifies the value of stolen data and fuels the motivation of malicious actors. Ransomware attacks continue to escalate, underscoring the need for fortified backup, encryption, and intrusion prevention systems. Similarly, phishing schemes have become increasingly sophisticated, placing a burden on BFSI cyber teams to educate employees, inform customers, deploy multifactor authentication, and implement fraud detection systems. While BFSI organisations work to fortify their defences, intruders continually find new avenues for profit – cyber protection is a high-stakes game of technological cat and mouse!

Challenges inherent to the industry include the rise of cryptojacking – the unauthorised use of a BFSI organisation's extensive computational resources for cryptocurrency mining.

What Keeps BFSI Technology Leaders awake at night?

Building Trust Amidst Expanding Threat Landscape

BFSI organisations face increasing complexity in their IT landscapes. Amidst initiatives like robo-advisory, point-of-sale lending, and personalised engagements – often facilitated by cloud-based fintech providers – they encounter new intricacies. As guest access extends to bank branches and IoT devices proliferate in public settings, vulnerabilities can emerge unexpectedly. Threats may arise from diverse origins, including misconfigured ATMs, unattended security cameras, or even asset trackers. Ensuring security and maintaining customer trust requires BFSI organisations to deploy automated and intelligent security systems to respond to emerging new threats. 

Ecosystm research finds that nearly 70% of BFSI organisations intend to adopt AI and automation for security operations over the next two years. But the reality is that adoption is still fairly nascent. Their top cyber focus areas remain data security, risk and compliance management, and application security.

Areas that BFSI organisations are not prioritising enough today

Addressing Alert Fatigue and Control Challenges

According to Ecosystm research, 50% of BFSI organisations use more than 50 security tools to secure their infrastructure – and these are only the known tools. Cyber leaders are challenged not only with finding, assessing, and deploying the right tools, but also with managing them. Management challenges include a lack of centralised control across assets and applications, and the high volume of security events and false positives.

Software updates and patches within the IT environment are crucial for security operations to identify and address potential vulnerabilities. Management of the IT environment should be paired with greater automation – event correlation, patching, and access management can all be improved through reduced manual processes.

Security operations teams must contend with the thousands of alerts they receive each day. As a result, security analysts suffer from alert fatigue and struggle to recognise critical issues and novel threats. There is an urgent need for solutions that reduce this noise. For many organisations, an AI-augmented security team could de-prioritise 90% of alerts and focus on genuine risks.

Taken a step further, tools like AIOps can not only prioritise alerts but also respond to them. Directing issues to the appropriate people, recommending actions that can be taken by operators directly in a collaboration tool, and rules-based workflows performed automatically are already possible. Additionally, by evaluating past failures and successes, AIOps can learn over time which events are likely to become critical and how to respond to them. This brings us closer to the dream of NoOps, where security operations are completely automated. 
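
As a simplified illustration of the triage logic described above, the sketch below scores alerts with hand-written rules. Production AIOps platforms replace such rules with learned models; all field names and thresholds here are assumptions for the example.

```python
# Minimal sketch of rules-based alert triage. Real AIOps platforms learn these
# rules from past incidents; fields and thresholds here are illustrative.
KNOWN_NOISY_SOURCES = {"dev-cluster", "synthetic-monitor"}

def score_alert(alert: dict) -> int:
    """Higher score = more likely to need a human; <= 0 means de-prioritise."""
    score = {"critical": 3, "warning": 1, "info": 0}.get(alert["severity"], 0)
    if alert["source"] in KNOWN_NOISY_SOURCES:
        score -= 2                      # suppress known-noisy emitters
    if alert.get("duplicates_last_hour", 0) > 10:
        score -= 1                      # likely an alert storm; correlate instead
    if alert.get("asset_tier") == "payment-core":
        score += 2                      # business-critical asset
    return score

alerts = [
    {"severity": "critical", "source": "core-switch", "asset_tier": "payment-core"},
    {"severity": "warning", "source": "dev-cluster", "duplicates_last_hour": 40},
]
for a in alerts:
    print(a["source"], "->", "escalate" if score_alert(a) > 0 else "de-prioritise")
```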

Threat Intelligence and Visibility for a Proactive Cyber Approach

New forms of ransomware, phishing schemes, and unidentified vulnerabilities in the cloud are emerging to exploit the growing attack surface of financial services organisations. Security operations teams in the BFSI sector spend most of their resources dealing with incoming alerts, leaving little time to proactively investigate new threats. Organisations therefore need a partner with the scale to maintain a data lake of threats identified across a broad range of customers, including peers in the same industry. For greater predictive capability, threat intelligence should also draw on research carried out on the dark web to improve situational awareness. These insights help security operations teams prepare for future attacks, and regular reporting on the changing threat landscape can also put CIOs, CISOs, and other executives at ease.

To ensure services can be delivered securely, BFSI organisations require additional visibility of traffic on their networks. The ability to not only inspect traffic as it passes through the firewall but to see activity within the network is critical in these increasingly complex environments. Network traffic anomaly detection uses machine learning to recognise typical traffic patterns and generates alerts for abnormal activity, such as privilege escalation or container escape. The growing acceptance of BYOD has also made device visibility more complex. By employing AI and adopting a zero-trust approach, devices can be profiled and granted appropriate access automatically. Network operators gain visibility of unknown devices and can easily enforce policies on a segmented network.
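
To illustrate the anomaly-detection idea, here is a minimal sketch using scikit-learn's IsolationForest on synthetic flow features. Real deployments would train on NetFlow/IPFIX-style records with far richer features; the feature choices and values below are assumptions for the example.

```python
# Minimal sketch of network traffic anomaly detection with an unsupervised
# model, assuming flow records reduced to numeric features. Synthetic data
# stands in for real flow records.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Typical traffic: modest byte counts, few distinct destination ports.
normal = np.column_stack([
    rng.normal(5e4, 1e4, 1000),   # bytes per flow
    rng.normal(40, 10, 1000),     # packets per flow
    rng.integers(1, 5, 1000),     # distinct destination ports
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A port-scan-like flow: tiny payload, many distinct destination ports.
suspect = np.array([[2e3, 200, 180]])
print("anomaly" if model.predict(suspect)[0] == -1 else "normal")
```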

Intelligent Cyber Strategies

Here is what BFSI CISOs should prioritise to build a cyber resilient organisation.

Automation. The volume of incoming threats has grown beyond the capability of human operators to investigate manually. Increase the level of automation in your SOC to minimise the routine burden on the security operations team and allow them to focus on high-risk threats. 

Cyberattack simulation exercises. Many security teams are too busy dealing with day-to-day operations to perform simulation exercises. However, they are a vital component of response planning. Organisation-wide exercises – that include security, IT operations, and communications teams – should be conducted regularly. 

An AIOps topology map. Identify where you have reliable data sources that could be analysed by AIOps. Then select a domain by assessing the present level of observability and automation, the IT skills gap, the frequency of threats, and business criticality (an illustrative scoring sketch follows these priorities). As you add domains and the system learns, the value you realise from AIOps will grow. 

A trusted intelligence partner. Extend your security operations team by working with a partner that can provide threat intelligence unattainable to most individual organisations. Threat intelligence providers can pool insights gathered from a diversity of client engagements and dedicated researchers. By leveraging the experience of a partner, BFSI organisations can better plan for how they will respond to inevitable breaches. 
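
To make the domain-selection step concrete, here is an illustrative weighted-scoring sketch using the assessment criteria above. The weights, 1-5 ratings, and candidate domains are assumptions for the example, not a prescribed methodology.

```python
# Illustrative weighted scoring for choosing a first AIOps domain, using the
# assessment criteria named above. Weights and 1-5 ratings are assumptions.
WEIGHTS = {
    "data_quality": 0.25,          # reliable data sources available
    "observability": 0.20,         # present level of observability/automation
    "skills": 0.15,                # in-house skills to operate the tooling
    "threat_frequency": 0.20,      # how often incidents hit this domain
    "business_criticality": 0.20,
}

def domain_score(ratings: dict[str, int]) -> float:
    """Weighted sum of 1-5 ratings across the assessment criteria."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

candidates = {
    "payments-network": {"data_quality": 4, "observability": 3, "skills": 3,
                         "threat_frequency": 5, "business_criticality": 5},
    "branch-wifi":      {"data_quality": 2, "observability": 2, "skills": 4,
                         "threat_frequency": 3, "business_criticality": 2},
}
for name, r in sorted(candidates.items(), key=lambda kv: -domain_score(kv[1])):
    print(f"{name}: {domain_score(r):.2f}")
```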

Conclusion

An effective cybersecurity strategy demands a comprehensive approach that incorporates technology, education, and policies while nurturing a culture of security awareness throughout the organisation. CISOs face the daunting task of safeguarding their organisations against relentless cyber intrusion attempts by cybercriminals, who often leverage cutting-edge automated intrusion technologies.

To maintain an advantage over these threats, cybersecurity teams must have access to continuous threat intelligence; automation will be essential in addressing the shortage of security expertise and managing the overwhelming volume and frequency of security events. Collaborating with a specialised partner possessing both scale and experience is often the answer for organisations that want to augment their cybersecurity teams with intelligent, automated agents capable of swiftly detecting and responding to threats.
