Fragile Connections: The Undersea Threat to Global Connectivity 

Undersea cables form the invisible backbone of the modern internet, carrying vast amounts of data across continents and connecting billions of people. These vital arteries of global communication are, however, surprisingly vulnerable.  

Hybrid Warfare at Sea 

Recent incidents have highlighted the vulnerability of undersea infrastructure, particularly in the Baltic Sea. In the latest case, a fibre optic cable between Latvia and Sweden was reportedly severed by the dragging anchor of the cargo ship Vezhen, originating from Russia’s Ust-Luga port. Swedish authorities boarded and seized the vessel. 

In December, the Eagle S Panamax oil tanker, sailing from St. Petersburg, allegedly damaged a power cable and three fibre optic cables between Estonia and Finland, as well as another connection between Finland and Germany. Finnish authorities seized the ship for investigation. A similar incident occurred in November when the Yi Peng 3, also from Ust-Luga, was linked to cable ruptures connecting Sweden to Lithuania and Finland to Germany. Although shadowed by the Royal Danish Navy, the vessel was ultimately allowed to continue its voyage. 

The suspected sabotage of 11 undersea cables in 15 months has alarmed NATO countries, prompting increased surveillance around Europe. Patrols will focus on protecting critical assets like fibre optic cables, power lines, gas pipelines, and environmental sensors. Dubbed Baltic Sentry, the mission will deploy frigates, patrol aircraft, and unmanned naval drones, supported by NATO’s Maritime Centre for the Security of Critical Undersea Infrastructure. An AI system will monitor unusual shipping activity, such as loitering near cables or erratic course changes, aiming to cut response times to 30-60 minutes. Meanwhile, Operation Nordic Warden will analyse satellite imagery, patrol data, and Automatic Identification System (AIS) signals to assess risks in 22 key areas. 
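
To illustrate the kind of heuristic such monitoring relies on, the sketch below flags a vessel whose AIS track lingers in one spot for an extended period. This is a simplified illustration, not a description of NATO's actual system: the thresholds and track format are assumptions, and a real deployment would also correlate positions against charted cable routes, speed profiles, and gaps in AIS transmission.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two AIS positions, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_loitering(track, window_hours=6.0, max_drift_km=5.0):
    """Flag a vessel whose AIS track stays within a small radius for a long period.

    `track` is a list of (timestamp_in_hours, lat, lon) tuples sorted by time.
    The window and drift thresholds are illustrative assumptions.
    """
    if not track:
        return False
    t0, lat0, lon0 = track[0]
    for t, lat, lon in track:
        if haversine_km(lat0, lon0, lat, lon) > max_drift_km:
            t0, lat0, lon0 = t, lat, lon   # vessel moved on; restart the window here
        elif t - t0 >= window_hours:
            return True                    # lingered near one position for the whole window
    return False

# Example: hourly AIS fixes showing a ship holding position over a cable corridor.
stationary_track = [(h, 55.300, 16.200) for h in range(8)]
print(is_loitering(stationary_track))  # True
```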

The primary concern is damage to infrastructure in the shallow waters of the Baltic Sea, but suspicious activity elsewhere has caught the attention of tech giants. Ireland, a critical hub for Europe’s cloud data centres, hosts undersea cables owned by companies like Google, Microsoft, and Amazon, linking it to the US and UK. As a non-NATO country, Ireland faces the challenge of monitoring over 3,000km of coastline. Recently, both the Irish Defence Forces and Royal Navy shadowed a Russian spy ship in the Irish Sea and English Channel. While cable damage is often immediately evident, the risk of communication taps is more alarming and harder to detect. 

How Resilient Are Undersea Cable Networks? 

There are about 400 undersea cables spanning over 1.3 million km globally. According to the International Cable Protection Committee, around 200 incidents of cable damage occur annually, mostly caused by dragged anchors or trawling. Only about 10% result from natural causes like weather or wildlife. Near shorelines, cables are heavily protected and often buried under several metres of sand in shallow water. However, in deeper seas, they are harder to monitor and safeguard. 

Highly developed regions, such as the Baltic Sea, North Sea, and Irish Sea, rely on multiple redundant cables to maintain connections between countries. While severing a single link may reduce capacity and cause inconvenience, major disruptions are rare, even for remote European islands served by multiple cables. 

Fibre optic cable repairs typically take days to weeks, faster than the lengthy timelines for fixing power cables or gas pipelines. Repair costs range from USD 1-3 million depending on the damage. Faults are located using test pulses, and specialised ships lift the damaged sections to the surface for splicing. However, with only 22 repair-designated cable ships worldwide, simultaneous outages could significantly delay restoration. 
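
The fault-location step rests on simple time-of-flight arithmetic: a test pulse is sent down the fibre, and the round-trip time of the reflection from the break gives the distance to the fault. A back-of-the-envelope sketch, assuming a typical fibre refractive index of around 1.468:

```python
# Speed of light in vacuum (m/s) and an assumed typical refractive index for optical fibre.
C = 299_792_458
FIBRE_REFRACTIVE_INDEX = 1.468  # varies by cable type

def fault_distance_km(round_trip_seconds: float) -> float:
    """Estimate the one-way distance to a fibre break from a test pulse's round-trip time."""
    v = C / FIBRE_REFRACTIVE_INDEX              # propagation speed inside the fibre (m/s)
    return (v * round_trip_seconds) / 2 / 1000  # halve for one way, convert to km

# Example: a reflection arriving 1 millisecond after the pulse was sent
# places the fault roughly 102 km down the cable.
print(f"{fault_distance_km(0.001):.0f} km")
```

In practice, operators use optical time-domain reflectometers that also account for attenuation and splice losses, but the underlying arithmetic is the same.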

In regions with less cooperative neighbours, obtaining permissions can further slow repairs. For instance, cables crossing the South China Sea face increasing challenges in deployment and maintenance, complicating connections between ASEAN nations. Routing cables along longer coastal paths raises costs and impacts latency, adding further strain to the network. 

Responding to Escalating Incidents 

Plausible deniability and the opaque nature of maritime operations make attributing these events challenging. Nonetheless, NATO countries view them as part of Russia’s broader hybrid warfare strategy, which avoids direct confrontation while instilling fear and uncertainty by showcasing an adversary’s reach. Attacks on undersea cables undermine public trust in a government’s ability to protect critical infrastructure. 

European governments initially downplayed the impact of these attacks, likely to minimise psychological effects and avoid escalation. While this cautious approach, coupled with rapid repairs, proved effective in the short term, it may have emboldened adversaries, leading to further incidents. In response, Sweden and Finland are now more willing to seize vessels in their territorial waters to deter both intentional and negligent actions. 

Implications for Enterprise Networks 

While enterprises cannot prevent damage to undersea infrastructure, they can mitigate risks and build resilient networks: 

  • Satellite Connectivity. Satellite internet services like Starlink and Eutelsat may not be ideal for bandwidth-intensive applications but can support critical services requiring international connections. An SD-WAN enables automatic failover to a redundant circuit if a land-based or undersea cable is disrupted (a minimal failover sketch follows this list). 
  • Dynamic Path Selection. Modern WAN architectures with dynamic path selection can reroute traffic to alternate cloud regions when primary paths are down. Locally available services can continue operating on domestic networks unaffected by international outages. 
  • Edge Computing. Adopting an edge-to-cloud strategy allows the running of select workloads closer to the edge or in local data centres. This reduces reliance on international links, improves resilience, and lowers latency. 
  • Disaster Recovery Planning. Enterprises should incorporate extended network outages into their disaster recovery plans, assessing the potential impact on operations and distinguishing between land-based, undersea, and other types of connections. 
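
As a rough illustration of the failover logic mentioned in the first point above, the sketch below probes a primary circuit and falls back to a satellite link when it degrades. The circuit names, probe addresses, and ping-based health check are assumptions for illustration only; commercial SD-WAN platforms implement this natively with richer metrics such as loss, latency, and jitter.

```python
import subprocess

# Hypothetical circuits; the probe addresses are documentation-range IPs.
PRIMARY = {"name": "subsea-fibre", "probe_ip": "203.0.113.1"}
BACKUP = {"name": "satellite-leo", "probe_ip": "198.51.100.1"}

def circuit_healthy(circuit, probes=3, timeout_s=2) -> bool:
    """Probe the circuit's far-end gateway; any successful reply counts as 'up'."""
    for _ in range(probes):
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), circuit["probe_ip"]],
            capture_output=True,
        )
        if result.returncode == 0:
            return True
    return False

def select_circuit():
    """Prefer the primary undersea link; fail over to the satellite circuit when it degrades."""
    return PRIMARY if circuit_healthy(PRIMARY) else BACKUP

active = select_circuit()
print(f"routing international traffic via {active['name']}")
```
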
Coding Evolved: How AI Tools Boost Efficiency and Quality

AI tools have become a game-changer for the technology industry, enhancing developer productivity and software quality. Leveraging advanced machine learning models and natural language processing, these tools offer a wide range of capabilities, from code completion to generating entire blocks of code, significantly reducing the cognitive load on developers. AI-powered tools not only accelerate the coding process but also ensure higher code quality and consistency, aligning seamlessly with modern development practices. Organisations are reaping the benefits of these tools, which have transformed the software development lifecycle. 

Ecosystm research indicates that nearly half of Asia Pacific organisations are already leveraging AI tools for code generation, with a further 32% actively evaluating similar GenAI tools.

Impact on Developer Productivity 

AI tools are becoming an indispensable part of software development owing to their: 

  • Speed and Efficiency. AI-powered tools provide real-time code suggestions, dramatically reducing the time developers spend writing boilerplate code and debugging. For example, Tabnine can suggest complete blocks of code based on a comment or a partial snippet, accelerating the development process (see the illustrative snippet after this list). 
  • Quality and Accuracy. By analysing vast datasets of code, AI tools can offer not only syntactically correct but also contextually appropriate code suggestions. This capability reduces bugs and improves the overall quality of the software. 
  • Learning and Collaboration. AI tools also serve as learning aids for developers by exposing them to new or better coding practices and patterns. Novice developers, in particular, can benefit from real-time feedback and examples, accelerating their professional growth. These tools can also help maintain consistency in coding standards across teams, fostering better collaboration. 
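
As a purely illustrative example of the completion workflow described above, the snippet below shows a comment and function signature written by a developer, followed by the kind of body an assistant such as Tabnine might propose. The function and its suggested implementation are invented for illustration, not actual tool output.

```python
# Developer writes the comment and the function signature...
def group_orders_by_customer(orders: list[dict]) -> dict[str, list[dict]]:
    """Group a flat list of order records by their 'customer_id' field."""
    # ...and the AI assistant proposes the routine body below.
    grouped: dict[str, list[dict]] = {}
    for order in orders:
        grouped.setdefault(order["customer_id"], []).append(order)
    return grouped
```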

Advantages of Using AI Tools in Development 

  • Reduced Time to Market. Faster coding and debugging directly contribute to shorter development cycles, enabling organisations to launch products faster. This reduction in time to market is crucial in today’s competitive business environment where speed often translates to a significant market advantage. 
  • Cost Efficiency. While there is an upfront cost in integrating these AI tools, the overall return on investment (ROI) is enhanced through the reduced need for extensive manual code reviews, decreased dependency on large development teams, and lower maintenance costs due to improved code quality. 
  • Scalability and Adaptability. AI tools learn and adapt over time, becoming more efficient and aligned with specific team or project needs. This adaptability ensures that the tools remain effective as the complexity of projects increases or as new technologies emerge. 

Deployment Models 

The choice between SaaS and on-premises deployment models involves a trade-off between control, cost, and flexibility. Organisations need to consider their specific requirements, including the level of control desired over the infrastructure, sensitivity of the data, compliance needs, and available IT resources. A thorough assessment will guide the decision, ensuring that the deployment model chosen aligns with the organisation’s operational objectives and strategic goals. 

SaaS vs. On-Premises: A Guide to Choosing the Right Deployment Model

Technology teams must consider challenges such as the reliability of generated code, the potential for generating biased or insecure code, and the dependency on external APIs or services. Proper oversight, regular evaluations, and a balanced integration of AI tools with human oversight are recommended to mitigate these risks. 

A Roadmap for AI Integration 

The strategic integration of AI tools in software development offers a significant opportunity for companies to achieve a competitive edge. By starting with pilot projects, organisations can assess the impact and utility of AI within specific teams. Encouraging continuous training in AI advancements empowers developers to leverage these tools effectively.  Regular audits ensure that AI-generated code adheres to security standards and company policies, while feedback mechanisms facilitate the refinement of tool usage and address any emerging issues. 

Technology teams have the opportunity to not only boost operational efficiency but also cultivate a culture of innovation and continuous improvement in their software development practices. As AI technology matures, even more sophisticated tools are expected to emerge, further propelling developer capabilities and software development to new heights. 

Breaches are Inevitable – Build Resiliency through Recovery & Backup

A lot gets written about cybersecurity – and organisations spend a lot on it! Ecosystm research finds that 63% of organisations across Asia Pacific are planning to increase their cyber budget for the next year. As budgets continue to rise, the threat landscape continues to get more complex and difficult to navigate. Despite increasing spend, 69% of organisations believe a breach is inevitable. And breaches can be EXPENSIVE! Medibank, in Australia, was breached in (or around) October 2022. The cost of the breach is expected to reach around USD 52 million when everything is done and dusted – and this does not include any potential outcomes of regulatory investigations or litigation.

Recovering Strong

While cybersecurity is still crucially important, the ability to recover from breaches quickly and cost-effectively is also imperative. How you recover from a breach will ultimately determine your organisation’s long-term viability and success. The capabilities needed to recover quickly include:

  • A well-documented and practised incident response plan. The plan should outline the roles and responsibilities of all team members, communication protocols, and the steps to be taken in the event of a breach.
  • Backup and Disaster Recovery (DR) solutions. Regular backups of critical data and systems are essential to quickly recover from a breach. Backup solutions should include offsite or cloud-based options that are isolated from the main network. DR solutions ensure that critical systems can be quickly restored and made operational after a breach.
  • Cybersecurity awareness training. Investing in regular training for all employees is crucial to ensure they are aware of the latest threats and know how to respond in the event of a breach.
  • Automated response tools. Automation can help speed up the response time during a breach by automatically blocking malicious IPs, quarantining infected devices, or taking other predefined actions based on the nature of the attack (a minimal sketch follows this list).
  • Threat intelligence. This can help organisations stay ahead of the latest threats and vulnerabilities and frame quicker responses if a breach occurs.
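
To make the automated response idea above concrete, here is a minimal sketch that maps alert types to predefined playbooks. The firewall_block and edr_quarantine hooks are hypothetical placeholders; a real environment would call the APIs of its firewall, EDR, or SOAR platform.

```python
# Hypothetical integration points; replace with real firewall/EDR API calls.
def firewall_block(ip: str) -> None:
    print(f"[firewall] blocking inbound traffic from {ip}")

def edr_quarantine(host: str) -> None:
    print(f"[edr] isolating host {host} from the network")

# Predefined actions keyed by alert type, mirroring an incident response plan.
PLAYBOOKS = {
    "malicious_ip": lambda alert: firewall_block(alert["source_ip"]),
    "ransomware_detected": lambda alert: edr_quarantine(alert["hostname"]),
}

def respond(alert: dict) -> None:
    """Dispatch an alert to its predefined playbook, or escalate if none exists."""
    action = PLAYBOOKS.get(alert["type"])
    if action:
        action(alert)
    else:
        print(f"no automated playbook for alert type {alert['type']!r}; escalating to analysts")

respond({"type": "malicious_ip", "source_ip": "192.0.2.10"})
respond({"type": "ransomware_detected", "hostname": "finance-laptop-07"})
```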

Backup and Disaster Recovery is Evolving

Most organisations already have backup and disaster recovery capabilities in place – but too often they are older systems, designed more as a “just in case” versus a “will keep us in business” capability. Backup and DR systems are evolving and improving – and with the increased likelihood of a breach, it is a good time to consider what a modern Backup and DR system can provide to your organisation. Here are some of the key trends and considerations that technology leaders should be aware of:

  • Cloud-based solutions. More organisations are moving towards cloud-based backup and DR solutions. Cloud solutions offer several advantages, including scalability, cost-effectiveness, and the ability to access data and systems from anywhere. However, technology leaders need to consider data security, compliance requirements, and the reliability of the cloud service provider.
  • Hybrid options. As hybrid cloud becomes the norm for most organisations, hybrid backup and DR solutions that combine on-premises and cloud-based backups are becoming more popular. This approach provides the best of both worlds – the security and control of on-premises backups with the scalability and flexibility of the cloud.
  • Increased use of automation. Automation is becoming more prevalent in backup and DR solutions. It helps reduce the time it takes to back up data, restore systems, and test DR plans, and it minimises the risk of human error. Technology leaders should look for solutions that offer automation capabilities while also allowing for manual intervention when necessary.
  • Cybersecurity integration. With the rise of cyberattacks, especially ransomware, it is crucial that backup and DR solutions are integrated with an organisation’s cybersecurity strategy. Backup data should be encrypted and isolated from the main network to prevent attackers from accessing or corrupting it (see the encryption sketch after this list). Regular testing of backup and DR plans should also include scenarios where a cyberattack, such as ransomware, is involved.
  • More frequent backups. Data is becoming more critical to business operations, so there is a trend towards more frequent backups, even continuous backups, to minimise data loss in the event of a disaster. Technology leaders need to balance the need for frequent backups with the cost and complexity involved.
  • Super-fast data recovery. Some data recovery platforms can recover data FAST – in as little as 6 seconds. The ability to recover data faster than the bad actors can delete it makes organisations less vulnerable and buys more time to plug the gaps that the attackers are exploiting to gain access to data and systems.
  • Monitoring and analytics. Modern backup and DR solutions offer advanced monitoring and analytics capabilities. This allows organisations to track the performance of their backups, identify potential issues before they become critical, and optimise their backup and DR processes. Technology leaders should look for solutions that offer comprehensive monitoring and analytics capabilities.
  • Compliance considerations. With the increasing focus on data privacy and protection, organisations need to ensure that backup and DR solutions are compliant with relevant regulations, often dictated at the industry level in each geography. Technology leaders should work with their legal and compliance teams to ensure that their backup and DR solutions meet all necessary requirements.
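
As a small sketch of the encryption-and-isolation principle noted above, the snippet below encrypts a backup file and records an integrity digest before the copy leaves the main network. It assumes the open-source cryptography library and a hypothetical file name; key management and the transfer to offsite or isolated storage are deliberately left out of scope.

```python
import hashlib
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_backup(source: Path, destination: Path, key: bytes) -> str:
    """Encrypt a backup file before it leaves the main network; return its SHA-256 digest."""
    data = source.read_bytes()
    destination.write_bytes(Fernet(key).encrypt(data))
    return hashlib.sha256(data).hexdigest()  # record for verification at restore time

# The key must live outside the environment being backed up (e.g. an offline vault),
# so ransomware on the primary network cannot read or silently corrupt the copies.
key = Fernet.generate_key()
digest = encrypt_backup(Path("db_dump.sql"), Path("db_dump.sql.enc"), key)
print(f"backup encrypted; integrity digest {digest}")
```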

The sooner you evolve and modernise your backup and disaster recovery capabilities, the more breathing room your cybersecurity team has to improve its ability to repel threats. New security architectures and postures – such as Zero Trust and SASE – are emerging as better ways to build your cybersecurity capabilities, but they won’t happen overnight and require significant investment, training, and business change to implement. 
