Cybersecurity Predictions Challenge 2023

In 2023, we predict that hackers will try to bypass cybersecurity defenses using new techniques focused on business processes, identity, and artificial intelligence. This year, Corey Nachreiner, chief security officer at WatchGuard, and Marc Laliberte, director of security operations at WatchGuard, square off in a predictions challenge, offering different takes on potential hacks and attacks in these categories. Whose predictions will come true… only time will tell!


Business Process: Hackers Go Vertical vs. Targeting Vendors and Partners

Insurers Verticalize Their Already Increased Security Requirements

Cyber insurance is a huge topic lately, as both costs and compliance requirements have risen over the past few years. Insurers have taken heavy losses since they began offering coverage for cyber extortion, as their initial strategy of paying ransoms drove up their costs. As a result, they have begun passing those increased costs on to customers and significantly increased the technical security requirements they ask of customers before insuring them.

While clients are already reeling from the significant new requirements and bigger bills required to renew their policies, we think some verticals will have it tougher than others during 2023. Insurers realise that certain verticals are more attractive targets for cybercriminals and will force them to adhere to the strictest compliance regulations and bear the highest costs. The most affected industries are also the ones in the headlines due to cyberattacks. For instance, we suspect healthcare, critical infrastructure, finance, and managed service providers (MSPs) will be subjected to more severe cybersecurity requirements from insurers. We also believe cybersecurity vendors themselves will face higher prices and stricter requirements. Some insurers will even adopt “approved security vendor lists,” only underwriting policies for companies that use security solutions from particular vendors. In the end, if your vertical is targeted by cyber attackers, you might want to plan for increased premiums and more hoops to jump through.

Cybersecurity Evaluation and Validation Becomes a Top Factor in Selecting Vendors and Partners

The past two years have been packed with what seems like five years’ worth of digital supply chain breaches. A digital supply chain breach is one where an insecurity in a vendor’s software or hardware, whether a product flaw or a compromise of the vendor’s own network, introduces a security hole that exposes your organisation to a breach. Common examples include the SolarWinds and Piriform attacks – where breaches of those companies’ networks resulted in attackers booby-trapping popular products like Orion and CCleaner. Another example is the Kaseya event, where a zero-day vulnerability in the company’s popular VSA product exposed customers who used it to a ransomware attack.
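One baseline control against booby-trapped downloads like the CCleaner case is refusing to install anything whose digest doesn’t match the checksum the vendor publishes out of band. The minimal Python sketch below illustrates the idea; the file path and expected digest are placeholders you would supply, not real values:

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python verify_download.py <installer-file> <published-sha256>
    path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(path)
    if actual == expected:
        print("OK: digest matches the vendor's published checksum")
    else:
        print(f"MISMATCH: computed {actual}; do not install")
        sys.exit(1)
```

Note the limits of this check: it only proves you received exactly what the vendor published. In the SolarWinds case the build pipeline itself was compromised, so the signed, published Orion update was the malware, which is precisely why a vendor’s internal security practices matter as much as your own.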

With a surge in these supply chain attacks, organisations are increasingly concerned with the security of the partners and vendors they do business with. After spending so much time refining their own defenses, it would feel especially frustrating to fall victim to someone else’s security errors. As a result, companies are making a vendor’s own internal security practices a key part of the product selection decision. In fact, vendor validation and third-party risk analysis have even become a new industry vertical, with products that help survey and track your outside vendors’ security programs. In short, during 2023, the internal security of vendors will become a top selection factor for software and hardware products and services – right below price and performance.

Identity – The First Metaverse Hack vs. MFA Social Engineering Surge

The First Big Metaverse Hack Will Affect Business Through New Productivity Use Cases

Whether you love or hate the idea, the metaverse has been making headlines lately. Huge companies like Meta (Facebook) and TikTok’s parent company, ByteDance, are investing billions into building the connected virtual/mixed/augmented worlds they believe will become a mainstream part of society in the not-too-distant future. But the virtual reality (VR) metaverse offers great new potential for exploitation and social engineering. We already leak a lot of our private data online via mouse and keyboard ‒ now imagine a device with numerous cameras and infrared (IR) and depth sensors that track your head, hand, finger, face and eye movements, too. In addition, consider the device mapping your room, furniture and even your house in 3D as you move around, while also tracking things like your laptop keyboard. All of this happens today if you use a modern VR or mixed reality (MR) headset like the Meta Quest Pro. Now imagine software keeping historical records of all this tracked data. What could a malicious hacker do with it? Perhaps create a virtual deepfake of your online avatar that can also move and act like you do.

While these potential threat vectors may still be five to ten years away, that doesn’t mean the metaverse isn’t already being targeted today. Instead, we think the first metaverse attack affecting business will come from a well-known threat vector reimagined for the VR future. Near the end of 2022, Meta released the Meta Quest Pro as an “enterprise” VR/MR headset for productivity and creativity use cases. Among other things, the Meta Quest Pro lets you open a remote connection to your traditional computer desktop, displaying your computer’s screen in a virtual environment and spreading it across multiple virtual monitors and workspaces. It even allows a remote employee to launch virtual (vs. video) meetings that supposedly enable you to interact in a much more human fashion. As fancy as this may sound, it essentially leverages the same remote desktop technologies as Microsoft’s Remote Desktop or Virtual Network Computing (VNC) ‒ the same type of remote desktop technologies that cybercriminals have targeted and exploited countless times in the past. That is why in 2023, we believe the first big metaverse hack that affects a business will result from a vulnerability in new enterprise productivity features, like remote desktop, used in the latest generation of VR/MR headsets targeting enterprise use cases.
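Because those VR productivity features ride on the same remote desktop protocols attackers already probe for, the familiar hygiene still applies: know which hosts on your network expose remote desktop services at all. As a rough illustration, the Python sketch below flags hosts accepting TCP connections on common RDP/VNC ports; the addresses are RFC 5737 documentation placeholders, and a real audit would use a proper scanner rather than this toy loop:

```python
import socket

# Well-known default ports for common remote desktop services.
REMOTE_DESKTOP_PORTS = {3389: "RDP", 5900: "VNC", 5901: "VNC"}

def exposed_services(host, timeout=1.0):
    """Return the remote desktop services accepting TCP connections on host."""
    found = []
    for port, name in REMOTE_DESKTOP_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(f"{name} ({port})")
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

if __name__ == "__main__":
    for host in ("192.0.2.10", "192.0.2.11"):  # placeholder addresses
        services = exposed_services(host)
        if services:
            print(f"{host} exposes: {', '.join(services)}")
```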

MFA Adoption Fuels Surge in Social Engineering

Threat actors will aggressively target multi-factor authentication (MFA) users in 2023 as increased MFA adoption requires attackers to find some way around these security validation solutions. Confirming what we’ve previously predicted, MFA adoption is up six percentage points to 40% this year, according to a Thales survey conducted by 451 Research. This will push cyber attackers to rely more on malicious MFA bypass techniques in their targeted credential attacks; otherwise, they will lose out on a certain caliber of victim. We expect several new MFA vulnerabilities and bypass techniques to surface in 2023. However, the most common way cybercriminals will sidestep these solutions is through smart social engineering. For instance, the success of push bombing isn’t an MFA failure per se; it’s caused by human error. Attackers don’t have to hack MFA if they can trick your users or simply wear them down with a deluge of approval requests that eventually drive them to click on a malicious link. Attackers can also update their adversary-in-the-middle (AitM) techniques to include the MFA process, capturing authentication session tokens when users legitimately log in. In either case, expect many more MFA-targeted social engineering attacks during 2023.
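Defenses against these tricks are correspondingly behavioral rather than cryptographic. The Python sketch below illustrates two widely recommended mitigations for push bombing, rate-limiting prompts and number matching; the thresholds and function names are our own illustrative assumptions, not any MFA vendor’s actual API:

```python
import random
import time
from collections import defaultdict, deque

PUSH_WINDOW_SECONDS = 300    # look-back window for counting push prompts
MAX_PUSHES_PER_WINDOW = 3    # prompts allowed before we stop and alert

_recent_pushes = defaultdict(deque)  # user -> timestamps of recent prompts

def allow_push(user, now=None):
    """Rate-limit push prompts so an attacker can't wear a user down with a
    deluge of approval requests (push bombing)."""
    now = time.time() if now is None else now
    q = _recent_pushes[user]
    while q and now - q[0] > PUSH_WINDOW_SECONDS:
        q.popleft()              # discard prompts outside the window
    if len(q) >= MAX_PUSHES_PER_WINDOW:
        return False             # suppress the prompt; alert security instead
    q.append(now)
    return True

def number_matching_challenge():
    """Number matching: the login screen shows one two-digit number and the
    phone offers three choices. A user who never initiated the login can't
    see the screen, so they have nothing to match and nothing to approve."""
    correct = random.randint(10, 99)
    decoys = random.sample([n for n in range(10, 100) if n != correct], 2)
    choices = decoys + [correct]
    random.shuffle(choices)
    return correct, choices
```

AitM token theft is a different problem: rate limits don’t help once a session token is captured, which is why phishing-resistant methods such as FIDO2 security keys, which bind the authentication to the legitimate site’s origin, are the stronger answer there.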

Hacking AI Robotaxis vs. Vulnerability Proliferation through AI Coding Tools

A Novel Robotaxi Hack Will Result in a Dazed and Confused AI Car

Several tech companies like Cruise, Baidu and Waymo have started testing robotaxis in many cities around the world, including San Francisco and Beijing. Robotaxis are basically self-driving cars that provide an Uber or Lyft-like experience, but without a human driver. Companies like Baidu claim they have already successfully completed over a million of these autonomous trips for mostly delighted passengers, and you can imagine how businesses would be drawn to the cost savings of eliminating their gig economy workforce.

That said, the pilot projects haven’t all been unicorns and rainbows. In June, one of Cruise’s robotaxis was involved in an accident that injured its three passengers as well as the driver of the other vehicle. While Cruise claims the human-driven vehicle seemed at fault, that doesn’t help people trust the artificial intelligence (AI) these cars use to drive themselves, especially when simple tricks like creatively placed road salt have confused them before. Previous security research has shown that internet-connected cars can get hacked, and humans have already proven that you can socially (or should we say, “visually?”) engineer AI. When you combine those two things with a mobile phone-based service that anyone can use, we’ll surely see at least one cybersecurity incident where threat actors target robotaxis for fun and profit. Since these autonomous vehicle services are so new and still in testing, we do not believe a hack will result in a dangerous accident in the near future. However, in 2023, we suspect some security researchers or grey hat hackers could perpetrate a technical robotaxi prank that causes one such vehicle to get stuck not knowing what to do, potentially holding up traffic.

AI Coding Tools Introduce Basic Vulnerabilities to New Developers’ Projects

While machine learning (ML) and artificial intelligence (AI) haven’t become quite as all-powerful as some tech evangelists claim, they have evolved significantly to offer many new practical capabilities. Besides generating new art from written prompts, AI/ML tools can now write code for lazy (or smartly efficient) developers. In both cases, the AI draws on existing art or computer code to generate its new creations.

GitHub’s Copilot is one such automated coding tool. GitHub trains Copilot using the “big data” of billions of lines of code found in its repositories. However, as with any AI/ML algorithm, the quality of its output is only as good as the quality of the training data going into it and the prompts it’s given to work with. Put another way, if you feed AI bad or insecure code, you can expect it to deliver the same. Studies have already shown that up to 40% of the code Copilot generates has included exploitable security vulnerabilities, and this percentage increases when the developer’s own code contains vulnerabilities. This is a big enough issue that GitHub is quick to warn, “You are responsible for ensuring the security and quality of your code [when using Copilot].”
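To make that concrete, here is an illustrative example (with a hypothetical users table) of one of the flaw classes such studies flag most often, SQL injection through string-formatted queries, next to the parameterized version a careful reviewer should insist on:

```python
import sqlite3

# The vulnerable pattern: assembling SQL from user input with string
# formatting. It is rampant in public repositories, so assistants trained
# on that code reproduce it readily.
def find_user_unsafe(conn, username):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    # username = "' OR '1'='1" turns this into a query returning every row
    return conn.execute(query).fetchall()

# The fix: a parameterized query, so input is treated as data, never as SQL.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```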

In 2023, we predict an ignorant and/or inexperienced developer who is overly reliant on Copilot, or a similar AI coding tool, will release an app that includes a critical vulnerability introduced by the automated code.
