How the Channel Can Harness the Power of Artificial Intelligence

By Guy March, Senior Director of EMEA Channels, Tenable.


You don’t have to spend much time researching Artificial Intelligence (AI) before tripping over messaging about how various products have been revolutionised by the technology. There is excitement about the opportunities it could deliver in healthcare treatments and medical advancements, the possibilities for our transport networks and the vehicles that use them, the prospect of visiting the Metaverse. The list really does go on. However, there is also concern about the dangers it presents, not least within cyber criminality, where bad actors are using it to do bad things.

So what is the AI reality?

The Power of AI

Historically, Artificial Intelligence (AI) was used primarily to analyse data. Machine learning, an application of AI, uses mathematical models of data to help a computer learn without direct instruction. Deep learning, part of a broader family of machine learning methods, structures algorithms in layers to create an “artificial neural network” that can learn and make intelligent decisions on its own. Today, with Generative AI, a subset of AI, it is possible not only to learn about artefacts from data but to go further and generate innovative new creations that are similar to, but don’t repeat, the original.
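To make the idea of “layers” concrete, here is a minimal, illustrative sketch of a tiny feedforward neural network in Python. The layer sizes, random weights and input are assumptions chosen purely for illustration and are not taken from any particular product or model.

```python
# A minimal sketch of the "layers" idea behind deep learning, using only NumPy.
# Sizes, weights and the input below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Four layer sizes: input -> hidden -> hidden -> output.
layer_sizes = [4, 8, 8, 2]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass an input vector through every layer in turn."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)                   # hidden layers apply a non-linearity
    return x @ weights[-1] + biases[-1]       # final layer produces the output

print(forward(rng.normal(size=4)))            # two output values for a random input
```

Each layer transforms the output of the one before it, which is what lets the network as a whole learn patterns that no single layer could capture on its own.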

Harnessing the power and speed of Generative AI tools, such as Google Vertex AI, OpenAI GPT-4, LangChain and many others, it is possible to return new, intelligent information in minutes. This can be used to accelerate research and development cycles in fields ranging from medicine to product development and, of course, cybersecurity. Generative AI is going to change the way humans interact with software, computing devices and the cloud.
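As an illustrative sketch of how quickly that interaction can happen, the snippet below asks GPT-4, via OpenAI’s official Python SDK, to condense some raw analyst notes into a short summary. The notes, the prompt wording and the assumption that an OPENAI_API_KEY environment variable is set are all hypothetical.

```python
# A minimal sketch of querying a generative model (GPT-4 via the official
# OpenAI Python SDK) to turn raw notes into a quick summary.
# The notes and prompt are illustrative assumptions; OPENAI_API_KEY is assumed set.
from openai import OpenAI

client = OpenAI()

raw_notes = (
    "CVE-XXXX-YYYY affects the web front end; exploit code circulating; "
    "patch available; 40 internet-facing hosts still unpatched."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a security analyst assistant."},
        {"role": "user", "content": f"Summarise the risk and the next step in two sentences: {raw_notes}"},
    ],
)

print(response.choices[0].message.content)
```

The point is not the specific prompt but the turnaround: a task that might take an analyst an hour of drafting comes back in seconds, ready for a human to review.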

Speaking specifically from a cyber perspective, the biggest impact it will have is enabling security professionals to better interact with security data. Most vendors are investing significant resources into leveraging AI to solve foundational security problems. Harnessing the power of AI potentially enables security teams to work faster, search faster, analyse faster and ultimately make decisions faster.

That said, Generative AI depends on the breadth and quality of its data to provide clear and accurate insights. If you have unique data, then you’re going to have unique intelligence guiding decisions. It’s truly “garbage in, garbage out” (or “gold in, gold out”) depending on the source. MSSPs are perfectly positioned to help their customers by feeding reliable intelligence into their data sets, drawing from a mix of multiple point solutions to aggregate this information and deliver the strategic actions that will reduce cyber risk.
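As a rough illustration of that “gold in” aggregation step, the sketch below merges findings from several hypothetical point solutions into one de-duplicated view per asset before any model sees it. The tool categories, fields and records are invented for the example.

```python
# A minimal sketch of an MSSP aggregating findings from several point solutions
# into one de-duplicated data set. Tools, fields and records are illustrative.
from collections import defaultdict

vuln_scanner  = [{"asset": "web-01", "issue": "outdated TLS", "severity": "medium"}]
edr_alerts    = [{"asset": "web-01", "issue": "suspicious process", "severity": "high"},
                 {"asset": "db-02",  "issue": "suspicious process", "severity": "high"}]
cloud_posture = [{"asset": "db-02",  "issue": "public storage bucket", "severity": "high"}]

def aggregate(*sources):
    """Merge findings by asset, dropping exact duplicates across tools."""
    merged = defaultdict(set)
    for source in sources:
        for finding in source:
            merged[finding["asset"]].add((finding["issue"], finding["severity"]))
    return {asset: sorted(findings) for asset, findings in merged.items()}

combined = aggregate(vuln_scanner, edr_alerts, cloud_posture)
for asset, findings in combined.items():
    print(asset, findings)
```

A combined, cleaned view like this is the kind of “gold” input that lets a generative model produce advice worth acting on.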

Making sense of it all can be daunting. Customers are looking to their channel partners to cut through the polished marketing and understand exactly what is being offered. They want strong counsel on what the solution is capable of, what they need it for, and what is unnecessary expense.

AI turned rogue

While created as a tool for good, AI can just as easily be weaponised by malicious cyber attackers to accelerate their money-making schemes or even create misinformation. There are a number of ways in which generative AI can be leveraged maliciously. Simply put, generative AI is a method by which a model builds relationships between words and, when interacted with, can predict what a response should be based on those relationships, learning about artefacts from data and generating new creations that are similar to, but don’t repeat, the original.

We’re already seeing bad actors test the bounds of what’s possible, with AI used to create deepfake videos. Attackers are also harnessing the power of Generative AI to accelerate their capacity to "create" malicious emails, malware, and more. Now, instead of creating these malicious communications or software themselves, which is time consuming, they are using the speed and intelligence of Generative AI to write the malicious code and communications on their behalf. This means they can scale their illicit activity and launch attacks more quickly.

When you look at code from a generative AI perspective, it's just words. Looking at how code has been exploited in the past, and using that to find new zero-day vulnerabilities in other code sets, becomes much easier. We have seen one example where a security researcher was able to get a bot running in the Snapchat application to write basic code similar to the way ransomware locks a system down. We have also seen examples of phishing attacks becoming far more sophisticated and easily evading the algorithms of anti-spam software.

Whilst AI can be used to automate more targeted and convincing attacks, the flaws these attacks target haven’t changed. That means the foundation for defending against any style of attack, be it AI- or human-powered, remains unchanged. What has changed is the rate at which the cat-and-mouse game is played. Attackers are going to be much more efficient in many aspects.

The good news is that generative AI can also be a supercharger for cyber defenders.

AI harnessed by the good guys

Security teams have long struggled to address the challenge of prevention in the face of evolving attack techniques. With attackers constantly finding new ways to breach organisations, security teams struggle to keep up.

Attackers see many ways in and multiple paths through technology environments to do damage to organisations. Even with broad visibility of the attack surface, it is difficult for security teams to conduct analysis, interpret the findings and identify what steps to take to reduce risk as quickly as possible. As a result, security teams are constantly in react mode, delivering maximum effort but often a step behind the attackers.

A commissioned study of 100 U.K. based cybersecurity and IT leaders, conducted in 2023 by Forrester Consulting on behalf of Tenable, found that the average organisation was able to prevent 52% of the cyberattacks they encountered in the last two years. However, having only this much coverage left them vulnerable to 48% of the attacks faced, with security teams forced to focus time and efforts reactively mitigating rather than preventing attacks. Looking at what’s holding the teams back from switching focus, it was evident that time is not on their side. Six in 10 respondents (60%) say the cybersecurity team is too busy fighting critical incidents to take a preventive approach to reducing their organisation’s exposure. A different approach is needed.

AI has the potential to do just that. It can be used by cybersecurity professionals to search for patterns, to explain what they’re finding in the simplest language possible, and to decide what actions to take to reduce cyber risk.

AI can and is being harnessed by defenders to power preventive security solutions that cut through complexity to provide the concise guidance defenders need to stay ahead of attackers and prevent successful attacks. Harnessing the power of AI enables security teams to work faster, search faster, analyse faster and ultimately make decisions faster.
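As one hedged illustration of the “explain and decide” part, the sketch below uses LangChain (mentioned earlier) with GPT-4 to turn a short list of findings into a ranked, plain-English action list. The findings and prompt wording are assumptions, and it presumes the langchain-openai package is installed and an OPENAI_API_KEY is available.

```python
# A minimal sketch of turning aggregated findings into ranked, plain-English
# actions with LangChain and GPT-4. Findings and prompt text are illustrative.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Hypothetical findings; in practice these would come from the aggregated data set.
findings = "\n".join([
    "web-01: outdated TLS (medium)",
    "web-01: suspicious process (high)",
    "db-02: public storage bucket (high)",
])

llm = ChatOpenAI(model="gpt-4")  # assumes OPENAI_API_KEY is set

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a security analyst assistant for an MSSP."),
    ("human",
     "Here are today's findings:\n{findings}\n\n"
     "Rank them by risk and give one plain-English remediation action for each."),
])

# Pipe the prompt into the model and ask for a ranked action list.
chain = prompt | llm
result = chain.invoke({"findings": findings})
print(result.content)
```

The output is guidance for a human to act on, not an automated response; the decision on where and when to act stays with the defender.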

For MSSPs, generative AI is the same as any other new technology that enters the arena. There is a learning curve but, as defenders, we must take time to understand our data infrastructure to determine where the greatest risks lie, and then take steps to reduce that risk. It’s important to remember that, for now at least, while AI is capable of quickly identifying and automating some of the actions that need to be taken, it’s imperative that humans are the ones making the critical decisions on where and when to act.
