Artificial Intelligence in Cybersecurity: Assessing the Benefits & Risks

By Jesper Trolle, CEO of Exclusive Networks.


Across the globe, we are seeing how AI is modernising industries and reshaping businesses with its ability to simplify work and analyse big data. Research suggests the AI market will reach $53.9 billion by 2028, growing at a rate of 32.2% each year.

In the cybersecurity industry, AI is playing a key role in detecting and preventing cyber threats in real time, analysing large data sets and recognising new threats. However, in the cyber sector AI is increasingly becoming a double-edged sword, as it also enables bad actors to execute more sophisticated and efficient attacks.

Here we take a look at the benefits of AI in cybersecurity, the issues around its use, and how we can address them: complementing AI with human experts, prioritising AI regulation, and fostering international collaboration.

AI combined with human supervision can bolster cybersecurity efforts

First, the integration of AI has transformed the cybersecurity industry, eliminating pain points around cumbersome manual threat detection, time-consuming alert triage and slow incident response. The technology's ability to automate tasks, quickly identify anomalies in network traffic and analyse vast amounts of data has changed the ways organisations can defend themselves. As AI continues to evolve, we can expect even greater efficiency in our future cybersecurity efforts. It's encouraging to see our industry recognising these benefits, with research suggesting that 82% of IT decision-makers plan to invest in AI-driven cybersecurity over the next two years.

Although AI has enhanced cybersecurity efforts, we must remind ourselves of the need for human supervision. The technology still lacks the contextual awareness and creative problem-solving capabilities of humans, so people will need to complement AI, providing critical insight and improving the models.
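To make the detection side of this concrete, here is a minimal, illustrative sketch of the kind of anomaly detection described above: an unsupervised model (scikit-learn's IsolationForest) learns what "normal" traffic looks like and flags outliers for a human analyst to review. The flow features and values are assumptions for illustration, not a production pipeline or any specific vendor's method.

```python
# Illustrative sketch only: flagging anomalous network flows with an unsupervised model.
# The feature set (bytes sent, packets/sec, distinct destination ports) is an assumption.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" traffic: 1,000 flows with typical volumes and port usage.
normal_traffic = rng.normal(loc=[5000, 40, 3], scale=[1500, 10, 1], size=(1000, 3))

# A single suspicious flow, e.g. a burst resembling data exfiltration or a port scan.
suspicious = np.array([[250000, 900, 60]])

# Train the model on baseline traffic; contamination sets the expected outlier share.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# predict() returns 1 for inliers and -1 for anomalies that a human should triage.
print(model.predict(suspicious))            # -> [-1], flagged for analyst review
print(model.decision_function(suspicious))  # more negative = more anomalous
```

The point of the sketch is the division of labour: the model handles the volume and speed, while the flagged result still lands with a human who supplies the context the model lacks.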

The need for AI regulation

Next, given that AI is still in the early phases of development, its algorithms exhibit biases, and the lack of transparency in AI models is creating mistrust in the technology. We are seeing content that reproduces racial and cultural stereotypes, a result of the lack of diversity among AI developers, which leaves blind spots when addressing discriminatory outcomes. When companies do not promote openness and explainability in their AI systems, it undermines the cybersecurity industry's ability to effectively identify and mitigate cyber threats.

With the potential for AI systems to create false threat assessments and mischaracterise attacks, it is essential to implement AI regulations to establish a secure environment for the cybersecurity industry. This could mean regulations that address bias and ensure AI systems are impartial. We also need regulation that strikes a balance between innovation and safeguarding our best interests. Having such frameworks in place will also encourage independent audits of AI systems and promote the ethical use of AI.

Why international collaboration is critical

Finally, in our rapidly changing cybersecurity landscape, new attack methods and vulnerabilities are emerging every day. Cybercriminals are already using AI to launch more sophisticated attacks, such as phishing with highly personalised emails, advanced malware and faster password cracking.

To address this, cross-border collaboration is essential, pooling expertise and knowledge to develop more comprehensive defences against cyber threats. Cooperation amongst governments, industries and companies brings together a diverse set of perspectives and resources, enabling quicker detection of and response to threats. At Exclusive Networks, we have partnered with organisations such as the International Chamber of Commerce and Belgium’s Cyber Security Coalition to share valuable information and best practices.

What’s next?

Undeniably, AI has massively improved our cybersecurity capabilities, strengthening threat detection, vulnerability testing and authentication. The issue is that AI is also facilitating new forms of attack and expanding the threat landscape. As for the industry's next steps, we need to prioritise the deployment of responsible AI and focus on how AI systems are designed and developed.
