Enhancing cybersecurity with ChatGPT-4

CSIRO explores the role of ChatGPT-4 in supporting human analysts, easing workloads and improving efficiency within security operations.

CSIRO, Australia's national science agency, has completed an in-depth analysis of a 10-month trial conducted with global cybersecurity firm eSentire. The trial evaluated how large language models (LLMs), exemplified by ChatGPT-4, can support cybersecurity analysts in identifying and thwarting threats, while simultaneously reducing mental fatigue.

Data was gathered at eSentire’s Security Operations Centres (SOCs) in Ireland and Canada. It was anonymised and centred on the daily work of analysts responsible for tracking, investigating, and responding to cyberattacks.

Throughout the trial, 45 cybersecurity professionals interacted with ChatGPT-4, posing over 3,000 questions primarily focused on routine but crucial tasks such as interpreting technical details, editing reports, and analysing malware code. "ChatGPT-4 supported analysts with tasks like interpreting alerts, polishing reports, or analysing code, while leaving judgement calls to the human expert," noted Dr Mohan Baruwal Chhetri, Principal Research Scientist at CSIRO’s Data61.

By integrating AI within regular workflows, CSIRO aims to augment human expertise rather than replace it. "This collaborative approach adapts to the user’s needs, builds trust, and frees up time for higher-value tasks," Dr Baruwal Chhetri elaborated.

Undertaken as part of CSIRO’s Collaborative Intelligence (CINTEL) program, the study examines how human-AI collaboration can improve performance and wellbeing across fields, notably cybersecurity, where analyst fatigue poses a growing challenge.

SOC teams are often overwhelmed by alerts, many of them false positives, which increases the risk of missed threats, declining productivity, and burnout. Beyond cybersecurity, human-AI collaboration could also benefit sectors such as emergency response and healthcare.

The trial, described by Dr Martin Lochner as the first significant long-term study of its kind in an industrial setting, demonstrates how LLMs can be deployed effectively in real-world cybersecurity operations and will help shape future AI tools for SOC teams.

A key insight was that analysts rarely sought direct answers: only four per cent of requests asked for one. Instead, analysts valued evidence and context that supported their own decision-making.

With the initial study concluded, CSIRO plans a follow-up phase that will examine ChatGPT-4 usage patterns over a two-year span. This extended research will combine qualitative analysis of analyst experiences with quantitative log data to more thoroughly assess AI’s influence on productivity in SOC environments.
