Research reveals a rise in novel social engineering attacks

Darktrace research reveals 135% increase in ‘novel social engineering’ attacks in 2023 amidst widespread availability of ChatGPT.


Darktrace has revealed that its researchers observed a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email customers from January to February 2023, corresponding with the widespread adoption of ChatGPT.  

 

These novel social engineering attacks rely on sophisticated linguistic techniques, including greater text volume, heavier punctuation, and longer sentences, while carrying no links or attachments. The trend suggests that generative AI tools such as ChatGPT are giving threat actors an avenue to craft sophisticated, targeted attacks at speed and scale. 
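To make those linguistic signals concrete, here is a minimal sketch in Python of how such features might be computed from an email body. The feature set, function names, and thresholds are illustrative assumptions of ours, not Darktrace’s detection logic:

import re

# Toy feature extractor for the signals described above: text volume,
# punctuation density, sentence length, and the absence of links.
def linguistic_features(body: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    words = body.split()
    return {
        "char_count": len(body),                          # overall text volume
        "punct_density": sum(c in ",;:!?()-" for c in body) / max(len(body), 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),  # words per sentence
        "has_link": bool(re.search(r"https?://", body)),
    }

# Hypothetical heuristic, not Darktrace's model: long, punctuation-heavy
# text that carries no link at all is flagged for closer inspection.
def looks_linguistically_novel(f: dict) -> bool:
    return f["char_count"] > 1500 and f["avg_sentence_len"] > 20 and not f["has_link"]

In practice, a detector would compare such features against a learned baseline for each sender and organisation rather than against fixed thresholds.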

 

In March 2023, Darktrace commissioned Censuswide to survey 6,711 employees across the UK, US, France, Germany, Australia, and the Netherlands, gathering third-party insights into human behaviour around email: how employees react to potential security threats, how well they understand email security, and the modern technologies being used as tools to transform the threats against them. 

 

In the UK, key findings from 1,011 respondents include: 

 

•       73% of employees are concerned that hackers can use generative AI to create scam emails that are indistinguishable from genuine communication 

•       The top three characteristics of communication that make employees think an email is a phishing attack are: being invited to click a link or open an attachment (62%), poor use of spelling and grammar (61%) and unknown sender or unexpected content (58%)  

•       Nearly 1 in 5 (19%) UK employees have fallen for a fraudulent email or text in the past 

•       58% of employees have noticed an increase in the frequency of scam emails and texts in the last 6 months 

•       80% of employees are concerned about the amount of personal information available about them online that could be used in phishing and other email scams 

 

Picture this scenario. Your CEO emails you to ask for information. It’s written in the exact language and tone of voice that they typically use. They even reference a personal anecdote or joke. Darktrace’s research shows that 61% of UK employees look out for poor spelling and grammar as a sign that an email is fraudulent, but this email contains no mistakes. The spelling and grammar are perfect, it contains personal information, and it’s utterly convincing. But your CEO didn’t write it. It was crafted by generative AI, using basic information that a cyber-criminal pulled from social media profiles. 

 

The emergence of ChatGPT has catapulted AI into the mainstream consciousness – nearly a quarter (24%) of UK respondents have already tried ChatGPT or other generative AI chatbots for themselves – and with it, real concerns have emerged about its implications for cyber defence, not least the 73% of employees cited above who worry that generative AI can produce scam emails indistinguishable from genuine communications. 

 

Emails appearing to come from CEOs or other senior business leaders are the third most likely type of email for employees to engage with, cited by almost one in five respondents (19%). Defenders are up against generative AI attacks that are linguistically complex and entirely novel: scams that use techniques and reference topics we have never seen before. 

 

Nearly a third of UK employees have sent an important email to the wrong recipient with a similar-looking alias, whether by mistake or due to autocomplete. This rises to over two in five (43%) in the financial services industry and 41% in the legal industry, adding another layer of security risk that isn’t malicious. A self-learning system can spot this error before the sensitive information is incorrectly shared. Unlike other email security tools, self-learning AI in email is not trained on what ‘bad’ looks like; instead, it learns each user and the normal patterns of life for each unique organisation. 

 

By understanding what’s normal, it can determine what doesn’t belong in a particular individual’s inbox. Email security systems get this wrong too often, with 71% of respondents saying that their company’s spam/security filters incorrectly stop important legitimate emails from getting to their inbox. 
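As a rough illustration of that ‘learn what’s normal’ idea, the sketch below keeps a per-sender history of recipients and flags an outgoing address that has never been seen before yet closely resembles a known contact – the autocomplete slip described above. The function names, similarity measure, and 0.85 threshold are hypothetical examples, not Darktrace’s actual model:

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Crude string similarity between two addresses, from 0.0 to 1.0.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_misdirected(recipient: str, known_contacts: set, threshold: float = 0.85):
    # An address already in the sender's history is a normal pattern of life.
    if recipient in known_contacts:
        return None
    scored = [(similarity(recipient, c), c) for c in known_contacts]
    score, nearest = max(scored, default=(0.0, ""))
    if score >= threshold:
        # Unseen address that looks almost identical to a trusted one:
        # hold the email for review before sensitive data leaves.
        return f"Possible misdirected email: did you mean {nearest}?"
    return None

# Example: a one-character alias slip is caught before the email is sent.
contacts = {"jane.doe@acme.com", "finance@acme.com"}
print(flag_misdirected("jane.doe@acme.co", contacts))

A production system would learn these contact patterns continuously for every user rather than taking a static set, but the principle – anomaly relative to learned normality, not signatures of known ‘bad’ – is the same.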

 

Max Heinemeyer, Chief Product Officer at Darktrace, commented on the findings: “Email security has challenged cyber defenders for almost three decades. Since its introduction, many additional communication tools have been added to our working days, but for most industries and employees, email remains a staple part of the job. As such, it remains one of the most useful tools for attackers looking to lure victims into divulging confidential information through communication that exploits trust, blackmails, or promises reward, so that threat actors can get to the heart of critical systems every single day. 

 

“The email threat landscape is evolving. For 30 years, security teams have trained employees to spot spelling mistakes, suspicious links, and attachments. While we always want to maintain a defence-in-depth strategy, there are diminishing returns in entrusting employees with spotting malicious emails. At a time when readily available technology lets anyone rapidly create believable, personalised, novel, and linguistically complex phishing emails, humans are more ill-equipped than ever to verify the legitimacy of ‘bad’ emails. Defensive technology needs to keep pace with the changes in the email threat landscape; we have to arm organisations with AI that can do that.” 
