Hoxhunt, a cybersecurity behavior change software company, released a research report analyzing the effectiveness of ChatGPT-generated phishing attacks. The study, which covered more than 53,000 email users in more than 100 countries, compared the win rates of simulated phishing attacks created by human social engineers with those created by AI large language models.
While the potential for ChatGPT to be used for malicious phishing activity captures everyone’s imagination, Hoxhunt’s research highlights that human social engineers still outperform AI in terms of inducing clicks on malicious links.
The study revealed that professional red teamers induced a 4.2 percent click rate, versus a 2.9 percent click rate for ChatGPT, in the population sample of email users. Humans remained clearly better at hoodwinking other humans, outperforming AI by 69 percent, according to the report.
The study also revealed that users with more experience in a security awareness and behavior change program displayed significant protection against both human- and AI-generated phishing emails, with failure rates dropping from more than 14 percent among less trained users to between 2 and 4 percent among experienced users.
The research ultimately shows that AI can be used for good or ill, to educate and to attack. It therefore creates opportunities for attackers and defenders alike.
The human layer is by far the largest attack surface and the greatest source of data breaches, with at least 82 percent of breaches involving the human element. While large language model-augmented phishing attacks do not yet perform as well as human social engineering, that gap is likely to close, and attackers are already using AI. This makes it imperative that security awareness and behavior change training evolve dynamically with the threat landscape to keep people and organizations safe from attacks.