By Trystan Schlottmann, NERC Reliability Analyst
The world of phishing is rapidly evolving and full of skullduggery and manipulation. As technology advances, malicious actors can use these developments to target individuals more effectively and efficiently through phishing, vishing, and other attack vectors. The latest development, AI-powered chatbots, has closed a significant gap in attackers' ability to exploit the most vulnerable vector into a company: the human element.
If you ask ChatGPT to supply you with a phishing email, it will give you a generic message about how it can't aid or abet illegal or immoral activity. But if you ask it for a training example of a phishing email to demonstrate manipulative tactics, it will supply one, with a warning at the start and end of the message. Copy and paste the important parts, fill in the bracketed placeholders, and suddenly you have a phishing email that passes our usual 'smell test': the bad grammar and broken English that were indicative of phishing emails in the past are gone.
Here is an excerpt of the example phishing email that the free version of ChatGPT supplied:
“Dear [Recipient’s Name],
We regret to inform you that our system has detected suspicious activity on your account. To ensure the security of your valuable data and prevent unauthorized access, we kindly request your immediate cooperation in completing the mandatory account verification process.
Failure to comply within 48 hours may result in temporary account suspension or even permanent closure. We take your privacy and account security seriously, and this additional step is crucial to safeguard your information.
To begin the verification process, please click on the link below, which will take you to our secure portal:
[Malicious Link Here]”
An urgent notification with a reassuring message about encryption and confidentiality. Paired with the right delivery method (or access to an account already compromised inside the network), this would be particularly effective at harvesting resources and credentials from less tech-savvy or less paranoid users. As AI development continues its rapid improvement, we will see these systems integrated into more and more of the attacks across cyberspace.
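Notably, the manipulation cues themselves (urgency, a hard deadline, a threat of account loss, a link to click) remain detectable even when the grammar is flawless. As a toy illustration of shifting the 'smell test' from grammar to content, a rule-based pre-filter might score a message on those cues; the cue patterns and weights below are invented for demonstration and are not a vetted ruleset:

import re

# Toy heuristic: score an email body on classic social-engineering cues.
# The patterns and weights are illustrative only, not a production ruleset.
CUES = {
    r"\bsuspicious activity\b": 2,
    r"\bverif(y|ication)\b": 1,
    r"\bwithin \d+ hours\b": 2,            # hard deadline
    r"\b(suspension|closure|locked)\b": 2,  # threat of account loss
    r"\bclick (on )?the link\b": 2,
    r"\bimmediate(ly)?\b": 1,
}

def urgency_score(body: str) -> int:
    """Sum the weights of every cue pattern found in the message."""
    text = body.lower()
    return sum(weight for pattern, weight in CUES.items() if re.search(pattern, text))

sample = (
    "We regret to inform you that our system has detected suspicious "
    "activity on your account... Failure to comply within 48 hours may "
    "result in temporary account suspension... please click on the link below"
)
print(urgency_score(sample))  # 2+2+2+2 = 8 for the excerpt above

A heuristic like this is brittle on its own, but it shows that the persuasion tactics, unlike the grammar, are something AI-written phish still cannot hide.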
For now, phishing filtration systems will need to catch up with generative AI models by implementing filters that can detect machine-written text. Such tools are already being deployed in academia to catch essays written by generative models with reasonable accuracy, but they have not yet been implemented in an autonomous, inline capacity within email defenses.
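To make that filter stage concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library and its publicly released GPT-2 output detector model; the screen_email helper, the flagging threshold, and the mail-pipeline integration are all illustrative assumptions rather than an existing product:

from transformers import pipeline

# Load a publicly available AI-text detector (a GPT-2 output classifier),
# used here purely for illustration; newer detectors would slot in the same way.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

def screen_email(body: str, threshold: float = 0.9) -> bool:
    """Return True if the message body should be flagged for human review."""
    # truncation=True keeps long bodies within the model's input limit
    result = detector(body, truncation=True)[0]
    # This particular model labels text "Real" or "Fake"; adjust the label
    # check if the detector you deploy uses a different scheme.
    return result["label"] == "Fake" and result["score"] >= threshold

sample = (
    "We regret to inform you that our system has detected suspicious "
    "activity on your account. To ensure the security of your valuable "
    "data, we kindly request your immediate cooperation."
)
print(screen_email(sample))

In practice a verdict like this would be one signal among many (sender reputation, link analysis, the heuristic cues above) rather than a standalone block/allow decision, since detectors of machine-written text still produce false positives on polished human prose.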