Can Attackers Use AI Too?
Generative Artificial Intelligence (AI) is undoubtedly a fascinating and helpful tool. Models like ChatGPT can be used to draft emails to potential clients, write code, aid in research efforts, and even generate a picture of something that has never existed. AI has countless uses that make work life easier and more efficient. As with many useful tools, though, generative AI can also be used for more nefarious purposes by those who have learned to exploit it.
Previously, many phishing emails could be identified by poor grammar and misspelled words. However, in the same way that a business professional can use ChatGPT to craft an email to a client, an attacker can use AI to create a convincing phishing email with perfect spelling and grammar.
Ashley D’Andrea at Keeper gave an example of just how easy it is to draft a phishing email with AI in her article How AI is Making Phishing Attacks More Dangerous. Simply typing in “Draft a phishing email” results in a message saying, “This content may violate our usage policies.” However, when asked to draft an example of a phishing email, ChatGPT obliged. The example it provided asserts that unusual activity was detected in the victim’s account and that the account information must be updated, using the link provided, within 24 hours. The email adds, “Failure to complete this process within 24 hours may result in a temporary suspension of your account for security reasons” (D’Andrea). The cherry on top: there are no spelling or grammar errors to be found. To an untrained eye, the email would look perfectly legitimate.
D’Andrea’s example shows a traditional spray-and-pray style phishing email, which has no specific target. Attackers can use AI for spear phishing, too. Spear phishing attacks are more personalized, targeting one individual or a specific group of people, and attackers typically need to research their targets to make an attempt effective. By using AI to research their targets and write the messages, attackers can make these otherwise laborious and time-consuming tasks simple and quick, and just as effective as doing them manually. AI can even analyze how someone typically writes emails, allowing an attacker to impersonate them more effectively and making a spear phishing attempt more convincing (D’Andrea).
Fredrik Heiding, Bruce Schneier, and Arun Vishwanath explain in their Harvard Business Review article, AI Will Increase the Quantity—and Quality—of Phishing Scams, that the phishing process can now be fully automated using AI. Automating the process lowers the cost of launching a phishing attack by more than 95% while achieving nearly the same results as going through the process manually. Using AI makes phishing attacks faster, easier, and significantly cheaper while remaining just as effective. It is no surprise, then, that IT leaders reported a 51% increase in AI-powered attacks in a survey commissioned by Keeper in 2024 (D’Andrea).
Cybersecurity education is more important than ever. Phishing attacks continue to become more sophisticated, and as they evolve, cybersecurity awareness and education need to evolve with them. Businesses should ensure their security policies and guidelines are updated to reflect changes in how attacks are conducted. The technology in a business can be as secure as Fort Knox, but if the people within an organization are ill-equipped to recognize phishing attempts, there will always be gaping holes in its security.
Checking for grammar and spelling errors is no longer the best way to spot a phishing email, but there are still warning signs to watch for. When discerning whether an email is legitimate, look out for a sense of urgency and authority in the message. Attackers will often use language such as “complete these steps within 24 hours.” Occasionally, as in D’Andrea’s example, these messages include threats like “your account will be deactivated if these steps are not completed within 24 hours.” The attacker appeals to a sense of authority by posing as someone legitimate, such as a company the victim holds an account with. In a spear phishing attempt, the sender could even appear to be an employee’s boss or coworker. Using urgent language while impersonating figures of authority is designed to make the victim act without taking time to consider whether the message can be trusted.
When you receive a message or email that seems phishy (pun intended), the first thing to do is relax and ask yourself, “Is this actually a legitimate message?” Do not click on any links, open any attachments, respond to the message, or even open the message if it can be avoided. To confirm whether a suspicious message or email is legitimate, contact the supposed sender directly using known contact information. For example, if a suspicious email from your bank pops into your inbox, call the bank’s official helpline to confirm whether the email is legitimate. As always, when in doubt, don’t click the link!
To learn more about how to protect your business from being a phisher’s next catch, contact SKBInfo@skbcyber.com.