AI makes identifying phishing emails trickier than ever

Phishing attacks can feel relentless, and artificial intelligence (AI) tools such as large language models (LLMs) like ChatGPT make them even more challenging to identify. Historically, phishing emails, which often rely on a sense of urgency to steal personal information, were riddled with poor grammar and illogical requests. Now, AI is helping scammers create phishing emails that look genuine. The University’s IT Security team shares a few ways AI is making phishing emails harder to identify:

  • Goodbye Grammatical Errors: AI can write messages with perfect grammar and natural-sounding language.
  • Vishing: Voice cloning technology is emerging, allowing scammers to mimic a real person’s voice over the phone — a tactic known as voice phishing, or “vishing.” This adds a whole new layer of deception to phishing attempts.
  • Personalization: AI can analyze social media profiles to personalize emails with details specific to you. These tactics can create a powerful sense of legitimacy.
  • Chatbots: Phishing attempts aren’t limited to email anymore. AI chatbots can be used to impersonate customer service representatives or other trusted figures, carrying on conversations to trick you into giving away personal information.

Learn more about how to avoid falling for phishing scams.