AI-Powered Social Engineering Threats

Generative AI is rapidly transforming social engineering, making cyberattacks more convincing, scalable, and harder to detect. Once limited by poor grammar and generic templates, phishing schemes can now be tailored with alarming precision using AI models capable of real-time web searches, workflow automation, and multilingual fluency. IBM’s X-Force team highlights how attackers use AI not just to write emails, but also to build malicious websites, generate fake media, and automate entire campaigns—lowering the barrier for cybercriminals while increasing their reach and impact.

The rise of AI agents could push attacks into new territory. These tools can gather data, analyze it, and execute personalized attacks at speed and scale. By scraping public content from job postings, press releases, or social media, attackers can craft hyper-targeted messages or malware customized to their victims’ environment. Combined with deepfake audio and video, these techniques blur the line between digital deception and reality, forcing defenders to rethink traditional training and detection models.

Experts warn that training and data hygiene are now more important than ever. IBM’s Stephanie Carruthers emphasizes that while AI-generated attacks may read flawlessly, they still rely on familiar manipulation tactics—emotional urgency, impersonation, and recycled scam narratives. That’s why security awareness training must evolve to address these patterns directly and be delivered more frequently. Organizations are also urged to limit oversharing online, including details in job listings or employee photos that attackers can weaponize with AI.

Kosinski, Matthew, and Stephanie Carruthers. 2025. “With Generative AI, Social Engineering Gets More Dangerous—and Harder to Spot.” IBM, May 19.

READ: https://bit.ly/44QemLC 