Phishing and Business Email Compromise (BEC) continue to pose a significant risk to organizations. These threats are polymorphic and rapidly changing, designed to evade traditional email security measures. Attackers leverage social engineering tactics and exploit human trust to steal sensitive data or hijack financial transfers. Keep in mind that while phishing is the number one way cyber-attacks begin, it is rarely where they end; it almost always escalates into something else.
The challenge? Attackers are getting smarter. They leverage social engineering tactics, craft personalized messages, and exploit trusted sources to manipulate victims. Furthermore, the rise of Generative AI (GenAI) is adding a new layer of complexity, allowing attackers to create even more convincing phishing attempts, at a much greater scale.
This ever-increasing sophistication creates a significant burden on SOC analysts, who are often overwhelmed by the sheer volume of suspicious emails that evaded security solutions and were reported by employees. But there’s hope. The very technology that fuels the evolution of phishing attacks – Artificial Intelligence – can also be used to combat them.
This article explores the growing sophistication of phishing and BEC attacks, the challenges they pose for SOCs, and how AI-powered SOC analysts can be a powerful weapon in the fight against these ever-evolving threats.
Detecting Phishing Attacks Is Becoming More Difficult
Phishing and Business Email Compromise (BEC) attacks are becoming a multi-faceted threat for organizations. It is also a problem for which there is little external help: the scope of MDR/MSSP services often doesn't encompass investigating user-reported emails, leaving SOCs to address this area on their own.
Here’s why these attacks are increasingly evading detection:
- Mastering manipulation: Phishing attempts are becoming adept at social engineering. Attackers leverage urgency, fear, or a sense of authority to pressure recipients. They craft emails that appear to be from trusted colleagues, managers, or even your own IT department. This emotional manipulation can cloud judgment and lead to clicking malicious links or opening attachments.
- The personal touch: The generic approach is long gone. Criminals are weaving personal details, gleaned from data breaches or social media, into their messages. This personalization makes the emails feel more legitimate, increasing the chances of victims falling prey to the deception.
- Spoofing sophistication: Attackers can make it seem like the email originated from a trusted source, like a high-level executive. This is especially dangerous in BEC scams, where the goal is to trick employees into authorizing fraudulent transactions. (A minimal sender-authentication sketch follows this list.)
- Leveraging legitimate platforms: Cybercriminals are exploiting popular cloud services and other seemingly safe platforms to host malicious content. This makes it harder for traditional email security solutions to detect and block these threats, as they might appear harmless at first glance.
- GenAI, LLMs, and phishing: Attackers using Generative AI and Large Language Models (LLMs) no longer need to be fluent in the language of their victims. LLMs break down language barriers, allowing attackers to ask the AI to craft messages in any desired language. This is just one use of AI and LLMs in these types of attacks; we'll cover the complexity and influence of GenAI in greater detail in the next section.
- More polymorphic and ephemeral: The attack infrastructure (IPs, domains, etc.) and the attacks themselves (email subjects, content, etc.), the very elements that threat intelligence and other static defenses are designed to catch, are increasingly easy to generate automatically, especially with GenAI. This means attackers can spin up more permutations and use each one for a shorter period of time to evade detection.
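To make the spoofing point more concrete, here is a minimal sketch, in Python and using only the standard library, of the kind of sender-authentication check a triage script might run over a reported message: it reads the Authentication-Results header for SPF/DKIM/DMARC results and compares the From domain with the Return-Path domain. The file path and the naive domain parsing are illustrative assumptions, not a production-grade parser.

```python
from email import policy
from email.parser import BytesParser

def check_sender_authenticity(eml_path: str) -> dict:
    """Rough spoofing check: compare header vs. envelope sender domains
    and inspect the Authentication-Results header (SPF/DKIM/DMARC)."""
    with open(eml_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    from_addr = msg.get("From", "")
    return_path = msg.get("Return-Path", "")
    auth_results = (msg.get("Authentication-Results", "") or "").lower()

    # Naive domain extraction -- illustrative only, not RFC-complete parsing.
    from_domain = from_addr.split("@")[-1].strip("<> ").lower()
    envelope_domain = return_path.split("@")[-1].strip("<> ").lower()

    findings = {
        "domain_mismatch": from_domain != envelope_domain,
        "spf_pass": "spf=pass" in auth_results,
        "dkim_pass": "dkim=pass" in auth_results,
        "dmarc_pass": "dmarc=pass" in auth_results,
    }
    findings["suspicious"] = (
        findings["domain_mismatch"]
        or not (findings["spf_pass"] and findings["dmarc_pass"])
    )
    return findings

# Example usage (the file name is hypothetical):
# print(check_sender_authenticity("reported_email.eml"))
```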
These emails are increasingly successful at reaching the inboxes of would-be victims, where they are often flagged by employees as suspicious. This generates a larger workload for SOC analysts, who must review and respond to each report.
- Time-consuming process – Manually triaging user-reported phishing and BEC emails takes 5 to 20 minutes per email, a substantial drain on SOC analyst time. This generates a workload that often consumes multiple full-time employees within a SOC (see the rough calculation after this list).
- High false positive rate – As much as 95% of user-reported emails turn out to be safe, wasting SOC analyst time that could have been better spent on other tasks.
- The communication gap – A critical challenge lies in the communication gap between SOC analysts and those reporting suspicious emails. While analysts delve into investigations, reporters often wait for a response. Extended silence can lead to discouragement and a reluctance to report future suspicious emails. This lack of user engagement ultimately weakens the organization’s overall security posture.
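A rough back-of-the-envelope calculation shows how quickly this workload adds up. The daily report volume below is an assumed example figure; the per-email minutes and the false-positive rate come from the points above.

```python
# Back-of-the-envelope SOC workload estimate for user-reported emails.
# reports_per_day is an assumed example value; adjust for your organization.
reports_per_day = 100
minutes_per_email = 12.5          # midpoint of the 5-20 minute range above
false_positive_rate = 0.95        # share of reports that turn out to be benign

hours_per_day = reports_per_day * minutes_per_email / 60
ftes_needed = hours_per_day / 8   # assuming an 8-hour analyst shift
wasted_hours = hours_per_day * false_positive_rate

print(f"Triage time: {hours_per_day:.1f} hours/day (~{ftes_needed:.1f} FTEs)")
print(f"Of which spent on benign reports: {wasted_hours:.1f} hours/day")
```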
The constant barrage of email alerts can create a monotonous triage process, leading to analyst burnout and potentially impacting retention. The increasing difficulty of detecting phishing and BEC attacks further compounds this frustration, ultimately discouraging SOC analysts.
Generative AI Is Increasing the Danger of Phishing and BEC Attacks
The landscape of phishing and Business Email Compromise (BEC) attacks is undergoing a significant transformation fueled by the rise of Generative AI (GenAI). These powerful language models, like ChatGPT and similar Large Language Models (LLMs), are being weaponized by attackers to craft highly sophisticated email scams that evade traditional security measures and human intuition.
Many in the cybersecurity industry believe AI might be our secret weapon against phishing attacks. However, this technology is a double-edged sword. The bad actors are also taking advantage of it. In fact, a recent report by the National Cyber Security Centre (NCSC) paints a concerning picture of the future cyber threat landscape. The report predicts that over the next two years, hackers will leverage Artificial Intelligence (AI) to refine and amplify their existing tactics. This will ultimately lead to a surge in the volume and severity of cyberattacks.
The report further highlights that this trend isn’t limited to highly skilled attackers. Even novice cybercriminals, “hackers-for-hire,” and hacktivists are already utilizing AI to varying degrees. This democratization of AI-powered cybercrime tools is particularly worrying, as it empowers even the least skilled attackers to conduct effective information gathering and initiate a successful attack. In essence, the report suggests that these opportunistic attackers stand to benefit the most from this evolving threat landscape.
Here’s how GenAI is specifically amplifying the dangers of email-based phishing and BEC attacks:
- Personalized phishing at scale: GenAI can analyze vast amounts of data readily available online, including social media profiles and leaked information. As mentioned above, this allows attackers to personalize phishing emails with an unsettling degree of accuracy. Imagine receiving an email that not only appears to be from your boss but also references a specific project you’re working on or a recent conversation you had. This increased credibility significantly raises the success rate of these phishing attempts.
- Crafting convincing language: Gone are the days of clunky, grammatically incorrect phishing emails. GenAI can generate natural-sounding prose that mimics human writing styles and tones. This makes it harder to identify red flags based on language alone, further blurring the lines between legitimate and malicious emails.
- Dynamic content and subject lines: GenAI surpasses static phishing templates. It can dynamically generate email content and subject lines that adapt to the recipient and situation. This increases the likelihood of the email sparking a sense of urgency or relevance, tricking the recipient into clicking malicious links or attachments, while also evading static defense mechanisms, such as filters that look for specific keywords that might indicate a BEC attack.
- Evolving phishing techniques: GenAI allows attackers to continuously refine their tactics based on success rates. By analyzing past attempts, GenAI can identify what resonates with victims and adapt phishing strategies accordingly. This creates a constantly evolving threat landscape that security solutions struggle to keep pace with.
- Automating attack campaigns: GenAI can automate much of the phishing campaign process. From crafting personalized emails to identifying potential targets, GenAI streamlines attack operations, allowing cybercriminals to launch large-scale campaigns with minimal effort. This significantly increases the volume of phishing emails bombarding inboxes, further straining security resources.
The combined effect of these GenAI-powered advancements creates a perfect storm for phishing and BEC attacks. Next, we will explore some potential solutions and strategies to mitigate the dangers posed by this evolving threat landscape.
Preventing Phishing and BEC Attacks Using AI-Powered SOC Analysts
On the flip side of things, companies increasingly fight AI with AI. That’s why a new wave of AI-powered solutions is emerging, boosting SOC analysts’ capabilities and revolutionizing how organizations combat phishing and BEC attacks. These AI-powered SOC analysts serve as a force multiplier, automating tedious tasks and bringing a level of efficiency and accuracy that human analysts alone cannot achieve. It’s as if your best SOC analyst had infinite time to handle everything that comes her way. It boils down to the ability to scale, applying a behavioral lens to every email during triage and investigation and connecting all the workflow dots.
Here’s a closer look at how best-in-breed AI-powered SOC analysts are transforming the fight against phishing and BEC attacks:
- Autonomous email triage: Phishing and BEC attacks often rely on high volume, hoping to overwhelm human analysts and slip through the cracks. AI excels at handling repetitive tasks, and AI-powered SOC analysts can automate the initial triage and analysis of every email alert and user-reported email. For every alert or report that arrives at the SOC, the AI emulates the investigation process a human analyst would follow to identify suspicious characteristics (a minimal triage sketch follows this list). This frees up valuable analyst time for more critical tasks like investigating complex threats, developing security strategies, and conducting proactive threat hunting.
- Beyond static checks: Gone are the days of relying solely on simple keyword filters or sender reputation checks. AI-powered SOC analysts utilize a more sophisticated approach. They can dynamically analyze emails, performing deeper inspections based on initial results, such as cross-referencing content against external threat intelligence or against the normal behavioral patterns of a user or organization. This allows them to identify subtle red flags, such as slight variations in phrasing commonly used in phishing campaigns or URLs that appear legitimate at first glance but contain hidden malicious elements.
- Seeing the bigger picture: Phishing attacks are often just the tip of the iceberg. They might be part of a larger, multi-stage attack orchestrated by cyber criminals. AI-powered SOC analysts go beyond the individual email. They can correlate data from various sources, including email, endpoints, network activity, and user behavior. This comprehensive view allows them to identify the full attack scope, uncover potential root causes, and assess the potential impact on users, credentials, and machines. This holistic approach ensures a more effective incident response that addresses not just the immediate threat but also the underlying vulnerabilities.
- Automating remediation and response: Time is of the essence when dealing with cyber threats. AI-powered SOC analysts can significantly reduce response times by automating remediation and response actions. Upon identifying a malicious email, they can automatically trigger corrective actions such as blocking the email at the gateway, isolating infected endpoints, and disabling compromised accounts (a response playbook sketch also follows this list). This swift and automated response can significantly minimize damage and prevent the attack from spreading further.
- Streamlining communication and fostering collaboration: We mentioned before that effective communication is crucial for maintaining user engagement in reporting suspicious emails. Traditional workflows often suffer from slow or non-existent feedback, leading to frustration and a decline in user participation. AI-powered SOC analysts can automate communication by sending timely notifications to reporters and affected users, keeping everyone informed of the situation and actions taken. This fosters collaboration and builds trust between SOC teams and end users, encouraging continued vigilance against phishing and BEC attacks.
- Continuous learning and improvement: The cyber threat landscape is constantly evolving. AI-powered SOC analysts are not static tools either. They are designed to continuously learn and improve with every interaction and every incident they encounter. By analyzing past attack data and user behavior, they can refine their detection models and response strategies over time. Additionally, some solutions can recommend actions to improve organizational resilience after an incident, such as targeted phishing awareness training for users who interacted with a specific scam campaign.
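To ground the autonomous triage and deeper-inspection points from the first two bullets above, here is a minimal sketch of how an automated first pass over a reported email might chain a few checks (sender authentication, URL extraction, a reputation lookup, and a crude urgency signal) into a verdict. The lookup_reputation stub, the keyword list, and the scoring thresholds are placeholders for the threat-intel and behavioral services a real pipeline would call; none of this reflects a specific product's API.

```python
import re
from email import policy
from email.parser import BytesParser

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def lookup_reputation(url: str) -> int:
    """Placeholder: a real pipeline would query threat intelligence or
    detonate the URL in a sandbox here. Returns a 0-100 risk score."""
    return 0

def triage_reported_email(eml_path: str) -> dict:
    """First-pass triage of a user-reported email (illustrative scoring)."""
    with open(eml_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    body = msg.get_body(preferencelist=("plain", "html"))
    text = body.get_content() if body else ""
    urls = URL_RE.findall(text)

    auth = (msg.get("Authentication-Results", "") or "").lower()
    score = 0
    if "spf=pass" not in auth or "dmarc=pass" not in auth:
        score += 40                          # weak sender authentication
    if urls:
        score += max(lookup_reputation(u) for u in urls) // 2
    if any(word in text.lower() for word in ("urgent", "wire transfer", "gift card")):
        score += 20                          # crude urgency/BEC keyword signal

    verdict = "malicious" if score >= 60 else "suspicious" if score >= 30 else "benign"
    return {"verdict": verdict, "score": score, "urls": urls}
```

The keyword check at the end is deliberately simplistic; as noted above, an AI-powered analyst would replace that kind of static signal with semantic and behavioral analysis.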
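Similarly, the automated remediation bullet can be illustrated with a small response playbook that maps a verdict to containment actions. The three action functions are stubs standing in for calls to an email gateway, an EDR platform, and an identity provider; the function names and the playbook logic are assumptions for illustration only.

```python
def quarantine_email(message_id: str) -> None:
    """Stub: would call the email gateway API to pull the message
    from every mailbox it was delivered to."""
    print(f"[gateway] quarantined {message_id}")

def isolate_endpoint(host: str) -> None:
    """Stub: would call the EDR API to network-isolate the host."""
    print(f"[edr] isolated {host}")

def disable_account(user: str) -> None:
    """Stub: would call the identity provider to disable sign-in
    and revoke active sessions."""
    print(f"[idp] disabled {user}")

def respond(verdict: str, message_id: str,
            clicked_hosts: list[str], compromised_users: list[str]) -> None:
    """Map a triage verdict to containment actions (illustrative playbook)."""
    if verdict != "malicious":
        return
    quarantine_email(message_id)
    for host in clicked_hosts:
        isolate_endpoint(host)
    for user in compromised_users:
        disable_account(user)

# Example (all identifiers are hypothetical):
respond("malicious", "<msg-123@example.com>", ["laptop-042"], ["j.doe"])
```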
An important closing note: AI-powered SOC analysts are not designed to replace human analysts. They are powerful tools that empower analysts by taking care of mundane, repetitive tasks, accelerating response times, and improving communication, ultimately enabling SOC teams to work smarter, not harder. This frees up analysts to focus on higher-level cognitive functions such as strategic planning, in-depth investigation, and incident orchestration. The human element remains crucial for decision-making, complex problem-solving, and adapting security strategies to evolving threats. AI serves to augment and enhance the capabilities of SOC analysts, allowing them to operate at their full potential. Learn more about how to automate email security workflows using AI.