Perception Point, a cybersecurity company, announced its latest development to fight the growing threat of malicious AI-generated emails. The company's detection technology uses large language models (LLMs) and deep learning to detect and stop business email compromise (BEC) attacks facilitated by generative AI.
Criminals are using generative AI to launch precise, targeted attacks on organizations of all sizes. The technology has emerged as a potent new instrument for cybercrime, particularly in social engineering and BEC attacks, because it produces customized, high-quality emails that read like human-written text.
According to Verizon's most recent Data Breach Investigations Report, BEC accounts for more than 50% of social engineering incidents. Perception Point's own 2023 annual report found an 83% increase in BEC attempts.
To counter this growing threat, the company developed a new detection method based on transformer-based LLMs, the same class of AI models that understand the semantics of text and that underpin well-known systems such as OpenAI's ChatGPT and Google's Bard.
The model also detects patterns distinctive to LLM-generated text, which is critical to identifying and blocking generative AI-based threats.
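Perception Point has not published the internals of its model, but the general idea of scoring text with a transformer classifier can be illustrated with an off-the-shelf detector. The sketch below assumes the public roberta-base-openai-detector model from Hugging Face as a stand-in, and the email text is hypothetical; it is not the company's production model.

```python
# Minimal sketch (not Perception Point's proprietary model): score an email body
# with an off-the-shelf transformer classifier trained to flag machine-generated text.
from transformers import pipeline

# "openai-community/roberta-base-openai-detector" is a public GPT-2 output detector,
# used here purely as an illustrative stand-in for a production LLM-text classifier.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

email_body = (
    "Hi, I need you to process an urgent wire transfer to our new supplier today. "
    "Please keep this confidential until the deal is announced."
)

result = detector(email_body, truncation=True)[0]
# result looks like {"label": "Fake", "score": 0.97}; "Fake" means machine-generated.
print(f"label={result['label']} confidence={result['score']:.2f}")
```

In a real pipeline, a score like this would be only one signal among many, as the later description of the three-phase model makes clear.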
Beyond legacy security solutions
Perception Point asserts that established security vendors typically fail to reach the required detection accuracy through behavioral and contextual analysis alone. The company claims that even advanced email security tools that rely on behavioral and contextual detection cannot catch the new attacks enabled by generative AI, because these attacks do not follow the patterns those systems were designed to spot.
In addition, the company says that solutions currently on the market depend on post-delivery detection, meaning malicious emails can sit in a user's mailbox for some time before they are removed.
"Legacy email security solutions that rely on signatures and reputation analysis struggle to stop even the most basic payload-less BEC attacks," Tal Zamir, CTO of Perception Point, told VentureBeat. "Our model's unique advantage is its ability to recognize recurring patterns in LLM-generated text. The model employs a unique three-phase architecture that detects BEC with the highest detection rates while minimizing false positives."
Zamir explained that the differentiator is that every email is fully examined and malicious ones are identified before they reach the user's inbox. This proactive approach, he said, reduces the risk and damage associated with post-delivery techniques, which identify and respond to threats only after they have entered the system.
Furthermore, the solution includes a managed incident response service that relieves customers' SOC teams of the burden of rapid incident handling and applies advanced algorithms in real time to combat new and emerging threats.
Perception Point claims the model processes incoming emails in an average of 0.06 seconds. It was trained on millions of malicious samples collected by the company and is continuously updated with fresh data to improve its effectiveness.
Harnessing generative AI to reduce email-based threats
Perception Point's Zamir said the latest attacks rely on spoofed emails in which cybercriminals impersonate trusted companies. Using social engineering techniques, attackers trick employees into transferring large sums of money or disclosing private information.
“Attackers exploit the fact that employees in the modern enterprise are the weakest link in the organization regarding security,” Zamir said to VentureBeat. “They are leveraging BEC text-based attacks, which normally do not have malicious payloads such as URLs or malicious files, and thus bypass traditional email security systems, arriving into the users’ inboxes.”
Zamir also noted that the advent of generative AI, specifically LLMs, has fueled impersonation, phishing and BEC attacks, allowing cybercriminals to operate at greater speed and scale than ever before.
“Tasks that once required extensive time and effort, such as target research, reconnaissance, copywriting and design, can now be accomplished within minutes using carefully crafted prompts,” Zamir added. “This amplifies the threat by expanding the pool of potential victims and significantly increasing the chances of successful attacks.”
To minimize false positives arising from the legitimate everyday use of generative AI to write emails, Perception Point employs a distinctive three-phase architecture within its model.
After an initial scoring phase, the model uses transformers and clustering algorithms to categorize email content. By combining the insights from these phases with additional data, such as sender reputation and authentication protocol details, it determines whether an email was AI-generated and whether it poses a security risk.
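Perception Point has not disclosed how these signals are weighted, but the final decision step can be pictured with a simplified sketch. The snippet below assumes hypothetical inputs and thresholds: an AI-text score from the transformer phase, a content-cluster label from the clustering phase, a sender reputation value and SPF/DKIM/DMARC results.

```python
# Simplified, hypothetical sketch of a multi-signal verdict step; thresholds and
# cluster names are illustrative, not vendor values.
from dataclasses import dataclass

@dataclass
class EmailSignals:
    ai_text_score: float      # 0..1, probability the body is LLM-generated
    cluster_label: str        # e.g. "wire-transfer-request" from the clustering phase
    sender_reputation: float  # 0..1, higher means more trusted sender
    spf_pass: bool
    dkim_pass: bool
    dmarc_pass: bool

SUSPICIOUS_CLUSTERS = {"wire-transfer-request", "credential-request", "gift-card-request"}

def classify(signals: EmailSignals) -> str:
    """Combine phase outputs into a verdict before the email reaches the inbox."""
    auth_ok = signals.spf_pass and signals.dkim_pass and signals.dmarc_pass
    # AI-written text alone is not malicious; flag it only when it coincides with
    # a risky content cluster and weak sender trust or failed authentication.
    if (signals.ai_text_score > 0.9
            and signals.cluster_label in SUSPICIOUS_CLUSTERS
            and (signals.sender_reputation < 0.3 or not auth_ok)):
        return "block"
    if signals.ai_text_score > 0.9 and signals.cluster_label in SUSPICIOUS_CLUSTERS:
        return "quarantine-for-review"
    return "deliver"

print(classify(EmailSignals(0.95, "wire-transfer-request", 0.1, False, False, False)))  # block
```

The point of combining signals this way is the one the article makes: AI-generated text by itself is not proof of malice, so the verdict depends on content category, sender trust and authentication together.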
“Our model continuously scans each email, even the embedded URLs and the files, by using a patent-pending HAP (Hardware Assisted Platform) detection layer. This is our exclusive next-generation Sandbox that scans the contents at the CPU/memory levels,” said Zamir.
What's next for Perception Point
Zamir said the company is working on AI capabilities that can sift through vast amounts of data to identify potential threats and provide customers with actionable intelligence.
He also pointed out that the integration of generative AI bots into collaboration tools like Slack and Teams, browsers like Edge, and cloud storage services like Google Drive and OneDrive has opened up new attack surfaces.
"Perception Point recognizes these emerging threats, and we are developing AI security solutions designed to prevent, detect and respond to the ever-increasing threat landscape complexity," Zamir said. "We will continue to ensure that our clients can leverage the power of generative AI without compromising their security posture."