The rise of advanced technology has opened new doors for cybercriminals, especially with regard to social engineering. As reported by the CyberWire on February 27, 2023, "Researchers at Safeguard Cyber have observed a social engineering campaign on LinkedIn that used the DALL-E generative AI model to make images for phony ads."
For threat actors attempting to socially engineer potential victims, LinkedIn is a prime hunting ground. In this article, we'll talk more about why this is the case, but it's important to remember that as technology changes, so too must the response.
The Psychology of Social Engineering
Cybercriminals use social engineering, a powerful tool in their toolbox, to trick people into disclosing sensitive information or taking actions that jeopardize security. Organizations can better protect themselves from these kinds of attacks if they understand the biases and psychological concepts that are at the heart of this tactic.
Social engineering is all about taking advantage of cognitive biases, the natural tendencies that shape our thoughts and actions. Targeting these biases frequently entails using the six principles of influence identified by behavioral psychologist Robert Cialdini:
Reciprocity,
Commitment and Consistency,
Social Proof,
Authority,
Liking, and
Scarcity.
By manipulating these principles, social engineers can establish trust with an individual, ultimately leading them to disclose sensitive information or take actions that compromise security.
Reciprocity, for instance, refers to people's inherent propensity to return favors. An attacker offers free help or samples so that the target feels obliged to hand over valuable information in return.
Commitment and consistency reflect a person's desire to uphold their obligations and stay true to their self-image. Social engineers exploit this by getting the target to make a small commitment and then escalating their demands, knowing the victim is more likely to comply because of the earlier agreement.
Both social proof and authority exploit the target's propensity to take cues from others' behavior, especially in tense circumstances. Social engineers may impersonate authority figures or falsely claim that others have already complied, making it more likely that the target will do what they want.
Liking is based on the idea that people are more likely to agree with someone they like, often as a result of flattery or rapport-building.
Scarcity plays on people's perceptions of limited supply to increase demand and urgency, which can result in rash decisions made without careful consideration.
Social engineering attacks use different methods, such as pretexting, baiting, blackmail, and quid pro quo, to take advantage of these psychological principles. These strategies include impersonating someone else, luring victims with tempting offers, threatening to expose secrets, or making promises in exchange for the target's help. By exploiting these psychological weaknesses, social engineers can compromise even the most robust security systems.
Types of Social Engineering Attacks
Social engineering attacks are as diverse as they are successful, preying on human psychology and trust to persuade people to reveal sensitive information or take actions that jeopardize security. As discussed by Copado, "To prevent a social engineering attack, you need to understand what they look like and how you might be targeted."
Phishing: Fraudulent emails from allegedly trustworthy sources, such as banks, dupe users into clicking dangerous links or downloading malicious files, compromising their devices or accounts (a simple link-screening heuristic is sketched after this list).
Spear phishing: A type of phishing that targets specific people or corporations using personalized emails and in-depth knowledge, making it more difficult to detect and typically more successful.
Vishing: Con artists use phone calls or voicemails to imitate trustworthy institutions in order to gain sensitive information or deliver compromising instructions.
Smishing: Text messages with fake links, time-sensitive requests, or threats that play on fear to elicit immediate action are used to target individuals.
Pretexting: Adopting a fabricated identity or scenario to win false trust, obtain confidential information, or persuade targets to perform specific actions.
Baiting: Promises enticing rewards, such as free software, in exchange for private data, preying on the targets' curiosity or greed.
Quid pro quo: Utilizes the reciprocity principle by promising something desirable in exchange for collaboration, generally involving information or access to restricted systems.
Watering hole: Infiltrates websites visited by the target audience and compromises visitors' devices via malware.
Tailgating: Gaining entry to secure areas by following authorized personnel, taking advantage of people's courtesy, or exploiting lax security routines.
Honey traps: Close relationships, sometimes romantic or friendly, are formed in order to build trust and gain access to secret information or restricted systems.
Dumpster diving: Looks through trash to find confidential documents, passwords, or other important information to use in future social engineering assaults or system infiltration.
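Several of the attacks above, notably phishing, spear phishing, and smishing, hinge on getting a target to follow a malicious link. As a minimal illustration of the kind of screening defenders automate, the Python sketch below flags a few common red flags in a URL. The trusted-domain list and heuristics are hypothetical examples, not a production filter.

```python
# Minimal sketch: heuristic screening of links for common phishing red flags.
# TRUSTED_DOMAINS and the thresholds below are hypothetical examples.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-bank.com"}  # placeholder for an organization's allowlist

def link_red_flags(url: str) -> list[str]:
    """Return a list of red flags found in a URL (empty means none detected)."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    flags = []
    if parsed.scheme != "https":
        flags.append("not served over HTTPS")
    for trusted in TRUSTED_DOMAINS:
        # Lookalike: embeds a trusted name but is not that domain or a subdomain.
        if trusted in host and host != trusted and not host.endswith("." + trusted):
            flags.append(f"lookalike of {trusted}")
    if host.count(".") >= 4:
        flags.append("unusually deep subdomain nesting")
    return flags

# A classic lure: the trusted brand appears, but the real domain is different.
print(link_red_flags("http://example-bank.com.secure-login.net/verify"))
```

Real email security products combine many more signals (domain age, reputation feeds, content analysis), but the principle is the same: the link's actual destination, not its appearance, is what matters.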
Synthetic Media in Social Engineering
The phrase "synthetic media" refers to a broad category of digital information produced or altered by artificial intelligence. This covers deepfakes, artificial intelligence (AI)-generated art, and virtual or augmented reality settings.
Deepfakes, in particular, use AI algorithms to produce plausible but phony audio and video recordings. By swapping one person's face onto another's body or altering existing footage, deepfakes can produce highly realistic manipulated videos that are hard to tell from real ones.
Advances in synthetic media technology have handed cybercriminals new tools for social engineering attacks, making those attacks harder to spot and harder to defend against.
At the forefront of synthetic media's use in social engineering is Dr. Matthew Canham, who lists seven attack scenario examples in his paper, "Deepfake Social Engineering: Creating a Framework for Synthetic Media Social Engineering." They are: digital extortion, catfishing, audio deepfake vishing, gift card scams, zishing (Zoom phishing), synthetic honey traps, and automated attacks against authentication algorithms.
In digital extortion, attackers use deepfake technology to depict their targets in incriminating videos or images, then blackmail them into complying with their demands. Because these sophisticated fakes appear so authentic, the threats carry more weight.
In impersonation schemes, malicious actors use convincing deepfake videos or audio to pose as top-level authorities or executives, duping employees into revealing private information or transferring money. The fakes are so convincing that targets have difficulty distinguishing a legitimate request from a fabricated one.
Catfishing, a well-known attack type touched on earlier, sees perpetrators use synthetic media to build fake online personas and exploit victims for financial or personal gain. Deepfake technology lets scammers generate photos and videos that look like they were taken in real life, making their fake identities harder to spot.
Gift card scams frequently use human-automation teams to scale up. They typically target individuals over low-latency, text-based channels such as texts or emails, which make impersonation easier, with scammers frequently posing as a well-known coworker or a supervisor.
A "synthetic honey trap" refers to a deceptive tactic that uses artificially created or manipulated digital content to lure attackers, cybercriminals, or unauthorized users into engaging with the trap, ultimately exposing their intentions or tactics. In this idea, "synthetic media," which includes digitally created photos, films, or sounds, is combined with a "honey trap," a traditional deception tactic in which a target is made to appear desirable or alluring in order to lure enemies.
Automated attacks against authentication algorithms may be among the most disturbing scenarios. An attacker might use a cloned voice to get around voice authentication, or create a deepfake capable of passing facial recognition. These attacks are limited only by the creativity of the attacker.
In an intricate scam in early 2020, con artists employed deepfake speech technology to impersonate a corporate director and trick a Hong Kong bank manager into transferring $35 million; it was only the second known incident employing voice-shaping techniques. In an earlier case, in 2019, fraudsters used deepfake audio imitating the CEO of its parent company to trick a UK-based energy firm into an unauthorized transfer of $243,000. These skilled heists highlight the dangerous potential of deepfake technology in cybercrime.
Deepfakes and other synthetic media are growing more sophisticated at a rapid rate, making it harder for people and organizations to tell real content from fake and increasing the risk of successful attacks. Easy access to synthetic media tools, and the convincingness of their output, could drive a rise in the frequency and scope of social engineering attacks.
Defending Against Social Engineering Attacks
To effectively prevent social engineering attacks, particularly those using synthetic media, a multi-layered approach incorporating technical safeguards, personnel training, and proactive policies is required:
Regular employee training: Using practical examples, simulations, and exercises, teach staff how to detect social engineering techniques such as phishing emails, deepfake videos, and voice-spoofing calls.
Multi-factor authentication (MFA): Implement MFA across all apps and systems, especially those with privileged access, so that a stolen password alone is not enough to compromise an account (a TOTP verification sketch follows this list).
Least privilege and role-based access controls: Limit access to sensitive data and systems, and review and adjust access permissions regularly so that only authorized persons retain access (see the role-check sketch after this list).
Advanced email security tools: Deploy tools to detect and block phishing emails, fake domains, and other email-borne threats. Employ DMARC to prevent domain spoofing (a DMARC lookup sketch follows this list).
Incident response plan: Develop a clear plan for dealing with social engineering attacks, test and update it on a regular basis, and use continuous system and network monitoring to identify and mitigate threats.
Strict social media rules: Control what information employees can post online to reduce the possibility of attackers gathering information for targeted social engineering efforts.
Secure communications: To reduce the risk of voice spoofing and other artificial media threats, encourage the usage of secure communication channels such as encrypted messaging apps.
Regular security audits: Detect potential security flaws and areas for improvement, allowing you to take preventive measures against social engineering schemes.
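As referenced in the MFA item above, time-based one-time passwords (TOTP) are one common second factor. The sketch below, using the third-party pyotp library (pip install pyotp), is a minimal illustration of enrollment and verification; the account name and issuer are hypothetical placeholders, and real deployments generate and store the secret server-side per user.

```python
# Minimal sketch of TOTP-based MFA using the third-party pyotp library.
# The secret would be generated once at enrollment and stored securely per user.
import pyotp

secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# URI the user scans into an authenticator app (name/issuer are placeholders).
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

code = totp.now()                         # what the user's authenticator shows now
print("current code accepted:", totp.verify(code))      # True within the window
print("forged code accepted:", totp.verify("000000"))   # almost certainly False
```

Even if an attacker phishes the password, the rotating code on the user's device is still required, which is exactly the extra hurdle MFA is meant to add.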
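The least-privilege item boils down to a simple mechanic: every sensitive action is checked against permissions explicitly attached to a user's roles, and nothing is granted by default. The Python sketch below shows the idea with hypothetical roles and permission names; a real system would back this with a directory or IAM service rather than in-memory dictionaries.

```python
# Minimal sketch of role-based, least-privilege access checks.
# Roles and permission strings are hypothetical examples.
ROLE_PERMISSIONS = {
    "finance_clerk":   {"invoices:read"},
    "finance_manager": {"invoices:read", "payments:approve"},
    "it_helpdesk":     {"accounts:reset_password"},
}

def is_allowed(user_roles: set[str], permission: str) -> bool:
    """Deny by default; grant only permissions explicitly attached to a role."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

# Even a successfully phished clerk cannot approve payments:
print(is_allowed({"finance_clerk"}, "payments:approve"))    # False
print(is_allowed({"finance_manager"}, "payments:approve"))  # True
```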
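Finally, for the email security item: a DMARC policy is simply a DNS TXT record published at _dmarc.&lt;domain&gt;, and receiving servers consult it to decide what to do with mail that fails authentication. The sketch below, using the third-party dnspython library (pip install dnspython), checks whether a domain publishes one; the domain queried is a placeholder.

```python
# Minimal sketch: look up a domain's DMARC policy with the dnspython library.
import dns.resolver

def dmarc_record(domain: str) -> str | None:
    """Return the domain's DMARC TXT record, or None if it publishes none."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        text = b"".join(rdata.strings).decode()
        if text.startswith("v=DMARC1"):
            return text  # e.g. "v=DMARC1; p=reject; rua=mailto:..."
    return None

# "p=reject" or "p=quarantine" tells receivers to refuse or flag spoofed mail.
print(dmarc_record("example.com"))  # placeholder domain
```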
Businesses and individuals can greatly reduce their exposure to social engineering threats and increase their overall security posture by using a comprehensive, multi-layered approach that incorporates technology, training, and proactive policies.
Conclusion
In short, social engineering techniques combined with generative AI pose a serious, constantly evolving threat. Cybercriminals, taking advantage of developments in both technology and human psychology, are getting better at wielding synthetic media and other forms of digital deception. In today's dangerous online environment, we need to stay alert and equip ourselves with the tools to fend off increasingly sophisticated cyberattacks.
As referenced earlier, the use of the DALL-E generative AI model in a social engineering campaign on LinkedIn demonstrates that no platform is safe from these threats. By understanding the psychological principles behind social engineering and the various attack techniques, we can better prepare ourselves and our organizations to identify and mitigate these risks.
The growing threat posed by the intersection of social engineering and synthetic media must not be disregarded. It is critical to take a proactive approach, adopting multiple layers of security: technical safeguards, employee training, and preventive policies. As cyber threats evolve, all stakeholders, from government agencies to private individuals, must strengthen their defenses and remain attentive. The moment has come to strategize, harden security, educate ourselves and our personnel, and stay vigilant against developing risks.