Wednesday, April 9, 2025
This week’s blog first appeared on the SANS Institute’s OUCH! Newsletter on March 1, 2025.
By Dhruti Mehta
Caught off Guard: Steve’s Story
Steve was at his desk when he received a frantic video call from his manager, Bela. She looked stressed, her voice hurried. “I need you to send the confidential client report to this new email right away!” she insisted. Seeing her familiar face and hearing her distinct voice, he didn’t hesitate; he sent the confidential report to the new email address.
Hours later, Bela walked into his office and asked about the report. Confused, Steve mentioned the video call. Bela’s expression turned to shock; she hadn’t called him. The person he saw on the video wasn’t Bela. It was a deepfake, created by a cybercriminal to trick him.
Steve couldn’t believe how real the fake call seemed. The face, the voice, everything matched his boss perfectly. He had fallen victim to a growing cyber threat where criminals use Artificial Intelligence (AI) to create highly convincing fakes.
What is a Deepfake?
AI can create images, audio, or videos that look real. These capabilities have many legitimate uses: marketing companies create images for ad campaigns, movie studios de-age actors, and teachers create dynamic video lessons for their students.
A deepfake is an AI-generated fake image, audio clip, or video created to deceive others. The name deepfake combines “deep learning” (a type of AI) and “fake.”
Often the most damaging deepfakes are fake images, audio, or video of people you know, depicting things they never actually did. For example, cyber attackers may create fake pictures of famous celebrities or politicians committing a crime and spread them as fake news. Or they may clone someone’s voice and use it in a call to deceive a victim’s family or colleagues. What makes deepfakes so dangerous is how easily cybercriminals can replicate anyone, doing anything, and make it appear real.
Three Types of Deepfakes
Image Deepfakes
These images are often either photos of fake people created by AI (people who don’t even exist) or photos of real people showing them doing something they never did. Unfortunately, these fake images can be distributed very quickly and are often used to damage someone’s reputation or manipulate a person’s emotions. Deepfake images are becoming increasingly common on social media, where people, or even governments, push out stories that are completely untrue or promote false narratives (often called fake news, or part of a disinformation campaign).
Audio Deepfakes (Voice Cloning)
These are fake recordings or phone calls using someone’s cloned voice. Attackers can get recordings of people’s voices from podcasts or other sources, such as YouTube, and use those recordings to replicate the voice. Once it is replicated, cyber attackers can call anyone they want while pretending to be that individual, such as posing as a manager and calling an employee to ask for sensitive data, or re-creating a loved one’s voice in an emergency call asking for money.
Video Deepfakes
These are fake videos in which a person’s voice and actions are manipulated or recreated. Deepfake videos can be pre-recorded, or attackers can use live deepfake video to participate in an online conference call. For example, cyber attackers could create a deepfake video of a CEO making an announcement containing false information about their company. A deepfake video could also be used in a political campaign to make it appear as though a candidate said something that, in reality, they never said.
How to Detect Deepfakes: Focus on Context
Do not try to detect deepfakes by looking only for technical mistakes. Both AI tools and the cyber attackers who use them have become very sophisticated. Instead, focus on context: does the image, audio, or video make sense?
- Trust Your Instincts: Does something feel “off” about the interaction? Is the request urgent or unexpected? Is the person behaving strangely, even if they look and sound normal? Is someone asking for confidential information or personal data they should not have access to? If something doesn’t feel right, trust your gut and check your facts and the situation.
- Watch Out for Emotional Manipulation: Cyber attackers often create urgency or fear to try to make you act quickly. If a message or call makes you panic, take a breath and verify the true identity of the person you believe you’re in contact with. The stronger the emotional pull, such as a strong sense of urgency or fear, the more likely it’s an attack.
- Verify Through Another Method: If you are concerned the person contacting you may be a deepfake, reach out to that individual using a different method. For example, if a video call or message seems suspicious, contact the person directly via phone or email. If you get a voice call asking for urgent action, hang up and call back using a trusted number.
- Establish a Code Word or Phrase: Agree upon a shared code word or phrase known only within a group, or perhaps your family, that can be used to authenticate an urgent communication. Another option is to ask a question that you are certain that only the actual individual could answer; one the criminal could not research or figure out simply by searching online.