A deepfake is a video, photo or audio file that portrays a person saying things they never said, or doing things they never did. Deepfakes are created by artificial intelligence that analyses and maps people’s faces, bodies and voices from photographs, videos and audio.
One expert described deepfakes as ‘Photoshop on steroids’.
In recent years, deepfake videos have become more numerous, higher in quality, and easier for anyone to make.
Experts predict that soon the technology will be so good that people will be unable to spot a deepfake with the naked eye.
What’s the appeal of deepfakes?
The technology has been used in scientific and medical research, and for educational purposes – e.g. letting visitors to museums have ‘conversations’ with historical figures. Deepfakes have also been used in movies such as Rogue One: A Star Wars Story and the Fast and Furious films, to recreate scenes featuring deceased actors.
And deepfakes are often used as a joke – e.g. to swap the faces of different celebrities, or to insert an ordinary person into a movie clip or music video.
What’s the problem?
Unfortunately, deepfakes can be used to cause harm. For example, they may be used in cyberbullying to portray people in scenarios that are embarrassing, offensive or hurtful.
Deepfakes are also used to degrade people – women especially – by inserting their images into pornography. At present, these videos mostly target celebrities, but as the technology becomes more widespread, we may well see a rise in deepfakes being used in image-based abuse.
Meanwhile, many people see deepfakes as a threat to democracy, because they show politicians and public figures saying or doing things they never said or did. Deepfakes can be used to spread inaccurate and dangerous messages, and to erode people’s trust in institutions such as parliament, the courts, and the news media. At the same time, it’s becoming easier for people to dismiss real, factual footage by claiming ‘it’s fake’.
What can families do?
Check out these great videos for school students explaining deepfakes and their implications, and read the position statement from the Office of the eSafety Commissioner.
Talk with your teens
Find out what your teens know about deepfakes – it’s OK if they know more than you! You don’t have to be a tech expert. It’s more important to talk about our values and how we treat other people, including things like kindness, honesty, respect and trust.
Conversation-starters might include:
- Why do you think people create deepfakes? Why do people enjoy watching them?
- Is it ever OK to trick people with fake footage? When is it not OK?
- If you’ve watched a deepfake video, consider: Did the people portrayed in the video agree to this? How do you think it made them feel? Was it a joke they took part in happily, or was it done without their consent to make them look bad? How would you feel if someone used your image in this way?
- Do you know how to reduce the risk of someone misusing your images? For example, by choosing high privacy settings on social media, sharing your posts only with close friends and family, and/or not showing your full face online?
Learn to spot a deepfake
Eventually, it may become impossible to spot a deepfake just by looking at it. But poorly made deepfakes often show telltale signs, including:
- Badly synced audio and video
- Blurring, flickering or pixelation, especially around the mouth, eyes, neck, or edges of the face
- Glitches in the footage
- Changes in the lighting or background
- No blinking, or weird blinking
- Facial discolouration
- Strange-looking jewellery, hair or teeth
- No clear background, or a very close crop.
And we can ask smart questions such as:
- Who is likely to benefit from this footage? Who is likely to be harmed by it?
- Is the footage outrageous?
- Would I expect the people in the footage to behave like this? Have they said or done similar things before?
- Are there different versions of the footage online?
- Where does the footage come from? Who shared it? What do they seem to want?
- Have the people portrayed in the video commented on it?
Lobby for change
We can lobby our politicians, tech companies, and educators to do something about deepfakes, such as:
- Developing new technologies to detect and flag deepfakes
- Removing and penalising abusive and illegal deepfakes
- Supporting fact-checking of online content
- Educating the public about deepfakes
- Making sure everyone can access news sources which are trustworthy and factual.
If you know a child under 18 who has been bullied online, including through deepfakes, please contact the Office of the eSafety Commissioner. The Office also helps people who’ve had sexual or nude images shared without consent, including via deepfakes.
And for free, confidential counselling, contact the Dolly’s Dream Support Line on 0488 881 033, Kids Helpline, 1800 RESPECT, eheadspace, or Lifeline.