THE USE OF DEEPFAKES TO BULLY: What you need to know

20 Jun 2024

Feedback from anti-bullying workshops that we deliver to schools reveals that one of parents' biggest concerns is children's online safety.

Recent news about students' use of manipulated media, or deepfakes, to bully others has alarmed parents and teachers about the implications of artificial intelligence. Initially hailed for their potential in entertainment and creative industries, deepfakes have become a rising concern due to their misuse in malicious activities, particularly as a weapon of modern-day bullying.

WHAT ARE DEEPFAKES?

Deepfakes are content created using generative AI (artificial intelligence) that mimics existing data. Originally, the term "deepfake" referred specifically to videos, but it has since broadened to cover other types of media manipulation, such as images, audio, or text.

The concern around deepfakes is growing because AI can be used to create strikingly convincing images and videos where people appear to say or do things they didn't actually do. This has implications for misinformation, scams, privacy violations, and bullying.

THE USE OF DEEPFAKES TO BULLY OTHERS

Last August, eSafety received its first reports of sexually explicit content generated by students using generative AI to bully other students, and the number of reports has only grown since.

Last week, deepfakes depicting about 50 female students from a private school in regional Victoria were circulated online. Just weeks earlier, a male student in Melbourne was expelled after he created fake sexual images of a female teacher that were then spread around the school. These incidents have raised concerns about students' use of AI to bully others in Australian schools.

"It is estimated around 90 per cent of all deepfake content is explicit.", said Australia's eSafety Commissioner, Julie Inman Grant.

As deepfakes can be shared with a victim's social circles, classmates, and family, they can damage reputations and cause serious harm even when no blackmail is involved. Victims of deepfakes often experience emotional distress, anxiety, depression, and social isolation. They also face challenges in defending themselves, because the manipulated content can be highly convincing and spread rapidly across the internet, making its impact difficult to contain.


WHAT SHOULD I DO IF SOMEONE BULLIES ME USING DEEPFAKES?

Before you do anything else, know you're not alone, and you don't have to cope with this all by yourself. It's not your fault and, though it can be hard, talking about it can help a great deal.

Below are some immediate actions to take when you're a target of cyberbullying using deepfakes.

1. Collect evidence

Much as you may want to get rid of the images, messages or comments, try to collect as much evidence as you can. When and where was the content shared or sent? Was it on a social media platform or in a game? What is the web address (URL) of the page containing the harmful content? What are the usernames of the accounts that started this? Take screenshots or recordings of this information (but not of the AI-generated image or video itself, as this can be a crime).

Check out this article by eSafety for more information on how to collect evidence.

2. Report it

Make an image-based abuse report to the eSafety Commissioner. They will work to get the content taken down within 24 hours. They can also issue formal warnings and take-down orders, and seek civil penalties against individuals and technology companies that fail to take action.

3. Stop further contact

Stop all contact with the person who shared the photos or videos by hiding or muting their posts or comments. Once you've collected the evidence, you can also block them and update your privacy settings to limit who can contact you.

4. Stop the content from spreading

Stop the images or videos from being uploaded to social media and other platforms by creating a digital "hash". When you scan an image from your device using one of the tools below, a unique code is generated and shared with companies participating in the scheme so they can detect and block any matches on their platforms. (A simple sketch of how this works appears after this step.)

If you’re under 18, use takeitdown.ncmec.org – a free online tool that prevents your image or video being shared on Facebook, Instagram, TikTok, Yubo, OnlyFans and Pornhub.

If you’re 18 or older, use StopNCII.org – a free online tool that prevents your image or video being shared on Facebook, Instagram, TikTok, Bumble, OnlyFans and Reddit.
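For readers who want to understand the idea behind "hashing", here is a minimal, hypothetical Python sketch. It is not the code used by takeitdown.ncmec.org or StopNCII.org (those tools generate hashes on your device using their own methods, often designed so slightly altered copies still match); it only shows how a file can be turned into a short fingerprint without the file itself ever being shared.

    # Illustrative sketch only: how a file can be reduced to a "hash".
    # The real tools generate hashes on your device with their own methods;
    # this is NOT their actual code.
    import hashlib

    def fingerprint(image_path: str) -> str:
        """Return a fixed-length code (SHA-256 hex digest) for a file.

        Only this short string would be shared for matching; the image
        itself never leaves your device.
        """
        sha256 = hashlib.sha256()
        with open(image_path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                sha256.update(chunk)
        return sha256.hexdigest()

    # Example with a hypothetical file name:
    # print(fingerprint("photo.jpg"))

Participating platforms compare the fingerprints of newly uploaded images against the fingerprints you have submitted, and block anything that matches.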

5. Seek help

Talk to someone you trust, such as a friend, family member, teacher or school wellbeing staff. You can also chat to a qualified counsellor by calling Dolly's Dream Support Line on 0488 881 033 or via WebChat.

WHAT CAN PARENTS DO?

If your child has been a target of bullying by deepfakes, give them emotional support and reassurance that they are not alone or at fault. Let them know you have their back and will work with them to sort this out.

Help them document all evidence of the deepfake content and report it to eSafety and the authorities. Contact the websites or platforms where the content was posted or shared and request that it be removed.

Consider seeking mental health support for your child from a psychologist or the Dolly's Dream Support Line.

For practical resources to help you navigate children’s technology use and reduce associated harms, download Beacon, our free cyber safety app.

If you are concerned about a child or young person being bullied, please seek help. Speak to a trusted GP, school wellbeing staff, or a helpline such as:

Dolly’s Dream Support Line 0488 881 033

Parentline in your state or territory

Kids Helpline 1800 55 1800

Headspace 1800 650 890

Lifeline 13 11 14

PRINTABLE EDUCATIONAL POSTER