Generative Artificial Intelligence and Cyber Bullying

07 Mar 2024

Dolly’s Dream is committed to changing the culture of bullying, championing a world where everyone feels safe, valued, and supported – both in the real world and online.  

Recent media coverage highlights the rapid pace at which Artificial Intelligence (AI) is advancing, and how it can be weaponised to intimidate and bully individuals, leaving many families feeling overwhelmed, worried and struggling to understand its complexities.  

At Dolly's Dream, we understand that amidst this technological revolution, the wellbeing of our children and young people is the priority.  

We’ve provided some info and tips that will help families be better informed and prepared when navigating the online world and AI.  
 

What is generative AI? 

Artificial intelligence (AI) is the development of computer systems that can perform tasks that traditionally needed human brainpower. Generative AI ‘learns’ from the instructions it receives and from analysing large amounts of existing content. Then it produces new material in response to prompts.  

People use generative AI to create all sorts of things: essays, cartoons, videos, emails, song lyrics, voices, artworks, even realistic ‘conversations’ with chatbots.  

 

Generative AI and bullying 

Despite the many positives of generative AI, it raises risks for school communities. There have been some distressing reports of students using generative AI technologies to create degrading images of their classmates and teachers.  

As the technology spreads and becomes more sophisticated, there are risks that generative AI may be used to bully children in various ways, such as:  

  • Creating fake social media profiles, fake audio or videos, offensive memes, or ‘deepfake’ pornography;  
  • Bombarding the victim with nasty automated messages or comments;  
  • Using ‘bot’ accounts to mass-report the victim to a digital platform for fake wrongdoing; 
  • ‘Sextortion’, where a perpetrator threatens to release AI-generated ‘nudes’ unless the victim hands over money or real intimate photos. 

There’s nothing new about cyber bullying. But there is a risk that generative AI could make it easier for people to bully others in ways that are fast, realistic, targeted and humiliating. This intensifies the risk of children and young people being seriously harmed. 

 

What can we do to prevent bullying via generative AI?  

Firstly, we can build our own skills and knowledge. If we know the tech, our kids are more likely to come to us with a problem. For example, we can: 

  • Try using generative AI ourselves, to see what it’s like. Visit the eSafety Guide for information about different platforms and any safety issues.    
  • Know the safety mechanisms for all the digital platforms your family uses. Download our free Beacon Cyber Safety App, which provides trustworthy, practical resources to help families confidently navigate children’s technology use and reduce associated harms. The eSafety Guide also has information about how to choose high privacy settings, control who sees our content, and report, remove, block, unfriend or mute someone.  
  • Secure all accounts using complex passwords or passphrases and multi-factor authentication.  

Secondly, we can think in advance about what we would do if something went wrong. For example, if our kids were bullied by someone using generative AI, we could: 

  • Stay calm, get the whole story, and tell the school what’s happened. 
  • Make sure our kids understand that we’re glad we found out and we recognise it’s painful for them, but there are things we can do to help resolve it. 
  • Make sure our kids understand that no one is to blame for being bullied or abused, even if the victim shared some images voluntarily to start with.  
  • Keep evidence of what happened. Note the times, dates, websites, platforms and people involved. Take screenshots or photos of any online chats. (Do not take photos or screenshots of any intimate images of children under 18.) 
  • To get the content taken down, report it to the platform where it happened (see the eSafety Guide for tips) or to eSafety. 
  • Report online child sexual abuse to the ACCCE or police. 
  • Report online scams to Scamwatch. 
  • Report online offences against adults to CyberReport. 
  • Check out the Take It Down site, which works with industry to get sexual images of children under 18 taken down from different sites.   
  • Tighten security – e.g. block, mute or hide the person responsible and update your privacy settings, using the eSafety Guide. 
  • If there has been blackmail, cut contact with the person responsible and do not send them money, images or videos.  
  • Download our free Beacon Cyber Safety App which provides trustworthy, practical resources to help families confidently navigate children’s technology use and reduce associated harms. 
  • Seek outside help if necessary. You can find support services for teens and adults on the Dolly’s Dream website. Dolly’s Dream also runs a free 24/7 support line in partnership with Kids Helpline: call 0488 881 033. 

And finally, we can keep talking with our kids about the risks and benefits of the digital world. For example, we can: 

  • Discuss how important it is to stay in control of our personal information. For example, we can set our accounts to ‘private’, delete old accounts, block or mute people, and only accept requests from people we know well and trust. We can also think seriously about how much we want to share online at all. Even innocent pics and videos might get misused.  
  • Remind our kids that not everything online is accurate and that they should talk to an adult if they see something upsetting.   
  • Remind our kids that it’s always OK to say ‘no’ and we should always respect another person’s ‘no’. Just because we like or trust someone does not mean we have to chat with them or share pics or videos with them.  
  • Remind our kids that people don’t always tell the truth about themselves online. They should tell a trusted adult if someone they haven’t met in person asks to chat or swap pics online, or if anyone online does something concerning, such as using inappropriate language, pressuring them for pics or videos, telling them to keep secrets, or trying to move the chat to a different platform.   
  • Explain that it is never OK to share an intimate or humiliating image or video without someone’s consent, even if it is ‘fake’.  
  • Encourage empathy by asking our kids: ‘How would you feel if someone did that to you or to someone you love?’ 
  • Remind our kids that they can talk to us, no matter how worried or embarrassed they are, and that we will always help them to resolve a problem.  

 

Hope for the future 

While there are many worrying things about generative AI, it’s not all bad. When generative AI is designed and used in positive ways, it can help to prevent and reduce cyber bullying. For example, some people are excited about the potential of generative AI to:  

  • Detect, flag and remove abusive or threatening content on digital platforms. For example, the AI might be trained to spot insulting words, aggressive language, or even nasty emojis. 
  • Power chatbots that encourage teens to seek help with a problem.  
  • Create educational materials and awareness-raising campaigns aimed at preventing bullying and encouraging respectful behaviours. 
     

Learn more about generative AI: 

 
  • Beacon Cyber Safety App 
  • eSafety Commissioner 
  • Cyberbullying Research Centre 
  • Internet Matters 
  • Internet Watch Foundation