How to Talk AI and Deepfakes with Children

Cristina POPOV

February 20, 2025


From chatbots that help with homework to tools that generate digital art, kids are using AI extensively, often without their parents knowing, as the statistics below show. These tools can open up a world of possibilities, helping children learn, create, and grow in ways we couldn’t have imagined just a few years ago. But along with the good comes the need for caution.

AI also has a darker side that kids might not be prepared for—deepfakes. Unlike other AI applications, deepfakes are designed to deceive and can be harmful, especially to children who might not recognize what’s real and what’s fake.

That’s why kids need guidance. They need to understand how to make the most of the incredible opportunities AI offers while learning how to stay safe and recognize the risks.

Did You Know?

  • 70% of teenagers in the U.S. have used generative AI tools, such as ChatGPT, Gemini, and DALL-E.
  • Four in five teenagers in the UK have used generative AI tools.
  • Over 50% of surveyed teens used AI text generators and chatbots, 34% used AI image generators, and 22% used video generators.
  • The most common use of AI was for schoolwork, including "homework help" and "brainstorming ideas." Teens also use AI for tasks like translating content between languages and overcoming boredom.
  • 20% of teenagers used AI tools for joking around with friends.
  • Only 37% of parents whose kids use AI tools are aware of it.
  • Nearly 25% of parents incorrectly believed their children were not using AI tools.
  • Most parents have not discussed the use of AI tools with their kids.

(Sources: Common Sense Media Report, Statista, Wired)

What Are Deepfakes?

Deepfakes are videos, images, or audio files that are manipulated using artificial intelligence (AI) to create content that appears real but is entirely fabricated. The term "deepfake" comes from blending "deep learning," a subset of AI, with "fake." These creations can be incredibly realistic, making it difficult for even adults to tell the difference between real and fake—let alone children.

For example, a deepfake of actor Tom Hanks was used in a commercial promoting dental insurance, despite him having no involvement in it. Similarly, celebrities like Keanu Reeves, Tom Cruise, and Robert Downey Jr. have had entire social media accounts created with AI-generated versions of themselves.

What's more concerning is that anyone, not just celebrities, can be targeted. A simple photo or video of an ordinary person can now be transformed into highly realistic fake content, putting everyday people—including children—at risk of being manipulated or misrepresented online.


How Are Deepfakes Created, and for What Purpose?

Deepfakes are created by feeding artificial intelligence (AI) programs with large amounts of data, such as videos, photos, and audio recordings of a person. The AI analyzes these inputs to learn patterns—how someone moves, speaks, or looks—and uses that information to generate highly convincing fake content.

While some people create deepfakes for harmless fun, like pranks or creative projects, others use them for harmful purposes, such as damaging reputations, spreading misinformation, or exploiting individuals. Children, who are often active online and lack the experience to detect manipulated content, are especially vulnerable.


Here are five platforms commonly used to create deepfakes, illustrating both their appeal and the potential risks:

1. FakeMe. This platform allows users to easily swap faces in videos, making it accessible for casual users. While it's marketed as a fun tool, it has been misused for pranks and even online harassment, raising concerns about its ethical use.

2. DeepFaceLab. A free, advanced tool designed for hobbyists and professionals alike, DeepFaceLab offers extensive capabilities for creating convincing deepfakes. Its availability to the public highlights how accessible powerful AI tools have become.

3. Zao. A mobile app that gained popularity for letting users insert their faces into movie scenes. While entertaining, it has sparked debates over privacy, as the app collects and processes personal images.

4. Reface. Known for creating face-swapped videos and GIFs, this app is widely used for its ease of use and entertainment value. However, its collection of user data has raised privacy concerns for parents and individuals.

5. MyHeritage Deep Nostalgia. This tool animates old photos, breathing life into historical or sentimental images. While often used for heartfelt purposes, it demonstrates the potential for AI to manipulate images, which can be concerning when applied unethically.

Risks for Children: How Deepfakes Can Affect Them


Deepfakes are more than just a digital trick—they can pose serious risks to children, affecting their safety, privacy, and emotional well-being. When someone can manipulate a person's face or voice with AI, the potential for harm is alarming.

Here are some specific risks for children:

  • Identity Theft: Deepfakes can impersonate a child's face or voice, potentially allowing criminals to access personal accounts, commit fraud, or manipulate family members.
  • Cyberbullying: Deepfakes can be used to create embarrassing or humiliating content about a child, leading to online harassment.
  • Misinformation: Children may be misled by manipulated videos or images, creating confusion about reality and affecting their understanding of the world.
  • Privacy Violations: Personal photos or videos shared online can be exploited to create deepfakes, putting children at risk of being impersonated or targeted.
  • Emotional Harm: Encountering a deepfake of themselves or someone they trust can be deeply distressing, causing fear, confusion, or a sense of betrayal.

How to Talk AI and Deepfakes with Children


Open, calm, and age-appropriate conversations are the most powerful tools you have as a parent. Here's how to address this topic in a way that informs and empowers your child:

1. Start with the Basics

Tailor your explanation of AI and deepfakes to your child’s age and level of understanding. For younger children, keep it simple: "AI is like a very smart computer program that helps with things like finding answers, drawing pictures, or even making videos. But sometimes, people use it to change photos or videos to make them look real when they're not—that's what a deepfake is."

For older kids or teenagers, you can offer a bit more depth. For example:

"Artificial intelligence, or AI, is advanced technology that can do things like answer questions, create art, or even make videos. It works by analyzing patterns and learning from data. But sometimes, people use AI in ways that aren’t helpful or honest, like creating fake videos or images that look real but aren’t. These are called deepfakes, and they can be used to trick or mislead people."

2. Explain the Risks Clearly

Once they understand the basics, explain that AI can also be misused. Share examples like a fake video of someone saying or doing something they didn't actually do. Make it relatable by saying something like, "Imagine if someone made a video of you saying things you never said—it could confuse people or hurt your feelings."


3. Teach Critical Thinking

Encourage your child to question what they see online. Use examples to practice together:

  • "Does this look too perfect or strange?"
  • "Who posted it? Is it from a trusted source?"
  • "Can we check this with another website to see if it's real?"

4. Keep the Conversation Ongoing

This isn't a one-time discussion. As technology evolves, make it a regular topic. Ask them about new tools they've heard of and explore them together to build trust.

5. Set Boundaries Together

Talk about online behavior and agree on rules for sharing photos, videos, or personal information. For example:

  • Only post pictures in private groups or with trusted friends.
  • Avoid downloading face-swap or deepfake apps.
  • Always check with you before sharing anything that feels "off."

6. Create a Safety Plan

Prepare for potential deepfake scenarios. For instance:

  • Agree on a family code word for emergency calls, so you can confirm that a caller claiming to be a family member really is who they say they are.
  • Teach them to report, block, and delete suspicious messages or calls.
  • Practice what to do if they see something upsetting online—tell a trusted adult immediately.

7. Be Responsible

Your actions matter. Be cautious with the photos and videos you share of your child online. If possible, limit sharing to private groups or texts with family members. Show them that privacy is something you value, too.

Related: The impact of sharenting. How the digital identity you create for your child today could affect their future

8. Equip Them with Tools and Knowledge

As a parent, you can use tools like Bitdefender Parental Control to monitor your child's online activity and ensure they're staying safe. It allows you to see if your child downloads or uses AI platforms or deepfake apps, giving you the chance to address potential risks early and start important conversations about online safety.

To further protect them, explore tools like Scamio and Bitdefender Link Checker together. Scamio makes it easy to check whether an email, text, or social media ad is suspicious. You can use it for free on WhatsApp, Facebook Messenger, and Discord, or directly in your web browser.

Bitdefender Link Checker is a simple way to verify whether links are safe before clicking on them, helping to avoid scams or harmful websites.

FAQs

What are deepfakes, and why are they dangerous for kids?

Deepfakes are videos, images, or audio files created using artificial intelligence (AI) to make fake content that looks or sounds real. They can be used for cyberbullying, identity theft, spreading misinformation, or exploiting personal photos, making it hard for children to know what’s real and what’s fake online.

How can parents protect their children from deepfakes?

Tools like Bitdefender Parental Control allow parents to track their child’s online activity, including any downloaded apps or use of AI platforms. For added protection, parents can use Scamio to analyze emails, texts, and social media ads for scams or deepfakes. Bitdefender Link Checker helps verify link safety, reducing the risk of harmful content.

What should parents do if their child is targeted by a deepfake?

Report the content to the platform where it appears and request its removal. Save any evidence, such as screenshots or links, in case further action is needed. Use tools like Scamio to analyze suspicious content and confirm its authenticity. You can also contact local authorities or cybersecurity professionals for support. Most importantly, talk to your child to reassure them and help them understand the situation, emphasizing that they are not at fault. Proactively using tools like Bitdefender Parental Control can help prevent similar incidents in the future.

Author


Cristina POPOV

Cristina is a freelance writer and a mother of two living in Denmark. Her 15 years of experience in communication include developing content for TV, the web, mobile apps, and a chatbot.
