By Nomisha Kurian

“To be in a situation where you see young girls traumatised at a vulnerable stage of their lives is hard to witness,” said the mayor of a New Jersey town in November 2023 (Blanco, 2023). Her remark referred to a fresh form of psychological warfare sweeping the local school community through deepfake technology. Male classmates of adolescent girls had used generative Artificial Intelligence (AI) to create and circulate nude images of them. The boys shared the photos in group chats, causing the girls not only to experience shame and fear but also to contemplate deleting their social media accounts (Blanco, 2023). The School Superintendent noted that “all” school districts were “grappling with the challenges and impact of artificial intelligence” that had suddenly become “available to students at any time and anywhere” (McNicholas, 2023). Indeed, reports of students using generative AI to create sexually explicit or non-consensual imagery to bully their peers have emerged across the globe, from Australia (Long, 2023) to Spain (Guy, 2023). In fact, The New York Times has recently declared “deepfake nudes in schools” an “epidemic” for teenage girls (Singer, 2024).
The need for policymakers and educators to act swiftly is underscored by the words of a 15-year-old girl:
“We have to start updating our school policy. Because if the school had A.I. policies, then students like me would have been protected.”
(Francesca, 15, quoted in Singer, 2024)
Child safeguarding has been a perennial concern for educators and scholars. Yet rapid advancements in generative AI pose a new “wicked problem”, defined as “a class of social system problems which are ill-formulated, where the information is confusing, where there are many clients and decision-makers with conflicting values, and where the ramifications in the whole system are thoroughly confusing” (Buchanan, 1992). To better understand the nature of this particular challenge, it helps to look at how the technology works. Much deepfake imagery is produced by generative adversarial networks, a class of sophisticated neural networks defined by two components (Wang et al., 2017). The “generator” creates synthetic content (e.g. images, videos and audio), while the “discriminator” tries to distinguish real content from generated content. A continuous feedback loop ensues, helping the networks refine their output and generate increasingly realistic and convincing results (Wang et al., 2017). It is vital to note that whilst the technology is sophisticated, it is simpler than ever to use. Unlike specialised film and video production software, which users typically pay for and must learn, many generative AI systems are free and require no technical knowledge. This has put the ability to produce deepfake imagery at the fingertips of anybody looking to bully or humiliate another person.
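For readers curious about what that adversarial feedback loop looks like in practice, below is a minimal, toy sketch in PyTorch: a tiny generator and discriminator trained against each other on made-up two-dimensional data. All names, dimensions and training settings here are illustrative assumptions; real deepfake systems are vastly larger and more complex.

```python
# Toy sketch of a generative adversarial network's feedback loop (illustrative only).
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# The "generator" turns random noise into fake samples.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# The "discriminator" scores how likely a sample is to be real.
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) + 3.0          # stand-in "real" data
    fake = generator(torch.randn(64, latent_dim))   # generated data

    # 1. The discriminator learns to tell real samples from generated ones.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. The generator learns to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each pass through the loop nudges the generator towards output that the discriminator can no longer distinguish from the real thing, which is precisely what makes the resulting imagery so convincing.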
The way deepfake technology blurs the line between truth and fiction puts children’s safety and wellbeing at risk in numerous ways. Perpetrators can manipulate videos and audio recordings to create convincing simulations; activities or conversations that never occurred can become persuasive “realities”. It is not difficult to imagine the harms for AI’s youngest stakeholders, from harassment and the spread of false information to the fabrication of evidence in cases of child exploitation. Cyber-bullying may come to include perpetrators threatening to publicise fake images or videos and, as has already been reported, disseminating fake material through the digital spaces most important to young people’s socialisation (see Singer, 2024).
The wellbeing and safety of children – especially the young girls who are most vulnerable to being exploited by deepfake technology – must be protected. Developing AI-responsive policies might mean strengthening interdisciplinary collaborations between educators, researchers and technologists. The same drive for innovation that created generative AI can be channelled into preventing its harms. For instance, digital forensics techniques, which scrutinise the origins of and alterations to digital files, are being extensively researched; they can help identify deepfake images by analysing metadata or inconsistent pixel patterns (Lancaster, 2023). One much-discussed preventative strategy involves inserting watermarks into generated content: embedding a piece of data (such as a logo) into the digital multimedia output to flag that it has been artificially generated.
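As a rough illustration of the metadata-inspection idea, the sketch below uses the Pillow library to read an image’s EXIF tags and text fields and print anything that hints at synthetic provenance. The file name, the keyword list and the inference drawn from missing metadata are assumptions made purely for illustration; real forensic tools combine many stronger signals and cannot be replaced by a script like this.

```python
# Toy provenance check on an image file (illustrative only, not a reliable detector).
from PIL import Image, ExifTags

def inspect_image(path: str) -> None:
    img = Image.open(path)

    # 1. Camera photos usually carry EXIF tags such as Make, Model and DateTimeOriginal;
    #    many AI-generated files carry none at all.
    exif = img.getexif()
    if not exif:
        print("No EXIF metadata found - a weak signal that the file may be synthetic.")
    else:
        for tag_id, value in exif.items():
            print(f"{ExifTags.TAGS.get(tag_id, tag_id)}: {value}")

    # 2. Some generators and watermarking schemes record provenance in text fields.
    for key, value in img.info.items():
        if isinstance(value, str) and any(
            word in value.lower() for word in ("generated", "diffusion", "ai")
        ):
            print(f"Provenance hint in '{key}': {value}")

inspect_image("suspect_photo.jpg")  # hypothetical file name
```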
Efforts to develop robust, machine-learning-driven content moderation systems also seem promising. These systems can analyse vast amounts of data and identify anomalies or discrepancies. Trained to recognise telltale signs of manipulation, such tools may be able to flag potentially harmful content automatically before it reaches vulnerable child audiences. Perhaps they can become part of educators’ safeguarding toolkits (for example, when monitoring online spaces where students interact).
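To make the idea concrete, here is a deliberately simplified sketch of how such a moderation pipeline might route content: a placeholder classifier produces a “manipulation” score, and anything above an assumed threshold is withheld and escalated to a human reviewer. The model, the embedding input and the threshold are hypothetical stand-ins rather than a working detector.

```python
# Simplified moderation routing (placeholder model and threshold, for illustration).
import torch
import torch.nn as nn

# In practice this would be a large classifier trained on known manipulated content.
detector = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

REVIEW_THRESHOLD = 0.8  # assumed escalation threshold

def moderate(image_embedding: torch.Tensor) -> str:
    """Return a moderation decision for one piece of content."""
    score = detector(image_embedding).item()  # estimated probability of manipulation
    if score >= REVIEW_THRESHOLD:
        return "flag: withhold and escalate to a human safeguarding lead"
    return "allow"

# Example call with a random 512-dimensional embedding standing in for an uploaded image.
print(moderate(torch.randn(512)))
```

The design choice such a sketch highlights is that automated flagging feeds a human decision rather than replacing it, which connects to the point about human oversight below.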
However, technological solutions have limits. For one, creators of deceptive content are adept at rapidly refining deepfake techniques, which means that safeguarding tools remain effective only with ongoing updates and improvements. AI-sensitive safeguarding policies will need to be a long-term, continuous commitment for educators and researchers; there is no immediate “fix”.
For another, technological solutions cannot compensate for a lack of human oversight and critical thinking, especially when young girls are victimised by their own peers. Proactive policy-led strategies seem necessary to foster a safer online environment for children.
For example, students can be supported to discuss, on a deeper ethical level: what does it mean to practise an ethic of care online? Take the widely documented phenomenon of young girls around the world being humiliated by AI-generated nude imagery created and disseminated by their own classmates (Blanco, 2023; Guy, 2023; Long, 2023). This cannot be separated from deeper questions about the roots of gender-based violence and how cultures of bullying take shape for those most vulnerable to shaming and stigmatisation. These deep-rooted societal inequities manifested across different mediums well before generative AI, and tackling them will demand more than a technological tweak. Perhaps educators can expose students to real-world examples of the social and psychological harms of deepfake-generated content. Listening to victim testimonies may allow young people to participate in shared meaning-making, connecting directly to the pain of another individual rather than simply being instructed by an adult on appropriate social behaviour (Kurian & Saad, 2024). It might also be helpful to engage students in hands-on activities that involve creating and detecting manipulated content. Allowing students to openly experiment with generative AI in the classroom might help begin discussions about their agency to ethically navigate complex socio-technological change. Complementing this with lessons on AI literacy can help students think critically about online content and question sources. In turn, being trained to evaluate the authenticity of the information they see might help equip them to mindfully navigate shifting digital landscapes.
Alongside this kind of experiential learning, dialogical pedagogy seems necessary to create a safe and inclusive classroom environment. Going beyond simplistic ‘top-down’ prescriptions and valuing children and young people’s own voices might mean co-creating guidelines for digital citizenship. Encouraging students to share their own hopes, concerns and visions around generative AI can help emphasise their agency to work towards ethical AI futures (Kurian, 2023; Kurian & Saad, 2024). In addition, scholars of school bullying prevention have long noted the cultures of shaming and silencing that can stop child victims from reporting bullying or violence (O’Brien et al., 2018). It therefore becomes crucial to build teacher-student relationships of trust and open dialogue. Young people can quickly become trapped in cycles of peer pressure to remain silent about, or even complicit in, abuse. The onus is thus on educators to establish the relational foundation for students to feel safe reporting instances of cyberbullying or encounters with deepfake technology. In turn, it will be essential for practitioners and researchers to demonstrate the will and knowledge to understand and tackle these new forms of violence in young people’s lives.
Advancements in AI often outpace our understanding of their consequences. Perhaps proactively engaging with both the technical underpinnings and social implications of generative AI can help us better protect children from new risks in, and beyond, schools. In turn, cultivating this critical socio-technical literacy seems vital to ensure safe and just futures for young people.
References
Blanco, A. (2023). Teen boys at New Jersey school accused of creating AI deepfake nudes of female classmates. The Independent. Retrieved from https://www.independent.co.uk/news/deepfake-nude-westfield-high-school-nj-b2440793.html
Buchanan, R. (1992). Wicked problems in design thinking. Design Issues, 8(2), 5-21.
Guy, J. (2023). Outcry in Spain as artificial intelligence used to create fake naked images of underage girls. CNN News. Retrieved from https://edition.cnn.com/2023/09/20/europe/spain-deepfake-images-investigation-scli-intl/index.html
Kurian, N. & Saad, C. (2024). Where technology meets empathy: using digital storytelling, gaming and AI to teach about peace and human rights. In O’Hair, M. J., Woods, P., & O’Hair, H. D. (Eds.). Education and communication: Promoting peace and democracy in times of crisis and conflict, Volume 3. Wiley Blackwell.
Kurian, N. (2023). AI’s empathy gap: The risks of conversational Artificial Intelligence for young children’s well-being and key ethical considerations for early childhood education and care. Contemporary Issues in Early Childhood, 14639491231206004.
Lancaster, T. (2023). Artificial intelligence, text generation tools and ChatGPT – does digital watermarking offer a solution? International Journal for Educational Integrity, 19(1), 10.
Long, C. (2023). First reports of children using AI to bully their peers using sexually explicit generated images, eSafety commissioner says. ABC News. Retrieved from https://www.abc.net.au/news/2023-08-16/esafety-commisioner-warns-ai-safety-must-improve/102733628
McNicholas, T. (2023). New Jersey high school students accused of making AI-generated pornographic images of classmates. CBS News. Retrieved from https://www.cbsnews.com/newyork/news/westfield-high-school-ai-pornographic-images-students/
O’Brien, N., Munn-Giddings, C., & Moules, T. (2018). The repercussions of reporting bullying: some experiences of students at an independent secondary school. Pastoral Care in Education, 36(1), 29-43.
Singer, N. (2024). Teen girls confront an epidemic of deepfake nudes in schools. The New York Times. Retrieved from https://www.nytimes.com/2024/04/08/technology/deepfake-ai-nudes-westfield-high-school.html
Wang, K., Gou, C., Duan, Y., Lin, Y., Zheng, X., & Wang, F. Y. (2017). Generative adversarial networks: introduction and outlook. IEEE/CAA Journal of Automatica Sinica, 4(4), 588-598.

This is such a timely and critical post – we need to protect all of our children from the risk of harm from AI and technology. In the digital age, what is done cannot be undone; the impulsivity of the teenage brain is at odds with the permanence of the internet. As educators, we must seek to create mechanisms for supporting children to understand the substantial risks, to themselves and others, posed by technology that is marketed to them as fun and frivolous.
Not only are the important issues of safeguarding and wellbeing explicated so clearly; this well-written post also points to ways in which such issues may start to be addressed. The nature of wicked problems means there are no straightforward answers, but the crucial role of educators is highlighted nevertheless.
Ros
Dr Ros McLellan, Faculty of Education, University of Cambridge
A really inspiring piece! I appreciate how it links back to the roots of gender-based violence. It made me wonder about the types of problematic content that could emerge if deepfake technology were widely adopted across different cultures; similar issues might well arise elsewhere. Additionally, the fact that girls are the most victimised in these scenarios might explain why, according to some recent research, girls benefit the most when smartphones are banned in schools, with significant improvements in their mental health and wellbeing.