Will AI Force Us to Rethink Evidence, Risk, and Harm in Child Protection?
Introduction
After reading Laura Bates's The New Age of Sexism, I found myself reflecting, through a child protection lens, on how technology is increasingly being used to perpetuate misogyny and to harm women and children.
Bates explores in depth how technology is evolving in ways that facilitate harm, and the links between misogyny and the increasing prevalence of this abuse. What I found particularly unsettling was the extent to which systems, including the criminal justice system, do not yet consistently recognise these behaviours as crimes, or fully acknowledge the level of harm they can cause.
For me, however, reading this prompted a different question: what happens when these issues begin to present within child protection work, and are we equipped to recognise and respond to them?
The Changing Landscape of Harm
The rapid development of artificial intelligence has already begun to change the landscape of online harm. Tools that can generate images, replicate voices, and manipulate content are no longer limited to specialists. Many are freely available or low-cost, and can be used with minimal technical knowledge.
From Future Concept to Present Reality
When I think about the possibilities of AI in the real world, it can still feel largely futuristic. References in popular culture, such as films like I, Robot or series like Black Mirror, tend to present AI through extreme or dystopian scenarios. While these may still feel some way off, it is important to recognise that many of these technologies are already embedded in everyday life, in ways that are less visible and often go unnoticed or misunderstood.
These tools are not years away; they are already in use. There has been a significant increase in the volume of AI-generated sexual imagery identified online, including material that is realistic and difficult to distinguish from genuine images.
These risks are not theoretical. Public reporting has already identified “nudify” websites and bots that generate non-consensual sexualised images from ordinary photos, including services such as ClothOff and Telegram-based nudify bots. Reporting has also highlighted the availability of realistic fake chat generators, such as Mockly, as well as broader concerns around AI voice-cloning technology, which regulators have already flagged as posing significant risks of deception and impersonation. In other words, the concern is not that these tools may exist in the future, but that variants of them already exist now and are likely to become more sophisticated, cheaper and more widely used over time.
Victim accounts already suggest that the impact of this type of abuse can be profound. Despite the absence of physical contact, women describe experiences of violation, fear, loss of control, and ongoing distress that are comparable in many ways to other forms of sexual harm. The knowledge that images can be created, altered or shared without consent introduces a level of unpredictability that can be difficult to manage and contain.
This article focuses specifically on the implications of these developments for women, and how they may begin to present within child protection work. The impact on children directly, including their own use of these technologies and exposure to harm, will be considered separately. The examples outlined above reflect both what is already being observed and my own professional view of how these risks may begin to present more frequently within social care. The technology remains relatively new, and clear patterns are still emerging. However, the ways in which it may be used are consistent with established patterns of abuse, which makes these developments important to consider at an early stage.
What is emerging is not a separate category of harm, but an expansion of existing patterns of sexual abuse, now facilitated through tools that are increasingly accessible and difficult to regulate.
Deepfakes
The use of artificially generated or manipulated sexual images, often referred to as “deepfakes”, represents a significant development in how sexual harm can be perpetrated.
There are clear parallels with what has previously been described as “revenge pornography”, which was made a criminal offence in England and Wales under the Criminal Justice and Courts Act 2015. This legislation recognised the harm caused by the non-consensual sharing of intimate images, including the impact on dignity, privacy and psychological wellbeing.
However, AI-enabled technologies extend this further. Images no longer need to exist in order to be shared. They can be created.
There is already evidence that deepfake sexual images are being used to harass, humiliate and threaten women, including through distribution within social networks, workplaces and online spaces. As outlined earlier, victim accounts indicate that the impact can be profound, with experiences of violation, fear and loss of control that mirror other forms of sexual harm.
Within the context of intimate relationships, including following separation, this creates additional opportunities for abuse. It is possible for a partner or ex-partner to create explicit and falsified images and distribute these to others, including family members or new partners. This represents a further extension of sexual harm, allowing abuse to continue without the need for physical proximity or direct contact.
These images can be highly explicit and may depict acts that the individual has never engaged in. Despite this, they may still be perceived as real by others, particularly where the technology used is sophisticated. This creates a significant risk of reputational harm, humiliation and social isolation.
There are also potential implications within family dynamics. It is conceivable that such material could be used to undermine a parent’s role or influence relationships within the family, for example by presenting falsified images to others in a way that alters perceptions of that parent. While there is currently limited formal evidence of this in practice, the accessibility of these technologies raises questions about how they may be used in the future, particularly in high-conflict or abusive contexts.
For women from different cultural or religious backgrounds, the impact may carry additional nuance. The creation of falsified images depicting a woman without religious dress, such as a headscarf or niqab, or in a sexualised context that directly conflicts with her beliefs, may have wider implications beyond personal distress. This may include risks linked to reputation, family relationships, and in some cases the potential for honour-based violence.
In addition to the direct harm caused by the creation and distribution of falsified sexual images, there are wider implications for how this material may be used within the context of parenting and child protection.
One potential use is in the context of coercion and control within intimate relationships. Falsified images or videos could be used to blackmail a parent into complying with demands, including remaining in a relationship, withdrawing from support networks or their wider community, or acting in ways that prioritise the wishes of the perpetrator. The threat of exposure, particularly where the material is explicit or reputationally damaging, may be enough to place significant pressure on the individual, even where the content is entirely fabricated.
There is also the potential for such material to be used to construct false narratives about a parent’s lifestyle or capacity to care for their children. This may include fabricated images or videos suggesting unsafe home conditions, children in distress, or a parent being intoxicated or using substances. While these scenarios may not yet be widely documented in formal research, they represent a credible extension of how digital content could be used to influence perception, now made more accessible through AI.
This raises particular concerns in the context of safeguarding and court proceedings, where digital material is often relied upon as part of the evidential picture. The introduction of falsified but realistic content creates the potential for professionals to be presented with material that appears credible, but is difficult to verify within the timescales and constraints of frontline practice.
A further complexity arises in relation to allegations of harm towards children. It is conceivable that fabricated audio or video recordings could be used to suggest that a parent is shouting at, threatening or physically harming their child. At the same time, the increasing awareness of deepfake technology introduces the possibility that genuine recordings could in turn be dismissed as falsified.
There are also potential implications for how children themselves understand their experiences and family relationships. AI-generated content could be used to create conversations, messages or even “memories” that appear consistent and believable over time. A child may be repeatedly exposed to material that suggests events occurred in a particular way, or that a parent behaved in ways that are not accurate.
This may include the creation or alteration of images, for example inserting an absent parent into historical photographs to give the impression they were present when they were not, or accusing the other parent of editing genuine images to distort the truth. Similarly, fabricated message exchanges may be shown to a child to suggest that one parent has been trying to maintain contact while the other has prevented it. Over time, this can influence how a child understands what has happened between their parents, based on information that is not real.
Repeated exposure to consistent but inaccurate narratives may shape how a child understands their relationships, potentially reinforcing conflict or confusion, fostering alignment with one parent, or detrimentally affecting the relationship with the other.
This creates a challenging dynamic for practitioners. Where concerns are raised, they cannot be ignored. However, the ability to confidently determine what is authentic and what is not may become harder. This has the potential to introduce uncertainty into assessment processes, with implications for how risk is understood, managed and evidenced.
Taken together, these examples highlight that the significance of AI-generated content extends beyond the creation of harmful imagery itself. It has the potential to influence how narratives about parenting, credibility and risk are constructed, both within families and within professional systems.
Coercive Control
While the use of falsified imagery represents one aspect of emerging harm, the broader implications of artificial intelligence lie in the expansion of coercive and controlling behaviour.
Coercive control has traditionally relied on a perpetrator’s ability to monitor, intimidate and restrict a partner’s autonomy. What these technologies introduce is not a new form of abuse, but an increased capacity to extend these behaviours in ways that are less visible and more difficult to evidence.
One area that warrants consideration is how developments in artificial intelligence may change the way monitoring and surveillance occur within relationships. Access to messages, accounts and location data is not new. However, what is changing is the level of effort required to make use of this information.
AI tools have the potential to analyse large volumes of data quickly, identify patterns, and highlight changes in behaviour without the need for continuous manual checking. This may include summarising communications, flagging particular words or themes, or identifying deviations from established routines.
In practice, this means monitoring can become more passive and continuous. Rather than actively searching for information, a perpetrator may rely on automated systems to draw attention to activity they consider significant. This may reduce a victim’s ability to act independently or seek support without their actions being noticed.
Where a perpetrator has access to devices or accounts, they may also be able to monitor messages, intercept information, or identify attempts to seek help across multiple platforms, with AI further assisting in filtering and prioritising relevant communications.
A further concern is the potential for impersonation. The increasing sophistication of AI-generated communication raises the possibility that perpetrators may present themselves as professionals, such as social workers or police, in order to influence, intimidate or mislead a victim. While impersonation is not new, the ability to produce realistic and convincing communications may increase the likelihood that such attempts are believed, particularly where they appear consistent with expected professional language or format.
This may extend beyond written communication. It is now increasingly possible for perpetrators to generate or use cloned voices during live or recorded calls, presenting themselves as professionals or trusted individuals.
In practice, this may look like a perpetrator telling a victim that professionals have given certain instructions, for example “the police told me to do this” or “social services said this should happen”. AI introduces the possibility of generating audio that appears to support these claims, such as a recording that sounds like a social worker or police officer confirming the instruction.
This not only increases the likelihood that a victim will believe what they are being told, but also creates the potential for professionals themselves to be misled. For example, fabricated material may be used to suggest that one agency has given advice that contradicts another, or that certain decisions have already been agreed. This could create confusion, undermine professional relationships, and in some cases position professionals against one another. In this context, the information no longer relies solely on the perpetrator’s account, but appears to be independently verified.
Artificial intelligence may also be used to support perpetrators in constructing and maintaining more coherent and persuasive narratives. This may include scripting responses, generating explanations, or presenting reflections that appear insightful and accountable. In practice, this may mean a parent presenting as calm, reflective and cooperative in written communication or assessments, while minimising their own behaviour and shifting responsibility onto the other parent.
What is notable here is that AI systems can be used to generate responses that align with what the user is seeking, rather than providing challenge or scrutiny. This creates a risk that perpetrators are able to refine and reinforce their own narratives, rather than being encouraged to reflect meaningfully on their behaviour. In practice, this may enable individuals to present as a safe or cooperative parent in a way that is not reflective of their actual behaviour, complicating professional assessment.
Control is no longer limited by proximity, effort or even direct interaction. It can be continuous, automated and, at times, invisible.
While some of these risks are still emerging, they are consistent with established patterns of coercive control. The difference lies in the ease with which these behaviours can now be carried out, and the challenges this presents for both victims and professionals in recognising and evidencing harm.
Implications for Child Protection Practice
Social workers work with ‘evidence’. However, evidence is understood differently in social work than in the criminal justice system. Criminal courts work to a standard of “beyond reasonable doubt”, whereas social workers make decisions based on the “balance of probabilities”. This distinction has always required a degree of professional judgement, but the increasing presence of AI-generated and manipulated content introduces a new level of complexity into how that judgement is exercised.
In practice, this means that information which may once have been taken at face value, such as a photograph, message thread or screenshot, can no longer be assumed to reflect a genuine account of events. Material may appear coherent, chronological and plausible, but still be inaccurate or entirely fabricated. At the same time, frontline practitioners are unlikely to have timely access to tools that can reliably verify the authenticity of such content, meaning that decisions may need to be made without definitive clarity about what is real.
Safeguarding systems often rely on observable indicators, third-party information, or tangible evidence. AI-facilitated abuse challenges this. Digital content can be easily created, altered or removed, meaning that what is presented to professionals may represent only a partial, distorted or carefully curated version of events.
This creates a tension between lived experience and evidential standards. Practitioners may be required to make judgements about risk in the absence of clear, verifiable proof, increasing both professional uncertainty and the potential for harm to be minimised. There is also a risk that, in the absence of “reliable” evidence, greater weight is placed on presentation and demeanour. This may disadvantage victims who are experiencing ongoing coercion or distress, and whose presentation may be shaped by that experience.
Within child protection, the implications are not always immediately visible.
A parent experiencing ongoing digital surveillance or threats may present as inconsistent, anxious, or emotionally unavailable. For example, they may struggle to maintain routines, appear distracted in interactions, or respond in ways that seem disproportionate or unclear. Without an understanding of the underlying context, these behaviours may be interpreted as concerns about parenting capacity, rather than recognised as responses to coercion. This is particularly relevant in cases where routine, supervision or emotional availability are being assessed without full consideration of external pressures or fear-based control.
This creates the potential for misattribution of concern, where the impact of abuse is located within the parent, rather than understood as arising from external harm. In turn, this may influence assessment outcomes and intervention planning in ways that do not fully address the source of risk. In some cases, this may lead to increased scrutiny of the non-abusive parent, rather than a focus on the behaviour and impact of the perpetrator.
The legal framework is beginning to respond to some aspects of AI-facilitated abuse. However, developments remain inconsistent, and awareness of existing legislation is variable in practice.
This creates a gap between the pace at which harm is evolving and the systems designed to respond to it. For practitioners, this can result in a reliance on professional judgement in areas where formal guidance is limited. It also highlights a broader issue: that recognition of harm within legal frameworks often lags behind lived experience, leaving professionals working in spaces that are not yet clearly defined or consistently understood.
There are also implications for how professionals themselves may be drawn into these dynamics. The increasing accessibility of AI tools means that parents and carers are able to construct written complaints, referrals or accounts that are highly coherent, structured and persuasive. While this may support accessibility for some individuals, it may also result in an increased volume of detailed complaints or reports that require a response.
Where such complaints are not grounded in substantive concerns, this has the potential to divert professional time and attention away from direct safeguarding work. Similarly, the use of AI to generate referrals to children’s services may result in an increased number of reports that appear credible at face value, requiring assessment and triage within already pressured systems.
At the same time, child protection occupies a unique position. Unlike the criminal justice system, intervention does not rely on proving that a criminal offence has occurred. This creates an opportunity for practitioners to recognise and respond to emerging forms of abuse based on their impact, rather than their legal classification.
This is particularly important in the context of AI-facilitated harm, where behaviours may not yet be widely recognised, clearly defined, or consistently prosecuted, but may still have a significant and harmful impact on both the adult and the child.
For social workers, this does not necessarily require entirely new frameworks, but it does require an expansion of professional curiosity.
It may involve considering questions that would not previously have been central to assessment, including how technology is being used within relationships, what access others may have to devices or accounts, and whether threats or harm may exist in forms that are not immediately visible. This may also include exploring inconsistencies in digital information with a greater degree of curiosity, rather than accepting material at face value.
It also requires an awareness that absence of evidence does not equate to absence of harm. Equally, the presence of what appears to be “evidence” does not necessarily equate to truth.
Conclusion
Artificial intelligence is not creating entirely new forms of harm, but it is changing how existing abuse can be carried out, sustained and understood.
For child protection, the challenge is not only in responding to what is already visible, but in recognising what may not yet be fully understood. As these technologies continue to develop, so too must our ability to identify, interpret and respond to harm in forms that do not always present clearly.
The question is not whether these issues will arise in practice, but whether we are prepared to recognise them when they do.