Framed as a space for reflection rather than definitive answers, Insight to Impact brought together expertise spanning lived experience, frontline delivery, research and system leadership. Across the afternoon, speakers explored the scale of current challenges, the realities of data and technological maturity, and the choices the sector must now make as AI becomes increasingly embedded in professional and everyday life.
Speakers and Contributors
Keynotes & Presentations
- Ade Adetosoye CBE – Chief Executive, Bromley Council (Chair)
- Lisa Harker – Director, Nuffield Family Justice Observatory
- Simon Bailey CBE – Chair, International Policing and Public Protection Research Institute, Anglia Ruskin University
- Professor Sir Anthony Finkelstein CBE – President, City St George’s, University of London
- Andrew Newman – Principal Data Scientist, Open Data Institute
- Matthew Wagner – Chief Analyst, Kent County Council
- Kevin Yong – Consultant, Coram
- Dame Carol Homden – Chief Executive, Coram (closing remarks)
Panel: Emerging Questions and Intergenerational Perspectives
- Ade Banjoko – Director, Parents Action and Resource Centre CIC
- Jenny Coles CBE – Child Safeguarding Practice Review Panel; ADCS President 2020/21
- Jabed Hussain – Associate Director of Business Efficiency & Digital Transformation, Kingston & Richmond Councils
- Renuka Jeyarajah-Dent – Consultant, Coram
- Dr Aoife O’Higgins – Director of Evidence, Foundations
- John Caldicott – former pupil of the Foundling Hospital
- Claire Noble – Artist and international adoptee
- Olivuh – Youth Ambassador

Lisa Harker — Can Data Improve Children’s Lives?
Lisa Harker, Director of the Nuffield Family Justice Observatory, opened the keynote sessions with a clear proposition: data can improve children’s lives—but only if it is used critically, contextually and through an intersectional lens.
She reminded the audience that children’s lives are shaped by multiple, overlapping factors – ethnicity, deprivation, age, disability, geography and family context – and that analysing these characteristics in isolation risks obscuring rather than illuminating inequality. Drawing on research from the Family Justice Observatory and wider studies, Harker illustrated how patterns of involvement in children’s services and family court outcomes differ significantly when these factors intersect, particularly in relation to ethnicity and deprivation.
She also highlighted major demographic shifts, including growing diversity among children and young people and the rapid expansion of the “mixed ethnicity” category – arguing that broad labels are becoming increasingly blunt tools for understanding lived experience.
Turning to the use of AI in practice, Harker examined the rapid uptake of AI‑enabled transcription and summarisation tools in children’s social care. While these tools offer clear potential benefits – reducing administrative burden and freeing up time for relational work – she warned of risks including hallucination, bias in speech recognition, and the danger of evaluation frameworks that prioritise time saved over children’s and families’ experiences.
Her challenge to the sector was clear: innovation must be judged by whether it helps families feel heard, understood and better supported.

Simon Bailey CBE — The Scale We Grapple With
Former National Police Lead for Child Protection, Simon Bailey CBE, grounded the conversation in the realities of harm facing children and young people, particularly in the digital sphere.
Focusing on child sexual abuse and online exploitation, Bailey described a global threat landscape that is growing in scale, speed and severity. He emphasised that online abuse is not a lesser harm: for many victim‑survivors, the trauma is deeper, more enduring and compounded by the knowledge that images may persist indefinitely.
Bailey outlined how grooming methods have changed, with AI enabling rapid coercion and sextortion, and how peer‑on‑peer abuse now accounts for a significant proportion of reported cases. He also raised serious concerns about the misuse of AI, including the generation of synthetic abuse material and the weaponisation of technology against schools and young people.
However, Bailey was equally clear that AI must be part of the solution. Drawing on his experience of safeguarding practice reviews, he argued that AI could help transform training, supervision and early warning systems – surfacing patterns and risks that are currently missed due to fragmented data and overwhelmed systems. He challenged long‑standing assumptions about data‑sharing, noting that many of the most serious failures stem not from inappropriate sharing, but from vital information not being shared at the right time.
His contribution underscored the stakes of the discussion: used badly, AI can magnify harm; used well, it can save lives.

Professor Sir Anthony Finkelstein CBE — Let’s Think Blue‑Sky Before We Scale Down
Professor Sir Anthony Finkelstein CBE, President of City St George’s, University of London, offered a technologist’s perspective on the profound shift represented by generative AI.
Speaking candidly, he described his own surprise at the capabilities of large language models – arguing that even experts failed to anticipate the scale of behavioural change they would bring. AI, he suggested, has crossed a frontier: its tendencies towards approximation and creativity are not flaws, but features that open new ways of thinking and working.
Importantly, Finkelstein challenged the sector to engage now rather than wait for perfect systems or complete certainty. AI adoption is happening at unprecedented speed, often at an individual level, and it will continue regardless of institutional readiness.
He illustrated what near‑term use of AI in children’s social care could look like – from real‑time transcription and multilingual communication to automated workflows and child‑friendly representations of complex information – emphasising that these tools should augment, not replace, professional judgement and human relationships.
At the same time, he was explicit about prerequisites: strong data infrastructure, investment in skills, robust governance, attention to bias and accountability, and a commitment to experimentation. The greater risk, he argued, lies in passivity: failing to engage allows others to define the future for the sector.

Andrew Newman, Matthew Wagner and Kevin Yong — The Reality of Data and Technological Maturity in Children’s Social Care
This presentation grounded the preceding discussion in the practical realities of implementation.
Matthew Wagner (Chief Analyst, Kent County Council) opened by acknowledging the instinctive fear many leaders feel when imagining AI making decisions about children. The task, he argued, is to move from asking whether we are comfortable with AI to defining what we are comfortable with AI doing – and under what conditions.
Andrew Newman (Principal Data Scientist, Open Data Institute) provided a clear framework for understanding AI as a spectrum of capabilities, from narrow, task‑specific tools to autonomous systems. Crucially, greater capability brings greater risk, and responsible implementation depends on making conscious decisions about where on that spectrum specific uses should sit. All AI, he emphasised, depends on strong, well‑governed data infrastructure.
Kevin Yong (Consultant, Coram) translated this into practical use cases already within reach: improving data quality, supporting translations and accessibility, summarising records, mapping relationships, assisting supervision and management, analysing feedback at scale, and helping children and young people access their own records in meaningful ways. These applications focus on reducing friction and cognitive load – not automating professional judgement.
Returning to responsibility and readiness, Newman highlighted findings from the Local Government Association showing that only a small minority of local authorities feel confident in their workforce or data readiness for AI. The most concerning finding was not the experimentation itself, but the mismatch between organisational confidence and workforce preparedness.
The presenters closed by outlining principles for responsible AI in children’s social care: augmentation over replacement, human oversight, transparency, ethical clarity, workforce capability, and design that includes children, families and practitioners. Successful adoption, they argued, is ultimately a leadership challenge rather than a technical one.

Panel Discussion: Workforce, Ethics and Leadership
A wide‑ranging Q&A panel brought speakers together with the audience, surfacing some of the most pressing tensions facing the sector.
Participants raised concerns about the psychological impact of AI on the workforce, including over‑reliance and the erosion of professional judgement. Panel responses emphasised the importance of purposeful design: AI should support thinking, not shortcut it. Imperfect drafts that require professional input were seen as preferable to “perfect” outputs that risk deskilling staff.
Questions also focused on training and responsibility: with only a minority of staff feeling equipped to understand AI, who owns workforce readiness? Panellists agreed this cannot sit solely with individual local authorities and requires coordinated, system‑level leadership.
On governance and accountability, contributors warned against fragmented, secretive experimentation. Greater transparency, shared learning, and national coordination were repeatedly highlighted as essential – both to protect children and to use public resources wisely.
Underlying the discussion was a consistent message: AI intensifies existing leadership challenges rather than replacing them.

Intergenerational Panel: Listening Across Decades and Acting at the Child’s Timescale
Chaired by Renuka Jeyarajah‑Dent (Consultant, Coram), the intergenerational panel brought together lived experience, youth voice, community leadership and system expertise to ask what responsible AI looks like when judged by the realities of children’s and families’ lives today.
Learning from history, without repeating it.
Opening the discussion, John Caldicott reflected on growing up at the Foundling Hospital – “a very regimented” upbringing that offered order and practical care, but little recognition of children’s feelings. He urged today’s professionals to ensure children in care “have better chances for their later life,” and to “please listen to the children, then think about AI.” Dr Aoife O’Higgins (Director of Evidence, Foundations) cautioned that the sector too often assumes progress and neglects lessons from earlier decades; rigorous evaluation – of what works, what doesn’t, for whom, and in what contexts – must be embedded rather than retrospective.
Trust, transparency and co‑production.
From a youth perspective, Olivuh asked how councils can deploy AI ethically when communities may not trust it, pushing for consent, choice and visible safeguards. Jabed Hussain (Associate Director of Business Efficiency & Digital Transformation, Kingston & Richmond Councils) argued that transparency starts with basics like a central register of AI use and must extend to co‑design with young people and families, moving beyond consultation after products are built. Vendors, he said, should explain how models make decisions and who builds them; otherwise “black‑box” systems will erode confidence. Jenny Coles CBE (Child Safeguarding Practice Review Panel; ADCS President 2020/21) reinforced that statutory agencies should treat openness as default: local communities should know where AI is used – across councils, police and schools – and be enabled to help shape it.
Bias: revealing it, not reinforcing it.
Bias featured prominently. Ade Banjoko (Director, Parents Action and Resource Centre CIC) described how disciplinary policies can disproportionately impact Black pupils – down to subjective rules such as “professional hairstyle” – and called for parents and communities to be involved from the start when policies and tools are designed. Hussain added that historic datasets reflect historic bias; councils should re‑read case notes with people with lived experience and test for fairness before models influence decisions.
Language, culture and the limits of the model.
On a lighter but telling note, Olivuh asked whether AI keeps up with Gen‑Z slang. O’Higgins noted that while models can list terms, language is about culture, context and relationships – reminding the room that AI can support understanding but cannot replace the relational work of building trust across generations.
Ideas the sector can act on now.
Panellists converged on several practical proposals:
- Publish what you use. Councils should make AI use visible (e.g., public registers) and explainable, including procurement questions on training data, decision logic and team diversity.
- Co‑design from day one. Move from post‑hoc consultation to co‑production with young people, families and communities, especially those most affected.
- Evaluate impact on children and families. Complement productivity metrics with measures of voice, fairness, comprehension and relational quality.
- Mind the readiness gap. Excitement is high but capability is uneven; off‑the‑shelf tools rarely fit the nuances of children’s services without careful problem definition and local adaptation.
- Build for empowerment. Responding to Claire Noble’s question about identity and adoption, Hussain backed the idea of an AI‑supported resource to help adoptive families integrate a child’s culture into daily life—designed as a safe, 24/7, family‑facing aid.
Care, connection and appropriate emotional support.
Asked whether AI can provide emotional support, O’Higgins said there is emerging promise for low‑level psycho‑educational help, alongside real risks (e.g., harmful guidance and gaps for certain mental‑health conditions). The consensus: AI may signpost and scaffold, but human relationships remain irreplaceable.
One‑word takeaways.
The panel closed with four anchors for responsible adoption: Inclusion (Ade Banjoko), Transparency (Jenny Coles CBE), Evaluation (Aoife O’Higgins) and Co‑design (Jabed Hussain) – with John Caldicott’s reminder ringing loudest: listen to the children first.
Dame Carol Homden — Closing Reflections
In closing the event, Dame Carol Homden (Chief Executive, Coram) drew together the threads of the afternoon, reminding attendees that adaptation is not optional. She highlighted the intergenerational challenges facing society, noting that children now make up a smaller proportion of the population than at any point in history – with profound consequences for voice, trust and resource allocation.
Homden emphasised that AI arrives at a moment of declining trust in institutions and data, and that technological change is as much a behavioural and relational challenge as a technical one. Efficiency alone, she argued, will never meet the scale of need.
Returning repeatedly to children’s voices, she invoked a familiar refrain from young people: “I’ve been telling you for ages – but you didn’t listen, and you didn’t act.” Used well, AI offers the possibility of acting at the child’s timescale – helping services listen once, decide swiftly and respond meaningfully, while remembering that nothing can replace human care, connection and compassion.
She closed by inviting participants to stay engaged with Coram’s ongoing work on digital futures and youth‑led insight, reinforcing that the future may be digital – but it must also remain deeply human.
Conclusion
Insight to Impact: Intersectionality, AI and the Future of Children’s Social Care closed on a shared recognition that technological change is already reshaping the landscape in which children, families and practitioners live and work. Across the afternoon, speakers and panellists emphasised that AI is neither a silver bullet nor an existential threat – it is a powerful set of tools whose value will depend on how intentionally, collaboratively and ethically they are developed and deployed.
What emerged was a striking alignment across generations, disciplines and roles. Leaders in data and technology stressed the importance of infrastructure, readiness and robust evaluation; practitioners highlighted the need for transparency, fairness and safeguards; young people and community advocates reminded the room that trust must be earned, not assumed; and those with lived experience of earlier eras of care urged today’s system to hold history close, learning from what it got right and what it did not.
Throughout the event, one message resurfaced repeatedly: relationships remain at the heart of children’s social care. AI may ease administrative burdens, reveal patterns, support decisions and even offer low‑level emotional scaffolding – but it cannot replace the sense of safety, connection and dignity that only humans can provide. If used well, AI could create more space for the work that matters most: listening, understanding and responding on the child’s timescale.
The call to action is therefore a collective one. Build inclusion into every stage. Make transparency a norm rather than an afterthought. Invest in evaluation and evidence. Co‑design with children, families and communities, not around them. Above all, approach innovation with the humility and courage to recognise that the future of children’s social care will be shaped not only by what is technologically possible, but by what is morally and relationally essential.
As the panel’s closing words reminded us: listen to the children first.