Research Summary – ‘A double-edged sword’: Potential uses of AI in the Children’s Hearings System
Introduction
We have defined AI as: ‘Technologies that can learn from and respond to the information they receive. To do this, they use algorithms, which are sets of rules or instructions that have been written by humans’. This definition aligns with OECD and Scottish AI Alliance definitions.
Several international and national guidelines and frameworks have been produced in recent years with the aim of ensuring that AI is developed and implemented ethically and responsibly. However, some argue that these guidelines have little practical impact, and children are rarely explicitly considered in them.
There are ethical and practical issues to consider when deciding whether and how to use AI in the public sector, including around transparency, bias, accuracy, human oversight, and privacy and consent.
Although AI is increasingly being used in the public sector, and public awareness of AI has increased in recent years, in-depth understanding of AI does not appear to be commonplace.
Research with children and adults has found that although people can see the potential benefits of AI, they are often concerned about data protection, online safety, inaccuracy and misinformation, decision-making, human relationships, wider quality of life, inequalities, and real-world uses.
Any decision to use AI at SCRA should be well thought through and based on good evidence, which is why we carried out this research project.
Methods
This study aims to support SCRA’s decision making around AI by exploring perceptions of the ethical, legal and rights-based issues around the potential uses of artificial intelligence (AI) within the Children’s Hearings System.
We used workshop-style focus groups to explore participants’ views about the use of AI technology, how AI affects their life, what they think the benefits and risks of using these technologies might be for society, and how AI could be used within the Children’s Hearings System. We included educational and interactive elements to support participants to build their knowledge and confidence.
163 people participated, across 29 workshops. Participants included employees of SCRA and Children’s Hearings Scotland; Children’s Panel Members; advocacy workers, safeguarders and employees of organisations advocating for children, young people and families; social workers; solicitors and legal organisations; children and young people (aged 12+); parents/carers; and other professionals.
We used thematic analysis to analyse the data.
Findings
Knowledge, uses and perceptions of AI
Most participants brought some knowledge of AI to the workshops; few described themselves as experts, and few said they knew nothing. Awareness and understanding of AI tended to increase as the workshop progressed.
Participants generally used AI more than they were initially aware of and were often surprised by the prevalence of AI integrated within the apps and websites they used day-to-day.
There was a common perception among adults that young people liked AI and used it more than adults. Young people’s views contrasted with this, with young people explaining that they did use AI but mainly for fun. Young people told us that they would generally not use it for anything they saw as important such as writing essays or applying for jobs, and not for anything that would normally involve direct contact with a human being.
Participants pointed out that AI systems reflect, and potentially exacerbate, the inequalities, biases and harms that exist in the human worlds they operate within. Many could see that AI could in theory be used for social good, but were sceptical about how likely this was in our current profit-driven society, where AI is often used to sell products or further political agendas rather than to support or protect children and young people.
Participants across all participant groups highlighted that there are aspects of being human that cannot be replaced by AI: emotion, values and intuition; human intelligence and development; and relationships and relational practice. Participants, especially young people and parents/carers, very strongly emphasised the importance of human connection and relationships.
AI’s impact on children and young people
Participants often highlighted the potential for AI to support the inclusion and participation of children and young people in general.
In relation to the Children’s Hearings System, participants were unanimously in favour of improving children’s participation and inclusion opportunities, but they did not always agree with each other about whether and how AI should be used to do so.
AI could be used to improve participation by supporting the gathering of information, helping young people to share their views, helping to frame information in a more child friendly way, and making information more accessible. However, participants questioned why these types of tasks would be carried out by an AI system rather than a human. They highlighted the importance of human involvement in AI tasks to ensure that children and young people and their families were adequately supported, because people need to be responded to as individuals with different needs.
Participants often highlighted the tension between supporting children’s access to information, inclusion and participation, and keeping them safe. Across all groups, participants expressed concerns about the potentially negative impacts of AI on the safety of children and young people, by exposing children and young people to online sexual abuse or otherwise harmful images or videos; creating deepfake photos or videos from children’s online images; supporting online grooming; and helping adults identify young people online and then abuse them in ‘real life’.
Concerns were also raised about potential impacts on children’s brain development and how meaningful AI-supported learning is long-term. Children and young people were clear that they did not see AI as a good tool for supporting meaningful learning or development.
There were also wider wellbeing concerns, including the potential impacts on children’s ability to form and maintain human relationships, the ‘addictive’ nature of algorithm-driven social media feeds, and longer-term impacts on the labour market and subsequent quality of life.
There was a strong message throughout the workshops that any use of AI within society should prioritise children’s best interests. Participants highlighted that any AI systems introduced to the Children’s Hearings System should have benefits for children and young people and should not solely be introduced as a response to challenging financial circumstances.
Participants sometimes expressed concern that AI outputs would not be adequately trauma informed. They emphasised the importance of meaningful human involvement in reviewing any AI-generated outputs.
Benefits and risks of AI use
Participants were given five hypothetical scenarios to help them consider the benefits and risks of AI use in the Children’s Hearings System:
- Using AI to scan case files to see how many children and young people have ADHD
- Using AI to summarise multiple reports to make a child-friendly summary
- Using AI profiling to detect risk of Childhood Sexual Exploitation (CSE)
- Using AI to redact sensitive information in hearing papers
- Using AI to fill in forms automatically from police reports
Discussions highlighted the nuanced and complex impacts of AI, with participants almost always describing positive and negative impacts for every use of AI that was discussed.
Participants raised several potential benefits of using AI in the Children’s Hearings System, including: efficiency; access to information; accessibility; the scale and scope of data it can use; the ability to count/analyse; and consistency.
Alongside these potential benefits, participants raised several concerns and emphasised the potential risks and unintended consequences involved. These risks included: increased workloads and inefficiencies, often due to the time taken to carry out human checks and fix any mistakes; inaccuracies; AI tools being unable to understand nuance and the potential for exacerbating trauma through AI-generated outputs; concerns around data protection and privacy, and meaningful informed consent; impacts on human decision-making, including decisions being made based upon incomplete or inaccurate data; and a lack of transparency around how decisions are made.
Participants across all participant groups cautioned strongly against the use of AI for anything replacing human interaction or making decisions and highlighted the importance of human checking and accountability. Some participants were more positive about the idea of using AI for administrative tasks.
Moving forward
Although there was a general sense of inevitability about AI use within the Children’s Hearings System, participants were not wholly positive about it and they often appeared resigned to the idea rather than excited about it.
A minority were positive about the idea of using AI within the Children’s Hearings System and were optimistic and excited about the opportunities the technology could bring, but these participants still emphasised that safeguards would need to be in place for AI use to be safe and acceptable.
Participants often said that AI technology was a ‘sticking plaster’ for much wider, systemic issues resulting from over-worked and underfunded public sector departments. This led some to question whether AI was really the solution, or whether these problems should instead be addressed at source.
There was a strong consensus that any work carried out by AI within the Children’s Hearings System should always include built-in elements of human intervention. Human checking and oversight were described as necessary across all the potential functions of AI discussed, including in administrative tasks. Crucially, participants were clear that any decision-making affecting children and young people and their families should always be the responsibility of humans and should not be left to AI.
Participants consistently pointed out that any AI use in the Children’s Hearings System would need to be implemented with safeguards. The strongest of these was the need for humans to be involved throughout the entire process, including through decision-making, checking, relational work, and accountability. This was not only about avoiding inaccuracies and misunderstandings, but also about ensuring that children and young people and their families understood what was happening, being trauma informed, and improving participation.
Other safeguards related to regulation, implementation and governance. Participants stressed the importance of careful planning, training and testing before AI systems were implemented. They also highlighted that issues around: consent, transparency and proportionality; data protection and privacy; and challenge, ownership and accountability should be carefully considered at this stage.
Discussion
The findings of this research project align with a wide range of previous evidence which has highlighted that although people can see the potential benefits of AI, they are often concerned about data protection, online safety, inaccuracy and misinformation, decision-making, human relationships, wider quality of life, inequalities, and real-world uses.
The need to monitor children’s safety and rights is particularly strong in the Children’s Hearings System, where children, young people and families may already be experiencing multiple adversities.
Participants often expressed discomfort about AI being involved at all in the complex, high-stakes decision making undertaken by those working in the Children’s Hearings System, even with human intervention.
Based on the findings of this study, we recommend that if AI tools are used in the Children’s Hearings System, they should not be used for anything replacing human interaction or making decisions, should only be used where necessary, and should not be used to fix structural problems whose root causes require addressing.
If AI tools are used to support administrative tasks, each tool should be subject to a thorough cost/benefit analysis; strict privacy protocols; meaningful collaboration with those who will be using or affected by the tool; transparency around how and why the tool will be used and managed; and ongoing planning, regulation and monitoring.
When thinking about AI in general, participants emphasised that transparency, bias, accuracy, human oversight, and privacy and consent were key ethical and practical considerations. They were concerned about the potential impacts of AI on children and young people and were unanimously clear that human connection and relationships are crucial and should not be replaced by AI. These findings align with Scotland’s AI strategy’s call for AI to be ethical, transparent and responsible.