New research report exploring AI in the Children’s Hearings System
AI means ‘artificial intelligence’. AI can learn from and respond to the information it receives. To do this, it uses sets of rules or instructions that have been written by humans.
The use of AI is becoming much more common. It is used in video games, online search engines, chatbots and social media photo tagging. AI is also used by public services like local councils, the NHS and the police. Within these organisations, AI is being used to automate workflows, make predictions around risk, manage large volumes of complex data, support decision making, improve safety, and support the care of individuals. A range of AI technology is being used, including machine learning, natural language processing, data matching, biometrics and profiling.
SCRA has been thinking about using AI in the Children’s Hearings System. It could help with administrative tasks, like inputting information into databases; screening emails, then sending them to the right person; removing/redacting sensitive information from reports; transcribing meetings; and arranging times for hearings. AI could also be used to help us analyse information.
If SCRA decided to use AI, all of the written information we hold would be scanned using computer software that can turn written documents into fully searchable digital documents. We would then develop, train and test the AI to perform the task that we wanted it to do. By getting AI to do some of these tasks, SCRA staff could have more time to support children, young people and families coming to Hearings.
Any decision to use AI at SCRA should be well thought through and based on good evidence, which is why we carried out this research project.
We used workshop-style focus groups to explore participants’ views about the use of AI technology, how AI affects their life, what they think the benefits and risks of using these technologies might be for society, and how AI could be used within the Children’s Hearings System. We included educational and interactive elements to support participants to build their knowledge and confidence.
A total of 163 people participated, across 29 workshops. Participants included employees of SCRA and Children’s Hearings Scotland; Children’s Panel Members; advocacy workers, safeguarders and employees of organisations advocating for children, young people and families; social workers; solicitors and legal organisations; children and young people (aged 12+); parents/carers; and other professionals.
The findings of this research project align with a wide range of previous evidence which has highlighted that although people can see the potential benefits of AI, they are often concerned about data protection, online safety, inaccuracy and misinformation, decision-making, human relationships, wider quality of life, inequalities, and real-world uses.
When thinking about AI in general, our participants emphasised that transparency, bias, accuracy, human oversight, and privacy and consent were key ethical and practical considerations. They were concerned about the potential impacts of AI on children and young people and were unanimously clear that human connection and relationships are crucial and should not be replaced by AI. These findings align with Scotland’s AI strategy’s call for AI to be ethical, transparent and responsible.
The need to monitor children’s safety and rights is particularly strong in the Children’s Hearings System, where children, young people and families may already be experiencing multiple adversities. Participants often expressed discomfort about AI being involved at all in the complex, high-stakes decision making undertaken by those working in the Children’s Hearings System, even with human intervention.
Based on the findings of this study, we recommend that if AI tools are used in the Children’s Hearings System, they should not replace human interaction or decision making, should only be used where necessary, and should not be used to fix structural problems whose root causes require addressing. For each tool used to support administrative tasks, there should be a thorough cost/benefit analysis; strict privacy protocols; meaningful collaboration with those who will use or be affected by the tool; transparency around how and why the tool will be used and managed; and ongoing planning, regulation and monitoring.
You can read the research report in full, or an executive summary version. We have also produced a visual version and a plain text version of the report for children and young people.
In addition to this research project, SCRA is currently exploring the use of two AI tools: one to support the redaction of sensitive information from reports, and one to aid Children’s Reporters when creating witness statements. More information about these can be found in the Scottish AI Register.