AI in the Children’s Hearings System

Why did we do this project?

AI means ‘artificial intelligence’. AI can learn from and respond to the information it receives. To do this, it uses sets of rules or instructions that have been written by humans.

The use of AI is becoming much more common. It is used in video games, online search engines, chatbots and social media photo tagging. AI is also used by public services like local councils, the NHS and the police. It can be used for lots of things, including saving time, making information easier to read or access, managing information and counting things, and helping people make decisions.

The Scottish Children’s Reporter Administration (SCRA) is thinking about using AI in the Children’s Hearings System.

The Children’s Hearings System is part of the child protection and youth justice system in Scotland. It makes decisions about how to help children if their parents/carers need help caring for them, if they are getting into trouble with the police, if they are being abused, if they are taking drugs or alcohol, or if they are not going to school.

We know from research about AI that children and adults are concerned about:

  • Online safety,
  • AI making mistakes,
  • Keeping people’s information safe and private,
  • What might happen if AI makes decisions,
  • How AI could change human relationships,
  • Changes to jobs and work,
  • Impacts on the environment,
  • Whether AI could make things less fair and equal,
  • How big companies use AI.

We wanted to understand whether children and adults had the same or different concerns about using AI within the Children’s Hearings System. To help us understand what people think, we carried out this research project.


What did we do?

We asked people what they thought about:

  • the use of AI,
  • how AI affects their life,
  • what they think the benefits and risks of using AI are,
  • how AI could be used within the Children’s Hearings System.

Each person came to a 3-hour workshop with other people. In total, 163 people took part across 29 workshops. The people who took part were:

  • People who work in the Children’s Hearings System,
  • Children’s Panel Members,
  • Advocacy workers and safeguarders,
  • Social workers,
  • Legal professionals,
  • Other professionals,
  • Children and young people (aged 12+),
  • Parents/carers.

What did we find out?

Knowledge, uses and perceptions of AI

Most people knew a bit about AI, and most learnt more about it during the workshop.

People were often surprised by how much AI was in the apps and websites they already used.

Adults often thought that young people liked and used AI more than adults did. Young people said they mainly used AI for fun and not for anything important.

People said that the world is sometimes unfair and unequal. They did not often think AI would fix these problems. Often they thought using AI might make things less fair.

People said that when children and families need help, having a person to talk to is very important. They did not want the people who could help them replaced by AI. Young people and parents/carers were very clear about this.


AI’s impact on children and young people

People often said AI might be able to help children and young people to be included and take part in things.

It can be scary for children and young people to share their views in hearings. Everyone said the Children’s Hearings System could be better at helping children and young people take part. People didn’t always agree with each other about whether and how AI should be used for this.

AI could be used to:

  • Help to gather information,
  • Help young people to share their views,
  • Help to make information and reports easier to read or access.

But people often said these things could just be done by a human. They also said it was important for humans to stay involved in these things, so that people were treated as individuals and had their needs met.

People were worried that AI could make children and young people unsafe by:

  • exposing them to online abuse or harmful images or videos,
  • creating fake photos or videos from their online images,
  • leading to ‘real life’ abuse.

People were also concerned about impacts on children’s learning. Children and young people did not see AI as helpful for supporting real learning.

There was a strong message that any AI should put children’s best interests first. Any AI in the Children’s Hearings System should make things better for children and young people and should not just be about saving money.


Benefits and risks of AI use

People said there were good and bad sides to any uses of AI. Good things about using AI in the Children’s Hearings System could be:

  • It might save humans time that could be used to do other work.
  • It might make information easier to read or access.
  • It could use lots of data to see the ‘bigger picture’ and count things.
  • It might be better than humans at carrying out some tasks, like counting.

People also said there were risks from using AI in the Children’s Hearings System:

  • It might make mistakes.
  • It might mean people have more work or take longer to do their work. This is mainly because of how long it might take to check the work done by an AI and fix any mistakes.
  • It might use words that upset people.
  • It might not keep people’s information safe and private.
  • Because people often don’t really understand AI, it might be hard for them to say whether it’s ok for SCRA to use it with their information.
  • It might affect how humans make decisions.
  • People might not be able to tell how decisions about them have been made.

People strongly said they did not want AI to replace human contact or to make decisions. Some people said it might be ok to use AI for some things like counting, if humans check it.


Moving forward

Most people thought AI would probably end up being used in the Children’s Hearings System. Some people were happy about this, but they still said it was important to be careful.

People often said that there were bigger problems, like government departments not having enough money. They usually said that AI could not solve these problems.

People said that to keep children and young people and their families safe:

  • Humans should always check AI work.
  • Humans should be in charge.
  • AI should not make decisions about children and young people and their families.
  • Humans, not AI, should speak to children and young people and their families.
  • Any AI should be carefully planned, trained and tested before it’s used.
  • Before each new type of AI is used, SCRA should think carefully about how to keep people and their information safe.

What does this mean for the Children’s Hearings System?

The findings of this research project agree with other research on AI. People are concerned about:

  • Online safety,
  • AI making mistakes or being misleading,
  • Keeping people’s information safe and private,
  • What might happen if AI makes decisions,
  • How AI could change human relationships,
  • Possible impacts on jobs and the environment,
  • Whether AI could make the world less fair and equal,
  • How big companies would use AI.

In the Children’s Hearings System, we need to be even more careful about these issues. This is because the children, young people and families who come to us are already having a hard time.

Based on the findings of our research, we think that any AI used in the Children’s Hearings System:

  • Should not be used to replace human contact.
  • Should not make decisions.
  • Should only be used where it’s needed.
  • Should not be used to try to fix bigger problems that need other solutions.

If AI is used to help with some things like counting or some types of paperwork, then:

  • SCRA should first think carefully about the good and bad things that could happen.
  • People’s data must be kept private.
  • SCRA should work with people who will be using the AI to check it works properly.
  • It should be clear to everyone how and why AI will be used and managed.
  • SCRA should keep checking that the AI is working ok and following these rules.

This should happen for every type of AI that is used.

You can also read the full research report.
