"It's kind of weird talking to a sphere": How do children interact with social robots?


Children participating in the TICTAC study

A recent study conducted by academics within the TICTAC (Technology, Innovation, Community and Children) research group at Lancaster’s School of Computing and Communications (SCC) reveals the importance of appearance and design when building robots intended for children’s education and wellbeing.

PhD student Gail Collyer-Hoar and Dr Elisa Rubegni of the TICTAC research group at SCC, alongside Dr Laura Malinverni and Professor Jason Yip of the Universities of Barcelona and Washington respectively, have recently conducted a ground-breaking study on children’s perceptions of interactive social robots, with a view to informing the future development and design of such devices.

Social robots – robots with an embodied form that have the ability to communicate with and respond to human commands – are becoming an increasingly common feature of homes and classrooms, used as tools to help foster children’s learning and development on both a social and educational level. However, as children are a particularly vulnerable and impressionable demographic, great care needs to be taken not just with the capabilities of these robots, but also with their physical appearance. How these robots look – be it human-like, animal-esque, or simply a “talking sphere” – has the potential to influence how children interact with and perceive them, for better or worse.

The Lancaster-led team therefore conducted a study with 36 children aged between 9 and 11, presenting them with images of three different robot designs: a human-like robot named “Gilo”, a rabbit-like robot called “Miro”, and finally, “Sphero” – the talking sphere. The children were then presented with scenarios of children interacting with the robots in different ways, including talking to the robot about their problems, playing football with it, or simply reading with it, and were asked to rate how they felt about each scenario. Afterwards, they were asked to reflect as a group on what they liked and disliked about each scenario, how they would feel if they were interacting with the robot, and what they would change about the robot or the situation.

The team discovered that the robot’s physical form hugely influenced the children’s perceptions of it in each situation, shaping what they believed the robot would be capable of, in both positive and negative respects. The human-like Gilo, for example, was seen as more intelligent and better at performing physical tasks (such as playing football), though a number of the children voiced fears of it becoming violent or “turning on them”. Likewise, “Sphero” was perceived as more knowledgeable, yet less trustworthy due to its non-human form, whilst “Miro” was seen as friendlier and more approachable, yet less intelligent due to its animalistic characteristics. The full paper was presented at the Designing Interactive Systems Conference, which took place in Copenhagen in July 2024.

On the findings of the study, Gail Collyer-Hoar commented: "Our research shows that children don’t necessarily see robots as homogeneous gadgets; they form strong opinions based on how these robots are designed. Whether it’s a robot that resembles a person, an animal, or even a strange, abstractly-shaped sphere, each design sparks unique hopes and fears, sometimes even altering what they believe the robot is capable of doing. Whilst we, as adults, may understand that the shape is irrelevant and that it’s the programming that matters, our work offers a nice glimpse into how children perceive what we consider obvious, and highlights how we, as designers and researchers, need to rethink how we design products to account for children's perspectives.”

The team hope to develop their study by expanding into the realm of generative AI. Gail remarked: “Having observed how tangible robotics influence children's perceptions, we want to investigate what findings might emerge from technology that has no physical form, and what impact this could have. Since AI holds great promise, yet also significant potential for harm, we hope to guide the development of future ethical technologies that are not only functional but also help to ensure positive outcomes for such a vulnerable population."
