Winning the Arms Race Between Kids and Tech

I knew we were in trouble one Christmas when our 7-year-old son hacked his uncle’s phone and acquired a big stash of Minecraft Minecoins. The subsequent investigation revealed that the trusting uncle had divulged his date of birth to the boy and, hours later, handed over his phone. A classic mistake in cybersecurity. Since that warning shot, we have tried everything to navigate our children’s online life. Video game incentive systems. Daily time limits and content restrictions. Cold-turkey tech holidays.

Such is the arms race unfolding between technology and families. Media narratives are dominated by concern about the creation of an “Anxious Generation”, and the great tech debate is one that affects every family in every country.

In May, I hosted a workshop at Reuben College in Oxford with fifty leading experts on artificial intelligence, mental health, and child development to help chart a path forward.

In the past decade, AI systems have surpassed human performance on specific tasks: image classification in 2015, basic reading comprehension in 2017, visual reasoning in 2020, and natural language inference in 2021.

All of this has fuelled the generative AI bonanza: technology that creates passable human-level content on an enormous scale. It is estimated that as much as 57% of the content on the internet has been machine-generated or machine-translated. These systems are becoming more flexible. Voice becomes video, brain scans become stories, and text prompts become 3D-printed materials. The next frontier will move from creation to interaction, as systems break down complex tasks into smaller, manageable chunks and solve them without supervision. One might use the same system to “book the cheapest direct flight from London to Washington before Christmas” as well as to “turn off the oven.” These are steps towards a more general intelligence.

But today’s major generative AI platforms are explicitly not for children, a huge issue given that 50% of UK school children routinely use generative AI tools. Moreover, digital technologies are already embedded in the daily life of children. In the UK, one-quarter of 5-7 year-olds own a smartphone and one-third report using social media independently. Given these developments, it is likely that children will grow and learn in a world where AI agents may be as convincing, capable, and common as humans.

A key challenge for families is figuring out how to navigate between two seemingly bad scenarios. Should we all follow the example of Ned Ludd, the legendary weaver whose followers broke the textile machines in Nottingham? Or should we embrace the techno-utopians whose mantra seems to be ‘Save The Everything. Click Here?’ 

We need a middle way.  

Consider the evolution of nutritional guidance. In the 1970s, national guidelines in many countries were primarily concerned with limits and restrictions on food components such as salt, sugar, and fat. Since then, expert consensus and national policy have focused on reducing excessive intake of certain food components while increasing consumption of foods that are more nutritious. Think of the Eatwell Guide images that stress diversity of diets. Instead of blanket bans or automatic restrictions, we need a balanced technology diet.

Such an approach recognizes that each child is different. The same technology platform can produce different effects on different children depending on their circumstances and usage patterns. For example, researchers have found substantial differences between active and passive users of social media platforms. Chatting, posting, and responding by active users may increase social capital and connectedness, while the lurking or browsing behaviour of passive users may promote anxiety and decrease wellbeing.

In the quest for a more balanced technology diet, we need to understand what kinds of positive features (what nutrients) we should cultivate through technology use. As a starting point, here are three:

Relationships. Authentic relationships require give-and-take. We process social cues and infer another’s intentions from behaviour. Yet the current online world is not set up for this. Digital social networks tend to elevate our social uncertainty. Is the person I just messaged angry at me or are they just busy? This uncertainty makes maintaining human connections harder.  

It is possible to build AI systems that actually do help grease the wheels of friendship. To achieve this, technology must go beyond optimising for user engagement or shares. Instead, software should consider relational metrics, for example the strength of the caregiver-child bond and peer-to-peer engagement.

Tandem is a start-up taking this approach. Its co-reading app allows children and adults to generate stories together and to build conversation and connection around them. The relational metrics it monitors include how many times the reader role switches within a given story and how much of the resulting conversation extends the story in new directions.

Agency. Many psychologists prize agency as an advanced and highly desirable level of personal competence. Agency is being active, engaged, and choiceful, as opposed to passive and alienated. Popular parental control schemes stand out in their suppression of agency. They generate emotional tones of anxiety, frustration, and confusion. I am excited to see growing activity in this space amongst technology start-ups such as Screenable, whose motto is “self-control, not parental control.”

Imagination. GenAI tools offer access to a whole new universe of means of expression. There are good reasons to believe that children may be more expressive and willing to take creative risks with computer-based systems, perhaps because they feel less pressure to perform, and less shame or judgement, with a computer than with an adult.

AI tools that enhance relationships, agency, and imagination will be a transformative step in the right direction. But much more is needed. Researchers need to do better science. They need the resources to conduct prospective cohort studies of technology use over time, tracking the many ways young people use these platforms. Such studies could enable personalised guidance to identify risk factors and appropriate interventions. Policymakers need access to the best research, available in near real-time, synthesised for a wide variety of audiences. Designers and entrepreneurs need patient capital and market systems to scale the right kinds of technologies. And schools, churches, and communities need to focus on creating new norms and positive cultures of technology use. Navigating our life with AI is not a single player game; it is a team sport.  

Andrew Serazin is a Senior Research Fellow at Reuben College, University of Oxford where he directs the Global Challenges Programme.  
