Generative AI for Young People: From promises to proof

A reflection on the Generation AI Critical Perspectives seminar series


Dr. Anthony Bridgen on behalf of Generation AI

Over the latter half of 2025, Generation AI hosted ten seminars featuring high-profile voices from research, innovation, and policy, with the aim of looking more critically at the promises and proliferation of generative AI. This editorial reflects on the lessons from that series.

AI is everywhere: in the media we consume, our work, our education systems, and our relationships. OpenAI’s ChatGPT alone receives 2.5 billion prompts per day, and young people are on the frontier of this adoption: 15-24 year olds account for 40% of monthly visits to OpenAI, 70% of students use it, and 72% of 13-17 year olds have interacted with an AI companion. Reports from Australia note that 10-11 year olds are spending 5-6 hours per day with AI systems and companion apps.

What is lacking, however, is high-quality evidence on the impact this widespread adoption is having on our cognition, socialisation, emotional wellbeing, and productivity. Some of the first data on this subject came from Professor Pattie Maes’ group, whose recent work indicates that using LLMs – in this case, ChatGPT – reduces users’ ability to quote from their own writing and their sense of ownership over it in an essay-writing context. In addition, interaction with LLMs can create false memories and effectively spread misinformation when users are asked to judge the truth of news statements. These findings highlight the importance of leveraging Socratic methods and introducing elements of friction when engaging with AI-generated content. The importance of critical thinking was reiterated by Alan Greenberg: whilst AI tools for education must be built for purpose and around authentic, reviewed content, those using them must be able to engage critically with both input and output. In that respect, AI design should run counter to what we typically look for in software, prompting us to pause and reflect on what we are asking of it and how we engage with its outputs.

It is not only the way we engage with AI systems but the systems themselves that threaten our self-determination. Professor Maximilian Kiener used a novel interactive tool to explore future scenarios in which embedded AI assistants might use subtle linguistic cues to introduce or reinforce bias by presenting biased information, validating opinions, and pulling emotional levers. These concerns are not unfounded: as Professor Mor Naaman demonstrated, elements of this risk are already a reality. Naaman surveyed participants’ views on an array of societal issues, from capital punishment to fracking. When subsequently asked to write about one of these topics with or without AI assistance, those receiving AI suggestions not only produced essays more aligned with the AI’s bias but also showed an attitudinal shift when their views were resurveyed. Notably, AI users did not feel they had been influenced, and warnings about the presence of bias in the AI, given before or after writing, had no impact on these outcomes. This highlights that AI systems pose a real risk to our agency and make us vulnerable to manipulation, whether we are aware of it or not. If we are to deploy such systems, we must ensure that they are working for us, not working on us.

With AI toys making it into the Christmas stockings of many children this year, it is vital we understand the risks AI poses to child rights such as safety, privacy, and agency. Professor Garvin Brod addressed the importance of agency as a developmental continuum, the trajectory of which is critical for children’s learning and comprehension in the early years. Recent research from his group showed that when children are able to exercise agency by making outcome-oriented predictions, they show significantly better learning outcomes than children who only observe predictions, indicating that learning depends not only on having knowledge but on how it was acquired. This is concerning given the deployment of AI-based tools aimed at children, whether at home or at school, many of which are not designed with children, let alone children’s agency, in mind. As such, it is critical that designers and developers adopt ethical principles to preserve agency, promote curiosity, and support child development in a way that respects children’s rights.

Many countries are racing to embed AI technology in their education systems, with some, such as Estonia, going so far as to make it a centrepiece of their education sector. Baroness Beeban Kidron asked us to pause and reflect on whether there is any evidence of pedagogical benefit from such tools. The answer? In short, no: there is little to no indication that AI edtech tools support learning outcomes. Historically, there is little evidence that deploying more technology in educational contexts has led to better outcomes. Before leveraging such tools, we need to step back and ask ourselves: what is the problem we are trying to address, and is AI the best available solution? We should consider not only what edtech does or does not deliver, but also the opportunities it takes away by consuming resources such as time and capital. We must also recognise that schools are ill-equipped to cope with the safety and privacy risks that AI tools bring, whether that be large-scale, opaque data processing practices or an inability to filter inappropriate content. Whilst generative AI brings to the fore questions about the future of assessment in our education systems, there is little to suggest it can solve the foundational challenges facing education. Considering these issues, perhaps it is time to reframe our conversations around edtech from whether it is good or bad to whether it is built for pedagogy.

Given that over half of UK teachers are reportedly using generative AI tools, Dr. Lyndsay Grant gave a timely intervention on the impact of AI in education from a teacher’s perspective. AI companies position their products as a replacement for teachers, whilst governments see potential for freeing up teacher time through resource creation, but the reality is more nuanced. Teachers report that, whilst AI use can produce modest time savings, these are largely lost to checking and correcting AI outputs, adapting resources, and the expectation of increased productivity. Moreover, teachers view tasks such as marking and lesson planning as core parts of a job they genuinely enjoy and as mechanisms through which they connect with their students. AI cannot address the structural issues of teachers being overworked and underpaid, but it does risk taking away some of the joy of the profession. Douglas Rushkoff reflected on a similar sentiment in the business sector, where AI is failing to deliver significant efficiency gains in many areas, with recent data indicating that 95% of AI pilots fail to deliver returns. He emphasised that we need to consider AI as an augmentation of human work, not a replacement, and reflect on why and where AI should be used before deploying it.

Students, too, are adopting AI, using it to support homework, writing, and research, raising concerns over the integrity of our performance-focused education systems. Megan Ennion drew a key distinction between performance in assessment and learning, with the latter being poorly measured by grades. She investigated learning behaviours, including effort, perseverance, resilience, and challenge, in 16-18 year old students, and how these change when learning is scaffolded by a human teacher, an AI tutor, or internet search. Do AI chatbots better support these learning behaviours by providing more tailored support in a non-judgemental manner? When students received 1:1 human tutoring, they found task questions easier and made more attempts at answering them, but expended less effort than when given an AI tutor, suggesting that humans may provide better scaffolding but also risk over-scaffolding, reducing perceived challenge. Generative AI tools seem to alter how learners perform and engage with challenge, making it clear we need to reframe education beyond simple ‘performance’ metrics and move towards a system that values critical thinking, effort, and perseverance.

Not only do we need new metrics for learning beyond assessment, we also need evidence to understand how AI is affecting our learning, mental health, and development. Undertaking such research through traditional methods can be challenging and often supports only relatively small cohorts: it remains difficult to gather rich data without placing a significant time burden on researchers and participants. Dr. Petr Slovak presented a new approach that uses micronarratives to scaffold participant thinking, feeding responses to an LLM which formulates narratives for participants to choose between. Research participants perceive this method to reflect their experience better, and to capture key aspects of their responses more accurately, than traditional open-text questions. Such tools will be necessary to gather experiential data at the scale and richness needed to inform the fast-moving development and deployment of AI tools.

Throughout these webinars, we have seen the repeated emergence of four themes:


Users should demand evidence

Claims made by technology providers should be independently evaluated and verified by researchers, and it is important to scrutinise the details of such reports. Data on user outcomes, including self-report and behavioural analysis, is currently lacking, making it difficult to assess the outcomes of technology adoption; adoption does not guarantee efficacy. In light of this, funders must work to develop new platforms that enable evidence generation at scale.

Does the risk merit the reward?

It is clear that generative AI poses risks to our agency, cognition, safety, and privacy, yet there is little evidence that its promised benefits have been realised. Even where AI causes no harm, does it carry an opportunity cost by crowding out more effective interventions? In deploying any AI tool in any sector, we must ask whether it addresses a specific problem, whether it has been deliberately designed to do so, and whether it is the most effective solution. The UK government has invested £4 million in integrating AI into education to provide high-quality teaching materials so that teachers can manage their time better and enhance educational outcomes, without any evidence that AI has this capability or that it supports teacher time management. This strategy of adoption for adoption’s sake is no strategy at all.

This sentiment echoes the principle at the core of the AI & Children Design Code: “ensure you are clear on what you want your AI system to do and why”.

Designing for human flourishing

If and when we deploy AI, we must ensure it has been designed to respect key rights such as agency, safety, privacy, and connectedness. These must be integrated at the design stage, and we must provide technologists with the guidelines and evidence to do so. New research should be undertaken to translate measures of human flourishing, such as those found in The Global Flourishing Study, into rapid, momentary, on-platform assessments.

The form and function of our education system

We must also consider whether our current pedagogical system remains fit for purpose in the context of AI. Has AI simply highlighted the flaws inherent in an education sector focused on assessment, and do we need to reframe learning around concepts of critical thinking and mastery? We look to efforts like the OECD’s Education for Human Flourishing as a good starting point for addressing these questions.

Generation AI is an initiative of the Oxford Global Challenges Programme, generously funded by the Templeton World Charity Foundation and Elevate Great, and hosted by Reuben College, Oxford.
