Preparing for Generation Beta
Are We Going to Be Raising Researchers or Test Subjects?
I have been listening to Jean Twenge’s book Generations, and it has made me aware of something that I cannot quite stop thinking about. We spend a great deal of time talking about Generation Alpha, about screens and technology and attention and anxiety and education and artificial intelligence, but there is another generation already forming behind them. Generation Beta does not exist yet as a cultural identity, but they exist in practice. They exist in our classrooms, in our parenting, in our policies, in our technology adoption, in our school systems, and in the decisions we are making right now without fully understanding the long-term consequences.
We are forming Generation Beta now, and the uncomfortable truth is that we may be forming them as a kind of test. When we talk about beta testing in technology, we are talking about testing theories, testing systems, testing interactions, testing how humans respond to something new. Beta testing means the product is not finished. It means the product is still being adjusted. It means mistakes are expected. It means failures are expected. It means data is being collected so the system can be improved. But beta testing usually happens with software, not with children. And yet it is very hard to look at what is happening in education right now and not see an enormous, uncontrolled beta test taking place across an entire generation of human beings.
Artificial intelligence is entering schools quickly, unevenly, and often without a coherent philosophy. Some schools are banning it, some are embracing it, some are partnering with multiple companies, some are using one platform exclusively, some are using it for lesson planning, some for grading, some for student writing, some for tutoring, some for data analysis.
Across districts, across states, across countries, there are thousands of different approaches happening simultaneously. If we step back and look at this historically, we will probably see this period as a massive educational experiment conducted in real time on millions of children who did not consent to be part of the experiment and who have very little power in how it unfolds, with the technology companies involved, wittingly or not, treating our precious and unsuspecting children as their test subjects.
Artificial intelligence is something that produces results very quickly and at scale. That is part of its power and part of its danger. When something works, it works at scale. When something fails, it fails at scale. If we implement artificial intelligence across public education systems and it produces more compliance but less agency, we will not see that slowly over a century. We will see it quickly across an entire generation.
If we allow multiple technology companies into schools without clear boundaries, we should not be surprised when branding appears everywhere, when funding flows in and then flows back out through software contracts, when children grow up believing there is only one artificial intelligence because they have only ever used one platform and were never taught to compare, evaluate, and research technologies critically.
That is what beta testing looks like in a commercial environment, and we have to ask ourselves whether we are comfortable with children occupying that role in society.
What worries me most is not that artificial intelligence will be used in education. Artificial intelligence will be used in education. That is already decided. What worries me is the difference between two kinds of children who may emerge from this period. There will be children who were test subjects, and there will be children who were researchers. The test subjects will experience artificial intelligence as something done to them. Systems will be chosen for them, platforms will be chosen for them, workflows will be designed for them, and they will learn to operate inside systems that they did not design and do not fully understand. They may become very efficient. They may become very compliant. They may become very productive. But they may not become agentic.
The children who are treated as researchers, as they are in Montessori classrooms, will experience artificial intelligence differently. They will study it. They will compare different systems. They will test outputs. They will question results. They will understand bias, energy use, data sources, training models, and limitations. They will not simply use artificial intelligence; they will research artificial intelligence as a phenomenon, as a tool, as a social force, as an economic force, and as an ethical landscape. Those children will not feel that artificial intelligence was done to them. They will feel that they were part of understanding and shaping it.
By the time these children reach middle school and secondary school, the difference between those two groups may be very visible. One group may have experienced education as something delivered to them faster and more efficiently, with more personalization and differentiation, but still largely inside a system where learning happens inside buildings, inside screens, inside platforms, and inside assignments. The other group may have experienced education as research, as exploration, as critical inquiry, as partnership with educators, and as engagement with the real world alongside the digital one. One group may feel compliant or resentful. The other group may feel capable and responsible.
This is where the Montessori lens becomes very important, not because Montessori is a brand or a set of materials, but because Montessori is fundamentally a way of understanding the child as an agent in their own development. Montessori education has always asked a very long-term question: when this child is twenty-five years old and standing in front of you as an adult, what do you want them to say about their childhood and their education? Do you want them to say thank you for standing beside me when I was stressed and angry, thank you for helping me learn to regulate my emotions, thank you for allowing me to process independently, thank you for letting me follow my interests and become fully myself? Or do we risk a generation of adults who feel that things were done to them rather than with them, who feel managed rather than mentored, who feel processed rather than educated?
In Montessori thought, we often talk about following the child, but what that really means is that the child is not an object to be shaped but a person to be collaborated with. Education is not something we do to children, and even the phrase that we do things for children can be problematic because it still removes their agency. Education should be something we do with children.
If artificial intelligence enters education in a way that reinforces a performative system where output is prioritized, speed is prioritized, and compliance is prioritized, then artificial intelligence will amplify the weaknesses that already exist in education. But if artificial intelligence enters education through a research lens, where students and educators together study the technology, question it, test it, and understand its implications, then artificial intelligence could amplify agency rather than compliance.
This is why the idea of classrooms as research environments becomes so important. Montessori classrooms, particularly in the public sector, could function as research centers where the focus is not on adding more technology but on studying human development, attention, ethics, collaboration, and agency in an age of intelligent machines. Montessori is not fundamentally about materials; it is fundamentally about ontology. It is about who you are being in the presence of the child. It is about the adult stepping out of the role of controller and into the role of collaborator. It is about the educator and the student standing side by side exploring the world in a relational, respectful, and academic manner. In that environment, artificial intelligence becomes a subject of study, not a system of control.
From the child’s perspective, this entire moment must look very strange. Children watch adults carefully. They watch what we adopt quickly, what we question slowly, what we fear, what we celebrate, and what we ignore. They are watching us adopt technologies very quickly, sometimes without fully understanding them, sometimes because they are efficient, sometimes because they are impressive, sometimes because they are inevitable. They are watching us restructure education around tools rather than around human development. They are watching us decide things on their behalf.
The child is always watching the adult to understand what kind of world they are entering. The question is whether they will see adults who are thoughtful, careful, collaborative, and ethical in how they introduce new technologies, or whether they will see adults who are rushing, competing, adopting, and adjusting in real time while the children themselves are the ones living inside the experiment.
We cannot stop technological change, and we probably should not try to. But we can decide whether children are test subjects or researchers. That is a philosophical decision, not a technological one. If we treat children as passive recipients of systems, then Generation Beta may grow up feeling that the world is something that happens to them. If we treat children as researchers, collaborators, and thinkers, then Generation Beta may grow up feeling that the world is something they help shape.
We are forming Generation Beta right now. They are not an abstract demographic category. They will soon be in our classrooms, in our homes, in our communities, watching how we respond to the most powerful technologies humanity has ever created. The question is not what artificial intelligence will do to education. The question is what kind of humans we are trying to raise in a world where artificial intelligence exists. And that question, more than any software or platform or policy, will determine what Generation Beta becomes.
If you wish to follow the research and thinking that inform this work, the books Mapping Montessori Materials for AI Competency Development and Montessori & AI, Volume I are available through my website, katebroughton.com.


What resonates most is the reminder that this is ultimately a philosophical decision, not a technological one. In schools, it would be easy to default to convenience—using AI to speed up tasks, generate outputs, or manage workload—but the real opportunity lies in slowing down and making the technology visible. Helping students question it, test it, compare it, and understand its limitations. That’s where we preserve thinking. That’s where we build agency. Because if this generation is going to grow up surrounded by intelligent systems, the most important thing we can give them is not just access—but the ability to interrogate, challenge, and shape those systems for themselves.