The Human Line
Knowledge, Children, and the Question We Must Still Ask
I am so interested in knowledge and how we know that we know things, and what we know, and whether the things we know are true, and the knowing of knowledge itself. And I have been thinking about this more and more as artificial intelligence enters everything, because suddenly knowledge feels different. It feels less solid somehow, or perhaps less rare. It is not that knowledge is less important, but it is no longer scarce in the way it once was. Answers are everywhere. Information is everywhere. And so the question begins to shift from what we know to how we know, and even further, to how we decide what matters.
I was listening to something recently about discoveries in science that at first people assumed were artificial intelligence or advanced technology, but in fact they were the result of the human hand, the human eye, and long, careful exploration. That stayed with me because it reminded me that so much of what we call knowledge does not come from speed or efficiency or computation, but from fascination, patience, and love of the work itself.
I grew up around science. My father was a Fellow of the Royal Astronomical Society in England, and my childhood was filled with conversations about stars and planets and black holes and quantum theory and quarks and the great questions of why we are here at all. So I have always been surrounded by knowledge and by people who loved knowledge. But what I noticed even then was that the people who loved knowledge the most were not the ones who wanted to arrive at certainty as fast as possible. They were the ones who were willing to live in the question.
We educate children for knowledge, and that is important. But there are different kinds of knowledge. There is knowledge from books, knowledge from research, knowledge from data, and then there is knowledge that comes from making, from experimenting, from failing, from trying again, from touching the world and seeing what happens. There is knowledge by doing, knowledge by experiencing, knowledge by living.
As we move up through academia and research and doctoral work and specialization, something interesting can happen. Knowledge can become the destination instead of the process. And I sometimes wonder whether when we reach the top of a field, we are still free to experiment, to play, to posit strange ideas, to wander down rabbit holes and wormholes and follow fascination just because something is interesting. Or whether at that point there is something to protect, something to maintain, a position to uphold, a reputation to keep safe. Risks go down because there is too much to lose.
I have no judgment in this. It is just an observation. But I do wonder whether as we know more and more and more, we are keeping our humanness with us. Or whether knowledge can sometimes become a fortress, a wall, something to defend rather than something to explore.
I think about Dr. Maria Montessori often in this context. She was a scientist, a doctor, a researcher, but she was also profoundly human. She dissected cadavers alone at night because the male medical students did not want a woman present. She worked with children nobody else believed could learn. She had a child out of wedlock in a time when that could end a career. She was always on the edges of society, always on the skinny branches, always doing the harder thing. She was not protecting a position. She was following a question. She had the mind of a scientist and the heart of a human, and she never separated the two.
I think this matters now more than ever, because we are entering a time when we can know almost anything instantly. Artificial intelligence can summarize books, write essays, generate ideas, analyze data, produce plans, and answer questions at extraordinary speed. So education can no longer be about access to information. That problem has been solved. The question now is what a human being needs to be able to do in a world where information is everywhere and answers are cheap.
And this is where the child comes in, because I keep thinking about the child watching all of this. The child is always watching us. The child is watching how we make decisions, how we use technology, how we respond to difficulty, how we treat each other, how we think, how we question, how we doubt, how we change our minds. The child is not just learning information. The child is learning how to be human.
If the child grows up in a world where every question is immediately answered by a machine, then the child must learn something else that cannot be answered by a machine. The child must learn judgment. The child must learn discernment. The child must learn patience, curiosity, ethics, empathy, collaboration, and the ability to sit with a question without immediately closing it.
Montessori understood this deeply. She did not create an education system for memorizing information. She created an environment where children learn how to think, how to observe, how to experiment, how to concentrate, how to work with others, how to persist, how to make decisions, how to care for their environment, and how to become independent human beings who can function in society with integrity and responsibility.
In many ways, Montessori education was never about the materials themselves. It was about the development of the human being. The materials were tools for building the mind, the will, the emotions, the social self, the moral self. And if we look carefully, those are exactly the capacities that matter in a world where knowledge is everywhere and automation is increasing.
So perhaps the question is not whether artificial intelligence will replace knowledge work or decision making or writing or research. Perhaps the question is who will hold the human line. Who will ask not only can we do something, but should we do it. Who will think about long term consequences, about quality of life, about human dignity, about childhood, about relationships, about meaning, about purpose.
We have seen many times in history that technological or scientific solutions fix one problem and create another. Medicines that cure one condition create side effects somewhere else. Industrial progress improves efficiency but damages environments. Systems become optimized and people become exhausted. We fix one thing and break another because we look at isolated problems instead of whole systems and whole human lives.
Artificial intelligence gives us an extraordinary opportunity because we are still early enough to ask the human questions alongside the technical questions. We can ask not only how fast, how efficient, how accurate, how scalable, but also how human, how ethical, how meaningful, how sustainable, how does this affect childhood, how does this affect relationships, how does this affect attention, how does this affect society fifty years from now.
Maybe in every university there should be scientists and engineers working side by side with philosophers and psychologists and historians and educators and artists, not as an afterthought but as part of the same conversation. Not to stop progress, but to guide it. Not to slow everything down, but to make sure we are moving in a direction that actually improves human life rather than just accelerating change.
Because we could have all the knowledge in the world. We could have the fastest systems, the most efficient companies, the most powerful machines, the most advanced algorithms. But if we lose empathy, kindness, generosity, wisdom, patience, curiosity, and love, then we have gained everything and lost the thing that made it worth gaining anything at all.
And so I keep coming back to the child, because the child is always the clearest lens. The child does not care how fast a machine can write an essay. The child cares whether someone is listening. The child cares whether their work matters. The child cares whether they belong. The child cares whether the world makes sense and whether the adults around them are thinking carefully and acting with integrity.
The child is watching us build this next world. The child is watching how easily we hand decisions over to machines, how quickly we accept answers without questioning, how comfortable we become with convenience, how willing we are to trade thinking for speed and judgment for efficiency.
So perhaps the real question for education, for science, for technology, for society, is not how intelligent our machines become. The real question is whether we remain fully human as they do.
If you wish to follow the research and thinking that inform this work, the books Mapping Montessori Materials for AI Competency Development and Montessori & AI, Volume I are available through my website, katebroughton.com.