Geoffrey Hinton’s connections to Bristol are both personal and scientific. His father, Howard Everest Hinton, was a world-renowned entomologist and served as Head of Zoology at the University of Bristol in the 1970s.
During his own formative years, Geoffrey had long conversations with Richard Gregory, the visionary psychologist known for his pioneering work on visual illusions, and even participated in Gregory’s perceptual experiments.
These early influences were not mere biographical curiosities; they were formative, shaping an approach to artificial intelligence grounded in perception, uncertainty, and neural plausibility rather than in linguistics, which might have seemed the more intuitive route given the eventual rise of large language models.
A group of academics and postgraduate research students from the Business School took the time to reflect on Geoffrey Hinton’s lecture.
Professor Hinton’s lecture offered a wide-ranging and intellectually ambitious account of the evolution of artificial intelligence — tracing its trajectory from rule-based symbolic reasoning to biologically inspired systems capable of learning.
“He began by outlining two major schools of thought in AI development. The first, rooted in logic and symbolic reasoning, treats intelligence as a system of rules for manipulating symbols. In contrast, the second — the one to which Hinton has dedicated his career — draws inspiration from biology and the way the human brain learns through neural connections.
As a Business School student with a strong personal interest in chemistry and biology, I found this biological analogy particularly captivating.” — Bing Lu, PhD student
For those of us trained in vision science, the lecture resonated at a particularly deep level. I spent my doctoral studies investigating visual illusions: specifically, why they engage us so powerfully, regardless of age or educational background. Hinton’s framing of the problem through perception, uncertainty, and learned heuristics felt intellectually familiar, rooted in the kinds of questions we ask in experimental psychology. One moment stood out in particular: Hinton’s description of placing a prism in front of an AI system and asking it to identify the location of an object directly ahead. The system responded:
“Oh, I see — the prism bent the light rays, so I had a subjective experience that the object was off to one side. But actually, it was straight in front of me.”
This moment — when the AI described its misperception due to the prism — was not simply a case of anthropomorphism; it mirrored the reasoning often heard from human participants in vision experiments: not expressions of error, but attempts to explain why perception deviated from expectation. In vision science, illusions have long been used to show that perception is not a direct imprint of reality but a process of inference. Richard Gregory famously described illusions as “departures from truth, or from physical reality,” or as “disagreements with the external world of objects.” Yet this presumes we have unmediated access to that reality. In fact, we only ever receive proximal stimuli — sensory inputs such as light patterns on the retina — and must infer the distal stimulus, the actual object or event in the world. This means perception is always interpretive, always a best guess.
From this perspective, the idea that only humans possess subjective experience begins to falter. Perception, whether biological or artificial, is probabilistic and context-dependent. When an AI system interprets a perceptual distortion and articulates why it occurred, it follows the same heuristic logic as a human brain attempting to make sense of uncertainty. This does not imply that AI has achieved sentience in any definitive way, but it does challenge the assumption that subjective experience is uniquely human.
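To make the idea of a “best guess” concrete, here is a minimal, purely illustrative sketch (not drawn from the lecture; the parameter values and function names are invented): a toy Bayesian observer that receives only a proximal angle and must infer the distal direction of an object, with and without knowledge that a prism has shifted its input.

```python
# Toy illustration: perception as inference from proximal to distal stimulus.
# The observer never sees the object's true direction; it only receives a
# (possibly prism-shifted) sensory angle and must infer the most likely cause.

def infer_direction(observed_angle, prism_shift, prior_mean=0.0,
                    prior_var=25.0, sensory_var=4.0):
    """Posterior mean/variance for the object's direction in degrees,
    assuming a Gaussian prior and Gaussian sensory noise.

    If the observer knows about the prism, it subtracts the shift before
    combining evidence with its prior; if not, it misattributes the shift
    to the world, which is exactly the illusion."""
    corrected = observed_angle - prism_shift          # undo the known distortion
    precision = 1.0 / prior_var + 1.0 / sensory_var   # combine the two sources of evidence
    posterior_var = 1.0 / precision
    posterior_mean = posterior_var * (prior_mean / prior_var + corrected / sensory_var)
    return posterior_mean, posterior_var

# Object straight ahead (0 degrees); the prism bends the light by 10 degrees.
naive = infer_direction(observed_angle=10.0, prism_shift=0.0)   # unaware of the prism
aware = infer_direction(observed_angle=10.0, prism_shift=10.0)  # prism accounted for
print(f"unaware of prism: best guess {naive[0]:.1f} deg off to one side")
print(f"aware of prism:   best guess {aware[0]:.1f} deg, i.e. straight ahead")
```

The point of the sketch is only that the same inference machinery produces either the “illusion” or the corrected report, depending on what the observer knows about how its input was generated.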
While much of the lecture focused on cognitive models and computational theory, Professor Melissa Gregg’s reflection introduced a sharp reminder that the development of AI is never purely technical — it is also institutional and political. Hinton acknowledged that most cutting-edge work in AI now takes place within large tech corporations.
“What truly concerned me was hearing hundreds of aspiring computer scientists being told that in order to participate in cutting-edge AI research, they must work in large tech companies. When those companies show no signs of improving diversity in leadership and governance, the version of intelligence being built will be a colossal compromise, at the very least.” — Professor Melissa Gregg, Professor of Digital Futures
Her point reframes the question of intelligence: not just what AI is, or how it works, but who is building it, under what assumptions, and for whom.
While some responses focused on the risks and constraints shaping AI’s development, others saw in Hinton’s lecture a generative model for interdisciplinary exchange. Dr Eleonora Pantano, Associate Professor at the Business School researching AI in business, reflected on how the talk offered a perspective rarely encountered in her field — one rooted in physics and biological modelling rather than abstract theorising or managerial applications:
“Professor Hinton’s talk was very inspiring and enriching. As a researcher on AI in business, I had the pleasure of listening to his research on biological vs. digital intelligence from a rigorous approach based on physics — much different from the approach usually adopted by business and management scholars. He started from the paradigms of intelligence to go into detail on what digital (artificial) intelligence is, how it can be built and developed, and the actual and prospective risks for human (biological) intelligence.
I think that in this talk, Professor Hinton moved the boundaries of many fields, bringing together different approaches to the same phenomenon that might work together for the social good.” — Dr Eleonora Pantano, Associate Professor in Retail and Marketing Technology
Although Hinton clearly acknowledged the risks — including strategic deception and emergent behaviour in large language models — his lecture also conveyed a cautiously hopeful view of AI’s potential.
“I was inspired by his way of seeing AI as a potential hope that will revolutionise healthcare and accelerate scientific discovery in the future. Although digital intelligence is evolving quickly, his way of seeing this potential gives a sign that, if steered correctly, it can improve the living conditions of human beings.” — Revalda Putawara, postgraduate student
This sense of transformation extended beyond scientific discovery and into education:
“At the University of Bristol Business School, we have had many conversations about our concerns about the challenges AI has brought to our teaching and assessment systems. However, the real question we should ask ourselves is whether we have truly followed the advancement of AI and incorporated it to change the way we design our teaching and assessment for our students.
When we complain about students misusing AI to let it do their assessments, shouldn’t we also question why we design assessments that can be easily completed with a few prompts to a chatbot? Instead of standalone policies to prevent AI, changes should also be made on our side to understand and embrace it.” — Yilong Chan, Graduate Teacher
“As a teacher, I was particularly struck by Professor Hinton’s vision of AI revolutionising education. Professor Hinton’s assertion that AI can analyse extensive data to customise learning experiences suggests a transformative shift in education. Such technology could provide equitable, individualised instruction, dynamically adapting to a student’s strengths and challenges.
This prompted a critical question: what becomes of teachers when AI assumes the instructional role? I believe their role will not be eliminated but redefined, akin to that of a general manager in basketball. Unlike coaches, who focus on teaching athletes specific skills, general managers orchestrate the broader team environment, strategy, and morale. Similarly, teachers will shift from delivering content to fostering a positive and inclusive classroom atmosphere that forms the foundation for effective learning and personal development.
In this new paradigm, teachers, as general managers, will prioritise cultivating emotional intelligence, critical thinking, and interpersonal skills—qualities beyond AI’s current capabilities. By creating a welcoming environment, they will ensure students thrive as individuals, not merely as learners.” — Edoardo Tozzi, Graduate Teacher
More than ever before, I understood the tremendous value of fostering interdisciplinarity in student education. Geoffrey Hinton’s work is a powerful reminder of how deeply early scientific training shapes the way we ask questions, build arguments, and engage with the world. Hinton’s return to Bristol was not just symbolic; it was a return to a place where the foundations of his approach to AI were laid through long conversations, visual illusions, and experiments that asked whether things really are the way they appear. It reminded me why it is so important to keep those spaces alive in education — spaces where intellectual freedom and disciplinary openness are not only possible but essential.
He was the perfect speaker to honour and celebrate the legacy of Richard Gregory: someone whose own career was shaped by Gregory’s vision of science. This was a particularly special moment for the University, as members of Richard Gregory’s family were present in the audience. And in that shared space — where the boy who once took part in Gregory’s experiments returned as an emeritus scientist of global significance — the lecture became not only a glimpse into the future of intelligence, but also a profoundly moving journey through Bristol’s scientific history.
Special thanks to our dean, Professor Brian Squire, for his support, and to Laura Pugh, who carried the organisation of this event almost single-handedly. Although she worked behind the scenes as our administrator, her contribution was absolutely essential to the success of the evening.