Will Digital Intelligence Replace Biological Intelligence? Reflections on Geoffrey Hinton’s Lecture on AI

Author: Dr Jasmina Stevanov, Lecturer in Consumer Psychology at the Business School and Management Board Member of the Bristol Vision Institute.

This public-facing University of Bristol event was held in June as part of the annual Richard Gregory Memorial Lecture series, organised by the Bristol Vision Institute. This year, the University also celebrated being named ‘AI University of the Year’ at the 2024 National AI Awards — a timely moment for Professor Geoffrey Hinton, often called the ‘Godfather of AI’, to return to Bristol and reflect on the future of artificial intelligence.


About Professor Geoffrey Hinton


Geoffrey Hinton’s connections to Bristol are both personal and scientific. His father, Howard Everest Hinton, was a world-renowned entomologist and served as Head of Zoology at the University of Bristol in the 1970s.

During his own formative years, Geoffrey had long conversations with Richard Gregory, the visionary psychologist known for his pioneering work on visual illusions, and even participated in Gregory’s perceptual experiments.

These early influences were more than biographical curiosities; they were formative in shaping an approach to artificial intelligence grounded in perception, uncertainty, and neural plausibility rather than in linguistics, which might have seemed the more intuitive route given the eventual rise of large language models.


A group of academics and postgraduate research students from the Business School took the time to reflect on Geoffrey Hinton’s lecture.

Professor Hinton’s lecture offered a wide-ranging and intellectually ambitious account of the evolution of artificial intelligence — tracing its trajectory from rule-based symbolic reasoning to biologically inspired systems capable of learning.

For those of us trained in vision science, the lecture resonated at a particularly deep level. I spent my doctoral studies investigating visual illusions, specifically why they engage us so powerfully regardless of age or educational background. Hinton’s framing of the problem through perception, uncertainty, and learned heuristics felt intellectually familiar, rooted in the kinds of questions we ask in experimental psychology. One moment stood out in particular: Hinton’s description of placing a prism in front of an AI system and asking it to identify the location of an object directly ahead. When told about the prism, the system explained that the prism had distorted its perception of the object’s location.

This moment, when the AI described its misperception due to the prism, was not simply a case of anthropomorphism; it mirrored the reasoning often heard from human participants in vision experiments: not expressions of error, but attempts to explain why perception deviated from expectation.

In vision science, illusions have long been used to show that perception is not a direct imprint of reality, but a process of inference. Richard Gregory famously described illusions as “departures from truth, or from physical reality,” or as “disagreements with the external world of objects.” Yet this presumes we have unmediated access to that reality. In fact, we only ever receive proximal stimuli (sensory inputs such as light patterns on the retina) and must infer the distal stimulus, the actual object or event in the world. This means perception is always interpretive, always a best guess.

From this perspective, the idea that only humans possess subjective experience begins to falter. Perception, whether biological or artificial, is probabilistic and context-dependent. When an AI system interprets a perceptual distortion and articulates why it occurred, it follows the same heuristic logic as a human brain attempting to make sense of uncertainty. This does not imply that AI has achieved sentience in any definitive way, but it does challenge the assumption that subjective experience is uniquely human.

While much of the lecture focused on cognitive models and computational theory, Professor Melissa Gregg’s reflection offered a sharp reminder that the development of AI is never purely technical; it is also institutional and political. Hinton himself acknowledged that most cutting-edge work in AI now takes place within large tech corporations.

Her point reframes the question of intelligence: not just what AI is, or how it works, but who is building it, under what assumptions, and for whom.

While some responses focused on the risks and constraints shaping AI’s development, others saw in Hinton’s lecture a generative model for interdisciplinary exchange. Dr Eleonora Pantano, an Associate Professor at the Business School who researches AI in business, reflected that the talk offered a perspective rarely encountered in her field: one rooted in physics and biological modelling rather than abstract theorising or managerial applications.

Although Hinton clearly acknowledged the risks — including strategic deception and emergent behaviour in large language models — his lecture also conveyed a cautiously hopeful view of AI’s potential.

This sense of transformation extended beyond scientific discovery and into education.

Geoffrey Hinton speaking at the event
