Event round up: AI and Business: Opportunity, Threats and Responsibility

On 18th February 2025, at Burges Salmon in central Bristol, we held an event on the challenges of AI. In this blog, Professor Martin Parker tells us what happened, and why it matters.

Discussion about AI seems to be divided between commentators who see it as a new industrial revolution, and those who predict the end of the world. This event, part of the University of Bristol Business School’s Thought Leaders Series, brought together innovators and leaders from various organisations with researchers to discuss the challenges and opportunities affecting businesses and policy areas in Bristol and beyond.

Our invited speaker for this event was Nigel Toon, a leading AI entrepreneur and founder of Graphcore, a British semiconductor company developing AI and machine learning. Nigel is the author of the best-selling book How AI Thinks and holds a Doctor of Science from the University of Bristol.

Dr Sadaf Alam, Chief Technology Officer of the Bristol Centre for Supercomputing, home to Isambard 3 and Isambard-AI, part of the national AI Research Resource, opened the event with a short introduction. Nigel’s talk covered the history of AI and some speculations on what might come next. His highlights were:

  • Current large language models (LLMs) are in their infancy. This means they will develop rapidly, but probably towards solving more specific problems. So far, progress has been based on throwing ever larger amounts of computing at LLMs, but the launch of the Chinese DeepSeek model suggests that the next task will be building intelligent search that moves beyond automation towards system solutions.
  • Who will be affected by AI? LLMs can address complex problems in law, education and science, and will almost certainly affect businesses and organisations which have large numbers of people involved in routine knowledge processing. Those employed in people-facing occupations are less likely to be affected, as are those with the skills and education to work alongside AI systems.
  • AI plus humans. Nigel argued that it was unlikely, and perhaps undesirable, to imagine that AI would ever work alone, but rather that it would become a tool used alongside other tools to achieve complex tasks.

The panel discussion proved to be very stimulating and was chaired by Professor Florian Bauer, Professor in Strategy at the University of Bristol Business School. Our other panellists were Sue Turner OBE and Professor Richard Owen. Sue is dedicated to using her expertise in AI governance and ethics to inspire people to use AI with wisdom and integrity. She established AI Governance Limited in 2020 to advise businesses and policymakers on pragmatic approaches to AI, data ethics and governance, and on making a positive societal impact.

Richard, Professor of Innovation Management at the University of Bristol Business School, studies the ethics, risks, politics and governance of innovation and is a member of the University’s Centre for Sociodigital Futures. Last year he began to experiment with generative AI, resulting in the creation of Avatar Rich, a photo-real, voice-cloned avatar of himself that he is still coming to terms with. He has just begun to use this polished and arguably better version of himself in his teaching, the implications and impacts of which will be explored over the coming year.

In an animated conversation, followed by questions from the audience, we covered some important issues:

  • One concerned the geopolitics of regulation, particularly in the context of US big tech and its closeness to the Trump administration. All our panellists agreed that regulation is needed to ensure that the worst fears about AI do not materialise, but who is to regulate? In February 2025, the UK and US refused to sign the Paris International Agreement on AI Safety, arguing that regulation would kill an emerging industry. Without the active cooperation of China, and given the power of big tech, it is difficult to see what global bodies will emerge to regulate an industry that knows no borders.
  • Another question explored was the problem of thinking about economic and social futures, and the importance of understanding that the future is unlikely to be like the past. Who makes these futures is the big question, with a real danger that the unregulated interests of digital technology firms will push us further towards the attention economy, deepfakes, ambient surveillance, the neglect of questions of responsible innovation and so on. Our ‘sociodigital’ future is being shaped in some demonstrably undemocratic ways, but the genie is out of the bottle and there is no point in ‘blaming’ AI for these dangers.
  • Sue Turner noted that some businesses might have some sort of “code of business ethics”, but none as far as she knew had a “code of AI ethics”.
  • The carbon emissions of LLMs, the reshaping of UK higher education away from the arts and humanities and the prospect of an artificial general intelligence provided further topics for discussion.

Overall, this was a fascinating conversation which began to scratch the surface of one of the most pressing questions of our time.

Thanks to all our speakers for participating and to Burges Salmon for hosting. Look out for more events in our series.


Author: Professor Martin Parker, Professor of Organisation Studies
