Ethics in the Age of AI: Why Transdisciplinary Thinkers Are Key to Balancing Responsibility, Profitability, Safety and Security 

By Prof. Nayef Al-Rodhan

How do we prepare for a future where the long-term effects of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI) will almost certainly surpass our current imagination of what they can do? And how do we deploy AI responsibly without causing serious harm or widening inequality? These questions were at the heart of discussions held by world leaders and tech industry pioneers at the recent AI Summit in Paris.

To unlock the full potential of AI, we must reconcile long-term collective safety, ethics, equity and responsibility with corporate profitability and national security needs. Our current ethical guidelines are insufficient for a new era of disruptive technological change and it is becoming increasingly clear that there is a disconnect between technological advancement and society’s understanding of the related safety and ethical potholes. To bridge this gap, the world needs highly trained transdisciplinary thinkers, including philosophers, international relations scholars, policy practitioners, political scientists, neuroscientists, anthropologists, social scientists, AI experts and others who can connect the dots between various academic disciplines and ask, if not answer, important generation-defining questions.

Artificial intelligence is advancing rapidly. Some experts suggest that we may achieve AGI by the end of this decade. The implications are profound, potentially reshaping society and challenging our norms and values, our societal and global order, and what it means to be human.

Years ago, the scientist and futurist Roy Amara famously observed that we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. We could very well be guilty of this when it comes to the looming potential of AGI, machines that could supersede the cognitive work that humans currently do. Either way, we're entering an era of radical technological change, and yet we're still ill-equipped – logistically, ethically, morally and philosophically – for the changes coming our way. Part of the problem lies in the fact that each technology presents its own specific set of ethical dilemmas. The regulations in place to address these challenges amount to a patchwork of voluntary ethical codes and non-binding treaties. This is not unique to the field of Generative AI: the World Health Organisation, for example, has published guidelines for responsible gene-editing, but these are voluntary recommendations. Overall, the ethical ecosystem governing AI is falling short, in part due to geopolitical wrangling. A case in point is the UN Convention on Certain Conventional Weapons, which has so far failed to reach an agreement on banning “killer robots” (i.e. autonomous weapons that can pick targets – often erroneously – and kill them without human intervention), due to opposition from Russia and the United States, amongst others.

As a result of these regulatory shortcomings, we are seeing a moral and ethical vacuum when it comes to setting the rules for AI and other disruptive new technologies. We should also be alert to the rise of “ethics dumping” – where scientists travel to countries with less stringent laws to carry out morally questionable procedures. This is becoming common in the field of genetics and could very soon trickle into the field of AI ethics as well. That said, in recent years progress has been made with regard to securing “neurorights”, which help determine the proper use of neurotechnologies. In 2017, the neurobiologist Rafael Yuste and his team worked with the Chilean government to establish “cerebral integrity” – a basic right now signed into Chilean law via an amendment to its constitution. More recently, Yuste has worked with the United Nations to update human rights to include “frontier issues”. This is an important mission that should expand beyond transformative and often intrusive neurotechnologies to cover all forms of Generative AI, as well as many emerging technologies and innovative biological interventions that will shape the coming decades.

However, the discussion about rights raises an uncomfortable question: will internationally binding rights be enough to protect humanity from the potential perils of AI? After all, international support for human rights does not – as University of Chicago law professor Eric Posner points out – prevent some UN member states from engaging in torture, severe discrimination, genocide and ethnic cleansing. Building AI that is safe, ethical and beneficial will therefore require a transdisciplinary coalition focused on the ethical, moral, societal, cultural and political implications of rapidly evolving emerging technologies such as AI. Transdisciplinary frameworks can help us keep pace with the changes happening around us. They can also play an important role in helping achieve societal cohesion and sustainable transnational peace, security, dignity and prosperity for all.

Transdisciplinary thinkers can help by asking timely philosophical and existential questions, not least to understand how and why societal trust is at risk as a result of developments in the AI space. We are entering a post-truth era with significant and dangerous societal and global consequences, in which AI-generated content is becoming ever more realistic, further blurring the boundaries between reality and fiction. The mere existence of such high-quality, AI-generated content is giving people, including politicians, cover to question the truth—even if AI was not used. Unless we take urgent action, we may soon find ourselves in a morally clouded reality, where anything can be true or false depending on whether an individual is already primed to believe it. In this dystopian future, truth is subjective, and reality may depend on the sophistication of technological methodologies used, as well as whatever reinforces one’s prior beliefs. Given its current trajectory, AI-generated media is likely to become even more realistic as well as more pervasive and persuasive.

Does this mean that innovation has become an end in itself? My personal experience has shown me that the worlds of science, neuroscience, philosophy, applied history, strategic culture, cultural studies, disruptive technologies, international relations and many other disciplines are very much complementary. By transcending academic disciplines we create the space for new ideas to flourish and for academics and practitioners to broaden their horizons – as we are currently seeing in efforts by tech policy experts to “rewild” and rebuild the internet using lessons learned by ecologists. Transdisciplinary tools, such as my Neuro-Techno-Philosophy framework, can help us keep an open mind, not just about the immediate man-made dangers of AI technologies but also about their potential to redefine what it means to be human. In contrast to neurophilosophy, which focuses on the human mind and human nature as they are, Neuro-Techno-Philosophy examines the effects of highly transformative innovations on the human mind and human nature as they will be. The latter is becoming increasingly pertinent as AI starts to shape how we understand and engage with the world, in doing so making us re-evaluate our place in it.

In a highly connected and deeply interdependent world, disruptive technologies such as AI and synthetic biology amongst others, can easily create individual and collective dignity deficits, which in turn feed contempt. We must therefore apply transdisciplinary approaches to make sure that the use and governance of AI is always steered by human dignity needs for all, at all times and under all circumstances. If we ensure that these dignity needs are met, our neurochemically-mediated emotions and motivations are more likely to promote social cohesion and cooperative behaviour. If not, the opposite is likely to happen.

A transdisciplinary approach to AI ethics recognises that technological governance must be symbiotic—ensuring security without suppressing innovation, safeguarding profitability without compromising fundamental rights, and fostering progress without exacerbating societal inequalities. If AI is to serve humanity rather than subjugate it, we must cultivate a responsible innovation ecosystem—one where ethical considerations are not afterthoughts but core design principles, embedded in the very fabric of corporate strategy and national policy. Above all, to truly unleash the best in cooperative and peaceful human behaviour, and to steer humanity towards a more sustainable and prosperous global order for all, we must think outside the box and strive towards win-win, multi-sum, absolute gains and non-conflictual competition, as embodied by my Symbiotic Realism paradigm. This is as true of AI ethics as it is of many other challenges currently facing humanity in the social, cultural, economic, environmental and geopolitical realms, on Earth and in outer space.

Transdisciplinary thinking can help us untangle intractable philosophical and ethical problems about human nature, emotion and morality that have been brought to the fore by the AI revolution. We may not have all the answers about the future potential and trajectory of Generative AI, but transdisciplinary approaches will be invaluable in asking the right questions that will help prepare humanity for what is to come - or at least help mitigate some of the more serious negative consequences.

Professor Nayef Al-Rodhan is a philosopher, neuroscientist, geostrategist and futurologist. He is an Honorary Fellow of St. Antony’s College, Oxford University; Head of the Geopolitics and Global Futures Department, Geneva Centre for Security Policy (GCSP) in Switzerland; Senior Research Fellow, Institute of Philosophy at the University of London; and a Member of the Global Future Council on the Future of Complex Risks at the World Economic Forum. He is also a Fellow of the Royal Society of Arts (FRSA). He holds an MD and PhD, and was educated and worked at the Mayo Clinic, Yale University and Harvard University in the United States.

He is a prize-winning scholar who has written more than 300 articles and 25 books, including most recently 21st-Century Statecraft: Reconciling Power, Justice and Meta-Geopolitical Interests; Sustainable History and Human Dignity; Emotional Amoral Egoism: A Neurophilosophy of Human Nature and Motivations; and On Power: Neurophilosophical Foundations and Policy Implications. His current research focuses on transdisciplinarity, neuro-techno-philosophy, and the future of philosophy, with a particular emphasis on the interplay between philosophy, neuroscience, strategic culture, applied history, geopolitics, disruptive technologies, international relations, and global security. His books and articles may be found at www.sustainablehistory.com.
