The Voices of War

100. Carl Miller - The AI Revolution: Exploring Its Ethical, Social, And Political Ramifications

Join The Voices Of War. Can’t afford the subscription? Email me for an alternative solution. Universities and educational institutions can always reach out for full access to episode files at no cost.

Today, I welcome back Carl Miller, a leading expert in the ethics of artificial intelligence and the Research Director at Demos, a UK-based think tank.

After his insightful discussion on information warfare, cyber-attacks, and weaponisation of social media in Episode 51, Carl returns to delve deeper into the ethical, social, and political ramifications of AI. From exploring the centralisation of power in AI development to discussing real-world case studies on deepfake technology and ethical risks, this episode offers a comprehensive look at the complex landscape of AI governance and ethics.

Carl also shares inspiring stories of AI’s potential for social good, making this a must-listen for anyone interested in the future of AI and its impact on society. As you’ll hear, Carl also shared details about his latest project Recursive Public, an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the public on key questions of governance. Please consider supporting this important and worthwhile AI governance project. You can register here.

Some of the other topics we covered are:

A Think Tanker’s Foray into AI Ethics: Carl’s background in public policy and his role at Demos, a leading UK think tank with a focus on AI ethics and governance.

Understanding Power Dynamics in the AI Era: Carl defines the concept of power in the digital age and its implications for AI governance.

Categorising different forms of power, including economic and coercive power, and how they relate to AI development.

Who Holds the Reins of AI? The Centralisation of Power: An in-depth look into the entities that hold the levers of power in AI, from tech giants to governmental bodies.

Centralisation of AI Technologies: Discussion on the centralisation of AI technologies like GPT and its societal implications.

Fostering a Collaborative Community in AI: Carl emphasises the need for a less hostile, more collaborative approach to AI ethics and governance.

Ethical Concerns: AI’s Potential for Social Harm: Exploration of the ethical risks associated with AI, particularly in polarised societies and areas lacking human rights frameworks.

The Agency of AI: A Risk Assessment: Discussion on the conditions under which AI could become a dangerous entity, including the potential for autonomous action.

Deepfakes and Misinformation: A Case Study: A real-world example of how deepfake technology was utilised to create misleading news broadcasts in Venezuela.

Positive AI Use Cases: Empowering the Marginalised: Inspiring stories from India where AI is bridging the digital divide and improving access to government services.

Future of AI Ethics: The episode concludes with a look at what’s next in the field of AI ethics and governance.

Resources:

Carl Miller’s Book: The Death of the Gods

Intelligence Squared Podcast

Episode 51 with Carl Miller

Finally, don’t forget to subscribe, rate, and share The Voices of War to help us continue exploring the complex narratives of war.

To comment or take the conversation further, please connect to us here:

Listen to the podcast here


Carl Miller – The AI Revolution: Exploring Its Ethical, Social, And Political Ramifications

My guest is Carl Miller. Carl is the Research Director of the Centre for the Analysis of Social Media at Demos, a UK-based think tank. He’s also the author of the critically acclaimed book, The Death Of The Gods: The New Global Power Grab published in 2018, which delves into the seismic shifts in power dynamics brought about by the digital age.

Carl also presents programs for the BBC’s flagship technology show, Click, hosts insightful conversations on the popular Intelligence Squared Podcast, and has written for Wired, New Scientist, The Sunday Times, The Telegraph, and The Guardian. This is Carl’s second appearance on the show. The last time was on episode 51, where we discussed information warfare, cyber-attacks, and the weaponisation of social media with a particular focus on the Russian invasion of Ukraine. In this episode, Carl joins me to explore the ethical, social, and political implications of the ongoing boom in artificial intelligence and how it’s reshaping the very fabric of our world.

Carl, welcome back to the show.

Thanks so much for having me back. A hundred episodes. What a milestone for you and all your lucky audience.

Thank you for being episode 100. You were episode 51, and now we’ve nearly doubled that to 100. I might have you back for 150 as well. We’ll have to get you back. Before we get to the exciting topic of AI, some of the audience might not have come across your work previously. Maybe you can give us a little bit about your background as an author and researcher. Tell us a little about the work you’ve been involved in and, most importantly, what you’ve been up to.

I’ll start with the background. Demos is a think tank in a tradition associated with the centre-left in the UK. It’s enduringly interested in powerful citizens, how power works, and how to make politics more porous and accountable to the people whose lives it shapes. That’s why I joined as a young researcher many years ago now, and I fell fairly quickly into that career.

I realised how important the digital world was going to be, both in the landscape of public policy-making but also as an amazing opportunity to research people. I know all of that sounds unbelievably obvious now, but back in 2012, 2011, or even 2010, it was a little bit less so. Me and a colleague founded CASM, the Centre for the Analysis of Social Media, there, and the rest is history, as they say.

CASM began to grow. We began to focus on research methods as well. We began to build technology to research social media, and that became CASM Technology, a standalone tech organisation that I’m one of the founders of. A few years after that, we’d been doing lots of analysis and measurement in very quantitative ways to try to understand the digital world. It’s quite necessary to do that because the digital environment is massive, but I’m more interested in understanding human stories.

I wrote a book about power. That was driven much less by numbers and much more by humans. Since then, I’ve had one foot in both worlds. I still do lots of more formal, data-driven projects measuring the digital world, but I also spend a lot of my time making podcasts, writing articles, and doing more journalistic work, trying to weave those two things together.

VOW 100 | AI Revolution
The Death of the Gods: The New Global Power Grab

I’m largely investigating the mysteries of the digital age and what it creates. That’s what I’m fascinated by, so that might be influence operations, disinformation, and many of the things that we spoke about last time. I’m constantly interested in the idea of power, how our lives are shaped, who does that shaping, and why. With that in mind, I’ve come back from a similar journey trying to understand how power is shifting in the age of AI. That’s going to be a podcast on Intelligence Squared.

Before we started, you mentioned it was going to be six episodes.

It’s going to be a narrative series. I’m your trusty guide, and I’ll take you on a journey beginning with very narrow questions to do with power: power over the tech itself, what the tech actually is, where it’s come from, and what it can do. Then we expand the lens out quickly. We look at the workforce and society. We look at how power is changing within culture, geopolitics, and relationships between the global North and South, and then we expand even further. It is scarcely credible to even think that we’re seriously considering questions around humanity and whether we are going to remain in the driving seat of our own destiny as a civilisation and as a species.

These are real risks, and to some, they might seem rather fantastical, but more and more people are starting to take them very seriously. Not least because we’ve globally witnessed the power of this so-called AI, especially since ChatGPT came out. That’s been, at least in my view, the cultural catalyst for thinking about where this might go and how quickly it’s developing. Maybe we can start with this idea of power. I’d like to hear how you define power through your lens. Also, what does that mean in the digital domain? What is digital power, and who has it? What is the cost to others of not having it, given the almost ubiquitous nature of technology and digital media?

Power is both a very important idea and quite a slippery one. Authors have always turned to it at moments of great revolutionary change, rightly and understandably, to try and ask the deepest and most important questions about what that change is and what it’s doing. In doing so, they’ve always come up with slightly different ideas of what power is and where it sits.

Marx would say that power is control over the commanding heights of the economy and of capital, whereas someone like Foucault was much more interested in peering into the hidden influences of language and the operations of power there. Go to Machiavelli, and power is simply politics: holding political office is power, or the only power worth having. They’ve all got different ideas. Even defining power is in itself an act of power.

Defining power is itself an act of power.

That is why I wanted to ask that question, because the definition matters and it also dictates the terms under which we discuss it.

The definition does matter, and being clear on what power is, is extremely important. For me, power is simply the ability to shape the world and the lives within it. That can take a vast array of different forms, and it doesn’t make sense to me to rule out or eliminate the different forms power can take. It can be everything from very coercive forms of power, force and threat, all the way through to economic power: incentives, inducements, money, and wealth.

AI Revolution: Power is simply the ability to shape the world and the lives within it.


The form of power you see most often in the world, but which is often the hardest to see, is the power of connection, persuasion, and ideas. Power ranges across all of those things, but ultimately, I always see it as the ability for you to reach into my life, me into yours, and all of us to shape the world that we live in. What that is and who has it is, I contend, changing radically right now, and it’s worth us, like Foucault, Marx, and all these others did in their own moments of great change, looking at it again now. It feels like our own society is being reshaped, as you say, in these early months of the post-GPT age.

I like that definition because it nests well with how I view power: the capacity to shape others’ behaviour. When we talk about AI, this is where it becomes important. To come back to the question I originally asked, who holds the levers of power when we talk about AI?

What a great question, and that is what those six whole episodes are trying to untangle. To shortcut some of that journey, it’s clear that power doesn’t exist in linear or unidirectional ways anymore. One of the things I was struck by was how everyone was warning about the risks, the concentrations, and hidden forms of pressure but also, in the same breath, talking about the amazing liberatory opportunities, and vice versa. Two things seem clear. One is that right now, large language models and these generative models especially, which is what everyone is talking and thinking about, like GPT and all the others, benefit from scale in a very important way.

The actual underlying approaches used to develop these models haven’t changed much in the last few decades. The thinking about how machine learning works and how artificial intelligence should be built has remained remarkably stable. What has changed is the levels of data and compute power which have been put into training these models. When that data and compute increased, performance was unleashed. One foundation model developer described it to me as looking inside the mind of an alien. He said, “We’re in an alchemical stage of understanding what these models can do.” Performance took off in an exponential way, which no one expected.

That’s the scary part.

It’s scary, but in more ways than one. What it firstly means is that lots of people will build foundation models and will have their own LLMs. However, it’s likely that a small number of extremely large companies will continue to break the frontiers. At the moment, they’re exclusively positioned in Silicon Valley, which has always been the place in the world that’s been best able to accumulate and marshal huge amounts of money, compute power, data, and talent. That’s an old story, and AI is a chapter in it. That’s one direction of power: the data, money, and control are still flowing to a small number of people and founders in one place in the world, and maybe that will continue. On the other hand, two other things are also true.

One, these tools are much more general than other iterations of machine learning. In previous generations, you got a model or some application that had been trained for a specific purpose: it plays chess or it predicts a certain thing. That training and that use case were set by the engineering monoculture that builds these things, which largely reflected the concerns or interests of that particular community. With GPT and these other models, they’re far more general because they’ve soaked up human culture at this point. They can do things and be put to uses that the people who built them have no idea about or couldn’t even imagine, and that we couldn’t imagine.

In one of the chapters, I speak to one of the visionaries, I suppose, of India’s use of AI. He was taking me through all these use cases that I couldn’t have imagined. They’re using these models to take loads of difficult, impenetrable government documents about access to agrarian subsidies and translate them into spoken answers to assist farmers. You’re like, “That’s amazing.”

As long as the models remain general in that way and remain freely accessible, we’re going to see both at the same time. We’re going to see them being put to uses which are genuinely liberating and empowering for people who don’t have much power and haven’t been part of other tech revolutions up to this point. I could also see a world where that’s happening at the same time as these super-centralised models remain centralised.

We're going to see AI being put to uses which are genuinely liberating and empowering to people who don't have much power and haven't really been part of other tech revolutions up to this point.

You mentioned, if I understand correctly, that the first iteration is narrow AI. It’s very specific, like a chess game or even something like Siri, which is rather limited in its application. It’s got a very clear left and right. It can’t generate knowledge or ideas, which is the leap of ChatGPT or the LLMs that we’re talking about now. They can combine concepts into new ideas and knowledge. We call that Artificial General Intelligence. Is that now within reach? Is that the point?

There’s lots of debate around the idea of AGI, or Artificial General Intelligence. The engineers prefer to talk about strong AI, which is a slightly different concept but along the same lines. Do we have something that is general? The answer is no, not yet. One example given was that a model might be able to drive a car, or it might be able to talk to you, but it can’t do all the things a human being can do. We are, however, moving in that direction.

GPT is not just a cultural phenomenon. That new generation of LLMs, or Large Language Models, is also a technical breakthrough. It’s not all hype. The AI researchers were telling me they never thought they would see these sorts of things happen in their lifetime. Its moving closer to becoming more general is one of those surprising directions of travel.

You can ask it to talk to you like a pirate that’s lost its keys and it will give it a good go, or you can ask it to write code and it’ll give that a good go. That means it can connect to all the millions of different ideas and use cases that people have around the world that the developers themselves could scarcely have imagined or will not have been able to imagine. That has serious liberatory potential, because it doesn’t matter that a small community in Silicon Valley can’t necessarily ever imagine what the thing is supposed to do. What’s important is that it can connect to the imaginations of millions of other people at the same time.

It creates value for them, but even the AI engineers were surprised by the result, or the product. If I understand it correctly, the difference between GPT-2, GPT-3, GPT-3.5, and GPT-4 is exponential, not linear. The computing power is enormously different, so we can expect GPT-5 to be exponentially better than GPT-4. If they don’t know how it does what it does, is that part of the concern?

For people like Musk and so on who are calling for a pause on AI development, is this part of the concern? We don’t know what’s happening inside the machine, especially since there is no one there. There’s no being as such, as much as I even catch myself thanking ChatGPT-4. I chuckle to myself when it responds, “You’re welcome,” even though I know there is no one there. I’m talking to a large language model that is using data, as you described, to give me something that’s human-like, but it’s not human.

Let’s begin with where we are right now, and then let’s talk about exponential growth. It’s important to say that these models are much better at seeming like intelligent humans than at being so. The best way of understanding them is as sophisticated auto-complete. They are unbelievably good at sounding human-like because they have swallowed up everything we have produced, and they’ve learned through raw statistical inference that when you say, “Thank you very much,” humans normally respond, “That’s fine. No problem. You’re welcome.” When you chat to it, it feels so human. It’s very easy for people to begin to think that there is some sentience surely lurking somewhere in that model, peering back at you. There isn’t, not yet.
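Carl’s “sophisticated auto-complete” description can be made concrete with a toy sketch. The snippet below is not how GPT actually works (real LLMs use neural networks over subword tokens, not word counts); it only illustrates the underlying statistical idea he describes, that the model learns which continuation most often follows what came before. The tiny corpus is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "auto-complete": count which word follows which in a tiny corpus,
# then predict the statistically most likely next word.
corpus = [
    "thank you very much",
    "thank you so much",
    "thank you very much indeed",
]

counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1  # tally: how often nxt follows prev

def predict_next(word):
    """Return the most frequent next word seen after `word`, or None."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

print(predict_next("thank"))  # "you" (follows "thank" in all 3 sentences)
print(predict_next("very"))   # "much"
```

An LLM does the same kind of inference at vastly greater scale and sophistication, which is why "Thank you very much" so reliably elicits "You're welcome," with no one there at all.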

We then get onto exponential growth. Exponentials work in very different ways to linear development, and the concern I hear from engineers is about exponential development. What it means is that it’s becoming very hard for us to be confident about even the mid-term future we might be in. Our horizons are drawing in, especially if this collides with a series of other megatrends like quantum computing.

It’s not the only huge technical thing happening; there’s a series of these things bouncing off each other. That means that what our world and these models, or the next generation of models, will look like in 6 months, 1 year, or 2 years is becoming very mysterious. It’s hard to know what things are going to be like in, say, five years’ time.

AI Revolution: It’s very hard to know what things are going to be like in five years’ time.


This is the other big power dynamic, and probably the most important power dynamic of all: the relationship between AI and governments and governance. It’s around the rules that we set for it. How should it be developed? How should it be used? How should it behave? The real concern is whether we can do the squishy human art of consensus forming and institution building and find ways of creating democratic and genuinely credible signals and rules to start shaping this. The concern is that, on the one hand, you’ve got an exponential growth curve in the power of the technology and therefore in its potential to do both great good and great harm. We’re never going to become exponentially faster at building consensus around AI governance.

That’s always been something that isn’t even linear; we go backwards and forwards. Politics doesn’t work like tech, and that’s the concern. That’s another project I’m doing: trying to build new ways of creating democratic signals for AI governance, in ways that are as broad-based and diversely sourced as possible, as fast as we can.

If I understand it correctly, that’s the Recursive Public project.

It’s an OpenAI-funded project. The idea is that there are ten different teams around the world, each pursuing their own idea about how to source democratic signals for AI governance, especially by using AI. Almost all the teams are using different forms of artificial intelligence themselves to try and create these signals. The idea is to create something as rapidly as we possibly can and then begin to practically use it, hopefully, to guide the development of AI models, whether that’s OpenAI’s or others’.

AI Revolution: Probably the most important power dynamic of all is the relationship between AI and governance.


Can you develop that a little bit more? If I understand correctly, the intent is to build consensus using AI to promote areas of common ground, as opposed to what we know of social media, which promotes the points of difference, in other words, growing our division and pushing us back into our echo chambers. If I understand this, it is the reverse of that.

I couldn’t put it better myself exactly.

I read your work. That’s why.

I am a super fan of the digital democrats in vTaiwan. They’re the most accomplished and successful digital democrats in the world. I went over there to do a short documentary for the BBC a couple of years ago. They’ve built a way of using mixed in-person and digital processes, including digital debate, to draft regulation and law. I don’t think anywhere else in the world has managed to do that.

For this project, I teamed up with Chatham House and the vTaiwan community, a large hacker community in Taiwan. The idea has been, and it is happening right now, to create a consensus-seeking online deliberation to set the agenda for AI governance that we will then pursue going forward. It’s happening on a platform called Polis. That’s what they use in Taiwan as well. Colin is one of the founders of Polis, and he’s part of the coalition of the willing that we’ve put together for the project. Polis is interesting exactly for the reason that you said. You log on to the platform, answer a few questions, and can add proposals of your own. So far, that sounds like social media, but what it does is draw up a conversation space.

People who agree are placed close together; people who disagree, further apart. As it draws that space up, you can see different tribes that might be quite polarised, and it then starts making more visible the proposals that get agreement spanning the important divisions within the debate. It’s a map of unseen consensus. In Taiwan, we’ve seen this happen time and time again. You begin with debates on Uber, online sales of alcohol, or e-scooters. They can begin with two entrenched groups disagreeing with each other on social media. All that disagreement is the stuff that you usually see; here, it submerges all of that and allows unseen consensus to float up instead. It reveals a reality, even on the topics on which we disagree with each other.

We tend to agree on quite a lot as well, usually more, but that’s all normally eclipsed. It’s excavating the hidden commonalities that human beings have. With Uber, for example, everyone was concerned with rider safety and driver safety. Once that was sorted out, regulation could work pretty well. That’s our hope here. What we’ve been trying to do is bring all the different tribes into the debates around AI governance.
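The mechanism Carl describes can be sketched in a few lines. The sketch below is a simplification, not Polis itself: the votes and the two "tribes" are invented for illustration, and real Polis derives the opinion groups from the vote matrix automatically (via dimensionality reduction and clustering) rather than taking them as given. What it shows is the core move: surface the statements that win agreement in every cluster, not just the ones that divide them.

```python
# Participants vote +1 (agree), -1 (disagree), or 0 (pass) on statements
# s0..s3. Data and cluster assignments are hypothetical.
votes = {
    "p1": [+1, +1, -1, +1],
    "p2": [+1, +1, -1, +1],
    "p3": [+1, -1, +1, +1],
    "p4": [+1, -1, +1, +1],
}
clusters = {"tribe_a": ["p1", "p2"], "tribe_b": ["p3", "p4"]}

def agreement_rate(cluster, statement):
    """Fraction of a cluster's members who voted +1 on a statement."""
    members = clusters[cluster]
    return sum(votes[p][statement] == +1 for p in members) / len(members)

def cross_cluster_consensus(threshold=0.6):
    """Statements whose agreement exceeds the threshold in EVERY cluster."""
    n_statements = len(next(iter(votes.values())))
    return [
        s for s in range(n_statements)
        if all(agreement_rate(c, s) >= threshold for c in clusters)
    ]

# Statements 1 and 2 split the tribes, but 0 and 3 span both of them:
print(cross_cluster_consensus())  # [0, 3]
```

Ranking statements this way is what submerges the visible disagreement and lets the unseen consensus float up.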

The boosters, the doomers, the tech bros, the AI governance think-tank wonks, and everyone else. We bring them all into this Polis environment. Hundreds of young people have been inducted into it too, and AI engineers as well. We’ve been running this debate for months now, trying to find consensus items. The next step, hopefully, is that we’ll start posing specific questions to that community that OpenAI themselves set.

I’ve logged onto Polis and joined the research since you shared it on LinkedIn. It’s incredible because it’s rather easy and intuitive; the answers are very quick: agree, disagree, or don’t know. If I understood it correctly, there are certainly more than 100 questions, because you keep going, and it charts you ever so gradually into a tribe, as you said, which, from a user-interface perspective, is quite interesting. It helps you identify, perhaps more clearly on a bigger chart, that you are part of a tribe. There are certain biases, values, life orientations, personal histories, etc. that are steering you towards a particular view on a particular issue.

That reflective piece, or self-reflective piece, is also, in my view at least, quite interesting to see. As I kept answering more and more questions, I could see how I moved ever so slightly towards the centre of a particular tribe. That’s a useful tool, and I’d be keen to see how that unfolds into building this consensus. In many ways, it sounds a little too good to be true, because what it seems to me we’re doing is flipping the idea of power we see in traditional social media.

There, the power is in division, anger, and getting eyeballs on a device, because we’re drawn to that. You talked about this last time; it’s part of our evolutionary by-product to be drawn to drama, danger, and anger. Here, the power is flipped very differently, to find consensus. I wondered, are we drawn as much to that? Have you got any sense of whether people are getting as engaged or as interested, or whether it’s a selective group of people who are already interested in this problem and trying to solve it? Can it handle hot or polarising topics like the US election, QAnon, the COVID vaccine, etc.? I know it’s early stage, but have you got any thoughts on that so far?

Firstly, people quite like being in a space that is less angry, less hostile, and where it feels like we’re more part of team humanity. People with nothing other than goodwill have joined across the deliberations we’ve run. There’s one in Taiwan happening at the same time, and that’s going to be an interesting comparison. We’ve had over a thousand people and tens of thousands of votes, on a pretty small project team trying to bootstrap this initial pilot as we go, because people genuinely care about the answers this creates and can see how it will affect their lives. To me, the big unanswered question is not finding consensus. I can see it in the debates; it’s not as polarised as you might believe.

People actually quite like being in a space which is less angry and less hostile and where it feels like we're more part of “Team Humanity”.

We can see clusters. We can see that some people are more optimistic or pessimistic, and different emphases are emerging, but I can already see tons of consensus coming out of this discussion. There’s one statement, that the goal of AI is to build a superintelligence, which is extremely divisive. Everything else is broadly consensual, and there are dozens of hyper-consensual statements everyone agrees with coming out. I’ve got no doubt that consensus can be identified. The big unanswered question is: can we connect this to power to make it meaningful, as in, it does stuff?

That’s returning to policy in government.

In government, the tech companies themselves, the decisions that developers might make, and the things that think tanks might raise. That’s what we’re trying to work hard on and scratch our heads about now: how we repay the time and investment that people put into this process. We’re asking people to continue to do so. Anyone tuning into this, get involved: Google “Recursive Public” and jump in.

How do we turn that into a meaningful outcome? An AI company deciding to do X rather than Y, a government deciding to look at A rather than B, or a rule being passed. That’s the bigger problem that no one has solved. How does AI governance work? Where does it sit? Institutionally, what does it look like? Is it ultimately to do with nation-state regulation and law? It might be something else.

China has taken one approach to AI regulation. Europe has taken another. America is taking a third. The UK is convening a summit because we don’t know what to do. These are questions bigger than my project. This is one of the global questions that we’re trying to solve at the moment, but it’s also an important part of this project.

If it’s a talking shop, then it won’t matter to people, and if it doesn’t matter to people, they won’t do it. If we can connect democratic signals into something where our decisions matter, then we’ll see this process continue to evolve. The idea of the recursive public, by the way, in case anyone is slightly confused about the name, is a public that continues to form, live, debate, and answer questions. We want this to be something that doesn’t go away. We want it to grow as a community of people from all walks of life, anyone who cares about AI governance and wants to be part of debates about it. It’s something that will then be consulted and used, so we’ll start asking live questions.

Say there are questions AI engineers are thinking about; we’ll try to use that public to respond to them. We’ll also use the agenda-setting statements to drive the questions and the thinking about what happens within the companies at the same time. That’s the idea. Whether it can solve QAnon, probably not. When disagreement becomes the point or discord becomes the identity that people have, that’s harder; people need to enter into a consensus-seeking debate wanting to find consensus.

You need to want to see the other person, as opposed to hating them and embracing your own narrative. As you say, identity is important.

Zero-sum contests are not something we’ll be able to navigate through just by trying to get people to respect each other.


Important Links