My guest today is Carl Miller who is the co-founder of the Centre for the Analysis of Social Media at Demos. For the past nine years, he’s been building new machine learning-driven approaches to robustly study online life and has written over 20 major studies spanning online electoral interference, radicalisation, digital politics, conspiracy theories, cyber-crime, and internet governance.
His debut book, The Death of the Gods: The New Global Power Grab, was published in 2018 and won the 2019 Transmission Prize. He presents programmes for the BBC’s flagship technology show, Click, and has written for Wired, New Scientist, the Sunday Times, the Telegraph and the Guardian.
He joined me to discuss Information Warfare, cyber-attacks, weaponisation of social media and other challenges, and to do so with a particular focus on the ongoing invasion of Ukraine. Some of the topics we discussed are:
- Carl’s entry into this field
- Terrorists as conspiracy theorists
- Profiling a conspiracy theorist
- Understanding data
- On the ‘Death of Gods’
- The power of open-source intelligence
- Emergence of information warfare
- Frictionless engagement and the Attention Economy
- TikTok and censoring of data
- Has Ukraine really won the Information War against Russia?
- Suspicious Pro-Russian influence operations in BRICS countries
- Background and methodology of the BRICS research
- Assessment of pro-Russian campaigns in BRICS countries
- Cost of attacking vs defending against information operations
- Risk of hubris
You can find out more about Carl and his work here, and you can see his recent research on pro-Russian influence operations in BRICS countries here.
—
Listen to the podcast here
Carl Miller – On Information Warfare, Social Media And Pro-Russian Influence Campaigns In BRICS
My guest is Carl Miller, who is the co-founder of the Centre for the Analysis of Social Media at Demos, the first UK think-tank institute dedicated to studying the digital world. For the past nine years, he’s been its research director, building new machine learning-driven approaches to robustly study online life, and has written over twenty major studies spanning online electoral interference, radicalisation, digital politics, conspiracy theories, cybercrime, and internet governance. His debut book, The Death of the Gods: The New Global Power Grab, was published in 2018 and won the 2019 Transmission Prize. He presents programmes for the BBC’s flagship technology show, Click, and has written for Wired, New Scientist, the Sunday Times, the Telegraph, and the Guardian.
He’s a visiting research fellow at King’s College London, a senior fellow at the Institute for Strategic Dialogue, a fellow at the Global Initiative Against Transnational Organised Crime, an associate of the Imperial War Museum, and a member of the Society Board of the British Computer Society. He joins me to discuss information warfare, cyber-attacks, the weaponisation of social media, and other challenges, as well as opportunities brought about by the internet, and to do so with a particular focus on what we are seeing play out as a result of the ongoing invasion of Ukraine. Carl, thank you very much for joining me on the show.
Thanks so much for having me.
Before we get into the murky waters of the ongoing information war around Ukraine, maybe a little bit about your rather eclectic career. How did you end up doing the work that you do?
Many things are not planned out, but one thing leads to another. My background is as a think-tanker. I have worked for a think tank called Demos, which is one of the standard political think tanks based in Westminster, London. I joined it coming out of an internship at the Foreign Office looking at counterterrorism and counter-radicalisation, and joined the counter-violence team at Demos. We wrote a pamphlet back in 2010 saying that all terrorists and violent extremists are also conspiracy theorists. It was a link that hadn’t been made much at that point.
We were very interested in it, and in trying to explore how being conspiracy theorists might create all kinds of dynamics, both personal and group, which would lead you down pathways of radicalisation. Unfortunately, the 9/11 Truth Movement in the UK thought I was saying all conspiracy theorists are terrorists. It began two years of angry debate with them.
Over those two years, I realised that this was a movement that was almost entirely living online. This was before Twitter, but it was on David Icke’s and Infowars’ forums. They were organising unbelievably effectively, and they were extremely galvanised and very energetic as a movement. That began, I suppose, for me and my colleagues, this realisation of how important the internet was in changing the nature of social movements and groups.
Also, as a researcher, I love data. We realised that not only was social media a huge agent of social change, changing all of us and the environment around us, but it was also producing these amazing and unbelievable new data sets that we could study all of that with. I thought that making sense of all this data was the research challenge of a generation, one that my generation was responsible for and tasked with. One thing led to another, and here we are decades later.
Social media is a huge agent of social change. It is changing people and the environment around them.

A number of TEDx Talks, a book, probably another one in the making, and you are quite prominent in this social media space yourself. There were a couple of things that you brought up that were interesting. The first one is that terrorists are conspiracy theorists. It makes absolute sense, but I can’t say I have ever thought about it that way. Why would you put those two together? What is it that makes a terrorist a conspiracy theorist? We have come to know terrorism as political violence, arguably the recourse of the oppressed who don’t have another means to prosecute their political aims. Why do you link those two?
It was the strange association of both of those things in the same people that got me interested in the idea. That’s exactly what the pamphlet tries to explain. It was co-written with a colleague of mine named Jamie Bartlett. What we ended up with was the idea that conspiracy theories are a radicalising multiplier. They do all kinds of things which make it more likely for someone to become a violent extremist. They spur action in some ways. If you believe that you need to wake up the benighted sheeple from the slumber imposed by the mainstream media and corporate-political elites, sometimes a spectacular act of resistance is what’s required.
They spur people into action. They also create a bunch of group dynamics, which make it more likely for a group to be hived off from any mainstream or countervailing information and go down a particular rabbit hole. It’s a lot easier for a leader of a group to squash dissent within the group if every member of the group believes that there might be infiltration by a CIA agent or an MI6 agent. If you believe that there’s a conspiracy or a dark plot against you, which in some cases, in violent extremist groups, there are plots to infiltrate them, it makes you far more inclined to adopt all kinds of operational secrecies and hermetic seals.
To go back to the confusion that began my foray into the internet: the vast majority of conspiracy theorists are not violent extremists. They might be in the 9/11 Truth Movement or believe in QAnon, but that doesn’t necessarily mean they believe that indiscriminate violence is either legitimate or necessary to achieve a political end. Certainly, though, being a conspiracy theorist helps if you are going down that pathway. That was the conclusion.
The behaviours of conspiracy theorists will motivate you further, because once you start self-policing within your chosen social group, you start embodying the norms of that group. It becomes part of your identity. This is why debunking, or talking to somebody who has gone down the rabbit hole of a particular conspiracy theory, becomes so hard: they become so wedded to the idea, and to the idea of their own autonomy: “You can’t fool me.” There’s a cognitive leap that needs to be made for somebody who has gone down that rabbit hole to step outside of the identity they have chosen to embrace. Have you found that?
One of the most infuriating things for anyone who has spent any time trying to debunk or argue with conspiracy theorists is simply the self-sealing nature of all those ideas. When there is a government-media alliance, as so many conspiracy theorists believe there is, to refute and cover up the real truth of 9/11, the role of the Illuminati in modern affairs, or anything like that, then of course that alliance is going to produce evidence that undermines your theory. With the idea of controlled experts and controlled journalists, it is almost the only kind of idea where evidence to the contrary is taken as evidence of the actual existence of the conspiracy itself.
It’s infuriating. I would also say one thing I did learn from spending years talking to the 9/11 Truth Movement is that a lot of this isn’t acting on the level of rational thought. This is a point I will make again when it comes to information warfare, perhaps. This is as much to do with identity, belonging, kinship, and friendship as it is to do with evidence and reason. I remember being involved in a big debate in London with a celebrity conspiracy theorist. People would turn up to cheer on their side, and they were wearing T-shirts with “truth seekers” on the front. They knew each other. They were friends with each other. They’d go down to the pub afterwards and have a pint.
They were selling T-shirts in the back of the room, tapes, and memorabilia. It was like going to a concert where you could buy the merch of your favourite band. There are lots of social and emotional ties of people to that. What better way of finding meaning in the world than truly believing that you are part of a tiny chosen few who can see the true nature of world affairs? That is a huge way of carving out a role for yourself in a world that often gives far too little meaning to people.

Is there a particular profile of a person that you have found that is more susceptible to conspiratorial thinking than others?
I wrote quite an angry and, in retrospect, probably fairly rude article trying to compile a bestiary of the 9/11 Movement, where I tried to divide it into different groups. Looking back at 20 or 22-year-old Carl, that was slightly unnecessary, but there are different tribes that join up. This is all generalising, but there were twenty-something postmodernist students who were questioning truth in general. There are people, both genuinely hard left and hard right, who go into conspiracy theories already with the belief that the state is irrecoverably corrupt and compromised.
There’s an older type of conspiracy theorist as well. I don’t want to sound rude, but he has an enormous amount of time on his hands and has become an expert in a particular niche within the field, in the same way that one would become an expert in a niche in any area of life. For instance, a former physics teacher would write about the melting point of steel in the peer-reviewed journal that the 9/11 Truth Movement would maintain. That would therefore have something of a currency and a following within that community. A lot of people, as I said, are very angry with the state, the establishment, and the mainstream media.
Sometimes that comes from a completely personal experience of being very wrongly treated by a journalist, leading them to believe that journalism is corrupt, which leads them to ask, “What else are they saying that are lies?” and so on. A final group would be the utterly self-seeking and profit-minded, whom we cannot deny; economic incentives are always very powerful ones. There would always be a few voices in these debates who held, or at least claimed to have held, some position in the CIA or FBI. They would immediately become celebrities in this world, immediately monetise, and be the people selling the T-shirts at the back of the room.
It’s nothing new; conspiracy theories have always been around. But to come back to the second point that I liked in your opening answer: data and the value of data. You mentioned that we are generalising, and that’s the beauty of data. It gives us sufficient data points that we can draw patterns from it and make some generalisations about broader macro-level behaviours.
The quieter fascination of the last years for me has been around the methods and technologies to make sense of all of that. My colleagues and I formed CASM, the Centre for the Analysis of Social Media. Both CASM Technology, the company, and the think tank build new ways of leveraging all that data and turning it into something we can understand amidst all the chaos, complexity, and internal contradiction that social media data sets often hold for us.
That leads me to the title of your book, The Death of the Gods: The New Global Power Grab, which is a well-known international bestseller. Maybe we can zero in on what the book is about and what you mean by the death of the gods. This big-data work speaks well to what you try to cover in the book, doesn’t it?

It does. The book took a year to write because, at that point, I’d spent maybe 5 or 6 years working on terrorism over here, conspiracy theories over there, and early disinformation operations over there. It was a way of trying to construct a bigger picture across the different empirical islands of our work, to provide a more general explanation for what’s going on.
It was a book that was wrapping together data, analytics, machine learning, and all the methodologies that we were developing at CASM. I suppose it’s the beginning of interest that I had in journalistic storytelling as well as how to wrap the stories of individual people and groups in amongst all of that data, whether I could fuse those two things.
There was an idea at the heart of it, which was power. I thought power was an idea like freedom, liberty, equality, and justice: one of these huge unit ideas which is so important. It’s very emotional. It’s an idea that writers have always turned to throughout history when their societies are locked in a period of convulsive shift. Machiavelli did it in the Florentine Renaissance. Marx did it as he was watching his society lurch to adjust to the industrial revolution. Foucault did it to make sense of the liberations of the 1960s.
All these writers, whom I have admired so much throughout my education, have used power to ask this question about how our lives are shaped. Power seems to me to be quite an overlooked idea now, and it seems to be shifting in quite important ways in lots of different parts of life. The book was an endeavour to go out on the road and bring myself face-to-face with power and its shifts, whether that meant face-to-face with it in terms of the data or seeing it happen.
I like the title, The Death of the Gods. What did you find through the book?
The Gods were all those old holders of power, be it large hierarchical companies, states, mainstream media outlets, or in many ways, conventional military forces. I imagine we will come back to that particular claim in a little while. The starting point was that a lot of the parts of life that we regard as conventionally powerful have become a lot less powerful than they used to be, or than we suppose they are now.
The question which my editor had given me for the book made this clear: is this shift in power liberating us or imprisoning us? Are we awakening in this digital age with more power at each of our fingertips than ever before, or are our lives being shaped by these distant, sometimes murky forces that we don’t understand?
The answer was that both are happening at the same time. There are all these tremendous liberations, and they are in the book. It came out at a time when there was a techlash happening, and there was book after book criticising the overweening power of Facebook, which my book does as well. It’s hard to avoid the power of the tech giants, but it wasn’t simply that. There were all these amazing liberations happening at the same time across politics, journalism, and business.
Each one of them was causing all these people who were previously passed over by power to find new routes to power. At the same time, there were also new forces of control, for sure. No one needs to get the book now because I’m giving them the final page: power had gone wild. That was the argument. We have always tried to control power and cage it.
The bars of that cage are things like the law, professional standards, and norms. We try to control it. We manage the way in which you reach into my life, and I reach into yours. We say there are certain ways where that’s legitimate and certain ways where it isn’t. All the way from embarrassment and social censure down to imprisonment, these are the penalties we levy to try and steer people away from illegitimate and illicit ways of reaching into each other’s lives.
Whether you look at the remarkable explosion of cybercrime and the sheer difficulty that law enforcement agencies have with bringing cyber criminals to any form of justice now, or you look at the fact that information warfare is being fought outside of the rules of war and any published rules of engagement. Whether you look at large companies that simply have new forms of monopoly we don’t recognise yet, or you look at electioneering online or entirely new forms of political parties that are setting up. Everywhere you look, power has flowed in and between those bars. It’s not being controlled by them anymore.
Information warfare is being fought outside the rules of war. Large companies now have new forms of monopolies and new forms of political parties are setting up everywhere you look.

I like the fact that you are suggesting that it’s flowing in between those bars; it’s very easy for us to miss that. I interviewed Peter Singer, and we talked about the democratisation of war; that’s something we are seeing particularly playing out in Ukraine. Also, there’s this feedback loop. If we look at what we talked about, the pigeonholes and stovepiping that we find ourselves in, how does this interplay back and forth?
I also get sucked into it. While I might feel powerful for having my megaphone, through that megaphone I’m also siding with a particular view or bias. Through that alone, through the likes, shares, and the inner network that I have, I’m digging a deeper hole. I’m going down deeper into the little rabbit hole of my exclusive group of people, which therefore takes power away from me in a way, or makes me more susceptible to manipulation, without getting conspiratorial about it, by those who are steering narratives and conversations, particularly the macro ones.
Social media is a great example of how liberation and a new form of control come wrapped hand in hand. We all have tremendous new capacities at our fingertips with social media, not only to capture attention and do something in the world, but also to learn about it. One of the stories in the book is about a very famous individual called Eliot Higgins, who runs Bellingcat. At the time it wasn’t quite the global brand that they are now. For those of you who haven’t heard of it, they are a group of open-source intelligence volunteers and investigators.
Eliot had begun his career unemployed in Leicester, arguing in the comment threads of the Guardian, but realising that he could learn about the Iraq and Libyan wars as well as any journalist in the field by looking at all of the battlefield imagery that was emerging. At that point, it was completely unexploited by journalists. He started to have mini scoops, and then he got a big scoop when he spotted Croatian weaponry going to Libyan rebels. That landed him on the front page of the New York Times. Now he’s a talisman of the open-source intelligence community and the CEO of Bellingcat.
There’s a great example of liberation. Honestly, it took an enormous amount of talent and a lot of time to transform Bellingcat and Eliot into the global force for good that they are now. On the other hand, there’s a lot of murky control from the platforms that are curating the content and deciding how the pipes work, but also murky control from lots of actors that are attacking the platforms.
Another murkiness is in our not knowing where the dynamic sits between the platforms trying to defend themselves and the range of actors around the world that are trying to manipulate and subvert them. That conflict fascinates me. It’s been a secret war, if war is quite the right metaphorical word to use, but a secret struggle that’s been happening back and forth between social media platforms, military states, and other actors for years. It’s very dynamic and constantly changing.
Information war is a secret struggle that has been happening back and forth between social media platforms and military states for many years now.

What do you mean? Explore that a little, because I can’t say I’m intimately familiar with what you are talking about.
Should we begin with information warfare and we will build up to that conflict?
Yeah. Sounds good.
Let’s begin with this shift that began to happen to operational doctrines around the world in around 2005; that’s where I place it, and others might point to slightly different times. One after the other, militaries around the world, liberal democratic and autocratic alike, all began the same line of thinking. I’m quoting most directly here from the UK doctrine: we live in an information age, so how do we, as a military, stay relevant in an age where information is increasingly central to everything?
Each of them rewrote their operational doctrines; the UK’s Integrated Action is one. All of them noticed that the end of any application of controlled violence is ultimately a behavioural change of some kind, either stopping people from doing something or making them do something. They all realised that the answer lay in an enormously important conceptual flip in what information was. Information had long been a tool of war; everyone recognised that. We all know of the inflatable tanks before Normandy, and of crusader kings trying to manipulate the enemy’s picture of their forces.
It’s always been used to deceive, confuse, and convince, but all these doctrines reconceived information not as a tool of war but as a theatre of war, a space that war happens within. They were quite explicit about this in many cases: air, sea, land, space, cyber, and information. It changed the language that militaries began to use to talk about this.
They began to talk about it as a space: information manoeuvre, memetic weaponry, information dominance operations, and things like that. That’s where we get to this question of the conflict between militaries and social media platforms. Militaries, liberal democratic and autocratic alike, began to build the capability to manoeuvre in the information space.
Perhaps they realised that if information is a space, then attention is the most important territory within that space: capturing and holding attention, rather than the information equivalent of capturing and holding a hill, ravine, or crossing. They began to build capability, some of which we see and much of which we don’t, to manoeuvre across the information spaces which were most important, and social media was part of that. We know more now about what’s happening on the autocratic side of things; that’s where the story becomes a bit less general. Autocratic militaries began to generate ways of manipulating information flows on social media.
That would involve mass account set-ups, understanding how you seed memes in certain ways to get them to spread, or how you would game the algorithmic curation happening on social media platforms to make things more visible or perhaps even to suppress information. Then there are the slightly softer and squishier questions around glueing together cognitive psychology, dopamine studies, and the other human-centred disciplines, and understanding how to shape messaging in order to achieve the effects that you want. The social media platforms realised that there were all these accounts being created, and I’m going back years now, which were inauthentic in their minds: they didn’t say who they were, and they were possibly very great in number.
Initially, they treated this as a form of spam. It’s like, “We have always had harmless spam.” It might harm the user experience; we don’t want people getting loads of porn bots because then they won’t use Twitter anymore. That was the thinking in the early days, and not just at Twitter; this was more general. Since then, there’s been the realisation that a military doing this is a qualitatively or categorically different threat to a spam agency, and the platforms have been setting up teams to defend themselves against it. Facebook especially has been recruiting lots of people from military, security, and intelligence backgrounds onto their platform integrity and safety teams in order to set up a detection and enforcement response: find it happening, learn more about it if you can, and then respond.
It’s an interesting dilemma when you are talking about organisations like Facebook, Twitter, and so on. You mentioned the attention economy: it’s about maintaining the eyeball on the product. That’s how their algorithms make money for them, keeping eyes on so they can seed advertising over time and so on. It’s very difficult to combat that, regardless of how many people you have, when your business model is purely designed for eyes on. Combating misinformation or disinformation becomes very difficult when your entire platform depends on a continuous flow of information to the end user, and the end user, purely by their likes and shares, will be driven by those very same algorithms toward a particular social group identity subset.
If I remember correctly, there were studies showing that 70% of the videos viewed on YouTube come through the right-hand side recommended videos, which is insane. Only 30% of the videos watched are what people go on there to watch; the rest is what’s being fed through the algorithms based on what they have previously watched. The research also found that the recommendations become more extreme as you keep watching. I don’t know how you can combat that when the entire ecosystem and economic business model is built on keeping you hooked on the platform. This is nothing new, but I wonder if you have any thoughts on that particular dilemma.
The dilemma, sadly, is broader than simply the optimised engagement algorithms. Information curation, as you very correctly say, is optimised to keep people on the platform, but the entire platform is itself optimised for growth in a more general sense. All of these platforms have been designed to be about as frictionless as possible.
Friction is an important idea in the world of big tech. We don’t talk about it enough outside, but they are obsessed with it inside. The idea is that making anything slightly easier or harder has vast consequences as to whether millions of people click on something or something else. You make it a slightly different colour, bigger, more central, or all of that. Much of this isn’t to do with making things possible or impossible. It’s making things easier or more difficult.
Leaving Facebook, or getting your data out of it, is anything but frictionless. I remember when I was trying to get my data out of some companies, the digital platforms would require me to physically mail them a letter in order to get it. For sure, that’s all friction at work. On the other side, account sign-up and sharing are completely frictionless, because they want to maximise both of those things. When it’s so easy to set up accounts, and there’s so little information, delay, or friction put in the way of doing that, that also makes the platform more exploitable.

Likewise, in terms of sharing things, there are all kinds of frictions that could be put in place: dampening down the virality of claims about the war which we don’t know are right or not, causing people to delay before they can share something, or capping the amount of sharing activity that each account can do. There are all kinds of friction that you might put in place, which in many ways aren’t put in place. My reading is that these conversations take the form of a struggle happening on the inside of each of the tech giants. You have voices, especially on the policy, legal, safety, and security side of the companies, that would love more friction.
It is a lot easier to catch Russian disinformation operations if you ask people for more identifying information about themselves, for instance. It’s a lot easier if you shut off account creation on, say, a particular IP range that might be compromised. There are all kinds of technical things you could do. Sharing is notionally capped, but at a ridiculously high level; it doesn’t seem to have much meaningful effect at all on the campaigns that we see. On the other side of the fence, you have got growth and revenue. Their incentives are obviously to get these platforms as big as possible and as profitable as possible. They are never going to want anything other than frictionless experiences, because friction stands in the way of all of that.
That’s very interesting. I was listening to Tristan Harris’s podcast, Your Undivided Attention. I forget the names of the guests he had on, but there were two of them talking about China, and about the Chinese WeChat, which has quite a lot of restrictions like the ones you mentioned there. By the sounds of it, the user experience is quite a bit better overall. The virality of particular information is nowhere near as great, and the penetration into the network is far more tightly controlled. At least according to these two guests Tristan Harris had on, the user experience is a lot better and, in many ways, even safer, which is strange to hear given that we are talking about what we would describe as an autocratic state like China.
That is an interesting point. It reflects another as well, which is the Chinese state has been much more willing to involve itself in questions of platform engineering. In the Western conversations around platform governance, a lot of it, far too much of the conversation, in my view, has been concentrated on moderation and takedown. Finding and removing content that is illegal, hate speech, and so on, is fine, but it hasn’t concerned itself with everything that we are talking about here.
It’s not structural and systemic; it’s only very surface level. There’s much more that states could do, for instance, to force Facebook, Twitter, YouTube, or Reddit into making themselves harder targets for information operations, if they required these platforms to put in place certain systemic, structural, algorithmic, and curatorial changes to make such operations more difficult. While we are on the topic of Chinese social media platforms, TikTok is the one that is used by tons of Western audiences. Right now, as researchers, we cannot legitimately get any scale data from TikTok. We have no idea how many information operations are happening on TikTok. That is terrifying to me.
I get all kinds of anecdotal accounts of very memetic, sometimes quite shocking, extremely graphic, pro-invasion messaging happening on TikTok all the time. We know that the Russian state has recruited TikTok influencers, so they know how important this battleground is. There is no real way in which civil society and academic researchers can do anything more than anecdotal, qualitative research. That, to me, is complete madness. In 2022, we are still in a state where civil society is blind to the possible scale of disinformation on a platform that is now central to our media diet.
The size of TikTok is incredible. That’s maybe an interesting pivot. One other question, if you can clarify it for me, before we pivot onto Ukraine and Russia; that’s certainly an area I wanted to explore. I remember also hearing or reading somewhere that it was in 2009 and 2012 that Facebook and Twitter introduced the like and the share buttons. I might be off on the years, and someone is probably going to fact-check me, but it was around then, anyway. That changed the game. What do you think about that? If that is the case, why did that change the game so much?
It’s hard to say, to be honest, as an external researcher exactly what the inputs to these are. Presumably, what you mean there is that it changes the game because it changes the way that information flows.
The user engagement also. You are no longer passive. You are now an active participant as a node within the network.
It’s super important. My sense is that the algorithms flow back and forth and are constantly tweaked all the time. The one big piece of algorithmic curation from around 2012 is the one you already mentioned, around YouTube’s recommended videos. It’s true to say that over that time, people were being led into more and more extreme content. If you look at the emergence of extremist political parties from online venues, they all came out of YouTube. The liberal conception that we have is that good ideas tend to win out over bad ideas in an even contest. It wasn’t an even contest.

Enormous amounts of exposure were given to a series of niche ideas, which caused a traumatic series of cultural shifts we are still living with now. YouTube quite drastically changed the way that it recommended videos in the subsequent years, but by that point the damage was done, and a lot of these channels had already garnered very large subscription bases. We are living with those consequences now. It is one of the saddest episodes in this whole history, showing how important the cultural consequences of something as benign and simple as a series of videos to watch next can be.
You talked about power before. That is incredible power when you are aware of those networks and the reach of those networks. Seeding ideas therefore becomes quite easy. That may be an interesting place to pivot to Ukraine and what we are seeing going on right now, because we are seeing a new, changed application of power, but also of politics, and we are seeing that play out in Ukraine. As I mentioned before, I had Peter Singer on the show, and we explored how Ukraine won the information war.
What we didn’t discuss is that it has done so largely only in the West. That’s what I’m exposed to, and that’s what I’m seeing in my Twitter feed and so on. Your research shows something very different when we look at the BRICS countries, the acronym for the five major emerging economies: Brazil, Russia, India, China, and South Africa. How is the war in Ukraine perceived by the BRICS countries?
I have always been somewhat uncomfortable with the idea that Ukraine has won the information war. With all due respect, Peter Singer has a fairly complacent idea that it has, and I’m not necessarily sure it has. His wasn’t a single voice; there were tons of media talking about the success of Kyiv’s information warfare and how Putin had left his propagandists in the lurch. They didn’t know what the line on the Donbas was going to be, which is another consequence of secrecy, war planning, and so on. It felt to me a lot like the 2016 presidential election and Brexit, and like other times when we have been in information spaces that we believe are far more representative than they actually are. We are surrounded by pro-Ukraine solidarity. There is not a single pro-Russian voice in my timeline on any social media.
That is so interesting. I don’t know any Trump voters who voted for Trump.
That’s exactly what it was, and no one could believe it. No one could believe that the UK voted for Brexit. There was not a single Brexit voice in my timeline. I’m a sensitive think tanker who lives in London; of course there wasn’t, and everyone that I know was utterly for Remain. It felt slightly like that, but it was a hunch. That’s where the actual data and analysis come in. The idea was essentially to focus on the most obvious Russian influence operation to date.
Our research wasn’t designed to provide more evidence that it was a Russian influence operation. I was taking that as a given, to be honest, going into the research. This had already been researched by Marc Owen Jones, the Atlantic Council’s DFRLab, and others, who had noticed these two hashtags, #IStandWithPutin and #IStandWithRussia. Don’t use them and make them trend again. Both started trending on March 2nd and 3rd, 2022. I’d read a piece in The Times of London where they’d written up the fact that one was trending in India as an example of anti-colonial sentiment in India.
As in born in India? Is that what they were trying to say, that it was domestically driven?
It was saying that #IStandWithPutin was trending in India, and India is quite anti-colonial, so maybe India isn’t behind Ukraine at all. That was the nature of the article. Research emerged on that day and the day after which had noticed suspicious patterns in the behaviour of the accounts using those hashtags. There are lots of things that researchers look at to try and identify an influence operation, normally a series of interlocking patterns that are very different from normal, organic versions of those patterns. Together, those patterns are the nearest we can get to attribution. Loads of accounts were set up on the day of the invasion, and the vast majority of the others soon after. They had extremely high retweet-to-tweet ratios.
Marc Owen Jones had noticed that some of the profile pictures of the accounts sharing the hashtags had been used in romance scams beforehand. There were extremely dense retweet networks, meaning that the same messages were being retweeted by all of these accounts. Organic social media is noisy, contradictory, and very human.
It doesn’t have clear patterning like that. We stepped in to do research that presumed there was a Russian influence operation, probably mixing some automation, human activity, compromised accounts, and accounts that might be being paid to engage. Typically, these campaigns have all of these things overlapping, but we wanted to try and learn more about the accounts that were pumping these hashtags.
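As an aside for technically minded readers, the interlocking, network-level patterns described here, creation dates clustered on the invasion day and very high retweet-to-tweet ratios, can be caricatured as simple aggregate checks. Everything below, the account data, field names, and thresholds, is invented for illustration; it is not the study’s actual method.

```python
from datetime import date

# Hypothetical account metadata mirroring the patterns described in the
# interview. All values and cutoffs here are invented.
accounts = [
    {"created": date(2022, 2, 24), "retweets": 480, "tweets": 500},
    {"created": date(2022, 2, 24), "retweets": 950, "tweets": 1000},
    {"created": date(2019, 6, 1),  "retweets": 120, "tweets": 400},
]

INVASION_DAY = date(2022, 2, 24)

def network_flags(accounts, ratio_cutoff=0.9, share_cutoff=0.5):
    """Aggregate, network-level signals of coordination (never per-account proof)."""
    n = len(accounts)
    # Fraction of accounts created on the invasion day itself.
    same_day = sum(a["created"] == INVASION_DAY for a in accounts) / n
    # Fraction of accounts whose output is almost entirely retweets.
    heavy_rt = sum(a["retweets"] / a["tweets"] >= ratio_cutoff for a in accounts) / n
    return {
        "creation_date_clustering": same_day >= share_cutoff,
        "retweet_heavy": heavy_rt >= share_cutoff,
    }

flags = network_flags(accounts)
print(flags)  # both flags fire for this toy network
```

The point of the sketch is the level of analysis: each check says something about the network as a whole, echoing Carl’s later caution that suspicious patterns are demonstrable at the network level, not for any individual account.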
We did lots of data wrangling, collected 20-odd million tweets and hundreds of thousands of Twitter accounts, and then winnowed it down to 10,000 accounts that had shared the hashtags more than five times. Not the whole network, for sure, but, we thought, a pretty high-signal part of it: these accounts were pumping out the hashtags. We then applied a new technique that we haven’t used very much, pioneered by my colleagues Chris and David, called Semantic Profiling or Semantic Fingerprinting.
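The winnowing step is conceptually simple: count how often each account used the hashtags and keep those above a threshold. A toy sketch, with invented account names standing in for the real data:

```python
from collections import Counter

# Toy stand-in for the collected data: (account_id, hashtag) pairs.
# The real study collected 20-odd million tweets; these names are invented.
tweet_hashtags = (
    [("acct_a", "#IStandWithPutin")] * 7
    + [("acct_b", "#IStandWithRussia")] * 2
    + [("acct_c", "#IStandWithPutin")] * 5
)

THRESHOLD = 5  # keep accounts that shared the hashtags at least this often

counts = Counter(account for account, _ in tweet_hashtags)
high_signal = {account for account, n in counts.items() if n >= THRESHOLD}

print(sorted(high_signal))  # ['acct_a', 'acct_c']
```

The threshold trades recall for precision: a low-volume participant is dropped, but the accounts that survive are the ones doing the pumping.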
It’s an application of a new part of natural language processing. I won’t get into too much technical detail, but it uses one of the new deep learning models that Google has developed. It allows you to build a much more general picture of someone’s language use. Normally, in this world of research, we might build an algorithm which asks, “Are you talking about politics? Yes or no? Are you talking about sports? Yes or no?” We shape these algorithms, and they can then make that decision at scale, which is why we need them; we were dealing here with 1.6 million tweets, far too much for us to manually read.
These algorithms are the only way of dealing with it. These deep learning models, or transformer-based models, build very general encodings. We fed them the last 200 tweets that each of these 10,000 accounts had sent, not just the pro-Russian messaging but everything else they were doing as well, and encoded each. It works on semantic similarity: it measures the similarity of every message with every other message that we have collected. It’s a mathematically complex process.
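A minimal sketch of that encoding step might look like the following. The interview does not name the exact model, so `embed` below is a placeholder that returns random vectors; in the real pipeline it would be a transformer sentence encoder producing roughly 768-dimensional embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(texts, dim=768):
    """Placeholder for a transformer sentence encoder mapping each tweet to a
    ~768-dimensional vector. Random numbers stand in for a real model."""
    return rng.normal(size=(len(texts), dim))

def fingerprint(tweets):
    """An account's semantic fingerprint: the mean of its tweet embeddings,
    unit-normalised so a dot product gives cosine similarity."""
    vecs = embed(tweets)
    v = vecs.mean(axis=0)
    return v / np.linalg.norm(v)

# In the study, each account contributed its last 200 tweets; two here.
fp_a = fingerprint(["tweet one", "tweet two"])
fp_b = fingerprint(["tweet three", "tweet four"])
similarity = float(fp_a @ fp_b)  # cosine similarity: near 1.0 = similar language
print(round(similarity, 3))
```

With a real encoder, accounts tweeting in the same language about the same topics would score high against each other, which is exactly the property the clustering that follows relies on.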
On an individual profile or of all those profiles that you have got?
Firstly, the 200 messages for each profile, then each profile, and in doing so, all the profiles against each other.
That’s incredible.
We bring back an encoding for each user in something like 768-dimensional space; it’s a very high-dimensionality encoding. We turn that into a vector position on a map. What that shows us is that accounts which have used similar language over those 200 tweets will be close to each other, and accounts that have used different language will be further away. When we get the network back, we then run another algorithm on top to divide it into communities: “That’s a community,” and so on.
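The community-splitting step can be approximated crudely: connect accounts whose fingerprints are similar enough, then take the connected components. The actual research used a proper community-detection algorithm (the high "modularity scores" mentioned below suggest a modularity-based one); the similarities and threshold here are invented.

```python
# Hypothetical pairwise cosine similarities between account fingerprints.
similarities = {
    ("a", "b"): 0.91, ("a", "c"): 0.12, ("b", "c"): 0.08, ("c", "d"): 0.87,
}
accounts = ["a", "b", "c", "d"]
parent = {acc: acc for acc in accounts}  # union-find forest

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

SIM_THRESHOLD = 0.8  # invented cutoff: "similar enough to group together"
for (x, y), sim in similarities.items():
    if sim >= SIM_THRESHOLD:
        union(x, y)

# Group accounts by their component root to get the communities.
communities = {}
for acc in accounts:
    communities.setdefault(find(acc), set()).add(acc)

print(sorted(sorted(c) for c in communities.values()))  # [['a', 'b'], ['c', 'd']]
```

Modularity-based methods do better than this threshold trick because they compare the density of links inside a candidate community against what random chance would predict, but the output shape is the same: a partition of accounts into clusters of similar language use.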
When we got that back, and this was a moment of joy as a researcher, we realised that there were very clearly different clusters within this network topography. If everything had come back scattered all over the place, we would have learned little, but this approach managed to identify very clear nodes: there’s a bunch of accounts there, up there, and over there. Then it was the task of getting analysts manually appraising them, developing a bit of a framework as we went. We randomly selected accounts at the very core of each of those communities, accounts with very high modularity scores, very central to each community.
That produced a network map, which I think you might have seen. Let’s remember, all of these accounts have shared #IStandWithPutin or #IStandWithRussia five or more times. Before and after they were doing that, you have got one cluster of accounts which are pro-BJP, Hindi-language, Modi-meme-pushing spam bots. You have got a completely different Indian community up there, which is Tamil and very anti-Modi.
You have got a longer tail of Indian English accounts, a completely different Pakistani and Iranian cluster, a South African and Nigerian cluster linking into a very distinct pro-Zuma and pro-BRICS cluster, and then a South Asian cluster over the top. Then there was quite a different operation, a multilingual, primarily English-language spam network in a blue cluster that looked very different from the others. Seeing this meant realising two things. First, realising that this algorithm distinguishes the different languages that are being used.
That’s the algorithm, right? To make sure that I have understood what you mean: it’s looking for semantic similarities and similar language use. Are we talking about the actual language? As you say, you have got South Asian languages, so is it bringing it down to the actual or geographical language being spoken, or are we also talking about context, how the language is being used in relation to #IStandWithPutin? So it’s building a dense profile of the messages being seeded within this hashtag, creating a very dense node, and over the top you are then overlaying the actual national language or geolocation of where that message is being sent. Is that broadly what it is?
It’s doing both of those things. It realises that an account that speaks Urdu and one that speaks Hindi are different from each other, and it also realises that two accounts speaking in Hindi about football will be closer to each other than to one speaking in Hindi about the BJP. To be honest, this was an interesting methodological choice we made. There are a number of different deep learning models. Another one would have been less sensitive to formal language differences; another would have tried to translate everything into one common language and then gone from there.
That would have been less sensitive to the difference between a Javanese and an Urdu account. If they were both talking about Chelsea Football Club, they would have been much closer to each other. I didn’t want that, because I thought the actual languages they were using were an important part of the story. I wanted the algorithm to be sensitive to, and reflect in the output, when accounts were speaking Hindi versus Javanese or Urdu.
I think it’s a hugely important point because of identity, a sense of belonging, and everything else. Why did you want that additional layer of language in there as well? What was your thinking behind that?
I think that is one of the most important ways in which you characterise accounts. If an account is talking in Hindi, that’s an extremely important behaviour that the account is exhibiting versus an account talking in English. Bear in mind what we are trying to do: the whole point of this research is to characterise accounts that have been pumping these hashtags in order to possibly say something about targeting or underlying strategy. We want to know whether it’s Hindi, Urdu, or Tamil. Not that we were expecting there to be such language distinction; the beauty of research in general, and perhaps of this method, is that you can go into it without knowing what you’ll find. I had no expectation that I was looking for Javanese or Urdu language accounts.
I thought these were going to be primarily English because the hashtags are in English; #IStandWithPutin wasn’t even translated into Urdu. I was expecting most of these to be English-language accounts talking about different things; maybe there was going to be a Brexit cluster, a sports cluster, or something. Recognising that there’s a big difference in the actual languages that accounts are using is one of the important takeaways for me from this whole research. What we realised was that of all these different language groups, none of them was angled at the West at all.
I would say there are probably two things happening here in terms of how to interpret these results. Firstly, I suspect that what’s happening here is an exploitation of pay-to-engage services. We often don’t talk much about the intersection between geopolitics, information warfare, spam, and online fraud, but they are a lot more linked than we think they are.
What is happening is that these networks, perhaps different overlapping networks, have been rented to pump this hashtag. In part, at least, there are spam bots in here; they are not real people, I think, and they are now talking about The Kashmir Files or a video release in Iran or something. But over those days, they all switched and started sharing the hashtag in very large quantities.

That struck me incredibly. You have some graphs as well showing the jump in accounts from the 3rd of March onwards, at least for some of the data sets. It’s incredible. Have you been able to establish why the 3rd of March? Have you been able to pinpoint some action or something that happened? The invasion started some days before then. Was it a matter of catching up, that it took them so long? Why the 3rd of March?
This is my interpretation as a researcher, more than straight-up what the data necessarily says. My suspicion is that this campaign was linked to the UN vote condemning the invasion. There is quite a lot of messaging linked to the UN vote, and it exploits this pro-BRICS solidarity idea, especially with South Africa and India.
What might have been happening here is that this was a campaign to try and make #IStandWithPutin trend in these countries, both as a way of possibly influencing the general populace in those countries, but also the politicians and diplomatic communities that might have been on the fence or trying to decide what to do about the vote. That is simply a possibility on the basis of what I think these influence operations are trying to do. If you want my guess, this was an attempt to target BRICS countries in the run-up to the vote and to use anti-colonial, anti-Western, longer-standing ideas in those societies to puncture the picture of global condemnation of the invasion.
I guess fanning old flames, giving fuel to some of those old narratives that already exist and amplifying them. You said a lot of those were bots. Have you been able to get a sense of what percentage of them were bots? How far and wide did these messages spread? How effective were they, as that’s ultimately what it comes down to?
We can never definitively call out a bot on an individual level. I’m much more comfortable talking about suspicious patterns at the network level because they are much more empirically demonstrable to be different from an organic equivalent.
To get a bit of clarity on that, can you explain what some of those things are? I know you can’t definitively say something is a bot, and I know you mentioned some already, but to recap before you continue: what are some of those patterns that would make you believe you are dealing with a bot farm?
I think some of this is paid-to-engage amplification, and I do think there’s some organic activity here as well. Let’s talk about the actual hashtags. They come out of nowhere. It looks as if a switch is being flipped on: there’s a completely flat line at zero, then on March 2nd it spikes and starts trending globally, all over the world, and then sharply declines on the other side as well.
You very rarely see hashtag activity like that on Twitter. Hashtags have a social momentum; it often takes time to build up. There will be discussion about something for a while; normally, an event will cause it to rise, it will fall, and then rise again. You very rarely see one go from zero to, on a particular day, exploding into global resonance.
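That flat-line-then-spike signature is easy to caricature in code. A crude detector, with invented daily counts and invented thresholds, might flag a hashtag whose peak has essentially no build-up:

```python
# Invented daily counts for a hashtag: flat at zero, then an abrupt global
# spike and a sharp decline, the pattern described as rare for organic tags.
daily_counts = [0, 0, 0, 0, 0, 48_000, 95_000, 12_000, 500, 0]

def looks_like_flipped_switch(counts, quiet_days=3, spike_ratio=100):
    """Flag series that jump from near-zero to a huge peak with no build-up."""
    peak = max(counts)
    peak_day = counts.index(peak)
    if peak_day < quiet_days:
        return False  # not enough history before the peak to judge
    baseline = max(counts[:peak_day - 1] or [0])  # activity before the run-up
    return peak >= spike_ratio * max(baseline, 1)

print(looks_like_flipped_switch(daily_counts))           # True: switch flipped
print(looks_like_flipped_switch([10, 20, 40, 80, 160]))  # False: organic build-up
```

A real detector would be statistical rather than a hard ratio, but the intuition is the same one Carl states: organic hashtags rise, fall, and rise again, while a rented network turns on like a light.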

Allied to that, on the per-account level, you have got accounts with extremely few followers that were retweeting in Hindi and then suddenly sending a series of English-language pro-invasion messages which received 10,000 retweets apiece. This account has sixteen followers. Is it possible that this person is real and genuinely sent this pro-invasion meme? Yes. I don’t work for Twitter; I can’t look into this person’s head.
For anyone tuning into this, I’m willing to take whatever odds you want to give me on that account being part of an influence operation. You have got all of these different networks spread around the world, Tamil, BJP, South Africa, South Asia, and on March 2nd they all flip. Some start retweeting the same English-language, pro-Russian memetics from the same accounts. It’s extremely unlikely to be organic. Twitter has taken down some of these accounts now, so we are also seeing enforcement action on their side.
Has Twitter approached you in any way, given that this is your research? Have they engaged you for your support in this or to understand the ecosystem any better?
They have. I discuss my work with Twitter quite constantly. I would say, I genuinely don’t think that we can do anything that Twitter can’t do with its data. They have some of the best data science teams in the world. They understand their platform and have way more information about it than I ever would. The difficulty is that there is a gap between when a researcher like me can say something is super suspicious and when Twitter takes down tons of accounts. They are like a mini-government; they have a very heavy responsibility not to start deleting normal people. It is that gap, in many ways, that information warfare inhabits. A campaign has got to be quite suspicious, because otherwise it’s not effective.
If they are not pumping the hashtag, what are they doing? They have to create suspicious patterns to have an effect. It’s the job of the people operating these campaigns to know, or try and work out, where the evidentiary threshold for a Twitter takedown would be, and to stay below it. I was talking earlier about a hidden conflict between militaries or states and social media platforms. This is it.
That dynamic is exactly the conflict I’m talking about. They will be setting up new accounts, running them, and probably doing research on the accounts that got shut down, trying to understand why they were. Twitter, on the other side, is trying to do the same thing: detection and enforcement.
As you said, it’s below the detection threshold. That’s where these types of grey-zone operations exist. There’s also plausible deniability, because it’s very difficult to ultimately trace where the originator is sitting. Have you found that with this particular data set as well? Is it very difficult to geolocate where the button is being pushed, whether it’s in Moscow or Delhi?
It’s completely my interpretation, but I have often found that whoever runs these campaigns sometimes slightly overeggs the nationalism. When we were doing research on information operations around Brexit, the “English nationalist” accounts would have a wall of St. George crosses and lions and talk about nothing else; every tweet would be shrouded in a Union Jack. It’s not particularly convincing behaviour for an English nationalist. As I said, I used to do lots of research on the English Defence League. They are human beings and talk about loads of things other than English nationalism.
In some ways, my impression of the other accounts, the Hindi and BJP ones, was that they were created by people who didn’t understand a huge amount about the regions they were trying to get these accounts to be part of. We cannot geolocate these accounts. That’s a technical trail of breadcrumbs which Twitter might be able to follow, but we certainly can’t. I have no idea who’s operating these things. My impression was that whoever ran many of these accounts wasn’t intimately familiar with what life is like in South Africa or India.
They didn’t have algorithms that would help them determine the semantic differences of particular areas.
This is the thing. Hopefully, your audience is getting a sense of the massive asymmetry between how easy this is to do and how hard it is to spot. If this was a series of overlapping pay-to-engage exploitations, it might have cost them £500, done by a spammy marketing team with a bit of training and a couple of spammy tools to set up accounts and pump hashtags. On the defensive side, not only do you have world-class data scientists at Twitter trying to do something about this.
On our side, we have got two professors of natural language processing and a team of 25 developers. We do foundational research in the methodologies that we are using here. We are using next-generation, Google-based deep learning models, and it still took us over a week to pull apart the investigation. Even now, I can’t come on your show and say, “For sure, these are bots, and this is Russia.”
It’s incredibly dangerous and scary. We also have to realise that the West is employing very similar tactics. We can’t deny that. We have seen the group Anonymous openly side with Ukraine and openly target Russia through cyber-attacks, playing Ukrainian folk songs on Russian TV stations, intercepting radio channels, and so forth. I guess that’s the point you are making: it is war, exercised in the information domain.
There’s no blood, so to speak, but it’s a very fine line until we cross into the real world; that’s when people become motivated and start taking action based on the information they are being fed. Have you seen any action being taken so far? I know it’s very difficult to assess the effectiveness of any of these campaigns, but have you been able to get a sense of a growing sentiment or support for the Russian side of the conflict? What’s your take on this?
I have no idea. It’s too early. This question of how online behaviour translates into offline effects, attitudinal, behavioural, or otherwise, is such an important one. In many ways, it’s the holy grail of all of this research. In other projects, we have correlated online hate activity with offline hate crime. We have tracked QAnon groups and seen how that directly translates into 5G towers being attacked. We can see there is follow-through, but it’s far too early to say here. I can’t even tell you how much organic traction these campaigns have had so far.
The very furthest that this research can take us so far is to suggest that, if it were Russia behind this, and again, I can’t guarantee that it was, they might be going after non-Western audiences more than we think. That perhaps leads back to where some of our discussion started, this idea of Ukraine winning the information war. If there’s one message from it, it is to at least lightly suggest that just because you and I, and maybe the people tuning into this, can’t see pro-Russian, pro-invasion messaging, it doesn’t mean that it’s not happening or not working. It means that we are not the battlegrounds being fought over.
That’s such an important point, because it’s very easy to fall into the trap, especially because we have seen such humongous geopolitical shifts in no small part due to what we perceive as the success of the Ukrainian information operations. We have seen countries that have been neutral for hundreds of years turn against Russia. Germany has changed; they pivoted completely. NATO and the EU are more galvanised.
This is something I spoke to Peter Singer about. Even in the US, we are seeing people, particularly Republicans, who were originally pro-Putin and who have now shifted away because public sentiment is shifting, broadly speaking, from pro-Russia and Putin to pro-Zelenskyy and Ukraine. For that very reason, it was very easy for me to discuss that with Peter, because it’s what I’m seeing in my echo chamber. The important lesson here is not to let our hubris catch us out. Many have been caught out by Brexit, Trump, and the like.
Maybe that’s the ending thought: Zelenskyy’s great success has been in couching this conflict not as one between Russia and Ukraine but as one between Russia and the West. That has consequences we are learning about now. What it does do is galvanise and unite NATO and Europe, but it also provides opportunities for Russian influence to have effects in other parts of the world that have much less sympathy with Western military adventurism and much longer histories of real experiences of colonialism and all the abuses it can bring. However spectacularly wrong it might seem to you and me to justify what is essentially an imperial invasion using anti-colonial motifs and means, that seems to be what Russia is doing.
Realpolitik doesn’t care.
Information warfare doesn’t either, because it’s all about behavioural effects. Information sadly dies in this world. Right and wrong, interesting or not, good and bad all become instruments to be used for ulterior ends, which is quite upsetting as an author. This is the world that we seem to be living in now.
As you said, it’s the death of Gods, or the end of Gods; information is the new God. On that note, Carl, it’s been a fascinating discussion. Thank you so much for giving me so much of your time. That’s a very useful warning for us. Let’s not forget that there’s a battle, but there’s also a bigger war at play; let’s not feel that we are winning one and then lose the other. That’s the key message here. Thank you so much for your time. I appreciate it.
Thanks very much.