[Boots Off the Ground: Security in Transition in the Middle East and Beyond] Episode 29: Digital Authoritarianism

Abstract

Dr. Marc Owen Jones discusses digital authoritarianism and recent developments in the technology and tactics of repression in the Middle East.

This podcast series is presented by Dr Alessandro Arduino, Principal Research Fellow at the Middle East Institute, National University of Singapore.

Listen to the full podcast here:


Full Transcript:

[Dr. Alessandro Arduino]: Welcome to the 29th episode of our Middle East Institute podcast series, Boots Off the Ground: Security in Transition in the Middle East and Beyond. In this series, we look at the future of warfare and at the uniformed soldier, or boots on the ground, being replaced by private military companies, autonomous weapon systems, and cyber weapons. I’m Alessandro Arduino and I will be your host today.

[Dr. Arduino]: Last month we explored the future of cyber warfare and cyber intelligence with Roy Zur, and today I’m very excited to explore digital authoritarianism with our special guest, Dr. Marc Owen Jones. He is Assistant Professor in Middle East Studies at Hamad bin Khalifa University. Marc’s knowledge of political repression and information control strategies is second to none, and he has spoken and published extensively on the subject. What I want to focus on today is his most recent book, Digital Authoritarianism in the Middle East. The book provides invaluable insight into the dimensions of cyberspace, especially how digital authoritarianism plays out, and I really recommend it to our audience as an excellent read. Marc, thank you for being with us today.

[Dr. Marc Owen Jones]: Thank you. It’s a pleasure to be here.

[Dr. Arduino]: In your book, you mention strategies of control in the service of elite power maintenance, strategies that affect people’s lives. Propaganda and disinformation are certainly not new, but the digital revolution is accelerating this trend, especially if we focus on the Middle East, your area of expertise. Can you please trace for our audience how we moved from, let’s say, a kind of euphoria during the Arab Spring, when we shared a very positive vision of technology driving revolution, to what we have today, digital authoritarianism, with a more dystopian approach to technology? The floor is yours, Marc.

[Dr. Jones]: Absolutely. Although before we begin, I think maybe this episode, instead of being called Boots off the Ground, should be called Bots off the Ground. But yes, I’d love to track this change; I think it’s such a crucial question. As you said, 2010-11, the beginning of the Arab uprisings, was beset by this techno-utopianism: the idea that technology would liberate us, that it would allow for greater democratization and allow people who lived in authoritarian or semi-authoritarian regimes, as most regimes in the region at least seem to be, to hold their governments to account and liberalize the space. This idea of liberation technology was ascendant.

I think therein lies the problem. It is very normal, and I don’t know if it’s a post-industrial thing, to believe that technology is somehow deterministic, that technology is devoid of the social context in which it is used, that somehow technology will solve our problems. And we’re used to this: technology has been important in medicine, it has helped us improve our infrastructure, it obviously does great things. But that doesn’t mean technology is inherently benevolent.

And I think what happened in 2010-11 was that these new tools brought along, I would say, a honeymoon period. A honeymoon period is a time when people’s lack of experience, their naivety, and a sense of optimism are ascendant, and they use these technologies in the way they are expected to. I say ‘expected’ because there was a lot in the media, and even from the companies themselves, about how these technologies were for freedom of speech. So when you think about it, if you’re using Facebook or Twitter to communicate your frustrations with the regime or your criticism of it, and using them to organize, then, because you are generally young and it is an unfamiliar space, you don’t necessarily think of the potential negative consequences.

So I think people were swept up in this euphoria; as was mentioned, the barrier of fear was broken. There was definitely this idea that change was imminent, and I think this allowed people to let their guard down. The problem is not necessarily the technology itself; it is that this was all happening in the context of authoritarianism. And authoritarian regimes, by definition, will use anything at their disposal to try and repress opposition. So if you have someone posting a photo of themselves at Tahrir Square or Dawwār al-luʾluʾah (Pearl Roundabout) and saying, we want change, then all it takes is a couple of people from the Mukhābarāt, the intelligence services, or pro-regime loyalists to circulate that photo and say: hey, this person is a traitor, what’s their name, where do they live? And that’s exactly what happened in places like Bahrain and Egypt. So we saw very quickly that this technology was co-opted, and I say quickly meaning within months. I started my PhD on Bahrain in 2011, and within about three months my thesis was already about technology being co-opted as a tool of repression. So I don’t think it took long.

We even saw the use of spyware in 2011 and 2012, not Pegasus in this case but FinFisher. So even these kinds of more intrusive tools, which I think we associate with 2015 onward, were already being used. And when there are examples in the public sphere of activists and opposition figures who used technology being arrested or tortured, as happened then, all this does is spread distrust between people and technology. People then know that if they use technology to criticize the government, there is a good chance they could be found out. I think over the past 10 years civil society has become far more aware of the consequences of technology. So, as well as governments using this technology to repress people, the element of trust that is necessary for social movements, and trust in technology itself, has been destroyed by the many high-profile examples of states using digital technology to oppress people.

[Dr. Arduino]: Thank you, Marc. As you mentioned, the honeymoon period is over. What we like to focus on here, having started with a discussion with the United Nations, moved on to talking with the founder of Bellingcat, [Elliot] Higgins, and also had the founder of NSO (NSO Group Technologies) on our podcast, is the cyber disinformation sphere, or what you call in your book the ‘pseudo-reality industry’. What exactly is a ‘pseudo-reality industry’?

[Dr. Jones]: I think the idea of pseudo-reality is interesting. As you mentioned, disinformation and propaganda are not new; many of the techniques we are seeing are not necessarily new, although obviously the technology has changed. But I think the proliferation of digital technologies has provided so many new business opportunities for firms to exploit that technology to create what I call pseudo-realities. I use the term ‘pseudo’, without getting too theoretical, from the idea of the pseudo-event coined by Boorstin (Dr Daniel J. Boorstin): the idea of creating an event that is staged in many ways, or at least fake, and that then replaces reality.

So what I mean by this is that there is a bevy of actors involved, and I’ll just use one example, say, Western PR firms. I say Western, and I use the term loosely because these PR firms could be anywhere, but I use it because I want to emphasize that there is a chain here. Disinformation is not just something that happens in authoritarian regimes as opposed to elsewhere; there is a supply chain. So, for example, you have a company like Project Associates, a British company. We know about this because they made FARA (Foreign Agents Registration Act) filings on a US government website, at the Department of Justice. This company was contracted by the UAE Supreme Media Council (SMC) to create a campaign, and they worked with SCL Social, an affiliate of the parent company of Cambridge Analytica; the idea was to create a global media campaign.

Part of this campaign was to create social media adverts and social media accounts. I think this is a really good example because we see this playbook from many of these PR companies, including the likes of Bell Pottinger. What they do is create fake accounts that appear to hold certain political opinions. They engage in discussions online, trying to shape those discussions, but these accounts are not real people. They are an artificial civil society, artificial grassroots: astroturfing. So not only are they trying to shape the information space, they are actually trying to create the illusion of popular opinion by creating fake accounts.

Because in the digital world, sometimes just being an account is enough to create a voice. So these industries are created, and the narratives spread. Remember, these narratives are in many cases paid for by a client, whoever the client is, so the client will have a desired message they wish to spread. In this case it was something like [unintelligible] supporting terrorism through Al Jazeera and the use of [unintelligible]. There are elements of truth in that, of course, but the emphasis is on a specific message that those in power, who have the money to pay for these operations, are trying to push.

And there are many examples of this that I’d be happy to give. Bell Pottinger worked in South Africa to do something similar, and they provoked racial tensions because they basically ran a kind of race-baiting campaign through these fake accounts. So they are injecting controversy and polarization into civil society through the creation of people who do not even exist. To me, this is the idea of the pseudo-reality industry: there are people out there making money by selling services to high-paying clients in order to manipulate the public sphere and create the illusion of opinions that do not really exist, or that may exist but are artificially amplified.

[Dr. Arduino]: I think what you just mentioned is very compelling, especially if we look at how the virtual world has a direct effect on the real world. In our previous podcasts, we discussed how cyber disinformation campaigns also support the role of boots on the ground, of mercenaries.

So, we see the virtual and the real mentioned together with violence. One example is Africa, where we have the extensive and expanding footprint of the Wagner Group, a Russian paramilitary group, combined at the same time with the use of Russian troll farms spreading disinformation across the continent. Pointing the finger and saying someone is a mercenary is not easy, but it is being done. I think it is even more difficult to point the finger and say someone is a cyber mercenary.

So, in your opinion, does a cyber mercenary share similarities with a conventional mercenary? Let’s take an example: a botmaster using sock puppets to spread cyber disinformation. As in the South Africa example you gave, these ignited violent acts that ended with people killed in the real world, not just harassed in the virtual one. So, can we call him a cyber mercenary?

[Dr. Jones]: I think so. Obviously, ‘mercenary’ has connotations, and those connotations, as you said, are more associated with the people who are the boots on the ground. But if someone receives money to engage in behavior that is deceptive, where they are misleading someone, and this is crucial because intent is crucial in this whole definition, and their intent is to cause some form of harm, even if they agree with it ideologically, then there is an element of the mercenary there. Right?

Trying to determine the outcomes is hard, because I don’t know if we can term someone a mercenary simply based on the fact that what they did resulted in violence; at the same time, you can’t guarantee outcomes. Someone could write online intending to create a shift in public mood that may result in violence. They may not succeed, but that doesn’t mean they’re not a mercenary, because their intent was always to do that particular thing. So I think, at the end of the day, if people are engaged in deception operations that are intended to cause harm, and they are receiving some sort of compensation for it, then they are a mercenary. A cyber mercenary.

[Dr. Arduino]: You pointed out a compelling factor, which is intention. In the United Nations definition of mercenaries, intention also plays an important part. But we are moving into uncharted territory in the definition of mercenaries, something that is in itself very compelling and, frankly, disturbing. Another thing that came out of our previous episodes, which looked in a different way not only at mercenaries but even at combat drones, is that this is not only a problem of intention but also a problem of attribution. And in the cyber realm, as we discussed with Roy Zur in the previous podcast, it is extremely difficult to pinpoint who is the real puppet master behind the scenes. So, when you were investigating the cases for your book, did you experience the same problem of attribution?

[Dr. Jones]: Attribution is so difficult. Even in cybersecurity, where people talk more about penetration issues, those tackling attribution speak in degrees of confidence about who might be behind something, because it can be very hard to attribute. And I think it is even harder in disinformation, because you’re not necessarily looking at someone who is going to trace back to an IP address used by the National Security Agency or some country. It doesn’t work that way.

There have been a number of investigations; I look at bot and troll campaigns all the time, but I think one of the most interesting examples was when I worked with the journalist Adam Rawnsley from The Daily Beast. We looked at a network of about 20 journalists who managed to fool 46 different international outlets into publishing over 100 opinion articles, and these journalists didn’t exist. They were social media accounts that were obviously backed by some sort of company, whether a PR company, a strategic communications company, or a cyber intelligence firm, we don’t know. In that case it was interesting because what we had to do, with the weight of publicly available evidence, which was these 100 opinion articles, was to look at them and analyze them from a sort of discourse analysis headspace. What was their overarching argument? And we concluded that the arguments being used by these fake journalists broadly supported the policies of the UAE, and possibly of right-wing American hawks or Israel.

So in that case we could narrow it down, but there was no smoking gun. No one came forward and said: I’m a whistleblower, we work for this kind of company. So it was really hard to determine, and this is the kind of investigation that could stay ongoing; hints could keep coming in. Sometimes someone at, say, Google might notice that one of the accounts used in the operation forgot to use a VPN and its IP address points to a particular company. These things can happen down the line.

Even in this case, that might still happen. But I think it is a good example of where attribution is hard and you have to make an educated analysis and end up with a degree of likelihood rather than anything certain. This is a problem with the information space, and it is exploited and weaponized by disinformation actors, because they know that accountability and attribution are really hard.

[Dr. Arduino]: And this is very compelling, because there is a lot of talk about fake news, but it is mind-blowing that not only news media platforms but also journalists themselves are fake online. After reading it in your book, every time I get a press request or something like that, the first thing I do is go and check the person’s background. As you correctly mention, a face-to-face talk, even on Zoom, can be very important, whereas just Googling someone, finding a couple of references, and going with that is wrong. Another part that was really intriguing while reading your book is what you term the ‘hack and leak’ operation. So what is a hack and leak operation, and how does it affect digital authoritarianism?

[Dr. Jones]: So, the hack and leak operation is, I think, becoming more widespread now. We saw a lot of it in the Gulf crisis around 2017. It is where state, state-backed, or non-attributed actors hack sensitive material from certain people and leak that information in order to cause some level of harm to the person who is exposed. In the Gulf crisis, Qatar was accused of hacking the UAE ambassador to the US, Yousef Al Otaiba. There was an Al Jazeera journalist whose phone was hacked, and photos of her in a bikini were then circulated along with fake news stories claiming she was sleeping with the head of Al Jazeera in return for promotion.

In terms of a security vector, I always like to think of it this way: the ultimate totalitarianism is being able to read your thoughts. The brain is the last bastion of privacy, but a close second is the mobile phone; we document so many of our intimate details on there, whether selfies or conversations with close friends. So the ability to obtain that information, which is now done through hacking operations using things like Pegasus, or even social engineering, means people’s personal information can be weaponized in a way that was much harder before digital technology, and then used to smear them. We have seen this in national-level disputes; the Qatar crisis was one, where information obtained in these hacks was used to denigrate opponents and, in a way, to rationalize foreign policy decisions: hey, we’re doing this, but we’re doing it because this other state is also doing it.

And the problem with these hack and leak operations, again, is that we don’t always know who is behind them. But the supply chain is interesting. We now see heads of state being hacked; I think the Emir of Qatar was hacked using KARMA. That was an interesting operation because it involved ex-NSA employees, American NSA employees, working for the Emirati government to target American citizens as well as non-Americans. So in these hack and leak operations we see the full supply chain in action: people trained by one state being hired by another state to engage in spying on people in a transnational way.

What happens with the information taken then becomes another question. It is the weaponization of personal information for political gain that alarms me. I don’t necessarily think states are fair game, but if you are a politician or a state leader, there is a reasonable expectation that you might be targeted. What alarmed me so much is seeing citizens, civilians, and activists being hacked, with mal-information then potentially used to smear them. That really worried me because, even though activists and journalists are public figures, they didn’t sign up for a political office role. And to me, this has a really strong chilling effect on freedom of speech.

[Dr. Arduino]: The operation in the UAE with former NSA employees that you mentioned fits, in my opinion, the definition of cyber mercenary. If I recall correctly, when the head of the NSA was asked about it, there are some kinds of restrictions on former operatives applying their dark arts abroad, but in practice there are not many barriers. And we only knew about it because, again if I recall correctly, a whistleblower came back to the United States and said, “We have started to do some nasty stuff on our own citizens, and it is better to take a look at whether something is wrong”.

Up to now we have been talking about how states operate in the digital sphere, in what has also been called the war against reality. But states are not necessarily the fundamental unit of analysis, because we see many private operators, and also non-state actors, bent on destabilizing states using disinformation and cyber operations. As an academic, I would say that disinformation cannot be researched through a single, narrow academic discipline alone; in my personal opinion, that is an obstacle to proper, widespread research. In this respect, what do you suggest to address this compelling problem?

[Dr. Jones]: I think it is a really important question, and I make the point very clearly that this kind of research transcends disciplinary boundaries. There are different elements in the chain: there is the detection and monitoring of disinformation, and then, down the line, there is the question of how we address these problems. In terms of detection, I think we need increased cooperation from technical experts, and I don’t just mean your classic IT experts; also people in the digital humanities who do things like corpus analysis and author profiling, or who analyze large amounts of text to determine who might be behind it.

But crucially, we also need political scientists and security analysts, because detecting that something is fake is one element, but you need people who know the local political context to better explain, or try to understand, why these campaigns might be happening. If we look at cases where there is no attribution, one of the ones I mentioned was the case of the fake journalists. Here you have so many issues that need to be remedied. Firstly, you have the journalists and editors who failed to do proper verification of their sources.

How do you solve that particular problem? Well, you need better training, and you need to address media literacy among journalists. Then, how do you determine who might be behind it? You need people familiar with the security or geopolitics of the region, or of different regions, to be able to determine that those 100 articles probably fell within, or could be explained as, pro-Emirati propaganda. At the same time, you also need people who understand things like deepfakes, and people who can do investigative journalism. So you really need collaboration between different people, first of all to understand it. And crucially, we need language experts if we are to expand this beyond one language, and that is essential: Arabic, and indeed every language in the world, will have these disinformation problems.

So you will need linguists and translators; really, you need everyone, and you need institutions. Groups like Citizen Lab are a good example of people who combine different kinds of expertise to produce compelling reports. They are more technically led, I think, in the way they publish a lot of information, but at the same time they take a holistic approach to the problem and are public service oriented. You cannot have just a security analysis.

But I think what we are talking about here is an ontology in which we generally accept that disinformation and deception are a problem, not something to be encouraged. So we need people who believe that tackling this is actually a public service, and that is crucial; I think that is what will shape how we deal with it. Because for this to remain a public service, it needs to be funded and managed by people who are not trying to use it as a tool of manipulation themselves. There are a few examples of fact-checking organizations that are themselves very partisan; I could name a couple in Spain, but that is a different debate altogether.

[Dr. Arduino]: Having said that, the problem of language is very compelling. When I was talking with specialists about fake news, a lot of attention was on the Russian blogosphere, and I would say, no, that is not the only case; it is important to look more widely. So, for example, is China not an issue? And they were very blunt: they said, we don’t know, because we have Russian language experts but no Chinese language experts, so we are simply not looking at that part of the problem.

And machine translation from Chinese, trust me, does not work as well as with other European languages. This is a critical issue that universities, the places where we train people to think, still need to address. The same goes for the lack of IT experts, and not only in cybersecurity, everywhere. Companies like the one of Roy Zur, whom we interviewed, are training people in very intensive bootcamps, because, as they mentioned, waiting the three to six years of university time to produce a proper IT expert is undoable given current market demand. There is so much that we need right now, and so far not many people have been interested in enrolling in this kind of job. But training such people to look at fake news is a critical issue.

And what I always ask our guests as a last question is to look at a 30-year time frame. But I realize that, moving from boots on the ground to the cyber sphere, talking about 30 years is really impossible. So I am asking you another impossible question.

It is to look at the next, let’s say, five to eight years. Your book mentions several times something that for me is very important, a critical point: that fake news is not a fight about truth; it is a fight about power. In our previous discussions with [Elliot] Higgins, the founder of Bellingcat, and with Roy Zur, a retired major from the Israeli army’s Unit 8200, we discussed, as we did with you today, the need for accountability and transparency. So, looking at the evolution of digital authoritarianism from today to 2030, what direction do you forecast?

[Dr. Jones]: Obviously, what I see is a negative trend: increasing authoritarianism. In my experience, I am cynical about the direction of the trend; what I see is a rise of what used to be called totalitarianism, in some respects. Totalitarianism became unfashionable after the demise of the Soviet Union.

And we started using the term authoritarianism instead, for obvious reasons. But one element of totalitarianism always struck me: its key distinguishing feature, I think, is the desire to invade one’s private space. These are regimes that ‘fetishize’ security, so all these digital technologies become fetishes; the idea of Pegasus and its use at scale shows how authoritarian regimes just crave information.

So the next logical step is the ability to know as much as possible about the individual. And that trend seems to be going in a direction that is not countered by sufficient safeguards, as far as I know, especially in authoritarian regimes. There is no compelling reason to believe that what is happening in Saudi Arabia or the UAE, or what Israel is selling, shows any sign of abatement. I don’t see it.

Who is going to stop, for example, the Saudi government from deploying or hiring their own in-house specialists to develop technology that spies on people? What is stopping them? There is no parliament, there is no obvious transparency. Likewise, the Chinese government has shown no regard or willingness to curtail its use of surveillance on the Uyghurs or on the general population through the social credit system.

The US at the moment, I think, is plagued by its own internal battles, and regardless of who wins the next election, there is this ascendancy of right-wing populism in which issues of personal freedom are going to be subsumed into the idea of a security state, where nationalism transcends any sense of personal freedom despite the rhetoric. So I see things moving in a very negative direction. And I think all this talk of transparency, this kind of critique of Pegasus coming from the US, is superficial. We know that the US is pragmatic; we saw that with Saudi Arabia, where Joe Biden said he would make a pariah of MBS but then visited when they needed gas. If US allies require the use of Pegasus, then it will be allowed. So I don’t see any sufficient counterweight. I don’t think we are in that moment, unfortunately.

[Dr. Arduino]: So, Marc, I am going to thank you. Besides giving us a very negative note about our future to end on, you have also helped me set up our next podcast, where we are going to discuss another aspect of cyber security: cyber sovereignty. As you just mentioned, Marc, China is taking its own direction, and in countries like China or Russia the cyber sphere is national territory. This means that everything that happens in the cyber sphere falls under the national security law, with huge repercussions not only for local citizens but of course also for the exchange of cross-border data. But that is something we will discuss in our next podcast. Marc, allow me to thank you again for your time, and thanks to our audience for following us on BOTG. Thank you and have a great day.


About the Speakers
Dr. Marc Owen Jones

Dr. Marc Owen Jones is currently an Assistant Professor in Middle East Studies at Hamad bin Khalifa University, Doha, where he lectures and researches political repression and information control strategies.

Having published several books centred on the Middle East, Dr. Jones possesses expertise in a wide range of research topics, from historical revisionism, postcolonialism, de-democratisation and revolutionary cultural production to policing, digital authoritarianism, and human rights. A recent focus of his work is how social media is used in the Middle East to spread disinformation and fake news.

In addition to his academic work, Dr. Jones specialises in providing timely analysis of disinformation campaigns and has taken active roles in numerous high-profile investigations. He has written for several international media outlets such as the Washington Post, CNN, the Independent, and the New Statesman. He also makes regular appearances on news outlets such as the BBC and Al Jazeera.
