INTERVIEW

How does Israel use social media and technology for disinformation?


Digitalization expert Associate Professor Marc Owen Jones spoke to Harici about how Israel has been using social media to manipulate public opinion and spread fake news during the Gaza war.

Marc Owen Jones is an associate professor of Middle East Studies at Hamad bin Khalifa University, where he teaches and researches political repression and information control strategies.

His work focuses on how social media has been used to spread disinformation and fake news in the Middle East, exposing the disinformation campaigns that accompanied Israel’s attacks on Gaza.

Dr. Jones answered our questions about the psychological warfare being waged on social media over the Gaza war, the role of large corporations, and the use of artificial intelligence algorithms in these campaigns.

Let’s start with your research on disinformation and misinformation. You say that Israeli attacks on Gaza have been accompanied by endemic disinformation and misinformation. Can you specify which methods and tools Israel uses for that?

So, ever since October 7th, we’ve seen essentially a campaign of disinformation and misinformation by Israel. The purpose of the disinformation is primarily to demonize Hamas and make Hamas’s attacks seem as brutal as possible in order to legitimize Israel’s response. Some of the most egregious, most blatant examples we’ve seen are the accusations that Hamas beheaded 40 babies, and the accusations that Hamas conducted systematic rape against women. Now, it became clear quite soon that these narratives were false. But these narratives are deliberate; they’re not accidental. Throughout the history of conflict, going back to the First World War, we’ve seen narratives claiming that the enemy attacks babies and rapes women. Why? Because these are red lines in almost every culture. People find the idea of killing babies and children horrific, which it is. So, if you can convince people that that’s what the enemy does, that’s what Hamas is doing, then you can also convince those same people, especially in the West, in the US, a big Israeli ally, to support Israel’s brutal genocide in Gaza.

Those are some of the big examples, but there are a number of other techniques and tools. For example, the use of fake accounts online. This is a very common tactic. We know, for example, that Israeli firms have the skill set to create thousands of fake accounts, not just on X (formerly Twitter), but on Facebook and TikTok, and these accounts then engage in spreading propaganda and misinformation. One particular campaign that I thought was very interesting was the use of masses of fake accounts to spread disinformation about UNRWA, the United Nations Relief and Works Agency. A big part of Israel’s campaign has been to smear UNRWA. Why? Because UNRWA is one of the biggest employers of Palestinians, and it’s one of the entities that basically sustains Palestinians’ claim to statehood. Israel has been on a huge campaign to link UNRWA to terrorism. This is another tactic of disinformation: to take legitimate organizations and accuse them of terror. In this disinformation campaign, we had fake accounts creating fake websites and fake social media profiles, and then using them to spread disinformation about UNRWA being connected to terrorism.

The effectiveness of these campaigns is not always clear, but we do know, for example, that a number of countries started to remove funding from UNRWA, and recently the United States passed a federal funding bill that also banned funding to UNRWA for the next year. So, it does seem that these campaigns actually have an impact, which is very unfortunate, and that impact is to undermine, harm, and delegitimize the Palestinian cause.
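How do researchers surface such fake-account networks? A common approach is to look for coordination signals across many accounts rather than judging any single account in isolation. The sketch below is purely illustrative; the toy account records, the creation-week grouping, and the 0.8 similarity threshold are assumptions for demonstration, not details from Dr. Jones’s studies. It flags accounts registered in the same week that post near-identical text:

```python
from collections import defaultdict
from datetime import date

# Hypothetical toy records: (handle, account creation date, sample post).
# A real study would pull thousands of these from platform APIs or archives.
accounts = [
    ("user_a91", date(2024, 1, 15), "UNRWA is linked to terror, defund it now"),
    ("user_b07", date(2024, 1, 16), "UNRWA is linked to terror - defund it now!"),
    ("user_c33", date(2024, 1, 17), "unrwa is linked to terror, defund it now"),
    ("long_timer", date(2016, 6, 2), "Lovely weather in Doha today"),
]

def words(text):
    """Reduce a post to a set of lowercase alphanumeric words."""
    return frozenset("".join(c if c.isalnum() else " " for c in text.lower()).split())

def jaccard(a, b):
    """Similarity of two word sets: 1.0 means identical vocabulary."""
    return len(a & b) / len(a | b)

# Group accounts by the ISO year/week they were created in; sockpuppet
# batches are often registered within days of each other.
by_week = defaultdict(list)
for handle, created, text in accounts:
    year, week, _ = created.isocalendar()
    by_week[(year, week)].append((handle, words(text)))

# Within each creation-week cohort, flag pairs posting near-identical text.
for (year, week), cohort in by_week.items():
    for i in range(len(cohort)):
        for j in range(i + 1, len(cohort)):
            (h1, t1), (h2, t2) = cohort[i], cohort[j]
            if jaccard(t1, t2) > 0.8:  # hypothetical similarity threshold
                print(f"possible coordination: {h1} / {h2} (week {week}, {year})")
```

Real investigations combine many more signals, such as posting-time bursts, shared profile artwork, and retweet graphs, but the underlying idea is the same: sockpuppet batches tend to be created together and say the same thing.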

So, you’re saying that Israel’s campaign actually achieved what it aimed at: some countries stopped funding UNRWA, and the claim that it was employing Hamas militants spread internationally. To get more specific, what do these online deceptions do to the world’s understanding of Gaza?

I think, you know, information is now increasingly consumed online, and it’s easy for anyone to create the illusion of a narrative. An important element of these techniques is that if one person says something, you ignore it; if a thousand people say it, it becomes a narrative, it becomes a piece of information. The problem is that online, it’s easy to create a thousand people, a thousand fake accounts. That’s exactly what we’re seeing. So much of the information around Gaza is being consumed online, through social media. Why? Because Israel is prohibiting journalists from going into Gaza to see what’s happening on the ground. So, the only information we see is either information filtered through Israeli state ministries, or disinformation, or, when we’re lucky, footage from Palestinian citizens. It’s a very online war, a very social-media-oriented disinformation war.

The thing with online is that it’s very easy for something to go viral. We know that official state accounts linked to Israel, whether the IDF or the Israeli Ministry of Foreign Affairs, are using social media to spread clear disinformation. And when I say disinformation, I mean content so false that they themselves have deleted it. I’ll give you an example. The official Israeli Ministry of Foreign Affairs account has, on several occasions, spread the claim that Palestinians are using toy dolls and pretending they’re dead babies. There have been at least three occasions where the Foreign Affairs account has said this, and it’s been proven false. There’s no way in hell that they know it to be true. So, they are deliberately spreading this kind of false narrative, and these posts get thousands and thousands of retweets.

Then you have someone in, for example, the US repeating these claims. We know that Secretary Blinken and Joe Biden himself repeated claims about beheaded babies that originated on social media, including in a press conference. So, information generated in the social media space by Israel, the IDF, or Israeli entities then breaks out. It goes from social media to other forms of media, and that’s the secret. That’s how you do it. You start a disinformation narrative online, on digital media, and you feed it into the mainstream media so it looks credible.

What role do the social media companies play in that?

Well, with the exception of TikTok, which is Chinese, most of the social media companies we’re talking about, the commonly used ones like Instagram, Meta, and Snapchat, are Western. They’re based in the US, and they’re all slightly different, but for the most part these companies have been accused of having a pro-Israel bias, of siding with Israel. We know from studies that when they do content moderation, which is meant to ensure that content isn’t harmful, they generally favor pro-Israeli narratives. We’ve had lots of examples of Palestinian accounts and Palestinian activists being shadowbanned or having their accounts limited by social media companies. We’ve even had cases involving the automatic translation of Arabic: on Instagram, for example, a Palestinian’s phrase “Ana Falestini, Alhamdulillah”, which means “I’m Palestinian, praise be to God”, was automatically translated as “I’m a Palestinian terrorist”. The social media companies tried to say this was a hallucination, but it’s actually very much a reflection of how these companies have taught their machine learning models to associate Arabic terminology with terrorism. So, there’s an ingrained bias there against Palestinians. There are other interesting examples: Motaz Azaiza, one of the most well-known citizen journalists to come out of the recent conflict, was recently banned from Facebook.

So, I think what we’re seeing is a disproportionate policing of Palestinian voices on social media by American companies that generally align with the US position on Israel. And now we’re also seeing this kind of war on TikTok. The war on TikTok is obviously driven primarily by concerns about data privacy. But there’s also an argument to be made, one many Americans have raised, that TikTok is allowing pro-Palestinian content to flourish, and there are now forces trying to get TikTok banned not because of privacy issues but because it’s seen as pro-Palestinian. So, American social media companies are definitely siding with Israel in how they censor and block content, and we’re even seeing calls to ban non-American social media companies simply because they don’t censor Palestinian content as much as the American ones do.

There are also many claims that Israel is using artificial intelligence, both in offensive technologies and in social media manipulation. What can you tell us about that? What do we know about Israel’s use of AI tools?

Well, firstly, I think AI in terms of social media and disinformation is obviously a growing problem. We’ve seen a number of examples. I’m not going to say they are necessarily Israel’s, because not all of them are directly linked to Israel as far as we know. They might be.

But we do see a lot of pro-Israel, anti-Palestinian disinformation that has used AI. Just as an example: early on in the conflict, someone manipulated a video of Bella Hadid, a US model with a strong pro-Palestinian voice, to make her appear to condemn Hamas and apologize for her previous stance. This was obviously false. We’ve also seen a number of instances where pro-Israel accounts, including the Israeli Ministry of Foreign Affairs account, have shared AI-generated images, in one case claiming to show the delivery of aid to Palestinians. They then deleted it and acknowledged that it was created by artificial intelligence. So, we’re definitely seeing fake images used to trick people.

But I think the more alarming element of this is the creation of systems like Habsora, which is the Hebrew term for Gospel. This is a new AI tool that is meant to select and acquire targets in Gaza. So, they’re using this tool to automatically select areas of Gaza to bomb, and they claim it is much more efficient and faster than a human. Essentially, AI is now doing the job that humans used to do in selecting targets. As far as we know, this is one of the first times such a tool has been used. But we also know this is the deadliest war in Gaza’s history, with over 32,000 people killed. What we’re seeing is the use of these new tools at a time when the civilian death toll is huge, which suggests that this new efficiency of AI targeting is correlated with the mass killing of Palestinians. And it’s particularly alarming because AI models are trained on data, and because Israel is an occupying state, an apartheid state, it’s very probable that the data its model is trained on is anti-Palestinian, and the model inherits those biases and in effect says, “I’m going to select this target, and it doesn’t matter if five Palestinian civilians die, because I’ve been trained to do that.” We don’t know much about exactly how this AI model works, because they’re not transparent. But they’re using it to kill people, and I think that’s really, really alarming.

What do we know about the capabilities of AI tools, both in the war zone and in the digital sphere? Israel is using Gospel and maybe some other AI tools. I know it’s not exactly your field, but as a researcher on the Middle East, maybe you have a take on this.

AI’s possibilities are endless. This is the problem. In theory, you could now have AI tools that create thousands, perhaps millions of fake accounts and let them generate disinformation and propaganda at a scale we’ve never seen before. I think I’ve seen evidence of this in the past few years, not necessarily from Israel. In terms of warfare, again, it’s hugely damaging. We talked about Gospel, but what about facial recognition? The ability to process thousands, millions of faces at once and then target those faces automatically; to process DNA; to process people’s information profiles with complex algorithms that then determine whether a person is a threat or not. The limits of AI in warfare are only the limits of human creativity. Unfortunately, I think the problem with AI is not necessarily what it can do; it’s who is calling the shots, who is controlling it. If we have a political system that is one of apartheid occupation, the use of AI is going to reflect that, and we’re definitely seeing that. The information space, social media, is not ready for AI-driven disinformation. We’re going to see, increasingly, the weaponization of AI to create propaganda on a scale that I think is unprecedented in history.

Recently the United Nations passed a resolution on the good use of AI. It was broadly supportive and sponsored by many countries, including Türkiye and others in the region. Do you follow any discussions in the United Nations regarding the negative uses of AI, of the kind Israel is making? And what is the future of AI going to be if it’s used more and more in the war zone and on the digital front?

Well, there’s a problem with any of this legislation, whether from the UN or the EU, which has also initiated legislation on the ethical use of AI. When it comes to security and national security, these areas are often carved out of the legislation, because national security is treated as a red line. To me, all this basically suggests that countries still have a bit of carte blanche to do whatever they want with AI. I don’t think the use of generative AI for warfare is necessarily going to be limited or controlled or constrained, any more than we’ve seen in the regulation of the nuclear space. We still live in a world where states have nuclear weapons, and over the trajectory of the past ten years we’ve arguably moved into a space in which nuclear weapons aren’t as controlled as they once were. If we apply that same logic to AI, it says that the era we’re in is one in which states are taking more and more dangerous risks with their defense industries. I don’t see why we would suddenly let our approach to nuclear warfare become more lax while rigorously policing AI; I don’t see the logic in that. So, I don’t think there’s any real decoupling, and I don’t think these resolutions on AI are necessarily going to affect how it might be used for warfare. If what’s going on in Gaza is anything to go by, people will in fact be going to countries like Israel and asking: how does your technology work, how can we use it, and how can we make it better?

And AI generally is going to shift toward what they call preventative policing, that is, trying to police before crimes happen. That’s a very alarming thing, and it’s why I mentioned facial recognition. If you can monitor and track people based on their information, in theory you can arrest or control them at an early point. So, I think AI is going to shift to this point of preventative policing, but as it does, I think we are then potentially in the territory of a police state.

My last question, Professor Jones: can you tell us how the audience can distinguish fake news and disinformation from real news? Because misinformation is so widespread, it’s almost impossible to tell whether something is fake or not. What recommendations can you give us, briefly?

There’s no single way to detect AI content or disinformation; that’s impossible. But you can start to do things that will make you less willing to believe everything. For example, if you use X and you see a tweet from someone, ask: is that account real? Is it linked to any institution? Does the photo look like a stock photo? Does the timeline consistently talk about only the same issues? These are important things to pay attention to. And remember that one aspect of disinformation is that it’s designed to make you react: to feel angry, to feel sad, to laugh. So, if a piece of content triggers a strong emotional response in you, it could be that it’s trying to manipulate you, and you should treat it extra carefully. That’s the biggest piece of advice I can give: beware of content that makes you particularly emotional, because that is exactly the reaction the people who design disinformation are trying to produce.
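Those account-level questions can be read as a rough scoring rubric. Below is a minimal, purely hypothetical sketch in Python; the `Account` fields and the thresholds are illustrative assumptions, not a tool described in the interview. It simply counts red flags from the checklist:

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical fields mirroring the questions in the interview.
    handle: str
    has_institutional_link: bool   # linked to a newsroom, NGO, university?
    photo_looks_stock: bool        # reverse-image search hit on a stock site?
    topic_diversity: float         # 0.0 = posts on one issue only, 1.0 = varied
    account_age_days: int

def suspicion_score(acc: Account) -> int:
    """Count red flags; higher means treat the account's claims more carefully."""
    flags = 0
    if not acc.has_institutional_link:
        flags += 1
    if acc.photo_looks_stock:
        flags += 1
    if acc.topic_diversity < 0.2:   # hypothetical single-issue threshold
        flags += 1
    if acc.account_age_days < 90:   # hypothetical: very new accounts
        flags += 1
    return flags

suspect = Account("gaza_truth_4721", False, True, 0.05, 12)
print(suspect.handle, "red flags:", suspicion_score(suspect))  # -> 4
```

A high count doesn’t prove an account is fake, just as a low count doesn’t prove it’s genuine; the point, as Dr. Jones says, is to slow down before believing or sharing.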

So, you’re mainly recommending that we all double-check anything we see on social media.

No. Triple-check.

