How social media platforms are caught in Israel-Hamas crossfire
New Delhi: It has been 18 months since Russia invaded Ukraine. Even as that conflict continues, an attack by Hamas on Israel has sparked another war. Between these two events, the social media landscape has undergone a sea change, chief among the shifts being the acquisition of Twitter, now X, by tech billionaire Elon Musk. Meta, Facebook’s parent, has also launched a competitor called Threads.
Comparing how regulators and governments have responded to the spread of news and misinformation about the two wars on social media platforms, including Facebook, Instagram, X and YouTube, makes one thing increasingly clear: decisions on moderating online content are inextricably intertwined with geopolitics.
Google, which owns YouTube, and Meta are clear in their stance that Hamas is a terrorist organisation, that people affiliated with it will not be allowed to hold accounts, and that no Hamas content can be posted on their platforms. The position of X has been less clear, although CEO Linda Yaccarino announced on Thursday that the platform had taken down hundreds of accounts linked to Hamas.
Difficulties in pleasing all
The conflict between Israel and Gaza has laid bare difficulties in satisfying everyone on how content is moderated on social platforms. Unlike the Ukraine conflict, where most of the world, barring a handful of nations, has sided with Ukraine or remained neutral, the war in Gaza and Israel is more complex.
Although the global community has condemned Hamas’s attack, when it comes to choosing a side, the world is divided. The US and western Europe are unconditionally supporting Israel, but Arab nations as well as India have reaffirmed their support for Palestine’s existence as a separate state.
All major social media platforms, except TikTok, which is blocked in India, are headquartered in the US. In such a world, how do social media platforms make moderation choices that align with the laws of the countries they operate in without antagonising US authorities?
For instance, Qatar may consider some content from Israel, or in favour of Israel, violative of its stance on the war while nations in the European Union may think any post that supports Hamas and Palestine must be removed.
Should Facebook or X block such content only in Qatar or the EU, and in other nations that take the same view, or should they take it down for the entire world? Should the platforms make such decisions proactively, of their own volition, or should they wait for orders from different governments? If they decide to wait for government directives, they also run the risk of allowing harmful but legal content to foment further violence.
Almost all major news organisations are calling the conflict in West Asia a war between Hamas and Israel. Depending on the origin of the news publication, some have called Hamas a terrorist organisation, while others have stuck to calling it an armed group. Differences in framing the conflict often trickle down from positions taken by various countries: some might see the BBC in the UK leaning towards Israel and Qatar-owned Al Jazeera leaning the other way. Yet the social media landscape remains much the same in both these countries.
Who decides correctness?
First-hand reports are now being posted from the middle of a war. Who is going to decide on the correctness and legality of every post?
Social media platforms have to contend with internal politics as well. On October 9, four students of Aligarh Muslim University were booked for taking out a pro-Palestine march on campus without permission. On October 10, Prime Minister Narendra Modi posted on X that “People of India stand firmly with Israel” and that India “condemns terrorism in all its forms and manifestations”. He did not name Hamas. On October 12, India’s foreign ministry said India recognises Palestine as a “sovereign, independent, and viable” state.
How, then, are social media companies supposed to implement their moderation mechanisms, especially the automated ones? Such takedowns constitute the bulk of content policing, given the sheer scale at which content is produced on each of these platforms.
Given India’s stance, online posts in favour of either nation should then be legal.
From a legal perspective, if the social platforms take these vexed decisions themselves in different countries, they run the risk of losing their protection from liability for third party content, which is their safe harbour. At the same time, if they do not remove harmful or violent content, they run the risk of falling afoul of their due diligence requirements, and could again lose their safe harbour.
Fixing hate speech
Determining what constitutes hate speech is also a fraught question. All social media platforms ban it, but those determinations depend on specific socio-political contexts.
In the current situation, dealing with content that is outright supportive of Hamas is easier: take it down, don’t amplify it. But what happens when support for Hamas and support for Palestine mirror each other? Or when the source of hateful speech is not Hamas but Israel?
On October 9, Israeli defence minister Yoav Gallant said: “We are fighting against human animals.” Dehumanising language of this kind is not allowed on any platform under their hate speech policies.
Across different platforms, multiple Israeli accounts are posting caricatures of Palestinians depicted as cockroaches crushed under the boots of the Israeli defence forces. Should such statements from the Israeli government be allowed on its official social media accounts?
During the Holocaust, the Nazis called Jews rats, and Hutus involved in the Rwanda genocide called Tutsis cockroaches. Would that content remain or be removed today?
All social media platforms have a newsworthiness exemption, premised on the idea that some content, either by its nature or by virtue of who posts it, is worth preserving despite violating the platforms’ community guidelines.
Former US president Donald Trump was allowed to keep tweeting for as long as he did because of his position as president of the most powerful country on the planet. It was only when he tweeted in favour of the Capitol Hill insurrectionists that his accounts were blocked. Under Musk, however, his X account has been restored.
In 2020, during a town hall meeting with Facebook employees, Mark Zuckerberg referred to a tweet by Bharatiya Janata Party leader Kapil Mishra as a clear case of incitement to violence. In the tweet, which had been taken down (though it is unclear whether Twitter took it down of its own volition or received a government order to do so), Mishra had written, “I give Delhi Police a three-day ultimatum to clear the streets of Jaffrabad and Chand Bagh. After that, don’t explain anything to us, we will not listen. Only three days.” Zuckerberg did not mention Mishra by name.
Platforms as archivists
Israel has cut off essential supplies and electricity to Gaza. It is in the middle of imposing a complete communications blackout. According to a CNN report on Friday, only one internet service provider remains in the region and internet connectivity was down to 66%. With dwindling electricity reserves and dying batteries, people are increasingly turning to social media to post what could be their last messages.
Social media platforms also allow individuals to post their own version of events, which becomes even more important when a place is under a violent siege. Both Israelis and Gazans have been able to tell their stories without relying on traditional media. The horrors are captured more accurately when the images and videos come from the victims themselves, as with footage of the Supernova festival massacre in Israel and of people in Gaza who were bombed while live-streaming.
To avoid liability for this violent content, should the platforms take it down, or should they retain it as documentation of a war? Multiple war crimes, committed by both Israel and Hamas, have been captured on social media. Should that evidence be taken down?
Persistent problem of disinformation
Disinformation has been a persistent problem in the Ukraine conflict, but not at the scale now seen in the Israel-Hamas war. The main reason is said to be X’s dismantling of its old verification programme: now anyone can purchase a verified badge and have their content ranked higher than others’.
While X alternatives, including Threads, have cropped up, X remains the go-to platform for news. If you are caught in the middle of a war or are reporting from a war zone, you would not post on multiple social media platforms; you would post on the one where you are guaranteed the most amplification. And for better or worse, most global news organisations still rely on X for amplification. Threads, according to Instagram head Adam Mosseri, is not meant for news and politics.
Social media platforms reward content that leads to higher engagement by ranking it higher. The macabre and the gory tend to go viral faster. When false information, either intentionally produced (disinformation) or unintentionally produced (misinformation), is macabre, it becomes hard to prevent it from going viral.
When getting ranked higher costs just $8 a month, those engaging in disinformation and propaganda can easily game the system.
Verifying information was a largely solved problem before Musk dismantled X’s verification system. Instagram followed suit and allowed users to purchase verified badges. The problem of misinformation and disinformation has since become much larger.