Open one social media platform and you’re hit with a fake video; open another and you’re hit with bigotry. Open a news article, and you’ll find some victims “killed” but others “dying.” Each account of events in Israel and Palestine seems to rely on different facts. What’s clear is that misinformation, hate speech, and factual distortions are running rampant.
How do we vet what we see in such a landscape? I spoke to experts across the fields of media, politics, tech, and communications about information networks around Israel’s war in Gaza. This interview, with communications and policy scholar Ayse Lokmanoglu, is the fifth in a five-part series that also includes computer scientist Megan Squire, journalist and news analyst Dina Ibrahim, Bellingcat founder Eliot Higgins, and media researcher Tamara Kharroub.
Computational social scientist Ayse Lokmanoglu is an assistant professor at Clemson University and a faculty member at its Media Forensics Hub. Her award-winning research examines the effectiveness of extremist disinformation, tracks radicalization in the public sphere, and analyzes the digital communications of “armed non-state actors”—essentially, what it means for the Islamic State to be on the internet.
We spoke about the role of social media in armed conflicts and political tension. Users, Lokmanoglu reminds us, “have agency [to] slow down the spread” of digital misinformation.
What’s the significance of social media for misinformation around Israel and Palestine?
Social media has a lot of advantages. We have easier access to global information—previously, you couldn’t get information from conflict areas. In cases of authoritarian regimes, it allows you to have a variety of information—we can see the differences between what we get from state actors vs. what we get from the ground.
In the Arab Spring, we saw a change in tactical information: You see people using English placards and English signs. That is a clear sign that they want to reach beyond their territorial and language borders—they want to reach a global audience. Social media facilitates this.
Now, when we have very politically tense events and conflicts, there’s more demand for information. Social media provides a marketplace: We have all this demand, and we have the supply. It creates this very fast information exchange that is filled with emotion. Everyone’s guard is down. We need measures to validate the information, because tensions are high. Information integrity becomes very important because we have all this supply and demand.
What happens when information isn’t validated?
One of the most common things I’ve seen about this conflict is people saying “I don’t want to share anything because I don’t know what’s true anymore.” We have this sense of not knowing what’s true and not knowing how to verify the truth.
It’s important for us as consumers of social media to gauge information integrity. We’re seeing platforms such as X take a step back on content moderation. Blue checks [on X] used to help people verify accounts, and now we’ve lost that. Community notes are helping to some level, but they’re not fast enough. Also, as a peer-review system, there is internal bias, so it’s not going to be 100 percent accurate.
We’ve had media and misinformation for centuries—if you look at the Second World War, radio was used to wage information warfare. [But] the reach, access, and speed of information were really amplified by social media. We’re exposed to it much more, on multiple screens, in multiple places. It’s not only what we see on X, it’s also what our friends share, what’s in private group chats.
What we found is that when people share disinformation, they’re sometimes doing it unconsciously. They trust the sources, or they trust other people who shared it, [so] they’re not actually doing due diligence.
Rather than sharing immediately, take a moment and wait for some reputable organization to verify the information. I think what we need to do is slow down rather than be part of the immediate sharing and liking. When you wait for other organizations—reputable news organizations, reputable OSINT organizations—to verify it, you are actually helping the cause more than harming it.
Your research has looked at extremist violence and social media. How are recent social media policies affecting the spread of disinformation?
I do think the spread of information, especially extremist information, is dangerous because it can be very triggering. It could be very personal, private, and inhumane to share some of these things. You intrude on the personal or private rights of people when sharing images of [them] without knowing them.
Our research has shown that images have a higher retention rate. They hit your senses much more: because you don’t need language, you process them much more easily. Seeing images intensifies the tension and intensifies the emotions. In both cases, the Hamas attack and the Gaza bombing, you’re really blurring the lines of what should and shouldn’t be shared.
For the Islamic State, when they were prominent on the internet, there were initiatives to moderate harmful content, but also to come up with solutions that did not censor information. The solutions were for safety and for mitigating and minimizing harm. My hope is that these platforms come together again and talk about common solutions, rather than individual ones.
How does social media contribute to the rise of hate speech?
When you have this information, especially graphic images, it really heightens the “this group is our group” notions—even though the group you’re in is socially constructed in many ways and is very fluid and changing. It becomes solidified at the moment when you feel like you’re being targeted.
When we start really polarizing, and start losing the human aspect, that’s when hateful rhetoric and dehumanization really start.
No one is correcting each other online. Offline, hateful rhetoric is not socially acceptable: if you hear someone say something hateful next to you at a dinner party, you will be uncomfortable. You would either correct them or, if you’re conflict-avoidant, leave or signal through social cues that it’s not socially acceptable. We need to bring that back into social media.
How does the technology behind social media contribute to the spread of misinformation?
Since there’s a demand for information, bad actors will want to supply it. Technology really helps them through deepfakes and manipulated images and videos. You can have multiple fake accounts “fact-checking” each other. You can also unleash bots and trolls that give millions of likes so the algorithm pushes [certain content].
When it comes to deepfakes, right now, it takes so much to identify them. You also need much more advanced technology that, as lay users, we don’t have. As long as we’re aware, and we try to address this, we have a force as users. We have agency. We can slow the spread. We should try to get social media back to the place where it was good—just slow down a little bit, back to when it was actually helping us and benefiting people on the ground.
This interview has been lightly edited and condensed for clarity.
"Stop" - Google News
November 26, 2023 at 02:39AM
https://ift.tt/5OeaYrL
Don't Stop to Think! How Our Digital Habits Feed Israel-Palestine Disinfo – Mother Jones - Mother Jones
"Stop" - Google News
https://ift.tt/lSJGdes
https://ift.tt/RgO2Uaz
Bagikan Berita Ini
0 Response to "Don't Stop to Think! How Our Digital Habits Feed Israel-Palestine Disinfo – Mother Jones - Mother Jones"
Post a Comment