
Code Red: Episode 2

Photo credit: Izzy Greig

By Ciara Terry and Izzy Greig

May 2, 2024 12:35 p.m.

This three-part series explores the causes and consequences of extremism in the online world. In this episode, Podcasts contributors Izzy Greig and Ciara Terry discuss how technology can foster extremism.

Izzy Greig: Hello, and welcome to the second episode of Code Red, a three-part miniseries by Daily Bruin Podcasts that explores the causes and dangers behind extremism in the online world. I am Izzy Greig.

Ciara Terry: And I am Ciara Terry, and we are your hosts for today’s episode. In the last episode, we introduced the concept of online extremism and the internal factors behind it, but today we will cover how technology has played a role in this rising phenomenon. The first topic we’ll be covering is algorithms, one of the most commonly raised concerns when it comes to technology and radicalization.

IG: For those who don’t know what algorithms are, they are the calculations social media platforms use to decide which content users see in their feeds. Some of the factors that go into these calculations are a person’s past online behavior, the relevance of the content to them and the popularity of the post.
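To make the idea concrete, here is a minimal sketch in Python of how a feed-ranking algorithm might combine those three factors into a single score. The factor names and weights are purely illustrative assumptions; as we discuss below, the real formulas are closed-source.

```python
# A hypothetical feed-ranking sketch. The weights are assumptions for
# illustration, not any platform's actual (hidden) formula.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    behavior_match: float  # 0-1: similarity to content this user engaged with before
    relevance: float       # 0-1: topical relevance to this user
    popularity: float      # 0-1: normalized likes, shares and comments

def rank_score(post: Post) -> float:
    """Combine the three factors into one ranking score."""
    return 0.5 * post.behavior_match + 0.3 * post.relevance + 0.2 * post.popularity

feed = [
    Post("recipe video", 0.9, 0.4, 0.2),
    Post("news clip", 0.1, 0.8, 0.9),
    Post("meme", 0.7, 0.7, 0.5),
]

# The feed shows the highest-scoring posts first.
for post in sorted(feed, key=rank_score, reverse=True):
    print(f"{post.title}: {rank_score(post):.2f}")
```

The point of the sketch is simply that every post gets a personalized number and the feed is sorted by it, so everything hinges on what the score rewards.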

CT: One of the rising issues with algorithms, and one that experts find alarming, is that not much is actually known about how they work. Tech companies keep the code behind their algorithms hidden, meaning outsiders have no way of knowing their actual design. But from what experts have been able to study, we know that the main function of these algorithms is to predict exactly what a user wants to see in order to keep them on the site for as long as possible.

IG: Another notably dangerous effect of algorithms is their ability to create online echo chambers. What that means is that over time on these social media sites, you start seeing only the content that the tech companies, or at least their algorithms, have decided you want to see. You see less and less content that challenges your views or shows you a different side of things. Expert Hadi Elzayn compared this process to choosing between healthy foods like vegetables and junk food like cookies. Over time, the algorithm notices that you prefer cookies to broccoli. Even though the broccoli may be better for your health, the algorithm doesn’t take that into consideration, because the more cookies it gives you, the more likely you are to come back for more. In the real world, the cookies in this scenario tend to be extreme content. And like any industry, Big Tech’s main goal is revenue: the longer people stay on a platform, the more ads they see, and the more money the company makes.

And this doesn’t mean that algorithms are inherently bad and only lead to extreme content. For example, I love watching recipe videos, and if you opened my Instagram feed, you would probably see nothing but recipe videos. Obviously that’s not some deep, dark hole of the internet I’m being sent down, but Instagram does know that food content is what keeps me on the app the longest, which is why that’s what I’m shown.

CT: We see this play out in the political sphere. If people show interest in one side of the political spectrum, eventually they are only going to see posts from people who agree with them. But over time, people often get bored of that, so to keep them engaged, these platforms start sending more and more extreme content. Content like this gets people angry or emotional and keeps them coming back for more. So it’s not that tech companies created these algorithms in order to radicalize the whole world; they just wanted to find a way to keep people engaged. But, unfortunately, those algorithms have in fact led some people to become addicted to more and more radical content.
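The cookies-versus-broccoli dynamic can be shown with a toy simulation in Python. This is a hypothetical sketch, not any platform’s actual recommender: the assumed user clicks “cookie” content only slightly more often than “broccoli” content, yet because the system optimizes for clicks alone, the feed drifts toward almost all cookies.

```python
# A toy engagement-feedback loop. All numbers are illustrative assumptions.
import random

random.seed(42)

click_rate = {"cookie": 0.6, "broccoli": 0.4}  # assumed user behavior
cookie_share = 0.5  # the feed starts out balanced

for _ in range(10_000):
    shown = "cookie" if random.random() < cookie_share else "broccoli"
    clicked = random.random() < click_rate[shown]
    if clicked:
        # The recommender nudges the feed toward whatever got clicked.
        target = 1.0 if shown == "cookie" else 0.0
        cookie_share += 0.01 * (target - cookie_share)

print(f"Final share of cookie content in the feed: {cookie_share:.0%}")
```

Even though the preference gap is small, every click pushes the mix a little further toward cookies, so the simulation ends with a feed that is nearly all cookies. Substitute extreme content for cookies, and the echo chamber described above falls out of a system that was only ever told to maximize engagement.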

IG: The next big buzzword you’ll hear when it comes to online extremism is misinformation, which, at its simplest, is untrue information. Misinformation often spreads because of algorithms. It usually targets people’s emotions, which gets the poster more engagement and attention, which in turn tells the algorithms to send the post to more people. We’ve seen misinformation come into play a lot recently around things like claims of election fraud or the names in Jeffrey Epstein’s flight logs. It goes to show that misinformation can have serious, real-world consequences.

CT: One of the often hidden characters in misinformation is the online bot: “a software application that runs automated tasks on the Internet, usually with the intent to imitate human activity, such as messaging, on a large scale.” The danger of bots stems from their ability to imitate humans, which leads people to trust them more than they would if they knew they were actually interacting with a robot. A 2017 paper by Soroush Vosoughi and others found that there were around 190 million bots across social media. These bots live online, but the issues they create do not stay there. In a 2017 Senate Intelligence Committee hearing about Russian interference in the 2016 election, Clint Watts, a senior fellow at the Center for Cyber and Homeland Security at George Washington University, said this: “Social bots play on this psychology, broadcasting at such high volumes that it makes falsehoods appear more credible. As social bots repeat falsehoods, they often drown out fact-based stories or overpower refutations of their conspiracies.”

IG: Watts believes these tactics allowed Russian bots to play a role in Russia’s interference in the 2016 election. He continues later in his testimony, saying, quote, “Top government officials, political campaigns, news reporters and producers can easily be duped into unwittingly furthering Russian Active Measures in America. Those speaking out against the Kremlin are challenged by social media sweatshops and automated accounts attacking their credibility, harming their reputation and diminishing their influence amongst American policymakers.” End quote.

CT: And while this is shocking and clearly shows that technology is already being used to manipulate our elections, we can’t blame bots for everything. Bots are not autonomous entities; they are made by real people with some sort of agenda. Numerous studies have shown that online, people are rewarded for sharing untrue information. A USC study found that “users who post and share frequently, especially sensational, eye-catching information, are likely to attract attention.” The same study also found in an experiment that when participants were rewarded for sharing true information, they were more likely to do so. What this tells us is that many people actually know the information they are sharing is not true, but because of the way social media networks are structured, there is a larger reward for sharing false information than for sharing true information. Misinformation can play a big role in radicalizing people, as the misinformation people spread often furthers some kind of extreme agenda. Movements like QAnon have a well-documented history of sharing false information online disguised as fact in order to bring more people into their ranks.
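As a rough illustration of how researchers flag the kind of high-volume bot activity Watts described, here is a minimal heuristic sketch in Python. The thresholds are illustrative assumptions, not a real detection system; actual bot-detection research draws on far richer signals.

```python
# A hypothetical bot-flagging heuristic: humans rarely post hundreds of
# near-identical messages around the clock. Thresholds are assumptions.
from collections import Counter

def looks_like_bot(timestamps: list[float], messages: list[str],
                   max_posts_per_hour: float = 30.0,
                   max_duplicate_ratio: float = 0.5) -> bool:
    """Flag an account that posts too fast or repeats itself too much."""
    if len(timestamps) < 2 or not messages:
        return False
    hours = max((max(timestamps) - min(timestamps)) / 3600, 1e-9)
    posts_per_hour = len(timestamps) / hours
    # Share of the account's posts taken up by its single most common message.
    most_common_count = Counter(messages).most_common(1)[0][1]
    duplicate_ratio = most_common_count / len(messages)
    return posts_per_hour > max_posts_per_hour or duplicate_ratio > max_duplicate_ratio

# 500 copies of the same claim posted within about an hour -> flagged.
print(looks_like_bot([i * 7.0 for i in range(500)], ["same claim"] * 500))
```

Rules this crude also misflag enthusiastic real accounts, which is part of why bots broadcasting “at such high volumes” can drown out fact-based stories before platforms catch them.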

IG: Now, it would be hard to talk about all the different issues regarding social media and technology without talking about the giants behind it. In America, the current big five tech companies are Alphabet, Amazon, Apple, Meta and Microsoft. And the main concern people raise about these tech conglomerates is that they are for-profit companies. Obviously, that is not inherently a bad thing. But we start noticing red flags when we talk about topics like algorithms or misinformation. Can Big Tech be trusted to make decisions that prioritize society over their own companies? Historically, the answer has been no. UN Secretary-General António Guterres put it like this:

António Guterres: Powerful tech companies are already pursuing profits with a reckless disregard for human rights, personal privacy and social impact. This is no secret.

IG: So obviously we see a problem here. As we previously mentioned, algorithms are built to keep people on sites longer, in turn raising a company’s profit. So what incentive do companies have to alter their algorithms?

CT: And when it comes to misinformation, companies simply argue that they have no control over what their users post. There is even a law that companies use to back this up. Section 230 of the Communications Decency Act says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In simple terms, this means you cannot sue a tech company for something said on its platform; you can only sue the person who posted it. This precedent has allowed tech companies to escape punishment and continue profiting.

IG: To further protect themselves from public criticism, social media companies have created community guidelines, and they censor and suspend accounts that violate those guidelines. That, in turn, has led social media users to develop a legal defense of their own. Many people have raised First Amendment arguments when lawmakers and activists have called for accountability and more restrictions on what people can and cannot post. The specific legal argument comes from a 1946 Supreme Court case, Marsh v. Alabama. Simplified, the case held that a woman handing out religious pamphlets on a sidewalk still had First Amendment rights, despite the fact that the town she was in was a privately owned company town. That precedent is now being applied to the internet. There are growing arguments that social media has become the sidewalk on which people spread their ideas, and thus internet users should be afforded First Amendment protections.

CT: What this all means for us is that it has historically been very hard to hold tech companies accountable, and while many lawmakers have taken a stab at it, we’ve seen very little come of it. It becomes clear how online extremism spreads when it is nearly impossible to discern true information from false, and the false information starts to become the only thing you see.

IG: And again, we don’t want to make you think that the internet is some evil place; there are plenty of good and positive things that happen online. But it is hard to ignore the policies in place that tend to do more harm than good. In the next episode, Podcasts contributors Alicia Ying and Zoë Bordes will sit down with UCLA professors to further discuss online extremism and its implications. Thank you for listening to Code Red. I’m Izzy Greig.

CT: And I’m Ciara Terry. And we hope you have a good day.
