The Great On-li(e)-ne

Why we need to talk about filter bubbles, algorithms and trolls before it is too late, and what these have to do with the surge of hate speech, right-wing populism and climate change denialism.

Text: Sandra Schober, Journalism & Media Management
Illustration: The Electome, MIT Media Lab

It is the 15th of October 2017. With only one hashtag, actress Alyssa Milano starts a worldwide movement against the sexual harassment of women. Within a day, more than 500,000 people on Twitter and over 4.7 million Facebook users post the hashtag #metoo. The enormous effect of just one post shows how social media connects people in a way that did not exist before.  

Movements like #metoo or the Arab Spring in 2010 could not have started without Twitter, Facebook, and other similar platforms. The world we live in is more connected than ever, but this does not translate to a world in which everybody can participate in public debate and every voice is heard. 

More information, less informing 
What is a filter bubble?
“Your filter bubble is your own personal, unique universe of information that you live in online. And what’s in your filter bubble depends on who you are, and it depends on what you do. But the thing is that you don’t decide what gets in. And more importantly, you don’t actually see what gets edited out.” 
– Eli Pariser, internet activist 

Every day we are flooded with information online, be it from a Facebook timeline, Google News or a Twitter feed. It is impossible to take in everything that is available, and the selection we do engage with ultimately serves to confirm our pre-existing opinions. During the 2016 presidential election in the United States, the Electome project at the MIT Media Lab provided an analysis of Twitter users. Its data showed that both Clinton and Trump supporters tended to stay within their own so-called ‘filter bubble’, which led to the isolation of opinions and a lack of interaction between the two camps.

This might not seem like a shockingly new phenomenon. People have always preferred to stay within their own comfort zone. Why bother hanging out with people who have a completely different opinion when you can be with people who will always agree with you? 

An uninformed public and restrictions on individual rights to public expression are both barriers to effective democracy. The ability to publish criticism and information about grievances is a fundamental component of a functioning democracy; when information is unilaterally filtered, this essential element is missing. Even though access to information is theoretically free in democratic countries, algorithms designed by social media platforms filter that access and create a form of censorship.

Post-truth instead of true post 

By filtering out information, Facebook, Twitter and similar platforms influence people and their political opinions without them being aware of it. Whether in the debate surrounding Brexit or the 2016 US presidential election, emotions have become more important than thorough research. We have gone from “I think, therefore I am” to “I believe, therefore I’m right”, because we prefer comforting lies to unpleasant truths.

How can we define ‘post-truth’? 
According to the Oxford English Dictionary, the term ‘post-truth’ relates to or denotes “circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”

These algorithms, and those who create them, are complicit in harms as diverse as fostering climate change denial and smoothing the path for oppressive governments to be elected. How we relate to information in the post-truth age is a question at the core of every new 21st-century crisis.

There is a shockingly tight link between attacks on the free press, the surge of right-wing populism and the censorship of information online. A study carried out by Facebook in 2015 demonstrated that its algorithms filter posts and rank the content in a person’s timeline based on that person’s activity. Users who self-identified as liberal saw 6% less diverse content, whilst conservatives ended up seeing 17% less diverse content. So how do Facebook and similar platforms decide which content we get to see and what is left out?

Due to the vast amount of information available on the internet, social media platforms must find some way to filter it. These filters are a necessary evil when applied to relatively trivial content such as our friends’ Facebook posts, but they cause more problems when applied to news. The stories we interact with most naturally correspond to our own interests, and future stories are filtered according to those interests, contributing to the polarization of public debate as participants become disconnected from points of view that disagree with their own. Debates that are already heated in real life become ever more inflamed when subjected to this process online.
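To make the mechanism concrete, here is a minimal sketch in Python of engagement-based feed ranking. The scoring rule, names and numbers are illustrative assumptions, not the actual algorithm of Facebook or any other platform: stories are simply ordered by how often the reader has engaged with their topic in the past, so topics the reader never touches sink below the fold and effectively disappear.

# A deliberately simplified sketch of engagement-based feed ranking.
# All names and numbers here are illustrative assumptions, not any
# platform's actual algorithm.
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    topic: str

@dataclass
class User:
    name: str
    # How often the user clicked, liked or shared each topic in the past.
    engagement: dict = field(default_factory=dict)

def rank_feed(user: User, stories: list) -> list:
    # Order stories by the user's past engagement with their topic.
    # Topics with no engagement score zero and sink to the bottom.
    return sorted(stories,
                  key=lambda s: user.engagement.get(s.topic, 0),
                  reverse=True)

alice = User("alice", engagement={"climate": 9, "sports": 1})
feed = rank_feed(alice, [
    Story("Transfer rumours", "sports"),
    Story("New IPCC report", "climate"),
    Story("Election analysis", "politics"),  # a topic alice never engages with
])
for story in feed[:2]:  # only the top of the feed is ever seen
    print(story.title)

Even this toy version reproduces the bubble: the politics story is in the pool, but the reader never scrolls far enough to see it.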

In March 2018, The New York Times published an article by Zeynep Tufekci, a Turkish journalist and techno-sociologist, about how YouTube’s autoplay algorithm works. Tufekci had been watching videos of Trump speeches in order to confirm quotations and soon found that, no matter which topic she searched for, YouTube’s algorithm automatically served her more and more radical content. “Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century”, she claimed in the article. Tufekci further explained that “YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.” Given that almost every university student has access to the internet and that a large part of the educational system relies on Google, YouTube and similar platforms, it is essential that we find a way to break this vicious circle of radicalization.
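Tufekci’s rabbit hole can be illustrated with an equally crude toy model, again built on invented assumptions rather than YouTube’s real system: if the recommender greedily maximizes predicted watch time among videos ‘similar’ to the current one, and slightly more sensational videos hold attention slightly longer, then autoplay drifts step by step toward the most extreme content in the catalogue.

# A toy model of recommendation drift. The watch-time formula and the
# notion of "intensity" are invented for illustration; this is not
# YouTube's actual system.

def predicted_watch_time(intensity: float) -> float:
    # Assumption: engagement rises with how sensational a video is.
    return 1.0 + 0.5 * intensity

def next_video(current: float, catalogue: list) -> float:
    # Autoplay picks the highest-engagement video among those
    # "similar" to the current one (within one intensity step).
    similar = [c for c in catalogue if abs(c - current) <= 1.0]
    return max(similar, key=predicted_watch_time)

catalogue = [i * 0.5 for i in range(21)]  # intensities from 0.0 to 10.0
intensity = 0.0  # start on a mainstream video
for step in range(1, 7):
    intensity = next_video(intensity, catalogue)
    print(f"autoplay step {step}: intensity {intensity:.1f}")

No single recommendation looks radical on its own; the radicalization is in the accumulated drift, because each greedy step moves one notch further toward the extreme.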

How to live in a world without a Standard Google 

With algorithms driving the consumption of ever more radical material, we are rapidly finding ourselves in a world where we only see what the algorithms show us, not the things we really need to see. There is no such thing as a ‘Standard Google’, because search engine results are personalized based on our online behavior and interests. We need to learn how to handle social media properly and how to break out of filter bubbles in order to see the bigger picture again. Only by educating our children in media use and politics from an early age can we counteract the effects of algorithms and radicalization on society.

As long as information is only selectively available, we will remain stuck in a world where good journalism is unable to fulfil its remit. As a society, we need quality journalism that fights to uncover the truth and make it visible to the public, but most importantly, we need businesses like Google and Facebook to let each and every one of us see that valuable information. If they don’t, we will find ourselves in a world that is globalized and interconnected, yet more isolated than ever before.

Source: https://www.vice.com/en_us/article/d3xamx/journalists-and-trump-voters-live-in-separate-online-bubbles-mit-analysis-shows
