Campus screening event and conversations!
Discover what’s hiding on the other side of your screen.
We tweet, we like, and we share — but what are the consequences of our growing dependence on social media?
The Social Dilemma.
The readings below are from the Center for Humane Technology.
They have identified readings that make up a "Ledger of Harms": harms that can result from human interaction with technology and social media, and their impact on How We Make Sense of the World, our Social Relationships, Physical and Mental Health, Politics and Government, Systemic Oppression, Attention and Cognition, Future Generations, and How We Treat One Another.
of all extremist group joins are due to our recommendation tools... our recommendation systems grow the problem,” noted an internal Facebook presentation in 2016. Yet attempts to counteract this have been repeatedly ignored, diluted, or deliberately shut down by senior Facebook officers, according to a 2020 Wall Street Journal investigation. In 2018, Facebook managers told employees the company’s priorities were shifting “away from societal good to individual value.”
Fake news spreads six times faster than true news. According to researchers, this is because fake news grabs our attention more than authentic information: fake news items usually have a higher emotional content and contain unexpected information which inevitably means that they will be shared and reposted more often.
PEER-REVIEWED · Vosoughi, S., Roy, D., & Aral, S., 2018. Science
Reading a fake news item even once increases the chances of a reader judging it to be true when they next encounter it, even when the item has been labeled as suspect by fact-checkers or runs counter to the reader’s own political standpoint. The damage done by fake news items in the past continues to reverberate today. Psychological mechanisms such as these, twinned with the speed at which fake news travels, highlight our vulnerability, demonstrating how easily we can be manipulated by anyone planting fake news or using bots to spread their own viewpoints.
of tweets about coronavirus are from bots spreading fake information, according to research from Carnegie Mellon University. An analysis of more than 200 million tweets created since January 2020 indicates more than 100 false narratives, including conspiracy theories that hospitals are full of mannequins. Researchers note that these posts appear to be aimed at sowing division within America, commenting, “We do know that it looks like a propaganda machine.”
As the pandemic has developed, there has been a significant increase in the posting of fake news and false information, even among human users, due to the algorithms underlying social media platforms. Researchers note that people naturally repost messages on the basis of their popularity, rather than their accuracy. Fact-checking has been unable to keep pace. Such false information is particularly dangerous because, as noted above, it tends to be retained for a long time, irrespective of fact correction.
The primary driving force behind whether someone will share a piece of information is not its accuracy or even its content; the main reason we share a post is because it comes from a friend or a celebrity with whom we want to be associated. As humans, we’re often more concerned with status, popularity, and establishing a trusted “friends” circle, than with maintaining the truth. As a result, social media spaces will inevitably be spaces where the truth is easily downgraded.
of exposure to a conspiracy theory video reduces people’s pro-social attitudes (such as their willingness to help others), as well as reducing their belief in established scientific facts.
Anger is the emotion that travels fastest and farthest on social media, compared to all other emotions. As a result, those who post angry messages will inevitably have the greatest influence, and social media platforms will tend to be dominated by anger.
Each word of moral outrage added to a tweet increases the rate of retweets by 17%. It takes very little effort to tip the emotional balance within social media spaces, catalyzing and accelerating further polarization.
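If that 17% per-word boost compounds, the expected spread of a tweet grows geometrically with the number of outrage words. The sketch below is back-of-envelope arithmetic only; the compounding functional form is my assumption, not something the study specifies.

```python
# Back-of-envelope arithmetic only: assume each additional moral-outrage
# word multiplies the expected retweet rate by 1.17 (the reported per-word
# effect), so the boost compounds geometrically with word count.
def retweet_multiplier(outrage_words: int, per_word_boost: float = 0.17) -> float:
    """Expected retweet-rate multiplier for a tweet with n outrage words."""
    return (1 + per_word_boost) ** outrage_words

for n in (1, 3, 5):
    print(n, "words ->", f"{retweet_multiplier(n):.2f}x")
# 1 words -> 1.17x
# 3 words -> 1.60x
# 5 words -> 2.19x
```

Under this assumption, five outrage words roughly double a tweet's expected spread, which is why small wording changes can tip the emotional balance of a feed.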
Analysis indicates that bots wield a disproportionate influence, dominating social media platforms such as Twitter. An estimated 66% of tweeted links to popular websites are tweeted by bots, with this number climbing to 89% for popular news sites. In addition, bots overwhelmed human users: in this study, 500 bots were responsible for 22% of the tweets, compared to the top 500 human users who only accounted for 6% of tweets. As a result, those who create bots can manipulate and artificially tilt the balance of shared social spaces.
Twitter now plays a key role in how journalists find news. According to a recent survey, many journalists see tweets as equally newsworthy compared to headlines from the Associated Press. As a result, the neutrality of the press can be easily undermined: on the one hand, professional journalists can be manipulated by bots and bad-faith actors; on the other, the chance of radical content, conspiracies, and other types of disinformation surfacing in professional news articles is extremely high.
PEER-REVIEWED · McGregor and Molyneux, 2018. Journal of Journalism
An Oxford research study of 22 million tweets showed that Twitter users had shared more “misinformation, polarizing, and conspiratorial content” than actual news stories.
Analysis indicates that foreign governments place and promote misinformation stories on multiple social media channels, creating the illusion of known truth emerging from diverse "independent" sources.
Using a wide range of tricky techniques, malicious actors of all types use social media to rapidly advance their agenda. They have developed sophisticated media manipulation strategies, including hijacking existing memes and seeding false narratives widely. Manuals for journalists and other media professionals to defend against these strategies naturally lag far behind, and are just starting to be developed in civil society.
Fake news items contain more anger than posts of real news. According to research conducted with more than 1,000 active users on China's Weibo platform, angry posts generate more anxiety, and this in turn motivates readers to share them further. Analyzing over 30,000 posts on Weibo, the researchers found that fake news posts contained 17% fewer "joy" words but 6% more "anger" words compared to real news posts. They found similar trends in an analysis of 40,000 posts on Twitter.
of screen content is viewed for less than 1 minute, according to a study that tracked computer multitasking across the course of 1 day. Results indicate that most people switched between different content every 19 seconds. Biological analysis demonstrated that participants experienced a neurological "high" whenever they switched — explaining why we feel driven to keep switching and underscoring how human biology makes us vulnerable to being manipulated by attention-extractive economies.
The mere presence of your smartphone, even when it is turned off and face down, drains your attention. An experimental study of several hundred adults showed that both working memory and the ability to solve new problems were drastically reduced when their phones were turned off but present on their desks, as opposed to being in another room. Ironically, participants who said they were highly dependent on their phones showed the greatest increase in memory and fluid intelligence scores when their phones were moved to the other room. Researchers noted that smartphones act as "high-priority stimuli," unconsciously draining significant attentional resources even when we consciously ignore them.
1 hour per day
is the amount of time most Americans spend dealing with distractions and then getting focused and back on track each day, which comes to a grand total of 5 full weeks in a year.
is the average time we can typically focus while working on computers before our attention is broken. As tech companies work to capture our attention in the current attention-extraction economy, maintaining focus can only become harder.
A meta-analysis of several dozen research studies indicates that higher levels of switching between different media channels is significantly linked to lower levels of both working memory and long-term memory. Given the current Extractive Attention Economy, and the increasing number of social media platforms and apps competing to capture our attention, basic human capacities — such as our memories — are increasingly under attack.
Chamath Palihapitiya, former VP of user growth at Facebook, has said that: “I can control my decision, which is that I don’t use that sh%t. I can control my kids’ decisions, which is that they’re not allowed to use that sh%t... The short-term, dopamine-driven feedback loops that we have created are destroying how society works.”
Steve Jobs, who was CEO of Apple for many years, told reporters that his kids don’t use iPads and that “We limit how much technology our kids use at home.”
Sean Parker, who was the founding president of Facebook, has publicly called himself "something of a conscientious objector" on social media and said, “God only knows what it's doing to our children's brains.”
Many modern Silicon Valley parents strongly restrict technology use at home, and some of the area’s top schools minimize tech in the classroom. In the words of one 44-year-old parent who used to work at Google, "We know at some point they will need to get their own phones, but we are prolonging it as long as possible."
“We’ve unleashed a beast, but there’s a lot of unintended consequences,” says Tony Fadell, inventor of the iPod and co-inventor of the iPhone. “I don’t think we have the tools we need to understand what we do every day… we have zero data about our habits on our devices.”
The mere presence of a mobile phone can disrupt the connection between two people, leading to reduced feelings of empathy, trust, and a sense of closeness. In a series of studies, researchers found that when pairs of strangers were asked to have meaningful conversations, their ability to connect emotionally was significantly reduced if a mobile phone was visible.
of parents reported that mobile devices typically interrupted the time they spent with their children 3 or more times each day; only 11% reported that mobile devices did NOT interrupt their time with their children.
PEER-REVIEWED · McDaniel, B., & Radesky, J., 2017. Child Development
The more that someone treats an AI (such as Siri) as if it has human qualities, the more they later dehumanize actual humans, and treat them poorly.
CONFERENCE PROCEEDINGS · Kim, Hye-Young, 2019. Journal of Consumer Research
Children under age 14 spend nearly twice as long with tech devices (3 hours and 18 minutes per day) as they do in conversation with their families (1 hour and 43 minutes per day).
of Americans report that their partner is often or sometimes distracted by their devices when they are trying to talk to them.
of cellphone users admit to using their phones during their last social gathering (34% were checking for alerts). During social gatherings, 82% of millennials judge that it’s ok to read texts & emails, while 75% think it’s ok to send texts & emails.
PRIVATE STUDY · Rainie, L., & Zucker, K., 2015. Pew Research Center
People who took photos to share on Facebook experienced less enjoyment and less engagement with the scene compared to those who took photos purely for their own pleasure. Closer analysis indicates that taking photos to share on social media increases a user's focus on their own self-identity and self-presentation, distracting them from connecting to the world around them.
Parental use of mobile devices during playtime with their children can lead to significant levels of child distress. A study of 50 infant-mother pairs indicated that infants showed greater unhappiness, fewer positive emotions, and were significantly less likely to play with toys when their mothers looked at their devices for as little as 2 minutes.
When encountering someone with an opposing political viewpoint, people are more likely to judge them as warm and intelligent if they hear that person’s ideas spoken rather than written down. Unfortunately, many social media platforms are currently designed to focus on text, reducing the chances of genuine discussion and debate and increasing the possibility of polarization.
We are so distracted by our phones that we often fail to see the most basic things, sometimes at great cost to ourselves and others. Security camera footage from San Francisco public transit reveals that a gunman was able to pull out his gun and openly handle it at length without anyone noticing, before he eventually shot a fellow passenger.
Fake news stories posted before the 2016 US elections were still in the top 10 news stories circulating across Twitter almost 2 years later, indicating the staying power of such stories and their long-term impact on ongoing political dialogue.
More fake political headlines were shared on Facebook than real ones during the last 3 months of the 2016 US elections.
Exposure to a fake political news story can rewire your memories: in a study, where over 3,000 voters were shown fake stories, many voters later not only “remembered” the fake stories as if they were real events but also "remembered" additional, rich details of how and when the events took place.
The most popular news story of the 2016 US elections was fake. In fact, three times as many Americans read and shared it on their social media accounts as they did the top-performing article from the New York Times. (The fake news story alleged that the Pope endorsed Donald Trump for President).
Americans were reached by Russian propaganda posts on Facebook during the 2016 US elections, according to Facebook's estimates.
Game theory analysis has shown how a few bots with extreme political views, carefully placed within a network of real people, can have a disproportionate effect within current social media systems. Studies demonstrate how an extremist minority political group can have undue influence using such bots—for example, reversing a 2:1 voter difference to win a majority of the votes.
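A toy simulation can make the mechanism concrete. The sketch below is purely illustrative: the network size, follow counts, and update rule are my assumptions, not the study's actual game-theoretic model. It shows a 2:1 majority for party A eroding once a handful of never-updating bots are "carefully placed" in the feeds of majority-party voters.

```python
import random

def simulate(n_a=60, n_b=30, k_follow=5, rounds=10, seed=0):
    """Illustrative toy model only (not the cited study's exact game):
    voters repeatedly adopt the majority opinion among the accounts they
    follow; 'bot' accounts always signal B and never change their minds."""
    rng = random.Random(seed)
    opinions = ["A"] * n_a + ["B"] * n_b   # 2:1 initial split in favour of A
    n = len(opinions)

    # Fixed follow lists. The manipulation is bot placement: every
    # majority-party voter follows two bot accounts; minority-party
    # voters follow only genuine voters.
    follows = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        if i < n_a:
            follows.append(["bot", "bot"] + rng.sample(others, k_follow - 2))
        else:
            follows.append(rng.sample(others, k_follow))

    for _ in range(rounds):
        new = []
        for i in range(n):
            feed = ["B" if j == "bot" else opinions[j] for j in follows[i]]
            a, b = feed.count("A"), feed.count("B")
            new.append("A" if a > b else "B" if b > a else opinions[i])
        opinions = new
    return opinions.count("A"), opinions.count("B")

a, b = simulate()
print(f"final tally: A={a}, B={b}")  # B ends up with the majority
```

Because majority-party voters see two guaranteed B signals in a feed of five, they flip unless all three of their human follows currently agree with them; each flip makes the next one more likely, and the artificial minority cascades into a genuine majority.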
Analyzing over 2 million recommendations and 72 million comments on YouTube in 2019, researchers demonstrated that viewers consistently moved from watching moderate to extremist videos; simulation experiments run on YouTube revealed that its recommendation system steers viewers towards politically extreme content. The study notes "a comprehensive picture of user radicalization on YouTube".
The order in which search engines present results has a powerful impact on users' political opinions. Experimental studies show that when undecided voters search for information about political candidates, more than 20% will change their opinion based on the ordering of their search results. Few people are aware of bias in search engine results or how their own choice of political candidate changed as a result.
The outcomes of elections around the world are being more easily manipulated via social media: during the 2018 Mexican election, 25% of Facebook and Twitter posts were created by bots and trolls; during Ecuador's 2017 elections, President Lenin Moreno's advisors bought tens of thousands of fake followers; China's state-run news agency (Xinhua) has paid for hundreds of thousands of fake followers, tweeting propaganda to the Twitter accounts of Western users.
The 2017 genocide in Myanmar was exacerbated by unmoderated fake news, with only 4 Burmese speakers at Facebook to monitor its 7.3 million Burmese users.
3X more likely
Children who have been cyberbullied are 3x more likely to contemplate suicide compared to their peers. The experience of being bullied online is significantly more harrowing than "traditional bullying", potentially due to the victim’s awareness that this is taking place in front of a much larger public audience.
Preschoolers who use screen-based media for more than 1 hour each day have been shown to have significantly less development in core brain regions involved in language and literacy. Brain scans indicate that the more time spent on screens, the lower the child's language skills, and the less structural integrity in key brain areas responsible for language. This is one of the first studies to assess the structural neurobiological impacts of screen-based media use in preschoolers; it raises serious questions as to how screen use may affect the basic development of young children's brains.
per day is the average amount of time 2-4 year olds spend on mobile devices. And 46% of children under the age of 2 years have used a mobile device at least once, despite the American Academy of Pediatrics' recommendation that children under 2 years should not use any screen media.
In a longitudinal study tracking over 200 children from the age of 2 years to 5 years old, children with higher levels of screen time showed greater delays in development across a range of important measures, including language, problem-solving, and social interaction. Analyses indicated that the level of screen time was significantly linked to the specific level of developmental delay 12-14 months later. This is a critical period in a child's life: as the researchers note, the current data indicates that exposure to excessive screen time during these early years can have serious effects "impinging on children's ability to develop optimally".
Children who experienced cyberbullying during their adolescence were significantly more likely to engage in risk-taking health behavior as adults. Boys who were cyberbully-victims were significantly more likely to smoke as young adults (p = 0.014) while teenage girls were significantly more likely to show a lifetime usage of drugs (p < 0.04).
The level of electronic media use before bedtime is significantly correlated with depression in adolescence. Measurements from several hundred teenagers indicate that this is primarily due to the impact on sleep: compared to video game players, teens with high levels of social media use experienced greater sleep difficulties, which in turn strongly correlated with higher levels of depression.
After nearly two decades in decline, high depressive symptoms among 13-18 year old teen girls rose by 65% between 2010 and 2017.
The amount of time spent using social media is significantly correlated with later levels of alcohol use. Research on several thousand teens demonstrated that while time spent on other forms of electronic media (including TV or video games) has comparatively little impact, the amount of time spent on social media is significantly linked to alcohol use 4 years later. Data indicates that social media has this unique effect through "social norming": repeatedly exposing teens to multiple images of their peers and role models drinking alcohol makes such behavior seem normal and acceptable, encouraging imitation.
US teens spend an average of more than 7 hours a day on screen media. This does not include time spent on screens for school or homework.
is the increase in the risk of suicide-related outcomes among teen girls who spend more than 5 hours a day (vs. 1 hour a day) on social media.
Media multi-tasking is significantly linked to later levels of attentional difficulties. Tracking more than 800 adolescents across time demonstrated that the degree to which young teens (aged 11-13 years old) multi-tasked was a significant predictor of attentional problems 3 months later (p < 0.05), highlighting the potential impact of distracting digital environments on young teens' development.
A systematic review and meta-analysis (of 20 studies) showed strong, consistent evidence of an association between bedtime access to or use of electronic devices and reduced sleep quantity and quality, as well as increased daytime sleepiness.
A longitudinal study of several thousand adolescents indicated that their level of social media usage was a significant predictor of their depression levels over the course of 4 years. For every increased hour spent using social media, teens show a 2% increase in depressive symptoms.
More than half of US middle-schoolers cannot distinguish advertising from real news, or fact from fiction. Many state that “If it’s viral, it must be true”. As a result, the next generation is poorly equipped to make sense of the world in their future decisions, whether with regard to drug use, risky sexual behavior, political extremism, or any other issue in their lives.
Posting alcohol-related messages on Facebook can lead to an increase in alcoholic behavior and alcoholic identity in real life. Research analysis of several hundred college students revealed that the more they posted alcohol-related messages, the more their real life social groups tended to shift a few months later towards friends with higher alcohol use, which then in turn linked to an increase in their own levels of drinking a few months after that.
of 18-44 year olds feel anxious if they haven’t checked Facebook in the last 2 hours. A recent survey of over 2,000 American adults indicates a high incidence of potential warning signs of Facebook addiction, particularly among 18-44 year olds, among whom 30% feel anxious if they haven't checked it for 2 hours. In fact, many are so hooked that 31% report checking it while driving and 16% while making love.
The greater your level of Facebook addiction, the lower your brain volume. MRI brain scans of Facebook users demonstrated a significant reduction in gray matter in the amygdala correlated with their level of addiction to Facebook. This pruning away of brain matter is similar to the type of cell death seen in cocaine addicts.
away from Facebook leads to a significant improvement in emotional well-being. In an experimental study of over 1,600 American adults (who normally used Facebook for up to an hour each day), deactivating Facebook accounts led to a significant increase in emotional well-being (including a reduction in loneliness and an increase in happiness), as well as a significant reduction in political polarization.
The more time you spend on Instagram, the more likely you are to suffer eating disorders such as orthorexia nervosa (a clinical condition where sufferers obsess about ideal foods so much that they stop eating adequately, seriously endangering their health). According to research, no other social media platform has this correlative effect. Scientists believe this is because images of food have more impact — and are remembered longer — than text, and because food images from "celebrity" Instagram users have a dramatically disproportionate influence on their followers' reactions to food. According to researchers, Instagram's algorithmic recommendations allow orthorexia sufferers to become trapped in an echo chamber of images which only show a distorted reality of food and how to react to it.
The number of "Likes" on a celebrity Instagram account can significantly change how you see yourself. An experimental study showed that when women were exposed to different celebrity Instagram images, their ratings of their own facial appearance dropped in direct proportion to the number of "likes" attached to each image they saw. Given that there are 1 billion active Instagram users, and some celebrities have more than 150 million followers, the scale of impact is vast.
In just 3 years, the proportion of plastic surgeons reporting patients who underwent cosmetic surgery for the sake of looking good on social media has quadrupled (from 13% in 2016 to 55% in 2019). The greatest increase is in patients under the age of 30, particularly teenagers. Doctors point to the role of social media in creating an exaggerated idea of what is normal in beauty and, as a result, distorting viewers' sense of their own appearance. According to clinicians, such Body Dysmorphic Disorder (BDD) (aka “Snapchat Dysmorphia”) is rapidly on the increase.
Facebook's internal training materials for its moderators state: "We allow praise, support, and representation of white separatism as an ideology, e.g. 'The US should be a white-only nation'". At the same time, Facebook notes that "Our Implementation Standards prohibit organizations and people dedicated to promoting hatred and violence against people based on their protected characteristics.”
Sustained disinformation campaigns, made viral by social media, can dilute, distract, and deny the reality of oppression. Within a week of George Floyd’s killing by police, social media platforms hosted a range of counter-information: one video asserting that the death was faked reached 1.3 million people, while thousands of posts on both Facebook and Twitter claimed that the police officer involved was an actor and that the event was faked by the state.
of the most shared Facebook posts about Black Lives Matter in June 2020 were critical of the movement, despite the fact that the majority of Americans support BLM, according to research by data analysis company CrowdTangle. Such fake representations of public opinion can play a significant role in distorting the basis for democratic dialogue and diminishing the momentum for social change. Even as societies take action to challenge racism and other forms of systemic oppression, social media platforms are being hijacked to discourage or even deny change.
Russia's propaganda program (IRA) primarily targeted African-Americans in the US between 2015-2017: fake African-American campaigns on Facebook and Instagram, such as "Black Matters US" and "Blacktivist", reached 15 million users and successfully prompted over 1.5 million users to click through to fake websites which purported to support African-American interests but promoted initiatives such as "Not voting is a way to exercise our rights".
Russia's IRA spread false information designed to create outrage about Black Lives Matter and deepen social division in the US. Research indicates that one of the IRA's major strategies was to use social media platforms to target conservative groups who supported the police or veterans and specifically feed them misinformation about BLM. The Oxford University report concludes that "the affordances of social media platforms make them powerful infrastructures for spreading computational propaganda".
Even after the shooting of 2 law enforcement officers by a Boogaloo activist, Facebook allows many Boogaloo groups to continue organizing. The tech giant argues that its June 2020 ban identifies and removes violent Boogaloo groups while leaving non-violent groups intact; external researchers disagree, noting that at least 20 violent Boogaloo groups have side-stepped Facebook's new restrictions and continue to operate on the platform.
With over 800 million users, TikTok promotes itself as a place for self-expression and unrestricted creativity. Yet its internal documents reveal a policy of downgrading content from users who do not fit normative ideals of gender, race, class, sexuality, or able-bodiedness, with moderators urged to censor users with "abnormal body shape", "too many wrinkles", or whose environment shows signs of poverty such as "cracks in the wall" or "old decorations".
Google image search systematically distorts the way that genders are represented in the workplace, leading to knock-on effects in our perception of real life, according to research. Analysis of Google's top 100 images for each of 45 different jobs demonstrated that Google displayed significantly fewer images of women compared to the actual percentage of women in each of these professions: for example, while in real life, 27% of CEOs are women, only 11% of images generated by a Google search depicted women. Further experiments showed that exposure to such search results significantly distorted viewers' later estimates of how many women worked in these fields.
Rigorous testing of industry AI algorithms, including Google search’s natural language processing, discovered significant stereotypical bias by gender, race, profession, and religion. For example, during "fill-in-the-blank" tests the AIs regularly associated the word "African" with words such as "poor". Researchers noted that GPT-2 showed less bias compared to the other AI language models, suggesting that this may be because GPT-2 was trained on the type of real-world datasets that are moderated to reduce bias (such as Reddit forums).
Twitter users were actively involved in Gamergate at its peak, with the hashtag #Gamergate being tweeted hundreds of thousands of times per month, mostly supporting the campaign of abuse and violent threats against specific female game designers and those who spoke up to support them. Gamergate played out primarily on Twitter, whose platform design and administration, according to researchers, make the platform particularly adaptable for online abuse— due to the highly public nature of tweets, the potential for mass targeting of individuals, and the fact that abusive responses can’t be removed.
Until 2019, Facebook allowed advertisers to use discriminatory targeting in ads: those advertising jobs, housing, and credit offers, could choose to exclude people on the basis of gender, race, disability and other characteristics, in direct contravention of federal laws such as the Fair Housing Act which bans discrimination. While Facebook has agreed to block such targeting, experts note that measures are not stringent enough and can be easily "gamed": for example, advertisers can still exclude users on the basis of their location.
The algorithmic basis of Google search makes it vulnerable to exploitation by those with enough capital to deploy search engine optimization tactics, often in ways that perpetuate existing forms of race and gender oppression. Researchers note that the porn industry has publicly boasted of how easily it can subvert Google safeguards to place porn in the first page of search results. In addition, commercial incentives for promoting degraded stereotypes of women, especially women of color, have knock-on effects for non-porn-related Google searches: an innocent Google search for "black girls" returned pornographic results for many years, via both ads and non-ad search results.
For many years, 92% of the ads that appeared when searching for a black-identified name on Google mentioned the word "arrest", according to Harvard researchers, compared to only 80% of the ads prompted by searching for white-identified names, a statistically significant difference (p < 0.01). Even where a white-identified name (e.g. "Karen Lindquist") belonged to a person with an arrest record, a Google name search still only generated neutral ads that did not mention arrest. In contrast, black-identified names attracted "arrest ads", even when no-one with this name had an arrest record.