The readings below are from the Center for Humane Technology.
These selected readings have been identified to further our understanding of the "Ledger of Harms" that can result from human interaction with technology and social media and the impact on: How We Make Sense of the World, our Social Relationships, Physical and Mental Health, Politics and Government, Systemic Oppression, Attention and Cognition, Future Generations, and How We Treat One Another.
Facebook's internal training materials for its moderators state: "We allow praise, support, and representation of white separatism as an ideology, e.g. 'The US should be a white-only nation'". At the same time, Facebook notes that "Our Implementation Standards prohibit organizations and people dedicated to promoting hatred and violence against people based on their protected characteristics.”
Sustained disinformation campaigns, made viral by social media, can dilute, distract, and deny the reality of oppression. Within a week of George Floyd’s killing by police, social media platforms hosted a range of counter-information: one video asserting that the death was faked reached 1.3 million people, while thousands of posts on both Facebook and Twitter claimed that the police officer involved was an actor and that the event was faked by the state.
of the most shared Facebook posts about Black Lives Matter in June 2020 were critical of the movement, despite the fact that the majority of Americans support BLM, according to research by data analysis company CrowdTangle. Such fake representations of public opinion can play a significant role in distorting the basis for democratic dialogue and diminishing the momentum for social change. Even as societies take action to challenge racism and other forms of systemic oppression, social media platforms are being hijacked to discourage or even deny change.
Russia's propaganda program (IRA) primarily targeted African-Americans in the US between 2015 and 2017: fake African-American campaigns on Facebook and Instagram, such as "Black Matters US" and "Blacktivist", reached 15 million users and successfully prompted over 1.5 million users to click through to fake websites which purported to support African-American interests but promoted initiatives such as "Not voting is a way to exercise our rights".
Russia's IRA spread false information designed to create outrage about Black Lives Matter and deepen social division in the US. Research indicates that one of the IRA's major strategies was to use social media platforms to target conservative groups who supported the police or veterans and specifically feed them misinformation about BLM. The Oxford University report concludes that "the affordances of social media platforms make them powerful infrastructures for spreading computational propaganda".
Even after the shooting of two law enforcement officers by a Boogaloo activist, Facebook allows many Boogaloo groups to continue organizing. The tech giant argues that its June 2020 ban identifies and removes violent Boogaloo groups, leaving non-violent groups intact; external researchers disagree, noting that at least 20 violent Boogaloo groups have side-stepped Facebook's new restrictions and continue to operate on the platform.
With over 800 million users, TikTok promotes itself as a place for self-expression and unrestricted creativity, yet its internal documents reveal a policy of downgrading content from users who do not fit normative ideals of gender, race, class, sexuality, or able-bodiedness, with moderators urged to censor users with "abnormal body shape", "too many wrinkles", or whose environment shows signs of poverty such as "cracks in the wall" or "old decorations".
Google image search systematically distorts the way that genders are represented in the workplace, leading to knock-on effects in our perception of real life, according to research. Analysis of Google's top 100 images for each of 45 different jobs demonstrated that Google displayed significantly fewer images of women compared to the actual percentage of women in each of these professions: for example, while in real life, 27% of CEOs are women, only 11% of images generated by a Google search depicted women. Further experiments showed that exposure to such search results significantly distorted viewers' later estimates of how many women worked in these fields.
Rigorous testing of industry AI algorithms, including Google search’s natural language processing, discovered significant stereotypical bias by gender, race, profession, and religion. For example, during "fill-in-the-blank" tests the models regularly associated the word "African" with words such as "poor". Researchers noted that GPT-2 showed less bias than the other AI language models, suggesting that this may be because GPT-2 was trained on the type of real-world datasets that are moderated to reduce bias (such as Reddit forums).
Twitter users were actively involved in Gamergate at its peak, with the hashtag #Gamergate being tweeted hundreds of thousands of times per month, mostly supporting the campaign of abuse and violent threats against specific female game designers and those who spoke up to support them. Gamergate played out primarily on Twitter, whose platform design and administration, according to researchers, make the platform particularly adaptable for online abuse: tweets are highly public, individuals can be mass-targeted, and abusive responses can’t be removed.
Until 2019, Facebook allowed advertisers to use discriminatory targeting in ads: those advertising jobs, housing, and credit offers could choose to exclude people on the basis of gender, race, disability, and other characteristics, in direct contravention of federal laws such as the Fair Housing Act, which bans discrimination. While Facebook has agreed to block such targeting, experts note that its measures are not stringent enough and can be easily "gamed": for example, advertisers can still exclude users on the basis of their location.
The algorithmic basis of Google search makes it vulnerable to exploitation by those with enough capital to deploy search engine optimization tactics, often in ways that perpetuate existing forms of race and gender oppression. Researchers note that the porn industry has publicly boasted of how easily it can subvert Google safeguards to place porn on the first page of search results. In addition, commercial incentives for promoting degraded stereotypes of women, especially women of color, have knock-on effects for non-porn-related Google searches: an innocent Google search for "black girls" returned pornographic results for many years, via both ads and non-ad search results.
For many years, 92% of the ads that appeared when searching for a black-identified name on Google mentioned the word "arrest", according to Harvard researchers, compared to only 80% of the ads prompted by searching for white-identified names, a statistically significant difference (p < 0.01). Even where a white-identified name (e.g. "Karen Lindquist") belonged to a person with an arrest record, a Google name search still only generated neutral ads that did not mention arrest. In contrast, black-identified names attracted "arrest ads", even when no-one with this name had an arrest record.
Chamath Palihapitiya, former VP of user growth at Facebook, has said that: “I can control my decision, which is that I don’t use that sh%t. I can control my kids’ decisions, which is that they’re not allowed to use that sh%t... The short-term, dopamine-driven feedback loops that we have created are destroying how society works.”
Steve Jobs, who was CEO of Apple for many years, told reporters that his kids don’t use iPads and that “We limit how much technology our kids use at home.”
Sean Parker, who was the founding president of Facebook, has publicly called himself "something of a conscientious objector" on social media and said, “God only knows what it's doing to our children's brains.”
Many modern Silicon Valley parents strongly restrict technology use at home, and some of the area’s top schools minimize tech in the classroom. In the words of one 44-year-old parent who used to work at Google, "We know at some point they will need to get their own phones, but we are prolonging it as long as possible."
“We’ve unleashed a beast, but there’s a lot of unintended consequences,” says Tony Fadell, inventor of the iPod and co-inventor of the iPhone. “I don’t think we have the tools we need to understand what we do every day… we have zero data about our habits on our devices.”
3X more likely
Children who have been cyberbullied are 3x more likely to contemplate suicide compared to their peers. The experience of being bullied online is significantly more harrowing than "traditional bullying", potentially due to the victim’s awareness that this is taking place in front of a much larger public audience.
Preschoolers who use screen-based media for more than 1 hour each day have been shown to have significantly less development in core brain regions involved in language and literacy. Brain scans indicate that the more time spent on screens, the lower the child's language skills, and the less structural integrity in key brain areas responsible for language. This is one of the first studies to assess the structural neurobiological impacts of screen-based media use in preschoolers; it raises serious questions as to how screen use may affect the basic development of young children's brains.
per day is the average amount of time 2-4 year olds spend on mobile devices. And 46% of children under the age of 2 have used a mobile device at least once, despite the American Academy of Pediatrics' recommendation that children under 2 should not use any screen media.
In a longitudinal study tracking over 200 children from the age of 2 years to 5 years old, children with higher levels of screen time showed greater delays in development across a range of important measures, including language, problem-solving, and social interaction. Analyses indicated that the level of screen time was significantly linked to the specific level of developmental delay 12-14 months later. This is a critical period in a child's life: as the researchers note, the current data indicates that exposure to excessive screen time during these early years can have serious effects "impinging on children's ability to develop optimally".
Children who experienced cyberbullying during their adolescence were significantly more likely to engage in risk-taking health behavior as adults. Boys who were cyberbully-victims were significantly more likely to smoke as young adults (p = 0.014) while teenage girls were significantly more likely to show a lifetime usage of drugs (p < 0.04).
The level of electronic media use before bedtime is significantly correlated with depression in adolescence. Measurements from several hundred teenagers indicate that this is primarily due to the impact on sleep: compared to video game players, teens with high levels of social media use experienced greater sleep difficulties, which in turn strongly correlated with higher levels of depression.
After nearly two decades in decline, high depressive symptoms among teen girls aged 13-18 rose by 65% between 2010 and 2017.
The amount of time spent using social media is significantly correlated with later levels of alcohol use. Research on several thousand teens demonstrated that while time spent on other forms of electronic media (including TV or video games) has comparatively little impact, the amount of time spent on social media is significantly linked to alcohol use 4 years later. Data indicates that social media has this unique effect through "social norming": repeatedly exposing teens to multiple images of their peers and role models drinking alcohol makes such behavior seem normal and acceptable, encouraging imitation.
US teens spend an average of more than 7 hours per day on screen media. This does not include time spent on screens for school or homework.
is the increase in the risk of suicide-related outcomes among teen girls who spend more than 5 hours a day (vs. 1 hour a day) on social media.
Media multi-tasking is significantly linked to later levels of attentional difficulties. Tracking more than 800 adolescents over time demonstrated that the degree to which young teens (aged 11-13) multi-tasked was a significant predictor of attentional problems 3 months later (p < 0.05), highlighting the potential impact of distracting digital environments on young teens' development.
A systematic review and meta-analysis (of 20 studies) showed strong, consistent evidence of an association between bedtime access to or use of electronic devices and reduced sleep quantity and quality, as well as increased daytime sleepiness.
A longitudinal study of several thousand adolescents indicated that their level of social media usage was a significant predictor of their depression levels over the course of 4 years. For every additional hour spent using social media, teens showed a 2% increase in depressive symptoms.
More than half of US middle-schoolers cannot distinguish advertising from real news, or fact from fiction. Many state that “If it’s viral, it must be true”. As a result, the next generation is poorly equipped to make sense of the world in its future decisions, whether with regard to drug use, risky sexual behavior, political extremism, or any other issue it will face.