Growing Calls in the US for Algorithmic Transparency Following the Passage of the EU Digital Services Act
By Geo Thelen (I)
New research questioning the algorithmic practices of some of the United States’ largest tech companies is prompting a call to action for social media algorithmic transparency.
Researchers in the Department of Computer Science at North Carolina State University released data in March from a study examining potential platform bias in Google’s Gmail email service as it relates to political fundraising. The NC State team used more than 100 Gmail, Yahoo, and Outlook accounts to collect more than 300,000 emails from May through November 2020. They found that Google’s free product (the nation’s most popular email provider) was “substantially more likely” to mark Republican fundraising emails as spam during the height of the 2020 campaign than Democratic solicitations. Republican-leaning emails landed in a recipient’s spam folder about half the time, while Democratic-leaning emails reached a recipient’s primary inbox nearly 80% of the time, according to the report.

A separate 2020 study by The Markup showed that the groups with the highest percentage of fundraising emails reaching a recipient’s primary inbox (as opposed to spam) included the American Enterprise Institute (99%), the Democratic Socialists of America (75%), and Democratic presidential candidate Pete Buttigieg (63%). Although some disparities result from user behavior (marking messages as spam and similar actions), the NC State study noted that “the percentage of emails marked by Gmail as spam from the right-wing candidates grew steadily as the election date approached while the percentage of emails marked as spam from the left-wing candidates remained about the same.” The researchers concluded, “Fairness of spam filtering algorithms is an important problem that needs dedicated attention from email service providers.”
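The study’s core measurement is straightforward to sketch: group observed emails by month and political leaning, then compute the fraction of each group that the filter routed to spam. The snippet below is a minimal illustration of that bookkeeping; the records and numbers are invented for demonstration and are not data from the NC State study.

```python
from collections import defaultdict

# Hypothetical observations: (month, political_leaning, landed_in_spam).
# Invented for illustration only -- not figures from the actual study.
emails = [
    ("2020-05", "right", True), ("2020-05", "right", False),
    ("2020-05", "left", False), ("2020-05", "left", False),
    ("2020-11", "right", True), ("2020-11", "right", True),
    ("2020-11", "left", True),  ("2020-11", "left", False),
]

def spam_rates(records):
    """Return {(month, leaning): fraction of emails marked as spam}."""
    spam = defaultdict(int)
    total = defaultdict(int)
    for month, leaning, in_spam in records:
        total[(month, leaning)] += 1
        spam[(month, leaning)] += in_spam  # True counts as 1
    return {key: spam[key] / total[key] for key in total}

rates = spam_rates(emails)
```

Comparing `rates[("2020-05", "right")]` against `rates[("2020-11", "right")]` (and likewise for the left-leaning group) is the kind of month-over-month comparison behind the study’s claim that spam rates for one side grew as the election approached.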
The release of the Gmail data comes at a time when Elon Musk’s purchase of Twitter and the European Union’s passage of the Digital Services Act are sparking conversations about platform bias, manipulative algorithmic practices, and the overall issue of algorithmic transparency.
The night before the Twitter deal with Musk was announced on Monday, April 25, many public-figure accounts began seeing dramatic swings in their follower counts. “Organic” fluctuations, according to Twitter’s official statement. But accusations of “secretly” suppressing certain accounts to make them less visible (known as shadow banning) have been leveled at both Twitter and Facebook. Sudden losses in followers could point to an algorithmic “cleanup” within Twitter once the deal was imminent, eliminating bot-generated follows and fake accounts (something many Twitter users have been calling for). Wild gains in followers could be a sign that algorithmic restrictions on particular accounts were being lifted or loosened, letting new eyeballs find them organically through user-engagement algorithms rather than some closed-source, third-party basement bro code.
According to an analysis compiled during the week of the Twitter deal by USA Today, the congressional Twitter accounts recording the most significant losses of followers included Adam Schiff, Nancy Pelosi, Alexandria Ocasio-Cortez, Maxine Waters, Bernie Sanders, and Chuck Schumer. Figures from social media analytics tracker SocialBlade showed Ocasio-Cortez losing more than 10,000 followers in the first 24 hours after the deal was announced, while Speaker Pelosi lost 13,000+, Elizabeth Warren 14,000+, and Bernie Sanders nearly 19,000 over the 48 hours from April 25 to April 27. Over the same period, Republican politicians saw the largest gains, with Florida Rep. Matt Gaetz adding almost 25,000 followers, Georgia Congresswoman Marjorie Taylor Greene 50,000+, and Ted Cruz of Texas more than 53,000. Taylor Greene gained 41,000+ followers in the first 24 hours, even as her personal account remained suspended by Twitter.
The issues of content safety, legality, and inherent platform bias tackled by the EU’s Digital Services Act reflect a broader concern: the influence an employee’s personal bias can have on the programming and operation of a content platform. Perhaps a correlation can be observed in data from OpenSecrets on tech-company employee political donations during the 2020 election cycle. The (public) donation data for employees of six California-based media companies show that at five of the six companies, more than 80% of donations went to Democratic causes. Twitter employees sent 98% of their political donations to the Democratic Party, and Facebook/Meta employees gave more than 80% of their declared financial support to Democratic causes. (Netflix topped the list, with over 99% of its employees’ political contributions going to Democratic causes.) Did these employees’ personal biases play into engineering, programming, and curation choices? Or the vetting of information? Or fact-checking? Or canceling Dave Chappelle?
Why does any of this matter?
Because when you lose trust, you are left with distrust. When you lose integrity, you no longer have integrity. People are losing trust in their government and media institutions, and belief in those institutions’ integrity. If Gmail, Facebook, and Twitter are proven to have been algorithmically manipulated in a way that favors one party over the other, then your feelings of overall distrust are validated (even if you don’t identify with either major party). Questions about the validity of all other digital content, security, and financial systems will follow. Will we ever hear chants in the streets of “more constants, fewer variables” and “free open-source algorithms”? Probably not. But while the technical aspects and achievability of “algorithmic transparency” may be hard to see, consumers are starting to see through what they are NOT seeing. Until that transparency comes, expect more of the same.
Geo Thelen (I) of Thelen Creative is a digital content creator. Observer / Participant
Follow @instatoblast on the ’gram
Sources:
North Carolina State University, Department of Computer Science, “A Peek Into The Political Biases in Email Spam Filtering Algorithms During US Election 2020” / USA Today / The Markup / Thelen Creative / SocialBlade social media analytics / The Digital Services Act & Digital Markets Act, EU, 2022