Two-Year Facebook Audit Reveals Deep Anti-Conservative Biases In Algorithm Used To Police Speech

A civil rights audit two years in the making has revealed algorithmic biases that some argue are responsible for Facebook’s practice of “policing speech,” especially that of conservatives.

This week, the social media behemoth published the results of its civil rights audit. While the bulk of the 89-page report focuses on workplace diversity and content policies, it devotes an entire chapter to algorithmic bias and the company’s growing reliance on artificial intelligence to moderate content on the platform.

“As Silicon Valley increasingly outsources censorship to opaque AI algorithms,” RealClearPolitics’ Kalev Leetaru asks, “what are the possible inadvertent consequences for democracy?”

Leetaru explains that, like most of Silicon Valley, Facebook has replaced its army of human content moderators with artificial intelligence, saving tons of money and allowing the platform to review exponentially more content.

AI has become central to Facebook’s future, Leetaru goes on, noting that “algorithm” appears 73 times in the report: “Just one year ago, 65% of the hate speech posts the company removed were first identified by one of its algorithms before any human reported it. By March of this year, that number had increased to 89%.”

Facebook says in the report that it “removes some posts automatically, but only when the content is either identical or near-identical to text or images previously removed by its content review team as violating Community Standards, or where content very closely matches common attacks that violated policies. … Automated removal has only recently become possible because its automated systems have been trained on hundreds of thousands of different examples of violating content and common attacks. … In all other cases when its systems proactively detect potential hate speech, the content is still sent to its [human] review teams to make a final determination.”
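In engineering terms, what Facebook describes is fingerprint matching against a blacklist of previously removed content, plus a fuzzier near-duplicate check. Here is a minimal sketch of that general technique in Python, assuming a normalized-hash blacklist and a word-overlap similarity score; the function names and threshold are hypothetical, and the company’s actual systems are proprietary and far more sophisticated:

```python
import hashlib
import re

# Illustrative only: Facebook's production systems are proprietary.
# Function names and the similarity threshold here are hypothetical.

def normalize(text: str) -> str:
    """Lowercase, drop punctuation, collapse whitespace so trivial edits
    (casing, extra spaces) still map to the same fingerprint."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def fingerprint(text: str) -> str:
    """Stable hash of the normalized text, used for exact-match lookups."""
    return hashlib.sha256(normalize(text).encode()).hexdigest()

def jaccard(a: str, b: str) -> float:
    """Word-set overlap: a crude stand-in for a 'near-identical' score."""
    sa, sb = set(normalize(a).split()), set(normalize(b).split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Content that human reviewers already removed as violating.
removed_texts = ["example of a previously removed attack"]
removed_fingerprints = {fingerprint(t) for t in removed_texts}

def auto_review(post: str, threshold: float = 0.9) -> str:
    if fingerprint(post) in removed_fingerprints:
        return "remove"               # identical to previously banned content
    if any(jaccard(post, t) >= threshold for t in removed_texts):
        return "remove"               # near-identical to banned content
    return "send_to_human_review"     # everything else goes to a person
```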

Leetaru explains that automatically removing reposts of previously deleted hate speech is “similar to the blacklists of known terrorism or child exploitation imagery the company uses and is relatively uncontroversial, preventing users only from reposting previously banned content.”

Leetaru goes on:

Yet the acknowledgement that the company now goes beyond this to remove “content very closely match[ing] common attacks that violated policies” shows that Facebook’s algorithms now actually make their own decisions about what kinds of speech to prohibit. That almost 90% of the company’s hate speech removals were initiated by an algorithm means these black boxes of software code now codify the de facto “speech laws” of modern society.

Despite their enormous power, these algorithms are among the company’s most protected trade secrets, with even U.S. policymakers kept in the dark on their functioning. Indeed, even when it comes to the most basic of details of how often the algorithms are wrong or how much they miss, social media companies release only carefully worded statements and decline to comment when asked for the kinds of industry standard statistics that would permit further scrutiny of their accuracy.
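The “industry standard statistics” Leetaru has in mind are metrics such as precision, recall, and false positive rate, all derived from a confusion matrix of moderation decisions. A minimal sketch follows, with placeholder counts standing in for the figures the companies decline to release:

```python
# The counts below are placeholders, not real Facebook figures; the point is
# only to show which statistics would permit outside scrutiny of accuracy.

def moderation_stats(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard accuracy metrics from a moderation confusion matrix.

    tp: violating posts correctly removed
    fp: benign posts wrongly removed (over-enforcement)
    tn: benign posts correctly left up
    fn: violating posts the system missed (under-enforcement)
    """
    return {
        "precision": tp / (tp + fp),            # of all removals, share correct
        "recall": tp / (tp + fn),               # of all violations, share caught
        "false_positive_rate": fp / (fp + tn),  # benign speech wrongly removed
    }

print(moderation_stats(tp=890, fp=110, tn=9800, fn=200))
```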

“AI is often presented as objective, scientific and accurate, but in many cases it is not,” the sixth chapter of the report begins. “Algorithms are created by people who inevitably have biases and assumptions, and those biases can be injected into algorithms through decisions about what data is important or how the algorithm is structured, and by trusting data that reflects past practices, existing or historic inequalities, assumptions, or stereotypes.”

The chapter also notes that “as algorithms become more ubiquitous in our society it becomes increasingly imperative to ensure that they are fair, unbiased, and non-discriminatory, and that they do not merely magnify pre-existing stereotypes or disparities.”

Facebook has developed several tools to help address algorithmic bias, but the audit report acknowledges that they can only do so much to solve the problem. “Truly mitigating bias,” Leetaru contends, “requires a diverse workforce that can see the kinds of biases such tools aren’t designed to capture.”
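The report does not describe those tools in detail, but bias-detection tooling of this kind typically compares error rates across groups. Below is a minimal, hypothetical sketch of such a check; the group labels and sample records are invented for illustration and this is not Facebook’s actual tooling:

```python
from collections import defaultdict

# Hypothetical sketch of one check a fairness audit might run: compare false
# positive rates (benign posts wrongly flagged) across groups of speakers.

def false_positive_rates(records):
    """records: iterable of (group, flagged_by_model, truly_violating)."""
    flagged_benign = defaultdict(int)  # benign posts the model flagged
    total_benign = defaultdict(int)    # all benign posts, per group
    for group, flagged, violating in records:
        if not violating:
            total_benign[group] += 1
            if flagged:
                flagged_benign[group] += 1
    return {g: flagged_benign[g] / total_benign[g] for g in total_benign}

sample = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
]
# A large gap between groups is the disparity an audit would surface; as
# Leetaru argues, someone still has to decide that the gap is a problem.
print(false_positive_rates(sample))  # {'group_a': 0.5, 'group_b': 0.0}
```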

“A key part of driving fairness in algorithms is ensuring companies are focused on increasing the diversity of the people working on and developing FB’s algorithms,” the report states.

“…Without a sufficiently diverse workforce,” Leetaru warns, “the biases found by those audits might not be seen as problems.”

“As Facebook races to narrow its acceptable speech rules and flag posts by President Trump and other Republicans,” he goes on, “its algorithms will in turn increasingly learn to flag posts by conservatives as ‘hate speech.’ An AI fairness audit might flag that the algorithms are heavily biased against conservatives, but if Facebook’s workforce skews largely liberal, that bias might be considered a desired feature rather than a problematic one to be removed.”

A 2017 algorithmic bias audit by Facebook showed that conservative news outlets would be vastly more impacted than any others. Joel Kaplan, the company’s Vice President of Global Public Policy and its most senior Republican, intervened, pushing engineers to tweak the algorithm to address its unequal impact on conservative news outlets. Prior to Kaplan’s move, the company had viewed the extra scrutiny of conservative voices as a useful tool for reining in misinformation.

“In other words,” Leetaru argues, “AI bias audits and algorithmic fairness initiatives mean little when a company’s workforce is so homogeneous that the biases those tools uncover are considered positives to be encouraged rather than biases to be removed. This is especially important given that Facebook’s employee base, like the rest of Silicon Valley, skews overwhelmingly liberal.”

“Orwell had it wrong when he saw the government as [the] ultimate censor,” Leetaru concludes. “As Facebook increasingly outsources its future to opaque AI algorithms that even it doesn’t fully understand, the future of our nation is increasingly being decided by a small cadre of unaccountable companies building secretive AI processes that have become the new de facto laws of acceptable speech in the United States.”
