Our manifesto
Algorithms are the gatekeepers of our information.
Online platforms are the gateways to the majority of the internet. There is so much content that the only way to access it is for an algorithm to surface it for us. This algorithmic curation takes many forms: a list of search results on Google, a selection of posts in Instagram's feed, a recommended product on Amazon.
On YouTube alone, 70% of video views come from algorithmic recommendations.
Algorithms select and rank content by crunching massive amounts of data. The resulting models impact billions of lives, but their decision mechanisms remain opaque: users do not know what they optimize for, or which content is favored.
Moreover, what could be known about these systems is withheld by the platforms: they conceal the objectives, design, and training data of their models, and provide no APIs for researchers to monitor the algorithms' behavior.
There is no single right way to select a handful of pieces of content from billions of options. To do so, algorithms must make assumptions about the world and optimize for a particular metric of success, one defined by the platforms themselves.
Just like human editors, algorithms always have biases, whether explicit or implicit.
Algorithms don't have your best interest in mind.
Social media companies are legally bound to maximize shareholder profit. This profit is closely tied to the number of ads shown to users. As a result, maximizing engagement (clicks, comments, and watch time) is often prioritized over the interests of the user.
According to Mark Zuckerberg, extreme and sensationalist content drives more user engagement.
Recommender systems have learned to exploit this well-known human bias by disproportionately promoting divisive or conspiratorial content.
These biases can in turn be exploited to hijack the algorithm's considerable influence, and some actors, typically those with the strongest political or financial motives, have become experts at this game of computational propaganda.
The lack of transparency makes it impossible to ensure a level playing field, for both users and content creators.
Transparency is necessary to make algorithms trustworthy.
We believe that platforms should be responsible for the content they promote. We therefore work to provide transparency about the behavior of their algorithms, so that platforms can be held accountable for the important curatorial decisions they make.
For instance, speaking publicly about harmful conspiracy theories that YouTube had promoted hundreds of millions of times led the platform to take more than 30 measures to reduce the amplification of harmful content.
The job of platforms, moderating public forums and designing the algorithms that give access to content, is not an easy one. There is no perfect moderation, nor a perfect algorithm. There are trade-offs.
We believe that users should be in control of what they see, instead of being unknowingly influenced by a system with misaligned incentives.
Through transparency and public awareness, we aim to reduce the information asymmetry between users and platforms, a necessary first step for users to regain agency and control over the information they engage with.