Citizen Erased

And Suddenly They Wanted to be Regulated

Big Tech is watching you. But who's watching Big Tech?

“EU rejects Facebook’s proposal for online regulation” reads the headline of a Financial Times piece[1], published shortly after Facebook CEO Mark Zuckerberg’s meeting with EU executives in Brussels on February 17. In recent years, and especially after the sweeping GDPR framework, the European Commission has gained a reputation as a strict, no-nonsense regulator, prompting Big Tech companies to flock to Brussels every now and then to meet with EU executives. This mythos was strengthened once more by yesterday’s rejection of Facebook’s proposal. On February 19 the European Commission unveiled a holistic vision of the EU’s digital strategy in a document titled “Shaping Europe’s Digital Future,” drafted by Margrethe Vestager, the Commission’s Executive Vice-President. Ursula von der Leyen tasked Vestager with this job shortly after von der Leyen was elected the new President of the European Commission[2].

Thus, in less than a month Brussels welcomed almost all of Silicon Valley’s key figures[3]. The latest to visit Belgium’s capital in this series of lobbying events was Mark Zuckerberg, who met with Margrethe Vestager; Vera Jourova, the European Commission’s Vice-President in charge of Values and Transparency; and Thierry Breton, European Commissioner for the Internal Market and Services.

This visit came shortly after Alphabet (Google’s parent company) CEO Sundar Pichai and Apple’s Senior Vice-President for Artificial Intelligence (AI), John Giannandrea, had both visited Brussels to discuss AI policy, ahead of the EU’s plan to regulate AI deployment. What is more, Zuckerberg’s visit coincided with two important publications: first, a white paper published by Facebook itself[4], describing what it considers a best-practices approach to regulating online content – not only on its own platform but elsewhere too – and, second, an op-ed by Facebook’s CEO in the Financial Times[5], asking for more regulation, even if that means “[hurting Facebook] in the short term but [helping] in the long term.”

Before discussing further this recent crescendo from Big Tech companies around the new Commission’s regulatory endeavours, one should first take a step back and examine the context in which these events unfold. Scholars who study regulation have mainly outlined three regulatory models: top-down regulation, self-regulation, and co-regulation. Naturally, each of these models comes with a specific agenda.

Top-down regulation is passed in the form of legislation by public authorities at the (supra)national level, and can be seen as interventionist and, at times, obsolete, depending on who is voicing the critique. A recent example is the “Netzwerkdurchsetzungsgesetz,” or “NetzDG” for short, a law passed by the German Bundestag in June 2017 requiring platforms such as Facebook and Twitter to remove harmful or “fake” content within 24 hours. The self-regulation model is primarily a voluntary commitment by platforms to adhere to certain ethical standards; it operates principally through private channels and aims to privatise regulation. An example is the “International Fact-Checking Network”[6], which certifies fact-checkers and is a key partner of Facebook. This model holds a special place in the American business tradition, aiming to keep public scrutiny at bay, and it was also the dominant model of (non-)regulation in the EU until 2015.

Finally, we have the co-regulation model, which is often presented as the “Holy Grail” by corporations, as well as by some public officials, experts and scholars. The argument in favour of such a regulatory approach could be summarised as follows: corporations have the “know-how,” while governments have the means to impose legislation and confer public legitimacy, so why not combine the best of both worlds? However, what is craftily obscured is that this type of regulation is being methodically employed to promote self-regulation, avoid public accountability and preserve oligopolistic practices.

Circling back to the latest developments in the regulatory arena, we can see Big Tech’s lobbying efforts as a form of pre-emptive damage control. Sundar Pichai talked about AI’s “potential negative consequences,”[7] and Mark Zuckerberg asked for more taxes[8] on Facebook and other platforms, as well as for more regulation, because “private companies should [not] make so many decisions alone when they touch on fundamental democratic values.” It certainly feels as though the target audience is investors rather than policymakers, given that few would want to invest in companies that do not at least give the impression of making an effort – all the while lowering shareholders’ profit expectations, just to be on the safe side.

So their goal is rather straightforward: minimise the compliance costs that come with new regulation by reshaping it to fit their needs. While platforms are demanding new rules, they want to be the ones to write them and to have public authorities sign off on them. Shaping the new internet governance to fit their interests is, of course, part of their overall strategy to make their platforms and AI algorithms irreplaceable at the core of every online activity.

It is true that these platforms hold some of the most powerful AI technologies, from automated advertising to self-driving cars. What is often left unmentioned, however, is that we, the users, have trained these algorithms, most of the time unbeknownst to us – from selecting images in an “Are you a robot?” CAPTCHA to telling Siri to play a specific song. Yet it is not only a matter of the unpaid, or even unintended, labour that we provide to platforms. These platforms are actively appropriating our lives and social interactions in spaces that are ostensibly accessible for free – but are not.

This is done thanks to data. Data are not a natural resource that someone can simply extract and utilise. Data are traces of our online presence, which capitalist platforms have been structured to seamlessly appropriate and profit from. Nick Couldry and Ulises Mejias have named this phenomenon “data colonialism,” and it implicates not only platform capitalism but capitalism at large.


What Facebook seeks to accomplish by publishing a white paper with regulatory guidelines that criticises existing national laws, like the NetzDG, is another way of passing off responsibility. It is another way of presenting itself as a victim while coming off as the expert who should handle regulation. And so far it has succeeded, by presenting the co-regulation framework as the most fair and balanced solution. The truth is that Facebook and other platforms do not truly seek a co-regulatory framework, since regulation means adhering to imposed measures. Instead, they seek official validation of their own demands; it might sound something like “give us a stamp of approval and we’ll take it from there.” More importantly, they aim to consolidate their position as state-like organisations, with the power to negotiate with and consult governments directly.

The European Commission has been remarkably slow to take action, and only recently began to push harder for stricter regulation, notably on Margrethe Vestager’s initiative; but it still won’t address issues of cultural and political contingency on these platforms. Put simply, the EU demands fairness, transparency in data handling, and respect for privacy, but asks little about socio-political power asymmetries and the unprecedented effect of platforms on cultural production and distribution. Not everything can be analysed through the lens of competition or privacy practices.

So, where do we go from here? The new Commission presented its holistic digital strategy on Wednesday. Germany has pretty much carved its own path to online regulation, while France is considering a similar approach, with a law against “online hate speech”[9] still being discussed. Silicon Valley CEOs use every trick in the damage-control book to minimise costs and save face. I believe we are at a very exciting point in time. AI is here to stay, and so are the platforms, which could not exist without AI technologies. We ought not to be technophobes, but we should be sceptical. And we have to think beyond AI’s consequences for data privacy or safety; these concerns are crucial, but they have monopolised our attention. It is high time we scrutinised platforms’ effects on the social and cultural facets of our lives. We need a radicalisation of social and political thinking, so that we start reflecting on platforms not merely as private companies but as players in a larger, contested arena of politics.

