“Surveillance is the business model of the internet”—Bruce Schneier, security expert and privacy specialist
Google, the company now almost synonymous with looking up information on the internet, launched Street View in 2007, a service that allows users to view panoramic images of streets around the world. These seemingly harmless images of streets and buildings, captured by Google vans driving along streets, are an example of the privacy challenges the world faces today. In 2013, Google had to pay a fine of USD 7 million for violating privacy during the mapping process: the vans had been scooping up passwords, e-mails and other personal information from computers while driving by houses.
Street View was eventually restricted from operating in several countries. But as Shoshana Zuboff of the Harvard Business School—author of the recently published "The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power"—argues, what people witnessed with Street View was not a one-off example, but the very essence of the business model of the big data giants. As she writes in a related paper, the "modus operandi is that of incursion into undefended private territory until resistance is encountered." And without adequate and effective laws in place, tech companies have faced little resistance in encroaching ever further into the private domain. Pioneered by Google, surveillance capitalism is now the essence of how Facebook and Microsoft operate, and what other tech companies want to achieve at scale.
In 2013, after Snowden leaked classified government information about the global surveillance apparatus, it was not the fact that governments put citizens under surveillance that came as a shock; it was the range of information these programmes collected, and the "mass" nature of that collection, that brought the privacy debate into focus. But it is not only states that collect information en masse. The Brexit referendum and the Cambridge Analytica scandal in the aftermath of the 2016 US Presidential Election highlighted how the issue goes beyond the state to business models in the digital age. This is what Zuboff calls surveillance capitalism—a form of capitalism in which human behaviour is the raw material collected for the purposes of prediction and modification of future behaviour, and in which the individual has little control over how their data is processed and used.
"The Age of Surveillance Capitalism" describes how Google, in its early years of trying to build a brilliant search engine, came upon data that was being collected but had no apparent use. When the dot-com bubble burst and the founders of Google came under increasing pressure to deliver revenue, the company learned that this leftover data—collected but not used to improve its services—could be processed to create "prediction products". These could then be sold to advertisers; as time passed, both the extent of the information collected and the sophistication of the analysis increased manifold. With time and increasing competition, she writes, surveillance capitalists realised that "extracting human experience is not enough. The most-predictive raw-material supplies come from intervening in our experience to shape our behaviour in ways that favour … commercial outcomes".
This new model was focussed on data extraction rather than the production of new goods. Google, Zuboff writes, moved from reinvesting excess data in the betterment of its services to this new model, in which the data collected was turned into products to be sold to third parties. The extent of information collected by Google and Facebook has been widely documented, especially in recent years. What Google and Facebook know includes, but is not limited to, one's location and search history; based on these, Google maintains an ad profile on every user.
So why is this a problem, rather than just a novel—revolutionary even—business model? Zuboff, and countless other privacy advocates, have argued that surveillance capitalism strikes at the core of democracy and individual sovereignty. Examples of the use of personal data to predict and influence political outcomes have been rife in recent years: from President Obama's 2008 campaign (which used online behavioural data for predictive modelling) to Cambridge Analytica's harvesting of data from Facebook profiles without consent, data used first by Ted Cruz's campaign in 2015 and then by Donald Trump's. As early as 2012, Facebook was openly celebrating its ability to influence voter turnout during elections by manipulating what is presented to users on social media.
Whether it is for influencing our political decisions or for selling us more soap, the issue is about meaningful autonomy, participation and the violation of privacy. Of course, surveillance capitalists claim that user consent is obtained before any data is collected (although, as the Street View example shows, that is not always the case). But even the "I Agree" clicks that are the norm for obtaining consent amount to no real consent at all, since they give users little choice. A 2008 study titled "The cost of reading privacy policies" found that reading all the privacy policies one comes across in a given year would take 76 full working days, at a national opportunity cost of USD 781 billion. Central to exercising the right to privacy is the ability to participate meaningfully in deciding what is shared and with whom. It is the asymmetry of knowledge, and of the power to decide, that makes all of this problematic. It is not just about Facebook knowing my birthday in exchange for a free service, but its use of my data, in ways I have no control over, to be processed into "products" to be sold.
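A rough back-of-envelope calculation shows the scale of that "76 working days" estimate. The per-policy inputs below are illustrative assumptions, not figures from the study itself; they are chosen only to show how quickly the hours accumulate:

```python
# Back-of-envelope check of the "76 working days" figure.
# Both inputs are hypothetical assumptions for illustration,
# not numbers taken from the 2008 study.
POLICIES_PER_YEAR = 1460    # assumed: roughly 4 privacy policies encountered per day
MINUTES_PER_POLICY = 25     # assumed: average time to read one policy

hours_per_year = POLICIES_PER_YEAR * MINUTES_PER_POLICY / 60
working_days = hours_per_year / 8   # assuming an 8-hour working day

print(f"{hours_per_year:.0f} hours, or about {working_days:.0f} working days, per year")
```

Even with modest per-policy reading times, the total quickly reaches hundreds of hours a year—which is precisely why "I Agree" clicks cannot stand in for informed consent.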
And all this is very relevant for Bangladesh today. In 2018, a well-known tech start-up in Bangladesh was found to be collecting private data, including the SMS messages and contact lists of its users. When confronted, the company's response was simply that the data was being stored securely. The entire issue of consent is absent from legal and public discussion. An entire industry has developed in the country around the selling of contact information (resulting in floods of unsolicited promotional texts that users cannot opt out of). A cursory survey of some of the immensely popular smartphone apps in the country shows that, by default, they are allowed to access the location, storage, and even contact information of users.
In 2016, the EU's General Data Protection Regulation (GDPR) was adopted with the aim of giving people control over their personal data, recognising certain "digital" rights that individuals are entitled to regarding how their personal data is collected and used. The extent to which this can curtail the mass collection of information by corporations may be debated, but in Bangladesh this conversation about data privacy and consent is entirely absent. The laws that could have attempted to define and defend privacy rights, and to set limits on collection and use, have instead become most famous for policing the internet.
Surveillance capitalists claim that all their incursions are innovative and beneficial, citing examples such as live monitoring of driving speeds, which lets insurance companies lower premiums for those who stay within speed limits. But as Zuboff argues in her book, this can be no utopia at all, but rather a concentration of "instrumentarian power" (the power to shape human behaviour), in which the social contract, which rests on trust, is broken: "Deception-induced ignorance is no social contract, and freedom from uncertainty is no freedom".
Moyukh Mahtab is a Master’s student of development studies and a former journalist of The Daily Star.