Amicus Curiae: Social Media and the Cost of Privacy

By Patricia Lu

Social media platforms and the privilege they hold over our data are the elephant in the room that’s finally being talked about. With the Cambridge Analytica scandal and the commencement of the General Data Protection Regulation (GDPR) in Europe on May 25, it is important to understand the extent to which our privacy is being used for profit.

It is generally accepted that some personal information needs to be exchanged in order to enjoy the benefits of social media. Over 3 billion people in the world use social media, but it is only recently that we are getting a glimpse into what this actually entails. The Cambridge Analytica scandal, which revealed that the data analytics firm harvested the personal information of over 50 million Facebook users, marked a turning point in this conversation. This is not only due to the extent of the breach. It also revealed just how little control we have over our own data, and how little we know about how it is being used.

"Our privacy is essentially their currency."

Data harvesting is the social media business model. The primary source of income for these sites is advertising, and more users mean more ad revenue. In 2014, 89% of Google’s $66 billion revenue came from ads. And as of 2017, Facebook earned an average of $20.21 per user each year – even more if the user comes from a big-market economy like the United States. Each ad is specially designed and targeted using the billions of user data points available, which can give insights into personality, ethnicity, income, and more. Sally Hubbard, a tech law and antitrust enforcement expert, expressed it best when she said: “The business model of a company like Facebook is surveillance; they’re harvesting data, and that data can and will be misused.” Our privacy is essentially their currency.

Many of us accept this social contract in order to enjoy free services. And this surveillance-based, ad-focused business model is what keeps social media alive. However, there is an imbalance of information and control between the two sides. When accepting the terms and conditions of an app or a website, most of us don’t fully understand what our consent entails. Beyond that, we lack consent and control over our data once it becomes part of their domain. Much of the backlash over Cambridge Analytica stemmed from the fact that the majority of those affected never consented to their data being used. It is often argued that there should be no problem if you have nothing to hide. But this mindset is only justified if one has full knowledge of, and has consented to, how their information is being used. There is a gap between what we believe and the reality of what is happening.

"[Data breaches] bring unintended security risks. For example, location tracking from exercise app Strava has led to the disclosure of several military bases and the identities of soldiers around the world."

Our worries about our privacy are justified. On a personal level, our freedom of choice and information is curtailed. Because the algorithms embedded in social media predict and choose what we see, whether news stories or ads, what we are exposed to becomes narrower and less diverse. These “filter bubbles” create polarization, which in turn affects our understanding of the world and our opportunity to engage in critical discussions. There is also no guarantee that the information we see is the same as what other individuals see. This hurts our democracy. We lose the plurality of voices, and it becomes an endless cycle of reinforcement, where our views are manipulated according to what online media chooses for us. In a sense, the way we are evaluated, categorized and rated, which in turn affects how we are treated, is a form of discrimination. Social media becomes the gatekeeper to our rights to free speech and information, through its control over content.

On a global scale, besides the well-covered concerns over political campaign manipulation, these breaches also bring other unanticipated security risks. For example, location tracking from the exercise app Strava has led to the disclosure of several military bases and the identities of soldiers around the world. And because what gets the most clicks often gets the most exposure, the dangers of Facebook’s algorithms are felt even more acutely in developing countries. The proliferation of radical or provocative misinformation in countries with weaker institutions has led to violent conflict in places like Sri Lanka and Indonesia, where tensions between cultural groups already exist.

There is a sense of powerlessness felt as a user. The prevalence of social media means that avoiding it is often not a choice. But even when we feel affronted, there is little recourse to pursue. The onus should not be on the user.
New technology brings new challenges, which naturally evoke slower responses from government. The added risks involved in this privacy issue make the stakes even higher to get it right. But leaving industry to self-regulate has clearly been the wrong answer. Despite Mark Zuckerberg’s assertions that “keeping people safe will always be more important than maximizing [their] profits,” their business model, founded on our personal data, says otherwise.

What is clear, however, is the need for stronger regulation to keep these companies accountable and compliant. The European Union (EU) has responded by implementing the GDPR, giving data regulation the punch that has long been missing from legislation of this kind. Not only will companies be required to demonstrate what data they collect, they are also limited to using that data only for its intended purpose. Fines are now in place for non-compliance – which may actually impact large companies, as these can amount to up to 4% of global revenue – and the right to be forgotten can be invoked if you want your data to be deleted. There are concerns that this may come at the cost of innovation, and only time will tell whether this is the right response. However, it does show the lengths the EU is willing to go to in order to protect the privacy of its citizens. Strong legislation may be what is needed to push big data towards transparency and accountability.

New Zealand is also on its way to modernizing its privacy laws. Though the Privacy Bill is based on 2011 recommendations, it does give extra protection, such as mandatory reporting of breaches and improved trans-border data flow protections. But it lacks the strength of the GDPR and does not yet have the power to penalize these companies. This is something that John Edwards, New Zealand’s Privacy Commissioner, acknowledges.
The right to be forgotten, the right to object to automated processing, and stronger algorithm transparency are just some of the aspects he identified as missing from the Privacy Bill. Though these are unlikely to be part of our next privacy laws, it shows that the Privacy Commissioner understands the concern and is inclined towards more vigorous privacy protection.

The idea of global consumer data principles has also been floated, rooted in keeping control in the hands of consumers and ensuring company compliance. Given the borderless nature of social media and privacy, this may not be such a far-off prospect. As privacy researcher Gennie Gebhart emphasized, what is needed is “not only informed consent but ongoing consent”. Whichever form regulation takes, transparency and trust are at the core of the answer.

_The views expressed in the posts and comments of this blog do not necessarily reflect those of the Equal Justice Project. They should be understood as the personal opinions of the author. No information on this blog will be understood as official. The Equal Justice Project makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site. The Equal Justice Project will not be liable for any errors or omissions in this information nor for the availability of this information._