Data mining: Quietly changing our social contract

College selection, highlighted content on social networks, predictive justice and medicine, autonomous vehicles, crowd monitoring... Algorithms are now used on a massive scale in many areas of political, social and economic life.

The terms "algorithm", "data" and "artificial intelligence" (AI) are often treated as magic words. Some see these "tools" as infallible, perfectly rational entities whose help could prove invaluable for delegating certain tasks - and even certain responsibilities.

But the massive collection of data and the widespread use of algorithms also constitute a threat to society, democracy and ultimately the social contract, which is the foundation of the modern conception of the state. In exchange for a service (most often free), users consciously or unconsciously delegate part of their decision-making power as well as the possibility of acting on their choices and opinions.

The culmination of the cult of reason

AI systems are built to process huge amounts of data, with the aim of making the most informed and objective choices possible. Far from being inevitable, this large-scale deployment reflects political choices and the promotion of what legal scholar Antoinette Rouvroy calls "algorithmic rationality".

Heirs to the scientific and philosophical revolution of the 17th century, our Western societies have been built around the notions of freedom and progress. Condorcet, in his Esquisse d’un tableau historique de l’esprit humain (1793), proclaimed the harmony between human emancipation and technical development.

On the political level, the social contract theory is based on the ideas of liberty, democracy and privacy. All social contract perspectives seek to understand why individuals would exchange some of their freedom for a political order. The social contract thus presupposes the existence of rational agents who come together out of interest.

So what could be more rational than the management of various sectors by artificial intelligence?

In this conception, the fallible human being faces an AI deemed infallible because it rests on "data" treated as mathematical objects. The advent of an "algorithmic governmentality", in which decisions are based on data processing rather than on politics, law or social norms, would finally make the reign of reason possible.

Any decision would thus become irrefutable because it would rest on statistical arguments. But this overlooks the numerous biases that creep in when data is captured and then exploited by algorithms.

The logic of the social contract (especially since the industrial revolution and the development of the welfare state in the 20th century) was one of insurance. Unable to know the future, individuals had an interest in insuring themselves collectively against risk. Today, the development of predictive analytics renders this version of the contract obsolete: insurance offers can be tailored to the precise risks each individual faces.

The digital giants know our preferences, opinions and desires, and lock us into what the essayist Eli Pariser calls a "filter bubble". Content that agrees with our ideas is over-represented while contradictory opinions go missing, which amplifies the spread of fake news - content with a higher potential for reactions and therefore for diffusion. We share fewer and fewer common truths and experiences, which are necessary for democracy to function.

Trading democracy for apps

By analyzing our data to predict our behavior, capitalism is becoming a "surveillance capitalism", in the words of academic Shoshana Zuboff. For these companies, individuals are no longer customers but products sold to advertisers. The philosopher Bernard Stiegler explains that individuals have been transformed into "data providers". Already individualized, they are also de-individualized: their data is used to dispossess them of their will.

For example, the fact that we are exposed to targeted advertising shows that our desires are anticipated. We no longer really know whether we desired the object we bought, since it was shown to us before we had even desired it. Our desire is automated.

Used to technical progress, individuals have grown accustomed to an environment where the quest for comfort, speed and entertainment allows invasive technical systems to become generalized and permanent, to the detriment of certain fundamental freedoms (the right to privacy, anonymity, independence of thought...) that underpin our democratic societies.

By handing over our data, we transfer part of our free will and allow our opinions to be shaped, to the point of influencing elections. The Cambridge Analytica case was the most publicized: it showed the world the capacity for political manipulation that social networks wield in elections as decisive as the 2016 American presidential election or the British referendum on European Union membership the same year. While the company was shut down in 2018, nothing has really changed.

Targeted by advertisers, individualized and profiled by insurers, politically influenced and subject to the opaque and arbitrary decisions of algorithms, we isolate ourselves and no longer share in the general will, which Rousseau distinguished from the mere sum of particular wills and considered a prerequisite for a sense of society.

Politicizing the issue of technology use

This change to the contract is happening quietly, and individuals can then fall victim to the misuse of their data by these supposedly apolitical technological systems - all the more so if they are already victims of discrimination.

Technology always seems to stand outside political debate and to impose itself on societies that have no choice but (more or less partial) acceptance. Aware of the risks, parliaments and international institutions are starting to legislate on the issue and to draft ethical charters and regulations. This is the case with various European regulations, of which the General Data Protection Regulation (GDPR) is the best known.

However, these issues often remain highly technical and legal, excluding from the outset the very individuals who suffer the damage caused by the processing of their data (targeting, reduction of free will, discrimination, surveillance, rating, influence...).

For Rousseau, only free individuals can build a free society. Yet the lack of critical distance from and awareness of the stakes of digital technology, together with the absence of digital education, threaten the foundations of our democratic societies. The issue needs to be politicized, so that citizens can seize these subjects and debate them in order to draw together the contours of a technological future that is desirable for all and shaped by all.

Adrien Tallent, PhD candidate in political philosophy and ethics, Sorbonne University This article has been republished from The Conversation under a Creative Commons license. Read the original article in French.