In the article 'The Threat of Algocracy: Reality, Resistance and Accommodation', John Danaher discusses the question of algorithmic governance. Algorithmic governance refers to increasing algorithmization, that is, the growing use of algorithms to solve administrative problems and to modify human decision-making. Danaher fears that this merging of legal and political forms with algorithmic systems, questions of privacy and personal data aside, creates the danger of what he terms "The Threat of Algocracy", one that raises serious moral concerns. After defining the threat in more detail, Danaher offers two possible solutions to the problem: "Resistance" and "Accommodation".
With the data revolution and the Internet of Things, algorithmic decision-making is now well established as a natural element of our environment, from trading stocks and exposing tax evasion to participating in scientific discovery and finding a dating partner. The trend will only grow. Is it morally acceptable to rely on automation where human lives are affected? Given that algorithms are either invisible or incomprehensible to most of us, how can we rely on their legitimacy, even (and perhaps especially) if we can thoroughly trust their efficiency?
Danaher draws on David Estlund's work and his notion of epistocracy to formulate the problem of algocratic governance. According to Danaher, an algocracy is "a situation in which algorithm-based systems structure and constrain the opportunities for human participation in, and comprehension of, public decision-making" (Danaher 2016). The core concern is that human agency is sacrificed for overall efficiency in decision-making.
An important distinction needs to be made from the outset. Danaher is not speaking about a potential take-over of the entire human civilization by AIs that have gone rogue. This is not another treatise on the dangers of a future dystopian society ruled by machines. He is referring to a specific and current problem: the algorithmic bureaucratization of the social sphere.
That being said, an algocracy is distinct from other forms of constraining systems. The market is a system that constrains human agency according to the movements of price, a typical bureaucracy constrains human action through legal forms, whereas an algocracy is a system in which humans are constrained through computerized algorithms. These systems often crisscross and overlap, and there isn't always a clear boundary separating them.
There are many types of algorithmic systems. Danaher chooses to focus on the type he finds most intrusive: the algorithms used for predictive and descriptive data-mining. Danaher draws on Zarsky's definition of data-mining as "the non-trivial process of identifying valid, novel, potentially useful and ultimately understandable patterns in data" (Zarsky 2011, 291). Big Data is used to monitor, predict, and sometimes even incite and control human behavior.
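To make the notion of predictive data-mining more concrete, here is a minimal sketch in Python. The dataset, the meaning of the columns and the choice of a scikit-learn decision tree are my own illustrative assumptions, not anything taken from Danaher or Zarsky; the point is only to show how a model extracts a pattern from behavioural records and then applies it to new individuals.

```python
# A minimal, hypothetical sketch of predictive data-mining: a model learns
# patterns from behavioural records and predicts future behaviour.
# The data and column meanings below are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [age, purchases_per_month, late_payments]
# Label: defaulted on credit (1) or not (0)
records = [
    [23, 4, 0],
    [45, 1, 3],
    [31, 7, 0],
    [52, 2, 4],
    [28, 5, 1],
    [61, 0, 5],
]
labels = [0, 1, 0, 1, 0, 1]

# "Mining" the data: the tree extracts a pattern (here, roughly: many late
# payments predict default) without anyone hand-coding the rule.
model = DecisionTreeClassifier(max_depth=2).fit(records, labels)

# The learned pattern is then applied to new individuals: this is where
# monitoring turns into prediction and, potentially, control.
new_person = [[35, 3, 2]]
print(model.predict(new_person))        # predicted behaviour
print(model.predict_proba(new_person))  # confidence of the prediction
```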
There are various levels of automation and, correspondingly, various levels of human participation involved in the use of algorithmic systems. Danaher uses the example of military drones to illustrate a relevant and ethically problematic instance.
There are three distinct types of robotic weapon systems:
Human-in-the-loop weapons: The machine will not execute a command without human approval.
Human-on-the-loop weapons: The machine can decide and execute a violent attack, but humans can intercept and override the system at any time.
Human-out-of-the-loop weapons: The machine is left to its own devices. It can select, decide and execute without the possibility of human override.
The same distinctions apply to all algorithmic systems, including data-mining software. Some data-mining systems are more open to human modification and override than others. These are referred to as interpretable and non-interpretable systems.
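Read schematically, the three configurations differ only in how much human approval or override must intervene before the machine's decision takes effect. The following sketch is my own illustrative rendering of that distinction, not code from any real weapons or data-mining system; the names and functions are hypothetical.

```python
# Illustrative sketch of the human-in/on/out-of-the-loop distinction.
# The enum and function are hypothetical; they only model how much human
# involvement each configuration allows before a decision is executed.
from enum import Enum

class LoopRole(Enum):
    HUMAN_IN_THE_LOOP = "in"       # machine proposes, human must approve
    HUMAN_ON_THE_LOOP = "on"       # machine acts, human may override
    HUMAN_OUT_OF_THE_LOOP = "out"  # machine acts, no human override possible

def execute_decision(role, machine_decision, human_approves=False, human_overrides=False):
    """Return the decision that actually takes effect under each configuration."""
    if role is LoopRole.HUMAN_IN_THE_LOOP:
        # Nothing happens without explicit human approval.
        return machine_decision if human_approves else None
    if role is LoopRole.HUMAN_ON_THE_LOOP:
        # The machine's decision stands unless a human steps in.
        return None if human_overrides else machine_decision
    # HUMAN_OUT_OF_THE_LOOP: the machine's decision is final.
    return machine_decision

# The same machine decision under the three configurations:
print(execute_decision(LoopRole.HUMAN_IN_THE_LOOP, "strike", human_approves=False))  # None
print(execute_decision(LoopRole.HUMAN_ON_THE_LOOP, "strike", human_overrides=True))  # None
print(execute_decision(LoopRole.HUMAN_OUT_OF_THE_LOOP, "strike"))                    # "strike"
```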
As mentioned before, the threat of algocracy is reducible to two categories of problems concerning algorithmic governance and algorithmic systems:
The problem of hiddenness and the problem of opacity. The first was supposed to be solved by the Google EU User Consent Policy, at least for European countries. And indeed the problem of hiddenness is in a way resolved, but only to be substituted with a new one, which I would call the problem of imposed consent: we have no choice but to consent to the privacy policy, because our employer or educational institution requires us to do so.
The second problem, the problem of opacity, states that even if we knew "how our data is used", we would not be able to understand it on a technical level, leaving us helpless as to its potential implications. It is the opacity concern that proves particularly relevant for explicating the threat of algocracy. One reason this concern deserves emphasis is that it does not fall into the ready-made category of data privacy and personal information, which means more institutions can get away with failing to address it.
If decision-making processes are too opaque, they can no longer have the legitimacy required to exercise authority over our lives. What makes a decision-making procedure legitimate? Instrumentalism argues that a decision-making process is legitimate if it achieves the desired outcome; it is a way of being consequentialist about decision-making processes. Proceduralists argue that transparency is more important than efficiency. In other words, it is far better to have an inefficient algorithm than a decision-making process that is comprehensible only to a select few; that is, better a flawed system than a coercive one.
Danaher favors a "mixed" approach, meaning that he aims to get the best of both worlds. This is quite intuitive, if not outright obvious: the whole idea of decision-making breaks down if we assent to either of the two camps to an extreme degree. What is the point of a decision-making system that is simple and accessible to all, but fails to make effective decisions and fulfill its function? On the other hand, a perfectly efficient system that renders most of the human population helpless would be a modern alternative to Nazi Germany. A less exaggerated version of extreme instrumentalism is the notion of epistocracy already mentioned, first introduced by David Estlund: "These are systems that favour a narrow set of epistemic elites over the broader public" (Danaher 2016).
The danger with this version of instrumentalism is that it actually sounds convincing. It pairs nicely with the modern form of elitism, or what we could term the professionalization of power. People tend to be fairly tolerant of opacity in decision-making as long as they "know" that the people in charge are not generic bureaucrats or politicians but "competent experts". The nucleus of the instrumentalist account remains the same throughout: legitimacy is conferred by outcomes.
An epistocracy, therefore, implies a monopoly on decision-making power by a privileged sub-group of elite professionals. An algocracy, on the other hand, is a similar instance of concentrated power, but the epistemic elite in this case is an AI. To quote Danaher: "Thus, when I talk about a threat of algocracy, I am talking about a threat that arises from this sort of epistemic favouring of algocratic systems" (Danaher 2016).
To a large degree, the question of legitimacy boils down to the question of opacity, which in turn is a matter of human participation in algorithmic decision-making. The more complex the system, the less human-friendly and interpretable it becomes, which in the end leads to an algocracy.
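A toy illustration of this opacity gradient (the data, models and choice of scikit-learn estimators are my own assumptions, not Danaher's): trained on the same task, an interpretable model can be read off as a handful of coefficients, while a more complex one spreads its "reasons" over thousands of decision nodes that no ordinary citizen could inspect.

```python
# Hypothetical contrast between an interpretable and an opaque model trained
# on the same invented data: the first can be summarized in a few numbers,
# the second is a forest of hundreds of trees.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three made-up features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # a simple hidden rule

interpretable = LogisticRegression().fit(X, y)
opaque = RandomForestClassifier(n_estimators=300).fit(X, y)

# The interpretable model's "reasoning" fits in one line of output:
print("coefficients:", interpretable.coef_)

# The opaque model's "reasoning" is spread over thousands of decision nodes:
print("total decision nodes:", sum(t.tree_.node_count for t in opaque.estimators_))
```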
The problem with creating more transparent, simple and interpretable systems is both technical and institutional. Corporate and governmental power operates essentially through institutionalized secrecy. Trade secrets and political secrets are protected by algorithmic encryption; they don't want people hacking into their data. Once again, questions of human freedom are deeply intertwined with the political economy of (in this case) algorithms. Diverging from Danaher's views, I argue that data-mining software is only the latest word of economic governance: the algorithmization of human lives and the rendering of bodies docile, profitable and manageable.
The technical aspect of the issue lies in the sheer size of the data-sets that data-mining systems need to digest. The amount of data that needs to be processed could never become manageable through human effort alone, which is why we have to rely on machine learning to get the job done. But the need to transform human lives, nature and the whole biosphere into a standing reserve of manageable data is something that even Danaher seems to take for granted. Again, these are pseudo-problems that result from an artificially created demand, a demand we are trained to have through various governing techniques inherent to the existing overall system of the liberal economy; I am referring here to consumerism.
Should we offer resistance to the threat of algocracy? How? One option is to sabotage the system and liberate ourselves through revolutionary practice. Danaher urges against it, unsurprisingly, for the same reason that he "fails" to address the very root of the problem of mining large data-sets: the logic of Capital. In Danaher's view the benefits of algorithmic decision-making outweigh the trade-offs, at least to the extent that direct resistance is the wrong way to go. It would abolish too much; think of sustainable energy management, which is largely governed by algorithmic technology.
Danaher's pro-algocracy arguments are not strictly outcome-oriented and instrumental. As a procedural argument in favor of algorithmic governance, Danaher points out that algorithmic systems are not susceptible to implicit bias. An example is the case of profiling as an anti-terrorist measure: an algorithm is more likely to remain objective without lapsing into ethnic or racial bias. Again, I would like to create some distance between Danaher's views and my own. The very idea of terrorism serves in large part as a powerful ideological supplement to the United States' expansionist imperial agenda. It plays an indispensable role in securing what Derrida terms organized terror (Derrida 2003). Therefore the question of profiling is not settled once the question of bias is addressed (the latter obviously makes the problem much worse); the real issue is whether profiling is a justified police measure at all, and whether it exposes or, in truth, constructs and invents terrorism. Again, algorithmic governance is only another expression of what is at heart a question of governance in general.
The second "solution" to the threat of algocracy is accommodation. Simply put, accommodation implies partial algorithmic decision-making: every data-driven process needs to allow for human intervention and control. "This would protect against the problem of opacity whilst still allowing us to reap the benefits of the algocratic systems" (Danaher 2016). Danaher goes on to offer four possible ways of accommodating algorithms.
1. "Insist Upon Human Review of Algorithms"
The first solution is located at the border between resistance and accommodation. It relies on ensuring that every data-driven process that has an impact on an individual life should be closely monitored and controlled by a human being. The EU has already implemented a version of this law under the Data Protection Directive:
15.1 — Member States shall grant the right to every person not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc.
The problem here is that the law concerns proper residents of the EU, which creates a blind spot in terms of human rights: international students and immigrants do not fall under the protection of this law and are thereby subject to constant surveillance and economic, bureau-algorithmic exclusion.
Moreover, Danaher argues that the human-review solution is futile, as it fails to address the real issue. And rightly so: this false solution would only transfer decision-making power from impartial algorithms to partial experts, bringing us right back to the problem of epistocracy.
2. “Epistemic Enhancement of Human Beings”
According to Danaher, human enhancement can be used to improve public decision-making. Could the same technology be used as a deterrent to the threat of algocracy? Danaher defines epistemic enhancement as "any biomedical intervention intended to improve or add to the capacities humans use to acquire knowledge, both theoretical and practical/moral" (Danaher 2013, 88). Provided that the problem of equal access to such technologies could be solved, this would prevent the rise of an epistemic elite. But it is not clear whether equal access is a possibility at all.
The argument from accommodation through epistemic enhancement fails because of the sheer disparity between the calculative power of algorithms and human potential, with or without enhancement. No possible enhancement of human capacities could reach the computational level of data-mining systems. In fact, equal access to human enhancement could level out the differences between human elites and the general public, but that would only address the problem of epistocratic governance.
Herein lies Danaher's solution. If human enhancements are used in conjunction with human review (the first solution), then the algocratic danger of epistocracy will be resolved. There will no longer be a transfer of informational power from the algorithms to the elite; instead, it will be seized by enhanced citizens.
The first, obvious problem here is that we do not seem to possess the enhancement technologies required to perform this feat. The second is that it is not at all clear that, even with such technologies at hand, we would have the resources to make them publicly available to all.
3. “Embrace Sousveillance Technologies”
Sousveillance is the art of reversing the power relation inherent in surveillance: the art of "running surveillance from below", surveying the surveyors, so to speak. "If the problem with big data algorithms is the constant monitoring and surveillance of our activities by economic and political elites, then the solution is to turn the surveillance technology back on those economic and political elites" (Danaher 2016). Sousveillance technologies coupled with privacy laws should provide the right resources to empower and legitimate public decision-making, since both surveillance and sousveillance technologies rely on algorithms.
According to Danaher, this will not work either. Sousveillance technologies are mere data-collection devices, whereas an algocracy implies a particular, goal-oriented collection and processing of data, and this accounts for the epistemic asymmetry in power relations. Sousveillance cannot compensate for this, especially considering that the complexity is introduced at the level of the software, not at the human level. No direct human confrontation could resolve the problem.
4. “Form Individual Partnerships with Algorithms”
The fourth accommodation solution is supposed to resolve this complexity. Danaher explains two possible ways of aligning human beings with algorithms so as to legitimate decision-making procedures. The first is non-integrative, which means that there is no direct biological integration between the human organism and the algorithmic system. The second is integrative and implies a direct unity between the algorithm and the human being. The former is obviously more feasible in the near future; the latter is more of a foreshadowing of future possibilities.
The non-integrative solution is only a slight add-on to the sousveillance solution: on top of data-monitoring software, it also adds a data-mining algorithm to the process, an individualized AI as a form of counter-conduct to the status quo. "This is effectively like having your own private AI assistant, who can help you to comprehend and understand the other algorithmic processes that affect your life. The idea is that this is empowering as you no longer need to defer to an epistemic elite in order to understand what is going on" (Danaher 2016). There is an entire movement dedicated to such a human-machine alliance, the "Quantified Self" movement (see Thompson 2013).
The problem with the non-integrative approach is that the implementation of algorithmic data-mining software only makes sense within the context of Big Data. But the amount of data that surrounds an individual is not that vast, so the approach fails to address the problem of the bio-informational divide. There is also the more obvious threat that, by forming the above-mentioned alliance with algorithmic technologies, we might in fact hasten the onset of algocracy, either by participating in it or by becoming passive recipients of it. We may know exactly how our data is used and collected without actually participating in the decision-making process.
The problem with the integrative approach is the simple fact that it is highly speculative. We do not even know whether we will ever possess such technologies, let alone whether we could ensure equal access and safety for all. The other, more subtle issue concerns human agency. Will a direct biological union with a technological artefact be able to preserve human agency, let alone empower it? Who knows what other conditions one would have to fulfill before one could act meaningfully at this level of integration? Numerous problems float to the surface.
References
- Danaher, J. (2016). The threat of algocracy: Reality, resistance and accommodation. Philosophy & Technology, 29(3), 245–268.
- Derrida, J. (2003). Autoimmunity: Real and symbolic suicides: A dialogue with Jacques Derrida. In G. Borradori, Philosophy in a time of terror: Dialogues with Jürgen Habermas and Jacques Derrida (pp. 85–136). Chicago: University of Chicago Press.
- Estlund, D. (1993). Making truth safe for democracy. In D. Copp, J. Hampton, & J. Roemer (Eds.), The idea of democracy. Cambridge: Cambridge University Press.
- Estlund, D. (2003). Why not epistocracy? In N. Reshotko (Ed.), Desire, identity, and existence: Essays in honour of T. M. Penner. Academic Printing and Publishing.
- Estlund, D. (2008). Democratic authority. Princeton: Princeton University Press.
- Thompson, C. (2013). Smarter than you think: how technology is changing our minds for the better. London: William Collins.
- Zarsky, T. (2011). Governmental data-mining and its alternatives. Penn State Law Review, 116, 285.
- Zarsky, T. (2012). Automated predictions: Perception, law and policy. Communications of the ACM, 55(9), 33–35.
- Zarsky, T. (2013). Transparent prediction. University of Illinois Law Review, 4, 1504.