Why AI tech needs to be democratised
With the introduction of "large language models" (LLMs) into our day-to-day lives, artificial intelligence (AI) systems have experienced a sharp surge in popularity. It is already apparent that the use of AI systems will drastically affect our professional lives, our private lives, and – perhaps most crucially – how we structure and govern our societies. This is not because algorithms are inherently more inventive than people; rather, they offer cost savings and efficiency, completing many simple and complex tasks at a level of speed and consistency that most humans cannot match.
The introduction of AI systems into public administration and the judicial system, as well as their use in the provision of certain essential services by private actors, raises serious concerns about how to safeguard the sustained protection of human rights and democracy, and respect for the rule of law, if AI systems assist or even replace human decision-makers. This contrasts with the general public debate, which focuses on the technology's economic benefits and drawbacks. The very foundations of liberal democracy, such as elections, the freedom to assemble and form associations, and the right to hold opinions and to receive or impart information, may all be severely affected by the use of these systems.
Recent calls for a ban on AI technology have come from influential voices in the public discourse who believe that the risks it brings outweigh its benefits. We must acknowledge that the genie is out of the bottle: there is no practical way to reverse the scientific and technological advances that have made it possible to develop sophisticated and potent AI systems. At the same time, we must take seriously the legitimate concerns about AI that have been raised.
The Council of Europe (CoE) is the oldest intergovernmental regional organisation, with 46 member states, and is perhaps best known globally for its European Court of Human Rights (ECtHR). In 2019, it began groundbreaking work on the viability and necessity of an international treaty on AI, grounded in its own and other pertinent international legal norms in the fields of democratic values, human rights, and the rule of law. The Committee on Artificial Intelligence (CAI), formed for the period of 2022-2024, is tasked with developing an AI framework convention that will set out legally binding standards, principles, rights, and obligations regarding the design, development, use, and decommissioning of AI systems from the perspectives of human rights, democracy, and the rule of law.
It will take a coordinated effort from like-minded states and assistance from civil society, the tech sector, and academics to complete this enormous undertaking. Our hope and ambition is that the Council of Europe's AI framework convention will provide much-needed legal clarity and guarantees of the protection of fundamental rights.
But a genuine set of standards for the human rights and democratic dimensions of AI systems cannot be restricted to a single region, because AI technology knows no borders. As a result, the CoE's Committee of Ministers decided to permit interested non-European states that share its goals and ideals to participate in the negotiations, and a growing number of these states have already signed on or are actively working to join the effort.
The European Union (EU), which regulates AI systems for its 27 member-states, is also directly involved in the CoE negotiations. The AI Act of the EU and the CoE's framework convention are designed to complement one another when they go into effect, showing how to effectively utilise the joint capabilities and skills of the two European entities. The draft framework convention is aimed at ensuring that the use of AI technology does not result in a legal vacuum regarding the protection of human rights, the operation of democracy and democratic processes, or the observance of the rule of law (a consolidated "working draft" is publicly available at the CoE website for the CAI).
To this end, parties must obligate regulators, developers, providers, and other AI actors to consider risks to human rights, democracy, and the rule of law from the moment these systems are conceived and throughout their life cycle. In addition, the legal remedies available to victims of human rights breaches should be adapted to the unique difficulties that AI technologies present, such as their opacity and the challenge of explaining their decisions.
The treaty will also specifically address the potential risks to democracy and democratic processes posed by AI technology. These include the use of so-called "deep fakes," microtargeting, and more overt violations of the freedoms of expression, association, opinion formation, and the ability to obtain and disseminate information. The framework convention will impose enforceable duties on its parties to provide adequate protection against such practices. When developing and deploying AI systems in sensitive contexts – including but not limited to the drafting of laws, public administration, and, last but not least, the administration of justice through the courts – it is evident that the fundamental idea of what constitutes a just, liberal, law-abiding society must be respected. The framework convention will also specify the parties' precise obligations in this area.
The draft framework convention, like all of CAI's work, prioritises human dignity and agency by taking a Harmonised Risk-Driven Approach (HRDA) to the design, development, use, and decommissioning of AI systems. It is crucial to carefully analyse any potential adverse effects of deploying AI systems in diverse circumstances before getting carried away by the apparent possibilities offered by this technology. Therefore, parties are also required by the proposed framework convention to spread knowledge about AI technology and to encourage an informed public debate about its proper application.
To ensure that as many people as possible benefit from AI and other digital technologies and are protected from their misuse, the realistic approach must be to find responsible ways to use them.
Dr Nafees Ahmad is associate professor at the Faculty of Legal Studies in South Asian University, New Delhi. He can be reached at [email protected].
Views expressed in this article are the author's own.