Artificial intelligence (AI) is increasingly present in all parts of our society, so much so that it has even been called “the new electricity”.
Intelligent systems already have the ability to analyze their environment and act autonomously, sometimes proving more efficient than humans. This in itself entails great risks, not only technological but also legal. Let us look at some of them:
1. The data

The main risk in the use of artificial intelligence relates to the large amount of data it consumes, since a good dataset is the starting point of this technology. This raises two legal challenges:
- On the one hand, challenges related to the origin and collection of the data, which create conflicts with the right to data protection.
- On the other hand, artificial intelligence depends entirely on the quality and quantity of its data. If these are deficient or imbalanced, its results will be wrong.
Of these two questions, the second is the more relevant, because many companies are now dedicated to developing and selling data for the construction of artificial intelligence applications.
The existence of biases is very worrying, especially with regard to predictive algorithms.
In the case of Wisconsin v. Loomis, racial bias was demonstrated in COMPAS, a predictive machine-learning algorithm used by the United States justice system to assess each individual's risk of recidivism at different stages of the criminal process. The system consistently assigned higher recidivism risk to Black defendants.
Another example, this time of employment discrimination based on gender, is Amazon's use of deep-learning algorithms in its personnel selection processes. In this case, the algorithm detected that the percentage of women who had reached the final interview stage in the company's past hiring processes was very low, and so it began to systematically reject the CVs they submitted.
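The mechanics behind this kind of failure can be illustrated with a minimal, hypothetical sketch (not taken from any of the systems mentioned): a trivial model trained on a dataset where 95% of the cases are labeled "low risk" reports excellent overall accuracy while never identifying a single "high risk" case.

```python
# Hypothetical, illustrative dataset with a severe class imbalance:
# 95 "low risk" examples and only 5 "high risk" examples.
labels = ["low"] * 95 + ["high"] * 5

# A naive model that simply predicts the majority class seen in training.
majority = max(set(labels), key=labels.count)
predictions = [majority for _ in labels]

# Overall accuracy looks excellent...
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(accuracy)  # 0.95

# ...yet not one "high risk" case is ever detected.
caught = sum(p == "high" for p, y in zip(predictions, labels) if y == "high")
print(caught / 5)  # 0.0
```

The point of the sketch is that a headline accuracy figure can mask total failure on the under-represented group, which is precisely how imbalanced data translates into discriminatory outcomes.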
2. The opacity of the designs

The second fundamental aspect of some artificial intelligence applications, and one of their main risks, is the opacity of their designs.
In some cases, we cannot understand how the algorithm works (traceability) and/or why it made the decision it did (explainability). This is known as the black box effect. It is a major problem, not only on a technical level but also on a legal one, if we consider that this opacity can unknowingly lead to hidden and unexpected violations of fundamental rights.
A decree of the Ordinary Court of Bologna (Italy) of December 31, 2020 declared illegal the algorithm, known as Frank, incorporated into Deliveroo's shift-booking system, which had caused a conflict with the delivery workers (known as riders). The court also based its ruling on the impossibility of determining the specific calculation rules behind the rankings the algorithm produced.
Likewise, the judgment of February 5, 2020 of the District Court of The Hague (Netherlands) declared illegal the SyRI algorithm (System Risk Indication), used by the Dutch government to fight social security fraud. The ruling was based on a disproportionate and discriminatory invasion of privacy, but also on the fact that it was impossible to know how the decision tree the system used had been designed or what steps it comprised.
3. Failures in its operation
Another problem arising from the use of artificial intelligence relates to failures or errors. In many cases, these errors will cause personal injury or material damage that must be repaired, and for which the corresponding responsibility, whether civil or criminal, must be assumed.
It is sad to recall the case of the Boeing 737 MAX, an aircraft model that suffered two crashes, in late 2018 and early 2019, attributed to design errors that cost the lives of 346 people.
The damage may also be caused by a third party: a cybercriminal. Artificial intelligence systems are highly vulnerable to cyberattacks, and this can have fatal consequences.
Thus, in 2020, a patient in Germany who needed urgent care had to be transferred to another hospital when the systems of the hospital treating her stopped working due to a ransomware attack; she died during the transfer.
4. The relationship between technology and humans
The truth is that, despite the increasingly autonomous operation of artificial intelligence, it is impossible today to remove the human factor. Three aspects of this scenario are most troubling:
- who has the last word when the machine and the human disagree;
- what level of responsibility the human should bear when an artificial intelligence system fails;
- and, finally, it should not be overlooked that artificial intelligence should be used not only as a tool to increase productivity, but also to improve workers' working conditions.
These are just a few examples of where the law should look when it comes to regulating artificial intelligence. The problems it currently raises also reveal the great regulatory needs that exist in this area.
It is necessary to ensure that the transfer of the results of artificial intelligence research into social and commercial reality takes place within an appropriate legal framework: one that allows us to create a safe environment, free of unwanted threats to the users of products and services based on artificial intelligence, and that in turn generates trust in and social acceptance of these technologies.
As the European Commission's Communication on Artificial Intelligence indicates: "Putting artificial intelligence at the service of human progress requires solving the legal problems it poses, because such solutions determine the type of society and technological development of the future."
María Isabel Montserrat Sánchez-Escribano does not work for, consult for, own shares in or receive funding from any company or organization that would benefit from this article, and has declared no relevant affiliations beyond her academic appointment.