In the winter of 2016, Google's Nest thermostats received a software update that damaged their batteries. A significant number of users were left without heating, though many were able to replace the batteries, buy a new thermostat, or wait for Google to fix the problem. The company indicated that the failure may have been caused by the artificial intelligence (AI) system that manages these updates.
What if most of the population used one of those thermostats and a failure had left half the country in the cold for several days? A technical problem would have become a social emergency requiring state intervention, all because of a faulty artificial intelligence system.
No jurisdiction in the world has yet developed comprehensive, specific regulation for the problems posed by artificial intelligence. This does not mean there is a complete legislative vacuum: there are other ways to respond to the many harms these systems can cause.
- For accidents caused by autonomous cars, insurers will continue to be the first recipients of claims.
- Companies that use artificial intelligence systems in their hiring processes may be prosecuted for engaging in discriminatory practices.
- Insurers that use the analysis generated by their artificial intelligence models to set prices and decide whom to insure will continue to be held accountable as companies for any resulting anti-consumer practices.
In general, regulations that already exist, such as contract law, transport law, tort law, consumer law, and even human rights protections, will adequately cover many of the regulatory needs of artificial intelligence.
But in general this does not seem like enough. There is broad consensus that the use of these systems will create problems that cannot be easily resolved within our existing legal systems. From the spread of liability between developers and professional users to the measurability of damages, AI systems defy our legal logic.
For example, if an artificial intelligence system finds illegal information on the deep web and makes investment decisions based on it, who should be held accountable for those illegal investment practices: the bank managing the pension fund, or the company that built the automated investment system?
Or if an autonomous community decides to apply co-payments to medical prescriptions managed by an artificial intelligence system, and that system makes small errors (a few cents per prescription, for example) that nevertheless affect almost the entire population, who is to blame for this lack of initial control? The administration? The contractor that installed the system?
Towards a European (and global) regulatory system
Since the presentation in April 2021 of the EU's proposed regulation on artificial intelligence, the so-called AI Act, a slow legislative process has been underway that should lead to a regulatory system for the entire European Economic Area, and perhaps Switzerland, by 2025.
But what about outside the EU? Who else wants to regulate artificial intelligence?
We look to the United States, China and Japan on these issues, and we often assume that law is a matter of degree: more or less environmental protection, more or less consumer protection. In the context of artificial intelligence, however, it is striking how different legislators' views are.
In the United States, the substantive law on AI is a norm of limited content, more concerned with cybersecurity, which relies instead on indirect regulatory techniques such as the creation of standards. The underlying idea is that standards developed to control the risks of artificial intelligence systems will be voluntarily adopted by companies and become the norm.
Indeed, to maintain some control over those standards, rather than leaving them to the discretion of the organizations that typically develop technical standards (and that are governed by the companies themselves), the risk-control standards for AI systems are in this case being developed by a federal agency, the National Institute of Standards and Technology (NIST).
The United States is thus immersed in a process, open to industry, consumers, and users, to create those standards. To this is now added the White House's draft AI Bill of Rights, also adopted on a voluntary basis. At the same time, many states are trying to develop specific laws for particular contexts, such as the use of artificial intelligence in hiring processes.
China has developed a complex plan not only to lead the development of artificial intelligence, but also to regulate it.
To do this, they combine:
- Regulatory experimentation (some provinces may develop their own regulations, for example, to facilitate the development of autonomous driving).
- Development of standards (with a complex scheme involving more than thirty sub-sectors).
- Tougher regulation in specific areas (for example, to prevent Internet recommendation systems from issuing recommendations that could disturb the social order).
In this way, China is committed to regulatory control of artificial intelligence that does not hinder its development.
In Japan, on the other hand, they don't seem particularly concerned about the need to regulate artificial intelligence.
Instead, they trust that their tradition of cooperation between the state, companies, workers and users will prevent the worst of the problems artificial intelligence may cause. For now, the country is focusing its policies on the development of Society 5.0.
Perhaps the most advanced country from a regulatory point of view is Canada. There, for the past two years, every artificial intelligence system used in the public sector has had to undergo an impact assessment that estimates its risks.
For the private sector, the Canadian legislature is now discussing a standard similar to, though simpler than, the European one. A similar process was launched in Brazil last year; although it seemed to have lost momentum, it may now be revived after the elections.
From Australia to India
Other countries, from Mexico to Australia, by way of Singapore and India, are waiting.
These countries are confident that their existing rules can be adapted to prevent the worst harms artificial intelligence can cause, and they allow themselves to wait and see what happens with other initiatives.
Two games with different ideologies
Within this legislative diversity, two games are being played.
The first pits those who argue that it is too soon to regulate a disruptive, and not yet well understood, technology such as artificial intelligence against those who prefer a clear regulatory framework that addresses the core problems and creates legal certainty for developers and users.
The second game, and perhaps the most interesting, is the competition to become the global regulator of artificial intelligence.
The EU's strategy is clear: be the first to create rules that bind anyone who wants to sell products on its territory. The success of the General Data Protection Regulation, today the global reference for technology companies, has encouraged European institutions to follow the same model.
Faced with this, China and the United States have chosen to avoid detailed regulation, hoping that their companies can develop without excessive restrictions and that their standards, even voluntary ones, will spread to other countries and companies and become the reference.
Time plays against Europe here. The United States will publish the first version of its standards in the coming months; the European Union will not have an applicable law for another two years. The cost of an excess of European ambition, within and outside the continent, may be rules that have already been superseded by others by the time they come into force.