From smart vacuum cleaners to driverless cars, artificial intelligence has found its way into all walks of life and is forcing policy makers to confront its as yet unknown consequences.
Its proponents believe that artificial intelligence (AI) is revolutionizing the human experience, but its critics fear leaving fundamental decisions to machines.
The European Union (EU) is looking to pass dedicated AI legislation next year, the United States has published a blueprint for an AI bill of rights, and Canada is also considering legislation.
There is concern in the West about AI being used in conjunction with biometric data or facial recognition, as in China, to create a population control system.
Gry Hasselbalch, a Danish academic who advises the European Union on AI, said Western countries are also at risk of building “totalitarian infrastructures”.
“I see it as a huge threat, whatever the benefits,” she told AFP.
But before regulators can act, they must first face the complex task of defining AI.
Brown University’s Suresh Venkatasubramanian, co-author of the US AI bill, believes it is a “waste of time” to try to define it.
Any technology that affects the rights of individuals should be within the purview of the bill, he said on Twitter.
The EU’s 27 member states instead opted to attempt a definition of this sprawling field, and their bill covers virtually any computer system that involves automation.
The problem stems from the definition of AI itself, which has been changing over time.
For decades, the term described efforts to create machines that emulate human thought, an approach known as symbolic AI. But in the 2000s, a different branch — machine learning — took off.
The rise of large Silicon Valley companies led to the use of the term AI as a generic label for their processing programs and the algorithms they generate.
This automation allowed users to be targeted with advertising and personalized content and earned those companies hundreds of billions of dollars.
“AI was a way to make more use of this surveillance data and make sense of what was happening,” Meredith Whittaker, a former Google employee and co-founder of New York University’s AI Now Institute, told AFP.
This is why both the EU and the US came to the conclusion that any definition of AI should be as broad as possible.
– “Very complex” technologies –
The European bill is over 100 pages long. Among its most striking proposals is a complete ban on some “high-risk” technologies, such as the biometric surveillance devices used in China.
It also heavily limits the use of AI tools by immigration officials, police and judges.
Hasselbalch believes that some technologies are simply “too complex for fundamental rights.”
Unlike the EU’s bill, the US draft sets out a shorter list of principles, with statements such as “You must be protected from unsafe or ineffective systems”.
The bill was issued by the White House and builds on existing legislation, but experts believe the United States is unlikely to have AI-specific legislation before 2024.
“We desperately need regulation,” Gary Marcus of New York University told AFP.