
Are we on the verge of extinction?… Experts warn that artificial intelligence could pose a risk to humans

A group of more than 300 industry executives and researchers has warned that artificial intelligence (AI) poses an “extinction risk” comparable to pandemics or nuclear war. Signatories include Demis Hassabis, chief executive of Google DeepMind; Dario Amodei, CEO of Anthropic; and Sam Altman, CEO of OpenAI, among others.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the one-sentence statement issued by the San Francisco-based non-profit Center for AI Safety (CAIS).


Other signatories to the statement include Geoffrey Hinton, often dubbed the “godfather” of AI, who spent part of his career at Google, and Yoshua Bengio. The two are among the three AI researchers who won the 2018 Turing Award, often described as the Nobel Prize of computer science, for their work on AI.

Like nuclear war or a pandemic

As explained by Dan Hendrycks, executive director of CAIS, the brevity of the message reflects the goal of bringing together the largest possible number of experts who agree on the diagnosis, even if not on the path forward. The statement also marks the first time some of the signatories have publicly voiced their concerns about AI.

“There’s a common misconception, even in the AI community, that it’s only a handful of doomsayers, but many people express concern about these things in private,” Hendrycks said.

According to CAIS, the statement represents a “historic coalition of AI experts” that establishes the risk of extinction from advanced future AI systems as one of the world’s most pressing issues.

A new alert

This new warning joins an already wide range of opinions and messages cautioning about the risks that general artificial intelligence, more advanced than today’s systems and able to perform a wide variety of tasks better than humans, could pose to the future of society.

In March, more than a thousand experts and personalities from the tech world, including Elon Musk, called for a six-month moratorium on training new artificial intelligence systems while safety protocols were developed. Others later added their voices, such as researchers Geoffrey Hinton and Eliezer Yudkowsky, Pentagon AI chief Craig Martell, and Sam Altman himself.

This last case is particularly striking given who he is. OpenAI currently leads the pack of AI companies thanks to the success of ChatGPT and Microsoft’s commitment to integrating its language models into its software, yet its CEO keeps warning about the risks of artificial intelligence and repeatedly advocates for regulation. The most recent occasion was on May 16, in a hearing before a US Senate subcommittee.

“I think if this technology goes wrong, it could be very, very bad. We want to work with the government to prevent that from happening,” Altman said at that hearing.

Nation World News Desk