Wednesday, May 31, 2023

A group of business leaders and scientists led by Elon Musk asks to pause experiments with AI

A group of technology-sector workers, experts and politicians, including the controversial businessman Elon Musk, has demanded in a letter with an apocalyptic tone that experiments with the most powerful artificial intelligence be suspended for six months, because they believe these experiments "can pose profound risks to society and humanity".

"Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall," reads the open letter, published by the non-profit Future of Life Institute.

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, researchers Yoshua Bengio and Stuart Russell, and the Spanish researchers Carles Sierra and Ramón López de Mántaras are among the signatories of the letter, in which they warn of the supposed dangers of AI if it is not accompanied by proper management and planning.

"Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources," says the letter, which also states that AI laboratories are instead "locked in an out-of-control race to develop and deploy ever more powerful digital minds."

They also claim that "no one – not even their creators – can understand, predict, or reliably control" these digital minds.

The signatories echo the fears and dangers that have been voiced publicly, in articles and in everyday conversations in recent months, and they do so in an apocalyptic tone.

"Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" the tech leaders ask.

To that end, they ask developers to work with legislators to accelerate the creation of a strong legal framework around AI.

They also insist that powerful artificial intelligence systems should be developed only once there is confidence that their effects will be positive and their risks manageable.

"We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4," the letter says, calling for government intervention to impose this moratorium should the parties refuse to suspend their research temporarily.

GPT, an acronym for "generative pre-trained transformer", is a type of machine-learning model used to generate human-like language. GPT-4 is a much more advanced model than its predecessors.

The OpenAI lab describes GPT-4 as its "latest milestone in its effort in scaling up deep learning". The new system accepts both image and text inputs and produces text outputs.

However, OpenAI acknowledges that GPT-4 is still less capable than humans in many real-world scenarios, although it "exhibits human-level performance on various professional and academic benchmarks."

The open letter proposes that during the six-month pause, AI labs and independent experts "jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."

Finally, they call for AI research and development to be refocused on making today's powerful, state-of-the-art systems "more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."

Nation World News Desk