Since its launch in November 2022, the ChatGPT application has caused alarm in the tech world, in politics and in the labor market. With a handful of keywords, this artificial intelligence chatbot from the company OpenAI can write text as if it were a human: answering questions naturally in conversation, writing a podcast script, or producing academic or journalistic articles on any topic, in whatever literary style and level of detail is requested.
It was developed from a system with 175 billion parameters, trained on an enormous volume of text, so it is difficult to tell one of its creations apart from something written by a person, whether it is answering questions or composing a text. «The level it has reached in language processing is astonishing. It is really almost like talking to someone, or reading a text by someone who has absorbed a great deal of information», explains Sara Lumbreras, co-director of the Chair of Science, Technology and Religion at the Pontifical University of Comillas. Even so, the system suffers from hallucinations. «If you give it a few key points it is able to write a text, but it is not able to justify where the information comes from. If you ask it something it does not know, it makes the answer up. These hallucinations are very difficult to verify. We do not know how it was trained and, besides, it does not reason. It only imitates the language of the texts used to train it, so we have often found that it lacks coherence. Several professors at the university have tried it, and the ones who get the worst results are their colleagues in mathematics», she adds.
OpenAI itself has admitted these flaws: the program does not always generate accurate answers, and it lacks controls and reliability. But its power to generate plausible-sounding text is precisely what is troubling. In fact, this week, at OpenAI's appearance before the US Senate, and as reported by CNN, Senator Richard Blumenthal opened the hearing with a fake recording of his own voice in which he described the risks of the technology, warning that, just as the text said one thing, a «fake» recording could be made talking about Putin, or any other subject, with a fabricated opinion.
In late March, an open letter promoted by the Future of Life Institute (a non-profit organization that aims to «reduce the global existential risk posed by advanced artificial intelligence») circulated on social media, calling for a moratorium of at least six months on the development of artificial intelligence systems more powerful than GPT-4. «Should we develop non-human minds that may, over time, outwit us, make us obsolete and replace us? Should we risk losing control of our civilization?», the letter asks. Public figures such as Elon Musk (one of the first investors in OpenAI), Apple co-founder Steve Wozniak and Skype co-founder Jaan Tallinn are in favor of halting these developments because they believe they pose a risk to humanity. Geoffrey Hinton, considered the father of neural networks, has also been critical of chatbots and generative AI, which, curiously, are based on his own work. He and other developers are seen as the repentant figures of artificial intelligence.
However, other industry heavyweights, such as Microsoft founder and OpenAI backer Bill Gates, argue that pausing development for six months will not solve the problems at hand. And what are those problems? The letter points, for example, to algorithmic bias, privacy and upheaval in the labor market.
Perhaps the clearest threat is to employment. Journalists, lawyers and clerks are only the first on a long list of occupations that could be automated. Indeed, some theorists compare the disruption of these generative AIs (images, like DALL-E, or text, like ChatGPT) to the Industrial Revolution. Back then, machines replaced manual labor; now AI is poised to replace work that humans have always considered the preserve of intelligent beings. The job losses are already being calculated: a recent Goldman Sachs report estimates that around 300 million jobs worldwide are at risk, and that 24% of European workers could be replaced by AI. On the other hand, the same report holds that world GDP could grow by 7% annually over roughly ten years.
End of democracy?
In recent months, fake images, such as the Pope in a puffer jacket, or politicians giving speeches they never delivered, have raised concerns about the speed of computing progress and some of the unintended consequences AI is producing. It has been said that we are moving into a gray zone between reality and falsehood that could undermine democracy. In fact, this was one of the main arguments in favor of greater regulation in the US Senate this week. «We are going to face elections next year, and these models are getting better and better. There is a real possibility that AI will be used to deceive and misinform voters», CNN reported. It is worth remembering that Donald Trump's victory in 2016 was overshadowed by the Cambridge Analytica scandal: several US newspapers claimed that the consultancy had obtained data on 50 million Facebook users, which it used to influence the election results.
Furthermore, text- or image-generation tools can be used to steal the identity of any Internet user and gain access to personal and financial information. This very week, Samsung announced that it was barring its employees from using ChatGPT and other AI tools after discovering that one of them had shared confidential information with the chatbot.
Another problem is data privacy, which is why the Italian government banned the use of ChatGPT a month ago. Although that veto has since been lifted, other countries such as Russia, China and Iran maintain their bans, and artificial intelligence legislation is being drafted in Europe focusing, among other things, on data protection and the biases inherent in algorithms. «It turns out that when we use these machine-learning processes to make decisions, in personnel selection for example, much of the data used contains bias, whether racial or gender-based», explains Vicente Botti, professor of artificial intelligence and researcher at the Polytechnic University of Valencia. Another controversial application of AI is facial recognition, which is used to monitor political dissidents or curb press freedom in countries such as China or Russia.
Will the moratorium help? «Banning it is like trying to put gates on an open field, because the program is available on the Internet. There are other GPT-style models from different companies, so it was only a matter of time before one came to the fore», says Botti. Lumbreras, for her part, believes that this application is what prompted the call for a pause, but that the real interest behind it is unclear. «It is signed by academics, but also by companies, and one suspects that some of them may have an interest in holding back development until they themselves reach the same competitive level. There is no guarantee of what companies are going to do behind the scenes», she concludes.
Are we facing a conscious AI?
This week a group of Microsoft researchers claimed that GPT-4 is one step away from being considered an artificial general intelligence, or AGI; that is, a system capable of understanding or reasoning in an almost human way. «Arguments about the need to better regulate AI development have been on the table for many years, and some of the claims in the letter are not exactly new. Responsible artificial intelligence is a concept that has been worked on for at least six years. The European Commission coined the term, which is understood, from a cybersecurity point of view, as artificial intelligence that is ethical, trustworthy, interpretable and secure. It is necessary to regulate which uses are acceptable and which should be punished: a firm rule to prevent abuse», says Vicente Botti, researcher at the UPV.