Tuesday, September 26, 2023

ChatGPT AI is getting slower and less accurate, scientific study reveals

Fallibility is human, and apparently it is also common in generative artificial intelligence systems such as ChatGPT from OpenAI. At the beginning of this year, the chatbot impressed everyone with its apparent ability to solve problems and converse on a wide variety of topics.

But those who have interacted extensively with the AI, and who know the subjects they consult it about in depth, have reached a clear conclusion: ChatGPT can be a convincing “liar.”

In most cases the platform can provide apparently solid answers and texts, even if underneath they are riddled with inaccuracies or outright fabrications.

This problem is especially noticeable when the conversation turns to engineering or the exact sciences, and it is not merely a matter of perception.

In reality, the artificial intelligence can indeed become less accurate in its answers over time, or at least that is what a revealing research project has found.

ChatGPT is highly imprecise, and that should worry those who use it blindly

The ChatGPT artificial intelligence, developed by OpenAI, has seen its ability to solve mathematical problems change dramatically over time, according to a study at Stanford University that analyzed two versions of the OpenAI chatbot: GPT-3.5 and GPT-4.

The results show that ChatGPT’s accuracy on some mathematical tasks decreased significantly between the March and June versions. Similar fluctuations occurred in more complex tasks such as generating code and answering questions that require visual reasoning.

James Zou, a professor of computer science at Stanford who participated in the study, was surprised by the huge changes in ChatGPT’s performance:

“If we tune a large language model to improve its performance in some tasks, we may have many unexpected results, which may harm the performance of this model in other tasks (…).

“There are all kinds of interesting interdependencies in how the model responds to things that can lead to some of the worsening behavior that we’re seeing.”

The research results themselves are a clear example of this: they show that ChatGPT’s capabilities are not consistent over time, and that the chatbot can therefore be less reliable.

For example, when it comes to solving math problems, GPT-4 started strong in March 2023, correctly identifying prime numbers 97.6% of the time, but just three months later, in June 2023, its accuracy dropped to just 2.4%.
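To see what a 97.6% versus 2.4% accuracy figure means in practice, a benchmark of this kind can be scored by comparing the model’s yes/no answers against ground-truth primality. The sketch below is purely illustrative — the function names and the sample answers are assumptions, not data or code from the Stanford study:

```python
def is_prime(n: int) -> bool:
    """Ground-truth primality check by trial division."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def accuracy(model_answers: dict) -> float:
    """Fraction of numbers where the model's yes/no answer matches the truth."""
    correct = sum(1 for n, said_prime in model_answers.items()
                  if said_prime == is_prime(n))
    return correct / len(model_answers)

# Hypothetical model responses: number asked about -> model said "prime"?
sample_answers = {97: True, 91: True, 101: True, 100: False}
print(f"accuracy: {accuracy(sample_answers):.1%}")  # 91 = 7 * 13, so one miss
# -> accuracy: 75.0%
```

Under this kind of scoring, a drop from 97.6% to 2.4% would mean the model went from almost always classifying numbers correctly to almost always getting them wrong on the same set of questions.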


For its part, GPT-3.5 showed an improvement on the same task, from 7.4% accuracy to 86.8% — striking, considering this is the version that was theoretically set aside to make way for its replacement.

The study also showed that ChatGPT’s responses to questions about gender or ethnicity became increasingly evasive, with the chatbot refusing to answer outright in some cases — behavior that drives users away from the chat.

Nation World News Desk