ChatGPT has made its mark on the technology world as a pioneer in generative artificial intelligence (AI) for communication, though it is not the only one. Now, the popular OpenAI program could voluntarily cease operations in the European Union (EU).
Sam Altman, CEO of OpenAI, has acknowledged the need for the nascent AI industry to be regulated and licensed. Although several proposals to meet this need are currently under discussion, none seems sufficient, at least from the company's point of view.
After meeting with authorities from Spain, France and the United Kingdom to discuss mechanisms for regulating the growth and use of AI, the head of OpenAI expressed disagreement with some provisions of the AI law the EU is preparing. He warned that ChatGPT and other OpenAI products could wind down operations in the region if they cannot comply with the new regulations, which would come into effect sometime in 2025.
The European Union proposes the world's first regulation of developments such as ChatGPT
The breakthrough of AI models like ChatGPT, and their rapid development, has put on the table the urgent need to regulate generative AI worldwide.
At a conference at University College London, Altman set out OpenAI's position on the European proposal to regulate AI, stating: "If we can comply, we will, and if we can't, we will stop operating. We will try, but there are technical limits to what is possible."
Specifically, the chief executive of the company behind ChatGPT commented on the categorization system proposed by the European Commission, under which AI platforms would be classified according to the "potential risk" they pose to social well-being.
The system proposes four levels of assessment: minimal risk, limited risk, high risk and unacceptable risk. Depending on the risk level assigned to each AI project, the companies behind these developments must meet strict requirements in order to operate in the EU, with the last two categories being the most demanding.
Based on the characteristics of each category described in the current proposal, ChatGPT and models such as GPT-4 would likely be classified as "high risk". This would mean that OpenAI would have to meet very strict requirements in order to operate in the region.
OpenAI wants tailor-made regulation for ChatGPT
In this category, services must establish strong security policies for users, provide verifiable human oversight, train diligently on quality data to avoid bias, and be able to trace user activity when necessary.
In practical terms, this means that ChatGPT and other OpenAI products would become more expensive, to cover the investments the company would have to make in order to fully comply with the requirements demanded by the European Union.
In a statement reported by Reuters, Altman said that the current draft of the EU AI Act amounts to "over-regulation", although he hopes there will be changes in the final version, which could be settled in the coming weeks.