Today, the sensitive data of millions of people around the world is exposed to potential cyberattacks, and even technology companies like Google, Microsoft, and Meta are not safe. Just as there are computer experts who specialize in malicious practices, there are others whose job is to guarantee the safety and proper functioning of the services developed by large companies, in a landscape now shaped by artificial intelligence.
So-called “red teams” have been an integral part of security strategy since the 1960s, and these teams of experts are now central to the software practices of large companies. These computer security professionals test new technologies to identify bugs, vulnerabilities, and potential threats to businesses and users.
The goal is to ensure that systems can withstand various types of attacks. An example is the chatbot developed by OpenAI: before ChatGPT’s launch, the company hired experts to probe the generated replies for harmful stereotypes and other dangerous content.
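As a rough illustration of that kind of screening, the sketch below flags generated replies that match a denylist of terms. The term list and function names are hypothetical; real red teams rely on human reviewers and trained classifiers, not simple keyword matching.

```python
# Hypothetical denylist; a real review pipeline would use human experts
# and trained safety classifiers rather than keyword matching.
FLAGGED_TERMS = ["build a bomb", "example_slur"]

def flag_reply(reply: str) -> bool:
    """Return True if a generated reply contains a flagged term."""
    lowered = reply.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

def screen_replies(replies):
    """Split candidate replies into safe and flagged buckets."""
    flagged = [r for r in replies if flag_reply(r)]
    safe = [r for r in replies if not flag_reply(r)]
    return safe, flagged
```

In practice, flagged replies would be routed to human reviewers, and the findings fed back into the model’s safety training.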
Currently, the teams at tech giants are made up of specialized professionals who look for vulnerabilities in artificial intelligence systems and software. To do this, they use targeted tactics, such as the extraction of personal data.
“Red teams not only prompt an AI model to generate toxic responses, but also use tactics such as extracting training data that reveals personally identifiable information, such as names, addresses, and phone numbers,” explains Cristian Canton, head of engineering for Responsible AI at Meta, in conversation with Forbes.
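A minimal sketch of how such an audit might scan model outputs for leaked personal data is shown below. The regular expressions are illustrative assumptions; production PII detection uses far more robust tooling.

```python
import re

# Hypothetical patterns for two common PII types; real audits use
# dedicated PII detectors covering many more formats and locales.
PII_PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def find_pii(text: str) -> dict:
    """Return PII matches found in a model output, keyed by type."""
    return {
        kind: pattern.findall(text)
        for kind, pattern in PII_PATTERNS.items()
        if pattern.findall(text)
    }
```

Outputs that trigger a match would be logged as evidence that the model memorized training data, so the leak can be addressed before release.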
This red-teaming process allows them to identify flaws in the models, eliminate them, and avoid safety risks.