In April, security analysts at the social networking giant found malware masquerading as ChatGPT and similar AI tools, Meta's chief information security officer, Guy Rosen, told reporters.
He noted that bad actors such as hackers and spammers are always on the lookout for the latest trends that "capture the imagination" of the public, and ChatGPT is the current example. The OpenAI chatbot, which holds fluent conversations with people and generates code and text such as emails and essays, has sparked enormous enthusiasm.
Rosen said Meta has detected fake browser extensions that claim to offer generative AI tools but actually deliver malware designed to infect devices.
Malicious actors commonly exploit interest in flashy new developments to trick people into clicking malicious links or downloading programs that steal their data.
"We have seen this with other popular topics, such as scams driven by the immense interest in digital currency," Rosen said, adding: "From a bad actor's perspective, ChatGPT is the new cryptocurrency."
Meta has detected and blocked more than 1,000 URLs promoted as ChatGPT-like tools that are actually traps set by hackers, according to the company's security team.
Meta has yet to see hackers use generative AI as anything more than bait, but it is gearing up for the technology to be used as a weapon, something Rosen said he sees as inevitable.
“Generative AI holds great promise and the bad actors know it, so we all need to be very vigilant,” he said.
At the same time, the Meta team is looking for ways to use generative AI to defend against hackers and fraudulent online campaigns.
"We have teams already thinking about how (generative AI) can be abused and the defenses we need to put in place to counter that," Meta's head of security policy, Nathaniel Gleicher, said during the same briefing.
“We’re preparing for that,” Gleicher said.