In mid-March, as the Russian invasion of Ukraine entered its third week, an unusual video began circulating on social media and was also broadcast on the television channel Ukraine 24 after hackers compromised the channel.
The video shows Ukraine’s President Volodymyr Zelensky, his head bobbing and his body largely motionless, calling on the citizens of his country to stop fighting Russian troops and surrender their weapons, and claiming that he had already fled Kyiv.
Except they weren’t the words of the real Zelensky. The video was a “deepfake,” content created using artificial intelligence in which a computer is trained to imitate a real person and produce video that appears authentic. Soon after the deepfake aired, it was denounced by Zelensky himself, removed from major online platforms such as Facebook and YouTube, and ridiculed by Ukrainians for its poor quality.
However, just because the video was quickly discredited doesn’t mean it didn’t cause harm. In an increasingly politically polarized world, in which consumers of media may believe information that reinforces their prejudices regardless of the apparent legitimacy of the content, deepfakes pose a significant threat, warns Don Fallis, professor of computer science and philosophy at Northeastern University.
“It’s kind of interesting in that respect, in that it wasn’t a particularly high-quality deepfake. There were all kinds of indicators that individual consumers of information might think, ‘That doesn’t seem right,’” Fallis says of the Zelensky deepfake. “That being said, as with all these sources of misinformation, no matter how credible the information may seem, if you have a strong inclination toward a particular point of view, and the information you receive confirms that pre-existing bias, then the source of that information, and the plausibility of that information, doesn’t matter.”
In his research, Fallis, who studies epistemology, or the theory of knowledge, tries to place modern issues such as deepfakes and fake news in the larger philosophical context of how individuals acquire true knowledge and absorb misinformation.
In 2018, he co-authored an article titled “Fake news is fake news” with Northeastern philosophy professor Kay Mathiesen. The article examined the threat that fake news poses to democracy and knowledge, and sought to define the concept. Two years later, he wrote an article about deepfakes, “The Epistemic Threat of Deepfakes,” in which he concluded that deepfakes can lead to false beliefs, undermine the justification of true beliefs, and prevent people from acquiring true beliefs.
Fallis argues that both fake news and deepfakes have the harmful effect of delegitimizing real news. He says they reduce the amount of trustworthy information available, undermine consumers’ trust in authentic media, and place an additional burden on fact-checkers to authenticate vast amounts of online content.
“In the case of fake news, you are building this online presence that is going to look like a legitimate news site,” says Fallis. “Similarly, in the case of deepfakes, you are creating video and audio that are supposed to look like legitimate media.”
Additionally, in conjunction with tools used to collect the personal information of individual users, deepfakes can be used to maliciously target large audiences and manipulate them by playing on their inherent biases, Fallis says.
“It’s not just this one killer technology,” he says. “It’s not that deepfakes are something that drives us over a cliff. It’s a whole suite of potentially problematic technologies.”
Northeastern political science and computer science professor David Lazer notes that heightened political division has a similar effect on the way people interpret fake news, with users clearly seeking out and accepting information that aligns with their prior prejudices. However, it is not clear how far people suspend their critical-thinking skills when confronted with media that reinforces their worldview.
“Certainly, we have seen increased polarization in public opinion, and this is clearly one of the factors that can play into the spread of misinformation,” Lazer says. “It is quite plausible that political polarization and the spread of misinformation go hand in hand, but this is an area where more research is needed.”
Lazer, director of Northeastern’s Lazer Lab, researches social influence and networks, focusing primarily on the spread of misinformation on social media. In 2019, he co-authored a study on the spread of fake news on Twitter during the 2016 presidential election cycle.
Deepfake techniques are also “quite relevant” to his research, Lazer says, but more work needs to be done on the different types of misinformation, how they spread, and their psychological impact on consumers of media. He adds that the increase in political polarization and its effect on media consumption is also a high-priority area of study.
“We can say with certainty that there has been an increase in many types of polarization over the past 40 years, and it is concerning,” Lazer says.
Beyond the issue of users failing to question deepfakes when the content confirms their current worldview, the technology raises other significant concerns.
One of the most problematic uses of the technology is when a person’s likeness, typically a woman’s, is manipulated into sexually explicit videos, making it appear as if the targeted person is participating in sexual activity, says Mark Berkman, executive director of the Organization for Social Media Safety, a nonprofit dedicated to making social media safer through advocacy and education.
Additionally, as the Zelensky deepfake shows, the world is seeing the political impact of the technology, Berkman says. Deepfakes can potentially interfere with democratic elections and be used as propaganda to sow division and suspicion, he says.
Fallis and Berkman stress the importance of users developing critical-thinking skills when venturing online. One way for people to protect themselves from deepfakes is to practice safe social-media use: viewing content, especially news, critically.
The Organization for Social Media Safety currently supports media-literacy training in public schools, helping children understand news sources so they can take a nonpartisan approach to evaluating the credibility of content.
“It is incredibly important for our democracy to understand what is real and what is not,” Berkman says. It is also important to limit social-media use to a healthy amount of time, he adds, so that people can reduce their exposure to deepfakes used for propaganda purposes.
However, as Fallis and Berkman note, individual efforts cannot replace structural change by businesses and governments aimed at combating the spread of this potentially dangerous technology.
Social-media giants like Facebook have adopted policies to remove deepfakes from their platforms when they meet certain criteria, and some state governments, such as California’s, have adopted laws imposing civil liability on the creators of intentionally harmful deepfakes.
In California, Berkman says, his organization is working to pass a state law that would also impose criminal penalties on creators of malicious pornographic deepfakes, with the hope that other states and the federal government will adopt similar legislation.