Social media platforms are changing how online advertising works and, in turn, raising concerns about discrimination and new forms of predatory marketing.
Today the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) – a multi-university unit led by RMIT – launched the Australian Ad Observatory, a research project that will explore how platforms target Australian users with ads.
The goal is to foster a conversation about the need for public transparency in online advertising.
The rise of ‘dark ads’
In the mass media age, advertisements were (for the most part) public. This meant they were open to scrutiny: when advertisers behaved illegally or irresponsibly, the consequences played out in front of everyone.
And the history of advertising is full of irresponsible behavior. We have seen tobacco and alcohol companies engage in predatory targeting of women, underage people and socially disadvantaged communities. We have seen the use of sexist and racist stereotypes. More recently, the spread of misinformation has become a major concern.
Read more: Is this really wrong, or do you just disagree? Why Twitter’s user-driven approach to tackling misinformation is complicated
When such practices are out in the open, they can be countered by media watchdogs, citizens and regulators. On the other hand, the rise of online advertising – which is tailored to individuals and delivered on personal devices – undermines public accountability.
These so-called “dark ads” are visible only to the targeted user. They are difficult to track, as an ad may appear only a few times before disappearing. Nor does the user know whether the ads they see are being shown to others, or whether they are being singled out based on their profile data.
There is a lack of transparency around the automated systems Facebook employs to target users with ads, as well as the recommendations it provides to advertisers.
In 2017 investigative journalists at ProPublica were able to purchase a test ad on Facebook targeting users associated with the term “Jew hater”. In response to the attempted ad purchase, Facebook’s automated system suggested additional targeting categories, including “How to Burn the Jews”.
Facebook removed the categories after being confronted with the findings. Without the journalists’ investigation, they might have persisted indefinitely.
Researchers are increasingly concerned about dark ads. In the past, Facebook has allowed advertisers to target housing, credit and employment ads based on race, gender and age.
This year it was found distributing targeted ads for military gear alongside posts about the attack on the US Capitol. Its platform was also used to target African Americans with voter-suppression ads during the 2016 US presidential campaign.
Public support for transparency
It is not always clear whether such harms are intentional. But they have become a recurring feature of the automated ad-targeting systems used by commercial digital platforms, and the opportunity for harm is always present – intentional or otherwise.
Most examples of problematic Facebook advertising come from the United States, as this is where most of the research on the issue is done. But it is equally important to examine the issue in other countries, including Australia. And Australians agree.
Research published Tuesday, conducted by Essential Media on behalf of the ADM+S Centre, reveals strong support for transparency in advertising. More than three-quarters of Australian Facebook users surveyed agreed that “Facebook should be more transparent about how it distributes ads on its News Feed”.
With this goal in mind, the Australian Ad Observatory has developed a version of an online tool created by ProPublica that allows members of the public to anonymously share the ads they are shown on Facebook with journalists and researchers.
This tool will allow us to see how ads are being targeted to Australians based on demographic characteristics such as age, ethnicity and income. It is available as a free plugin for anyone to install on their web browser (and it can be removed or disabled at any time).
Importantly, the plug-in does not collect any personally identifying information. Participants are invited to provide some basic, non-identifying, demographic information when they set it up, but this is voluntary. The plug-in only captures text and images in ads labeled as “Sponsored Content” that appear in users’ news feeds.
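To illustrate the kind of privacy-preserving capture described above, here is a minimal Python sketch. The post schema (a "label" field marking sponsored content, plus "text", "images" and "user_id" fields) is a hypothetical simplification for illustration only, not the Ad Observatory plugin's actual code or data format.

```python
def collect_sponsored(posts):
    """Keep only posts labelled as sponsored, dropping identifying fields.

    Mirrors the two properties described in the article: only "Sponsored"
    items are captured, and no personally identifying data is retained.
    """
    collected = []
    for post in posts:
        if post.get("label") != "Sponsored":
            continue  # organic (non-ad) posts are never captured
        collected.append({
            "text": post.get("text", ""),
            "images": post.get("images", []),
            # deliberately no user_id or other identifying fields
        })
    return collected


feed = [
    {"label": "Sponsored", "text": "Buy now!", "images": ["ad.jpg"], "user_id": 42},
    {"label": "Organic", "text": "Holiday photos", "images": [], "user_id": 42},
]
print(collect_sponsored(feed))
# → [{'text': 'Buy now!', 'images': ['ad.jpg']}]
```

The design choice worth noting is that filtering happens before anything is stored: identifying fields are never copied into the collected record, rather than being scrubbed afterwards.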
Facebook’s online advertising library provides some level of visibility into its targeted advertising practices — but it isn’t comprehensive.
The Ad Library provides only limited information about how ads are targeted, and excludes some ads based on how many people they reach. Nor is it reliable as an archive, since ads disappear once they are no longer active.
The need for public interest research
Despite its past failures, Facebook has been hostile to outside efforts at accountability. For example, it recently demanded that New York University researchers shut down their research into how political ads are targeted on the platform.
When they refused, Facebook cut off their access to its platform. The company claimed it was forced to block the research by the terms of its settlement with the United States Federal Trade Commission over past privacy violations.
However, the Federal Trade Commission publicly rejected this claim, emphasizing its support for public interest research that aims to “shed light on opaque business practices, particularly around surveillance-based advertising”.
Platforms should be required to provide universal transparency about how advertising operates on their services. Until that happens, projects like the Australian Ad Observatory plugin can help provide some accountability. To participate, or for more information, visit the website.
Read more: Australia’s competition watchdog says Google has a monopoly on online advertising – but how does it work?