How Meta Reset Brand Safety Controls on Facebook and Instagram
Now the social media giant has a new plan to show brands exactly where their ads appear in the feed. By Garett Sloane. Published on October 04, 2022.
Meta, the parent company of Facebook and Instagram, has retooled its system for controlling where ads appear in social media feeds so that it can verify to brands that their ads are not running next to harmful posts, a long-standing concern of marketers on the internet.
On Tuesday, Meta announced new tests of brand safety controls, overhauling an earlier program launched when the company was still Facebook. In some ways, Meta went back to the drawing board to fix safety controls. The new method of serving ads into contexts that meet a brand’s comfort level provides more transparency, said Samantha Stetson, Meta’s VP of industry relations and client council. In this case, the advertiser will have a better sense of the specific content that surrounds an ad on Facebook and Instagram.
“[Advertisers] wanted true control over the actual adjacency placement,” Stetson said in an interview this week. “They really wanted to control what content was above or below.”
In the past two years, as advertisers saw alarming political rhetoric and hateful conduct on social media, the need for controls over ad placement grew. Major brands pushed harder on Meta in particular, but also on Twitter, Reddit, TikTok and YouTube, for more controls that would keep their ads away from content that violated their values.
Meta began feeling the pressure, especially after the civil rights uprisings in the summer of 2020 and during the heat of the 2020 presidential election. Brands were more adamant that their ads not appear anywhere near posts that could be construed as supporting hate or even violence. After the Jan. 6 Capitol attack, Meta began setting firmer timelines for implementing news feed controls. Meta picked Zefr as the third-party measurement firm that could confirm to brands that their ads appeared in pre-approved settings. Tests of the new program start this quarter, ahead of a wider rollout next year.
Reporting on the context of ads
Facebook has 1.97 billion daily users, each with a personalized feed tailored to their interests, so it is a daunting task to target ads in a way that accounts for every subject that could appear above or below them. Stetson explained that Facebook’s algorithms were designed as separate ranking systems, one that ranks content for individual users and another that picks the ads. “You now have to re-engineer the whole back end to make the two systems, to have a content relationship, as well, and take that into consideration,” Stetson said.
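As a rough illustration of that re-engineering (every name, type and rule below is hypothetical; this is a sketch of the idea, not Meta’s actual systems), an adjacency-aware ad selector might consult the organic ranking something like this:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topics: set[str]           # e.g. {"news", "crime"}

@dataclass
class Ad:
    ad_id: str
    excluded_topics: set[str]  # topics the brand wants to avoid sitting next to

def rank_posts(posts: list[Post]) -> list[Post]:
    # Stand-in for the organic content ranker (heavily personalized in reality).
    return posts

def pick_ad(ads: list[Ad], above: Post, below: Post) -> Ad | None:
    # Adjacency-aware selection: skip any ad whose exclusions overlap
    # with the topics of the posts directly above and below the slot.
    neighbor_topics = above.topics | below.topics
    for ad in ads:
        if not (ad.excluded_topics & neighbor_topics):
            return ad
    return None  # no suitable ad for this slot

feed = rank_posts([Post("p1", {"sports"}),
                   Post("p2", {"news"}),
                   Post("p3", {"cooking"})])
ads = [Ad("a1", {"news"}), Ad("a2", set())]

# The slot between p1 and p2 borders a "news" post, so a1 is rejected
# and a2 fills the slot instead.
print(pick_ad(ads, feed[0], feed[1]).ad_id)  # -> a2
```

The point of the sketch is the coupling: the ad picker can no longer run independently of the content ranker, because suitability now depends on what sits directly above and below each slot.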
Meta’s first solution for controlling ads, which it started testing last year, had drawbacks. Meta wanted to implement “topic exclusions,” in which brands could pick broad swaths of content to avoid, including crime, social issues and news. Topic exclusions are a method Meta has used for brand controls in other parts of the platform, such as within videos that brands directly sponsor in places like Facebook Watch. With topic exclusions in the feed, Meta analyzed individual users’ personal feeds for the amount of content that fell into the excluded categories. If a user was highly likely to be a consumer of those topics, that user was essentially flagged as unsuitable for a certain brand, if the advertiser wanted to avoid such content. The system fell short in some respects because it did not tell advertisers exactly where their ads appeared; advertisers just had assurances that they would not reach users who had been put in a kind of ads penalty box.
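That user-level flagging could be sketched, under the same caveat that every name and cutoff here is invented for illustration, roughly as follows:

```python
# Hypothetical sketch of user-level topic exclusion; the real system's
# signals, topic labels and thresholds are not public.

def is_unsuitable_user(feed_topics: list[set[str]],
                       excluded: set[str],
                       threshold: float = 0.3) -> bool:
    """Flag a user when more than `threshold` of their recent feed items
    touch a topic the brand excludes (e.g. crime, social issues, news)."""
    if not feed_topics:
        return False
    hits = sum(1 for topics in feed_topics if topics & excluded)
    return hits / len(feed_topics) > threshold

recent_feed = [{"sports"}, {"crime", "news"}, {"news"}, {"cooking"}]
print(is_unsuitable_user(recent_feed, {"crime", "news"}))  # -> True (2 of 4 items)
```

The drawback the article describes falls out of this design: the flag attaches to the user, not to an ad placement, so advertisers learn only that flagged users were avoided, never where a given ad actually ran.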
Meta views the new controls as an evolution of the topic exclusions it tested last year.
Under the new system, Meta, working through third parties including Zefr, will provide reporting on the context surrounding all of a brand’s ads.
Brand safety on social media
Meta and all the major digital platforms are working with groups such as the Global Alliance for Responsible Media (GARM), and other advertising stakeholders, to agree on terminology around what constitutes brand safety and how to measure it. For instance, GARM has been defining terms including “hate speech” and “misinformation,” and those agreed-upon concepts are used as the baseline for measuring brand suitability across the platforms.
“The hope is that the entire industry gets around reporting and measuring,” said Rich Raddon, co-CEO of Zefr, in an interview, “whether that’s post-campaign or pre-campaign, that everyone is using the same architecture. Because when you don’t, and you say, ‘yeah, we do it, but we do it differently.’ Then it doesn’t allow for brands to understand what’s happening with their placements from platform to platform.”
There are almost daily reminders of brand safety issues on social media. Just last week, Twitter had a crisis following a Reuters report about illicit accounts that were sharing links to illegal content related to child sexual exploitation. Ads were appearing on the profile pages of the accounts sharing those links. Twitter has since been working with GARM, brands and agencies to study what went wrong and rid the platform of the bad actors. But the episode showed why advertisers are pushing for controls and transparency.
There are still questions about how much adjacency matters in a social media setting, even when the content around ads is not illegal but simply offensive. Some camps, and Meta has been among them, have argued that the exact placement of ads does not change ad performance, but that controls should still exist for advertisers that want them. Under Meta’s new brand suitability test, brands will be able to set their risk tolerance. Some brands, such as those that are family-friendly at their core, could have zero tolerance; edgier brands may accept some degree of risk. Each brand’s risk setting would help inform where its ads run.
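To make the risk-tolerance idea concrete (the tier names and scores are invented; Meta has not published how its settings map to placement decisions), a per-brand setting might gate an ad slot roughly like this:

```python
# Hypothetical tiers and scores, for illustration only.
RISK_TIERS = {
    "zero_tolerance": 0.0,  # family-friendly brands: no flagged adjacency at all
    "moderate": 0.5,        # tolerate mildly sensitive neighboring content
    "high": 0.9,            # edgier brands accept most adjacent content
}

def slot_allowed(brand_tier: str, neighbor_risk_score: float) -> bool:
    """neighbor_risk_score runs from 0.0 (benign) to 1.0 (high risk),
    as a third-party classifier might score the posts around a slot."""
    return neighbor_risk_score <= RISK_TIERS[brand_tier]

print(slot_allowed("zero_tolerance", 0.2))  # -> False
print(slot_allowed("high", 0.2))            # -> True
```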
Zefr will present advertisers in the program with an after-action report detailing the settings in which their ads ran.
Rob Rakowitz, the head of the Global Alliance for Responsible Media, said that all platforms are working to adopt standards that could apply across the board.
“The underlying content taxonomy is the same,” Rakowitz said. “What it should do is give the advertiser some predictability and some consistency. If you were looking at your campaigns, and if you were working on Meta, and if you were working on TikTok, and you were working on YouTube … you would be able to sort of see, ‘OK, on each of these three platforms, did each of my campaigns show up according to my strategy?’”