March 2022

Meta Picks Zefr as First News Feed Brand Safety Measurement Partner

Promises to report where ads appear in feeds on Facebook, and later on Instagram, Reels and the Instagram Explore page

By Garett Sloane. Published on March 17, 2022.

Facebook picked Zefr, a "brand suitability" tech company, to help verify that brands can avoid controversial subjects within the social media app's feeds.

On Thursday, Meta, which owns Instagram and Facebook, made its latest update to the brand safety program it started in 2021 to implement controls for brands in its chaotic feeds. A year ago, Facebook called the project "topic exclusions" in News Feed, but it has since renamed News Feed to simply Feed. The environment is the main area where up to 1.9 billion Facebook users spend time daily.

Meta said it picked Zefr as the first independent party to report to brands on the efficacy of the program, letting brands understand the content of ads in feeds. DoubleVerify and Integral Ad Science were also interested in the brand safety commission. "After an extensive vetting process, we've selected Zefr," Meta said in an announcement, "as the initial partner for providing independent reporting on the context in which ads appear on Facebook Feed. We will work together to develop a solution to measure and verify the suitability of content adjacent to ads in Feed, starting with small-scale testing in the third quarter of this year and moving to limited availability in the fourth quarter."

Meta said it would eventually open the brand suitability measurement tools to all marketing partners, which include ad agencies and measurement firms like DoubleVerify. Meta also said that the controls will expand to other "surfaces" on its platform, including Instagram, video Reels and the Instagram Explore page, so that brands can better target where their messages appear in those settings.

Topic exclusions were launched as an experiment to see whether brands could avoid running ads above or below posts that contain subjects they would rather avoid, such as news, politics, crime and social issues. Facebook already had similar controls, which brands could deploy to avoid appearing in midstream video ads and alongside apps in the Facebook Audience Network. But feeds were a harder challenge because they are some of the most dynamic environments, where brands worry about objectionable news and heated political discourse.

The topic exclusions work when brands select the themes they want to avoid; their ads then won't run adjacent to any of the subjects they selected. In the early testing, Facebook had been identifying accounts of everyday Facebook users who could share some of the types of content most brands would find objectionable, and declining to run ads on those personal pages if there was a risk some of the content violated the topic exclusions.

Meta had been pressured into working on a feed-based brand safety program after the troubles of disinformation and uncivil discourse became apparent on the platform. Critics of the social network were concerned about disinformation, racism and harassment, particularly following the police killing of George Floyd in 2020. There also have been concerns that low-quality media sources get more visibility in people's personal feeds, and advertisers had grown increasingly worried their messages were landing next to conspiracies and propaganda. For years, Facebook maintained a stance that context is not as important as targeting when it comes to ads in social environments.

Meta has been working with industry groups like the Global Alliance for Responsible Media, an organization that has tried to define the parameters of what constitutes hate speech online and other issues. Major ad agencies have also been developing programs that rank social platforms for their ability to contain offensive content and keep brands from supporting it.

Meta’s announcement mentions Omnicom Media Group’s in-house program called Council for Accountable Social Advertising, which lobbies for ad adjacency controls and verification across social environments. 

"It's a significant step forward in assuring a transparent and brand-safe environment for advertisers to connect with their customers," Ben Hovaness, Omnicom Media Group's senior VP of marketplace intelligence, said of Meta's latest tool to measure safety in the context of feeds.

~ ~ ~
Clarification: A previous version of this story incorrectly reported that Instagram had also selected a third-party measurement firm.
