March 2023

Meta’s In-Feed Brand Suitability Tools Are Ready For Prime Time

After more than two years of building and testing, Meta is releasing brand suitability controls and third-party verification for its feed environments.

On Thursday, Meta announced new AI-powered inventory filters for the Facebook and Instagram feeds so brands can control the types of in-feed content their ads appear above or below, based on their suitability preferences.

(The controls are available in English and Spanish, with French and German as the likely next two languages.)

As a refresher, brand safety is about not monetizing the sort of content that most brands probably wouldn’t want to be near: pornography, violence, terrorism and the like.

But brand suitability is a more nuanced concept, said Samantha Stetson, Meta’s VP of client council and industry trade relations, because it’s contingent on a brand’s individual sensitivities.

Safe vs. suitable

Meta uses a mixture of AI and human review to remove content and pages that violate its community standards. This is content that would also be considered brand unsafe and, therefore, not monetizable.

What’s left is all monetizable – in theory. But different advertisers have different thresholds of risk based on their brand values and preferences, Stetson said. That’s where suitability comes in.

“What we’re trying to do here,” Stetson said, “is give advertisers greater control over where their ads are placed so they can feel comfortable and make better decisions to inform their marketing goals.”

Meta’s brand suitability review system classifies in-feed content, including text, images and video, based on risk tolerance levels as recommended by the Global Alliance for Responsible Media (GARM), an industry group founded by members of the World Federation of Advertisers to determine standards for defining brand safety and suitability.

GARM’s suitability framework sets out standard definitions for high-, medium- and low-risk content.

For example, a depiction of death or injury would be classified as high risk if it glamorizes harmful acts, but as medium risk if the depiction is in the context of entertainment or breaking news. Depicting death or injury for educational or scientific purposes would fall into the low-risk category.
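To make the tiering concrete, here is a minimal Python sketch of that one-category example. The tier names follow GARM’s framework; the context labels and the mapping itself are illustrative assumptions, not Meta’s actual rules.

from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # e.g., glamorizes harmful acts
    MEDIUM = "medium"  # e.g., entertainment or breaking-news context
    LOW = "low"        # e.g., educational or scientific context

# Hypothetical mapping from depicted context to GARM risk tier
# for the "death or injury" category described above.
DEATH_OR_INJURY_TIERS = {
    "glamorization": RiskTier.HIGH,
    "entertainment": RiskTier.MEDIUM,
    "breaking_news": RiskTier.MEDIUM,
    "educational": RiskTier.LOW,
    "scientific": RiskTier.LOW,
}

def classify_death_or_injury(context: str) -> RiskTier:
    # Look up the risk tier for a death-or-injury depiction.
    return DEATH_OR_INJURY_TIERS[context]

print(classify_death_or_injury("breaking_news"))  # RiskTier.MEDIUM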

Suit your fancy

After in-feed content has been cleared for monetization, Meta’s AI review system scans it to see which risk bucket it should fall into.

Facebook and Instagram advertisers, meanwhile, can choose from three settings to control the type of content their ads are adjacent to in feed.

Expanded inventory is the default setting, which includes any in-feed content that abides by Meta’s community standards and is eligible for monetization. The moderate inventory setting filters out content that’s been classified as high risk, as per GARM’s framework. Limited inventory is for advertisers that only want their ads to run alongside low-risk content.
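In code terms, the three settings reduce to an allow-list over risk tiers. The sketch below is a hypothetical reconstruction of that gating logic using the setting names from Meta’s announcement; it is not Meta’s implementation.

# Which GARM risk tiers each inventory setting tolerates (assumed mapping).
ALLOWED_RISK = {
    "expanded": {"high", "medium", "low"},  # default: all monetizable content
    "moderate": {"medium", "low"},          # filters out high-risk content
    "limited": {"low"},                     # low-risk content only
}

def ad_may_run_adjacent(inventory_setting: str, content_risk: str) -> bool:
    # True if an ad with this setting can appear next to this content.
    return content_risk in ALLOWED_RISK[inventory_setting]

assert ad_may_run_adjacent("expanded", "high")
assert not ad_may_run_adjacent("moderate", "high")
assert not ad_may_run_adjacent("limited", "medium")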

It’s not possible to quantify exactly how much in-feed content across Facebook and Instagram is monetizable, Stetson said. The number of ads people see depends on the ad load and session depth.

But she noted that, during the testing phase, Meta found that less than 1% of content on its platforms falls into the high-risk category.

Prove it

Advertisers aren’t going to take it on faith that their ads are only running adjacent to content they’ve deemed suitable.

So Meta partnered with video brand suitability platform Zefr to develop tools for reporting on the context in which ads appear in the Facebook feed. (Measurement and verification for the Instagram feed isn’t available yet, Stetson said.)

Advertisers can get transparency reports for Facebook that show them the content that appeared above and/or below their ads and the risk category it fell into. Being able to see the content itself could eventually convince some brands to loosen their suitability restrictions.
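One plausible shape for such a report is a list of adjacency records: which content ran above or below which ad, and the risk tier it was classified into. The sketch below uses assumed field names for illustration; it is not Zefr’s actual schema.

from collections import Counter
from dataclasses import dataclass

@dataclass
class AdjacencyRecord:
    ad_id: str
    content_id: str
    position: str   # "above" or "below" the ad in feed
    risk_tier: str  # "high", "medium" or "low" per GARM

report = [
    AdjacencyRecord("ad_123", "post_9", "above", "low"),
    AdjacencyRecord("ad_123", "post_14", "below", "medium"),
]

# A brand could audit how often its ads ran next to each risk tier:
print(Counter(r.risk_tier for r in report))  # Counter({'low': 1, 'medium': 1})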

“If they can actually take a look at it,” Stetson said, “they might say, ‘Oh, I thought I only wanted to stay away from high-risk stuff, but maybe medium risk could work for me.’”

Up next

Next up on the road map is to expand these classification controls and settings to Stories and Reels, which Stetson said advertisers have been asking for.

Improving Reels monetization in general is a top priority for Meta. CEO Mark Zuckerberg told investors in February that, although more than 40% of Facebook and Instagram advertisers use Reels, the format monetizes less efficiently.

Also on the agenda: onboarding other third-party measurement partners to develop their own suitability verification tools for the feed and pursuing Media Rating Council (MRC) accreditation for Meta’s in-feed inventory filters. (The MRC process will kick off during the second half of the year.)

In November, Meta finally got accreditation for direct monetization controls – as in, for ads placed directly in content, such as Instant Articles and in-stream video – after concluding a two-year MRC audit.

Safety (AI)n’t easy

One of the reasons it took so long for Meta to launch suitability measurement and verification controls for its feed environments is that no two feeds are alike.

The content lineup in any given feed is personalized and different for every person. “Feeds are the ultimate in complexity,” said Rich Raddon, co-CEO and co-founder of Zefr, because they contain a dynamic mix of text, images, sound and video.

“The AI training models need to validate each of the elements for brand safety independently and as a grouping in order to properly understand the context,” Raddon said.

A team of people at Zefr reviews text, images and videos on Meta’s platforms. Those human judgments are fed into Zefr’s AI system and combined with deep learning models, such as XLM-RoBERTa and CLIP.
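As a rough illustration of that fusion step, the sketch below embeds a post’s caption with XLM-RoBERTa and its image with CLIP (using the public Hugging Face checkpoints) and concatenates the two vectors into one feature a downstream risk classifier, trained on human-labeled examples, could consume. It shows the general technique only, not Zefr’s pipeline.

import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer, CLIPModel, CLIPProcessor

text_tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
text_model = AutoModel.from_pretrained("xlm-roberta-base")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

def embed_post(caption: str, image: Image.Image) -> torch.Tensor:
    # Mean-pool the multilingual text embedding, take CLIP's image
    # embedding, and concatenate them into a single joint vector.
    with torch.no_grad():
        toks = text_tok(caption, return_tensors="pt", truncation=True)
        text_vec = text_model(**toks).last_hidden_state.mean(dim=1)  # (1, 768)
        pix = clip_proc(images=image, return_tensors="pt")
        img_vec = clip_model.get_image_features(**pix)               # (1, 512)
    return torch.cat([text_vec, img_vec], dim=-1)                    # (1, 1280)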

“AI is an excellent tool when it comes to taking human action and amplifying that voice to the scale of feeds,” Raddon said.

But speaking of AI, it could only be a matter of time until regulators start taking more interest in brand safety and suitability, especially now that misinformation is part of GARM’s framework. (In June 2021, GARM updated its framework to include misinformation as a harmful content category.)

When most people hear the term “brand safety,” one of the first things that comes to mind is misinformation, Raddon said.

“With the explosion of generative AI, many Americans now understand how easy it is to create synthetic images, video and text,” he said. “All of a sudden, the idea of misinformation has become very real for your everyday citizen.”
