Meta lets brands shut off comments on Facebook and Instagram ads—what marketers should know
AdAge, October 2024

Brands can now silence trolls on Facebook and Instagram with a new brand safety tool that mutes comments on ads. The tool is part of a flurry of new mechanisms Meta has developed for its social platforms, giving advertisers more levers to avoid unsuitable content; the changes come amid uncertainty over the future of brand safety online.

On Thursday, Meta announced the updates to its brand safety and suitability platform, which it has been developing for years and rolled out widely last year. Meanwhile, TikTok recently expanded a brand safety partnership with Zefr, the ad viewability and measurement firm. Zefr is giving advertisers a way to avoid appearing near TikTok videos that contain risky subjects by putting those subjects on an exclusion list.

The emphasis from the two major social media platforms, Meta and TikTok, shows the industry is still working on quality controls for brands even after the collapse of the Global Alliance for Responsible Media (GARM), the group that was forced to shut down after pushback from Elon Musk and X, formerly Twitter.

Meta and TikTok’s changes coincide with Advertising Week in New York this week, the annual industry confab where brand safety is typically a core talking point. Meta is now letting brands shut off comments, which could lower the volume of harassment and misinformation that sometimes accompanies sponsored posts. “Obviously this is really good, you know, when there’s sensitive campaigns or sensitive issues happening in the world,” said Samantha Stetson, VP of client council and industry trade relations at Meta. “[Brands] just have the ability to control the kind of comments that would be appearing.”

Meta also introduced several more adjustments to its brand safety regime, including exclusion lists to keep ads from running on profile pages of unwanted people and publishers. Meta recently expanded inventory to place ads within profile pages, not just within the news feed, meaning users are served ads when they peruse different Facebook and Instagram accounts. Those accounts could be at odds with brands’ sensibilities about what’s suitable for their messages.

Meta also announced a deeper partnership with Integral Ad Science, another firm that measures viewability and brand safety. Meta will enable third parties such as IAS to help manage content block lists to avoid categories of subjects that make a brand uncomfortable.

The changes represent an update to the adjacency controls that Meta released widely last year, a tool brands could use to avoid appearing above or below user-generated content that fell outside the brand’s safety standards. Meta, TikTok, Google and others have all worked on new methods of controlling where ads run on their services before an ad is placed, and of reporting to brands after a campaign how effective those controls were at keeping ads in clean settings. The platforms had been working with GARM on standards and definitions in brand safety, identifying core categories that could turn off advertisers, including nudity, violence, hate speech, misinformation and more.

This year, however, Musk took a wrecking ball to GARM, suing the group and claiming it engaged in collusion, threatening ad freezes on platforms that did not adopt its preferred speech standards. GARM, which had been a core ad industry institution and a part of the World Federation of Advertisers, shut down under the threat of the lawsuit. Now, it seems platforms are coming up with piecemeal solutions to work on quality controls individually.

Brands have been through the wringer online, where they contend with the same faulty information landscape as the rest of the public. Social media has become flooded with AI deepfakes, and brands often find themselves on the receiving end of misinformation and smear campaigns, much as fake news spreads in politics and current events. Brands also live in fear of unfortunate screenshots that show their ads online next to insensitive content. This summer, Adalytics, a watchdog group that looks for bad ad adjacencies online, documented alleged lapses in the brand safety edifice. Adalytics has been critical of groups such as IAS and DoubleVerify, another viewability and brand safety firm.

The social media platforms have their own woes: they enable the creation of AI content even as they suffer from an influx of misleading information generated by AI. AI content has become an even more pressing consideration as the U.S. presidential election nears; in recent weeks, there has been a rash of fake images of Hurricane Helene victims. Meanwhile, TikTok and Meta have also been scrutinized for lapses in child safety, which often goes hand in hand with brand safety. Brands are scared off by reports that social media sites aren’t protecting teens and younger users. This week, TikTok was sued by more than a dozen states over viral challenges that spread on the site and could encourage dangerous behavior from young people. States have also sued Meta over alleged failures to protect children.

It’s in this chaotic information ecosystem that brands are trying to exert some control, said Andrew Serby, chief commercial officer at Zefr, speaking with Ad Age after announcing the new exclusion capabilities on TikTok. Zefr’s new tool on TikTok gets even more specific about the categories of content a brand can avoid, Serby said. So, a brand that is dealing with a very specific kind of misinformation can set its standards to avoid TikTok videos that include that bit of fake news. The tool has an even more basic function: it allows brands to avoid appearing next to content that features their competitors, something TV advertisers have long done to keep their commercials out of the same ad breaks as rivals.
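In practice, the kind of exclusion logic Serby describes boils down to checking a candidate video’s labels against a brand’s blocked subjects and rivals before an ad is placed. The Python sketch below is a conceptual illustration only, with hypothetical names throughout; it is not Zefr’s or TikTok’s actual API.

# Conceptual sketch of exclusion-list filtering; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class BrandExclusionList:
    # Subjects and competitors a brand refuses to appear next to.
    blocked_subjects: set = field(default_factory=set)
    blocked_competitors: set = field(default_factory=set)

    def allows(self, video_subjects, mentioned_brands):
        # Block if the video touches any excluded subject,
        # e.g. one specific piece of fake news about the brand.
        if self.blocked_subjects & set(video_subjects):
            return False
        # Block for competitive separation, as in TV ad breaks.
        if self.blocked_competitors & set(mentioned_brands):
            return False
        return True

exclusions = BrandExclusionList(
    blocked_subjects={"fake-recall-rumor"},
    blocked_competitors={"RivalCo"},
)
print(exclusions.allows({"cooking"}, {"RivalCo"}))  # False: competitor present
print(exclusions.allows({"travel"}, set()))         # True: safe adjacency

The point of the sketch is that the check happens pre-bid: the placement is rejected before the ad ever runs, rather than flagged in a report afterward.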

The social platforms and brands are on their own to come up with these new products since GARM shuttered, but that could help force innovation in brand safety, Serby said. Still, GARM could have been useful in helping guide the conversation as the space evolves with AI and new types of misinformation, Serby said.

“Without the common definitions, we’ve been trying to think about what would brands want next for brand safety and suitability,” Serby said, “and how does it work on the social platforms, which is where most of the brands are asking their questions about because, as you know, that’s where most of the investment is going.”
