Zefr Acquires Adverif.ai to Identify Misinformation on Social Platforms
Misinformation has historically been trickier to detect with tech than other kinds of brand safety
Brand safety ad-tech firm Zefr is buying Adverif.ai, an Israeli artificial intelligence company that uses tech to identify misinformation, the companies announced today. Terms of the deal were not disclosed.
Zefr, which focuses on brand suitability — particularly inside walled gardens like Meta and TikTok — has previously been more focused on helping brands avoid appearing next to videos with undesirable content, such as nudity or violence.
Misinformation has been a harder beast for technology to tackle, which made Adverif.ai an attractive acquisition, said Andrew Serby, EVP of strategy and marketing at Zefr.
“[Misinformation] is a very difficult challenge, given that it’s been so prolific and so hard to pin down,” Serby said. “We always wanted something a lot more technical that we can actually build into our product.”
The deal comes as brand safety on social platforms faces growing scrutiny. At the Cannes Lions Festival of Creativity this year, the Global Alliance for Responsible Media (GARM), an initiative of trade body the World Federation of Advertisers, said it was adding misinformation to its brand safety floor and suitability framework, which it uses to produce a report card on social platforms.
Serby said the announcement served as an impetus for the deal, as Zefr aims to help companies achieve brand safety across GARM standards. The deal is the company’s first ad-tech business acquisition.
Social platforms have invested in content moderation over the past several years after a host of scandals, but research shows misinformation still proliferates. A study from New York University and Université Grenoble in France found that news publishers known for putting out misinformation drew six times as many likes, shares and interactions on Facebook as trustworthy news sources around the 2020 election. (Facebook responded that engagement doesn’t equate with reach.) The International Fact-Checking Network wrote a letter to YouTube’s CEO in January 2022, highlighting recent examples of misinformation on the platform.
Using AI to detect falsehoods
Adverif.ai started in 2018 as a game, testing users’ ability to parse fake from real headlines. The technology tended to identify the truth correctly, whereas the players were not always as successful — opening the door for a wider use case, said Or Levi, Adverif.ai’s founder and CEO.
The incumbents in the space, such as news publisher rating firm NewsGuard, tend to rely mostly on human fact-checkers. Adverif.ai takes fact-checkers’ recommendations as a starting point, then uses that data to teach its algorithms what to look for when identifying misinformation.
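The approach Levi describes, using fact-checker verdicts as labels that an automated model then generalizes from, can be sketched in miniature. Everything below (the sample headlines, the labels, the scoring function) is illustrative only; Adverif.ai’s actual models, features and data are not public.

```python
from collections import Counter

# Illustrative only: human fact-checker verdicts serve as training labels.
fact_checked = [
    ("miracle cure eliminates all diseases overnight", "false"),
    ("secret video proves moon landing was staged", "false"),
    ("central bank raises interest rates slightly", "true"),
    ("city council approves budget for road repairs", "true"),
]

# Count how often each word appears under each verdict.
word_counts = {"false": Counter(), "true": Counter()}
for text, verdict in fact_checked:
    word_counts[verdict].update(text.split())

def false_score(headline):
    """Crude score: fraction of words seen more often in false claims."""
    words = headline.lower().split()
    if not words:
        return 0.0
    hits = sum(
        1 for w in words
        if word_counts["false"][w] > word_counts["true"][w]
    )
    return hits / len(words)

# A new, unchecked headline can now be scored at scale.
print(false_score("new miracle cure was staged"))  # 0.8
```

The point of the toy is the division of labor: the scarce human fact-checks supply the labels once, and the model then scores unlimited new content, which is the scale argument Levi makes.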
“You can use what we have in order to achieve scale,” Levi said. “There is only so much [fact checkers] can do.”
Serby said that Zefr’s content policy moderators will monitor the technology’s accuracy and make modifications.
Brands cannot yet transact against Adverif.ai’s misinformation distinctions within the Zefr platform. Zefr is first focused on integrating the technology and making sure the labeling is as good as possible.
“We’re dedicating the next several months on product integration into our GARM targeting and measurement suite, which allows advertisers to choose their GARM risk thresholds across platforms for either targeting or post-campaign measurement,” Serby said.
Joshua Lowcock, global chief media officer at UM Worldwide, said the tech would be particularly useful if ad buyers can transact against it, especially because Zefr was recently named a brand suitability partner of TikTok and selected by Meta to build a brand suitability product for the Facebook Feed.
“To buy against [Adverif.ai] and use it as filters and controls on TikTok and Facebook. That’s the dream,” Lowcock said, noting it is much more difficult to monitor brand safety in walled gardens than on the open web. “We’d have some sense that we’re not monetizing disinformation.”
Other brand safety ad-tech firms likewise concentrate on forms of brand safety other than misinformation, and their offerings are not explicitly mapped to the new GARM standards, leaving an opening in the market, said Lowcock and a media buyer at a global agency who was not authorized to speak to the press.
But there’s a reason the tech for misinformation is limited. Ad tech firms are reluctant to call out publishers as false, Lowcock said, and adjudicating truth is not straightforward.