April 2024

Zefr’s New AI Chief Wants To Prove The Value Of ‘Truthiness’

An interview with Jon Morra, Chief AI Officer

Zefr is the latest company to name a chief AI officer. Jon Morra was promoted to the role in February after about seven years with the company.

Zefr is among the brand safety and suitability vendors with a specialty in rating contextual placements across the walled gardens, including YouTube, TikTok and Meta. Over time, Morra has seen content moderation shift from “mass human labeling” (where humans reviewed and manually marked content for violations) to painstakingly training machine learning models on the same rules. Now, he said, large language models have begun to do “pseudo labeling” of content, with a human stepping in at the end just to vet the software’s decisions.
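To make that shift concrete, here is a minimal sketch of a pseudo-labeling loop of the kind Morra describes, where an LLM proposes labels and only its low-confidence calls are routed to a human vetter. The `llm_classify` function is a hypothetical stand-in for whatever model API a team uses, not Zefr’s actual system.

```python
from dataclasses import dataclass


@dataclass
class Review:
    content_id: str
    text: str
    llm_label: str     # label proposed by the model
    needs_human: bool  # low-confidence items get routed to a human vetter


def llm_classify(text: str) -> tuple[str, float]:
    """Hypothetical LLM call: returns (policy_label, confidence)."""
    # A real system would prompt a hosted or fine-tuned model here.
    return ("safe", 0.92)


def pseudo_label(items: dict[str, str], threshold: float = 0.8) -> list[Review]:
    reviews = []
    for content_id, text in items.items():
        label, confidence = llm_classify(text)
        # Instead of humans labeling everything up front, only the
        # model's low-confidence decisions are flagged for review.
        reviews.append(Review(content_id, text, label, confidence < threshold))
    return reviews


if __name__ == "__main__":
    print(pseudo_label({"vid-1": "example video transcript ..."}))
```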

“My job is to figure out how to use AI responsibly, to keep on top of research trends and know how to cut through the noise,” said Morra, who joked he falls asleep most nights reading the machine learning subreddit.

AdExchanger caught up with Morra about the new role and how digital media will adapt (or acquiesce) to AI technology.

AdExchanger: What are your top priorities in your new role?

JON MORRA: First is identification of misinformation. The amount of generative content online that’s not clearly marked [as AI generated] is growing.

Second, scaling our policy effectively in as many languages and modalities as we can.

Third, we have a new initiative around responsible AI. A lot of our customers are creating generative experiences, so there’s a burgeoning market for making sure these experiences are safe and suitable.

Why is it difficult to detect misinformation?

When you’re looking at the GARM [Global Alliance for Responsible Media] categories, a well-trained person can assert whether a piece of content matches a policy. Is somebody committing a crime? Is there a weapon? Is somebody consuming alcohol? People can be trained to do that. Misinformation, not so much.



Asserting the truthiness of something is hard.

Truthiness is an interesting way to put it. Is there not always an absolute truth?

You have two separate problems. There’s not always an absolute truth. But you also have negatives that are hard to prove.

There was a post saying that Joe Biden’s mental faculties aren’t what they used to be. Is that true? Is that false? He’s in his 80s. He’s never been diagnosed with Alzheimer’s. Does he have some other condition? Probably not, but it gets hard to prove a negative.

What would you do to prove a negative?

We’ll find articles from trusted news sources that talk about why that’s probably not true. Ultimately, our policy team makes the call about whether or not they want to add a fact to our database. It’s case by case.

How do you stay ahead of misinformation trends?

Our goal is to stay on top of these facts and to react as fast as we can.

In 2022, we acquired AdVerif.ai, which focuses on misinformation. Zefr also integrates with verified fact-checkers [International Fact-Checking Network members] and public data sources, which we use as our ground truth to train our models to assert, when some new piece of content comes in, whether it’s true or false.

In addition, our policy team hunts for social media trends. Once they find a trend, they try to find a verified fact to say this is proven or disproven, according to some third-party source. We then put that fact into our database and retrain and redeploy our models.
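As an illustration of that loop, here is a toy sketch of matching incoming content against a database of verified facts. A production system would use trained models and semantic matching; the token-overlap check and the `FACT_DB` entries below are invented for illustration only.

```python
# Toy version of the fact-database loop: verified claims (each with a
# verdict from a third-party fact-checker) go into a store, and new
# content is matched against them before routing to the policy team.

def tokens(text: str) -> set[str]:
    return set(text.lower().split())


# Each entry: (claim text, verdict from a verified fact-checker).
FACT_DB = [
    ("product X causes cancer", "disproven"),
    ("the recall notice covers all 2023 models", "proven"),
]


def check_claim(content: str, min_overlap: float = 0.5) -> str:
    content_tokens = tokens(content)
    for claim, verdict in FACT_DB:
        claim_tokens = tokens(claim)
        # Fraction of the known claim's words that appear in the content;
        # a stand-in for the semantic matching a trained model would do.
        overlap = len(claim_tokens & content_tokens) / len(claim_tokens)
        if overlap >= min_overlap:
            return f"matches known claim ({verdict})"
    return "no match; route to policy team for review"


print(check_claim("new viral post says product X causes cancer"))
```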

We want to make sure we get both a global definition of misinformation and a customer-focused definition.

What’s the difference between global and customer-focused misinformation?

Global misinformation would be [misinformation about] anything you would read on CNN or in a major newspaper.

Brand-specific misinformation could be where a brand creates a product and somebody claims that product causes cancer.

Where do you see generative AI going?

Generative models are going in two separate directions. One is bigger. GPT-5 is going to be this monstrous model that’s going to consume a ton of compute power.

The other thing you see is smaller, more targeted models. This is where Zefr is investing: using the big models to understand the world at large, fine-tuning them and creating these smaller models to do one thing really well – in our case, brand safety and suitability.

Where the generative models excel is helping us come up with training data.
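A hedged sketch of that pattern: a large model supplies labeled training examples, and a small, targeted classifier learns the one narrow task. The examples below are hard-coded stand-ins for LLM-generated data, and the scikit-learn pipeline is one plausible choice for the small model, not Zefr’s actual stack.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In practice these labeled pairs would be generated by prompting a
# large model, then spot-checked by humans; hard-coded here.
examples = [
    ("family cooking tutorial", "suitable"),
    ("graphic combat footage", "unsuitable"),
    ("sports highlights recap", "suitable"),
    ("step-by-step weapon assembly video", "unsuitable"),
]

texts, labels = zip(*examples)

# A small, targeted model that does one thing: suitability scoring.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["late-night cooking show clip"]))
```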

What are the implications of generative AI for brand safety and suitability?

The future of brand safety is this ability to run fast. When we have a policy change, no longer do we need to train our crowdsourced reviewers on what that policy change means, get a million pieces of content labeled about that policy change, retrain the model and redeploy.

Now, the cycle of deployment and keeping up with new policies – new content in the wild – has gotten a lot faster.
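One way to picture why the cycle is faster: with an LLM-based classifier, a policy change can be a prompt edit rather than a fresh round of crowd labeling and retraining. The `build_prompt` helper below is hypothetical, not Zefr’s pipeline.

```python
# A policy update becomes a text change. Previously, the same change
# meant retraining reviewers, labeling ~a million items, retraining
# the model and redeploying; now only the prompt needs to ship.

POLICY_V1 = "Flag content that depicts weapons or violent crime."
POLICY_V2 = POLICY_V1 + " Also flag unlabeled AI-generated media."


def build_prompt(policy: str, content: str) -> str:
    return (
        f"Policy: {policy}\n"
        f"Content: {content}\n"
        "Answer 'violates' or 'ok'."
    )


# Deploying POLICY_V2 changes only the text sent to the model.
print(build_prompt(POLICY_V2, "example video transcript ..."))
```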
