
Here are safeguards brands and agencies are tapping to prevent or mitigate AI fraud

May 19, 2023 • 6 min read • By Michael Bürgi and Marty Swant

Ivy Liu

As interest in and use of generative AI spreads across the marketing and media spectrum like wildfire — with seemingly every company announcing some advancement or development including search, content creation, advertising and media — where and who are the firefighters to keep AI from burning out of control?

It hasn’t taken long for deepfakes and misinformation to be the proverbial lightning bolt striking a dead tree in a dry forest.

Such outbreaks aren’t being prevented as well as they could be, said John Montgomery, who specializes in privacy and fraud-prevention issues following a long career at GroupM addressing those topics.

“There are teams inside every agency and every marketer who are tasked with figuring out how to use large language models in marketing,” said Montgomery, who works with companies like DoubleVerify and LinkedIn on these issues. “And there’s a lot of dystopian talk about what ChatGPT might do around the elections, around everything from the distraction of humanity to the acceleration of deepfakes. But what I didn’t get from anybody was … what should we do about it?”

Montgomery leans on the marketing truism that if you can measure it, you can mitigate it. So he advocates that the industry use technology, including generative AI, to suss out what he calls synthetic content. “Once you can measure that, you can make a decision to optimize against it,” he said.

Plenty of individual and community-driven efforts are moving forward with new tech and partnerships that aim to mitigate AI-generated misinformation through various tools and standards.

To develop new standards, some companies have banded together through organizations such as the Coalition for Content Provenance and Authenticity (C2PA), which was founded in 2021 to develop technical standards for various types of content. Current members include tech giants like Microsoft, Intel, Sony and Adobe, along with camera companies like Canon and Nikon. Others include media companies such as The New York Times, the BBC, France TV and CBC Radio Canada.

Among C2PA’s members is the startup TruePic, which has a mobile software development kit (SDK) that can provide a “digital signature” to verify the authenticity of a photo. The company also recently started working with the AI content platform Revel.AI to label deepfakes and other videos as computer-generated. Making sure companies and people can detect AI-generated content includes improving consumer and societal digital literacy, said TruePic CEO Jeffrey McGregor. He said it’s also key to create uniform standards that are interoperable, adding that new regulations around media transparency would also help.
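TruePic’s SDK is proprietary and its internals aren’t public, but the core idea behind a capture-time “digital signature” can be sketched with hashing alone. The snippet below is a minimal illustration, not TruePic’s actual API: the function name is an assumption, and a real provenance system would additionally sign the digest with a device’s private key rather than just record it.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest of the raw image bytes."""
    return hashlib.sha256(data).hexdigest()

# Capture time: the device records the image's fingerprint.
# (A real scheme would then cryptographically sign this digest.)
original = b"raw image bytes straight off the sensor"
recorded = fingerprint(original)

# Verification time: recompute the digest and compare.
tampered = original + b"\x00"  # even a single-byte edit

assert fingerprint(original) == recorded   # an untouched copy matches
assert fingerprint(tampered) != recorded   # any edit breaks the match
```

The point of the sketch is that verification is cheap and deterministic: any downstream consumer can recompute the digest and detect tampering without contacting the original device.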

Earlier this week, TruePic — a company with tech that can verify whether images and videos are AI-generated — announced a partnership with the World Economic Forum to help “shape the trajectory of technology change.” Others that heralded new partnerships this week include Shutterstock, which announced a new global partnership with the United Nations to develop ethical AI models, tools, products and solutions through the UN’s “AI For Good” platform.

“The thing that worries us the most is if anything can be faked, then everything can be faked,” McGregor told Digiday.

Other startups that recently released new tech include RKVST, which last week debuted a tool called Instaproof that checks data for evidence of deepfakes and AI-generated or edited content. “For centuries we’ve had pens, and people have used them to write things,” said Jon Geater, CTO of RKVST. “Some of that has been good and some of that has been bad.”

Another is PicoNext, a startup that helps companies mint NFTs, which earlier this week added a way to authenticate branded content on the Ethereum and Polygon blockchains. The company wouldn’t disclose any brands testing the new tool in private beta. However, PicoNext founder and CEO Dave Dickson — who spent a decade working on emerging tech at Adobe — said examples of content could be high-stakes campaign material such as elements of a Super Bowl ad and other visual content. Dickson put it this way: “How can you repair your reputation in real time if it’s being damaged?”
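PicoNext hasn’t published implementation details, but authenticating content against a blockchain generally reduces to recording a content digest on a public ledger and checking later copies against it. The sketch below stands in a plain Python dict for the on-chain record; the function names and the asset ID are hypothetical, and the assumption is only that the digest, once anchored, cannot be altered.

```python
import hashlib

# Toy stand-in for a public ledger: in PicoNext's case this record
# would live on Ethereum or Polygon; here a dict plays that role.
ledger: dict[str, str] = {}

def register(asset_id: str, content: bytes) -> None:
    """Anchor the content's SHA-256 digest under an asset ID."""
    ledger[asset_id] = hashlib.sha256(content).hexdigest()

def is_authentic(asset_id: str, content: bytes) -> bool:
    """Check a circulating copy against the anchored digest."""
    return ledger.get(asset_id) == hashlib.sha256(content).hexdigest()

register("superbowl-ad-v1", b"final cut of the spot")

assert is_authentic("superbowl-ad-v1", b"final cut of the spot")
assert not is_authentic("superbowl-ad-v1", b"doctored cut of the spot")
```

The design choice the real products lean on is that the ledger is append-only and publicly readable, so a brand can prove which version of an asset it actually published even while a doctored copy circulates.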

Other companies like Zefr are trying to tackle faked content across social media platforms. Last year, the company acquired the Israeli startup Adverif.ai, which uses a combination of machine learning and human fact-checkers to detect misinformation. Using “discriminative AI” — which Zefr described as the opposite of generative AI — the company can focus on various features of any piece of content and use that info to determine whether it’s safe. “The real danger to society is going to be the subtlety of misinformation over the next year and a half,” said Zefr Chief Commercial Officer Andrew Serby. “That’s where opinions get swayed, and I don’t think there’s enough oversight of it because it’s hard to understand.”

Within the agency world, efforts are also picking up speed to ensure both internal use of generative AI and clients’ efforts stay within safe parameters. Dave Meeker, evp and head of design and innovation, Americas, at Dentsu Creative (who also works across creative and media), said his holding company has been working on various uses of AI since 2016 and is dead serious about safeguarding any use of it.

“We’re … taking a very methodical, measured, legal and compliance-first approach to [all uses of generative AI], not just to make sure that we are indemnified, but to make sure that the work we do is transparent, authentic, real, and continuing to be meaningful while absolutely being excited about the potential.”

Meeker pointed to client Intel’s development and launch last fall (before the current explosion of interest in generative AI) of its own deepfake detector, called FakeCatcher, which can detect fake video content with 96% accuracy.

All those efforts aside, one holding company brand safety expert, who declined to speak on the record, said industry-wide efforts ultimately won’t prevent AI-generated misinformation because bad actors will use AI anyway. Instead, the expert argued, government regulation will be best at establishing the parameters of what’s acceptable and what isn’t.

“Misappropriation of clients’ logos, content and trademarks to commit fraud or inappropriate behaviors online has always been a problem,” said the exec. “The only way [AI-generated fraud] is going to be solved is through regulation. I’m not convinced industry bodies can achieve what they need to achieve anymore. These are global problems. … Codes of conduct only work for people that are well behaved and want to abide by the codes of conduct.”

For what it’s worth, the government is on the case. Concerns about AI-driven misinformation, copyright issues and data privacy were top of mind on Tuesday during the U.S. Senate Judiciary Committee’s hearing on AI oversight. IBM Chief Privacy and Trust Officer Christina Montgomery — who testified alongside OpenAI CEO Sam Altman and NYU professor Gary Marcus — said rules around AI should differ depending on the risks. She added that it’s also important to provide clear guidance on AI uses and categories, and to ensure consumers know when they’re interacting with AI systems.

“Companies should be required to conduct impact assessments that show how their systems perform against tests for bias and other ways that they could potentially impact the public, and attest that they’ve done so,” Montgomery said.
