July 2024

Striking A Balance: Managing Content Moderation And Free Speech In Online Ecosystems

Periodically, the worlds of advertising and partisan politics collide. An impassioned, often politicized debate over the value of walled garden content moderation has taken center stage.

There are those who proclaim that any initiative aimed at tackling harmful content on digital media platforms – and the advertising that monetizes it – infringes on freedom of expression.

On the other side are those who maintain that content moderation and freedom of speech are not a zero-sum game, and that an equilibrium between the two is vital for a thriving democratic society.

Some critics of moderation argue that the public is capable of discerning the truth of social media content without third-party oversight. But a significant portion of social media users recognize the necessity of content moderation, especially in an era marked by AI-generated misinformation and deepfakes.

In 2024, the imperative for robust, nuanced social media content moderation systems has never been more pronounced – over 40% of the world’s population is projected to go to the polls in one of the largest election years in history.

Open social platforms need established content policies underpinned by transparency, advanced technology and feedback loops for constant improvement.

Transparency as the cornerstone

Yes, transparency is an industry buzzword. But it needs to be a guiding principle in how our industry develops content policy standards.

All tech players should make their individual repositories of content policy accessible to client partners. This openness not only fosters trust among marketer and agency clients but also empowers them with a deeper understanding of the nuances of brand safety and suitability.

In an increasingly global marketplace, it is important that companies employ multilingual content policy teams and form partnerships with industry bodies (such as the Brand Safety Institute and the Global Alliance for Responsible Media) as a staunch commitment to education and clarity in content moderation.

The importance of understanding nuance

At its core, content moderation does not mean removal; it means a deep, scaled understanding of context.

In content labeling, computer vision and related machine-perception techniques are critical in a digital age dominated by complex content, whether images, video, text, audio or a combination of mediums.
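As a minimal sketch of what such a labeling pipeline might look like, the code below routes each asset to a modality-specific labeler and merges the results. The labeler functions, label names and Asset structure are assumptions for illustration, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    asset_id: str
    media_types: list   # e.g. ["video", "audio", "text"]
    payload: dict       # frames, audio track, caption text, etc.

# Placeholders standing in for real models (computer vision,
# speech-to-text plus a text classifier, an NLP classifier).
def label_visual(payload):
    return {"nudity:none", "violence:none"}

def label_audio(payload):
    return {"profanity:detected"}

def label_text(payload):
    return {"hate_speech:none"}

LABELERS = {
    "image": label_visual,
    "video": label_visual,
    "audio": label_audio,
    "text": label_text,
}

def label_asset(asset: Asset) -> set:
    """Merge labels from every modality present in the asset."""
    labels = set()
    for media_type in asset.media_types:
        labels |= LABELERS[media_type](asset.payload)
    return labels
```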

Artificial intelligence (AI) and machine learning (ML) should be a key component of any company’s content moderation tech stack. But while these automated tools are powerful, the underlying technology needs to understand the subtleties of human expression. 

If a video features explicit content, the technology must be nuanced enough to distinguish, for example, profanity or nudity in explicit adult content from profanity in a clip from a historical war documentary or in the lyrics of a rap music video. Otherwise, content moderation will fall short.
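To make that concrete, here is an illustrative sketch of a context-aware decision rule: the same "profanity" detection leads to different outcomes depending on a separately predicted context label. The labels, contexts and outcomes are assumptions, not a production policy.

```python
def moderation_decision(detections: set, context: str) -> str:
    """Toy decision rule: detections come from content classifiers,
    context from a separate genre/context classifier (assumed labels)."""
    if "nudity" in detections and context == "adult":
        return "exclude"                     # explicit adult content
    if "profanity" in detections:
        if context in {"documentary", "music_video"}:
            return "monetizable_with_label"  # suitable for some brands
        return "human_review"                # ambiguous: escalate
    return "allow"

print(moderation_decision({"profanity"}, "documentary"))      # monetizable_with_label
print(moderation_decision({"profanity", "nudity"}, "adult"))  # exclude
```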

Advanced AI and ML techniques that adhere to industry-standard taxonomies make it easier to map universes of content across the world’s largest social media platforms with accuracy.
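In practice, that mapping often means normalizing each platform's native labels into one shared taxonomy, so the same suitability settings apply everywhere. The sketch below assumes hypothetical platform label names, with categories loosely modeled on GARM's published content categories rather than the official framework.

```python
# Hypothetical mapping of platform-native labels to shared taxonomy
# categories. All names are illustrative assumptions.
SHARED_TAXONOMY = {
    ("platform_a", "strong_language"): "obscenity_and_profanity",
    ("platform_b", "swearing"):        "obscenity_and_profanity",
    ("platform_a", "graphic_injury"):  "death_injury_or_military_conflict",
}

def normalize(platform: str, label: str) -> str:
    """Map a platform-native label to the shared taxonomy category."""
    return SHARED_TAXONOMY.get((platform, label), "unmapped")

print(normalize("platform_b", "swearing"))  # obscenity_and_profanity
```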

An open dialogue

When third-party tech companies invite their brand partners to provide regular feedback on content labeling policies, those brands are empowered to play an active role in shaping their safety and suitability settings.

Brand safety and suitability are highly complex and nuanced. What works in one market or region may not work in another. And what works for one product line may not work for other lines of business under the same brand.

Consider the context of humor and comedy. In America, many people find crude humor quite entertaining, with boundary-pushing shows like South Park nevertheless entering the mainstream. But in France, comedies are often more intellectual or satirical. French audiences may find some flavors of American humor crude, juvenile or lacking in wit, and may therefore deem something like South Park unsuitable for brand alignment.

By hosting regular feedback sessions and learning about cultural differences across regions, platforms can develop region-specific content guidelines.
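Those guidelines can then be captured as per-region suitability settings. Here is a hypothetical configuration sketch, where the regions, categories and tolerances are assumptions drawn from the humor example above:

```python
# Hypothetical per-region suitability settings derived from feedback
# sessions. Categories and tolerances are illustrative assumptions.
REGION_GUIDELINES = {
    "US": {"crude_humor": "medium_risk_ok"},  # e.g. South Park-style comedy
    "FR": {"crude_humor": "exclude"},         # deemed unsuitable for alignment
}

def is_suitable(region: str, category: str) -> bool:
    """True unless the region's guidelines explicitly exclude the category."""
    return REGION_GUIDELINES.get(region, {}).get(category) != "exclude"

print(is_suitable("US", "crude_humor"))  # True
print(is_suitable("FR", "crude_humor"))  # False
```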

Balancing continuous evolution with quality control

User-generated social media environments are constantly in flux. Evolving policies to remain relevant and timely should be a high priority.

Brands and their agency and tech partners can maintain this cadence by hosting weekly policy roundtables, holding open internal dialogues daily and implementing a robust policy challenge process. Each of these checks and balances ensures that strategies adapt to new and emerging social media trends.

By fostering an ecosystem where multiple constituents contribute to the development of policies, we not only enhance overall user safety on platforms but also encourage a culture of responsibility and awareness.

The content landscape continues to expand and diversify. Despite the objections of skeptics, and especially with the advent of generative AI technologies, it's crucial for us to unite to safeguard the broader information environment.
