Helping children navigate online misinformation in the age of generative AI
By Emma Lacey, SVP EMEA, Zefr
In a world where generative AI is becoming ever more popular, education, government regulation, and technological interventions can help equip children to separate fact from misinformation, writes Zefr’s Emma Lacey
We all agree that children need protection from a host of dangers in the physical world. Should it not be the same in the digital world? The reality is that online platforms are playing a larger role in children’s lives than ever before, and this heightened usage, accelerated by the pandemic, goes well beyond school work and social networking.
Children develop pivotal social and political opinions during their formative years, and much of that shaping now happens online. A report by the National Literacy Trust found that more than half of 12- to 15-year-olds regularly use social media as their main news source. This reliance, combined with their relative inexperience, also makes them prime targets for spreaders of misinformation.
It is more important than ever, then, that we examine the factors that continue to drive online misinformation and implement systems that protect children from its negative impact.
How generative AI is magnifying online misinformation
Experts project that nearly 90 per cent of online content may be generated or manipulated with AI by 2026. While much of this will use AI to enhance legitimate content, the same tools also make it easier to create misleading content and websites. This is especially alarming considering that only 11 per cent of schoolchildren can distinguish a fake website from a real one.
Deepfakes, for example, are generative AI videos and images that convincingly replicate the mannerisms and speech of real people, or create digital personas from scratch. These are already being used to distort information on social and political issues and to manipulate children for financial gain. Deepfakes have been ranked as the most serious AI crime threat because of their uncanny ability to deceive and because they are increasingly easy to produce, replicate, and access. Given children’s lack of familiarity with such manipulated content, deepfakes are expected to pose an even greater risk to them.
What’s more, AI-generated misinformation has been shown to be even more persuasive than fake content created by humans. In some instances, faces generated by AI were deemed more trustworthy than real human faces. During its crackdown on fake profiles and content in 2022, Meta found that more than two-thirds of the accounts it removed from its platforms likely had AI-generated profile pictures.
How online spaces can be made safer for children
As generative AI threatens to increase online misinformation, a combination of regulations, practical changes in the education system, and technological interventions must be implemented to keep children safe.
Policymakers should formulate standardised procedures that technology companies can use to classify the safety and reliability of content across their platforms, ensuring transparency and accountability to users, advertisers, and publishers. Industry-wide standards, such as those laid out by the Global Alliance for Responsible Media (GARM), also help align advertisers and publishers on content safety classifications. These standards can inform brands on where to direct their advertising budgets, helping ensure their ads neither appear alongside nor fund content that harms young people, or, inadvertently, the brand itself.
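To make this concrete, here is a minimal sketch of how a GARM-style suitability check might feed an ad-placement decision. The category names, risk tiers, and thresholds below are illustrative assumptions loosely modelled on GARM's Brand Safety Floor and Suitability Framework, not an official implementation.

```python
# Illustrative sketch only: a GARM-style suitability check for ad placement.
# The tier names and data structures are hypothetical, not an official spec.
from dataclasses import dataclass

# Risk tiers ordered from riskiest ("floor") to safest ("low").
RISK_ORDER = ["floor", "high", "medium", "low"]

@dataclass
class ContentLabel:
    category: str  # e.g. "misinformation", "hate_speech" (illustrative names)
    risk: str      # one of RISK_ORDER

def is_suitable(labels: list[ContentLabel], brand_tolerance: str) -> bool:
    """Return True only if no label breaches the floor or the brand's risk tolerance."""
    tolerance_index = RISK_ORDER.index(brand_tolerance)
    for label in labels:
        if label.risk == "floor":
            return False  # Brand Safety Floor content is never monetised
        if RISK_ORDER.index(label.risk) < tolerance_index:
            return False  # riskier than this brand is willing to appear next to
    return True

# A cautious brand that only accepts low-risk content skips this placement.
labels = [ContentLabel(category="misinformation", risk="high")]
print(is_suitable(labels, brand_tolerance="low"))  # False: withhold the ad
```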
In addition, children require up-to-date education on how to spot misleading or AI-generated posts online. This includes being taught to look for inconsistencies in fake content, such as odd phrasing or unnatural visual elements, and to cross-check online content against reputable news sources, fact-checking services, or adults who are better placed to verify the facts.
While tech platforms have a role to play, brands can help stifle misinformation by ensuring their ad spend doesn’t fund or endorse distributors of misinformation. AI-powered brand safety solutions are increasingly effective at identifying online misinformation. By analysing the contextual environment and intent of dynamic forms of media, these solutions give marketers a clear view of content safety and allow them to fine-tune their media plans accordingly. Because the detection capabilities of AI are still developing, the nuances of online content are identified most reliably when human moderators work in combination with AI.
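As a rough illustration of that human-in-the-loop approach, the sketch below escalates content to a human moderator whenever an AI classifier's misinformation score falls in an uncertain band. The classifier, thresholds, and function names are hypothetical assumptions; real systems, including Zefr's, will differ in models, signals, and review criteria.

```python
# Hypothetical sketch of an AI-plus-human brand safety workflow.
# The scoring function and thresholds are illustrative assumptions only.

def classify_misinformation(text: str) -> float:
    """Placeholder for an AI model returning a misinformation score in [0, 1]."""
    # In practice this would call a trained contextual/intent model.
    suspect_phrases = ("miracle cure", "they don't want you to know")
    return 0.9 if any(p in text.lower() for p in suspect_phrases) else 0.1

def route_content(text: str, block_above: float = 0.8, review_above: float = 0.4) -> str:
    """Combine an AI score with human review for borderline cases."""
    score = classify_misinformation(text)
    if score >= block_above:
        return "exclude"        # confidently unsafe: withhold ad spend
    if score >= review_above:
        return "human_review"   # uncertain: escalate to a human moderator
    return "monetise"           # confidently safe: eligible for ads

print(route_content("Doctors hate this miracle cure!"))       # exclude
print(route_content("Local library extends opening hours"))   # monetise
```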
The internet is a remarkable resource for children: it opens the door to a diverse range of educational material, strengthens social relationships, and provides opportunities to seek support that may not be available in their immediate environment. We should not stop young people from using it.
The focus must be on cultivating safe and positive online environments where children can thrive and reap the benefits of the internet. This is best achieved by implementing progressive policies and innovative AI-powered solutions that keep online content safe, while updating the education system to give children a robust foundation for avoiding the pitfalls of generative AI and online misinformation.
By Emma Lacey, SVP EMEA
Zefr
Zefr is a technology company that delivers precision content targeting solutions for brands on YouTube.