Expert Comment: Why Is It Important For Consumers & Businesses To Learn How To Spot Deep Fakes?
The issue of deepfakes continues to dominate news headlines, especially as the technology behind them becomes more advanced.
The upcoming election year only makes this worse, as deepfakes pose a real threat to the democratic process. Because of this, the EU has called on big tech companies such as Facebook and TikTok to label AI-generated deepfakes ahead of the EU polls in June. This shows just how central big tech companies will be in helping curb disinformation driven by deepfakes.
What Are Deepfakes?
Deepfakes are video or audio files that have been manipulated using AI. These technologies can make it seem like people said things they never did, making them a dangerous tool – especially around elections.
The technology uses deep learning algorithms to learn and replicate specific expressions, sounds and gestures, allowing it to fabricate highly convincing, but fake content. As time goes on, deepfake technology is getting more and more advanced, making it difficult to know what is real or fake online. The ability for both businesses and consumers to learn how to spot them is now more important than ever.
How Can You Spot A Deepfake?
Whilst spotting a deepfake is hard, it’s certainly not impossible if you know what to look for. Things like unnatural blinking, odd skin texture or strange lighting can all be telltale signs that something isn’t right. Audio deepfakes might also sound a little robotic, or have unnatural fluctuations in pitch.
However, the best way to spot deepfakes is through being informed about how the technology is evolving.
Why Is It Important To Learn How To Spot Deepfakes?
Elections aside, deepfakes can also have detrimental effects on both businesses and consumers. Here, we spoke to a series of experts about why they think spotting deepfakes is crucial.
Here’s what they had to say…
Durgan Cooper, CETSAT Chairman
“AI-generated images are becoming more and more advanced, and harder and harder to detect. To the untrained eye, it can often be hard to determine what is real, and what is not. As AI develops, this is going to be a bigger problem and people need to be aware of the risks.
“How to protect yourself? Stay aware of the threats. Learn how to notice AI-generated images. How can you tell?
“A particular weakness of AI is producing text within images, for example on posters or road signs. Also look for unusual features, such as additional fingers, abnormal hair or mistakes with accessories.
“Blurry backgrounds can be a sign that AI is involved. An overly-glossy finish on parts of the image also suggests foul play.
“Where intimate or pornographic deepfake images are concerned, it’s possible a criminal offence has been committed. Frustratingly, in other contexts the law is a grey area and there can often be little to be done. As AI develops, and these images become harder to detect, the law must evolve at a similar pace. I fear that cyber criminals will move far faster than the law-makers.
“The impact of AI’s rapid evolution has yet to be fully understood in Westminster. This is further complicated by these challenges straddling different legal jurisdictions, with governments evolving their responses at different paces. The honest answer is that we are nowhere near having adequate safeguards in place, and a good dose of ‘pinch of salt’ discretion is needed.
“There are ways to help spot AI-generated images, although it is becoming harder and harder. Elections across the world in 2024 will mean that the internet is rife with deepfakes, another reason to be overly cautious. Sadly, we all need to approach online content with far greater suspicion than previously.”
Nick France, CTO of Sectigo
“It’s alarming to see the rise of deepfake technology now being used to mimic news anchors to spread misinformation. People don’t realise how far AI deepfake technology has come and how democratised it is. Unfortunately, anything about your physical presence can be replicated – eyes, face, voice. This is no longer something that only exists in films, as more people are now capable of creating convincing deepfakes. A recent experiment even proved that AI can produce convincing deepfakes capable of bypassing voice recognition for online banking.
“As the landscape has dramatically changed, people’s mindset when consuming media must shift with it. They must now exercise more caution than ever in what they watch and reconsider the validity of the source and its trustworthiness.
“We must look at better and smarter ways to validate the authenticity of what we see. One of the best solutions for countering the fraudulent use of AI deepfakes is PKI-based authentication. PKI does not rely on biometric data that can be spoofed or faked; by using public and private keys, it ensures a high level of security that can withstand threats of disinformation.”
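To make the public/private key idea concrete, here is a minimal sketch of content signing and verification. It is an illustrative assumption using Python’s cryptography package and Ed25519 keys, not Sectigo’s implementation; a real deployment would distribute the public key via certificates issued by a trusted authority.

```python
# Minimal sketch of PKI-based content authentication, assuming the
# third-party `cryptography` package (pip install cryptography).
# The payload and key handling are illustrative only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The publisher generates a key pair once; the private key stays secret,
# and the public key is shared through a trusted channel (e.g. a certificate).
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Signing: the publisher signs the raw bytes of the media file.
media_bytes = b"raw bytes of the original video or audio file"  # placeholder
signature = private_key.sign(media_bytes)

# Verification: anyone holding the trusted public key can confirm the content
# is unaltered and came from the key holder; a deepfaked copy fails the check.
try:
    public_key.verify(signature, media_bytes)
    print("Content verified: authentic and unmodified.")
except InvalidSignature:
    print("Verification failed: content may be forged or tampered with.")
```

The design point is that verification rests on possession of a secret key rather than on how convincing the media looks or sounds, which is what makes this approach resistant to spoofed biometrics.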
Paul Holland, CEO at Beyond Encryption
“As technology has continued to evolve, the cyber threat landscape has evolved alongside it. The most notable advancement that has taken the cybersecurity landscape by storm is AI. As this technology has developed, it has become increasingly accessible, allowing cybercriminals to leverage it for malicious purposes – with deepfakes being no exception.
“In the past, deepfakes have been used to mimic public figures, due to a reliance on a large collection of audio data. However, the latest pre-trained algorithms are able to replicate an individual’s voice using just a three-second snippet of their speech, which is an alarming development.
“Given the rise in cybercriminals employing AI to launch hyper-personalised attacks, it is more important than ever before for businesses and consumers to be on high alert to avoid falling victim to these advanced attacks. Businesses must establish robust verification processes for critical communications and invest in advanced AI-based cybersecurity tools to detect and prevent deepfakes. They also have a duty of care to help educate their customers and staff about how to spot potential threats and how to respond accordingly to an attack.”
Dan Purcell, CEO at Ceartas DMCA
“Deepfakes are becoming increasingly advanced and difficult to spot. In February, a Hong Kong finance worker was duped into joining a video call with what he believed were several colleagues and handing over $25 million to fraudsters, only to realise later that the other participants were all deepfake recreations.
“However, the damage deepfakes cause businesses extends beyond financial loss; they can seriously harm a company’s brand. With deepfake technology becoming more intelligent and accessible, a CEO’s reputation can be tarnished within minutes by a fabricated video of them saying or doing something controversial going viral.
“There is no quick fix to this issue, so a multilayered approach to tackling deepfakes is crucial. Reacting quickly to remove deepfakes from the internet and eliminating their discoverability is vital. Additionally, educating both staff and consumers on how to identify deepfakes is imperative.
“Face-swapping is one of the most common deepfake methods. Therefore, it is essential to examine the edges of the face for any inconsistencies or strange lighting and shadows. Unnatural eye movements, such as a lack of blinking, and inconsistent audio and noise are also red flags to watch out for. Scammers will typically focus more on perfecting the video rather than the audio. Overall, questioning what you see before clicking the share button is key.”
Tom Holloway, Head of Cybersecurity at Redcentric
“If you have team members who market themselves well by presenting webinars or posting advice-led videos on LinkedIn and on your website, cyber criminals can easily extract that team member’s voice and use AI to create a false voice note, or even a video, posing as them.
“For team members who aren’t clued up on this, it can be extremely easy to fall into this trap. Be extra mindful to consider whether it is ‘normal’ behaviour for that team member to send you a voice note or video, and if it isn’t, then it’s likely to be a scam.”
Emma Lacey, SVP EMEA, Zefr
“Deepfakes have been listed as the most serious AI crime threat by the UK government — and for good reason. Deepfakes use generative AI to fabricate realistic videos and images of people doing things they never did. As a result, they are increasingly used to propagate disinformation and manipulate individuals’ perspectives on important matters.
“Examples of this include fake images of Donald Trump being arrested and a fake video of Joe Biden making offensive remarks, both of which rapidly racked up impressions online. With the 2024 UK and US elections on the horizon, and over half of internet users receiving their news from social media, it’s imperative that consumers and businesses learn to distinguish deepfakes and AI-generated content from real content.
“Research shows that brands whose ads appear next to misinformation are perceived as less trustworthy and respectable by consumers. Fortunately, the EU’s AI Act will strengthen the development of technologies that can label deepfakes and misinformation. Brands should use these innovations to inform their media buying and ensure their ads aren’t appearing next to, and inadvertently funding, bad actors using deepfakes – protecting their customers and public image while fostering a safer online environment.”
Eric Bravic, Head Of Artificial Intelligence at CryptoOracle Collective
“It’s crucial for consumers and businesses to learn how to spot deepfakes because the extent of this problem is vastly underestimated. Soon, nearly everything, from audio and video to your genetic code, could be convincingly deepfaked. As costs decrease, fraud incidents, such as impersonation attacks, will skyrocket. Moreover, the legal system will grapple with the challenge of discerning authentic evidence from deepfaked evidence. The virality of deepfake information exacerbates the issue: human beings are inclined to propagate inflammatory information, regardless of whether it is true. When false content spreads widely, its authentic counterpart becomes irrelevant, impacting public perceptions and influencing decision-making processes across society.
“Businesses relying on identification processes are particularly vulnerable, as deepfakes can easily bypass conventional authentication mechanisms. In the entertainment industry, the rise of deepfakes poses an existential threat to the integrity of performers and their creative works, potentially inundating media platforms with fabricated content. As deepfakes permeate various aspects of society, consumers and businesses should adopt a multifaceted approach that combines education, technology, and vigilance to protect themselves against deepfake-related scams and fraud.”
Daniel Li, Co-Founder and CEO at Plus Docs
“Every year, consumers and businesses lose billions of dollars to scams and other forms of misinformation. With the introduction of generative AI and deepfakes, it is easier than ever for scammers and other nefarious actors to create false information. Just as AI has made it cheaper than ever for businesses, marketers, and sales teams to customise content for their customers, AI has made it easier for people to create personalised deepfakes to fake telephone calls, create election propaganda, and run other types of scams.
“Because the volume of these types of activities will grow exponentially over the next few years, it is crucial for consumers and businesses to learn to spot deepfakes and protect themselves from bad actors. For example, it is helpful for business owners to experiment with the latest technologies that can “clone” someone’s voice for a phone call or voicemail in order to educate themselves on the potential attack vectors for new scams and hacks. By staying up-to-date on these technologies, business owners will better understand what’s possible with AI so they are less likely to fall prey to deepfakes.”