In January 2024, the World Economic Forum published its Global Risks Report. The report identified AI-generated misinformation and disinformation as the second greatest risk overall and the most severe risk over the next two years.
The rise of deepfake technology and its ability to erode our shared sense of truth is identified as a major societal risk. False online information and synthetic media have long been acknowledged to amplify interconnected risks, undermining market confidence and fuelling political polarisation and extremism. This is a dangerous combination: threats that are immediately apparent, and insidious secondary threats that erode security over time.
Another characteristic that makes the technology so dangerous is that deepfakes appear across all kinds of false information.
The deepfakes that most noticeably impact individuals and organisations are those that are maliciously made and targeted, otherwise known as disinformation and fraud.
Deepfakes shared by people who do not realise they are fake fall under the definition of misinformation, something that can drastically exacerbate situations and turn them into crises.
This blog will look at both threats: the directly targeted and the secondary.
What is ‘False Information’?
Many people will be familiar with the terms ‘deepfake,’ ‘disinformation,’ and ‘misinformation,’ but it is rarely explained where this threat comes from or why it poses such severe risks.
Those who know history will be aware that disinformation long predates the internet, and that conversation about it has become increasingly noticeable alongside the rise of digital communications.
Disinformation that challenges organisations and norms has displayed characteristics that remain unchanged since long before the emergence of online communications: the most damaging campaigns have always leveraged leaks and partial truths to lend themselves credibility. [1]
In 2024, this represents a key connection between the most damaging disinformation campaigns and data leaks and breaches. [2]
In previous decades, this threat was largely confined to print media and the written word.
Online communications were the first meaningful change to the substance of this threat. The rapid spread of false information across the internet is something we as a society have been grappling with through the 2000s and 2010s. From 2017, the rise of synthetic media marked another transformation: fake and manipulated content could now be generated using deep learning techniques, giving us the “deepfake.”
The Evolved Threat
So how have new technologies, particularly deepfakes enabled by AI (Artificial Intelligence), changed the substance of this threat?
The social engineering everyone was familiar with before deepfake technology took the form of campaigns. A hostile actor would build a picture of a target using openly available online information and then probe the organisation by messaging employees to identify an entry point. More often than not, these attacks occurred over a long period and relied on quantity rather than quality. When these social engineering attacks failed, it was mostly because they lacked an image of credibility, sometimes described as the “truthiness” effect: an audience’s sense of familiarity with the content put in front of them. It is for this reason that targeted disinformation and deepfakes have often been the most successful.
Identified threats include:
Blackmail – With the rise of deepfakes, compromising content can be generated anonymously and the corresponding threats made anonymously too.
Inciting violence – In India in April 2018, a video went viral on WhatsApp appearing to show the kidnapping of a child by two men on mopeds. An eight-week period of mob violence followed, and nine innocent people lost their lives. [3]
Uncertainty – Even if a deepfake is identified as false by the majority of the audience, it may reduce their trust in legitimate content.
Identity fraud – Fraudulent calls and fake identities are becoming increasingly frequent in business environments, demanding enhanced verification and validation processes.
Direct Threats
The strength of AI-enabled deepfakes to rapidly engineer an appearance of credibility is what makes them so dangerous. A deepfake campaign starts silently, with attackers gathering information from a target’s digital footprint: everything about you that is online and publicly accessible, such as profile photos, email addresses, videos, places visited, and other personal information, some of which may even be collected from the online profiles of friends, colleagues, family, and the company. The attacker then trains the AI-enabled deepfake on that information to make it as lifelike as possible, weaving in any insider detail that can build credibility, such as the topic of discussion during a prior online call. This information gathering is what makes deepfakes like the recent attack on Arup so successful.
These organised deepfake attacks pose a major challenge to risk managers because they rely less on quantity and more on quality. You will not know when a hostile actor starts gathering information on you; it is far more likely that you will only become aware when you see the deepfake for the first time. At that stage the deepfake will either succeed or it won’t, and that depends entirely on the audience’s discernment. This touches on a vital point in protecting yourself from deepfakes: audience resilience.
Case Study: Arup
The recent substantial deepfake fraud committed against Arup in Hong Kong has been widely publicised. In January 2024, a member of staff in the Hong Kong office was directly targeted by attackers. Using AI-enabled deepfake technology, they convinced the targeted employee that the Chief Financial Officer was present in an online meeting, significantly building both the image of credibility and the pressure on the member of staff, who subsequently transferred the equivalent of £20 million to various Hong Kong based bank accounts.
This attack against Arup shows the tell-tale signs of a highly organised and well-developed deepfake fraud. The targeting of a specific member of staff with financial transfer permissions indicates a period of vigorous and thorough research by the hostile attacker(s). It is highly likely that the attackers reviewed the digital footprint of the organisation and its people to identify targets, their permissions, and who they know within the organisation. The use of a deepfake falsely portraying the Chief Financial Officer further indicates that the attackers were able to confirm an association between the CFO and the targeted employee, and that they used online material such as speeches, promotional videos, media posts, and professional profiles to train the AI deepfake.
Secondary Threat: Mis- and Disinformation
The emergence of open-access deep learning tools online has resulted in the proliferation of new, high-risk, and often opportunistic disinformation. Open-source tools place the power to generate advanced forms of disinformation in the hands of individuals, in much the same way that AI tooling is empowering low-skill cyber attackers, the so-called “script kiddies.”
This is causing a surge in the severity of online harms now being experienced by educational institutions, reputation management firms, NGOs, private businesses, and individuals. Manipulated content is growing rapidly and threat actors are finding it increasingly simple to produce potent falsifications at scale. This is a trend that has been most felt by sectors that rely on good reputational standing such as finance and banking.
Case Study: First Republic Bank
Deepfakes are among the most direct weapons in the arsenal of hostile actors; however, they are not the only attack vector.
In March 2023, First Republic Bank (FRB) started to trend online with tags like #collapse, #bankcollapse and #bankingcrisis. Analysis by several organisations across the public and private sectors has identified how patterns of activity from both authentic and bot accounts drove the rapid decline in public trust in First Republic Bank.
It remains unconfirmed whether the rapid decline of First Republic Bank’s reputation, and therefore its market viability, was an orchestrated hostile campaign; the fact that FRB collapsed is nonetheless ominous. A survey conducted in May 2024 by the digital identity company Signicat found that approximately 42% of fraud attempts against banks now utilise AI, something that can only exacerbate, and be exacerbated by, the exploitability of social media networks for mal-, mis-, and disinformation.
Resilience
Resilience is the key term when it comes to protecting yourself, your organisation, and society more broadly from online harms.
Businesses in 2024 are encountering increasingly volatile operating and risk environments. In response, senior stakeholders and business management specialists are moving towards hybrid approaches that include a diversification of functions.
A predicted consequence of this will be increased outsourcing and diversification of data inputs and outputs. Intelligent process automation is also likely to increase, meaning that in the near future it will not be a stretch to assume that the majority of communicated information has been contributed to in some way by AI.
The increasing dependence of businesses, especially marketing professionals, on advanced analytics presents a further developing risk. Decisions at the executive level may become targets of “social engineering” through AI-bot-enabled falsification of findings, a risk that is particularly relevant with staff oversight reduced by the prevalence of remote working.
To address these challenges, it is vital that organisations retain a security-centred approach when adopting future-facing modes of operating. Single-source-of-truth data management structures, providing centralised verification and validation, will likely become essential for data processing, project coordination, and the application of disinformation management strategies.
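As a purely illustrative sketch (the class, field, and content names below are hypothetical and not drawn from any particular product), a single source of truth for verification can be as simple as a central registry that records a fingerprint of each approved item, so that any copy circulating later can be checked against it:

```python
import hashlib
from datetime import datetime, timezone


class ContentRegistry:
    """Hypothetical 'single source of truth': a central registry of approved content."""

    def __init__(self):
        self._records = {}  # sha256 digest -> metadata about the approved item

    def register(self, content: bytes, owner: str) -> str:
        """Record an approved item centrally and return its digest."""
        digest = hashlib.sha256(content).hexdigest()
        self._records[digest] = {
            "owner": owner,
            "registered_at": datetime.now(timezone.utc).isoformat(),
        }
        return digest

    def verify(self, content: bytes) -> bool:
        """True only if this exact content matches something centrally approved."""
        return hashlib.sha256(content).hexdigest() in self._records


# Usage: anything that does not match a registered digest is treated as unverified.
registry = ContentRegistry()
registry.register(b"Q3 results statement, approved by comms", owner="communications")
print(registry.verify(b"Q3 results statement, approved by comms"))  # True
print(registry.verify(b"Q3 results statement (quietly edited)"))    # False -> escalate
```

The point of the sketch is the shape of the control, not the code itself: verification runs against one centrally maintained record rather than against whatever copy happens to be in front of a member of staff.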
The need for careful centralised oversight is reinforced by the UK government’s own framework for combating disinformation, RESIST 2, which likewise depends on a level of central coordination and policy setting (RESIST 2 Counter Disinformation Toolkit, Government Communication Service, civilservice.gov.uk).
These measures must be combined with staff media literacy and cyber hygiene training – a recurrent measure that is easier to enforce from a central position.
This does not mean that organisations ought to centralise everything; however, verification and quality assessment processes should pass through a single point.
An immediate recommendation is to move away from voice authentication where possible.
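To make that recommendation concrete, here is a minimal, hypothetical sketch (the function names, channels, and threshold below are illustrative assumptions, not a prescribed control) of a verification gate that refuses to act on a voice or video instruction alone and instead requires confirmation over a separate, pre-agreed channel:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class PaymentRequest:
    requester: str
    amount: float
    received_via: str             # channel the instruction arrived on, e.g. "video_call"
    confirmed_via: Optional[str]  # separate channel used to confirm it, if any


# Hypothetical policy values; a real organisation would set these from its own risk appetite.
HIGH_RISK_THRESHOLD = 10_000
TRUSTED_CONFIRMATION_CHANNELS = {"callback_to_directory_number", "in_person"}


def approve(request: PaymentRequest) -> bool:
    """Refuse to act on a voice or video instruction alone above the risk threshold."""
    if request.amount < HIGH_RISK_THRESHOLD:
        return True
    # A deepfake can imitate a face or a voice on the original call,
    # but not a second, independently initiated channel.
    return request.confirmed_via in TRUSTED_CONFIRMATION_CHANNELS


# Usage: an urgent request made only on a video call is held for out-of-band confirmation.
urgent = PaymentRequest("CFO", 250_000, received_via="video_call", confirmed_via=None)
print(approve(urgent))  # False -> hold, then call back on a number from the staff directory
```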
If you have read this blog and are interested in protecting yourself from disinformation and deepfakes, or you have any questions, please contact us at info@torosolutions.co.uk.
[1] The Soviet Active Measures (aktivnye meropriyatiya) were notorious for their employment of this method. Many significant state level disinformation campaigns leveraged leaks and partial truths to cause mass societal harm. For further reading see Rid, Thomas. (2020) “Active Measures: The Secret History of Disinformation and Political Warfare”.
[2] https://www.biometricupdate.com/202402/data-breach-identity-fraud-trends-reveal-deepfake-and-generative-ai-threats
[3] https://www.bbc.co.uk/news/world-asia-india-44435127; see also “India WhatsApp killings: Why mobs are lynching outsiders over fake videos”, Global News (globalnews.ca).