Misinformation and disinformation - The role of the intelligence analyst in a post-truth world

It could be something as simple as a poorly considered post on social media at just the wrong time, amplified beyond all expectations by platform algorithms, or it could be a carefully co-ordinated, multifaceted campaign by a state actor as part of wider grey-zone conflict. But one of the greatest existential threats to Western liberal democracy arguably comes not from the potential for miscalculation between nuclear powers, potential adversaries, or rogue states and non-state actors attempting to force their world views on others, but from the pernicious impact of misinformation and disinformation now permeating society.

Enabled by the social media revolution, we have now reached a point at which the overwhelming volume of ‘information’ we receive can simply no longer be trusted at face value. The online world is awash with innocent mistakes, false narratives, manipulated media, and cleverly crafted lies that contain enough semblance of truth to appear credible. It is now quite possible, at the click of a button, to find a narrative that supports or reinforces a specific world view on almost any given subject. This touches all aspects of society, polarising opinions on issues such as political preferences, climate change, vaccination safety, migration, and even the use of language.

While much of this ‘false’ information is produced, reproduced, and amplified from a position of ignorance and/or laziness, there is a more sinister trend towards the deliberate dissemination of material aimed at disrupting societal norms and undermining national stability and security. The former can be characterised as misinformation: incorrect information shared without an explicit intent to deceive, but often reflecting individual or group preconceived biases. The latter is disinformation: deliberate activity (by state or non-state actors) to spread misleading or incorrect information with the intent to mislead, influence, or change behaviour. The key differentiators here are the words intent and deliberate. They matter because disinformation often requires government organisations to mount counter-disinformation operations (cyber operations, diplomacy, and the delivery of a counter-narrative), whereas there is a more societal element to countering misinformation, with greater responsibility on social (and mainstream) media, education, and engagement. From an OSINT perspective, they also matter, particularly in terms of how they are identified and mitigated. Misinformation can often be identified through the validation and cross-referencing of sources, whereas well-constructed disinformation needs a more sophisticated approach. Neither is new, and disinformation has long been a recognised and valuable tool for the strategist, however it is framed (deception, Information Operations). Sun Tzu said in The Art of War, “The whole secret lies in confusing the enemy, so that he cannot fathom our real intent.” What has changed, however, is the scale and ‘normalisation’ of bad information, facilitated by the proliferation of social media. This muddies the waters between what is deliberate and what is not.

The proliferation and pervasiveness of both disinformation and misinformation pose significant challenges for the intelligence analyst, whose primary task is to provide the best possible assessment and analysis for decision makers. Faced with deepfakes, ‘bot’-generated data deluges, echo chambers, and filter bubbles, it has never been more important for the analyst to rigorously apply tried and tested analytical standards and tradecraft, honed over many years. How does the analyst first identify incorrect information, whether deliberate or not, and ensure that their reports, assessments, and analyses are not based on flawed data points? This is what we really mean by tradecraft. At the most basic level is the experience of the analyst. Does a report ‘feel’ right? Does it fit with what we already know about a given event or situation? But this is only the start. Techniques such as reframing, which considers all possible hypotheses, no matter how unlikely; forecasting, which uses historical data and previous patterns to predict future outcomes; and backcasting, where analysts work backwards from possible future outcomes to consider what steps would lead to various scenarios, all play a role, as does the validation and weighting of all available sources. Given the pace of the contemporary information environment and the ease with which decision makers can access what they need, the temptation for them to be their own intelligence analysts has never been greater, increasing their susceptibility to targeted disinformation. This puts additional pressure on the analyst, who not only has to get it right, but must do so in a timely and easily digestible manner.

Technology now also plays an important role, with applied AI increasing both the number and validity of data sources. AI is also becoming critical in detecting deepfakes that are now beyond the unaided analyst's ability to identify. While this is an area that needs further investment and focus, the ability to rapidly synthesise and summarise the huge amounts of data now available into bite-size chunks, and to detect patterns that can then be used as a benchmark for identifying anomalies, are just two examples of where technology can help the intelligence analyst. Analysts simply cannot collect, sift, and verify every source manually. At the same time, AI poses its own challenges for the community. To what extent can the algorithms be trusted, and how can they show their ‘working out’ in an auditable manner? And there is an ethical element too: AI algorithms have no concept of right and wrong. So, while AI must be embraced and its benefits realised, there remains the need for genuine human/machine teaming and for balanced judgements based on objective, well-founded, and researched information.

The bottom line, though, is that the information environment of today has never been richer nor more uncertain. Our potential adversaries, many of them autocratic regimes unfettered by the same ethical constraints, are using this to wage grey-zone or hybrid warfare on the West. China, Russia, Iran, and North Korea are all attempting to undermine Western society through targeted disinformation campaigns. And these campaigns appear to be working. Our society has never been more polarised, and if you follow Clausewitzian theory about removing an adversary's will to fight, they may not even have to fire a shot to achieve their aims!

AVM Sean Corbett, CBE, MBE, MA, Royal Air Force is Chairman of Janes National Security Advisory Board and the founder and CEO of IntSight Global, a consultancy specialising in strategic thinking, open-source intelligence, and business optimisation. He retired from the Royal Air Force in September 2018 as a two-star general after a 30-year career as a professional intelligence officer. His last appointment was in Washington, DC as the first non-US Deputy Director of a major US intelligence agency.

Sean is also the co-host of the Janes World of Intelligence podcast series, which covers topics related to open-source intelligence, including misinformation.