UCL Department of Science, Technology, Engineering and Public Policy

The Infinite Game of Disinformation

By Alex Shepherd, on 15 October 2020

Alex Shepherd (@palexshepherd) is a nationally recognised subject matter expert on disinformation. He has delivered talks on the subject at the University of Oxford and the University of Cambridge, and has actively engaged with representatives from the UK government’s Sub-Committee on Disinformation. He is currently a senior AI researcher at Oxford Brookes University and a Digital Technologies and Policy MPA candidate at UCL STEaPP. 

Disinformation is one of the most important issues we face today, not only due to the massive social impact and disruption it creates globally, but also due to its exceptionally robust nature. This blog post, inspired by the tweetstorm “Some thoughts on disinformation”, attempts to explain disinformation’s robustness through the lens of game theory and analysis of technology trends.

[Image: a man using a tablet to view a fake news website]

The concept of infinite and finite games was popularised by Simon Sinek in his book The Infinite Game and in a keynote speech he delivered at a New York Times event. The book was influenced, in part, by James P. Carse’s book Finite and Infinite Games, which in turn drew on basic game theory.

Infinite games and finite games can be defined as:

  • Infinite Game: has known and unknown players, changeable rules and no set duration; the objective is to perpetuate the game, not to win it. Examples: politics, business.

  • Finite Game: has known players, fixed rules and a set duration; the objective is to win the game. Examples: football, tennis.

If an infinite player plays against an infinite adversary, or a finite player plays against a finite adversary, the system is stable because both players are playing the same game. When a finite player takes on an infinite adversary, however, the system is unstable and the finite player will find themselves in a quagmire: an unwinnable position in which the finite player’s resources deplete rapidly, with very little, if any, progress to show for it. The only way a finite player can escape is by forfeiting the game.
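To make the quagmire dynamic concrete, here is a toy simulation in Python. It is a minimal sketch: every number in it is an illustrative assumption, not an empirical estimate. A finite defender pays a fixed cost to counter each attack, while the infinite attacker keeps generating attacks indefinitely.

```python
import itertools

# Toy model of the "quagmire" dynamic described above. All numbers are
# illustrative assumptions, not empirical estimates.
DEFENDER_BUDGET = 1_000    # finite player's total resources
COST_TO_COUNTER = 10       # defender's cost to counter a single attack
ATTACKS_PER_ROUND = 5      # attacks the infinite player launches each round

budget = DEFENDER_BUDGET
for round_no in itertools.count(start=1):
    budget -= ATTACKS_PER_ROUND * COST_TO_COUNTER  # defender spends to keep up
    if budget <= 0:
        # Resources exhausted with nothing "won": the only exit is forfeiting.
        print(f"Defender exhausted after {round_no} rounds.")
        break
```

Because the attacker’s marginal cost is negligible and the defender’s is not, the defender’s budget reaches zero while the game itself is no closer to ending.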

Disinformation is an infinite game and bad faith actors conducting disinformation attacks can be classified as infinite players. The problem is that good faith actors attempting to counter disinformation currently play as finite players. That is to say, they are trying to “win the war” on disinformation. However, bad faith actors are not playing to “win the war”, they are playing to perpetuate the game of disinformation. This imbalance has arguably created the quagmire good faith actors find themselves in today, where their efforts, though laudable, are ultimately futile.

The key requirement for successfully playing an infinite game is plentiful resources. The player with the most resources, and therefore the ability to keep playing, will outlast their opponent. As a bad faith actor conducting disinformation attacks deals in bits, not atoms, the only resources they require are bandwidth, storage and processing power. Alongside Moore’s Law (the observation that computing capability doubles roughly every two years while its cost halves), there is another technology trend I have observed that is salient for this piece.
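As a quick aside, the compounding effect of that stylised reading of Moore’s Law is easy to underestimate. A few lines of arithmetic, purely illustrative rather than a hardware forecast, make the point:

```python
# Purely arithmetic illustration of the stylised reading above:
# capability doubles, and cost halves, every two years.
for years in range(0, 21, 4):
    doublings = years / 2
    capability = 2 ** doublings   # relative computing capability
    cost = 0.5 ** doublings       # relative cost of that capability
    print(f"after {years:2d} years: capability x{capability:6.0f}, cost x{cost:.4f}")
```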

My observation is that there appears to be an inverse relationship between technology development and technology’s barrier to entry: as development has accelerated year on year, the barrier to entry has fallen with equal rapidity. For example, when the internet was first created it was available only to a handful of military organisations and elite academic researchers, and accessing it effectively required a computer science degree, even though it was primitive in comparison to Web 2.0, which is so intuitive a chimp can use it. As internet applications’ user interfaces have become ever more accessible, technologies that would have been inconceivably advanced not all that long ago are now within anyone’s reach.

Both of these trends have empowered bad faith actors, giving them access to an arsenal of advanced technologies that are either free or too cheap to meter. This creates another imbalance when set against the much broader array of resources a good faith actor requires to counter disinformation attacks. Quite simply, there are not enough counter-disinformation specialists in the world to counter the world’s disinformation.

So how can a good faith actor become an infinite player? One potential course of action is to even the balance of resources by changing the perspective from atoms to bits. As an AI researcher I am biased, but I’m a strong advocate for the use of machine learning to augment counter-disinformation efforts. Natural language processing is a rapidly developing field of machine learning and there are open-source language models that can be used to detect disinformation. Though they would not replace counter-disinformation specialists at the strategic level, they have the potential to be a powerful force multiplier for countering disinformation at the tactical level. 
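To illustrate what that tactical force multiplication might look like, below is a minimal sketch using the open-source Hugging Face transformers library. The checkpoint name is a hypothetical placeholder for any classifier fine-tuned on labelled disinformation data; it is an assumption for illustration, not a recommendation of a specific model.

```python
# Minimal sketch of ML-assisted triage, assuming the open-source Hugging Face
# `transformers` library. The checkpoint name below is a placeholder for any
# classifier fine-tuned on labelled disinformation data; it is NOT a real model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/disinfo-classifier",  # hypothetical placeholder checkpoint
)

posts = [
    "Breaking: scientists confirm the moon landing was staged.",
    "The city council meets on Tuesday to discuss new bus routes.",
]

for post in posts:
    result = classifier(post)[0]  # e.g. {"label": "DISINFO", "score": 0.97}
    # Surface only high-confidence hits for a human specialist to review;
    # the model triages volume, it does not adjudicate truth.
    if result["label"] == "DISINFO" and result["score"] > 0.9:
        print(f"FLAG ({result['score']:.2f}): {post}")
```

The design choice worth noting is triage: the model surfaces high-confidence candidates for a human specialist to review rather than adjudicating truth on its own, which is precisely the tactical-level augmentation described above.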

In conclusion, I hope this post has offered a different and thought-provoking perspective on disinformation. It is an extremely complex issue and I certainly do not claim to have a silver bullet. My hope is that there will be more state investment in machine learning technologies that can augment counter-disinformation efforts, and that these technologies will be more widely adopted by good faith actors aiming to play the infinite game.
