
Deepfakes: Navigating the Intersection of Technology and Deception in International Conflict
AI and foreign policy experts at Northwestern University and the Brookings Institution suggest that world governments create a code of conduct to moderate the use of deepfakes and combat digital disinformation.
In the Information Age, online disinformation has become a powerful tool for securing political advantage, one now common to developing and developed nations alike.
The growing accessibility of advanced digital technology has made it significantly harder to determine what is real and what isn't in cyberspace. One such technology, known as 'deepfakes', has become a particular cause for concern for governments endeavouring to counter disinformation.
This is partly because of how easy the technology makes it for malicious cyber threat actors to manipulate audio-visual content and disseminate disinformation, for purposes ranging from harassment, fraud and coercion to large-scale manipulation of public opinion.
A manipulated video of Ukrainian President Volodymyr Zelenskyy that emerged in March 2022, in which he ostensibly asks Ukrainian troops to surrender amid the ongoing conflict between Russia and Ukraine, serves as a stark reminder of the dangers of deepfakes and of the potential for adversarial cyber threat actors to weaponize this technology to spread disinformation in geopolitical conflicts.

A deepfake of the Ukrainian President calling for Ukrainian troops to surrender emerged in March 2022. (Image: The Telegraph)
Acknowledging the potential for such abuse of technology in international conflicts, a recently published report by artificial intelligence (AI) and foreign policy experts at Northwestern University and the Brookings Institution puts forth recommendations for security officials and policymakers on how to address the burgeoning threat of deepfake technology.
The authors urge policymakers in the United States and its allies to develop a code of conduct for the use of deepfakes.
“The ease with which deepfakes can be developed for specific individuals and targets, as well as their rapid movement—most recently through a form of AI known as stable diffusion—point toward a world in which all states and nonstate actors will have the capacity to deploy deepfakes in their security and intelligence operations,” emphasize the authors in their report, “Deepfakes and international conflict”.
“Security officials and policymakers will need to prepare accordingly.”
The rise of specialized AI is a double-edged sword with the potential to reshape the digital landscape. In the wrong hands, and left unchecked, specialized AI tools such as deepfakes can have serious consequences for individuals, organizations and nations.
Policymakers around the world will need to exercise prudent judgement, and the authors of the report stress the need for long-term strategies that combat the spread of disinformation through a multifaceted approach.

Image: Visuals on Unsplash
This entails educating the general public to improve digital literacy and critical reasoning; developing systems that can trace the movement of digital assets (such as images, audio-video clips and documents) and identify who has handled them, so that accountability can be established; and encouraging journalists and intelligence analysts to verify content against independent sources, such as verification codes, before including it in published articles.
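To make the idea of content verification concrete, here is a minimal sketch, in Python, of how a newsroom might check a downloaded clip against a checksum published by the original source before using it. The file name, checksum value and function names are hypothetical illustrations, not methods taken from the report; they simply show the kind of integrity check on which provenance and accountability systems are built.

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_asset(path: Path, published_checksum: str) -> bool:
    """Return True only if the asset's digest matches the checksum released by the source."""
    return sha256_of_file(path) == published_checksum.lower()


if __name__ == "__main__":
    # Hypothetical example: an editor checks a downloaded video against the
    # checksum the original source published alongside it.
    asset = Path("presidential_address.mp4")          # illustrative file name
    published = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"  # illustrative value
    if asset.exists():
        if verify_asset(asset, published):
            print("Checksum matches: the asset is unmodified since publication.")
        else:
            print("Checksum mismatch: treat this asset as potentially manipulated.")
```

A matching checksum only proves the file has not been altered since the source published it; establishing that the source itself is trustworthy still requires the cross-examination against independent reporting that the report's authors recommend.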
Institutional attention to deepfakes is also on the rise. The European Parliament, in its report 'Tackling deepfakes in European policy', has identified five dimensions of the deepfake life-cycle that policymakers can take into account when drafting legislation to mitigate the adverse impacts of deepfake-enabled disinformation.
The coming age of disinformation, however, calls for a concerted effort from journalists, intelligence analysts, policymakers and the general public to take proactive measures that counter the spread of disinformation and safeguard the integrity of institutions, democratic processes, national security and public trust.