Posted by Sponsored Post on 7 August 2019

AI Deepfake: from innocuous entertainment to information warfare

AI technology is generally used for positive purposes; many CAD and 3D design programs, for example, rely on it. Unfortunately, AI technology can also be used in harmful ways. In recent months, news and entertainment outlets have been awash with reports of Deepfake technology being used in detrimental ways. In this article, we look at AI Deepfake technology and the problems it is causing.

What is AI Deepfake?

You may have heard of this term in the news. Deepfake is a technique used in photo and video editing. It has a myriad of valid purposes, but unscrupulous users have employed it maliciously. The following provides some basic background on the technology:

Deepfake – a definition

Deepfake is a technique that enables human image synthesis using artificial intelligence. What does this mean? It means that users can superimpose an existing image, such as a person’s head from a photo, onto other material, such as a video or other image.

Let’s say, for example, you have an image of Person A and a video of Person B. Using Deepfake technology, the likeness of Person A could be superimposed onto the video of Person B, creating the illusion that Person A is the person in the video. The result is extremely realistic, and it is difficult to tell that the video has been edited.

How does the technology work?

For the technology to work, users must supply a large volume of photos and/or video of the person they wish to superimpose. This material should cover a variety of angles, lighting conditions, and poses. Mainstream editors already use related techniques: the Luminar 3 photo editor, for example, uses AI to recognize and adjust individual objects in a photo, reducing the resources and manual effort that editing requires. AI code of this kind is being continuously improved, and its progress has been an impetus for the development of Deepfake.

The Deepfake artificial intelligence then analyses the supplied content and attempts to learn the behavior and movements of the person. That learned model is used to create a realistic 3D representation of the person, which can then be superimposed onto other source material. Creating this type of content requires a huge amount of graphics and processing power, as well as a large amount of storage space – a typical Deepfake video, for example, could take up 4GB.
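The learning step described above is commonly implemented as a pair of autoencoders that share a single encoder: the shared encoder learns what both sets of footage have in common (pose, lighting, expression), while a separate decoder per person reconstructs that person's identity. The sketch below is a deliberately minimal NumPy illustration of that idea – the random vectors, tiny linear "networks", and all variable names are stand-ins for real face images and deep convolutional models, not an actual Deepfake implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for aligned face images: 64-dim vectors instead of photos.
faces_a = rng.normal(size=(100, 64))   # many angles/poses of "Person A"
faces_b = rng.normal(size=(100, 64))   # many angles/poses of "Person B"

# One shared encoder learns what is common (pose, lighting, expression);
# each person gets a private decoder that reconstructs their identity.
W_enc   = rng.normal(scale=0.1, size=(64, 16))
W_dec_a = rng.normal(scale=0.1, size=(16, 64))
W_dec_b = rng.normal(scale=0.1, size=(16, 64))

def mse(x, y):
    return float(np.mean((x - y) ** 2))

initial_loss = mse(faces_a @ W_enc @ W_dec_a, faces_a)

lr = 0.01
for step in range(500):
    for faces, W_dec in ((faces_a, W_dec_a), (faces_b, W_dec_b)):
        z = faces @ W_enc          # encode into the shared latent space
        err = z @ W_dec - faces    # reconstruction error for this person
        # Plain gradient descent on the mean squared error.
        grad_dec = z.T @ err / len(faces)
        grad_enc = faces.T @ (err @ W_dec.T) / len(faces)
        W_dec -= lr * grad_dec     # in-place: updates W_dec_a / W_dec_b
        W_enc -= lr * grad_enc

final_loss = mse(faces_a @ W_enc @ W_dec_a, faces_a)

# The "swap": encode a frame of Person B, then decode it with Person A's
# decoder, yielding Person A's identity in Person B's pose (a toy vector).
frame_b = faces_b[0]
swapped = frame_b @ W_enc @ W_dec_a
```

Swapping decoders at inference time – encoding Person B's frame and decoding it with Person A's decoder – is what produces the face swap; real tools apply the same idea with convolutional networks, face detection, and alignment, which is why they need so much data and processing power.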

How has AI Deepfake been used?

As you can imagine, this technology has both positive and negative applications. Unfortunately, it has largely been used to create harmful content. We have listed below some of the common applications for AI Deepfake technology:

Academic Research into Artificial Intelligence

Originally, this technology was used to test the boundaries and usability of artificial intelligence. Several academic projects sought to create Deepfake technology to further the fields of computer vision and AI. The most notable was the “Synthesizing Obama” program.

First published in 2017, the project used the technology to modify existing footage of President Barack Obama, producing a video in which Obama mouthed different words and delivered an entirely different speech. The end result was extremely realistic, proving that the technology was effective and had real applications.


Fake Pornography

Unfortunately, Deepfake technology has largely been used for negative purposes – one such use is the creation of fake pornography. Amateurs have produced realistic pornographic material featuring celebrities. This is, of course, hugely damaging – sites that displayed the videos, including Reddit, Twitter, and Pornhub, have removed the content.

Celebrities such as Emma Watson, Katy Perry, Taylor Swift, and Gal Gadot have all been the subjects of Deepfake pornographic videos. The videos place these celebrities in compromising sexual scenes – their faces and likenesses superimposed onto existing pornographic footage. The effect looks realistic, and it is hard to tell that the video is actually fake.

Furthermore, a wave of “revenge porn” has also emerged: a disgruntled person creates a Deepfake video of someone they feel has mistreated them and distributes it across the web as a form of revenge. The implications of this are obviously severe and deeply damaging to the subject of the video.


Political Mockery

Politicians are also a prime target for malicious Deepfake content. The “Synthesizing Obama” creation was not intended to be malicious – it was created merely to test the limits of the technology. Other creations, however, have the sole intent of harming and mocking politicians.

Donald Trump, Angela Merkel, and Mauricio Macri have all been the subjects of Deepfake productions. Argentinian President Mauricio Macri, for example, has had the head of Adolf Hitler superimposed onto his body, while Donald Trump has been the target of numerous productions mocking his appearance and use of language.

Why is AI Deepfake bad news?

Deepfake videos are blurring the line between reality and fiction. The main problem is that the technology is so advanced that the videos are hard to identify as fakes. Take the pornography, for example – the end product is convincing enough that the celebrity genuinely appears to be part of the video. This raises the problem of knowing what is real and what is fake – how can we, the general public, tell a genuine video from a Deepfake head-swap creation?

The problem extends beyond celebrities, however. Celebrities are, to an extent, protected by their fame – their reputations are rarely damaged greatly by Deepfake productions. But what about ordinary people? What if a malicious Deepfake video is created using an average person’s likeness? What could they really do, and how badly would it damage their reputation? The technology leaves people vulnerable.

Furthermore, a myriad of applications, such as FakeApp, DeepDream, and DeepFaceLab, have been released, offering Deepfake or simple face-swap technology to the general public. Is this a good thing? Again, it all depends on how the technology is used!

For the most part, AI technology brings innovation to a myriad of industries: graphic design and editing programs gain efficiency, and AI also helps improve marketing and SEO analysis. Technologies such as Deepfake, however, undermine the usefulness of AI – they put a stain on the field, and that is something we should strive to avoid.
