The Untold Dangers of Deepfake Technology


With the advent of deepfake technology, an era of new online threats has begun. In this article, I first show a few examples of how deepfakes can harm us personally, and then how society as a whole can be affected by fake information generated with artificial intelligence.

We have been using e-mail on a large scale for almost three decades. Everyone is now familiar with phishing, spam, and other nuisances, and the opportunities for online fraud and other forms of internet crime keep growing.

Deepfake technology

Yet that is child’s play compared to the threats posed by deepfakes. ‘Deepfakes’ are texts, images, videos, and audio fragments created by artificial-intelligence software. We are entering an online era in which we can no longer trust our eyes and ears. Deepfake technology takes fake information to a new level, unfortunately with serious negative consequences.

Identity fraud


For example, the CEO of a British energy company recently received a voicemail from Germany. He was convinced he was dealing with his manager at the German parent company; even the accent was unmistakable. The caller asked whether he would quickly transfer money to a Hungarian account. Dutiful as he was, he did so immediately. The voice, however, had been generated by an artificial-intelligence system: a voice-cloning deepfake.

Many future scenarios are conceivable in which a voice-cloning recording of someone’s voice can be misused. Consider the extraction of personal information for a later cyberattack, or a direct fraud attempt, for example via a spoken WhatsApp message:

“Hello Jane, I am calling from another phone because my iPhone is broken, and now I am standing in front of our new office. What is the entrance code again? That is also on my iPhone. Just that, thank you!”

Misleading the digital assistant

The digital assistant in your home, to which you give spoken commands, can probably also be fooled with such a cloned voice. What if a deepfake voice misleads an assistant and thus gains access to your agenda, your contact list, your e-mail, and your bank account?

  •  “Alexa, transfer $350 to the following account number”
  •  “Alexa, send a message to Harold asking him to cancel the trip to China”
  •  “Alexa, remove all appointments from my agenda until September 1st.”

As we increasingly operate our devices and software with voice control, the above scenario is dangerous and not unlikely.
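A toy sketch can make the risk concrete. Real assistants match a speaker with learned voice embeddings; the minimal Python below (all values and function names are hypothetical, invented for illustration) reduces a ‘voiceprint’ to a short list of numbers and verification to a distance threshold, which shows why a clone tuned to mimic the owner’s features would be accepted.

```python
# Toy sketch of why a cloned voice can fool naive speaker verification.
# Real assistants use learned speaker embeddings; here a "voiceprint"
# is just a list of numbers, and verification is a distance threshold.
# All values are made up for illustration.
import math

def distance(a, b):
    """Euclidean distance between two voiceprint feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(stored_print, sample_print, threshold=1.0):
    """Accept the speaker if the sample is 'close enough' to the stored print."""
    return distance(stored_print, sample_print) < threshold

owner = [4.2, 1.1, 3.0]        # enrolled voiceprint of the real user
stranger = [9.0, 5.5, 0.2]     # a random other voice: far away, rejected
clone = [4.1, 1.2, 2.9]        # a deepfake tuned to mimic the owner: accepted

print(verify(owner, stranger))  # False
print(verify(owner, clone))     # True
```

The point of the sketch: the system never checks *who* produced the audio, only whether its features are close enough, so a sufficiently good clone passes by construction.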

Reputational damage


Deepfake information, whether speech, images, or otherwise, can also seriously damage your reputation. And that is serious, because a good, often carefully built reputation is worth a lot. Imagine that such a fake voice lets you say things in a video or podcast that are politically or socially unacceptable. Try proving it is fake. Fake videos in which you appear can be particularly compromising. Even if they are demonstrably fake, they can circulate on social media forever and permanently damage your reputation. To the outside world, it soon applies: where there is smoke, there is fire. The suggestion alone has far-reaching consequences. It will then cost you an enormous amount of time and energy to clear yourself of all blame, if that succeeds at all.

Blackmail

Even more frightening is that the mere threat of a deepfake video can serve as a blackmail tool. The idea that such a video could be put into circulation can make someone desperate. An early example of this is the deepfake porn video, for which a tool circulated on the Reddit forum. That software replaced the faces of porn actresses in videos with the faces of famous women and celebrities.

Famous people are not the only ones vulnerable to such abuse. Such a tool can also be used to make revenge porn after a relationship has hit the rocks.

An example of deepfake technology: actress Amy Adams in the original (left) is modified to have the face of actor Nicolas Cage (right). Source: Wikipedia
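At its core, the kind of tool described above replaces the face region in every frame of a video. Real deepfake software synthesizes the new face with deep autoencoders; the pure-Python sketch below (entirely illustrative, not taken from any actual tool) shows only the naive final step: blending replacement pixels into a region of each frame.

```python
# Toy illustration of the core idea behind face-swap deepfakes:
# replacing a region of each video frame with another face.
# Real tools use deep autoencoders to synthesize the face; this
# sketch only does naive pixel replacement with simple alpha
# blending on grayscale "frames" (2-D lists of pixel values).
# All names and values are illustrative.

def swap_region(frame, face, top, left, alpha=0.8):
    """Blend `face` into `frame` at position (top, left)."""
    out = [row[:] for row in frame]  # copy so the original frame stays intact
    for i, face_row in enumerate(face):
        for j, pixel in enumerate(face_row):
            y, x = top + i, left + j
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                # Weighted blend: mostly the new face, a little of the original
                out[y][x] = round(alpha * pixel + (1 - alpha) * out[y][x])
    return out

def swap_video(frames, face, positions):
    """Apply the swap to every frame; `positions` gives the face box per frame."""
    return [swap_region(f, face, top, left)
            for f, (top, left) in zip(frames, positions)]

# A 4x4 grayscale "frame" and a 2x2 replacement "face"
frame = [[10] * 4 for _ in range(4)]
face = [[200, 200], [200, 200]]
result = swap_region(frame, face, 1, 1)
print(result[1][1])  # blended pixel: 0.8*200 + 0.2*10 = 162
```

In a real tool, the hard part is generating a convincing `face` for each frame and tracking where it belongs; the compositing step sketched here is comparatively trivial, which is exactly why such tools spread so quickly.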

Making a fake video is also relatively easy. All the ingredients are, in fact, already in place. Trend watcher Sander Duivestein explains in his report “Machines with imagination”:

“Making a fake video with deepfakes becomes as easy as telling a lie.” 

Couple that with the lightning-fast distribution of a video via social media and the presence of a large, eager audience, and the damage is done.

Liar’s dividend

Deepfake technology can, therefore, affect us all personally, but society as a whole is also vulnerable. The term ‘fake news’ has become established in recent years. That expression mainly refers to messages that have been taken out of context or to alleged lies.

With the speed at which deepfake technology is developing, we should not be surprised if, within a few years, politicians dismiss unwanted video or audio recordings as “deepfake news” with the same ease, even if those recordings are one hundred percent genuine. This is called the liar’s dividend: the benefit of the doubt that accrues to the liar. In the age of deepfakes, evidence against criminals, companies, or governments is therefore becoming less and less valuable.

Social unrest

Video recordings will also come into circulation to create division. Politically focused deepfake videos, for example, can be a serious strategy for one country to sow division in another. Such videos can fuel antagonism, undermine solidarity, and lead to hesitant decision-making.

This undermines the foundation of a democratic state, because it is precisely in a democracy that we must agree on a jointly experienced reality. Even if we disagree, we should not doubt the facts. If that foundation is lost to fake information, we will often not even arrive at solutions, because we cannot agree on what exactly the problems are.

End of credibility

Deepfake technology can also produce false texts. There are already systems that generate fake texts. They do not yet work perfectly, but that is perhaps only a matter of time. You only have to feed such a system a newspaper headline and press a button; a short text is then created by AI. That is truly deepfake news. If such news is spread en masse via social media, it casts doubt on the reliability of all the news that reaches us, real or fake.

Suppose a certain group in the Netherlands wants to put the police in a bad light. In the future, with AI software, they may be able to quickly put articles online that are written to negatively influence sentiment towards the police, with fabricated headlines such as:

  •  “Police crackdown on a peaceful demonstration against overly tolerant immigration policy”
  •  “Protesters attacked by politically biased police officers during a peaceful demonstration of the Dutch People’s Party”
  •  “Verbal and physical aggression by police at a meeting in The Hague”

Headlines like these can quickly lead to credible generated articles that distort reality.
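Real deepfake-text systems rely on large neural language models trained on huge corpora. As a heavily simplified illustration of the underlying principle, predicting the next word from the text seen so far, here is a toy Markov-chain generator in Python (the corpus and all names are invented for the example):

```python
# Toy illustration of statistical text generation. Real deepfake-text
# systems use large neural language models; this Markov-chain sketch
# only shows the underlying principle of predicting each next word
# from the previous one.
import random
from collections import defaultdict

def train(corpus):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, seed, length=8, rng=None):
    """Start from `seed` (e.g. a headline word) and chain observed successors."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: no word was ever seen after this one
        out.append(rng.choice(followers))
    return " ".join(out)

# Tiny invented "training" corpus of headline-like text
corpus = ("police break up demonstration in the city centre "
          "police arrest demonstrators in the city centre")
model = train(corpus)
print(generate(model, "police"))
```

A real system trained on millions of articles produces fluent paragraphs instead of word salad, but the mechanism, continuing a seed text with statistically plausible words, is the same in spirit.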

Lightning-fast spread

This massive spread can happen very quickly. After all, as consumers of social media, we all click all too eagerly on funny, juicy, negative, provocative, or prejudiced news, and we are happy to share it with one another. We prefer to share it within a group of people who see the world just as we do, with the same interests and prejudices. With a simple swipe or press of a button, we spread the news across all platforms. We share information that confirms our existing world view (rightly or wrongly), especially if it is current and negative in nature. That applies not only to real news, but also to fakes.

Even if deepfake videos and deepfake texts never reach the general public, they can still persist in the corners of the internet and reinforce a biased world view. Think of supporters of a religious extremist ideology or of anti-establishment groups.

Indifference

How can the consumer, comfortable within his filter bubble, tell the difference between fake and real? How does the consumer react when he or she realizes that ‘first seeing, then believing’ no longer applies?

The most likely result is apathy: a broad social insensitivity to news, whether real or fake. Such a development is bad for our democracy, in which the free gathering and distribution of news play an essential role.

Opportunity for traditional media

To conclude on a positive note: there is also an opportunity here for traditional media. More than ever, they must act as a filter, distinguishing real from fake and providing the news consumer with reliable news, whether in print, online, or on television. Do you want to know more? I wrote an extensive report on deepfake technology.
