According to figures by Cyber Magazine, hackers’ interest in deepfake technology has surged by 43% since 2019. Recent incidents also point to the fact that malicious use of the technology is on the rise.

In this interview, Kumar Vaibhav, Solution Architect at In2IT Technologies, explains what deepfake technology is, why its malicious use has been on the rise and how governments and business organisations can protect themselves from this growing cybersecurity threat.

Tell us more about the evolution of deepfake technology over the years.

If we talk about it in layman’s terms, it is a technology by which a subject’s face is superimposed onto a target face, or a subject’s voice onto target audio. It can also work the other way around, where a voice is mapped onto a target face.

If you get more scientific, you can say that the technology relies on artificial neural networks. Let’s say you have an iPhone and you need to log into it via facial authentication. The iPhone captures all the nodal points of your face, and that’s how it authenticates you.

Deepfake technology works in a similar way. It’s a computer system that recognises patterns in data. Developing a deepfake photo or video involves feeding hundreds of thousands of images into artificial neural networks, which are trained on that data to identify and reconstruct the face or voice patterns.
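As a rough illustration (not a description of any specific product), the classic face-swap training setup pairs one shared encoder with a separate decoder per person. The toy NumPy sketch below uses linear layers and random vectors as stand-ins for face images; all names and sizes are hypothetical:

```python
import numpy as np

# Hypothetical toy sketch of the classic face-swap training idea:
# a shared encoder learns a compact representation of any face, while a
# separate decoder per person learns to rebuild that person's face.
# Random vectors stand in for real face images; sizes are arbitrary.
rng = np.random.default_rng(0)
DIM, LATENT = 64, 8

encoder = rng.normal(scale=0.1, size=(DIM, LATENT))    # shared encoder
decoder_a = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for person A

def train_step(faces, decoder, lr=0.01):
    """One gradient-descent step on reconstruction error for one person."""
    global encoder
    latent = faces @ encoder
    err = latent @ decoder - faces
    grad_dec = latent.T @ err / len(faces)
    grad_enc = faces.T @ (err @ decoder.T) / len(faces)
    decoder -= lr * grad_dec
    encoder -= lr * grad_enc
    return float((err ** 2).mean())

faces_a = rng.normal(size=(32, DIM))  # stand-ins for thousands of photos
losses = [train_step(faces_a, decoder_a) for _ in range(300)]

# The "swap" at inference time: encode any face, then decode it with
# person A's decoder, producing an output in A's likeness.
fake_a = (rng.normal(size=(1, DIM)) @ encoder) @ decoder_a
print(losses[0], losses[-1])  # reconstruction error falls as training runs
```

Real systems replace the linear layers with deep convolutional networks and train on actual photographs, but the division of labour — shared encoder, per-identity decoder — is the same.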

It can be used for good causes, like the David Beckham deepfake in which he appeared to speak nine languages to raise awareness about malaria. However, it can also be used for malicious purposes like committing fraud and spreading misinformation.

How prominent is deepfake technology in the world at the moment?

A couple of major incidents have happened recently, one of them involving the Russia/Ukraine war. So what happened was, around March a video was posted to social media which appeared to show the Ukrainian president directing his soldiers to surrender to Russian forces.

Another recent event was when a crypto project team was tricked into believing that it was meeting a Binance executive so that their tokens could be listed on the Binance platform. What the hackers did was take videos of that particular executive available on social media, digitally alter them and then create an AI hologram from them.

What has been the main reason for the uptick in deepfake technology attacks over the last few years?

Whenever a cyber attack happens, it’s not a one-step approach. It involves several steps: reconnaissance, understanding how to get into the environment, exfiltrating data and covering your tracks so no one catches you.

One of the reasons deepfake attacks have increased over the years is that AI technology, which is used to make deepfake videos and audio, is evolving at a much faster pace than companies can train personnel to deal with the attacks.

So, for example, by the time a company figures out how to stop audio deepfake attacks, attackers are already making video deepfakes, which means the company is always one step behind.

Another factor is perhaps the lack of sufficient cybersecurity budgets. Keeping up with technology that is evolving as rapidly as AI requires funds, and most businesses are unable or unwilling to invest in the necessary cybersecurity infrastructure and personnel.

Coming back to South Africa, how prominent is the use of deepfake technology in the country?

I haven’t seen much in South Africa yet. However, because of the rapid evolution of this technology and how interconnected everything is on the internet, it can easily escalate from country to country or from continent to continent.

All it takes is for a bad actor to identify an opportunity and exploit it. Major events like conflicts and elections are prime opportunities for these actors to use deepfake media, and because things like elections happen in every country, it’s just a matter of time.

How can South African companies ensure they do not fall victim to deepfake attacks?

Microsoft has a tool called Video Authenticator and, though it is not 100% effective (I think around 70% effective), it’s a start. You just put a video in and it will try to tell you whether the video is real or not.

Because there is no 100% effective tool on the market to tackle deepfake attacks, the best approach today is to try to understand what the technology is and how it works. Being vigilant is another important factor in preventing attacks.

If a person or company falls victim, how can they mitigate the effects of such an attack?

The main thing would be awareness, because once a manipulated video has been seen by the public, it is impossible to make them unsee it.

Education about deepfake technology would entail educating the public on how to differentiate between real and digitally-altered content.

Is there a counter technology that can prevent deepfake attacks?

Apart from the aforementioned Microsoft tool, there are also developments underway that will use cryptography to flag deepfake content.

A kind of cryptographic algorithm could be encoded into the original video so that, if someone tries to manipulate it, an alert will be raised.
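A minimal sketch of that idea, assuming the original publisher holds a secret signing key and distributes an authentication tag alongside the footage (the key, function names and sample bytes below are all hypothetical):

```python
import hashlib
import hmac

# Hypothetical signing key held by the original publisher of the video.
SECRET_KEY = b"publisher-signing-key"

def sign_video(video_bytes: bytes) -> str:
    """Return an HMAC-SHA256 tag computed over the original footage."""
    return hmac.new(SECRET_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, tag: str) -> bool:
    """Recompute the tag for a copy and compare it in constant time.
    Any manipulation of the bytes changes the digest, so the check fails."""
    expected = hmac.new(SECRET_KEY, video_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"\x00\x01original video frames..."  # stand-in for real footage
tag = sign_video(original)

print(verify_video(original, tag))            # True: untouched copy
print(verify_video(original + b"edit", tag))  # False: manipulated copy
```

This only proves that a copy matches what the publisher signed; it cannot detect a deepfake that was never signed in the first place, which is why it complements rather than replaces detection tools.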

But deepfake is rapidly evolving; do you think these counter technologies can keep up with it?

It is difficult but not impossible. It will take much more investment in cybersecurity by organisations and governments to tackle this problem and that investment is lacking at the moment.

The potential of deepfake technology to cause far-reaching harm should be motivation enough for relevant stakeholders to accelerate efforts in preventing it.

Do you think there is any role that regulation can play in helping to reduce the prominence of deepfake attacks in South Africa?

South Africa has the POPI Act, which prevents someone from publishing another person’s private and personal information. But in deepfake attacks the attacker is trying to deceive, so the regulation is lacking in that regard.

In the US, there is a law which states that if someone is caught creating deepfake content for malicious purposes, they could be jailed for it. So, South Africa can go this route as well.

But beyond laws, there should also be awareness about the technology and what its repercussions are, similar to how there were campaigns to educate the public and tackle misinformation with regards to COVID-19 and vaccines.

Please share any parting thoughts with our readers about deepfake technology.

Deepfake technology is still relatively new and evolving rapidly, so the best way to tackle its malicious use cases is education, education, education. Businesses and governments should be proactive instead of reactive when dealing with it.

As seen with examples like the case involving the Ukrainian president, deepfake content can have far-reaching consequences if not properly addressed, so it is best tackled with awareness tactics coupled with technologies like the Microsoft tool and cryptographic hashing.

