13 February 2018 #Information Technology
You may have heard of the latest AI-enabled technology, Deepfake, and you would be forgiven for trying to deny knowledge of its existence, given that the main product of this technology, as reported by hundreds of media outlets already, is pornographic in nature.
So what is Deepfake?
To put it simply, it’s a computer program which can, in principle, “swap” anyone’s face into any video you want. The program does this via a process known as “training”: you collect two sets of images - one of the face you want to insert (the “source”) and one of the face already in your video (the “target”). A neural network learns to encode and reconstruct both faces from these sets; once trained, it can take each frame of the target video and render the source’s face in place of the target’s, matching pose and expression. The result is a video showing the target with the source’s face. With tens or even hundreds of hours of training, the results can be rather impressive, albeit disconcerting.
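To make the idea concrete, here is a minimal sketch of the architecture behind this kind of face swap: one encoder shared by both faces, plus a separate decoder per face. The “swap” is simply encoding one person’s frame and decoding it with the other person’s decoder. This toy uses random vectors in place of real face crops and plain linear algebra in place of a deep convolutional network - it is an illustration of the training scheme, not a working deepfake tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for aligned face crops: 200 "frames" per person,
# each flattened to a 64-dimensional vector. (Real systems use
# convolutional networks on actual image crops.)
faces_a = rng.normal(size=(200, 64)) @ rng.normal(size=(64, 64)) * 0.1
faces_b = rng.normal(size=(200, 64)) @ rng.normal(size=(64, 64)) * 0.1

dim, latent = 64, 16
enc = rng.normal(scale=0.1, size=(dim, latent))    # shared encoder
dec_a = rng.normal(scale=0.1, size=(latent, dim))  # decoder for person A
dec_b = rng.normal(scale=0.1, size=(latent, dim))  # decoder for person B

def step(x, enc, dec, lr=1e-3):
    """One gradient-descent step minimising reconstruction error."""
    z = x @ enc        # encode each frame into the shared latent space
    out = z @ dec      # decode back into "face" space
    err = out - x      # reconstruction error
    grad_dec = z.T @ err / len(x)
    grad_enc = x.T @ (err @ dec.T) / len(x)
    return enc - lr * grad_enc, dec - lr * grad_dec, np.mean(err ** 2)

# "Training": alternate between the two faces so the encoder is shared.
losses = []
for epoch in range(500):
    enc, dec_a, loss_a = step(faces_a, enc, dec_a)
    enc, dec_b, loss_b = step(faces_b, enc, dec_b)
    losses.append(loss_a)

# The "swap": encode a frame of person B, decode with person A's decoder,
# yielding person A's face in person B's pose.
swapped = (faces_b[:1] @ enc) @ dec_a
print(swapped.shape)  # (1, 64)
```

Because the encoder is shared, it learns pose and expression common to both faces, while each decoder learns one identity; that separation is what makes the swap possible.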
So how does this concern us? Well, for one, there’s the potential for immense breaches of consent as well as privacy. The program’s simplicity makes it possible for anyone with a computer to take a person’s face and map it onto anything they want, be it for personal or commercial use. The main issue is that such uses are near impossible to detect if the perpetrator does not publicise their “work” - and, even if they did, what laws are there to protect us?
There have been arguments from both sides. Some say Deepfakes are harmless: swapping a face onto a publicly available video does not, in itself, violate anyone’s rights. Others question the morality of the practice, and whether non-consensual face-swapping should be considered illegal. After all, there is currently no law dictating that this is unacceptable and should, therefore, carry a punishment.
At the time of writing, Deepfakes have been banned on a number of mainstream platforms - Twitter, Discord, Gfycat, and Reddit to name but a few. But any internet user of the modern age knows that nothing on the internet ever really goes away; it merely migrates to another platform (i.e. another website).
It seems apparent from this new AI-enabled technology that the law struggles at times to keep up with reality - to what extent must new technologies be abused before lawmakers take action? It may well be that, one day, someone pioneers an AI system capable of analysing technological trends and proposing laws to address the risks faster than humans can. We may even find that AI develops enough to make and enforce our laws – what do you think?