Starting in December 2017 and gaining traction through 2018, a new technology has shaken the internet for several reasons. You can now find numerous videos online of people (mostly celebrities) having their faces inserted into other people's footage. They're called "deepfakes." Everything points to this being more than a passing trend. And if we now live in a world where this happens, we can start talking about its ethical consequences.

The remarkable thing about these clips is that they are made with relatively few resources, yet the results can be, and often are, very impressive: at first glance, or without explicitly looking for the manipulation, one could be completely fooled.

This of course presents amazing opportunities in everything from special effects down to internet memes (for now, mostly memes). But since there are less benign uses for it too, the question of whether it is ethical inevitably arises.

To understand it, let's take a quick look at what this technology actually is.

What exactly is this?

The short version is that this is an AI and machine learning-powered technology that can insert the face of one person onto another's body, usually in a video or GIF. The results are, as previously stated, impressive. Here's a good example of Trump's face inserted into a speech by Angela Merkel.

Weird, right? It might not seem like much, but the important part is that someone did this from their home computer. That's the scary part, because now we're dealing with something that will become massive in the coming months, if it hasn't already.

While similar technology has existed for a while, it never had this level of notoriety or sophistication. The current trend originated on Reddit with a user called "deepfakes", whose influence has been so significant that the practice now bears that name: a deepfake, or deepfaking.
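At its core, the original technique is usually described as a pair of autoencoders that share one encoder: the encoder learns a generic "face" representation, and each identity gets its own decoder; swapping means encoding person A's face and decoding it with person B's decoder. Here is a minimal toy sketch of that idea in numpy. All the names and sizes are illustrative, the weights are random rather than trained, and this is not the actual deepfakes code, just the shape of the trick:

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_LATENT = 64 * 64, 256  # a flattened 64x64 face crop, latent size

# One shared encoder, one decoder per identity. In a real system these
# would be deep networks trained on thousands of face crops per person;
# here they are random matrices just to show the data flow.
W_enc = rng.normal(scale=0.01, size=(D_IN, D_LATENT))
W_dec = {
    "A": rng.normal(scale=0.01, size=(D_LATENT, D_IN)),
    "B": rng.normal(scale=0.01, size=(D_LATENT, D_IN)),
}

def encode(face):
    # Project any face into the shared latent space.
    return np.tanh(face @ W_enc)

def decode(latent, identity):
    # Reconstruct a face in the chosen identity's style.
    return np.tanh(latent @ W_dec[identity])

def swap(face, target_identity):
    # The core face-swap trick: shared encoding, target decoder.
    return decode(encode(face), target_identity)

face_a = rng.normal(size=(D_IN,))  # stand-in for one frame of person A
swapped = swap(face_a, "B")        # the same frame rendered "as" person B
print(swapped.shape)               # same size as the input frame
```

Because the encoder is shared, it can only store what both faces have in common (pose, expression, lighting), which forces each decoder to supply its owner's identity. That is what makes the swap work once the networks are actually trained.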

The problem

While deepfakes have been a source of amusement in many cases, as well as a striking example of technological advancement, problems can of course arise from them. Let's not kid ourselves: the first widely known deepfakes were adult videos with celebrities' faces inserted into them.

This makes things range from funny to serious, questionable, and even potentially illegal. In fact, not long after they appeared, several sites, including Reddit and Twitter, banned them. However, as the internet has repeatedly proven, this won't just go away. So where do we go from here?

The legality of these clips (the ones used for compromising or humiliating footage without the consent of the person depicted) isn't something the law even contemplates yet, so technically they are still not illegal. But we can all agree that the ethics of it don't need much discussion.

It's especially worrying because, before any other application, the technology was put to malicious use from the very beginning, and its main targets are women.

A cycle that repeats itself

This is something we've seen happen with basically every new technology. The examples are endless: the rise of MP3s and file-sharing sites led to the drafting of modern piracy laws; the popularity of drones led to strict regulations on their use; each new connected platform has chipped away at privacy. And we could spend a whole day talking about the ethics of self-driving cars.

The point is clear: with every new disruptive and widely available technology, new problems arise. The controversy around deepfakes, if that's what they end up being called in the long term, exemplifies the toxic behaviors that technology can amplify.

But there's also another side of the coin. A lighter take on the technology is putting Nicolas Cage into famous movie scenes, for instance. The results are hilarious.

This is a perfect example of the democratization of technology, just as Photoshop and similar tools were when they first appeared and caused their own controversies.

Sure, this is much more difficult terrain to navigate, with wildly different implications. But as a society we must rise to the challenge, making good use of the technology while combating its toxicity. In fact, that may already be happening: AI is now being used to combat malicious deepfakes.