What are deepfakes? AI that deceives

Summary: Deepfakes extend the idea of video compositing with deep learning to make someone appear to say or do something they didn't really say or do.

Deepfakes

▲ Image (Source: BrownMantis)

Deepfakes are media — often video but sometimes audio — that were created, altered, or synthesized with the aid of deep learning to attempt to deceive some viewers or listeners into believing a false event or false message.

The original example of a deepfake (by Reddit user /u/deepfakes) swapped the face of an actress onto the body of a porn performer in a video, which was, of course, completely unethical, although not initially illegal. Other deepfakes have changed what famous people were saying, or the language they were speaking.

Deepfakes extend the idea of video (or movie) compositing, which has been done for decades. Significant video skills, time, and equipment go into video compositing; video deepfakes require much less skill, time (assuming you have GPUs), and equipment, although they are often unconvincing to careful observers.

How to create deepfakes

Originally, deepfakes relied on autoencoders, a type of unsupervised neural network, and many still do. Some people have refined that technique using GANs (generative adversarial networks). Other machine learning methods have also been used for deepfakes, sometimes in combination with non-machine learning methods, with varying results.

Autoencoders

Essentially, autoencoders for deepfake faces in images run a two-step process. Step one is to use a neural network to extract a face from a source image and encode that into a set of features and possibly a mask, typically using several 2D convolution layers, a couple of dense layers, and a softmax layer. Step two is to use another neural network to decode the features, upscale the generated face, rotate and scale the face as needed, and apply the upscaled face to another image.
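To make that two-step structure concrete, here is a minimal PyTorch sketch of the shared-encoder, per-identity-decoder autoencoder commonly used for face swapping. The layer sizes, the 64x64 resolution, and all names are illustrative assumptions rather than the architecture of any particular tool (the dense bottleneck here stands in for the feature-encoding step; real tools vary in their exact layer mix), and face detection, alignment, masking, and blending back into the target frame are omitted.

```python
# A minimal sketch of the shared-encoder / two-decoder autoencoder
# used for face swapping. All sizes and names are illustrative
# assumptions, not taken from any specific deepfake tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            # Several strided 2D convolutions extract face features.
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(256, 512, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Flatten(),
            # Dense layers compress to a compact feature vector and back.
            nn.Linear(512 * 4 * 4, latent_dim),
            nn.Linear(latent_dim, 4 * 4 * 512),
        )

    def forward(self, x):
        return self.net(x).view(-1, 512, 4, 4)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # Transposed convolutions upscale features back into a face.
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

# One encoder is shared; each identity gets its own decoder.
encoder = Encoder()
decoder_a = Decoder()  # learns to reconstruct faces of person A
decoder_b = Decoder()  # learns to reconstruct faces of person B

# Face swap at inference time: encode a face of A, decode with B's
# decoder, which renders the same pose and expression in B's identity.
face_a = torch.rand(1, 3, 64, 64)     # stand-in for a cropped, aligned face
swapped = decoder_b(encoder(face_a))  # (rotation, scaling, and blending
                                      # into the target frame are omitted)
```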

Training an autoencoder for deepfake face generation requires a lot of images of the source and target faces from multiple points of view and in varied lighting conditions. Without a GPU, training can take weeks. With GPUs, it goes a lot faster.
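As a rough sketch of what that training looks like (reusing the hypothetical model objects from the sketch above), each decoder learns to reconstruct its own person's faces through the shared encoder; the data loading and augmentation that supply the varied poses and lighting conditions are elided here, with random tensors standing in.

```python
# Hypothetical training loop for the sketch above: each identity's
# decoder is trained to reconstruct its own faces through the shared
# encoder, so the encoder learns identity-independent face features.
import itertools
import torch.nn.functional as F

opt = torch.optim.Adam(
    itertools.chain(encoder.parameters(),
                    decoder_a.parameters(),
                    decoder_b.parameters()),
    lr=5e-5,
)

for step in range(100_000):  # weeks on CPU, far faster on a GPU
    # batch_a, batch_b: cropped, aligned 64x64 face batches for each
    # person (loading/augmentation omitted; random tensors stand in).
    batch_a = torch.rand(16, 3, 64, 64)
    batch_b = torch.rand(16, 3, 64, 64)

    # Reconstruction loss for each identity through the shared encoder.
    loss = (F.l1_loss(decoder_a(encoder(batch_a)), batch_a) +
            F.l1_loss(decoder_b(encoder(batch_b)), batch_b))

    opt.zero_grad()
    loss.backward()
    opt.step()
```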

Full text: InfoWorld

If you like this article, please follow our Facebook page: Big Data In Finance
