Exploring Deepfakes
One major current trend in AI is upscaling content. This is a great use for AI, which can fill in missing detail learned from its training data; temporally aware upscalers do this especially well, recovering detail by tracking an object across multiple frames. Unfortunately, AI-upscaled footage is problematic as training data for generative AI such as deepfakes and is prone to causing training failures. Even a very good upscaling AI introduces glitches and artifacts. These artifacts may be difficult for the eye to see, but the deepfake AI searches for patterns and will often be tripped up by them, causing the training to fail.
Generally, the best way to deal with upscaling is not to upscale your training data but instead to upscale the output. In many ways this is even better, since it can replace missing face detail and improve the resolution of the output at the same time. The reason that chaining AIs in that direction doesn’t cause failures is that, unlike deepfakes...
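
To make the ordering concrete, here is a minimal sketch of that workflow: the face swap runs on the original, un-upscaled footage, and the AI upscaler is applied only as a final pass over the swapped output. The function names (extract_frames, swap_faces, upscale_frames, write_video) are hypothetical placeholders standing in for whatever tools you use, not any particular library's API.

```python
# A sketch of the recommended pipeline order: swap first, upscale last.
# All functions below are placeholder stubs for illustration only.

from pathlib import Path


def extract_frames(video: Path) -> list:
    """Placeholder: decode the source video into individual frames."""
    return []


def swap_faces(frames: list) -> list:
    """Placeholder: run the trained deepfake model on original-resolution frames."""
    return frames


def upscale_frames(frames: list) -> list:
    """Placeholder: apply an AI upscaler as a final post-processing pass."""
    return frames


def write_video(frames: list, out: Path) -> None:
    """Placeholder: encode the finished frames back into a video file."""


def render(source: Path, out: Path) -> None:
    frames = extract_frames(source)     # 1. original footage, never pre-upscaled
    swapped = swap_faces(frames)        # 2. the face swap sees clean, artifact-free data
    finished = upscale_frames(swapped)  # 3. upscaling happens last, on the output only
    write_video(finished, out)


if __name__ == "__main__":
    render(Path("source.mp4"), Path("result.mp4"))
```

The point of the ordering is that the upscaler's subtle artifacts never enter the deepfake's training or input data; they only appear in the finished frames, where they act as cosmetic polish rather than misleading patterns.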