This course covers the fundamentals necessary to build a state-of-the-art GAN. Anyone who has experimented with GANs on their own knows that it's easy to throw together a GAN that spits out MNIST digits, but producing photorealistic images at a resolution larger than a thumbnail is another level of difficulty entirely.
This course comprehensively bridges the gap between MNIST digits and high-definition faces. You'll create and train a GAN that can be used in real-world applications.
And because training high-resolution networks of any kind is computationally expensive, you'll also learn how to distribute your training across multiple GPUs or TPUs. For training, we'll leverage Google's TPU hardware for free in Google Colab, which allows students to train generators at resolutions up to 512x512 with no hardware cost at all.
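To give a sense of what that setup looks like, here is a minimal sketch of creating a TPU distribution strategy, assuming TensorFlow 2 on a Colab TPU runtime. The tiny stand-in generator is illustrative only and is not the course's actual model.

```python
import tensorflow as tf

# Detect and initialize the Colab TPU, then build a distribution strategy.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

print("Replicas in sync:", strategy.num_replicas_in_sync)  # 8 cores on a Colab TPU

with strategy.scope():
    # A full StyleGAN-style generator would be built here; this tiny
    # stand-in model simply keeps the sketch self-contained.
    generator = tf.keras.Sequential([
        tf.keras.layers.Dense(4 * 4 * 64, activation="relu", input_shape=(512,)),
        tf.keras.layers.Reshape((4, 4, 64)),
        tf.keras.layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                                        activation="tanh"),
    ])
```

Variables created inside `strategy.scope()` are replicated across all TPU cores, so each training step runs in parallel on every core.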
The material for this course is drawn from the ProGAN, StyleGAN, and StyleGAN2 papers, which have produced ground-breaking and awe-inspiring results. We'll even use the same Flickr-Faces-HQ (FFHQ) dataset to replicate their results.
Finally, what GAN course would be complete without having some fun with the generator? Students will learn not only how to generate an infinite quantity of unique images, but also how to filter for the highest-quality images using a perceptual path length filter. You'll even learn how to generate smooth interpolations between two generated images, which make for some really interesting visuals.
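As a small taste of the interpolation idea, here is a minimal sketch in NumPy. The `generator` call in the usage comment is hypothetical, standing in for a trained model that maps latent vectors to images.

```python
import numpy as np

def interpolate_latents(z0, z1, steps=16):
    """Linearly interpolate between two latent vectors.

    z0, z1: latent codes of shape (latent_dim,).
    Returns an array of shape (steps, latent_dim), one latent per frame.
    """
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - alphas) * z0 + alphas * z1

# Hypothetical usage: feed each interpolated latent through a trained
# generator to get a smooth sequence of frames between two generated faces.
# frames = [generator(z[None, :]) for z in interpolate_latents(z0, z1)]
```

Because nearby points in the latent space decode to similar images, walking along this path produces a smooth morph from one face to the other.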