Hands-On Generative Adversarial Networks with Keras

By: Rafael Valle

Overview of this book

Generative Adversarial Networks (GANs) have revolutionized the fields of machine learning and deep learning. This book will be your first step toward understanding GAN architectures and tackling the challenges involved in training them. It opens with an introduction to deep learning and generative models and their applications in artificial intelligence (AI). You will then learn how to build, evaluate, and improve your first GAN with the help of easy-to-follow examples. The next few chapters will guide you through training a GAN model to produce and improve high-resolution images. You will also learn how to implement conditional GANs that enable you to control characteristics of GAN output. You will build on your knowledge further by exploring a new training methodology for progressive growing of GANs. Moving on, you'll gain insights into state-of-the-art models in image synthesis, speech enhancement, and natural language generation using GANs. In addition to this, you'll be able to identify GAN samples with TequilaGAN. By the end of this book, you will be well-versed with the latest advancements in the GAN framework using various examples and datasets, and you will have developed the skills you need to implement GAN architectures for several tasks and domains, including computer vision, natural language processing (NLP), and audio processing.

Foreword by Ting-Chun Wang, Senior Research Scientist, NVIDIA
Table of Contents (14 chapters)

  • Section 1: Introduction and Environment Setup (begins at Chapter 1)
  • Section 2: Training GANs (begins at Chapter 4)
  • Section 3: Application of GANs in Computer Vision, Natural Language Processing, and Audio (begins at Chapter 8)

Generation of Discrete Sequences Using GANs

In this chapter, you will learn how to implement the model used in the paper Adversarial Generation of Natural Language by Rajeswar et al. The model was first described in the paper Improved Training of Wasserstein GANs by Gulrajani et al., and it is capable of generating short discrete sequences over small vocabularies.
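For orientation, the following is a minimal Keras sketch of such a generator: a residual 1D-CNN that maps a noise vector to a matrix of per-position softmax distributions over a character vocabulary, in the spirit of the architecture from Gulrajani et al. The sequence length, vocabulary size, and layer widths are illustrative assumptions rather than the book's exact settings.

```python
# Minimal sketch of a character-level GAN generator (sizes are assumptions).
import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN = 32       # length of each generated character sequence (assumed)
VOCAB_SIZE = 96    # size of the character vocabulary (assumed)
NOISE_DIM = 128    # dimensionality of the latent noise vector (assumed)
DIM = 64           # number of filters in each residual block (assumed)


def residual_block(x):
    """A residual block of two 1D convolutions with ReLU activations."""
    shortcut = x
    h = layers.ReLU()(x)
    h = layers.Conv1D(DIM, kernel_size=5, padding="same")(h)
    h = layers.ReLU()(h)
    h = layers.Conv1D(DIM, kernel_size=5, padding="same")(h)
    return layers.Add()([shortcut, h])


def build_generator():
    """Map a noise vector to a (SEQ_LEN, VOCAB_SIZE) matrix of softmaxes."""
    z = layers.Input(shape=(NOISE_DIM,))
    h = layers.Dense(SEQ_LEN * DIM)(z)
    h = layers.Reshape((SEQ_LEN, DIM))(h)
    for _ in range(5):
        h = residual_block(h)
    # 1x1 convolution down to the vocabulary size, softmax over characters
    logits = layers.Conv1D(VOCAB_SIZE, kernel_size=1)(h)
    probs = layers.Softmax(axis=-1)(logits)
    return models.Model(z, probs, name="generator")


generator = build_generator()
samples = generator.predict(np.random.normal(size=(4, NOISE_DIM)))
print(samples.shape)  # (4, SEQ_LEN, VOCAB_SIZE)
```

Each row of the output is a soft distribution over characters; a generated string can be read off by taking the argmax at every position.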

We will first address language generation as a conditional probability problem, in which we want to estimate the probability of the next token given the previous tokens. We will then address the challenges involved in training GANs on discrete sequences.
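For reference, this is the standard chain-rule factorization behind that framing, where each token is conditioned on all of the tokens that precede it:

```latex
p(x_1, \dots, x_T) = \prod_{t=1}^{T} p(x_t \mid x_1, \dots, x_{t-1})
```

The main difficulty for GANs is that sampling a discrete token is not differentiable, so gradients cannot flow from the discriminator back through hard token choices; one common workaround, used by the model from Gulrajani et al., is to have the generator output softmax distributions over the vocabulary instead of hard tokens.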

After this introduction to language generation, you will learn how to implement the model described in the paper by Rajeswar et al. and train it on the Google 1 Billion Word Dataset. We will train two separate models: one to generate sequences of characters...
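As a rough preview of the training side, here is a minimal sketch of a single WGAN-GP critic update on batches of soft one-hot sequences, following the gradient-penalty objective of Gulrajani et al. The critic model (assumed to be a 1D-CNN that scores a sequence with a single real value), the optimizer, and the penalty weight are assumptions for illustration, not the book's exact code.

```python
# Minimal sketch of one WGAN-GP critic update (assumed models and settings).
import tensorflow as tf

GP_WEIGHT = 10.0  # gradient-penalty coefficient used in Gulrajani et al.


def critic_train_step(critic, generator, critic_opt, real_onehot, noise):
    """One critic update: Wasserstein loss plus gradient penalty."""
    with tf.GradientTape() as tape:
        fake_onehot = generator(noise, training=True)
        real_score = critic(real_onehot, training=True)
        fake_score = critic(fake_onehot, training=True)

        # Interpolate between real and generated sequences for the penalty.
        eps = tf.random.uniform([tf.shape(real_onehot)[0], 1, 1], 0.0, 1.0)
        interp = eps * real_onehot + (1.0 - eps) * fake_onehot
        with tf.GradientTape() as gp_tape:
            gp_tape.watch(interp)
            interp_score = critic(interp, training=True)
        grads = gp_tape.gradient(interp_score, interp)
        grad_norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2]) + 1e-12)
        gradient_penalty = tf.reduce_mean(tf.square(grad_norm - 1.0))

        # The critic tries to score real sequences higher than generated ones.
        loss = (tf.reduce_mean(fake_score) - tf.reduce_mean(real_score)
                + GP_WEIGHT * gradient_penalty)

    critic_grads = tape.gradient(loss, critic.trainable_variables)
    critic_opt.apply_gradients(zip(critic_grads, critic.trainable_variables))
    return loss
```

In a full training loop, this critic step is typically run several times (the WGAN-GP paper uses five) for every generator update, after which the generator is updated to maximize the critic's score on its samples.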
