Hands-On Deep Learning Architectures with Python

By: Yuxi (Hayden) Liu, Saransh Mehta

Overview of this book

Deep learning architectures are composed of multilevel nonlinear operations that represent high-level abstractions; this allows you to learn useful feature representations from the data. This book will help you learn and implement deep learning architectures to resolve various deep learning research problems. Hands-On Deep Learning Architectures with Python explains the essential learning algorithms used for deep and shallow architectures. Packed with practical implementations and ideas to help you build efficient artificial intelligence (AI) systems, this book will help you learn how neural networks play a major role in building deep architectures. You will understand various deep learning architectures (such as AlexNet, VGG Net, and GoogLeNet) with easy-to-follow code and diagrams. In addition, the book will guide you in building and training various deep architectures, such as Boltzmann machines, autoencoders, convolutional neural networks (CNNs), recurrent neural networks (RNNs), architectures for natural language processing (NLP), generative adversarial networks (GANs), and more, all with practical implementations. By the end of this book, you will be able to construct deep models using popular frameworks and datasets, applying the required design patterns for each architecture. You will be ready to explore the potential of deep architectures in today's world.
Table of Contents (15 chapters)

Section 1: The Elements of Deep Learning
Section 2: Convolutional Neural Networks
Section 3: Sequence Modeling
Section 4: Generative Adversarial Networks (GANs)
Section 5: The Future of Deep Learning and Advanced Artificial Intelligence

Evolution path to CNNs

In the 1960s, it was discovered that the visual cortex in animals doesn't process images the way deep feedforward networks do. Rather, a single neuron in the visual cortex is connected to a small region of the visual field (not to a single pixel), which is called a receptive field. Any activity in the receptive field triggers the corresponding neuron.

Inspired by the receptive field in the visual cortex, scientists came up with the idea of local connectivity to reduce the number of artificial neurons required to process images. This modified version of the deep feedforward network was termed a CNN (throughout this book, CNN refers to a convolutional neural network). In 1989, Yann LeCun developed a trainable CNN that was able to recognize handwritten digits. In 1998, Yann LeCun's LeNet-5 model again successfully used seven stacked layers of convolution (like layers...
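To make the local-connectivity idea concrete, here is a minimal sketch of a LeNet-style CNN, assuming tf.keras, a 28 x 28 grayscale input, and illustrative layer sizes; it is not the book's exact implementation. Each convolutional unit is connected only to a small receptive field of the previous layer, which is what keeps the parameter count far below that of a fully connected network.

# Illustrative LeNet-style CNN sketch (layer sizes and tf.keras usage are assumptions).
from tensorflow.keras import layers, models

def build_lenet_style_cnn(input_shape=(28, 28, 1), num_classes=10):
    # Stacked convolution and pooling layers: each unit looks at a small
    # receptive field of its input rather than at every pixel.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(6, kernel_size=5, padding='same', activation='tanh'),  # 5x5 local receptive fields
        layers.AveragePooling2D(pool_size=2),                                 # subsampling
        layers.Conv2D(16, kernel_size=5, activation='tanh'),
        layers.AveragePooling2D(pool_size=2),
        layers.Flatten(),
        layers.Dense(120, activation='tanh'),
        layers.Dense(84, activation='tanh'),
        layers.Dense(num_classes, activation='softmax'),
    ])
    return model

model = build_lenet_style_cnn()
model.summary()  # prints the stack of layers and their parameter counts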
