Deep Learning with PyTorch Quick Start Guide

By: David Julian

Overview of this book

PyTorch is extremely powerful and yet easy to learn. It provides advanced features such as support for multiprocessor, distributed, and parallel computation. This book is an excellent entry point for those wanting to explore deep learning with PyTorch and harness its power. It introduces the PyTorch deep learning library and teaches you how to train deep learning models without any hassle. We will set up the deep learning environment using PyTorch, and then train and deploy different types of deep learning models, such as CNNs, RNNs, and autoencoders. You will learn how to optimize models by tuning hyperparameters and how to use PyTorch in multiprocessor and distributed environments. We will discuss long short-term memory (LSTM) networks and build a language model to predict text. By the end of this book, you will be familiar with PyTorch's capabilities and be able to utilize the library to train your neural networks with relative ease.

autograd

As we saw in the last chapter, much of the computational work for ANNs involves calculating derivatives to find the gradient of the cost function. PyTorch uses the autograd package to perform automatic differentiation of operations on PyTorch tensors. To see how this works, let's look at an example:
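The code listing itself did not survive extraction; the following is a minimal sketch consistent with the description that follows (the specific values and the (a + 2) ** 2 sequence of operations are assumptions, not necessarily the book's exact listing):

    import torch

    # Create a 2 x 3 tensor; requires_grad=True tells autograd to track
    # operations on it, and torch.float is a dtype autograd can differentiate.
    a = torch.tensor([[1., 2., 3.],
                      [4., 5., 6.]],
                     dtype=torch.float, requires_grad=True)

    # An arbitrary sequence of differentiable operations (chosen here
    # purely for illustration).
    b = (a + 2) ** 2

    # Reduce to a single scalar, which is what autograd normally needs
    # before backward() can be called without arguments.
    out = b.mean()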

In the preceding code, we create a 2 x 3 torch tensor and, importantly, set the requires_grad attribute to True. This enables the calculation of gradients across subsequent operations. Notice also that we set the dtype to torch.float, since this is the data type that PyTorch uses for automatic differentiation. We perform a sequence of operations and then take the mean of the result, which returns a tensor containing a single scalar. A single scalar output is normally what autograd requires to calculate the gradient of the preceding operations. This could be any sequence of operations;...
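Calling backward() on the scalar output then propagates gradients back to the leaf tensor. Continuing the assumed example above:

    # Backpropagate from the scalar output; this populates a.grad.
    out.backward()

    # For out = mean((a + 2) ** 2), each gradient entry is (a + 2) / 3.
    print(a.grad)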
