Deep Learning with PyTorch Lightning

By: Kunal Sawarkar, Dheeraj Arremsetty
Overview of this book

Building and implementing deep learning (DL) models is becoming a key skill for those who want to be at the forefront of progress. But with so much information and so many complex study materials out there, getting started with DL can feel overwhelming. Written by an AI thought leader, Deep Learning with PyTorch Lightning helps researchers build their first DL models quickly and easily without getting stuck on the complexities. With its help, you'll be able to maximize productivity for DL projects while ensuring full flexibility, from model formulation to implementation. Throughout this book, you'll learn how to configure PyTorch Lightning on a cloud platform, understand its architectural components, and explore how they are configured to build various industry solutions. You'll build a neural network architecture, deploy an application from scratch, and see how you can expand it based on your specific needs, beyond what the framework provides. In the later chapters, you'll also learn how to build and train a variety of models with PyTorch Lightning, including Convolutional Neural Networks (CNNs), Natural Language Processing (NLP) models, time series models, self-supervised and semi-supervised learning, and Generative Adversarial Networks (GANs). By the end of this book, you'll be able to build and deploy DL models with confidence.
Table of Contents (15 chapters)

  • Section 1: Kickstarting with PyTorch Lightning (begins at Chapter 1)
  • Section 2: Solving using PyTorch Lightning (begins at Chapter 6)
  • Section 3: Advanced Topics (begins at Chapter 11)

Text classification using BERT transformers

BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based machine learning technique for Natural Language Processing (NLP) developed by Google. BERT was created and published in 2018 by Jacob Devlin and his colleagues at Google. Before BERT, sequence models such as Recurrent Neural Networks (RNNs), often combined with semi-supervised training, were commonly used for language tasks. BERT was one of the first language models to be pre-trained on unlabeled text in an unsupervised (more precisely, self-supervised) fashion, and it achieved state-of-the-art performance on NLP tasks. The large BERT model consists of 24 encoder layers and 16 bi-directional attention heads. It was trained on the BookCorpus dataset and English Wikipedia entries, about 3,000,000,000 words in total, and was later extended to over 100 languages. Using pre-trained BERT models, we can perform several tasks on text, such as classification, information extraction, question answering, summarization, translation, and text generation.

Figure 3.7 – BERT architecture diagram (Image credit...
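To make the classification use case concrete, the following is a minimal sketch (not the book's exact code) of how a pre-trained BERT model can be fine-tuned for text classification with PyTorch Lightning. It assumes the Hugging Face transformers and pytorch_lightning packages are installed; the model name, label count, and learning rate are illustrative choices, not values taken from the book.

# Minimal sketch: fine-tuning pre-trained BERT for binary text
# classification with PyTorch Lightning. Hyperparameters are illustrative.
import torch
import pytorch_lightning as pl
from transformers import BertTokenizer, BertForSequenceClassification

class BertTextClassifier(pl.LightningModule):
    def __init__(self, model_name="bert-base-uncased", num_labels=2, lr=2e-5):
        super().__init__()
        # Load pre-trained BERT weights with a fresh classification head on top
        self.model = BertForSequenceClassification.from_pretrained(
            model_name, num_labels=num_labels
        )
        self.lr = lr

    def forward(self, input_ids, attention_mask, labels=None):
        # When labels are provided, the model also returns a cross-entropy loss
        return self.model(input_ids=input_ids,
                          attention_mask=attention_mask,
                          labels=labels)

    def training_step(self, batch, batch_idx):
        outputs = self(batch["input_ids"], batch["attention_mask"], batch["labels"])
        self.log("train_loss", outputs.loss)
        return outputs.loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)

# Usage: tokenize raw text into input IDs and attention masks, then either
# run a forward pass directly or train with pl.Trainer and a DataLoader.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["a great movie", "a dull movie"],
                  padding=True, truncation=True, return_tensors="pt")
batch["labels"] = torch.tensor([1, 0])

model = BertTextClassifier()
outputs = model(batch["input_ids"], batch["attention_mask"], batch["labels"])
print(outputs.loss)

Because BERT is pre-trained on billions of words, only the small classification head is learned from scratch here; the encoder weights are merely fine-tuned, which is why strong accuracy is achievable with relatively little labeled data.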
