Python Natural Language Processing Cookbook

By: Zhenya Antić
4.4 (18)

Overview of this book

Python is the most widely used language for natural language processing (NLP) thanks to its extensive tools and libraries for analyzing text and extracting computer-usable data. This book will take you through a range of techniques for text processing, from basics such as part-of-speech tagging to complex topics such as topic modeling, text classification, and visualization. Starting with an overview of NLP, the book presents recipes for dividing text into sentences, stemming and lemmatization, removing stopwords, and part-of-speech tagging to help you prepare your data. You’ll then learn ways of extracting and representing grammatical information, such as dependency parsing and anaphora resolution. Next, you’ll discover different ways of representing semantics using bag-of-words, TF-IDF, word embeddings, and BERT, and develop skills for text classification using keywords, SVMs, LSTMs, and other techniques. As you advance, you’ll also see how to extract information from text, implement unsupervised and supervised techniques for topic modeling, and perform topic modeling of short texts, such as tweets. Additionally, the book shows you how to develop chatbots using NLTK and Rasa, and how to visualize text data. By the end of this NLP book, you’ll have developed the skills to use a powerful set of tools for text processing.
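To give a concrete (if simplified) flavour of the preprocessing recipes mentioned above, the sketch below uses NLTK for sentence splitting, stopword removal, lemmatization, and part-of-speech tagging. The sample sentence and the resources downloaded are illustrative assumptions, not examples taken from the book.

# A minimal NLTK preprocessing sketch: sentence splitting, tokenization,
# POS tagging, stopword removal, and lemmatization. The sample text is made up.
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

# Download the required NLTK resources (only needed once).
nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")
nltk.download("averaged_perceptron_tagger")

text = "The cats were sitting on the mats. They looked very comfortable."

sentences = nltk.sent_tokenize(text)        # split the text into sentences
tokens = nltk.word_tokenize(sentences[0])   # tokenize the first sentence
tagged = nltk.pos_tag(tokens)               # part-of-speech tag for each token

stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()
content_words = [
    lemmatizer.lemmatize(token.lower())
    for token in tokens
    if token.isalpha() and token.lower() not in stop_words
]

print(tagged)          # e.g. [('The', 'DT'), ('cats', 'NNS'), ...]
print(content_words)   # e.g. ['cat', 'sitting', 'mat']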
Table of Contents (10 chapters)

Chapter 6: Topic Modeling

In this chapter, we will cover topic modeling, or the unsupervised discovery of topics present in a corpus of text. There are many different algorithms available to do this, and we will cover four of them: Latent Dirichlet Allocation (LDA) using two different packages, non-negative matrix factorization (NMF), K-means with Bidirectional Encoder Representations from Transformers (BERT) embeddings, and Gibbs Sampling Dirichlet Multinomial Mixture (GSDMM) for topic modeling of short texts, such as sentences or tweets.

The recipe list is as follows:

  • LDA topic modeling with sklearn
  • LDA topic modeling with gensim
  • NMF topic modeling
  • K-means topic modeling with BERT
  • Topic modeling of short texts
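
As a taste of the first recipe, here is a minimal sketch of LDA topic modeling with scikit-learn. The toy corpus and the choice of two topics are illustrative assumptions, not the book's data or settings.

# A minimal LDA topic modeling sketch with scikit-learn on a made-up toy corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "the cat sat on the mat next to the dog",
    "dogs and cats are popular household pets",
    "the stock market fell sharply after the earnings report",
    "investors are worried about rising interest rates",
]

# Turn the documents into a bag-of-words document-term matrix.
vectorizer = CountVectorizer(stop_words="english")
doc_term_matrix = vectorizer.fit_transform(documents)

# Fit an LDA model with two topics (a guess appropriate for this toy corpus).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term_matrix)

# Print the top five words for each discovered topic.
# (On scikit-learn < 1.0, use vectorizer.get_feature_names() instead.)
words = vectorizer.get_feature_names_out()
for topic_idx, topic in enumerate(lda.components_):
    top_words = [words[i] for i in topic.argsort()[:-6:-1]]
    print(f"Topic {topic_idx}: {', '.join(top_words)}")

The other approaches listed above (gensim LDA, NMF, K-means over BERT embeddings, GSDMM) swap in different representations and models, but the overall flow is broadly similar: build a numeric representation of the documents, fit the model, and inspect the resulting topics or clusters.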