Apache Solr for Indexing Data

By: Handiekar, Johri
Overview of this book

Apache Solr is a widely used, open source enterprise search server that delivers powerful indexing and searching features. These features help fetch relevant information from various sources and documentation. Solr also combines with other open source tools such as Apache Tika and Apache Nutch to provide more powerful features. This fast-paced guide starts by helping you set up Solr and get acquainted with its basic building blocks, to give you a better understanding of Solr indexing. You’ll quickly move on to indexing text and boosting the indexing time. Next, you’ll focus on basic indexing techniques, various index handlers designed to modify documents, and indexing a structured data source through Data Import Handler. Moving on, you will learn techniques to perform real-time indexing and atomic updates, as well as more advanced indexing techniques such as de-duplication. Later on, we’ll help you set up a cluster of Solr servers that combine fault tolerance and high availability. You will also gain insights into working scenarios of different aspects of Solr and how to use Solr with e-commerce data. By the end of the book, you will be competent and confident working with indexing and will have a good knowledge base to efficiently program elements.
Table of Contents (13 chapters)
Introducing analyzers

To enable effective and efficient search, Solr splits text into tokens both at indexing time and at query time. It does this with the help of three main components: analyzers, tokenizers, and filters. Analyzers are used during both indexing and searching: an analyzer examines the text of a field and generates a token stream with the help of a tokenizer. Filters then examine the stream of tokens and perform one of three jobs on each token: keeping it, discarding it, or creating new tokens from it. Tokenizers and filters may be combined into pipelines, or chains, in which the output of one is the input of the next. Such a sequence of tokenizers and filters is called an analyzer, and the analyzer's output is used to match search queries or build the index. Let's see how we can use and implement these components in Solr.
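As a minimal sketch, an analyzer chain like the one described above is typically declared as a `fieldType` in the Solr schema. The field type name `text_general` here is illustrative; the tokenizer and filter factory classes shown ship with Solr:

```xml
<!-- A field type whose analyzer chain tokenizes on standard word
     boundaries, drops English stop words, and lowercases each token.
     Separate <analyzer> elements let indexing and querying differ,
     though here the same chain is used for both. -->
<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

With this chain, an input such as "The Quick Brown Fox" would be tokenized into `Quick`, `Brown`, `Fox` and then lowercased to `quick`, `brown`, `fox`, assuming "the" appears in `stopwords.txt`.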

Analyzers are core components that preprocess input text at indexing and search...
