An analyzer's job is to analyze text. It enforces the policies configured on IndexWriterConfig for how index terms are extracted and tokenized from raw text input. The output from an analyzer is a set of indexable tokens ready to be processed by the indexer. This step is necessary to ensure consistency between the data store and the search functionality. Also, note that Lucene only accepts plain text. Whatever your data type might be, be it XML, HTML, or PDF, you need to parse these documents into text before handing them over to Lucene.
Imagine you have this piece of text: Lucene is an information retrieval library written in Java. An analyzer will tokenize this text, manipulate the data to conform to a certain data formatting policy (for example, turning it to lowercase, removing stop words, and so on), and eventually output a set of tokens. A token is the basic element of Lucene's indexing process. Let's take a look at the tokens generated by an analyzer for the above text:
{Lucene} {is} {an} {information} {retrieval} {library} {written} {in} {Java}
Each individual unit enclosed in braces is referred to as a token. In this example, we are leveraging WhitespaceAnalyzer to analyze the text. This specific analyzer uses whitespace as a delimiter to separate the text into individual words. Note that the words are left unaltered and stop words (is, an, in) are included; essentially, every single word is extracted as a token.
The lucene-analyzers-common module contains all the major components we discussed in this section. The most commonly used analyzers can be found in the org.apache.lucene.analysis.core package. For language-specific analysis, you can refer to the org.apache.lucene.analysis.{language code} packages.
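For instance, here is a minimal sketch of instantiating a language-specific analyzer from the org.apache.lucene.analysis.en package (assuming Lucene 4.10 or later, where the Version argument became optional):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.en.EnglishAnalyzer;

// EnglishAnalyzer adds English-specific processing such as stemming
// and stop word removal on top of the standard tokenization.
Analyzer english = new EnglishAnalyzer();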
Many analyzers in lucene-analyzers-common require little or no configuration, so instantiating them is almost effortless. For our current exercise, we will instantiate the WhitespaceAnalyzer by simply calling its constructor:
Analyzer analyzer = new WhitespaceAnalyzer();
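To inspect the tokens an analyzer emits, you can consume its TokenStream directly. The following is a minimal sketch; the field name "content" and the class name AnalyzerDemo are placeholders of our own choosing:

import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class AnalyzerDemo {
    public static void main(String[] args) throws IOException {
        Analyzer analyzer = new WhitespaceAnalyzer();
        TokenStream stream = analyzer.tokenStream("content",
            "Lucene is an information retrieval library written in Java");
        // CharTermAttribute exposes the text of the current token
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        stream.reset();                     // required before incrementToken()
        while (stream.incrementToken()) {
            System.out.print("{" + term + "} ");
        }
        stream.end();
        stream.close();
        analyzer.close();
    }
}

Running this prints the same whitespace-delimited tokens shown above.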
An analyzer is a wrapper of three major components: the character filter, the tokenizer, and the token filter.
The analysis phase includes pre- and post-tokenization functions, and this is where the character filter and token filter come into play. The character filter preprocesses text before tokenization to clean up the data, for example by stripping out HTML markup, removing user-defined patterns, and converting special characters or specific text. The tokenizer, as described earlier, then splits the text into tokens. The token filter performs post-tokenization filtering; its operations involve various kinds of manipulation, such as stemming, stop word filtering, text normalization, and synonym expansion. The output of this analysis process is a TokenStream, which the indexing process consumes to produce an index.
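To illustrate how these components chain together, here is a minimal sketch of a custom analyzer written against the Lucene 4.10 API (the class name LowercaseStopAnalyzer is our own placeholder; a character filter, if needed, would be attached by overriding initReader()):

import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.StopAnalyzer;
import org.apache.lucene.analysis.core.StopFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public class LowercaseStopAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
        // Tokenizer: splits the raw text into tokens
        Tokenizer tokenizer = new StandardTokenizer(reader);
        // Token filters: post-tokenization manipulation
        TokenStream filters = new LowerCaseFilter(tokenizer);
        filters = new StopFilter(filters, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
        return new TokenStreamComponents(tokenizer, filters);
    }
}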
Lucene provides a number of standard analyzer implementations that should fit most search applications, including several we haven't talked about yet, such as StandardAnalyzer, SimpleAnalyzer, and StopAnalyzer.
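For example, running the earlier sentence through StandardAnalyzer instead of WhitespaceAnalyzer would, assuming its default English stop word set, lowercase the tokens and drop the stop words:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

Analyzer analyzer = new StandardAnalyzer();
// The earlier sentence now tokenizes to:
// {lucene} {information} {retrieval} {library} {written} {java}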