Creating an analyzer
An analyzer's job is to analyze text. It enforces the configured policies (set via IndexWriterConfig) on how index terms are extracted and tokenized from raw text input. The output from an analyzer is a set of indexable tokens ready to be processed by the indexer. This step is necessary to ensure consistency in both the data store and the search functionality. Also, note that Lucene accepts only plain text. Whatever your data type might be, XML, HTML, or PDF, you need to parse these documents into text before handing them to Lucene.
Imagine you have this piece of text: Lucene is an information retrieval library written in Java. An analyzer will tokenize this text, manipulate the data to conform to a certain formatting policy (for example, lowercasing, stop word removal, and so on), and eventually output a set of tokens. A token is the basic element in Lucene's indexing process. Let's take a look at the tokens generated by an analyzer for the above text:
{Lucene} {is} {an} {information} {retrieval} {library} {written} {in} {Java}
Each individual unit enclosed in braces is referred to as a token. In this example, we are leveraging WhitespaceAnalyzer to analyze the text. This specific analyzer uses whitespace as a delimiter to separate the text into individual words. Note that the separated words are unaltered and that stop words (is, an, in) are included; essentially, every single word is extracted as a token.
Getting ready
The lucene-analyzers-common module contains all the major components we discussed in this section. The most commonly used analyzers can be found in the org.apache.lucene.analysis.core package. For language-specific analysis, you can refer to the org.apache.lucene.analysis.{language code} packages.
How to do it...
Many analyzers in lucene-analyzers-common require little or no configuration, so instantiating them is almost effortless. For our current exercise, we will instantiate WhitespaceAnalyzer simply by using the new keyword:
Analyzer analyzer = new WhitespaceAnalyzer();
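To see the analyzer in action, we can feed it our sample sentence and print every token it emits. The following is a minimal sketch assuming a recent Lucene release in which WhitespaceAnalyzer takes a no-argument constructor (older 4.x releases required a Version argument); the field name content is arbitrary here, since we only consume the token stream:

import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class WhitespaceAnalyzerDemo {
    public static void main(String[] args) throws IOException {
        Analyzer analyzer = new WhitespaceAnalyzer();
        try (TokenStream stream = analyzer.tokenStream("content",
                "Lucene is an information retrieval library written in Java")) {
            // CharTermAttribute exposes the text of the current token
            CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
            stream.reset();                      // mandatory before consuming
            while (stream.incrementToken()) {    // advance to the next token
                System.out.print("{" + term + "} ");
            }
            stream.end();                        // finalize end-of-stream state
        }
    }
}

Running this prints the token list shown earlier, one brace-enclosed token per word.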
How it works…
An analyzer is a wrapper around three major components:
Character filter
Tokenizer
Token filter
The analysis phase includes pre- and post-tokenization functions, and this is where the character filter and token filter come into play. The character filter preprocesses text before tokenization, cleaning up the data by, for example, stripping out HTML markup, removing user-defined patterns, or converting special characters or specific text. The token filter performs post-tokenization filtering; stemming, stop word filtering, text normalization, and synonym expansion are all examples of token filters. As described earlier, the tokenizer splits the text into tokens. The output of this analysis chain is a TokenStream, which the indexing process consumes to produce an index.
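To make the wiring of these three components concrete, here is a hedged sketch of a custom analyzer: a character filter strips HTML markup, StandardTokenizer splits the text, and two token filters lowercase the tokens and remove stop words. It assumes a recent Lucene release (in the 4.x line, createComponents also received a Reader, and LowerCaseFilter and StopFilter lived in different packages); the class name HtmlAwareAnalyzer is our own invention for illustration:

import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.charfilter.HTMLStripCharFilter;
import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public class HtmlAwareAnalyzer extends Analyzer {
    @Override
    protected Reader initReader(String fieldName, Reader reader) {
        // Character filter: strip HTML markup before tokenization
        return new HTMLStripCharFilter(reader);
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        // Tokenizer: grammar-based splitting for most European languages
        Tokenizer source = new StandardTokenizer();
        // Token filters: lowercase, then remove English stop words
        TokenStream result = new LowerCaseFilter(source);
        result = new StopFilter(result, EnglishAnalyzer.ENGLISH_STOP_WORDS_SET);
        return new TokenStreamComponents(source, result);
    }
}

The order matters: the character filter runs before the tokenizer, and each token filter consumes the output of the stage before it.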
Lucene provides a number of standard analyzer implementations that should fit most search applications. Here are some additional analyzers that we haven't talked about yet:
StopAnalyzer: This is built with a LowerCaseTokenizer and a StopFilter. As the names suggest, this analyzer lowercases text, splits it at non-letter characters, and removes stop words.
SimpleAnalyzer: This is built with a LowerCaseTokenizer, so it simply splits text at non-letter characters and lowercases the tokens.
StandardAnalyzer: This is slightly more complex than SimpleAnalyzer. It consists of StandardTokenizer, StandardFilter, LowerCaseFilter, and StopFilter. StandardTokenizer uses a grammar-based tokenization technique that is applicable to most European languages. StandardFilter normalizes the tokens extracted by StandardTokenizer. Then, we have the familiar LowerCaseFilter and StopFilter.
SnowballAnalyzer: This is the most feature-rich of the bunch. It's made up of StandardTokenizer with StandardFilter, LowerCaseFilter, StopFilter, and SnowballFilter. SnowballFilter stems words, so this analyzer is essentially StandardAnalyzer plus stemming. In simple terms, stemming is a technique for reducing a word to its stem or root form. By reducing words, we can easily find matches for words with the same meaning but in different forms, such as plural and singular forms. A short stemming sketch follows below.
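The SnowballAnalyzer class itself was deprecated and later removed from Lucene, but an equivalent chain is easy to assemble by hand with SnowballFilter. The sketch below is an assumption-laden example, not the library's canonical replacement: it builds an anonymous analyzer with standard tokenization, lowercasing, and English Snowball stemming, then prints the tokens using the same loop as before. Note how the singular and plural forms collapse to the same stem:

import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.snowball.SnowballFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class StemmingDemo {
    public static void main(String[] args) throws IOException {
        // Hand-built equivalent of the old SnowballAnalyzer:
        // standard tokenization, lowercasing, then Snowball stemming.
        Analyzer analyzer = new Analyzer() {
            @Override
            protected TokenStreamComponents createComponents(String fieldName) {
                Tokenizer source = new StandardTokenizer();
                TokenStream result = new LowerCaseFilter(source);
                result = new SnowballFilter(result, "English");
                return new TokenStreamComponents(source, result);
            }
        };
        try (TokenStream stream = analyzer.tokenStream("content",
                "library libraries")) {
            CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
            stream.reset();
            while (stream.incrementToken()) {
                System.out.print("{" + term + "} ");  // both stem to {librari}
            }
            stream.end();
        }
    }
}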