Lucene / Elasticsearch Analyzers

In Lucene, an analyzer is a combination of a tokenizer (splitter), a stemmer, and a stopword filter.

In Elasticsearch, an analyzer is a combination of:

1. Character filter: “tidies up” a string before it is tokenized. Example: removing HTML tags.
2. Tokenizer: an analyzer MUST have exactly one tokenizer. It breaks the string up into individual terms or tokens.
3. Token filter: changes, adds, or removes tokens. A stemmer is a token filter; it reduces words to a base form, for example “happy” and “happiness” both become “happi” (Snowball demo).
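The three-stage pipeline above can be sketched in plain Python. This is a toy illustration, not Elasticsearch code: the regex-based HTML stripping, whitespace tokenizing, and the crude suffix rule (tuned only to reproduce the “happy”/“happiness” → “happi” example) stand in for real character filters, tokenizers, and Snowball stemming.

```python
import re

def char_filter(text):
    # Character filter: strip HTML tags before tokenizing
    # (analogous to Elasticsearch's html_strip char filter).
    return re.sub(r"<[^>]+>", " ", text)

def tokenizer(text):
    # Tokenizer: break the string into individual terms.
    return [t for t in re.split(r"[^A-Za-z]+", text) if t]

def token_filters(tokens):
    # Token filters: lowercase, drop stopwords, then stem.
    stopwords = {"a", "an", "the", "is"}
    out = []
    for tok in tokens:
        tok = tok.lower()
        if tok in stopwords:
            continue
        # Crude stemming rule just for the "happy"/"happiness"
        # example; real analyzers use Snowball/Porter stemmers.
        if tok.endswith("iness"):
            tok = tok[:-5] + "i"
        elif tok.endswith("y"):
            tok = tok[:-1] + "i"
        out.append(tok)
    return out

def analyze(text):
    # An analyzer chains: char filter -> tokenizer -> token filters.
    return token_filters(tokenizer(char_filter(text)))

print(analyze("<p>The happiness is happy</p>"))  # ['happi', 'happi']
```

Note how both surface forms collapse to the same token, which is exactly what lets a query for “happy” match a document containing “happiness”.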




All About Analyzers:

Testing Lucene Analyzers with elasticsearch
“Here’s an awesome plugin on GitHub. It’s somewhat of an extension of the Analyze API. I found it on the official Elastic plugin list.

What’s great is that it shows tokens with all their attributes after every single step. With this it is easy to debug an analyzer configuration and see why we got such tokens, and where we lost the ones we wanted.”
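Elasticsearch’s built-in `_analyze` API can give similar per-step visibility via its `explain` flag, which returns each token with its attributes after the tokenizer and after every token filter. A sketch (the tokenizer and filters shown are illustrative choices):

```json
GET /_analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase", "snowball"],
  "text": "The happiness is happy",
  "explain": true
}
```

The response breaks the output down per stage, which makes it easy to see exactly where a wanted token was altered or dropped.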

