In Lucene, an analyzer is a combination of a tokenizer (splitter), a stemmer, and a stopword filter.
In Elasticsearch, an analyzer is a combination of:
1. Character filters: “tidy up” the string before it is tokenized, for example by stripping HTML tags.
2. Tokenizer: an analyzer MUST have exactly one tokenizer. It breaks the string up into individual terms or tokens.
3. Token filters: change, add or remove tokens. A stemmer is a token filter; it reduces words to their base form, for example “happy” and “happiness” both become “happi” (see the Snowball demo).
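To make the three stages concrete, here is a minimal, self-contained sketch of the analyzer pipeline in plain Python. This is not Elasticsearch code; the HTML-stripping regex, whitespace tokenizer, and toy suffix-stripping “stemmer” are all simplified stand-ins for the real character filters, tokenizers, and token filters.

```python
import re

def char_filter(text):
    # Character filter: tidy up the raw string before tokenizing,
    # e.g. remove HTML tags (crude regex stand-in for html_strip).
    return re.sub(r"<[^>]+>", " ", text)

def tokenize(text):
    # Tokenizer: break the string into individual terms.
    return text.split()

def stem(token):
    # Toy stemmer: maps "happy" and "happiness" to "happi",
    # mimicking the Snowball example above.
    if token.endswith("ness"):
        return token[:-4]
    if token.endswith("y"):
        return token[:-1] + "i"
    return token

def token_filters(tokens):
    # Token filters: change, add or remove tokens.
    # Here: lowercase, then stem.
    return [stem(tok.lower()) for tok in tokens]

def analyze(text):
    # Full pipeline: character filter -> tokenizer -> token filters.
    return token_filters(tokenize(char_filter(text)))

print(analyze("<p>Happy happiness</p>"))  # -> ['happi', 'happi']
```

The ordering is the important part: character filters see the raw string, the single tokenizer turns it into a token stream, and token filters only ever see tokens.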
Testing Lucene analyzers with Elasticsearch
“Here’s an awesome plugin on GitHub. It’s somewhat of an extension of the Analyze API. I found it on the official Elastic plugin list.
What’s great is that it shows tokens with all their attributes after every single step. This makes it easy to debug an analyzer configuration and see why we got certain tokens and where we lost the ones we wanted.”
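For reference, this is the kind of request one might send to Elasticsearch’s standard `_analyze` API to test an ad-hoc analyzer built from the pieces described above (an `html_strip` character filter, the `standard` tokenizer, and `lowercase` plus `snowball` token filters); the response lists the resulting tokens:

```json
POST /_analyze
{
  "char_filter": ["html_strip"],
  "tokenizer": "standard",
  "filter": ["lowercase", "snowball"],
  "text": "<p>Happiness</p>"
}
```

The plugin’s advantage over the plain Analyze API is that it shows the token stream after each individual step, not just the final output.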