Tokenizer operations
Usage

tokenizer_set(conn, index, body, ...)
Arguments

conn    an Elasticsearch connection object, see connect()
index   (character) A character vector of index names
body    Query, either a list or JSON
...     Curl options passed on to crul::HttpClient
Author(s)

Scott Chamberlain myrmecocystus@gmail.com
References

https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-tokenizers.html
Examples

## Not run: 
# connection setup
(x <- connect())

# set tokenizer
## NGram tokenizer
body <- '{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "my_ngram_analyzer" : {
          "tokenizer" : "my_ngram_tokenizer"
        }
      },
      "tokenizer" : {
        "my_ngram_tokenizer" : {
          "type" : "nGram",
          "min_gram" : "2",
          "max_gram" : "3",
          "token_chars" : [ "letter", "digit" ]
        }
      }
    }
  }
}'
if (index_exists(x, "test1")) index_delete(x, "test1")
tokenizer_set(x, index = "test1", body = body)
index_analyze(x, text = "hello world", index = "test1",
  analyzer = "my_ngram_analyzer")

## End(Not run)
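Because `body` accepts either a list or JSON, the example above can also be sketched with the settings built as a nested R list rather than a JSON string. This is a hedged sketch, not a tested recipe: it assumes a live Elasticsearch cluster reachable via connect(), and the index name "test2" is purely illustrative.

```r
# Sketch: same NGram tokenizer, with `body` as an R list instead of JSON.
# Assumes a running Elasticsearch instance; "test2" is a hypothetical index.
library(elastic)
x <- connect()

body <- list(
  settings = list(
    analysis = list(
      analyzer = list(
        my_ngram_analyzer = list(tokenizer = "my_ngram_tokenizer")
      ),
      tokenizer = list(
        my_ngram_tokenizer = list(
          type = "nGram",
          min_gram = "2",
          max_gram = "3",
          token_chars = list("letter", "digit")
        )
      )
    )
  )
)

if (index_exists(x, "test2")) index_delete(x, "test2")
tokenizer_set(x, index = "test2", body = body)
```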