SentencePieceTokenizer


Description

SentencePiece subword tokenizer for the language specified by 'lang'.

Usage

SentencePieceTokenizer(
  lang = "en",
  special_toks = NULL,
  sp_model = NULL,
  vocab_sz = NULL,
  max_vocab_sz = 30000,
  model_type = "unigram",
  char_coverage = NULL,
  cache_dir = "tmp"
)

Arguments

lang

The language of the texts to tokenize (default "en").

special_toks

Special tokens to include in the vocabulary; if NULL, fastai's standard special tokens are used.

sp_model

Path to a pretrained SentencePiece model file; if NULL, a new model is trained on the corpus.

vocab_sz

Vocabulary size to train with; if NULL, a size is inferred from the corpus, capped at max_vocab_sz.

max_vocab_sz

Maximum vocabulary size used when vocab_sz is NULL (default 30000).

model_type

SentencePiece model type: "unigram" (default), "bpe", "char", or "word".

char_coverage

Fraction of characters covered by the model; if NULL, a language-dependent default is used.

cache_dir

Directory in which the trained model and vocabulary are cached (default "tmp").

Value

A tokenizer object that can be passed to fastai's text-processing functions.
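
Examples

A minimal sketch of creating the tokenizer, assuming the 'fastai' package is installed with a working Python backend; the file path "tmp/spm.model" is only illustrative and stands for wherever a previously trained model was cached.

library(fastai)

# Configure a tokenizer that trains a SentencePiece unigram model the
# first time it is set up on a corpus, capping the vocabulary at 10,000
# tokens and caching the model files under "tmp".
tok <- SentencePieceTokenizer(lang = "en", model_type = "unigram",
                              vocab_sz = 10000, cache_dir = "tmp")

# Reuse an already trained model instead of training a new one by
# pointing sp_model at the cached model file (illustrative path).
tok_pretrained <- SentencePieceTokenizer(sp_model = "tmp/spm.model")

The returned tokenizer is normally handed on to fastai's text-processing helpers (for example, as the tokenizer of a text DataLoaders pipeline) rather than called directly; see the package's text examples for the exact argument names.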


fastai: Interface to 'fastai', version 2.0.7. Apache License 2.0. Author: Turgut Abdullayev.
