tokenize_files
Tokenize text 'files' in parallel using 'n_workers'
Usage

tokenize_files(
  files,
  path,
  output_dir,
  output_names = NULL,
  n_workers = 6,
  rules = NULL,
  tok = NULL,
  encoding = "utf8",
  skip_if_exists = FALSE
)
Arguments

files | text files to tokenize
path | path to the files
output_dir | directory in which to write the tokenized output
output_names | names for the output files
n_workers | number of parallel workers
rules | custom tokenization rules
tok | tokenizer
encoding | file encoding
skip_if_exists | skip files whose tokenized output already exists
Value

None
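A minimal usage sketch, assuming the fastai R package is attached and a directory of plain-text files exists; the paths ("texts", "texts_tok") and worker count are illustrative, not part of this reference:

```r
library(fastai)

# Collect the .txt files to tokenize (paths here are hypothetical).
files <- list.files("texts", pattern = "\\.txt$", full.names = TRUE)

# Tokenize them in parallel with 4 workers, writing the results
# to 'texts_tok'; already-tokenized files are skipped on re-runs.
tokenize_files(
  files = files,
  path = "texts",
  output_dir = "texts_tok",
  n_workers = 4,
  skip_if_exists = TRUE
)
```

Leaving `tok` and `rules` as NULL falls back to the package defaults; pass a tokenizer object via `tok` only when the default does not fit your corpus.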