Tokenize text using spaCy

Description:

     Tokenize text using spaCy. The results of tokenization are stored
     as a Python object. To obtain the token results in R, use
     get_tokens(). See http://spacy.io for details.
Usage:

     process_document(x, multithread, ...)
Arguments:

     x            input text

     multithread  logical; whether to use multithreaded processing. All
                  functionalities, including the tagging, named entity
                  recognition, and dependency analysis, are run; this
                  slows down processing

     ...          arguments passed to specific methods

Value:

     result marker object
Examples:

     spacy_initialize()
     # the result has to be "tag() is ready to run" before running the following
     txt <- c(text1 = "This is the first sentence.\nHere is the second sentence.",
              text2 = "This is the second document.")
     results <- spacy_parse(txt)
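The example above stops at spacy_parse(). The Description notes that the tokenization results are stored as a Python object and retrieved with get_tokens(); a minimal sketch of that retrieval, assuming the spacyr functions named on this page (the input string and the multithread value are illustrative):

```r
library(spacyr)
spacy_initialize()

# process_document() returns a result marker object; the token results
# stay on the Python side until requested
marker <- process_document("This is a sentence.", multithread = FALSE)

# get_tokens() pulls the token results from Python back into R
tokens <- get_tokens(marker)
```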