Split a column into tokens, flattening the table into one-token-per-row. This function supports non-standard evaluation through the tidyeval framework.

unnest_tokens(
  tbl,
  output,
  input,
  token = "words",
  format = c("text", "man", "latex", "html", "xml"),
  to_lower = TRUE,
  drop = TRUE,
  collapse = NULL,
  ...
)

Arguments

tbl

A data frame

output

Output column to be created as string or symbol.

input

Input column that gets split as string or symbol.

The output/input arguments are passed by expression and support quasiquotation; you can unquote strings and symbols.
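For example, a minimal sketch (hypothetical one-row data frame; assumes dplyr, tidytext, and rlang are loaded) of the equivalent ways these columns can be passed:

```r
library(dplyr)
library(tidytext)
library(rlang)

# A hypothetical one-column data frame
d <- tibble(txt = "A quick example")

# Bare symbols and strings are equivalent
d %>% unnest_tokens(word, txt)
d %>% unnest_tokens("word", "txt")

# Quasiquotation: capture column names as quosures, then unquote with !!
outcol <- quo(word)
incol <- quo(txt)
d %>% unnest_tokens(!!outcol, !!incol)
```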

token

Unit for tokenizing, or a custom tokenizing function. Built-in options are "words" (default), "characters", "character_shingles", "ngrams", "skip_ngrams", "sentences", "lines", "paragraphs", "regex", "tweets" (tokenization by word that preserves usernames, hashtags, and URLs), and "ptb" (Penn Treebank). If a function, it should take a character vector and return a list of character vectors of the same length.

format

Either "text", "man", "latex", "html", or "xml". When the format is "text", this function uses the tokenizers package. Otherwise, it uses the hunspell tokenizer and can tokenize only by "word".

to_lower

Whether to convert tokens to lowercase. If tokens include URLs (such as with token = "tweets"), such converted URLs may no longer be correct.

drop

Whether the original input column should be dropped. Ignored if the original input and new output columns have the same name.
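As a small sketch (hypothetical data; assumes dplyr and tidytext are loaded), drop = FALSE keeps the original column alongside the tokens:

```r
library(dplyr)
library(tidytext)

d <- tibble(txt = "keep the source text")

# With drop = FALSE, the txt column survives next to the new word column
d %>% unnest_tokens(word, txt, drop = FALSE)
```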

collapse

A character vector of variables to collapse text across, or NULL.

For tokens like n-grams or sentences, text can be collapsed across rows within variables specified by collapse before tokenization. As of tidytext 0.2.7, the default behavior for collapse = NULL changed to be more consistent: text is no longer collapsed when collapse = NULL.

Grouped data specifies variables to collapse across in the same way as the collapse argument, but you cannot use both the collapse argument and grouped data at the same time. Collapsing applies mostly to the token options "ngrams", "skip_ngrams", "sentences", "lines", "paragraphs", and "regex".
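A minimal sketch of this equivalence (hypothetical two-document data; assumes dplyr and tidytext are loaded): collapsing by a variable and grouping by it produce the same bigrams.

```r
library(dplyr)
library(tidytext)

d <- tibble(doc = c(1, 1, 2),
            txt = c("He went", "to the store.", "She stayed home."))

# Collapse rows within each doc before computing bigrams
d %>% unnest_tokens(bigram, txt, token = "ngrams", n = 2, collapse = "doc")

# Grouped data collapses the same way; don't combine this with `collapse`
d %>%
  group_by(doc) %>%
  unnest_tokens(bigram, txt, token = "ngrams", n = 2)
```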

...

Extra arguments passed on to tokenizers, such as strip_punct for "words" and "tweets", n and k for "ngrams" and "skip_ngrams", strip_url for "tweets", and pattern for "regex".
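For instance, a sketch of forwarding strip_punct to the "words" tokenizer (hypothetical data; assumes dplyr and tidytext are loaded):

```r
library(dplyr)
library(tidytext)

d <- tibble(txt = "Don't stop!")

# strip_punct = FALSE is passed through to tokenizers::tokenize_words(),
# so punctuation is kept as separate tokens
d %>% unnest_tokens(word, txt, strip_punct = FALSE)
```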

Details

If format is anything other than "text", this function uses the hunspell_parse tokenizer instead of the tokenizers package. That tokenizer does not yet support any unit other than words.

Examples

library(dplyr)
library(janeaustenr)

d <- tibble(txt = prideprejudice)
d
#> # A tibble: 13,030 x 1
#>    txt
#>    <chr>
#>  1 "PRIDE AND PREJUDICE"
#>  2 ""
#>  3 "By Jane Austen"
#>  4 ""
#>  5 ""
#>  6 ""
#>  7 "Chapter 1"
#>  8 ""
#>  9 ""
#> 10 "It is a truth universally acknowledged, that a single man in possession"
#> # … with 13,020 more rows
d %>% unnest_tokens(word, txt)
#> # A tibble: 122,204 x 1
#>    word
#>    <chr>
#>  1 pride
#>  2 and
#>  3 prejudice
#>  4 by
#>  5 jane
#>  6 austen
#>  7 chapter
#>  8 1
#>  9 it
#> 10 is
#> # … with 122,194 more rows
d %>% unnest_tokens(sentence, txt, token = "sentences")
#> # A tibble: 15,545 x 1
#>    sentence
#>    <chr>
#>  1 "pride and prejudice"
#>  2 "by jane austen"
#>  3 "chapter 1"
#>  4 "it is a truth universally acknowledged, that a single man in possession"
#>  5 "of a good fortune, must be in want of a wife."
#>  6 "however little known the feelings or views of such a man may be on his"
#>  7 "first entering a neighbourhood, this truth is so well fixed in the minds"
#>  8 "of the surrounding families, that he is considered the rightful property"
#>  9 "of some one or other of their daughters."
#> 10 "\"my dear mr."
#> # … with 15,535 more rows
d %>% unnest_tokens(ngram, txt, token = "ngrams", n = 2)
#> # A tibble: 114,045 x 1
#>    ngram
#>    <chr>
#>  1 pride and
#>  2 and prejudice
#>  3 NA
#>  4 by jane
#>  5 jane austen
#>  6 NA
#>  7 NA
#>  8 NA
#>  9 chapter 1
#> 10 NA
#> # … with 114,035 more rows
d %>% unnest_tokens(chapter, txt, token = "regex", pattern = "Chapter [\\d]")
#> # A tibble: 10,721 x 1
#>    chapter
#>    <chr>
#>  1 "pride and prejudice"
#>  2 "by jane austen"
#>  3 "chapter 1"
#>  4 "it is a truth universally acknowledged, that a single man in possession"
#>  5 "of a good fortune, must be in want of a wife."
#>  6 "however little known the feelings or views of such a man may be on his"
#>  7 "first entering a neighbourhood, this truth is so well fixed in the minds"
#>  8 "of the surrounding families, that he is considered the rightful property"
#>  9 "of some one or other of their daughters."
#> 10 "\"my dear mr. bennet,\" said his lady to him one day, \"have you heard that"
#> # … with 10,711 more rows
d %>% unnest_tokens(shingle, txt, token = "character_shingles", n = 4)
#> # A tibble: 506,732 x 1
#>    shingle
#>    <chr>
#>  1 prid
#>  2 ride
#>  3 idea
#>  4 dean
#>  5 eand
#>  6 andp
#>  7 ndpr
#>  8 dpre
#>  9 prej
#> 10 reju
#> # … with 506,722 more rows
# custom function
d %>% unnest_tokens(word, txt, token = stringr::str_split, pattern = " ")
#> # A tibble: 124,032 x 1
#>    word
#>    <chr>
#>  1 "pride"
#>  2 "and"
#>  3 "prejudice"
#>  4 ""
#>  5 "by"
#>  6 "jane"
#>  7 "austen"
#>  8 ""
#>  9 ""
#> 10 ""
#> # … with 124,022 more rows
# tokenize HTML
h <- tibble(row = 1:2,
            text = c("<h1>Text <b>is</b>", "<a href='example.com'>here</a>"))

h %>% unnest_tokens(word, text, format = "html")
#> # A tibble: 3 x 2
#>     row word
#>   <int> <chr>
#> 1     1 text
#> 2     1 is
#> 3     2 here