Calculate and bind posterior log odds ratios, assuming a multinomial model with a Dirichlet prior. The Dirichlet prior parameters are set using an empirical Bayes approach by default, but an uninformative prior is also available. Assumes that data is in a tidy format, and adds the weighted log odds ratio as a column. Supports non-standard evaluation through the tidyeval framework.
Arguments
- tbl
A tidy dataset with one row per feature and set.
- set
Column of sets between which to compare features, such as documents for text data.
- feature
Column of features for identifying differences, such as words or bigrams with text data.
- n
Column containing feature-set counts.
- uninformative
Whether or not to use an uninformative Dirichlet prior. Defaults to FALSE.
- unweighted
Whether or not to return the unweighted log odds, in addition to the weighted log odds. Defaults to FALSE.
Value
The original tidy dataset with up to two additional columns.
- log_odds_weighted
The weighted posterior log odds ratio, where the odds ratio is for the feature distribution within that set versus all other sets. The weighting comes from variance-stabilization of the posterior.
- log_odds
(Optional; only returned when unweighted = TRUE.) The posterior log odds without variance stabilization.
Details
The arguments set, feature, and n are passed by expression and support rlang::quasiquotation; you can unquote strings and symbols. Grouping is preserved but ignored.
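For example, column names held as strings can be converted to symbols and unquoted. The following is an illustrative sketch (no output shown), reusing the mtcars-based counts built in the Examples below:

library(dplyr)
library(rlang)

gear_counts <- mtcars %>%
  count(vs, gear)

# column names supplied programmatically via quasiquotation
set_col     <- sym("vs")
feature_col <- sym("gear")
count_col   <- sym("n")

gear_counts %>%
  bind_log_odds(!!set_col, !!feature_col, !!count_col)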
The default empirical Bayes prior inflates feature counts in each group by the total feature counts across all groups. This is similar to using a moment-based estimator for the parameters of the Dirichlet prior. Note that empirical Bayes estimates perform well on average, but can have some surprising properties. If you are uncomfortable with empirical Bayes estimates, we suggest using the uninformative prior.
The weighted log odds computed by this function are also z-scores of the log odds; they are useful for comparing frequencies across sets, but their relationship to an odds ratio is no longer straightforward after the weighting.
The dataset must have exactly one row per set-feature combination for this calculation to succeed. See Monroe et al. (2008) for more on the weighted log odds ratio.
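To make the calculation concrete, the following is a minimal, hand-rolled sketch of the Monroe et al. (2008) formula for a single feature-set cell, comparing that set against all other sets pooled. It is illustrative only; the package's internals may differ in detail, so do not expect digit-for-digit agreement with bind_log_odds().

# y_i, n_i: feature count and total count in the set of interest
# y_rest, n_rest: the same quantities pooled over all other sets
# alpha_i, alpha0_i (and *_rest): Dirichlet pseudo-counts for the feature
#   and for the whole feature distribution in each comparison group
weighted_log_odds_cell <- function(y_i, n_i, y_rest, n_rest,
                                   alpha_i, alpha0_i, alpha_rest, alpha0_rest) {
  omega_i    <- (y_i + alpha_i) / (n_i + alpha0_i - y_i - alpha_i)
  omega_rest <- (y_rest + alpha_rest) /
    (n_rest + alpha0_rest - y_rest - alpha_rest)
  delta  <- log(omega_i) - log(omega_rest)                   # posterior log odds ratio
  sigma2 <- 1 / (y_i + alpha_i) + 1 / (y_rest + alpha_rest)  # approximate variance
  delta / sqrt(sigma2)                                       # variance-stabilized z-score
}

# e.g. gear == 3 within vs == 0 for the mtcars counts used in the Examples,
# with an uninformative prior of one pseudo-count per feature-set cell
weighted_log_odds_cell(y_i = 12, n_i = 18, y_rest = 3, n_rest = 14,
                       alpha_i = 1, alpha0_i = 3, alpha_rest = 1, alpha0_rest = 3)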
References
Monroe, B. L., Colaresi, M. P. & Quinn, K. M. Fightin' Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Polit. Anal. 16, 372-403 (2008). doi: 10.1093/pan/mpn018
Minka, T. P. Estimating a Dirichlet distribution. (2012). https://tminka.github.io/papers/dirichlet/minka-dirichlet.pdf
Examples
library(dplyr)
#>
#> Attaching package: ‘dplyr’
#> The following objects are masked from ‘package:stats’:
#>
#> filter, lag
#> The following objects are masked from ‘package:base’:
#>
#> intersect, setdiff, setequal, union
gear_counts <- mtcars %>%
count(vs, gear)
gear_counts
#> vs gear n
#> 1 0 3 12
#> 2 0 4 2
#> 3 0 5 4
#> 4 1 3 3
#> 5 1 4 10
#> 6 1 5 1
# find the number of gears most characteristic of each engine shape `vs`
regularized <- gear_counts %>%
bind_log_odds(vs, gear, n)
regularized
#> vs gear n log_odds_weighted
#> 1 0 3 12 1.1728347
#> 2 0 4 2 -1.3767516
#> 3 0 5 4 0.4033125
#> 4 1 3 3 -1.1354777
#> 5 1 4 10 1.5661168
#> 6 1 5 1 -0.4362340
unregularized <- gear_counts %>%
bind_log_odds(vs, gear, n, uninformative = TRUE, unweighted = TRUE)
# these log odds will be farther from zero
# than the regularized estimates
unregularized
#> vs gear n log_odds log_odds_weighted
#> 1 0 3 12 0.6968169 1.8912729
#> 2 0 4 2 -1.2527630 -1.9691060
#> 3 0 5 4 0.3249262 0.5549172
#> 4 1 3 3 -0.9673459 -1.7407107
#> 5 1 4 10 1.1451323 2.8421436
#> 6 1 5 1 -0.5268260 -0.6570674
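# As a further illustration of the text use case mentioned under the
# arguments, a small hypothetical set of per-document word counts can be
# scored the same way. This is a sketch only; no output is shown, and in
# practice counts like these might come from tidytext::unnest_tokens()
# followed by count(document, word).
word_counts <- tibble::tribble(
  ~document, ~word,    ~n,
  "a",       "apple",  10,
  "a",       "orange",  2,
  "a",       "pear",    3,
  "b",       "apple",   1,
  "b",       "orange",  8,
  "b",       "pear",    4
)

word_counts %>%
  bind_log_odds(document, word, n)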