Slides from my RStanARM tutorial

Back in September, I gave a tutorial on RStanARM to the Madison R users’ group. As I did for my magrittr tutorial, I broke the content down into slide decks. They were:

The source code and supporting materials are on GitHub.

A plot of the observations (training data)

The intuition-building section was the most challenging and rewarding, because I had to brush up on Bayesian statistics well enough to informally, hand-wavily teach it to a crowd of R users. Like, I have a good sense of how to fit these models and interpret them in practice, but there’s a gulf between understanding something and teaching about it. It was a bit of a trial by fire :fire:.

One thing I did was work through a toy Bayesian updating demo. What’s the mean of some IQ scores, assuming a standard deviation of 15 and a flat prior over a reasonable range of values? Cue some plots of how the distribution of probabilities updates as new data is observed.
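Here’s roughly what that demo boils down to, as a hedged sketch in base R. The simulated `iq_scores`, the grid of candidate means, and the seed are stand-ins I made up for illustration, not the numbers from the slides.

```r
# Fake IQ scores for the demo; the SD of the scores is fixed at 15
set.seed(20160901)
iq_scores <- rnorm(10, mean = 110, sd = 15)

# Flat prior over a grid of candidate values for the mean
candidate_means <- seq(70, 130, by = 0.5)
prior <- rep(1 / length(candidate_means), length(candidate_means))

# Posterior probability of each candidate mean after seeing `scores`
posterior_after <- function(scores, prior) {
  likelihood <- vapply(
    candidate_means,
    function(m) prod(dnorm(scores, mean = m, sd = 15)),
    numeric(1)
  )
  unnormalized <- prior * likelihood
  unnormalized / sum(unnormalized)   # normalize so the probabilities sum to 1
}

# Recompute the posterior as each new observation arrives
posteriors <- do.call(rbind, lapply(seq_along(iq_scores), function(n_seen) {
  data.frame(
    n_seen = n_seen,
    mean = candidate_means,
    probability = posterior_after(iq_scores[seq_len(n_seen)], prior)
  )
}))
```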

A frame of my Bayesian updating animation

See how the beliefs are updated? See how we retain uncertainty around that most likely value? And so on.

Naturally, I animated the thing—I’ll take any excuse to use gganimate.
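For the curious, a minimal gganimate sketch of that idea, assuming the `posteriors` data frame from the snippet above; the original animation’s styling was fancier than this.

```r
library(ggplot2)
library(gganimate)

p <- ggplot(posteriors, aes(x = mean, y = probability)) +
  geom_line() +
  labs(
    title = "Posterior after {current_frame} observation(s)",
    x = "Candidate mean IQ score",
    y = "Posterior probability"
  ) +
  transition_manual(n_seen)   # one animation frame per number of observations seen

animate(p, nframes = length(unique(posteriors$n_seen)), fps = 1)
```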

Someone asked a good question about what advantages these models have over classical ones. I find the models more intuitive[^1], because posterior probabilities are post-data probabilities. I also find them more flexible. For example, I can use a t-distribution for my error terms—thick tails! If I write the thing in Stan, I can incorporate measurement error into the model. If I put my head down and work really hard, I could even fit one of those gorgeous Gaussian process models. We can fit vanilla regression models or get really, really fancy, but it all kind of emerges nicely from the general framework of writing out priors and a likelihood definition.
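To make that last point concrete, here is a minimal RStanARM sketch of the priors-plus-likelihood workflow. The `iq_data` data frame and its columns `kid_score` and `mom_iq` are hypothetical stand-ins, and the prior values are illustrative, not recommendations.

```r
library(rstanarm)

fit <- stan_glm(
  kid_score ~ mom_iq,                   # likelihood: normal linear regression
  data = iq_data,
  family = gaussian(),
  prior = normal(0, 2.5),               # prior on the slope
  prior_intercept = normal(100, 20),    # prior on the intercept
  prior_aux = exponential(1)            # prior on the residual SD
)

# Posterior summaries are post-data probability statements about the parameters
print(fit, digits = 2)
posterior_interval(fit, prob = 0.95)
```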

[^1]: But I was taught the classical models first… I sometimes think that these models only feel more intuitive because this is my second bite at the apple. The learning came more easily this time because the first time I learned regression, I was a total novice and had to learn everything: t-tests, reductions in variance, collinearity, what interactions do. Here, I can build off of that prior learning. Maybe if I learned everything yet again—as what, everything as a neural network?—it would be even more intuitive.