Slides from my intro to Bayesian regression talk

Back in April, I gave a guest lecture on Bayesian regression for the psychology department’s graduate statistics class. This is the same course where I first learned regression—and where I first started using R for statistics instead of for data cleaning. It was fun to draw on my experience in that course and tailor the materials to the students’ level of training.

Here are the materials:

Some slides from the talk: one where I explain 'Bayesianism' and a Bayesian updating demo with observations (training data).

As I did with my last Bayes talk, I’m going to note some questions from the audience, so I don’t forget what kinds of questions people have when they are introduced to Bayesian statistics.

One theme was frequentist baggage :handbag:. One person asked about Type I and Type II error rates. I did not have a satisfactory (that is, rehearsed) answer ready for this question. I think I said something about how those terms are based on a frequentist, repeated-sampling paradigm, whereas a Bayesian approach worries about different sorts of errors. (Statistical power is still important, of course, for both approaches.) Next time, I should study up on the frequentist properties of Bayesian models, so I can field these questions better.

Other questions:

  • Another bit of frequentist baggage :handbag:. I mentioned that with a posterior predictive distribution, we can put an uncertainty interval on any statistic we can calculate, and this point raised the question of multiple comparisons. Those are a problem in classical statistics, but for Bayes, there is only one model, and the multiple comparisons are just different implications of that one model (see the first sketch after this list).
  • Someone else said that they had heard that Bayesian models can provide evidence for a null effect—how does that work? I briefly described the ROPE (region of practical equivalence) approach (see the second sketch after this list), ignoring the existence of Bayes factors entirely.
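
Here is a minimal sketch of that first point. The draws below are simulated stand-ins, not output from a real fitted model: the idea is just that once we have posterior draws for each group mean, any derived quantity—every pairwise difference included—gets an interval by computing it on each draw and taking quantiles.

```r
# A minimal sketch with simulated stand-ins for posterior draws of three
# group means. In a real analysis these would come from a fitted model;
# the numbers here are made up for illustration.
set.seed(20180409)
n_draws <- 4000
draws <- data.frame(
  mean_a = rnorm(n_draws, 100, 2),
  mean_b = rnorm(n_draws, 103, 2),
  mean_c = rnorm(n_draws, 104, 2)
)

# Any statistic we can compute from the draws gets an uncertainty interval
# "for free": compute it on every draw and take quantiles.
pairwise <- data.frame(
  b_minus_a = draws$mean_b - draws$mean_a,
  c_minus_a = draws$mean_c - draws$mean_a,
  c_minus_b = draws$mean_c - draws$mean_b
)

# 95% quantile intervals for each comparison, all implied by the same model.
t(sapply(pairwise, quantile, probs = c(.025, .5, .975)))
```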

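And a minimal sketch of the ROPE idea, again with made-up numbers standing in for posterior draws of an effect: declare a region around zero that counts as practically equivalent to no effect, then check how much of the posterior lands inside it.

```r
# Made-up draws standing in for the posterior of a standardized effect size.
set.seed(20180409)
effect_draws <- rnorm(4000, mean = 0.02, sd = 0.04)

# Region of practical equivalence around zero. The +/- 0.1 bounds are an
# arbitrary illustration; real bounds should come from substantive knowledge.
rope <- c(-0.1, 0.1)

# Share of the posterior that falls inside the ROPE...
mean(rope[1] < effect_draws & effect_draws < rope[2])

# ...and whether the 95% quantile interval sits entirely inside the ROPE,
# which is one common way to claim support for a practically null effect.
interval_95 <- quantile(effect_draws, probs = c(.025, .975))
all(rope[1] < interval_95 & interval_95 < rope[2])
```
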
For future iterations of this tutorial, I should have a worked example, maybe a blog post, on each of these issues.

It’s kind of amusing now that I think about it. A big part of my enthusiasm for Bayesian statistics is that I find it much more intuitive than frequentist statistics. Yes! I thought to myself. I never have to worry about what the hell a confidence interval is ever again! Well, actually—no. I need to know this stuff even more thoroughly than ever if I am going to talk fluently about what makes Bayes different. ¯\_(ツ)_/¯