This talk was recorded on Nov 7, 2022 as part of the LADAL Webinar Series 2022.

LADAL Website: https://ladal.edu.au
LADAL Webinar Series 2022: https://ladal.edu.au/webinars2022.html
LADAL on Twitter: @slcladal
Facebook: https://www.facebook.com/profile.php?...
Contact: [email protected]

Abstract

Linguistics is undergoing a rapid shift away from significance tests towards approaches emphasizing parameter estimation, such as linear mixed effects models. Alongside this shift, another revolution is underway: away from using p-values as part of a “null ritual” (Gigerenzer, 2004) towards Bayesian models. Both shifts can be handled nicely with the ‘brms’ package (Bürkner, 2017). After briefly reviewing why we shouldn’t blindly follow the “null ritual” of significance testing, I will demonstrate how easy it is to fit quite complex models using this package. I will also talk about how mixed models are used in different subfields of linguistics (Winter & Grice, 2021), and why established practices such as dropping random slopes for non-converging models are a further reason to go Bayesian. Finally, I will briefly touch on issues relating to prior specification, especially the importance of weakly informative priors to prevent overfitting.