
5 Decision Trees & Random Forests

In this chapter, we describe tree-based methods for regression and classification. Tree-based methods are simple and useful for interpretation. However, they typically are not competitive with the best supervised learning approaches in terms of prediction accuracy. Hence in this chapter we also introduce bagging, random forests, and boosting. Each of these approaches involves producing multiple trees which are then combined to yield a single consensus prediction. We will see that combining a large number of trees can often result in dramatic improvements in prediction accuracy, at the expense of some loss in interpretation.
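To make the idea of "producing multiple trees which are then combined to yield a single consensus prediction" concrete, here is a minimal, self-contained sketch of bagging in Python (the chapter's own examples use R, so this is only an illustration). It fits one-level "decision stumps" on bootstrap samples of a tiny toy dataset and combines them by majority vote; all function names and the toy data are invented for this sketch.

```python
import random

def stump_fit(X, y):
    # A one-level "tree": choose the threshold on feature 0 that
    # minimizes misclassifications on this (bootstrap) sample.
    best = None
    for t in sorted(set(x[0] for x in X)):
        pred = [1 if x[0] > t else 0 for x in X]
        err = sum(p != yi for p, yi in zip(pred, y))
        if best is None or err < best[1]:
            best = (t, err)
    return best[0]

def stump_predict(t, x):
    return 1 if x[0] > t else 0

def bagging_fit(X, y, n_trees=25, seed=0):
    # Fit one stump per bootstrap sample (sampling rows with replacement).
    rng = random.Random(seed)
    n = len(X)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        stumps.append(stump_fit([X[i] for i in idx], [y[i] for i in idx]))
    return stumps

def bagging_predict(stumps, x):
    # Consensus prediction: majority vote over all stumps.
    votes = sum(stump_predict(t, x) for t in stumps)
    return 1 if votes * 2 >= len(stumps) else 0

# Toy data: class 1 when the single feature exceeds 0.5.
X = [[0.1], [0.2], [0.3], [0.4], [0.6], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
model = bagging_fit(X, y)
print(bagging_predict(model, [0.15]), bagging_predict(model, [0.85]))
```

A random forest goes one step further than this sketch: in addition to bootstrapping the rows, each split considers only a random subset of the predictors, which decorrelates the trees and usually improves the averaged prediction.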

You are invited to watch the following videos14. You can download the slides15 used in these videos by clicking here.

The Basics of Decision Trees

Classification Trees

Bagging & Random Forests

Boosting

Trees in R

Random forests - the first-choice method for every data analysis?

Now you know how the Random Forest (RF) method works. We often read claims such as: Random Forests "work well without tuning," there is "no need to scale or recode predictors," they "work well on high-dimensional data," they "cannot overfit," and so on.

In this section, you will find an excellent talk and slides discussing some common claims about Random Forests and whether it is true that RF is the first-choice method for every data analysis.

You can download the slides16 by clicking here.


  14. Source: the famous MOOC Statistical Learning

  15. Source: Trevor Hastie’s website

  16. Source: Marvin Wright’s talk from Why R? 2019