Ensemble Methods

Back to Homepage

Ensemble decision tree methods improve on a single decision tree by averaging the outcomes of many trees. Averaging many models reduces variance and typically lowers the prediction error.
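As a toy illustration of that variance-reduction idea (this sketch is not part of the original analysis and its variable names are invented), the average of many bootstrap estimates varies far less than any single estimate:

```r
# Each "model" here is just the mean of a bootstrap resample of x.
set.seed(1)
x <- rnorm(200, mean = 10, sd = 5)
one.estimate <- function() mean(sample(x, replace = TRUE))

single <- replicate(1000, one.estimate())                 # single-model estimates
bagged <- replicate(1000, mean(replicate(100, one.estimate())))  # averages of 100

var(single)  # variance of one bootstrap estimate
var(bagged)  # far smaller: averaging reduces variance
```

The same principle drives bagging and random forests: each tree is high-variance on its own, but the average is stable.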

install.packages("MASS")

install.packages("randomForest")

library(MASS)

library(randomForest)

The data set is Housing Values in Suburbs of Boston. There are 506 observations and 14 variables (13 predictors plus the target, medv).

data(Boston)
?Boston

Random Forest

Create a training data set using 300 of the 506 observations.

set.seed(101)
train <- sample(1:nrow(Boston),300)

Use the randomForest function to generate the random forest model using the training set. The target variable is medv, the median home value of owner-occupied homes.

rf.Boston = randomForest(medv~., data=Boston, subset=train)
rf.Boston

Call:
 randomForest(formula = medv ~ ., data = Boston, subset = train) 
               Type of random forest: regression
                     Number of trees: 500
No. of variables tried at each split: 4

          Mean of squared residuals: 12.30718
                    % Var explained: 85.13

The first model used the default number of predictors at each split: 4 of the 13, as the output above shows. We can adjust this value to see if it makes a difference.

Create two vector variables to hold out-of-bag error and test error for each model.

oob.err <- double(13)
test.err <- double(13)

mtry is a parameter of the randomForest function corresponding to the number of features randomly selected as split candidates. We create a loop that fits models using 1, 2, 3, and so on up to 13 (all of the features) randomly selected features at each split.

Four hundred trees will be created for each value of mtry.

For each value of mtry the out-of-bag Mean Squared Error (MSE) after 400 trees will be captured in the oob.err variable.

We then predict the values of medv for the observations not in the training set.

For each value of mtry the test-set mean squared error will be captured in the test.err variable.

Finally, the values of mtry are printed out just to show the loop functioned.

for (mtry in 1:13) {
  fit <- randomForest(medv~., data=Boston, subset=train, mtry = mtry, ntree=400)
  oob.err[mtry] <- fit$mse[400]
  pred <- predict(fit, Boston[-train,])
  test.err[mtry] <- with(Boston[-train,], mean((medv-pred)^2))
  cat(mtry," ")
}
1  2  3  4  5  6  7  8  9  10  11  12  13  

We plot the out-of-bag and test MSE for each value of mtry using the matplot function.

matplot(1:mtry, cbind(test.err,oob.err), pch=19, col=c("red","blue"), type="b", ylab="Mean Squared Error")
legend("topright", legend=c("Test", "OOB"), pch=19, col=c("red","blue"))

The mean squared error of the predictions drops from 26 (when only a single feature is used) to a minimum around 12 when five random features are used to split the data at each node.
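As a possible follow-up (this step is not in the original write-up; it assumes the loop above has already run, so test.err and train exist in the workspace), we could refit the forest at the mtry value with the lowest test error:

```r
library(MASS)          # Boston data
library(randomForest)

# which.min(test.err) gives the mtry value with the lowest test MSE
best.mtry <- which.min(test.err)
rf.best <- randomForest(medv ~ ., data = Boston, subset = train,
                        mtry = best.mtry, ntree = 400)
rf.best
```

Keep in mind that the test errors here come from a single train/test split, so neighboring mtry values within the noise band are effectively equivalent.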

Boosting an Ensemble Decision Tree Method

Boosting is similar to random forest, except that the trees are grown sequentially: each new tree seeks to improve on the performance of the ensemble built so far. The trees are shallow, possibly limited to two or four terminal nodes. Boosting reduces bias in a model, helping it perform better on test data.

Boosting may be performed with the gbm package (Gradient Boosted Machines).

require(gbm)
Loading required package: gbm
package 'gbm' was built under R version 3.3.3
Loading required package: survival
Loading required package: lattice
Loading required package: splines
Loading required package: parallel
Loaded gbm 2.1.3

There are three tuning parameters for boosting.

  1. The number of trees
  2. The shrinkage parameter lambda
  3. The number of splits allowed for each tree

Call gbm to create 10,000 trees. We pick an arbitrary value of 0.01 for the shrinkage parameter lambda, passed as the shrinkage argument. The interaction.depth argument controls the number of splits; we limit each tree to four. The distribution is set based on the loss function of the algorithm; ours is "gaussian" because we are measuring success by squared error.

boost.boston <- gbm(medv~., data = Boston[train,], distribution = "gaussian", n.trees=10000, shrinkage = 0.01, interaction.depth=4)
summary(boost.boston)

The summary output is a variable importance plot. The number of rooms and the percentage of “low status” people appear to be the most important predictors of medv.

We can plot the relationships between medv and each variable from the boosting object.

plot(boost.boston, i="rm")

We can see that as the number of rooms increases, medv (as a function of rm) increases, especially after seven rooms.

Finally, we will predict values from the test data set using the boosted model.

  • We create a variable to hold the numbers 100 through 10,000, in increments of 100.

  • We create a variable to hold our predictions on the test data set using the boost.boston boosted decision tree model.

  • The dim function shows that the prediction output is a matrix with 206 rows (the test observations) and 100 columns, one for each value of n.trees we supplied.

  • We create the variable berr to hold the test mean squared error of the predictions.

  • We plot the number of trees (we have 100 models, each with a different number of trees) against the test mean squared error.

  • Finally, we add a horizontal line to the plot marking the minimum test error from the unboosted random forest models.

n.trees = seq(from=100, to=10000, by=100)
predmat = predict(boost.boston, newdata = Boston[-train,], n.trees=n.trees)
dim(predmat)
[1] 206 100
berr = with(Boston[-train,], apply( (predmat-medv)^2,2,mean) )
plot(n.trees, berr, pch=19, ylab="Mean Squared Error", xlab="# Trees", main="Boosting Test Error")
abline(h=min(test.err), col="green")

We see that the boosted model reduces the test error significantly, but after roughly 3,000 trees the gains diminish and the error flattens out.
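To pin down where the curve bottoms out (an extra step, not in the original; it assumes berr and n.trees from the chunk above are still in the workspace), we can query the minimum directly:

```r
best <- which.min(berr)   # index of the model with the lowest test MSE
n.trees[best]             # number of trees at that minimum
berr[best]                # the minimum boosted test MSE
```

Because the curve is nearly flat past its minimum, any tree count in that flat region would be a reasonable choice.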
