Decision Tree Method for Classification

Back to Homepage: https://alexisidlettewilson.github.io/

I use the ISLR library to get the Carseats data set. I also install the tree package.

library(ISLR)
data("Carseats")
# install.packages("tree")  # run once if the tree package is not already installed
library(tree)

Decision Tree for Classification

I create a histogram of sales to help decide what the classification targets will be. The values on the x-axis correspond to thousands of dollars.

attach(Carseats)
hist(Sales)

Based on the histogram, I create a binary variable that splits Sales into high ("Yes") and low ("No") volume at the $8,000 mark. I wrap the result in as.factor() so that tree() treats it as a classification target (since R 4.0, data.frame() no longer converts character columns to factors automatically).

salesVol <- as.factor(ifelse(Sales <= 8, "No", "Yes"))
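As a quick check (not part of the original walkthrough), tabulating the new variable shows how the two classes balance out:

table(salesVol)  # counts of "No" (low) vs. "Yes" (high) sales volume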

I create a new data frame to hold the Carseats data and the labels for each observation.

Carseats <- data.frame(Carseats,salesVol)
head(Carseats)

I create a decision tree model using the new Carseats data frame. I make sure to exclude “Sales” from the model since my salesVol variable is directly based on “Sales”.

I also get a summary of my decision tree model.

tree.carseats <- tree(salesVol~.-Sales, data=Carseats)
summary(tree.carseats)

Classification tree:
tree(formula = salesVol ~ . - Sales, data = Carseats)
Variables actually used in tree construction:
[1] "ShelveLoc"   "Price"       "Income"      "CompPrice"  
[5] "Population"  "Advertising" "Age"         "US"         
Number of terminal nodes:  27 
Residual mean deviance:  0.4575 = 170.7 / 373 
Misclassification error rate: 0.09 = 36 / 400 

I end up with 27 terminal nodes. I can plot the model to see a tree diagram.

plot(tree.carseats)
text(tree.carseats, pretty = 0,  cex = 0.45)

The label under each terminal node indicates the predicted class for observations in that segment. With the default plot settings, the length of each branch is proportional to the decrease in impurity (deviance) achieved by the corresponding split, and that improvement gets smaller as we make more splits.
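To see that encoding, the type argument of plot.tree can switch to uniform branch lengths for comparison (an optional variation, not in the original analysis):

plot(tree.carseats, type = "uniform")  # equal-length branches, dropping the impurity encoding
text(tree.carseats, pretty = 0, cex = 0.45)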

Test and Training

Create a training data set with 250 rows. There are 400 observations total, which leaves 150 for testing.

set.seed(1011)
train <- sample(1:nrow(Carseats), 250)

Create and plot a decision tree using the training data.

tree.carseatsTrainModel <- tree(salesVol~.-Sales, data = Carseats, subset=train)
plot(tree.carseatsTrainModel)
text(tree.carseatsTrainModel, pretty=0, cex=0.45)

Predict values for the remaining test data. We specify the type as classification to predict the class labels.

tree.pred <- predict(tree.carseatsTrainModel, Carseats[-train,], type="class")
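As a side note, leaving type at its default returns a matrix of per-class probabilities rather than labels, which can be useful for thresholding (a small sketch, not in the original):

head(predict(tree.carseatsTrainModel, Carseats[-train, ]))  # per-class probabilities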

Evaluate the error with a confusion matrix.

with(Carseats[-train,], table(tree.pred,salesVol))
         salesVol
tree.pred No Yes
      No  72  27
      Yes 18  33
(72+33)/150
[1] 0.7

The value 0.70 is the share of correct predictions (the accuracy), so the misclassification error rate is 1 - 0.70 = 0.30.
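Equivalently, the test error can be computed directly rather than read off the table (a small addition to the original code):

mean(tree.pred != Carseats$salesVol[-train])  # misclassification error rate on the test set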

Use cross-validation (cv.tree) to prune the tree. Passing FUN=prune.misclass tells it to guide the pruning by misclassification error rather than the default deviance.

cv.carseats <- cv.tree(tree.carseatsTrainModel, FUN=prune.misclass)
plot(cv.carseats)
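To read the exact numbers behind the plot, the returned object can be inspected; cv.tree reports each candidate tree size alongside its cross-validated misclassification count (a quick inspection step, not in the original):

cv.carseats$size  # candidate numbers of terminal nodes
cv.carseats$dev   # cross-validated misclassification counts for each size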

I pick 13 as the number of terminal nodes because it sits in the middle of the flat region with the lowest cross-validated misclassification error.

prune.carseats <- prune.misclass(tree.carseatsTrainModel, best=13)
plot(prune.carseats)
text(prune.carseats, pretty=0, cex=0.7)

I will use this new “pruned” model on the test data set, then evaluate the error.

tree.pred.Pruned <- predict(prune.carseats, Carseats[-train,], type="class")
with(Carseats[-train,], table(tree.pred.Pruned,salesVol))
                salesVol
tree.pred.Pruned No Yes
             No  72  28
             Yes 18  32
(72+32)/150
[1] 0.6933333

The pruned model's accuracy is 0.693, so its test error rate (about 0.31) is essentially unchanged, in fact marginally worse than the full tree's 0.30. Pruning did not improve the test error here, but it does yield a smaller, more interpretable tree.
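A compact way to compare the two models on the same test set (a short sketch using the objects defined above):

errFull   <- mean(tree.pred != Carseats$salesVol[-train])
errPruned <- mean(tree.pred.Pruned != Carseats$salesVol[-train])
c(full = errFull, pruned = errPruned)  # test error rates, full vs. pruned tree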
