Berry M.J.A. – Data Mining Techniques For Marketing, Sales & Customer Relationship Management

A decision tree model consists of a set of rules for dividing a large heterogeneous population into smaller, more homogeneous groups with respect to a particular target variable. A decision tree may be painstakingly constructed by hand in the manner of Linnaeus and the generations of taxonomists that followed him, or it may be grown automatically by applying any one of several decision tree algorithms to a model set comprised of preclassified data. This chapter is mostly concerned with the algorithms for automatically generating decision trees. The target variable is usually categorical and the decision tree model is used either to calculate the probability that a given record belongs to each of the categories, or to classify the record by assigning it to the most likely class. Decision trees can also be used to estimate the value of a continuous variable, although there are other techniques more suitable to that task.
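To make the two uses concrete, the short sketch below fits a decision tree classifier to a tiny, made-up model set and then produces both a class assignment and class probabilities for a new record. It uses the scikit-learn library purely for illustration; it is not the tool discussed in this chapter, and the data and field meanings are invented.

```python
# A minimal sketch of the two ways a decision tree classifier is used:
# assigning each record to its most likely class, or estimating the
# probability of each class. scikit-learn is shown only as an example
# library; the records and fields below are made up.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical preclassified model set: [lifetime_orders, days_since_last_order]
X = [[2, 400], [8, 100], [1, 900], [12, 60], [3, 700], [9, 200]]
y = [0, 1, 0, 1, 0, 1]          # 1 = placed an order, 0 = did not

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

new_record = [[7, 150]]
print(model.predict(new_record))        # most likely class, e.g. [1]
print(model.predict_proba(new_record))  # probability of each class
```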

Classification

Anyone familiar with the game of Twenty Questions will have no difficulty understanding how a decision tree classifies records. In the game, one player thinks of a particular place, person, or thing that would be known or recognized by all the participants, but the player gives no clue to its identity. The other players try to discover what it is by asking a series of yes-or-no questions. A good player rarely needs the full allotment of 20 questions to move all the way from "Is it bigger than a bread box?" to "the Golden Gate Bridge."

A decision tree represents such a series of questions. As in the game, the answer to the first question determines the follow-up question. The initial questions create broad categories with many members; follow-on questions divide the broad categories into smaller and smaller sets. If the questions are well chosen, a surprisingly short series is enough to accurately classify an incoming record.

The game of Twenty Questions illustrates the process of using a tree for appending a score or class to a record. A record enters the tree at the root node. The root node applies a test to determine which child node the record will encounter next. There are different algorithms for choosing the initial test, but the goal is always the same: to choose the test that best discriminates among the target classes. This process is repeated until the record arrives at a leaf node.

All the records that end up at a given leaf of the tree are classified the same way. There is a unique path from the root to each leaf. That path is an expression of the rule used to classify the records.
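The sketch below shows this root-to-leaf walk in a few lines of code. The dictionary layout, field names, and thresholds are illustrative rather than drawn from any particular tool.

```python
# A minimal sketch of how a record is classified: starting at the root,
# each node's test selects a child, and the walk repeats until a leaf is
# reached. The node layout and field names are invented for illustration.
def classify(record, node):
    # A leaf holds the class assigned to every record that reaches it.
    if "leaf_class" in node:
        return node["leaf_class"]
    # An internal node tests one field and routes the record left or right.
    branch = "left" if record[node["field"]] < node["threshold"] else "right"
    return classify(record, node[branch])

# A toy two-level tree over made-up fields.
tree = {
    "field": "tenure_months", "threshold": 12,
    "left":  {"leaf_class": 0},
    "right": {"field": "purchases", "threshold": 3,
              "left": {"leaf_class": 0}, "right": {"leaf_class": 1}},
}

print(classify({"tenure_months": 24, "purchases": 5}, tree))  # -> 1
```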

Different leaves may make the same classification, although each leaf makes that classification for a different reason. For example, in a tree that classifies fruits and vegetables by color, the leaves for apple, tomato, and cherry might all predict “red,” albeit with varying degrees of confidence since there are likely to be examples of green apples, yellow tomatoes, and black cherries as well.

The decision tree in Figure 6.1 classifies potential catalog recipients as likely (1) or unlikely (0) to place an order if sent a new catalog.

The tree in Figure 6.1 was created using the SAS Enterprise Miner Tree Viewer tool. The chart is drawn according to the usual convention in data mining circles—with the root at the top and the leaves at the bottom, perhaps indicating that data miners ought to get out more to see how real trees grow.

Each node is labeled with a node number in the upper-right corner and the predicted class in the center. The decision rules to split each node are printed on the lines connecting each node to its children. The split at the root node is on lifetime orders; the left branch is for customers who had six or fewer orders and the right branch is for customers who had seven or more.

Any record that reaches leaf nodes 19, 14, 16, 17, or 18 is classified as likely to respond, because the predicted class in this case is 1. The paths to these leaf nodes describe the rules in the tree. For example, the rule for leaf 19 is: If the customer has made more than 6.5 orders and it has been fewer than 756 days since the last order, the customer is likely to respond.
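Written out in code, the unique path to leaf 19 becomes a simple predicate over a record. The thresholds come from Figure 6.1; the field names are made up for illustration.

```python
# The rule for leaf 19 as a predicate: the customer has made more than
# 6.5 lifetime orders and fewer than 756 days have passed since the last
# order. Field names are hypothetical.
def reaches_leaf_19(record):
    return record["lifetime_orders"] >= 6.5 and record["days_since_last"] < 756

print(reaches_leaf_19({"lifetime_orders": 9, "days_since_last": 300}))  # True
```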


Figure 6.1 A binary decision tree classifies catalog recipients as likely or unlikely to place an order.

Alert readers may notice that some of the splits in the decision tree appear to make no difference. For example, nodes 17 and 18 are differentiated by the number of orders the customer has made that included items in the food category, but both nodes are labeled as responders. That is because although the probability of response is higher in node 18 than in node 17, in both cases it is above the threshold that has been set for classifying a record as a responder. As a classifier, the model has only two outputs, one and zero. This binary classification throws away useful information, which brings us to the next topic: using decision trees to produce scores and probabilities.

Scoring

Figure 6.2 is a picture of the same tree as in Figure 6.1, but drawn with a different tree viewer and with settings modified so that the tree is annotated with additional information, namely the percentage of records in class 1 at each node. It is now clear that the tree describes a dataset containing half responders and half nonresponders, because the root node has a proportion of 50 percent. As described in Chapter 3, this is typical of a training set for a response model with a binary target variable.

Any node with more than 50 percent responders is labeled with "1" in Figure 6.1, including nodes 17 and 18. Figure 6.2 clarifies the difference between these nodes. In node 17, 52.8 percent of records represent responders, while in node 18, 66.9 percent do. Clearly, a record in node 18 is more likely to represent a responder than a record in node 17.

The proportion of records in the desired class can be used as a score, which is often more useful than the classification alone. For a binary outcome, a classification merely splits records into two groups. A score allows the records to be sorted from most likely to least likely to be members of the desired class.

Figure 6.2 A decision tree annotated with the proportion of records in class 1 at each node shows the probability of the classification.

For many applications, a score capable of rank-ordering a list is all that is required. This is sufficient to choose the top N percent for a mailing and to calculate lift at various depths in the list. For some applications, however, it is not sufficient to know that A is more likely to respond than B; we want to know the actual likelihood of a response from A. Assuming that the prior probability of a response is known, it can be used to calculate the probability of response from the score generated on the oversampled data used to build the tree. Alternatively, the model can be applied to preclassified data whose distribution of responses reflects the true population. This method, called backfitting, creates scores using the class proportions at the tree's leaves to represent the probability that a record drawn from a similar population is a member of the class. These and related issues are discussed in detail in Chapter 3.
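The sketch below shows one standard way to carry out the first of these adjustments: converting a leaf score computed on an oversampled, balanced training set back into a true-population probability when the prior response rate is known. It is a generic Bayes-style correction, not necessarily the exact procedure described in Chapter 3, and the numbers plugged in are illustrative.

```python
# Adjust a leaf score from an oversampled (here 50/50) training set back
# to a true-population probability, given a known prior response rate.
# This is a common correction for oversampling; details may differ from
# the treatment in Chapter 3 of the book.
def adjust_score(leaf_score, true_prior, training_prior=0.5):
    """leaf_score: proportion of responders at the leaf in the training data."""
    weight_pos = leaf_score * true_prior / training_prior
    weight_neg = (1 - leaf_score) * (1 - true_prior) / (1 - training_prior)
    return weight_pos / (weight_pos + weight_neg)

# Node 18 shows 66.9 percent responders on a balanced training set;
# suppose (hypothetically) the true response rate is 2 percent.
print(round(adjust_score(0.669, 0.02), 4))  # roughly 0.0396
```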
Estimation

Suppose the important business question is not who will respond but what will be the size of the customer's next order? The decision tree can be used to answer that question too. Assuming that order amount is one of the variables available in the preclassified model set, the average order size in each leaf can be used as the estimated order size for any unclassified record that meets the criteria for that leaf. It is even possible to use a numeric target variable to build the tree; such a tree is called a regression tree. Instead of increasing the purity of a categorical target variable, each split in the tree is chosen to decrease the variance in the values of the target variable within each child node.
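The sketch below illustrates the variance-reduction criterion for a single numeric field: among candidate thresholds, it picks the one that minimizes the size-weighted variance of the target within the two children. Real regression tree algorithms add many refinements (minimum node sizes, multiple fields, pruning), and the data here is invented.

```python
# Choose the split threshold on one numeric field that most reduces the
# size-weighted variance of the target within the two child nodes.
def variance(values):
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def best_split(xs, ys):
    """Return (threshold, weighted child variance) minimizing the variance of ys."""
    best = None
    for threshold in sorted(set(xs)):
        left  = [y for x, y in zip(xs, ys) if x <  threshold]
        right = [y for x, y in zip(xs, ys) if x >= threshold]
        if not left or not right:
            continue
        score = (len(left) * variance(left) + len(right) * variance(right)) / len(ys)
        if best is None or score < best[1]:
            best = (threshold, score)
    return best

# Hypothetical data: lifetime orders vs. size of the customer's next order.
orders      = [1, 2, 3, 8, 9, 12]
next_amount = [20.0, 25.0, 22.0, 90.0, 110.0, 95.0]
print(best_split(orders, next_amount))  # splits at orders >= 8
```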

