A Data Science Central Community

Judging by the number of target-customer selection projects I do, direct mail appears to be a very popular communication and marketing channel amongst retailers.

Almost every time, I use some combination of RFM, decision tree and logistic regression techniques for sorting, profiling and/or scoring customers (hopefully, I can post a separate detailed blog on this).

The best thing about a decision tree is that it makes very few assumptions about the data, unlike, say, logistic regression. Another advantage is that everyone can understand it! Depending on the software you use, a number of different tree algorithms are available, the most common being CHAID, CART and C5.

CART can handle only binary splits (each parent node splits into exactly two child nodes). It uses a measure of impurity called Gini for splitting the nodes. This is a measure of dispersion that depends on the distribution of the outcome variable. Its value ranges from 0 (best) up to a theoretical maximum just below 1 (worst); for a binary target the maximum is 0.5, reached when the node is split 50/50. You get a 0 when all records in a node fall under a single category level (e.g. all 10,000 customers in a terminal node are responders). This is a purely theoretical example, by the way!
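To make the Gini measure concrete, here is a minimal sketch in plain Python (the `gini` helper is mine for illustration, not taken from any particular tree library):

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions.
    0 means the node is perfectly pure; higher means more mixed."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

# A pure node -- all 10 records are responders -- scores the best value, 0.
print(gini(["responder"] * 10))                     # 0.0

# A 50/50 node is as impure as a binary-outcome node can get.
print(gini(["responder"] * 5 + ["non-responder"] * 5))  # 0.5
```

At each candidate split, CART compares the parent's impurity with the size-weighted impurity of the two children and picks the split with the biggest reduction.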

In C5, splits are based on the information gain ratio. C5 prunes the tree by examining the error rate at each node and assuming that the true error rate is actually substantially worse. If N records arrive at a node, and E of them are classified incorrectly, then the observed error rate at that node is E/N; pruning works from a pessimistic estimate above it.
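A rough sketch of that pessimistic adjustment (my own simplification using a normal-approximation upper confidence bound; C5's exact formula is based on the binomial confidence limit, so treat the `z` value below as an assumption):

```python
import math

def observed_error(E, N):
    """Observed error rate at a node: E misclassified out of N records."""
    return E / N

def pessimistic_error(E, N, z=0.674):
    """Inflate the observed rate toward an upper confidence bound, in the
    spirit of C5's pruning. z=0.674 roughly matches a one-sided 25%
    confidence level (assumed default, not the exact C5 computation)."""
    f = E / N
    return f + z * math.sqrt(f * (1 - f) / N)

# 2 errors among 20 records: 10% observed, but pruning assumes worse.
print(observed_error(2, 20))      # 0.1
print(pessimistic_error(2, 20))   # ~0.145
```

Because the pessimistic estimate shrinks more slowly for small nodes, a subtree whose leaves look good on the training data but carry few records gets pruned back.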

Information gain can also be simply defined as:

Information (parent node) − Information (child nodes after splitting on a particular variable), where each child's contribution is weighted by its share of the records.
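That definition can be sketched directly in Python, using Shannon entropy as the "information" measure (the helper names are mine for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, children):
    """Entropy(parent) minus the size-weighted entropy of the child nodes."""
    n = len(parent)
    weighted = sum(len(ch) / n * entropy(ch) for ch in children)
    return entropy(parent) - weighted

# 10 responders and 10 non-responders separated cleanly by some variable:
parent = ["yes"] * 10 + ["no"] * 10
left, right = ["yes"] * 10, ["no"] * 10
print(information_gain(parent, [left, right]))  # 1.0 -- a perfect split
```

The gain *ratio* used by C5 further divides this gain by the split's own entropy, which penalises variables that split the data into many small pieces.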

CHAID is an efficient decision tree technique based on the Chi-square test of independence of two categorical fields. CHAID makes use of the Chi-square test in several ways: first to merge predictor classes that do not have significantly different effects on the target variable; then to choose the best split; and finally to decide whether it is worth performing any additional splits on a node.
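A small illustration of the chi-square statistic CHAID relies on, computed by hand for a contingency table of predictor class versus response (hypothetical counts; real CHAID implementations also apply a Bonferroni adjustment to the resulting p-values):

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table
    (rows = classes of the predictor, columns = target levels)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

#             responder  non-responder
table = [[30, 70],     # predictor class A
         [60, 40]]     # predictor class B
print(round(chi_square(table), 2))  # 18.18 -- far from independence
```

The larger the statistic, the stronger the evidence that the predictor and the target are associated, so CHAID keeps the split; small values lead it to merge the classes instead.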

CHAID and C5 can produce multiway splits, unlike CART. And as far as my own experience goes, I prefer CHAID over C5, as C5 tends to produce very bushy trees.

References:

- Michael J.A. Berry & Gordon S. Linoff, Data Mining Techniques
- Konstantinos Tsiptsis & Antonios Chorianopoulos, Data Mining Techniques (Inside Customer Segmentation)
