
Hidden Decision Trees vs. Decision Trees or Logistic Regression

Hidden Decision Trees is a statistical and data mining methodology (just like logistic regression, SVM, neural networks or decision trees) to handle problems with large amounts of data, non-linearities and strongly correlated dependent variables.

The technique is easy to implement in any programming language. It is more robust than decision trees or logistic regression, and it helps detect natural final nodes. Implementations typically rely heavily on large, granular hash tables.

No decision tree is actually built (hence the name hidden decision trees); instead, the final output of a hidden decision tree procedure consists of a few hundred nodes drawn from multiple non-overlapping small decision trees. Each of these parent (invisible) decision trees corresponds, e.g., to a particular type of fraud in fraud detection models. Interpretation is straightforward, in contrast with traditional decision trees.
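The hash-table mechanics hinted at above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function name `build_node_table`, the choice of binned feature tuples as keys, and the `min_support` threshold are hypothetical, not from the original method description. Each surviving bucket plays the role of one "hidden" node.

```python
from collections import defaultdict

# Hypothetical sketch: group training records by their combination of
# binned feature values. Each combination (a tuple) acts as a node key;
# combinations with enough observations become the "hidden" nodes.

def build_node_table(records, labels, min_support=5):
    """Map each feature combination to its observed positive rate
    (e.g. fraud rate), keeping only well-supported combinations."""
    counts = defaultdict(int)
    positives = defaultdict(int)
    for rec, y in zip(records, labels):
        key = tuple(rec)          # binned feature values act as the node key
        counts[key] += 1
        positives[key] += y
    return {k: positives[k] / counts[k]
            for k in counts if counts[k] >= min_support}

# Toy example: two binary features, label = 1 for "fraud"
records = [(1, 0)] * 6 + [(0, 1)] * 6
labels = [1] * 5 + [0] + [0] * 6
table = build_node_table(records, labels, min_support=5)
print(table)   # → {(1, 0): 0.833..., (0, 1): 0.0}
```

Scoring a new transaction is then a single hash lookup, which is why the approach scales to very large transaction volumes.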

The methodology was first invented in the context of credit card fraud detection, back in 2003. It is not implemented in any statistical package at this time. Frequently, hidden decision trees are combined with logistic regression in a hybrid scoring algorithm, where 80% of the transactions are scored via hidden decision trees, while the remaining 20% are scored using a compatible logistic regression type of scoring.
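The hybrid 80/20 scheme can be sketched as follows. This is a hedged illustration, not the author's implementation: the function names (`hybrid_score`, `logistic_score`), the example node table, and the weights are all made up; in practice the logistic model would be fitted on training data and calibrated to be compatible with the node scores.

```python
import math

# Illustrative hybrid scoring: transactions whose feature combination
# maps to a known hidden-decision-tree node are scored by the node
# table; the rest fall back to a logistic-regression-style score.

def logistic_score(x, weights, bias=0.0):
    """Stand-in for the compatible logistic regression scorer."""
    z = bias + sum(w * v for w, v in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

def hybrid_score(x, node_table, weights):
    key = tuple(x)
    if key in node_table:              # the ~80% covered by known nodes
        return node_table[key]
    return logistic_score(x, weights)  # the remaining ~20%

node_table = {(1, 0): 0.83, (0, 1): 0.02}   # made-up node scores
weights = [2.0, -1.0]                        # made-up logistic weights
print(hybrid_score((1, 0), node_table, weights))  # node hit → 0.83
print(hybrid_score((1, 1), node_table, weights))  # logistic fallback
```

The "compatibility" requirement mentioned above would amount to putting both scorers on the same scale, so a transaction's score means the same thing regardless of which branch produced it.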

Hidden decision trees take advantage of the structure of large multivariate features typically observed when scoring a large number of transactions, e.g. for fraud detection. The technique is not connected with hidden Markov fields.




Replies to This Discussion

However, I will probably present on this subject at the SAS Data Mining Conference in October.
Hi Vincent,

Great work! A few queries: what resources are available to apply and validate this? As the comment trail suggests, it is not available in any of the existing software, nor is there any published material, case study, application, and so on. If you can share any learning/application resources, it would help the few researchers doing research in the decision sciences.

Looking forward to more details.

Thanks !
I am working on a solution where there is no need for a hybrid strategy anymore. Observations that do not belong to a "statistically significant" node will be assigned a metric computed on the k nearest nodes, rather than being processed through constrained logistic regression. A correction for bias (for these observations) will be introduced. An example of a successful application will be provided: predicting the commercial value and/or volume of a keyword in Google advertising campaigns.
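The k-nearest-node idea in this reply can be sketched as follows. All details here are assumptions for illustration: the reply does not specify the distance metric or the aggregation, so this sketch uses Hamming distance between node keys and a plain average of the k closest node scores (and omits the bias correction the reply mentions).

```python
# Illustrative sketch: score an observation outside any significant
# node by averaging the scores of its k nearest node keys, instead of
# falling back to logistic regression.

def hamming(a, b):
    """Number of positions where two feature tuples differ."""
    return sum(x != y for x, y in zip(a, b))

def knn_node_score(x, node_table, k=2):
    ranked = sorted(node_table, key=lambda key: hamming(tuple(x), key))
    nearest = ranked[:k]
    return sum(node_table[key] for key in nearest) / len(nearest)

node_table = {(1, 0, 0): 0.9, (0, 1, 0): 0.1, (0, 0, 1): 0.2}
# (1, 1, 0) is not a node itself; its two nearest nodes are
# (1, 0, 0) and (0, 1, 0), so the score is their average, 0.5.
print(knn_node_score((1, 1, 0), node_table, k=2))
```

A distance-weighted average (rather than a plain mean) would be a natural refinement, giving closer nodes more influence on the score.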

Dear Dr Vincent,

It is pretty interesting indeed.

May I request you to provide the URL of this publication, if it is published in a Journal or conference? I am curious to know how it works.

Thx n rgds,


Is there a Python implementation anywhere, or an example, that we can look at?  I realize it's not in sklearn, just wondering how I could use this, even if I had to build it from scratch.

