
Democratizing Data Assets: Learning From Data, Big and Small

When we devote so much time and energy to talking about Big Data, are we neglecting the important things that you can do with Small Data?

Maybe, but... probably not.

Looking beyond the Big Data hype helps us to capture real value from advanced analytics on data, big and small. 

The drumbeat of Big Data dialogue in social media, in the press, and everywhere merely highlights the important roles that data and analytics are now playing in all sectors. While we read, think, and dream about Big Data, we also realize that the "big" in Big Data refers to more than just data volume. We know all about Big Data velocity, variety, value, and more. For example: independent of data volume, you can have lots of variety in your data, you can have very tight real-time (data velocity) constraints, and you can derive huge value from your data assets. So, I believe that the deluge of Big Data discussions is not actually diverting our attention from small data; rather, it is democratizing data assets -- causing us to give more attention to how we can learn from data, both big and small.

So, how do we "learn from data"?

In the field of Machine Learning, algorithms are usually categorized as Supervised, Semi-Supervised, or Unsupervised. The first two usually require historical training data to build and improve classification and predictive models -- it is fair to say that (in most cases) the bigger the training data set, the better (more complete, accurate, robust) our predictive analytics models will be.
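To make the supervised case concrete, here is a minimal sketch in plain Python: a 1-nearest-neighbor classifier, one of the simplest supervised algorithms. The toy feature vectors and the "low"/"high" labels are invented for illustration -- the article's point is only that labeled historical data is what trains such a model.

```python
# Toy supervised learning: 1-nearest-neighbor classification.
# The labeled "training set" below is invented illustrative data.

def nearest_neighbor_predict(train, query):
    """Predict a label for `query` from labeled (point, label) pairs."""
    def dist2(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda pair: dist2(pair[0], query))
    return label

# Historical (labeled) training data: feature vector -> class label.
training_set = [
    ((1.0, 1.0), "low"),
    ((1.2, 0.8), "low"),
    ((8.0, 9.0), "high"),
    ((9.1, 8.5), "high"),
]

print(nearest_neighbor_predict(training_set, (1.1, 0.9)))  # -> low
print(nearest_neighbor_predict(training_set, (8.5, 9.2)))  # -> high
```

With only four training points the decision boundary is crude; adding more (accurate) labeled examples is exactly what makes such models more complete and robust.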

The third category of Machine Learning (Unsupervised Learning) is essentially the purest form of Data Mining (in my opinion): it is data-driven, evidence-based, unfettered by models or preconceived notions regarding the patterns in the data. It is used to discover the patterns, anomalies, categories, correlations, and features in the data, both BIG and SMALL. This is true knowledge discovery from data. One article (Shabalin et al. 2009) described it this way: "unsupervised exploratory analysis plays an important role in the study of large, high-dimensional datasets that arise in a variety of applications".  When performed with rigorous systematic scientific methodology (as opposed to random "fishing expeditions"), the data mining application of Unsupervised Machine Learning algorithms becomes "powerful Jedi" Data Science.

"Discovery from Data" and "Learning from Data" certainly become more effective when the data set is big, but most types of unsupervised learning work just fine with relatively small data. There are four broad categories of unsupervised learning that you can apply to your data (big and small). These are:

  1. Novelty Discovery (also known as outlier or anomaly detection, but I prefer to call this "Surprise Discovery" -- finding the rare, unexpected, surprising thing in your data).
  2. Correlation Discovery (finding the patterns, relationships, and correlations in data -- e.g., using principal component analysis, independent component analysis, or the maximal information coefficient).
  3. Class Discovery (finding new classes of events or behaviors, including finding new subclasses of previously known classes, or discovering improved rules for distinguishing and disambiguating known classes and subclasses).
  4. Association Discovery (discovering connections between different things, people, or events, or finding unusual, improbable co-occurring combinations of attributes, things, or objects. This type of algorithm is usually at the heart of Recommender Engines, and is sometimes called Link Mining, Market Basket Analysis, or Graph Mining. Several fun examples are discussed in the final 7 minutes of the TEDx talk "Big Data, Small World").
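Two of these categories can be sketched in a few lines of plain Python. The sketch below (with invented toy data) shows Novelty/Surprise Discovery as a classic z-score outlier test, and Association Discovery as market-basket-style co-occurrence counting; real applications would of course use far more sophisticated algorithms.

```python
import statistics
from collections import Counter
from itertools import combinations

def surprise_discovery(values, threshold=3.0):
    """Novelty/outlier detection: flag values more than `threshold`
    standard deviations from the mean (a simple z-score test)."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

def association_discovery(baskets, min_count=2):
    """Market-basket style association: item pairs that co-occur
    in the same basket at least `min_count` times."""
    pairs = Counter()
    for basket in baskets:
        pairs.update(combinations(sorted(set(basket)), 2))
    return {pair: n for pair, n in pairs.items() if n >= min_count}

readings = [9.8, 10.1, 10.0, 9.9, 10.2, 42.0]       # one surprising value
print(surprise_discovery(readings, threshold=2.0))  # -> [42.0]

carts = [["milk", "bread"], ["milk", "bread", "eggs"], ["eggs", "jam"]]
print(association_discovery(carts))                 # -> {('bread', 'milk'): 2}
```

Note that neither function needs labels or a training set -- the patterns (the outlier, the frequent pair) emerge from the data itself, which is the essence of unsupervised learning.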

Yes, of course, knowledge discovery from data (i.e., Learning From Data) is FUN, especially if it is unsupervised!
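Class Discovery (category 3 above) can also be sketched quickly. K-means is one standard clustering algorithm for finding previously unknown groupings; below is a toy one-dimensional version on invented data, kept deliberately small.

```python
# Class discovery sketched as a tiny k-means clustering on 1-D data.

def kmeans_1d(points, k=2, iterations=20):
    """Partition `points` into k clusters by repeatedly assigning each
    point to its nearest centroid and recomputing the centroids."""
    # Seed centroids with points spread across the sorted data.
    centroids = sorted(points)[::max(1, len(points) // k)][:k]
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        # Recompute each centroid; keep the old one if a cluster empties.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(sorted(c) for c in clusters if c)

print(kmeans_1d([1.0, 1.2, 0.9, 8.0, 8.3, 7.9], k=2))
# -> [[0.9, 1.0, 1.2], [7.9, 8.0, 8.3]]
```

Nobody told the algorithm there were two classes of measurements -- it discovered that structure from the data alone.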

You can see more discussion of unsupervised machine learning from small data (specifically for time series data) in the article "Hello World, I'm Learning From Data!"

Follow Kirk on Twitter at @KirkDBorne


Tags: Analytics, BigData, DataMining, DataScience, MachineLearning



Comment by Kirk Borne on August 12, 2013 at 9:57am

@paul, with regard to "rigorous systematic scientific methodology", I am talking about: (1) hypothesis generation, (2) experimental design, (3) experiment & testing, (4) data collection & analysis, (5) discovery & inference from data, (6) hypothesis refinement, (7) go back to step #1.  This is the scientific method, or (more accurately) the scientific cycle. Following this formal process helps to prevent (often biased, subjective) non-statistical sampling and haphazard exploration of the data. Data Science to me should follow the scientific method: define a problem, design an experiment, do the experiment, learn from it, make data-informed inferences and decisions, and then build on that knowledge and experience.

Comment by Paul Kitko on August 12, 2013 at 9:50am

Can you elaborate on your "rigorous systematic scientific methodology?"

When performed with rigorous systematic scientific methodology (as opposed to random "fishing expeditions"), the data mining application of Unsupervised Machine Learning algorithms becomes "powerful Jedi" Data Science.

