
About 200,000 data miners needed according to McKinsey

Big data: The next frontier for innovation, competition, and productivity

Analyzing large data sets—so-called big data—will become a key basis of competition, underpinning new waves of productivity growth, innovation, and consumer surplus as long as the right policies and enablers are in place.

Research by MGI and McKinsey's Business Technology Office examines the state of digital data and documents the significant value that can potentially be unlocked.

For example, a retailer using big data to the full could increase its operating margin by more than 60 percent. Harnessing big data in the public sector has enormous potential, too. If US health care were to use big data creatively and effectively to drive efficiency and quality, the sector could create more than $300 billion in value every year. Two-thirds of that would be in the form of reducing US health care expenditure by about 8 percent. In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data, not including using big data to reduce fraud and errors and boost the collection of tax revenues. And users of services enabled by personal location data could capture $600 billion in consumer surplus.

The use of data has long been part of the impact of information and communication technology, but the scale and scope of the changes that big data are bringing about are today at an inflection point as a number of trends converge. The increasing volume and detail of information captured by enterprises, together with the rise of multimedia, social media, and the Internet of Things, will fuel exponential growth in data for the foreseeable future. Data have now reached every sector in the economy.

Big data will help to create new growth opportunities and entirely new categories of companies. Many of these will be companies that sit in the middle of large information flows where data about products and services, buyers and suppliers, consumer preferences and intent can be captured and analyzed.

Drawing on detailed analysis of five domains—health care, retailing, the public sector, manufacturing, and personal location data—the research identifies five broadly applicable ways to leverage big data:

  • Making big data more accessible in a timely manner. In the public sector, making data more accessible across otherwise separated departments can sharply reduce search and processing time. In manufacturing, integrating data from R&D, engineering, and manufacturing units to enable concurrent engineering can cut time-to-market.
  • Using data and experimentation to expose variability and improve performance. As they create and store more transactional data in digital form, organizations can collect more accurate and detailed performance data on everything from product inventories to personnel sick days.
  • Segmenting populations to customize actions. Big data allow organizations to create ever-narrower segmentations and to tailor services precisely to meet customer needs. This approach is well-known in marketing and risk management, but can be revolutionary in places like the public sector.
  • Replacing and supporting human decision-making with automated algorithms. Sophisticated analytics can substantially improve decision making, minimize risks, and unearth valuable insights that would otherwise remain hidden. Such analytics have applications from tax agencies to retailers.
  • Innovating new business models, products, and services. Manufacturers are using data obtained from the use of products to improve the development of the next generation of products, and to create innovative after-sales service offerings. The emergence of real-time location data has created a new set of location-based mobile services from navigation to people tracking.
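The segmentation lever above can be sketched with a toy clustering example. The following is a minimal k-means implementation on made-up customer features (monthly spend, visit frequency); the data, feature names, and cluster count are purely illustrative assumptions, not taken from the report:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    recompute centroids as cluster means, repeat for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    # initialize centroids by picking k distinct points at random
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # distance of every point to every centroid -> nearest-centroid labels
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical customers described by (monthly spend, visits per month)
customers = np.array([[20, 1], [25, 2], [22, 1],    # low-spend group
                      [90, 8], [95, 9], [88, 7]],   # high-spend group
                     dtype=float)
labels, centroids = kmeans(customers, k=2)
```

In practice one would use a library implementation (e.g. scikit-learn's `KMeans`) and far richer features, but the mechanics of "segment, then tailor actions per segment" are the same.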

However, companies and policy makers must tackle significant hurdles to fully capture big data's potential. The United States alone faces a shortage of 140,000 to 190,000 people with analytical expertise and 1.5 million managers and analysts with the skills to understand and make decisions based on the analysis of big data. And there are difficult issues around privacy and security, as well as hurdles associated with technological limitations such as the difficulties of pooling data from legacy IT systems with incompatible formats.

Read the executive summary (PDF - 920 KB)  
Read the full report (PDF - 2.87 MB) 




Comment by Thomas Ball on May 18, 2011 at 5:24am
Matthew's observations are accurate. In spite of McKinsey's brand name, its published output (via MGI or the McKinsey Quarterly) is almost never on the cutting edge. Their article on Big Data is a case in point, since it is in many ways a rehash of ideas published in Super Crunchers (a 2008 NYT bestseller) or diffusing down from even earlier academic conferences, such as the joint NSF/ASA conference on Massive Data Analysis in the mid-90s, one of the first efforts to bridge the analytic gulf between a century of classical statistics (built on small datasets for practical reasons) and the mining of massive amounts of information.
Comment by Matthew OKane on May 18, 2011 at 2:09am

Although I think it is great that analytics is getting its time in the limelight (and we all know some execs won't do anything until McKinsey tell them to), I'm very sceptical about their findings and get the impression they are trying to throw everything onto the 'Big Data' bandwagon.

Their definition of Big Data:

'"Big Data" refers to datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze'

doesn't match many of the examples given in the document, which were obviously achieved with traditional database and analytical tools, using techniques that we have been applying for decades. They make no effort to distinguish the analytics from anything that was done before the term 'Big Data' came into common use.

In my opinion, 'Big Data' isn't about building a predictive model on 100 million observations instead of 100,000 (which typically isn't worth the effort). It's about mining vast amounts of unstructured data that would break traditional database models, or social network analysis that requires following every potential link across terabytes of data.
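The diminishing-returns point here can be illustrated with a toy sampling experiment (entirely synthetic data, illustrative only): the spread of a sample-based estimate shrinks only with the square root of the sample size, so a 100x increase in observations buys roughly a 10x reduction in noise.

```python
import numpy as np

rng = np.random.default_rng(42)
# a synthetic "population" of one million measurements
population = rng.normal(loc=50.0, scale=10.0, size=1_000_000)

# for each sample size, repeat the estimate 200 times and measure its spread
spread = {}
for n in (1_000, 10_000, 100_000):
    means = [rng.choice(population, size=n).mean() for _ in range(200)]
    spread[n] = float(np.std(means))  # shrinks roughly as 1 / sqrt(n)
```

Going from 1,000 to 100,000 observations cuts the spread by only about a factor of ten, while the compute cost grows a hundredfold, which is why simply scaling up a conventional predictive model rarely justifies the effort.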
