William Cleveland, Saptarshi Guha, Ryan Hafen, Jianfu Li, Jeremiah Rounds, Bowei Xi, and Jin Xia
Divide and Recombine (D&R) is an approach to the analysis of large complex data. The data are parallelized: divided into subsets in one or more ways by the data analyst. Numeric and visualization methods are applied to each subset separately, and the results of each method are then recombined across subsets. By introducing exploitable parallelization of the data, D&R makes it possible to apply almost any existing analysis method from statistics, machine learning, and visualization to large complex data. This enables a comprehensive, detailed analysis that does not lose important information in the data through inappropriate data reductions. RHIPE, the R and Hadoop Integrated Programming Environment, provides D&R analysis wholly from within R. Transparent to the user, Hadoop distributes the subsets across a cluster; schedules and carries out each subset computation with an algorithm that attempts to use a processor as close to each subset as possible; computes across the outputs of the subset computations when needed; provides fault tolerance; and enables fair simultaneous sharing of the cluster by multiple users through fine-grained intermingling of all subset computations. The division into subsets, the subset computations, and the output computations are specified by R commands, which are passed to RHIPE R commands that manage the communication with Hadoop. A cluster architecture, Delta Rho, accommodates the computational tasking of D&R with R/RHIPE, which is very diverse, ranging from big distributed jobs (elephants) to nearly instantaneous interactive commands (mice).
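The divide, apply, and recombine steps described above can be sketched in a minimal, self-contained way. The following Python example is a conceptual illustration only, not the RHIPE API: the synthetic dataset, the random-replicate division into ten subsets, the per-subset least-squares fit, and the coefficient-averaging recombination are all illustrative assumptions.

```python
# Sketch of the D&R idea: divide data into subsets, apply a method to each
# subset independently, then recombine the per-subset results.
# Illustrative assumptions, not the RHIPE API: synthetic data, 10 subsets,
# averaging as the recombination.
import random

random.seed(0)

# Synthetic data: y = 2*x + 1 + small Gaussian noise.
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1)) for x in range(1000)]

def divide(records, n_subsets):
    """Divide: a random-replicate division into n_subsets subsets."""
    records = list(records)
    random.shuffle(records)
    return [records[i::n_subsets] for i in range(n_subsets)]

def fit(subset):
    """Apply: ordinary least squares for y = a + b*x on one subset."""
    n = len(subset)
    mx = sum(x for x, _ in subset) / n
    my = sum(y for _, y in subset) / n
    b = sum((x - mx) * (y - my) for x, y in subset) / \
        sum((x - mx) ** 2 for x, _ in subset)
    return my - b * mx, b          # (intercept a, slope b)

def recombine(results):
    """Recombine: average the per-subset coefficient estimates."""
    n = len(results)
    return (sum(a for a, _ in results) / n,
            sum(b for _, b in results) / n)

subsets = divide(data, n_subsets=10)
a_hat, b_hat = recombine([fit(s) for s in subsets])
print(a_hat, b_hat)   # close to the true coefficients (1, 2)
```

In D&R with RHIPE, each `fit`-style computation would run as a distributed Hadoop task over subsets stored across the cluster; here the map over subsets is sequential purely for clarity.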
Research in statistical theory and methods for D&R is very broad because division and recombination methods must depend on the structure of the data being analyzed.