An Algorithmic Framework for Geo-Distributed Analytics

International Conference on NETwork Games, COntrol and OPtimisation

Large-scale cloud enterprises operate tens to hundreds of datacenters, running a variety of services that produce enormous amounts of data, such as search clicks and infrastructure operation logs. A recent research direction in both academia and industry is to process this "big data" across multiple datacenters, since the alternative of centralized processing can be too slow and costly (e.g., due to transferring all the data to a single location). Running such geo-distributed analytics jobs at scale gives rise to key resource management decisions: Where should each computation take place? Accordingly, which data should be moved to which location, and when? Which network paths should be used for moving the data? These decisions are complicated not only because they involve scheduling multiple types of resources (e.g., compute and network), but also because of the complex internal data flow of the jobs, which are typically structured as a DAG of tens of stages, each with up to thousands of tasks. Recent work has addressed the resource management problem by abstracting away certain aspects, such as the physical network connecting the datacenters, the DAG structure of the jobs, and/or the compute capacity constraints at the (possibly heterogeneous) datacenters. In this paper, we provide the first analytical model that captures all of these aspects, with the objective of minimizing the makespan of multiple geo-distributed jobs. We provide exact and approximate algorithms for certain practical scenarios, and suggest principled heuristics for other scenarios of interest.
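To make the setting concrete, the sketch below is a minimal, hypothetical illustration (not the paper's model): it represents a job as a DAG of stages, assigns each stage to a datacenter, and computes the resulting makespan under simplified assumptions about per-datacenter compute rates and pairwise inter-datacenter bandwidth. All names and parameters (e.g., COMPUTE_RATE, BANDWIDTH, the toy two-stage job) are assumptions introduced for illustration only.

```python
# Illustrative sketch (hypothetical, simplified): a geo-distributed job modeled
# as a DAG of stages; each stage is placed on a datacenter, and the makespan is
# the finish time of the last stage given simple compute-rate and WAN-bandwidth
# assumptions.
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    tasks: int                      # number of tasks in the stage
    work_per_task: float            # compute units per task
    output_gb: float                # data shipped to each downstream stage
    deps: list = field(default_factory=list)  # names of upstream stages

# Hypothetical datacenter parameters: compute rate (units/s) per datacenter and
# pairwise inter-datacenter bandwidth (GB/s).
COMPUTE_RATE = {"dc-east": 100.0, "dc-west": 60.0}
BANDWIDTH = {("dc-east", "dc-west"): 1.0, ("dc-west", "dc-east"): 1.0}

def makespan(stages, placement):
    """Finish time of the last stage for a given stage -> datacenter placement."""
    finish = {}
    for s in stages:  # assumes `stages` is listed in topological order
        dc = placement[s.name]
        ready = 0.0
        for dep in s.deps:
            src = placement[dep]
            transfer = 0.0
            if src != dc:  # cross-datacenter data movement over the WAN
                parent = next(p for p in stages if p.name == dep)
                transfer = parent.output_gb / BANDWIDTH[(src, dc)]
            ready = max(ready, finish[dep] + transfer)
        compute = s.tasks * s.work_per_task / COMPUTE_RATE[dc]
        finish[s.name] = ready + compute
    return max(finish.values())

# Toy two-stage job: a map stage feeding a reduce stage placed on different DCs.
job = [
    Stage("map", tasks=1000, work_per_task=0.5, output_gb=50.0),
    Stage("reduce", tasks=100, work_per_task=2.0, output_gb=5.0, deps=["map"]),
]
print(makespan(job, {"map": "dc-east", "reduce": "dc-west"}))
```

Even in this toy form, the coupling the abstract describes is visible: the placement decision trades off faster compute at one datacenter against the WAN transfer cost of moving intermediate data, and the DAG dependencies determine when each stage can start.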