
HOG: Distributed Hadoop MapReduce on the Grid

Chen He, Derek Weitzel, David Swanson, Ying Lu
Computer Science and Engineering, University of Nebraska-Lincoln
Email: {che, dweitzel, dswanson, ylu}@cse.unl.edu

    Abstract: MapReduce is a powerful data processing platform for commercial and academic applications. In this paper, we build a novel Hadoop MapReduce framework, Hadoop On the Grid (HOG), executed on the Open Science Grid, which spans multiple institutions across the United States. It is different from previous MapReduce platforms that run on dedicated environments like clusters or clouds. HOG provides a free, elastic, and dynamic MapReduce environment on the opportunistic resources of the grid. In HOG, we improve Hadoop's fault tolerance for wide area data analysis by mapping data centers across the U.S. to virtual racks and creating multi-institution failure domains. Our modifications to the Hadoop framework are transparent to existing Hadoop MapReduce applications. In the evaluation, we successfully extend HOG to 1100 nodes on the grid. Additionally, we evaluate HOG with a simulated Facebook Hadoop MapReduce workload. We conclude that HOG's rapid scalability can provide comparable performance to a dedicated Hadoop cluster.

    I. INTRODUCTION

    MapReduce [1] is a framework pioneered by Google for processing large amounts of data in a distributed environment. Hadoop [2] is the open source implementation of the MapReduce framework. Due to the simplicity of its programming model and its run-time tolerance for node failures, MapReduce is widely used by companies such as Facebook [3], the New York Times [4], etc. Furthermore, scientists also employ Hadoop to acquire scalable and reliable analysis and storage services. The University of Nebraska-Lincoln constructed a 1.6 PB Hadoop Distributed File System to store Compact Muon Solenoid data from the Large Hadron Collider [5], as well as data for the Open Science Grid (OSG) [6]. At the University of Maryland, researchers developed BlastReduce, based on Hadoop MapReduce, to analyze DNA sequences [7]. As Hadoop MapReduce became popular, the number and scale of MapReduce programs grew increasingly large.

    To utilize Hadoop MapReduce, users need a Hadoop platform running on a dedicated environment like a cluster or cloud. In this paper, we construct a novel Hadoop platform, Hadoop on the Grid (HOG), based on the OSG [6], which can provide scalable and free-of-charge services for users who plan to use Hadoop MapReduce. It can be ported to other large-scale distributed grid systems with minor modifications.

    The OSG, which is HOG's physical environment, is composed of approximately 60,000 CPU cores and spans 109 sites in the United States. In this nation-wide distributed system, node failure is a common occurrence [8]. When running on the OSG, users from institutions that do not own resources run opportunistically and can be preempted at any time. A preemption at a remote OSG site can be caused by a job running over its allocated time or by the owner of the machine needing the resources. Preemption is determined by the remote site's policies, which are outside the control of the OSG user. Therefore, the high node failure rate is the largest barrier that HOG addresses.

    Hadoop's fault tolerance focuses on two failure levels and uses replication to avoid data loss. The first level is the node level, which means a node failure should not affect the data integrity of the cluster. The second level is the rack level, which means the data is safe even if a whole rack of nodes fails. In HOG, we introduce another level, the site failure level. Since HOG runs on multiple sites within the OSG, it is possible that an entire site could fail. HOG's data placement and replication policy takes site failure into account when it places data blocks. This extension to a third failure level also brings data locality benefits, which we explain in Section III.
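
    To make the virtual-rack idea concrete, the sketch below shows one way a site-to-rack mapping could be plugged into Hadoop's rack-awareness extension point. It is only an illustration against the Hadoop 1.x DNSToSwitchMapping interface; the hostname convention and the SiteRackMapping class are hypothetical and do not reproduce HOG's actual placement policy (described in Section III).

      import java.util.ArrayList;
      import java.util.List;
      import org.apache.hadoop.net.DNSToSwitchMapping;

      // Hypothetical mapper from worker hostnames to virtual "racks"
      // (one rack per grid site), pluggable in Hadoop 1.x via the
      // topology.node.switch.mapping.impl property.
      public class SiteRackMapping implements DNSToSwitchMapping {
          public List<String> resolve(List<String> names) {
              List<String> racks = new ArrayList<String>();
              for (String host : names) {
                  // Assumed convention: the worker's site is taken from its
                  // domain, e.g. node42.red.unl.edu -> virtual rack /unl.edu
                  racks.add("/" + siteOf(host));
              }
              return racks;
          }

          private String siteOf(String host) {
              String[] parts = host.split("\\.");
              if (parts.length < 2) {
                  return "default-site";
              }
              return parts[parts.length - 2] + "." + parts[parts.length - 1];
          }
      }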

    The rest of this paper is organized as follows. Section II gives background information about Hadoop and the OSG. We describe the architecture of HOG in Section III. In Section IV, we show our evaluation of HOG with a well-known workload. Section V briefly describes related work, and Section VI discusses possible future research. Finally, we summarize our conclusions in Section VII.

    II. BACKGROUND

    A. Hadoop

    A Hadoop cluster is composed of two parts: the Hadoop Distributed File System and MapReduce.

    A Hadoop cluster uses the Hadoop Distributed File System (HDFS) [9] to manage its data. HDFS provides storage for the input and output data of MapReduce jobs. It is designed as a highly fault-tolerant, high-throughput, and high-capacity distributed file system. It is suitable for storing terabytes or petabytes of data on clusters and has flexible hardware requirements, typically commodity hardware such as personal computers. The significant differences between HDFS and other distributed file systems are its write-once-read-many and streaming access models, which make HDFS efficient at distributing and processing data, reliably storing large amounts of data, and robustly incorporating heterogeneous hardware and operating system environments. HDFS divides each file into fixed-size blocks (e.g., 64 MB) and stores multiple copies (three by default) of each block on cluster node disks. The distribution of data blocks increases throughput and fault tolerance. HDFS follows the master/slave architecture.
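
    As a small illustration of the block-size and replication parameters mentioned above, the following sketch writes a file through the Hadoop 1.x Java FileSystem API with an explicit replication factor and block size. The Namenode address and output path are hypothetical placeholders; real clients normally pick these settings up from the cluster configuration files.

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class HdfsWriteExample {
          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              // Hypothetical Namenode address (Hadoop 1.x property name).
              conf.set("fs.default.name", "hdfs://namenode.example.org:9000");

              FileSystem fs = FileSystem.get(conf);

              short replication = 3;               // three copies of each block
              long blockSize = 64L * 1024 * 1024;  // 64 MB blocks
              Path out = new Path("/user/demo/sample.txt");

              // create(path, overwrite, bufferSize, replication, blockSize)
              FSDataOutputStream stream =
                      fs.create(out, true, 4096, replication, blockSize);
              stream.writeBytes("hello HDFS\n");
              stream.close();
              fs.close();
          }
      }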

    Fig. 1. HDFS Structure. Source: http://hadoop.apache.org

    The master node is called the Namenode; it manages the file system namespace and regulates client access to the data. There are a number of worker nodes, called Datanodes, which store the actual data in units of blocks. The Namenode maintains a mapping table that maps data blocks to Datanodes in order to process write and read requests from HDFS clients. It is also in charge of file system namespace operations such as closing, renaming, and opening files and directories. HDFS allows a secondary Namenode to periodically save a copy of the metadata stored on the Namenode in case of Namenode failure. A Datanode stores data blocks on its local disk and executes instructions such as data replacement, creation, deletion, and replication from the Namenode. Figure 1 (adapted from the Apache Hadoop project [10]) illustrates the HDFS architecture.

    A Datanode periodically reports its status through a heartbeat message and asks the Namenode for instructions. Every Datanode listens to the network so that other Datanodes and users can request read and write operations. The heartbeat also helps the Namenode detect connectivity with its Datanodes. If the Namenode does not receive a heartbeat from a Datanode within the configured period of time, it marks the node as down. Data blocks stored on this node are considered lost, and the Namenode automatically replicates the blocks of the lost node onto other Datanodes.

    Hadoop MapReduce is the computation framework built upon HDFS. There are two versions of Hadoop MapReduce: MapReduce 1.0 and MapReduce 2.0 (YARN [11]). In this paper, we only introduce MapReduce 1.0, which consists of two stages: map and reduce. These two stages take a set of input key/value pairs and produce a set of output key/value pairs. When a MapReduce job is submitted to the cluster, it is divided into M map tasks and R reduce tasks, where each map task processes one block (e.g., 64 MB) of input data.
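
    The canonical word-count program illustrates this key/value model. The sketch below uses the Hadoop 1.x (org.apache.hadoop.mapred) API; it is a generic example rather than part of HOG, and the input and output paths are supplied on the command line.

      import java.io.IOException;
      import java.util.Iterator;
      import java.util.StringTokenizer;

      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.io.IntWritable;
      import org.apache.hadoop.io.LongWritable;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapred.*;

      public class WordCount {
          // Map stage: emit (word, 1) for every word in the input block.
          public static class Map extends MapReduceBase
                  implements Mapper<LongWritable, Text, Text, IntWritable> {
              private final static IntWritable one = new IntWritable(1);
              private final Text word = new Text();

              public void map(LongWritable key, Text value,
                              OutputCollector<Text, IntWritable> output,
                              Reporter reporter) throws IOException {
                  StringTokenizer itr = new StringTokenizer(value.toString());
                  while (itr.hasMoreTokens()) {
                      word.set(itr.nextToken());
                      output.collect(word, one);
                  }
              }
          }

          // Reduce stage: sum the counts emitted for each word.
          public static class Reduce extends MapReduceBase
                  implements Reducer<Text, IntWritable, Text, IntWritable> {
              public void reduce(Text key, Iterator<IntWritable> values,
                                 OutputCollector<Text, IntWritable> output,
                                 Reporter reporter) throws IOException {
                  int sum = 0;
                  while (values.hasNext()) {
                      sum += values.next().get();
                  }
                  output.collect(key, new IntWritable(sum));
              }
          }

          public static void main(String[] args) throws Exception {
              JobConf conf = new JobConf(WordCount.class);
              conf.setJobName("wordcount");
              conf.setOutputKeyClass(Text.class);
              conf.setOutputValueClass(IntWritable.class);
              conf.setMapperClass(Map.class);
              conf.setReducerClass(Reduce.class);
              FileInputFormat.setInputPaths(conf, new Path(args[0]));
              FileOutputFormat.setOutputPath(conf, new Path(args[1]));
              JobClient.runJob(conf);
          }
      }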

    A Hadoop cluster uses slave (worker) nodes to execute map and reduce tasks. There are limitations on the number of map and reduce tasks that a worker node can accept and execute simultaneously; that is, each worker node has a fixed number of map slots and reduce slots. Periodically, a worker node sends a heartbeat signal to the master node. Upon receiving a heartbeat from a worker node that has empty map/reduce slots, the master node invokes the MapReduce scheduler to assign tasks to the worker node. A worker node that is assigned a map task reads the content of the corresponding input data block from HDFS, possibly from a remote worker node. The worker node parses input key/value pairs out of the block and passes each pair to the user-defined map function. The map function generates intermediate key/value pairs, which are buffered in memory, periodically written to the local disk, and divided into R regions by the partitioning function. The locations of these intermediate data are passed back to the master node, which is responsible for forwarding these locations to reduce tasks.
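
    The partitioning function that splits intermediate keys into R regions is pluggable. The sketch below mirrors the behavior of Hadoop's default hash partitioner using the Hadoop 1.x Partitioner interface; it is shown only to make the idea concrete and is not specific to HOG.

      import org.apache.hadoop.mapred.JobConf;
      import org.apache.hadoop.mapred.Partitioner;

      // Assigns each intermediate key to one of the R reduce regions by
      // hashing the key, as Hadoop's default HashPartitioner does.
      public class SimpleHashPartitioner<K, V> implements Partitioner<K, V> {
          public int getPartition(K key, V value, int numReduceTasks) {
              // Mask the sign bit so the modulus result is non-negative.
              return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
          }

          public void configure(JobConf job) {
              // No configuration needed for this sketch.
          }
      }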

    A reduce task uses remote procedure calls to read the intermediate data generated by the M map tasks of the job. Each reduce task is responsible for a region (partition) of intermediate data with certain keys. Thus, it has to retrieve its partition of data from all worker nodes that have executed the M map tasks. This process is called the shuffle, which involves many-to-many communication among worker nodes. The reduce task then reads in the intermediate data and invokes the reduce function to produce the final output data (i.e., output key/value pairs) for its reduce partition.

    B. Open Science Grid

    The OSG [6] is a national organization that provides services and software to form a distributed network of clusters. The OSG is composed of 100+ sites, primarily in the United States. Figure 2 shows the OSG sites across the United States.

    Each OSG user has a personal certificate that is trusted by a Virtual Organization (VO) [12]. A VO is a set of individuals and/or institutions that perform computational research and share resources. A user receives from the VO an X.509 user certificate [13], which is used to authenticate with remote resources.

    Users submit jobs to remote gatekeepers. OSG gatekeepers are a combination of different software based on the Globus Toolkit [14], [15]. Users can use different tools that can communicate using the Globus resource s