TRANSCRIPT
Lectures 10 & 11: MapReduce & Hadoop
Placing MapReduce in the course context
Programming environments:
Threads: On what type of architecture? What are the best problems to solve with threads?
Message passing: On what type of architecture? What are the best problems to solve?
MapReduce: Architecture? Types of problems?
Huge data
The web: 20+ billion web pages x 20 KB = 400+ terabytes
One computer can read 30-35 MB/sec from disk: ~four months to read the web; ~1,000 hard drives to store the web
Sensors
Gene sequencing machines
Modern telescopes
Large Hadron Collider
Need to service those users and analyze that data
Google not only stores (multiple copies of) the web, it handles an average of 3,000 searches per second (7+ billion searches per month)!
The LHC will produce 700 MB of data per second – 60 terabytes per day – 20 petabytes per year. Hopefully they’re going to analyze this data, because it cost $6 billion to build the instrument…
The only hope: concurrent processing/parallel computing/distributed computing at enormous scale
MapReduce: The Google Solution
Answering a Google search request
There are multiple clusters (of thousands of computers each) all over the world
DNS routes your search to a nearby cluster
These are cheap standalone computers, rack-mounted, connected by commodity networking gear
Background
A cluster consists of: Google Web Servers, Index Servers, Doc Servers, and various other servers (ads, spell checking, etc.)
Within the cluster, load-balancing routes your search to a lightly-loaded Google Web Server (GWS), which will coordinate the search and response
The index is partitioned into “shards.” Each shard indexes a subset of the docs (web pages). Each shard can be searched by multiple computers – “index servers”
The GWS routes your search to one index server associated with each shard, through another load-balancer
When the dust has settled, the result is an ID for every doc satisfying your search, rank-ordered by relevance
The docs, too, are partitioned into “shards” – the partitioning is a hash on the doc ID. Each shard contains the full text of a subset of the docs. Each shard can be searched by multiple computers – “doc servers”
The GWS sends appropriate doc IDs to one doc server associated with each relevant shard
When the dust has settled, the result is a URL, a title, and a summary for every relevant doc
Hundreds of computers involved in responding to a single search request
System requirements:
Fault-Tolerant: It can recover from component failures without performing incorrect actions
Highly Available: It can restore operations, permitting it to resume providing services even when some components have failed
Recoverable: Failed components can restart themselves and rejoin the system, after the cause of failure has been repaired
Consistent: The system can coordinate actions by multiple components, often in the presence of concurrency and failure
Scalable: It can operate correctly even as some aspect of the system is scaled to a larger size
Predictable Performance: The ability to provide desired responsiveness in a timely manner
Secure: The system authenticates access to data and services
The system also must support a straightforward programming model
For efficiency, debugging, reduced costs, etc.
And it must be cheap
A Google rack (176 2-GHz Xeon CPUs, 176 GB of RAM, 7 TB of disk) costs about $300K; 6,000 racks ~ $2B
You could easily pay 2x this or more for “more robust” hardware (e.g., high-quality SCSI disks, bleeding-edge CPUs)
A “traditional” multiprocessor with very high bisection bandwidth costs much more (and cost would affect scale)
No assumptions that:
Hardware: Components are reliable; Components are homogeneous
Software: Is correct
Network: Latency is zero; Bandwidth is infinite; Is secure
Overall system: Configuration is stable; There is one administrator
How to enable simplified coding?
Recognize that many Google applications have the same structure:
Apply a “map” operation to each logical record in order to compute a set of intermediate key/value pairs
Apply a “reduce” operation to all the values that share the same key in order to combine the derived data appropriately
Example: Count the number of occurrences of each word in a large collection of documents
Map: Emit <word, 1> each time you encounter a word
Reduce: Sum the values for each word
Build a runtime library that handles all the details, accepting a couple of customization functions from the user – a Map function and a Reduce function
That’s what MapReduce is
Supported by the Google File System
Augmented by BigTable (not quite a database system)
Some terminology
MapReduce:
The LISP functional programming “Map / Reduce” way of thinking about problem solving
The name of Google’s runtime library supporting this programming paradigm at enormous scale
Hadoop: An open-source implementation of the MapReduce functionality
Dryad: Microsoft’s version
Two Major Sections
Lisp/ML map/fold review
MapReduce
Functional Programming Review
Functional operations do not modify data structures: they always create new ones
Original data still exists in unmodified form
Data flows are implicit in program design
Order of operations does not matter
Functional Programming Review
fun foo(l: int list) =
  sum(l) + mul(l) + length(l)
Order of sum() and mul(), etc. does not matter – they do not modify l
“Updates” Don’t Modify Structures
fun append(x, lst) =
  let lst' = reverse lst in
    reverse (x :: lst')
The append() function above reverses the list, puts the new element on the front, and reverses the result – the net effect is appending x to the end.
But it never modifies lst!
Functions Can Be Used As Arguments
fun DoDouble(f, x) = f (f x)
It does not matter what f does to its argument; DoDouble() will do it twice.
Map
map f lst: (’a->’b) -> (’a list) -> (’b list)
Creates a new list by applying f to each element of the input list; returns output in order.
Fold
fold f x0 lst: ('a*'b->'b)->'b->('a list)->'b
Moves across a list, applying f to each element plus an accumulator. f returns the next accumulator value, which is combined with the next element of the list
map Implementation
fun map f []      = []
  | map f (x::xs) = (f x) :: (map f xs)
This implementation moves left-to-right across the list, mapping elements one at a time
… But does it need to?
Implicit Parallelism In map
In a purely functional setting, elements of a list being computed by map cannot see the effects of the computations on other elements
If order of application of f to elements in list is commutative, we can reorder or parallelize execution
This is the “secret” that MapReduce exploits
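As a small illustration (mine, not from the lecture): in Java, a side-effect-free function can be mapped over a list sequentially or in parallel with identical results, which is exactly the property MapReduce relies on.

import java.util.List;
import java.util.stream.Collectors;

// Because square() has no side effects, applying it to the elements in any
// order – or in parallel – produces the same result list.
public class ParallelMapDemo {
  static int square(int x) { return x * x; }

  public static void main(String[] args) {
    List<Integer> squares = List.of(1, 2, 3, 4, 5).parallelStream()
        .map(ParallelMapDemo::square)
        .collect(Collectors.toList());   // encounter order is preserved
    System.out.println(squares);         // [1, 4, 9, 16, 25]
  }
}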
MapReduce
Motivation: Large Scale Data Processing
Want to process lots of data (> 1 TB)
Want to parallelize across hundreds/thousands of CPUs
… Want to make this easy
MapReduce
Automatic parallelization & distribution
Fault-tolerant
Provides status and monitoring tools
Clean abstraction for programmers
Programming Model
Borrows from functional programming
Users implement interface of two functions:
map (in_key, in_value) -> (out_key, intermediate_value) list
reduce (out_key, intermediate_value list) -> out_value list
map
Records from the data source (lines out of files, rows of a database, etc) are fed into the map function as key*value pairs: e.g., (filename, line).
map() produces one or more intermediate values along with an output key from the input.
map (in_key, in_value) -> (out_key, intermediate_value) list
reduce
After the map phase is over, all the intermediate values for a given output key are combined together into a list
reduce() combines those intermediate values into one or more final values for that same output key
(in practice, usually only one final value per key)
Reduce
reduce (out_key, intermediate_value list) -> out_value list
[Diagram: each data store supplies input key*value pairs to a map task; the map tasks emit intermediate (key, values…) pairs. == Barrier ==: aggregates intermediate values by output key. Each reduce task then receives one key together with its list of intermediate values and produces the final values for that key.]
Parallelism
map() functions run in parallel, creating different intermediate values from different input data sets
reduce() functions also run in parallel, each working on a different output key
All values are processed independently
Bottleneck: the reduce phase can’t start until the map phase is completely finished.
Example: Count word occurrences

map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, 1);

reduce(String output_key, Iterator<int> intermediate_values):
  // output_key: a word
  // intermediate_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += v;
  Emit(result);
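To make the grouping step between the two phases concrete, here is a minimal single-machine Java sketch of the same word count (illustrative only, not the lecture’s code or Google’s library): map emits (word, 1) pairs, the “barrier” groups the pairs by key, and reduce sums each key’s list.

import java.util.*;
import java.util.stream.*;

// A single-machine sketch of the word-count job above (not MapReduce itself).
public class LocalWordCount {
  public static void main(String[] args) {
    List<String> documents = List.of("the quick brown fox", "the lazy dog and the fox");

    // Map phase: each document independently emits (word, 1) pairs.
    List<Map.Entry<String, Integer>> intermediate = documents.stream()
        .flatMap(doc -> Arrays.stream(doc.split("\\s+")).map(w -> Map.entry(w, 1)))
        .collect(Collectors.toList());

    // Barrier: aggregate intermediate values by output key.
    Map<String, List<Integer>> grouped = intermediate.stream()
        .collect(Collectors.groupingBy(Map.Entry::getKey,
            Collectors.mapping(Map.Entry::getValue, Collectors.toList())));

    // Reduce phase: one reduce call per key, summing its list of counts.
    grouped.forEach((word, counts) ->
        System.out.println(word + "\t" + counts.stream().mapToInt(Integer::intValue).sum()));
  }
}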
Locality
Master program divvies up tasks based on location of data: tries to have map() tasks on same machine as physical file data, or at least same rack
map() task inputs are divided into 64 MB blocks: same size as Google File System chunks
Fault Tolerance
Master detects worker failures
Re-executes completed & in-progress map() tasks
Re-executes in-progress reduce() tasks
Master notices particular input key/values cause crashes in map(), and skips those values on re-execution. Effect: Can work around bugs in third-party libraries!
Optimizations
No reduce can start until map is complete: a single slow disk controller can rate-limit the whole process
Master redundantly executes “slow-moving” map tasks; uses results of first copy to finish
Why is it safe to redundantly execute map tasks? Wouldn’t this mess up the total computation?
Combining Phase
Run on mapper nodes after map phase
“Mini-reduce,” only on local map output
Used to save bandwidth before sending data to the full reducer
The reducer can serve as the combiner if the reduce operation is commutative & associative
Combiner, graphically
[Diagram: on one mapper machine, the combiner replaces the raw map output being sent to the reducer with locally combined output, shrinking what goes over the network.]
MapReduce Conclusions
MapReduce has proven to be a useful abstraction
Greatly simplifies large-scale computations at Google
Functional programming paradigm can be applied to large-scale applications
Fun to use: focus on the problem, let the library deal with messy details
Lecture 11 – Hadoop Technical Introduction
Terminology
Google calls it:    Hadoop equivalent:
MapReduce           Hadoop
GFS                 HDFS
Bigtable            HBase
Chubby              ZooKeeper
Some MapReduce Terminology
Job – A “full program”: an execution of a Mapper and Reducer across a data set
Task – An execution of a Mapper or a Reducer on a slice of data; a.k.a. Task-In-Progress (TIP)
Task Attempt – A particular instance of an attempt to execute a task on a machine
Terminology Example
Running “Word Count” across 20 files is one job
20 files to be mapped imply 20 map tasks + some number of reduce tasks
At least 20 map task attempts will be performed… more if a machine crashes, etc.
Task Attempts
A particular task will be attempted at least once, possibly more times if it crashes
If the same input causes crashes over and over, that input will eventually be abandoned
Multiple attempts at one task may occur in parallel with speculative execution turned on
Task ID from TaskInProgress is not a unique identifier; don’t use it that way
MapReduce: High Level
[Diagram: a MapReduce job submitted by a client computer goes to the JobTracker on the master node (in our case: circe.rc.usf.edu); each slave node runs a TaskTracker, which launches task instances.]
Nodes, Trackers, Tasks
Master node runs JobTracker instance, which accepts Job requests from clients
TaskTracker instances run on slave nodes
TaskTracker forks separate Java process for task instances
Job Distribution
MapReduce programs are contained in a Java “jar” file + an XML file containing serialized program configuration options
Running a MapReduce job places these files into the HDFS and notifies TaskTrackers where to retrieve the relevant program code
… Where’s the data distribution?
Data Distribution
Implicit in design of MapReduce!
All mappers are equivalent, so map whatever data is local to a particular node in HDFS
If lots of data does happen to pile up on the same node, nearby nodes will map instead
Data transfer is handled implicitly by HDFS
What Happens In MapReduce? Depth First
Job Launch Process: Client
Client program creates a JobConf
Identify classes implementing Mapper and Reducer interfaces: JobConf.setMapperClass(), setReducerClass()
Specify inputs, outputs: FileInputFormat.addInputPath(), FileOutputFormat.setOutputPath()
Optionally, other options too: JobConf.setNumReduceTasks(), JobConf.setOutputFormat()…
Job Launch Process: JobClient
Pass JobConf to JobClient.runJob() or submitJob()
runJob() blocks, submitJob() does not
JobClient:
Determines proper division of input into InputSplits
Sends job data to master JobTracker server
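Putting the last two slides together, a minimal word-count driver might look like the sketch below. It uses the old org.apache.hadoop.mapred API described in these slides; the class names WordCountDriver, WordCountMapper, and WordCountReducer are mine (the latter two are sketched later in this lecture), and the input/output paths come from the command line.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCountDriver.class);
    conf.setJobName("wordcount");

    conf.setOutputKeyClass(Text.class);            // output key type
    conf.setOutputValueClass(IntWritable.class);   // output value type

    conf.setMapperClass(WordCountMapper.class);
    conf.setCombinerClass(WordCountReducer.class); // "mini-reduce" on each mapper node
    conf.setReducerClass(WordCountReducer.class);

    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    FileInputFormat.addInputPath(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);   // blocks until the job completes
  }
}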
Job Launch Process: JobTracker
JobTracker:
Inserts jar and JobConf (serialized to XML) in shared location
Posts a JobInProgress to its run queue
Job Launch Process: TaskTracker
TaskTrackers running on slave nodes periodically query JobTracker for work
Retrieve job-specific jar and config
Launch task in separate instance of Java; main() is provided by Hadoop
Job Launch Process: Task
TaskTracker.Child.main():
Sets up the child TaskInProgress attempt
Reads XML configuration
Connects back to necessary MapReduce components via RPC
Uses TaskRunner to launch user process
Job Launch Process: TaskRunner
TaskRunner, MapTaskRunner, MapRunner work in a daisy-chain to launch your Mapper
Task knows ahead of time which InputSplits it should be mapping
Calls Mapper once for each record retrieved from the InputSplit
Running the Reducer is much the same
Creating the Mapper
You provide the instance of Mapper; it should extend MapReduceBase
One instance of your Mapper is initialized by the MapTaskRunner for a TaskInProgress
Exists in a separate process from all other instances of Mapper – no data sharing!
Mapper
void map(K1 key,
         V1 value,
         OutputCollector<K2, V2> output,
         Reporter reporter)
K types implement WritableComparable
V types implement Writable
What is Writable?
Hadoop defines its own “box” classes for strings (Text), integers (IntWritable), etc.
All values are instances of Writable
All keys are instances of WritableComparable
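As a concrete example tying the Mapper interface and the Writable box classes together, here is a sketch of a word-count Mapper in the old-API style described above; the class name WordCountMapper is mine, and it assumes TextInputFormat-style input (byte offset as the key, one line of text as the value).

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class WordCountMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  // Emits (word, 1) for every word in the input line.
  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output,
                  Reporter reporter) throws IOException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      output.collect(word, ONE);
    }
  }
}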
Getting Data To The Mapper
[Diagram: the InputFormat splits each input file into InputSplits; a RecordReader turns each InputSplit into (k, v) records, which are fed to a Mapper that produces intermediates.]
Reading Data
Data sets are specified by InputFormats
Defines input data (e.g., a directory)
Identifies partitions of the data that form an InputSplit
Factory for RecordReader objects to extract (k, v) records from the input source
FileInputFormat and Friends
TextInputFormat – Treats each ‘\n’-terminated line of a file as a value
KeyValueTextInputFormat – Maps ‘\n’-terminated text lines of “k SEP v”
SequenceFileInputFormat – Binary file of (k, v) pairs with some add’l metadata
SequenceFileAsTextInputFormat – Same, but maps (k.toString(), v.toString())
Filtering File Inputs
FileInputFormat will read all files out of a specified directory and send them to the mapper
Delegates filtering this file list to a method subclasses may override
e.g., create your own “xyzFileInputFormat” to read *.xyz from the directory list
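One way to get that effect (a sketch, not the lecture’s code): recent Hadoop releases let you register a PathFilter on FileInputFormat rather than subclassing; on older releases you would override the InputFormat’s file-listing method instead. The class name XyzPathFilter is hypothetical.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.mapred.FileInputFormat;

// Keeps only *.xyz files from the input directory listing.
public class XyzPathFilter implements PathFilter {
  public boolean accept(Path path) {
    return path.getName().endsWith(".xyz");
  }
}

// In the driver, assuming a JobConf named conf:
//   FileInputFormat.setInputPathFilter(conf, XyzPathFilter.class);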
Record Readers
Each InputFormat provides its own RecordReader implementation
Provides (unused?) capability multiplexing
LineRecordReader – Reads a line from a text file
KeyValueRecordReader – Used by KeyValueTextInputFormat
Input Split Size
FileInputFormat will divide large files into chunks
Exact size controlled by mapred.min.split.size
RecordReaders receive file, offset, and length of chunk
Custom InputFormat implementations may override split size – e.g., “NeverChunkFile”
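A sketch of that “NeverChunkFile” idea (the class name is mine, building on the old-API TextInputFormat): FileInputFormat exposes an isSplitable() hook, and returning false forces each file to become exactly one InputSplit.

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.TextInputFormat;

// Each input file becomes exactly one InputSplit, so a single map task
// reads the whole file no matter how large it is.
public class NeverChunkFileInputFormat extends TextInputFormat {
  protected boolean isSplitable(FileSystem fs, Path file) {
    return false;
  }
}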
Sending Data To Reducers
Map function receives OutputCollector object
OutputCollector.collect() takes (k, v) elements
Any (WritableComparable, Writable) can be used
By default, mapper output type assumed to be same as reducer output type
WritableComparator
Compares WritableComparable data
Will call WritableComparable.compare()
Can provide fast path for serialized data
JobConf.setOutputValueGroupingComparator()
Sending Data To The Client
Reporter object sent to Mapper allows simple asynchronous feedback:
incrCounter(Enum key, long amount)
setStatus(String msg)
Allows self-identification of input InputSplit: getInputSplit()
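For instance (a hypothetical counter enum, not from the lecture), a Mapper can report bad records and progress through its Reporter parameter:

// Declared somewhere in the job's classes:
public enum ParseCounters { MALFORMED_RECORDS }

// Inside map(), using the Reporter parameter:
//   reporter.incrCounter(ParseCounters.MALFORMED_RECORDS, 1);
//   reporter.setStatus("skipping a malformed record");
//   InputSplit split = reporter.getInputSplit();  // which split is this task reading?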
Partition And Shuffle
[Diagram: each Mapper’s intermediates pass through a Partitioner; shuffling then routes every partition to the Reducer responsible for it, so each Reducer receives all intermediates for its keys.]
Partitioner
int getPartition(key, val, numPartitions)
Outputs the partition number for a given key
One partition == values sent to one Reduce task
HashPartitioner used by default: uses key.hashCode() to return partition num
JobConf sets Partitioner implementation
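A sketch of a custom Partitioner in the old-API style (the class FirstLetterPartitioner is hypothetical): it sends every word starting with the same letter to the same reduce task, instead of the default hashCode()-based routing.

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class FirstLetterPartitioner implements Partitioner<Text, IntWritable> {
  public void configure(JobConf job) { }   // no configuration needed

  public int getPartition(Text key, IntWritable value, int numPartitions) {
    String s = key.toString();
    int first = s.isEmpty() ? 0 : Character.toLowerCase(s.charAt(0));
    return (first & Integer.MAX_VALUE) % numPartitions;  // always a valid, non-negative partition
  }
}

// In the driver: conf.setPartitionerClass(FirstLetterPartitioner.class);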
Reduction
reduce( K2 key, Iterator<V2> values, OutputCollector<K3, V3> output, Reporter reporter)
Keys & values sent to one partition all go to the same reduce task
Calls are sorted by key – “earlier” keys are reduced and output before “later” keys
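Continuing the word-count example, a matching Reducer in the old-API style might look like this (the class name WordCountReducer is mine; it is the class referenced by the driver sketch earlier):

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCountReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  // Receives one word plus an iterator over all of its intermediate counts
  // and emits (word, total).
  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}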
Finally: Writing The Output
[Diagram: each Reducer writes its results through a RecordWriter, supplied by the OutputFormat, into its own output file.]
OutputFormat
Analogous to InputFormat
TextOutputFormat – Writes “key val\n” strings to output file
SequenceFileOutputFormat – Uses a binary format to pack (k, v) pairs
NullOutputFormat – Discards output