Intro to Cloud Computing, July+August 2015
8/5/15
WiSe Lab www.cs.wmich.edu/wise
Ajay Gupta, WMU-CS 1
CS5950 Introduction to Cloud Computing
Summer II 2015, http://www.cs.wmich.edu/gupta/teaching/cs5950/sumII10cloudComputing/index.html
Ajay Gupta B239, CEAS
Computer Science Department Western Michigan University
276-3104
1 Intro to Cloud Computing 2015
WiSe Lab @ WMU www.cs.wmich.edu/wise
Acknowledgements
• I have liberally borrowed these slides and material from a number of sources, including:
  – The Web: MIT, Harvard, UMD, UCSD, UW, Clarkson, …
  – Amazon, Google, IBM, Apache, ManjraSoft, CloudBook, …
• Thanks to the original authors, including Ives, Dyer, Lin, Dean, Buyya, Ghemawat, Fanelli, Bisciglia, Kimball, Michels-Slettvet, …
• If I have missed any, it's purely unintentional. My sincere appreciation to those authors and their creative minds.
Today’s Topics
• More on:
  – MapReduce
  – Distributed file systems
• Synchronization and coordinating partial results
• Use of complex keys and values
THE ARCHITECTURE OF MAPREDUCE
What is MapReduce?
• A famous distributed programming model
• In many circles, considered the key building block for much of Google’s data analysis
  – Example of usage rates for one language that compiled to MapReduce, in 2010:
    “… [O]n one dedicated Workqueue cluster with 1500 Xeon CPUs, there were 32,580 Sawzall jobs launched, using an average of 220 machines each. While running those jobs, 18,636 failures occurred (application failure, network outage, system crash, etc.) that triggered rerunning some portion of the job. The jobs read a total of 3.2×10^15 bytes of data (2.8 PB) and wrote 9.9×10^12 bytes (9.3 TB).”
  – Other similar languages: Yahoo’s Pig Latin and Pig; Microsoft’s Dryad
• Cloned in open source: Hadoop, http://hadoop.apache.org/
• Many “successors” now – just google and find out!
The MapReduce programming model
• Simple distributed functional programming primitives
• Modeled after Lisp primitives:
  – map (apply a function to all items in a collection), and
  – reduce (apply a function to a set of items with a common key)
• We start with:
  – A user-defined function to be applied to all data, map: (key, value) → (key, value)
  – Another user-specified operation, reduce: (key, {set of values}) → result
  – A set of n nodes, each with data
• All nodes run map on all of their data, producing new data with keys
  – This data is collected by key, then shuffled, and finally reduced
  – Dataflow is through temp files on the DFS
Simple example: Word count
• Goal: Given a set of documents, count how often each word occurs
  – Input: key-value pairs (document:lineNumber, text)
  – Output: key-value pairs (word, #occurrences)
  – What should be the intermediate key-value pairs?
map(String key, String value) {
  // key: document name, line no
  // value: contents of line
  for each word w in value:
    emit(w, "1")
}

reduce(String key, Iterator values) {
  // key: a word
  // values: a list of counts
  int result = 0;
  for each v in values:
    result += ParseInt(v);
  emit(key, result)
}
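The pseudocode above can be run end to end in plain Python. This sketch simulates the framework’s map, shuffle, and reduce phases in a single process; the names `mapreduce`, `map_fn`, and `reduce_fn` are mine, not part of any framework:

```python
from collections import defaultdict

def map_fn(key, value):
    # key: (document, line number); value: contents of the line
    for word in value.split():
        yield (word, 1)

def reduce_fn(key, values):
    # key: a word; values: the list of partial counts for that word
    return (key, sum(values))

def mapreduce(records):
    # Shuffle: collect every intermediate pair under its key
    groups = defaultdict(list)
    for key, value in records:
        for k, v in map_fn(key, value):
            groups[k].append(v)
    # Reduce: each key's group is processed independently
    return dict(reduce_fn(k, vs) for k, vs in sorted(groups.items()))

counts = mapreduce([(1, "the apple"), (2, "is an apple"), (3, "not an orange")])
```

In a real cluster the shuffle moves data between machines; here a dictionary plays that role, which is enough to see why map and reduce must not share any other state.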
Simple example: Word count
The eight input lines are split across four mappers, and the output key space is split across four reducers by key range:

Mapper inputs:
  Mapper (1–2): (1, the apple), (2, is an apple)
  Mapper (3–4): (3, not an orange), (4, because the)
  Mapper (5–6): (5, orange), (6, unlike the apple)
  Mapper (7–8): (7, is orange), (8, not green)

Intermediate pairs emitted:
  Mapper (1–2): (the, 1), (apple, 1), (is, 1), (an, 1), (apple, 1)
  Mapper (3–4): (not, 1), (an, 1), (orange, 1), (because, 1), (the, 1)
  Mapper (5–6): (orange, 1), (unlike, 1), (the, 1), (apple, 1)
  Mapper (7–8): (is, 1), (orange, 1), (not, 1), (green, 1)

Reducer inputs, grouped by key (each reducer is responsible for a key range):
  Reducer (A–G): (apple, {1, 1, 1}), (an, {1, 1}), (because, {1}), (green, {1})
  Reducer (H–N): (is, {1, 1}), (not, {1, 1})
  Reducer (O–U): (orange, {1, 1, 1}), (the, {1, 1, 1})
  Reducer (V–Z): (unlike, {1})

Final output:
  (apple, 3), (an, 2), (because, 1), (green, 1), (is, 2), (not, 2), (orange, 3), (the, 3), (unlike, 1)

The five steps of the dataflow:
  1. Each mapper receives some of the KV-pairs as input
  2. The mappers process the KV-pairs one by one
  3. Each KV-pair output by the mapper is sent to the reducer that is responsible for it (determined by the key range the node is responsible for)
  4. The reducers sort their input by key and group it
  5. The reducers process their input one group at a time
MapReduce dataflow
[Figure: four mappers emit intermediate (key, value) pairs that pass through “the shuffle” to four reducers; input data enters on the mapper side, output data leaves on the reducer side.]

What is meant by a ‘dataflow’? What makes this so scalable?
More examples
• Distributed grep – all lines matching a pattern
  – Map: filter by pattern
  – Reduce: output the set
• Count URL access frequency
  – Map: output each URL as key, with count 1
  – Reduce: sum the counts
• Reverse web-link graph
  – Map: output (target, source) pairs when a link to target is found in source
  – Reduce: concatenate values and emit (target, list(source))
• Inverted index
  – Map: emit (word, documentID)
  – Reduce: combine these into (word, list(documentID))
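As a sketch of the last example, here is the inverted index in plain Python. The shuffle is again simulated with a dictionary, and the names `inverted_index`, `map_fn`, and `reduce_fn` are illustrative, not framework API:

```python
from collections import defaultdict

def map_fn(doc_id, text):
    # Emit (word, documentID) once per distinct word in the document
    for word in set(text.split()):
        yield (word, doc_id)

def reduce_fn(word, doc_ids):
    # Combine into (word, list(documentID)); sort for a stable posting list
    return (word, sorted(doc_ids))

def inverted_index(docs):
    groups = defaultdict(list)
    for doc_id, text in docs.items():
        for word, d in map_fn(doc_id, text):
            groups[word].append(d)
    return dict(reduce_fn(w, ds) for w, ds in groups.items())

index = inverted_index({"d1": "the apple", "d2": "the orange"})
```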
Common mistakes to avoid
• Mappers and reducers should be stateless
  – Don’t use static variables – after map and reduce return, they should remember nothing about the processed data!
  – Reason: no guarantees about which key-value pairs will be processed by which workers!

    HashMap h = new HashMap();
    map(key, value) {
      if (h.contains(key)) {
        h.add(key, value);
        emit(key, "X");
      }
    }                                   // Wrong!

• Don’t try to do your own I/O!
  – Don’t try to write to files in the file system – the MapReduce framework does the I/O for you (usually):
    • All the incoming data will be fed as arguments to map and reduce
    • Any data your functions produce should be output via emit

    map(key, value) {
      File foo = new File("xyz.txt");
      while (true) {
        s = foo.readLine();
        ...
      }
    }                                   // Wrong!
More common mistakes to avoid
• A mapper must not map too much data to the same key
  – In particular, don’t map everything to the same key!! Otherwise the reduce worker will be overwhelmed!
  – It’s okay if some reduce workers have more work than others
    • Example: in WordCount, the reduce worker that works on the key ‘and’ has a lot more work than the reduce worker that works on ‘argute’.

    map(key, value) {
      emit("FOO", key + " " + value);
    }                                   // Wrong!

    reduce(key, value[]) {
      /* do some computation on all the values */
    }
Designing MapReduce algorithms
• Key decision: What should be done by map, and what by reduce?
  – map can do something to each individual key-value pair, but it can’t look at other key-value pairs
    • Example: filtering out key-value pairs we don’t need
  – map can emit more than one intermediate key-value pair for each incoming key-value pair
    • Example: incoming data is text; map produces (word, 1) for each word
  – reduce can aggregate data; it can look at multiple values, as long as map has mapped them to the same (intermediate) key
    • Example: count the number of words, add up the total cost, ...
• Need to get the intermediate format right!
  – If reduce needs to look at several values together, map must emit them using the same key!
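The first two map patterns above (filtering, and emitting several pairs per input) can be sketched as plain Python generators; `filter_map` and `explode_map` are hypothetical names chosen for this illustration:

```python
def filter_map(key, value):
    # Pattern 1: filtering -- emit the pair only if we want to keep it
    if value >= 0:
        yield (key, value)

def explode_map(key, line):
    # Pattern 2: one incoming pair -> many intermediate pairs
    for word in line.split():
        yield (word, 1)

kept = [kv for k, v in [("a", 3), ("b", -1)] for kv in filter_map(k, v)]
pairs = list(explode_map(1, "to be or not to be"))
```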
More details on the MapReduce data flow
[Figure: data partitions, split by key, feed map computation partitions; map output is redistributed by the output’s key (“shuffle”) into reduce computation partitions, all under the control of a coordinator. Default MapReduce passes intermediate data through a file system.]
Some additional details
• To make this work, we need a few more parts…
• The file system (distributed across all nodes):
  – Stores the inputs, outputs, and temporary results
• The driver program (executes on one node):
  – Specifies where to find the inputs and the outputs
  – Specifies what mapper and reducer to use
  – Can customize the behavior of the execution
• The runtime system (controls nodes):
  – Supervises the execution of tasks
  – Esp. the JobTracker
Some details
• Fewer computation partitions than data partitions
  – All data is accessible via a distributed file system with replication
  – Worker nodes produce data in key order (makes it easy to merge)
  – The master is responsible for scheduling, keeping all nodes busy
  – The master knows how many data partitions there are and which have completed – atomic commits to disk
• Locality: the master tries to do work on nodes that have replicas of the data
• The master can deal with stragglers (slow machines) by re-executing their tasks somewhere else
What if a worker crashes?
• We rely on the file system being shared across all the nodes
• Two types of (crash) faults:
  – The node wrote its output and then crashed
    • Here, the file system is likely to have a copy of the complete output
  – The node crashed before finishing its output
    • The JobTracker sees that the job isn’t making progress, and restarts the job elsewhere on the system
    • (Of course, we have fewer nodes to do work…)
• But what if the master crashes?
Other challenges
• Locality
  – Try to schedule each map task on a machine that already has the data
• Task granularity
  – How many map tasks? How many reduce tasks?
• Dealing with stragglers
  – Schedule some backup tasks
• Saving bandwidth
  – E.g., with combiners
• Handling bad records
  – “Last gasp” packet with the current sequence number
Stay tuned
Next you will learn about: Programming in MapReduce (cont.)
Managing Dependencies
• Remember: mappers run in isolation
  – You have no idea in what order the mappers run
  – You have no idea on what node the mappers run
  – You have no idea when each mapper finishes
• Tools for synchronization:
  – Ability to hold state in the reducer across multiple key-value pairs
  – Sorting function for keys
  – Partitioner
  – Cleverly-constructed data structures
Motivating Example
• Term co-occurrence matrix for a text collection
  – M = N × N matrix (N = vocabulary size)
  – m_ij: number of times terms i and j co-occur in some context (for concreteness, let’s say context = sentence)
• Why?
  – Distributional profiles as a way of measuring semantic distance
  – Semantic distance is useful for many language processing tasks

“You shall know a word by the company it keeps” (Firth, 1957)
e.g., Mohammad and Hirst (EMNLP, 2006)
MapReduce: Large Counting Problems
• Term co-occurrence matrix for a text collection = a specific instance of a large counting problem
  – A large event space (number of terms)
  – A large number of events (the collection itself)
  – Goal: keep track of interesting statistics about the events
• Basic approach
  – Mappers generate partial counts
  – Reducers aggregate partial counts

How do we aggregate partial counts efficiently?
First Try: “Pairs”
• Each mapper takes a sentence:
  – Generates all co-occurring term pairs
  – For all pairs, emits (a, b) → count
• Reducers sum up the counts associated with these pairs
• Use combiners!

Note: in all my slides for this example, I denote a key-value pair as k → v
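A minimal sketch of the pairs approach in plain Python, including the local combiner step recommended above. The names `pairs_map`, `combiner`, and `pairs_reduce` are mine, and the real network shuffle is simulated by simply concatenating the combiner outputs:

```python
from collections import Counter
from itertools import permutations

def pairs_map(sentence):
    # Emit ((a, b), 1) for each ordered pair of co-occurring terms
    for a, b in permutations(sentence.split(), 2):
        yield ((a, b), 1)

def combiner(emitted):
    # Local pre-aggregation: shrinks what must be shuffled over the network
    c = Counter()
    for key, n in emitted:
        c[key] += n
    return list(c.items())

def pairs_reduce(shuffled):
    # Reducers sum the partial counts for each (a, b) key
    c = Counter()
    for key, n in shuffled:
        c[key] += n
    return dict(c)

local1 = combiner(pairs_map("an apple"))
local2 = combiner(pairs_map("apple an"))
totals = pairs_reduce(local1 + local2)
```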
“Pairs” Analysis
• Advantages
  – Easy to implement, easy to understand
• Disadvantages
  – Lots of pairs to sort and shuffle around (upper bound?)
Another Try: “Stripes”
• Idea: group pairs together into an associative array
• Each mapper takes a sentence:
  – Generates all co-occurring term pairs
  – For each term, emits a → { b: count_b, c: count_c, d: count_d, … }
• Reducers perform an element-wise sum of the associative arrays

The pairs (a, b) → 1, (a, c) → 2, (a, d) → 5, (a, e) → 3, (a, f) → 2 become the single stripe a → { b: 1, c: 2, d: 5, e: 3, f: 2 }

Element-wise sum in the reducer:
    a → { b: 1, d: 5, e: 3 }
  + a → { b: 1, c: 2, d: 2, f: 2 }
  = a → { b: 2, c: 2, d: 7, e: 3, f: 2 }
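The element-wise sum can be sketched in plain Python; `stripes_map` and `stripes_reduce` are illustrative names, not Hadoop API:

```python
from collections import defaultdict

def stripes_map(sentence):
    # For each term a, emit one stripe a -> {b: count_b, ...} over its co-occurring terms
    words = sentence.split()
    for i, a in enumerate(words):
        stripe = defaultdict(int)
        for j, b in enumerate(words):
            if i != j:
                stripe[b] += 1
        yield (a, dict(stripe))

def stripes_reduce(key, stripes):
    # Element-wise sum of all the associative arrays received for one key
    total = defaultdict(int)
    for stripe in stripes:
        for b, n in stripe.items():
            total[b] += n
    return (key, dict(total))

# Reproduce the sum from the example: {b:1, d:5, e:3} + {b:1, c:2, d:2, f:2}
_, summed = stripes_reduce("a", [{"b": 1, "d": 5, "e": 3},
                                 {"b": 1, "c": 2, "d": 2, "f": 2}])
```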
“Stripes” Analysis
• Advantages
  – Far less sorting and shuffling of key-value pairs
  – Can make better use of combiners
• Disadvantages
  – More difficult to implement
  – The underlying object is more heavyweight
  – Fundamental limitation in terms of the size of the event space
Cluster size: 38 cores. Data source: the Associated Press Worldstream (APW) portion of the English Gigaword Corpus (v3), which contains 2.27 million documents (1.8 GB compressed, 5.7 GB uncompressed).
Conditional Probabilities
• How do we compute conditional probabilities from counts?

  P(B|A) = count(A, B) / count(A) = count(A, B) / Σ_B′ count(A, B′)

• How do we do this with MapReduce?
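The formula P(B|A) = count(A, B) / Σ_B′ count(A, B′) can be checked on a toy table of pair counts; `conditional_probs` is a hypothetical helper written for this illustration:

```python
def conditional_probs(pair_counts, a):
    # P(B|A) = count(A, B) / sum over B' of count(A, B')
    marginal = sum(n for (x, b), n in pair_counts.items() if x == a)
    return {b: n / marginal for (x, b), n in pair_counts.items() if x == a}

probs = conditional_probs({("a", "b1"): 3, ("a", "b2"): 12,
                           ("a", "b3"): 7, ("a", "b4"): 1,
                           ("z", "b1"): 5}, "a")
```

Note the marginal sums only over pairs whose first element is a; pairs for other terms (here ("z", "b1")) are excluded. This whole-table view is exactly what a single reducer does not have, which is the problem the next two slides solve.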
P(B|A): “Pairs” approach
For this to work:
  – Must emit an extra (a, *) for every b_n in the mapper
  – Must make sure all a’s get sent to the same reducer (use a Partitioner)
  – Must make sure (a, *) comes first (define the sort order)

  (a, *) → 32

  (a, b1) → 3       becomes   (a, b1) → 3 / 32
  (a, b2) → 12                (a, b2) → 12 / 32
  (a, b3) → 7                 (a, b3) → 7 / 32
  (a, b4) → 1                 (a, b4) → 1 / 32
  …                           …

The reducer holds this value (the marginal count, 32) in memory.
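The reducer-side logic can be sketched as follows, assuming the partitioner and sort order have already placed (a, *) at the front of the stream; `pbA_reduce` is an illustrative name:

```python
def pbA_reduce(sorted_pairs):
    # Input must arrive sorted so that (a, "*") precedes every (a, b):
    # the reducer holds the marginal in memory and divides as pairs stream by.
    results = {}
    marginal = None
    for (a, b), n in sorted_pairs:
        if b == "*":
            marginal = n              # special key carries count(A)
        else:
            results[(a, b)] = n / marginal
    return results

stream = [(("a", "*"), 32), (("a", "b1"), 3), (("a", "b2"), 12)]
probs = pbA_reduce(stream)
```

If the (a, *) pair arrived anywhere but first, `marginal` would still be None when it is needed, which is exactly why the sort order must be defined.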
P(B|A): “Stripes” approach
• Easy!
  – One pass to compute (a, *)
  – Another pass to directly compute P(B|A)

  a → { b1: 3, b2: 12, b3: 7, b4: 1, … }
Synchronization in Hadoop
• Approach 1: turn synchronization into an ordering problem
  – Sort keys into the correct order of computation
  – Partition the key space so that each reducer gets the appropriate set of partial results
  – Hold state in the reducer across multiple key-value pairs to perform the computation
  – Illustrated by the “pairs” approach
• Approach 2: construct data structures that “bring the pieces together”
  – Each reducer receives all the data it needs to complete the computation
  – Illustrated by the “stripes” approach
Issues and Tradeoffs
• Number of key-value pairs
  – Object creation overhead
  – Time for sorting and shuffling pairs across the network
• Size of each key-value pair
  – De/serialization overhead
• Combiners make a big difference!
  – RAM vs. disk and network
  – Arrange data to maximize opportunities to aggregate partial results
Data Types in Hadoop
  Writable – Defines a de/serialization protocol. Every data type in Hadoop is a Writable.
  WritableComparable – Defines a sort order. All keys must be of this type (but not values).
  IntWritable, LongWritable, Text, … – Concrete classes for different data types.
Complex Data Types in Hadoop
• How do you implement complex data types?
• The easiest way:
  – Encode it as Text, e.g., (a, b) = “a:b”
  – Use regular expressions to parse and extract the data
  – Works, but pretty hack-ish
• The hard way:
  – Define a custom implementation of WritableComparable
  – Must implement: readFields, write, compareTo
  – Computationally efficient, but slow for rapid prototyping
• Alternatives:
  – Cloud9 offers two other choices: Tuple and JSON
Next time
• Information retrieval
• Examples
Questions?