
Getting Started with Hadoop

Josh Devins, Nokia

Berlin Expert Days
April 8, 2011
Berlin, Germany

http://www.flickr.com/photos/haiko/154105048/

how did we get here?
* Google crawls the web, surfaces the "big data" problem
* big data problem defined: so much data that it cannot be processed by one individual machine
* (also defined as: so much data that you need a team of people to manage it)
* solve it: use multiple machines

http://www.flickr.com/photos/jamisonjudd/2433102356/

http://www.flickr.com/photos/torkildr/3462607995/

http://www.flickr.com/photos/torkildr/3462606643/

* since 1999, Google engineers wrote complex distributed programs to analyze crawled data
* too complex, not accessible
* requirement: must be easy for engineers with little to no distributed computing and large data processing experience
  * fault tolerance
  * scaling
  * simple coding experience
  * easy to teach
  * visibility/monitorability

• Google implements MapReduce and GFS

• GFS paper published (Ghemawat et al.)

basic history of MapReduce at Google

* 2003: Google implements MapReduce and GFS
  * to support large-scale, distributed computing on large data sets using commodity hardware
  * basically to make data crunching a reality for "regular" Google engineers
* 2003: GFS paper published by Sanjay Ghemawat et al.

• MapReduce paper published (Jeffrey Dean and Sanjay Ghemawat)

• MapReduce patent application (2004 applied, 2010 approved)

* 2004: MapReduce paper published by Jeffrey Dean and Sanjay Ghemawat
  * http://labs.google.com/papers/mapreduce.html
* MR is patented by Google (applied 2004, approved 2010), but Google supports Hadoop completely and uses the patent only defensively (to ensure that everyone can use it)
  * http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=7,650,331.PN.&OS=PN/7,650,331&RS=PN/7,650,331

• 2004 Doug Cutting and Mike Cafarella create implementation for Nutch

• 2006 Doug Cutting joins Yahoo!

• 2006 Hadoop split out from Nutch

• 2006 Yahoo! search index building powered by Hadoop

• 2007 Yahoo! runs 2x 1,000 node R&D clusters

• 2008 Hadoop wins the 1 TB sort benchmark in 209s on 900 nodes

• 2008 Cloudera founded by ex-Oracle, Yahoo! and Facebook employees

• 2009 Cutting leaves Yahoo! for Cloudera

evolution into Hadoop, a natural continuation of the Google work, now freely available as open source

* implemented for Nutch's index creation, relying on their NDFS (Nutch distributed filesystem)
* Nutch is a web crawler and search engine based on Lucene

book summary

[diagram: map1, map2, map3, map4 ... mapn running in parallel, feeding into a single reduce]

so what the hell is it already?
“a distributed batch processing system”
* the non-technical example, courtesy of Matt Biddulph: give n people a book to read and get reports back from them
* map/reduce parts can be parallelized, section in the outer box

map(String key, String value):
  // key: document name
  // value: row/line from document
  for each w in value:
    EmitIntermediate(w, 1);

reduce(String key, Iterator<Integer> values):
  // key: a word
  // values: a list of counts
  Integer count = 0;
  for each v in values:
    count += v;
  Emit(key, count);

sortAndGroup(List<String, Integer> mapOut)

similar to the previous example of reports
(simplified) canonical example of word counting
* give those same n people, or mappers, each a line from the document and have them write down a ‘1’ for every word they see
* the collector is responsible for summing up all the ‘1’s per word
* not a ‘pure function’ (‘emit’ methods have side-effects; the implementation in Hadoop has side-effects)
* based on, but not exactly, ‘map’ and ‘reduce’ in the strictly functional definition

map function takes:
- key as document name
- value as the line from the document

map function emits:
- key as the word
- value as the number 1 (I’ve seen this word one time)

reduce function takes:
- key as the word
- list of values as a list of 1’s, one for each time the word was seen by a mapper

reduce function emits:
- the word, the sum of the number of times the word was encountered by mappers

map input:
(doc1, start of the first document)
(doc1, the document is super interesting)
(doc1, end of the first document)

map output:
(start,1) (of,1) (the,1) (first,1) (document,1)
(the,1) (document,1) (is,1) (super,1) (interesting,1)
(end,1) (of,1) (the,1) (first,1) (document,1)

sort:
(start,1) (of,1) (of,1) (the,1) (the,1) (the,1) (first,1) (first,1) (document,1) (document,1) (document,1) (is,1) (super,1) (interesting,1) (end,1)

group (reduce input):
(start,{1}) (of,{1,1}) (the,{1,1,1}) (first,{1,1}) (document,{1,1,1}) (is,{1}) (super,{1}) (interesting,{1}) (end,{1})

reduce output:
(start,1) (of,2) (the,3) (first,2) (document,3) (is,1) (super,1) (interesting,1) (end,1)
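In real Hadoop code, the same word count looks roughly like the following. This is a minimal sketch against the classic org.apache.hadoop.mapreduce Java API (the API used in the exercise later), not the exact code from the talk's repo:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// mapper: called once per input line; emits (word, 1) for every word seen
public class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE); // the EmitIntermediate(w, 1) from the pseudocode
        }
    }
}

// reducer: called once per word, with all of its 1's grouped together by the framework
class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        result.set(sum);
        context.write(key, result); // the Emit(key, count) from the pseudocode
    }
}

Note that the sort-and-group step in the middle needs no user code at all: the framework shuffles the intermediate (word, 1) pairs and groups them by key before calling reduce.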

HDFS

logical file view

HDFS primer
* block structure
* standard block size (64MB by default)
* replicated blocks, 3x by default
* one input task per block
* data locality

[diagram: physical view of an HDFS write operation, steps 1-4]

* high level, physical view of HDFS
* walk through write operation steps (a client-side code sketch follows)
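From the client's side, all of those steps hide behind one API call. A minimal sketch of a write using the standard org.apache.hadoop.fs.FileSystem API (the path and payload here are made up for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();  // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);      // client contacts the NameNode for metadata
        FSDataOutputStream out =
                fs.create(new Path("/tmp/example.txt")); // NameNode allocates blocks on DataNodes
        out.writeBytes("hello HDFS\n");            // bytes stream down a pipeline of DataNodes
        out.close();                               // blocks end up replicated (3x by default)
        fs.close();
    }
}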

[diagram: MapReduce job run, steps 1-3]

* job run
* data/processing locality (best effort attempt)
* can’t always achieve data-local processing though
* stats will show how many data-local map tasks were run

Nomenclature Review

• HDFS

• NameNode: metadata, coordination

• DataNode: storage, retrieval, replication

• MapReduce

• JobTracker: job coordination

• TaskTracker: task management (map and reduce)

* saw all of these pieces in the previous slides
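As a point of reference, a client finds the two coordinating daemons through configuration. A hedged sketch using CDH3-era property names (the hostnames are made up; 8020/8021 are just the conventional default ports):

import org.apache.hadoop.conf.Configuration;

// client-side view of the cluster: the NameNode serves HDFS metadata,
// the JobTracker accepts and coordinates MapReduce jobs
public class ClusterConfigSketch {
    public static Configuration clusterConfig() {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://namenode.example.com:8020"); // HDFS: NameNode
        conf.set("mapred.job.tracker", "jobtracker.example.com:8021");   // MapReduce: JobTracker
        return conf;
    }
}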

Hadoop ecosystem

Yahoo!

Facebook

Cloudera
* Avro started at Yahoo! by Doug Cutting; work continues at Cloudera

LinkedIn

Other (Amazon-AWS Elastic MapReduce, Chris Wensel-Cascading, Infochimps-Wukong, Google-Proto Buf)


Diving In

• Cloudera training VM, CDH3b3

• github.com/joshdevins/talks-hadoop-getting-started

• Exercise:

• analyse Apache access logs from mac-geeks.de

• use raw Java MapReduce API, MRUnit

• use Pig, PigUnit

• simple visualization/dashboard

* Cloudera VM, pre-installed with CDH (Cloudera Distribution for Hadoop): http://cloudera-vm.s3.amazonaws.com/cloudera-demo-0.3.5.tar.bz2?downloads (username/password: cloudera/cloudera)
* thanks @maxheadroom, mac-geeks.de
* throughput analysis
* Pig is a high-level abstraction on MR providing a ‘data flow’ language, with constructs similar to SQL

1.2.3.4 - - [30/Sep/2010:15:07:53 -0400] "GET /foo HTTP/1.1" 200 3190
1.2.3.4 - - [30/Sep/2010:15:07:53 -0400] "GET /bar HTTP/1.1" 404 3190
1.2.3.4 - - [30/Sep/2010:15:07:54 -0400] "GET /foo HTTP/1.1" 200 3190
1.2.3.4 - - [30/Sep/2010:15:07:54 -0400] "GET /foo HTTP/1.1" 200 3190

group by second:
(30/Sep/2010:15:07:53, 1)
(30/Sep/2010:15:07:54, 2)

group by hour:
(30/Sep/2010:15:00:00, {(30/Sep/2010:15:07:53, 1), (30/Sep/2010:15:07:54, 2)})

count, find max:
(30/Sep/2010:15:00:00, 3, 2)
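In the raw Java API, the first of those steps could look like the mapper below. This is a hypothetical sketch (the class name and regex are mine, not from the talk's repo); pairing it with a summing reducer like the word count one yields the per-second counts above:

import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// hypothetical mapper: emits (timestamp truncated to the second, 1) per access log line,
// so a summing reducer produces requests per second
public class AccessLogSecondMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    // captures the bracketed timestamp up to the space before the timezone,
    // e.g. "[30/Sep/2010:15:07:53 -0400]" -> "30/Sep/2010:15:07:53"
    private static final Pattern TIMESTAMP = Pattern.compile("\\[([^ \\]]+)");

    private static final IntWritable ONE = new IntWritable(1);
    private final Text second = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        Matcher matcher = TIMESTAMP.matcher(value.toString());
        if (matcher.find()) {
            second.set(matcher.group(1));
            context.write(second, ONE);
        }
    }
}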

general approach

Code

github.com/joshdevins/talks-hadoop-getting-started

Hadoop at Nokia

* Nokia Berlin - location based services

Global Architecture

* remote DCs: Singapore, Peking, Atlanta, Mumbai
* central DC: Slough/London
* R&D DCs and Hadoop clusters: Berlin, Boston

Hardware

DC       LONDON        BERLIN
cores    12x (w/ HT)   4x 2.00 GHz (w/ HT)
RAM      48GB          16GB
disks    12x 2TB       4x 1TB
storage  24TB          4TB
LAN      1Gb           2x 1Gb (bonded)

http://www.flickr.com/photos/torkildr/3462607995/in/photostream/

BERLIN
* HP DL160 G6
* 1x quad-core Intel Xeon E5504 @ 2.00 GHz (4 cores total)
* 16GB DDR3 RAM
* 4x 1TB 7200 RPM SATA
* 2x 1Gb LAN
* iLO Lights-Out 100 Advanced

Meaning?

• Size

• Berlin: 2 master nodes, 13 data nodes, ~17TB HDFS

• London: “large enough to handle a year’s worth of activity log data, with plans for rapid expansion”

• Scribe

• 250,000 1KB msg/sec

• 244MB/sec, 14.3GB/hr, 343GB/day

http://www.flickr.com/photos/torkildr/3462607995/in/photostream/

Reporting

* operational - access logs, throughput, general usage, dashboards
* business reporting - what are all of the products doing, how do they compare to other months
* ad-hoc - random business queries

* almost all of this goes through Pig at some point
* pipelines with Oozie
* sometimes parsing and decoding in a Java MR job, then Pig for the heavy lifting
* mostly goes into an RDBMS using Sqoop, for display and querying in other tools
* Tableau for some dashboards and quick visualizations
* many JS libs for good visualization/dashboarding
* sometimes roll your own with image libraries in Python, Ruby, etc.

IKEA!

other than reporting, we also occasionally do some data exploration, which can be quite fun
any guesses what this is a plot of?
geo-searches for Ikea!

[map: Berlin geo-searches, labeled Ikea Tempelhof, Ikea Spandau, Ikea Schoenefeld, Prenzl Berg Yuppies]

Ikea geo-searches bounded to Berlin
* can we make any assumptions about what the actual locations are?
* kind of, but not much data here
* clearly there is a Tempelhof cluster, but the others are not very evident
* certainly shows the relative popularity of all the locations
* Ikea Lichtenberg was not open yet during this time frame

[map: London geo-searches, labeled Ikea Croydon, Ikea Wembley, Ikea Edmonton, Ikea Lakeside]

Ikea geo-searches bounded to London
* can we make any assumptions about what the actual locations are?
* turns out we can!
* using a clustering algorithm like K-Means (maybe from Mahout) we probably could guess, as sketched below
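To make that concrete, here is a toy K-Means over (lat, lon) search coordinates in plain Java. It only illustrates the idea; Mahout's actual API looks different, and the talk mentions it only as a possibility:

import java.util.Random;

// Toy K-Means over (lat, lon) pairs -- an illustration of the clustering idea only.
// Centroids of the search clusters approximate the store locations.
public class KMeansSketch {

    public static double[][] cluster(double[][] points, int k, int iterations) {
        Random random = new Random(42);
        double[][] centroids = new double[k][];
        for (int c = 0; c < k; c++) {
            centroids[c] = points[random.nextInt(points.length)].clone(); // random init
        }
        int[] assignment = new int[points.length];
        for (int iter = 0; iter < iterations; iter++) {
            // assignment step: attach each point to its nearest centroid
            for (int p = 0; p < points.length; p++) {
                int best = 0;
                double bestDist = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double dLat = points[p][0] - centroids[c][0];
                    double dLon = points[p][1] - centroids[c][1];
                    double dist = dLat * dLat + dLon * dLon; // squared Euclidean distance
                    if (dist < bestDist) {
                        bestDist = dist;
                        best = c;
                    }
                }
                assignment[p] = best;
            }
            // update step: move each centroid to the mean of its assigned points
            double[][] sums = new double[k][2];
            int[] counts = new int[k];
            for (int p = 0; p < points.length; p++) {
                sums[assignment[p]][0] += points[p][0];
                sums[assignment[p]][1] += points[p][1];
                counts[assignment[p]]++;
            }
            for (int c = 0; c < k; c++) {
                if (counts[c] > 0) {
                    centroids[c][0] = sums[c][0] / counts[c];
                    centroids[c][1] = sums[c][1] / counts[c];
                }
            }
        }
        return centroids;
    }
}

Run with k=4 on the London points, the centroids would presumably land near the four densest blobs, i.e. near the stores themselves.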

> this is considering search location, what about time?

Berlin
* distribution of searches over days of the week and hours of the day
* certainly can make some comments about the hours that Berliners are awake
* can we make assumptions about average opening hours?

Berlin
* upwards trend a couple of hours before opening
* can also clearly make some statements about the best time to visit Ikea in Berlin - Sat night!

BERLIN
* Mon-Fri 10am-9pm
* Saturday 10am-10pm

London
* more data points again, so we get smoother results

London

LONDON
* Mon-Fri 10am-10pm
* Saturday 9am-10pm
* Sunday 11am-5pm

> potential revenue stream?> what to do with this data or data like this?

Productizing

Berlin

another example of something that can be productized

Berlin
* traffic sensors
* map tiles

Los Angeles

LA
* traffic sensors
* map tiles

Berlin Los Angeles

Join Us

• Nokia is hiring in Berlin!

• software engineers

• operations engineers

• josh.devins@nokia.com

• www.nokia.com/careers

Thanks!

Josh Devins
www.joshdevins.net
info@joshdevins.net
@joshdevins

code: github.com/joshdevins/talks-hadoop-getting-started
