Data Wrangling and Oracle Connectors for Hadoop
TRANSCRIPT
1
Wrangling Data With Oracle Connectors for Hadoop
Gwen Shapira, Solutions Architect, [email protected], @gwenshap
Data Has Changed in the Last 30 Years
[Chart: data growth from 1980 to 2013, driven by end-user applications, the Internet, mobile devices, and sophisticated machines; structured data is roughly 10% of the total, unstructured data roughly 90%]
Data is Messy
5
Data Wrangling (n): The process of converting “raw” data into a format that allows convenient consumption
6
Hadoop Is…
• HDFS – massive, redundant data storage
• MapReduce – batch-oriented data processing at scale
[Diagram: core Hadoop system components – the Hadoop Distributed File System (HDFS), a replicated, high-bandwidth clustered storage layer, and MapReduce, a distributed computing framework]
7
Hadoop and Databases

Databases – “Schema-on-Write”
• Schema must be created before any data can be loaded
• An explicit load operation transforms the data into the database's internal structure
• New columns must be added explicitly
• Pros: 1) Reads are fast 2) Standards and governance

Hadoop – “Schema-on-Read”
• Data is simply copied to the file store; no transformation is needed
• A serializer/deserializer (SerDe) is applied at read time to extract the required columns
• New data can start flowing at any time and will appear retroactively
• Pros: 1) Loads are fast 2) Flexibility and agility
8
Hadoop rocks Data Wrangling
• Cheap storage for messy data
• Tools to play with data:
  • Acquire
  • Clean
  • Transform
• Flexibility where you need it most
9
Got unstructured data?
• Data Warehouse:
  • Text
  • CSV
  • XLS
  • XML
• Hadoop:
  • HTML
  • XML, RSS
  • JSON
  • Apache logs
  • Avro, ProtoBuf, ORC, Parquet
  • Compressed files
  • Office, OpenDocument, iWork
  • PDF, EPUB, RTF
  • MIDI, MP3
  • JPEG, TIFF
  • Java classes
  • Mbox, RFC 822
  • AutoCAD
  • TrueType
  • HDF / NetCDF
10
But eventually, you need your data in your DWH
Oracle Connectors for Hadoop Rock Data Loading
11
What Does Data Wrangling Look Like?
Source → Acquire → Clean → Transform → Load
12
Data Sources
• Internal:
  • OLTP
  • Log files
  • Documents
  • Sensors / network events
• External:
  • Geo-location
  • Demographics
  • Public data sets
  • Websites
13
Free External Data
Name | URL
U.S. Census Bureau | http://factfinder2.census.gov/
U.S. Executive Branch | http://www.data.gov/
U.K. Government | http://data.gov.uk/
E.U. Government | http://publicdata.eu/
The World Bank | http://data.worldbank.org/
Freebase | http://www.freebase.com/
Wikidata | http://meta.wikimedia.org/wiki/Wikidata
Amazon Web Services | http://aws.amazon.com/datasets
14
Data for Sale
Source | Type | URL
Gnip | Social media | http://gnip.com/
AC Nielsen | Media usage | http://www.nielsen.com/
Rapleaf | Demographics | http://www.rapleaf.com/
ESRI | Geographic (GIS) | http://www.esri.com/
eBay | Auctions | https://developer.ebay.com/
D&B | Business entities | http://www.dnb.com/
Trulia | Real estate | http://www.trulia.com/
Standard & Poor’s | Financial | http://standardandpoors.com/
15
Source → Acquire → Clean → Transform → Load
16
Getting Data into Hadoop
• Sqoop
• Flume
• Copy
• Write
• Scraping
• Data APIs
Sqoop Import Examples
• sqoop import --connect jdbc:oracle:thin:@//dbserver:1521/masterdb --username hr --table emp --where "start_date > '01-01-2012'"
• sqoop import --connect jdbc:oracle:thin:@//dbserver:1521/masterdb --username myuser --table shops --split-by shop_id --num-mappers 16
(the --split-by column must be indexed or partitioned to avoid 16 full table scans)
18
Or…
• hadoop fs -put myfile.txt /big/project/myfile.txt
• curl -i list_of_urls.txt
• curl https://api.twitter.com/1/users/show.json?screen_name=cloudera

{ "id":16134540, "name":"Cloudera", "screen_name":"cloudera", "location":"Palo Alto, CA", "url":"http://www.cloudera.com", "followers_count":11359 }
19
And even…
$ cat scraper.py
import urllib
from BeautifulSoup import BeautifulSoup

txt = urllib.urlopen("http://www.example.com/")
soup = BeautifulSoup(txt)
headings = soup.findAll("h2")
for heading in headings:
    print heading.string
20
Source → Acquire → Clean → Transform → Load
21
Data Quality Issues
• Given enough data, quality issues are inevitable
• Main issues:
  • Inconsistent – “99” instead of “1999”
  • Invalid – last_update: 2036
  • Corrupt – #$%&@*%@
22
Happy families are all alike. Each unhappy family is unhappy in its own way.
23
Endless Inconsistencies
• Upper vs. lower case
• Date formats
• Times, time zones, 24h clocks
• Missing values: NULL vs. empty string vs. NA
• Variation in free-format input (see the normalization sketch below):
  • “1 PATCH EVERY 24 HOURS”
  • “Replace patches on skin daily”
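A minimal cleanup sketch in Python, assuming hypothetical field formats and missing-value markers (none of this comes from the slides), just to show what normalizing these inconsistencies can look like:

# normalize.py - illustrative cleanup of case, dates, and missing-value markers
from datetime import datetime

MISSING = {"", "na", "n/a", "null", "none"}
DATE_FORMATS = ["%Y-%m-%d", "%d-%m-%Y", "%m/%d/%Y", "%d/%m/%y"]

def normalize_text(value):
    # lower-case and strip a free-format field; map missing markers to None
    value = value.strip()
    return None if value.lower() in MISSING else value.lower()

def normalize_date(value):
    # try a few common date formats and return ISO-8601, or None if nothing matches
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    return None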
24
Hadoop Strategies
• A validation script is ALWAYS the first step
• But it is not always enough
• We have known unknowns and unknown unknowns
25
Known Unknowns
• Script to:
  • Check the number of columns per row
  • Validate not-null fields
  • Validate data types (“is number”)
  • Check date constraints
  • Apply other business logic
(see the sketch below)
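As a sketch of what such a validation script can look like (the four-column layout and field names here are made up for illustration):

# validate.py - read tab-separated records on stdin; good rows go to stdout,
# rejects go to stderr with a reason. Assumes columns: id, name, amount, start_date.
import sys
from datetime import datetime

EXPECTED_COLUMNS = 4

def check(fields):
    if len(fields) != EXPECTED_COLUMNS:
        return "wrong column count"
    record_id, name, amount, start_date = fields
    if not record_id or not name:                  # not-null checks
        return "missing required field"
    try:
        float(amount)                              # "is number" check
    except ValueError:
        return "amount is not numeric"
    try:
        if datetime.strptime(start_date, "%Y-%m-%d") > datetime.now():
            return "start_date is in the future"   # date constraint
    except ValueError:
        return "bad date format"
    return None                                    # passed all checks

for line in sys.stdin:
    error = check(line.rstrip("\n").split("\t"))
    if error:
        sys.stderr.write(error + "\t" + line)
    else:
        sys.stdout.write(line)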
26
Unknown Unknowns
• Bad records will happen
• Your job should move on
• Use counters in the Hadoop job to count bad records
• Log errors
• Write bad records to a re-loadable file
(see the sketch below)
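A minimal Hadoop Streaming sketch of this pattern; the reporter:counter convention on stderr is how Streaming jobs increment counters, while the bad-records file name is just an example:

# mapper.py - Hadoop Streaming mapper that counts bad records and sidelines
# them instead of failing the whole job.
import sys

bad = open("bad_records.txt", "w")   # written in the task's work dir; collect and re-load later

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) < 2:
        # Hadoop Streaming turns this stderr line into a job counter
        sys.stderr.write("reporter:counter:DataQuality,BadRecords,1\n")
        bad.write(line)              # keep the raw record so it can be fixed and re-loaded
        continue
    print(fields[0] + "\t" + fields[1])   # normal output for good records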
27
Solving Bad Data
• Can be done at many levels:
  • Fix at the source
  • Improve the acquisition process
  • Pre-process before analysis
  • Fix during analysis
• How many times will you analyze this data? 0, 1, many, lots
28
Source → Acquire → Clean → Transform → Load
29
Endless Possibilities
• MapReduce (in any language)
• Hive (i.e. SQL)
• Pig
• R
• Shell scripts
• Plain old Java
(a MapReduce Streaming sketch follows below)
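For example, a small Hadoop Streaming mapper in Python that turns raw Apache access-log lines into tab-separated fields (the log layout assumed here is the common/combined format):

# parse_logs.py - Streaming mapper: Apache access log -> (ip, timestamp, url, status)
import re
import sys

LOG = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(?:GET|POST|PUT|DELETE|HEAD) (\S+)[^"]*" (\d{3})')

for line in sys.stdin:
    m = LOG.search(line)
    if m:
        print("\t".join(m.groups()))   # lines that do not parse are simply dropped here

Run it with something like: hadoop jar hadoop-streaming.jar -input /logs/raw -output /logs/parsed -mapper parse_logs.py -file parse_logs.py (the exact streaming jar path depends on your distribution).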
30
De-Identification
• Remove PII data:
  • Names, addresses, possibly more
• Remove columns
• Remove IDs *after* joins
• Hash
• Use partial data
• Create statistically similar fake data
(see the hashing sketch below)
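A minimal sketch of the hash-and-truncate approach (the salt, field names, and zip-code truncation are illustrative assumptions, not the presenter's code):

# deidentify.py - replace direct identifiers with salted hashes so records can
# still be joined on the hashed key without exposing the original ID
import hashlib

SALT = "replace-with-a-managed-secret"   # hypothetical; keep the real salt out of source control

def pseudonymize(value):
    # stable, salted SHA-256 pseudonym for an identifier
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()

def deidentify(record):
    # record is a dict: drop name/address, hash the patient id, keep only partial zip
    return {
        "patient_key": pseudonymize(record["patient_id"]),
        "zip3": record["zip"][:3],       # partial data instead of the full zip code
        "diagnosis": record["diagnosis"],
    }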
31
87% of the US population can be identified from gender, zip code, and date of birth
32
Joins
• Do at the source if possible
• Can be done with MapReduce (see the map-side join sketch below)
• Or with Hive (Hadoop SQL)
• Joins are expensive:
  • Do once and store the results
  • De-aggregate aggressively
  • e.g. everything a hospital knows about a patient
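As one sketch, a map-side join for Hadoop Streaming: a small lookup table is shipped to every mapper and the big table is enriched as it streams past (file names and column layout are hypothetical):

# join_mapper.py - map-side join: add the shop name to each sales record using
# a small shops.txt lookup shipped to the mappers (e.g. with -files shops.txt)
import sys

def load_lookup(path):
    lookup = {}
    with open(path) as f:
        for line in f:
            shop_id, shop_name = line.rstrip("\n").split("\t", 1)
            lookup[shop_id] = shop_name
    return lookup

shops = load_lookup("shops.txt")

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    # append the joined-in shop name; store the result so the join happens only once
    print("\t".join(fields + [shops.get(fields[0], "UNKNOWN")]))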
33
DataWrangler
34
Process Tips
• Keep track of data lineage
• Keep track of all changes to the data
• Use source control for code
35
Source → Acquire → Clean → Transform → Load
36
Sqoop
sqoop export --connect jdbc:mysql://db.example.com/foo --table bar --export-dir /results/bar_data
37
FUSE-DFS
• Mount HDFS on the Oracle server:
  • sudo yum install hadoop-0.20-fuse
  • hadoop-fuse-dfs dfs://<name_node_hostname>:<namenode_port> <mount_point>
• Use external tables to load the data into Oracle
38
That’s nice. But can you load data FAST?
39
Oracle Connectors
• SQL Connector for Hadoop
• Oracle Loader for Hadoop
• ODI with Hadoop
• OBIEE with Hadoop
• R Connector for Hadoop
You don't need a BDA (Big Data Appliance)
40
Oracle Loader for Hadoop
• Kinda like SQL*Loader
• Data is on HDFS
• Runs as a MapReduce job
• Partitions, sorts, and converts the format to Oracle blocks
• Appended to database tables
• Or written to Data Pump files for later load
41
Oracle SQL Connector for HDFS
• Data is in HDFS
• The connector creates an external table
• That automatically matches the Hadoop data
• Control the degree of parallelism
• You know external tables, right?
43
Data Types Supported
• Data Pump files
• Delimited text
• Avro
• Regular expressions
• Custom formats
44
Main Benefit: Processing is done in Hadoop
45
Benefits
• High performance
• Reduced CPU usage on the database
• Automatic optimizations:
  • Partitioning
  • Sorting
  • Load balancing
46
Measuring Data Load
• Concerns:
  • How much time?
  • How much CPU?
• Bottlenecks:
  • Disk
  • CPU
  • Network
47
I Know What This Means:
48
What does this mean?
49
Measuring Data Load
• Disks: ~300 MB/s each
• SSD: ~1.6 GB/s each
• Network:
  • ~100 MB/s (1GbE)
  • ~1 GB/s (10GbE)
  • ~4 GB/s (InfiniBand)
• CPU: 1 CPU-second per second per core
• Need to know: CPU-seconds per GB
50
Let's walk through this…
Assume Exadata (InfiniBand + SSD = 8 TB/h load rate) and that all CPUs are available for loading.
We have 5 TB to load. Each core provides 3,600 CPU-seconds per hour.
Loading 5,000 GB:
With FUSE: 5,000 GB × 150 CPU-sec/GB = 750,000 CPU-sec ≈ 208 CPU-hours
With SQL Connector: 5,000 GB × 40 CPU-sec/GB = 200,000 CPU-sec ≈ 55 CPU-hours
Our X2-3 half rack has 84 cores, so roughly 40 minutes (55 ÷ 84 ≈ 0.65 hours) to load 5 TB at 100% CPU.
(The small script below runs the same numbers.)
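The same back-of-the-envelope arithmetic as a small script; a minimal sketch assuming the CPU-seconds-per-GB figures quoted above (150 for FUSE, 40 for the SQL Connector), not measurements of your own system:

# load_estimate.py - back-of-the-envelope CPU cost of loading data into Oracle
data_gb = 5000                                    # 5 TB to load
cores = 84                                        # X2-3 half rack
cpu_sec_per_gb = {"FUSE": 150, "SQL Connector": 40}

for method, cost in cpu_sec_per_gb.items():
    cpu_hours = data_gb * cost / 3600.0
    minutes = cpu_hours / cores * 60              # assumes all cores load at 100% CPU
    print("%-13s %5.0f CPU-hours, ~%.0f minutes on %d cores" % (method, cpu_hours, minutes, cores))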
51
Given fast enough network and disks, data loading will take all available CPU.
This is a good thing.
52