DESCRIPTION

AddThis' OA DC Summit presentation

TRANSCRIPT

Cardinality Estimation for Very Large Data Sets

Matt Abrams, VP Data and Operations
March 25, 2013

THANKS FOR COMING!

I build large-scale distributed systems and work on algorithms that make sense of the data stored in them

Contributor to the open source project Stream-Lib, a Java library for summarizing data streams (https://github.com/clearspring/stream-lib)

Ask me questions: @abramsm

HOW CAN WE COUNT THE NUMBER OF DISTINCT ELEMENTS IN LARGE DATA SETS?

HOW CAN WE COUNT THE NUMBER OF DISTINCT ELEMENTS IN VERY LARGE DATA SETS?

GOALS FOR COUNTING SOLUTION

Support high-throughput data streams (hundreds of thousands of events per second)

Estimate cardinality with known error thresholds for sets of up to around 1 billion elements (or even 1 trillion when needed)

Support set operations (unions and intersections)

Support data streams with a large number of dimensions


Example UID: 513a71b843e54b73

1 UID = 128 bits

In one month AddThis logs 5B+ UIDs

2,500,000 * 2000 = 5,000,000,000

That’s 596 GB of just UIDs

NAÏVE SOLUTIONS

• SELECT COUNT(DISTINCT uid) FROM table WHERE dimension = 'foo'

• HashSet<K>

• Run a batch job for each new query request

WE ARE NOT A BANK


This means an estimate rather than an exact value is acceptable.

THREE INTUITIONS

• It is possible to estimate the cardinality of a set by understanding the probability of a sequence of events occurring in a random process (e.g. how many coins were flipped if I saw n heads in a row?)

• Averaging the results of multiple observations can reduce the variance associated with random variables

• Applying a good hash function effectively de-duplicates the input stream

INTUITION

What is the probability that a binary string starts with ’01’?

INTUITION

(1/2)^2 = 25%

INTUITION

(1/2)^3 = 12.5%

INTUITION

Crude analysis: if a stream has 8 unique values, we expect the hash of at least one of them to start with ‘001’ (each hash has a 1-in-8 chance)

INTUITION

Given the variability of a single random value, we cannot use a single random variable for accurate cardinality estimation
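To make the first intuition concrete, here is a minimal, self-contained sketch (mine, not from the talk) that estimates cardinality from this single observable; run it with different seeds and the estimate swings by factors of 2 or more, which is exactly the variance problem the next slides address:

    import java.util.Random;

    public class SingleObservableDemo {
        public static void main(String[] args) {
            Random rng = new Random();    // random ints stand in for hashed distinct elements
            int n = 100_000;              // true cardinality
            int maxRho = 0;
            for (int i = 0; i < n; i++) {
                int hash = rng.nextInt();
                // rho(x) = position of the leftmost 1-bit (1-based)
                int rho = Integer.numberOfLeadingZeros(hash) + 1;
                maxRho = Math.max(maxRho, rho);
            }
            // a run of (maxRho - 1) leading zeros suggests roughly 2^maxRho distinct values
            System.out.println("true n = " + n + ", estimate ~ " + (1L << maxRho));
        }
    }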

MULTIPLE OBSERVATIONS HELP REDUCE VARIANCE

By averaging the observations of multiple random variables we can make the error rate as small as desired by controlling the size of m (the number of random variables); the standard error shrinks in proportion to 1/√m

THE PROBLEM WITH MULTIPLE HASH FUNCTIONS

• It is too costly from a computational perspective to apply m hash functions to each data point

• It is not clear that it is possible to generate m good hash functions that are independent

STOCHASTIC AVERAGING

• Emulating the effect of m experiments with a single hash function

• Divide input stream into m sub-streams

• An average of the observable values across the sub-streams yields a cardinality estimate whose accuracy improves in proportion to 1/√m as m increases

HASH FUNCTIONS

Odds of a Collision   32-Bit Hash   64-Bit Hash    160-Bit Hash
1 in 2                77,163        5.06 billion   1.42 * 10^24
1 in 10               30,084        1.97 billion   5.55 * 10^23
1 in 100              9,292         609 million    1.71 * 10^23
1 in 1000             2,932         192 million    5.41 * 10^22

http://preshing.com/20110504/hash-collision-probabilities
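The table follows from the standard birthday approximation; a quick way to sanity-check it (the formula is standard, the method name is mine):

    // probability of at least one collision when hashing n items into 2^bits values:
    // p ~= 1 - exp(-n^2 / 2^(bits + 1))
    static double collisionProbability(double n, int bits) {
        return 1.0 - Math.exp(-(n * n) / Math.pow(2.0, bits + 1));
    }
    // collisionProbability(77_163, 32) ~= 0.50  -> "1 in 2"
    // collisionProbability(609e6, 64)  ~= 0.01  -> "1 in 100"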

HYPERLOGLOG (2007)

Philippe Flajolet (1948-2011)

Counts up to 1 billion in 1.5 KB of space

HYPERLOGLOG (HLL)

• Operates with a single pass over the input data set

• Produces a typical error of 1.04/√m

• Error decreases as m increases. Error is not a function of the number of elements in the set
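For example (my arithmetic, using the 1.04/√m figure above): with m = 2048 registers the typical error is 1.04/√2048 ≈ 2.3%, and at 5 bits per register the whole structure needs 2048 × 5 bits = 1.25 KB, which is how HLL counts to a billion in roughly 1.5 KB.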

HLL SUBSTREAMS

HLL uses a single hash function and splits the result into m buckets

[Diagram: input values pass through a single hash function and are routed to Bucket 1, Bucket 2, ..., Bucket m]

HLL ALGORITHM BASICS

• Each substream maintains an Observable

• The observable is the largest value of ρ(x), where ρ(x) is the position of the leftmost 1-bit in the binary string x

• 32-bit hash function with 5-bit “short bytes”

• Harmonic mean increases the quality of estimates by reducing variance

WHAT ARE “SHORT BYTES”?

• We know a priori that the register value for a given substream of the multiset M is in the range [0, L − b + 1], where L is the hash length and b is the number of index bits

• Assuming L = 32 we only need 5 bits to store the value of the register

• ≈85% less memory usage (5 bits vs. 32) compared to a standard Java int
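A sketch of the idea (a hypothetical layout, not stream-lib's actual implementation): pack twelve 5-bit registers into each 64-bit word instead of spending a full 32-bit int per register:

    class PackedRegisters {
        private static final int BITS = 5;             // values 0..31 fit when L = 32
        private static final int PER_WORD = 64 / BITS; // 12 registers per long
        private final long[] words;

        PackedRegisters(int m) {
            words = new long[(m + PER_WORD - 1) / PER_WORD];
        }

        int get(int i) {
            int shift = (i % PER_WORD) * BITS;
            return (int) ((words[i / PER_WORD] >>> shift) & 0x1F);
        }

        void updateMax(int i, int value) {             // registers only ever grow
            if (value > get(i)) {
                int shift = (i % PER_WORD) * BITS;
                long cleared = words[i / PER_WORD] & ~(0x1FL << shift);
                words[i / PER_WORD] = cleared | ((long) value << shift);
            }
        }
    }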

ADDING VALUES TO HLL

• The first b bits of the new value define the index for the multiset M that may be updated when the new value is added

• The bits b+1 to L (the hash length) are used to determine the number of leading zeros, which gives ρ

ADDING VALUES TO HLL

The multiset is updated using the equation:

M[j] := max(M[j], ρ(w))

where ρ(w) is the number of leading zeros in the remaining bits w, plus 1
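In code, the update looks roughly like this. A minimal sketch with my own names (not stream-lib's), assuming a 32-bit hash and m = 2^b registers:

    // x: 32-bit hash of the new element; M: the m registers
    void offer(int x, int[] M, int b) {
        int j   = x >>> (Integer.SIZE - b);             // first b bits: register index
        int w   = x << b;                               // remaining bits: the observable
        int rho = Integer.numberOfLeadingZeros(w) + 1;  // leading zeros + 1
        M[j] = Math.max(M[j], rho);                     // M[j] := max(M[j], rho(w))
    }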

INTUITION ON EXTRACTING CARDINALITY FROM HLL

• If we add n elements to a stream then each substream will contain roughly n/m elements

• The MAX value in each substream should be about log2(n/m) (from the earlier intuition about random variables)

• The harmonic mean (mZ) of the 2^MAX values is on the order of n/m

• So m²Z is on the order of n. That’s the cardinality!

HLL CARDINALITY ESTIMATE

• m²Z has a systematic multiplicative bias that needs to be corrected. This is done by multiplying by a constant α_m:

E = α_m · m² · Z,   where Z = 1 / Σ_{j=1..m} 2^(−M[j]) is the harmonic-mean term
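Putting the pieces together, the raw estimate can be computed like this (a sketch; α_m here is the paper's asymptotic constant for m ≥ 128):

    double rawEstimate(int[] M) {
        int m = M.length;
        double alpha = 0.7213 / (1.0 + 1.079 / m);  // alpha_m, bias-correction constant
        double sum = 0.0;
        for (int reg : M) {
            sum += Math.pow(2.0, -reg);             // sum of 2^(-M[j])
        }
        return alpha * m * m / sum;                 // E = alpha_m * m^2 * Z
    }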

A NOTE ON LONG RANGE CORRECTIONS

• The paper says to apply a long range correction function when the estimate E is greater than (1/30) · 2^32

• The correction function is: E* = −2^32 · ln(1 − E/2^32)

• DON’T DO THIS! It doesn’t work and increases error. A better approach is to use a bigger/better hash function

Let’s look at HLL in action.

DEMO TIME!

http://www.aggregateknowledge.com/science/blog/hll.html

HLL UNIONS

• Merging two or more HLL data structures is a similar process to adding a new value to a single HLL

• For each register, take the max value across the HLLs being merged; the resulting register set can be used to estimate the cardinality of the combined sets

[Diagram: a Root HLL formed by unioning per-day HLLs for MON, TUE, WED, THU, FRI]
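The per-day HLLs in the tree above can be rolled up exactly this way. A sketch of the merge on raw registers (stream-lib exposes the same operation as merge() on its cardinality classes):

    // Union = register-wise max; both HLLs must use the same m
    int[] union(int[] a, int[] b) {
        int[] merged = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            merged[i] = Math.max(a[i], b[i]);
        }
        return merged;  // estimates |A ∪ B| as if one HLL had seen both streams
    }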

HLL INTERSECTION

You must understand the properties of your sets to know if you can trust the resulting intersection

[Venn diagram: overlapping sets A, B, C]
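HLL has no native intersection operation; the usual approach is inclusion-exclusion over the union estimate (a sketch; the errors compound, so a small intersection of two large sets can vanish into the noise, which is why you must understand your sets):

    // |A ∩ B| = |A| + |B| - |A ∪ B|, with each term an HLL estimate
    long intersectionEstimate(long countA, long countB, long countUnion) {
        return countA + countB - countUnion;
    }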

HYPERLOGLOG++

• Google researchers recently published an update to the HLL algorithm

• Uses clever encoding/decoding techniques to create a single data structure that is very accurate for small-cardinality sets and can still estimate sets with over a trillion elements

• Empirical bias correction. Observations show that most of the error in HLL comes from the bias function. Using empirically derived values significantly reduces error

• Already available in Stream-Lib!
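A quick example of using it from Stream-Lib (constructor and method names as I recall them from the 2013-era releases; check the project README for the current API):

    import com.clearspring.analytics.stream.cardinality.HyperLogLogPlus;

    public class HllPlusExample {
        public static void main(String[] args) {
            // p = 14 -> 2^14 registers in normal mode; sp = 25 enables sparse mode
            HyperLogLogPlus hll = new HyperLogLogPlus(14, 25);
            for (int i = 0; i < 1_000_000; i++) {
                hll.offer("uid-" + i);
            }
            System.out.println("estimate: " + hll.cardinality());  // ~1,000,000
        }
    }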

OTHER PROBABILISTIC DATA STRUCTURES

• Bloom Filters – set membership detection

• CountMinSketch – estimate the number of occurrences of a given element

• TopK Estimators – estimate the frequency and top elements from a stream
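For comparison, a CountMinSketch answers the frequency question much as HLL answers the cardinality one. A Stream-Lib sketch (signatures from memory; verify against the repo):

    import com.clearspring.analytics.stream.frequency.CountMinSketch;

    // depth and width trade accuracy for memory; the seed initializes the hash family
    CountMinSketch cms = new CountMinSketch(10, 2048, 42);
    cms.add("uid-123", 1);                     // record one occurrence
    long freq = cms.estimateCount("uid-123");  // may overestimate, never underestimates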

THANKS!

AddThis is hiring!
