B+-Trees and Hashing Techniques for Storage and Index Structures
Rizwan Rehman, Centre for Computer Studies
Dibrugarh University
Alternative File Organizations
• Many alternatives exist, each ideal for some situations, and not so good in others:
  • Heap files: Suitable when typical access is a file scan retrieving all records.
  • Sorted files: Best if records must be retrieved in some order, or only a `range' of records is needed.
  • Hashed files: Good for equality selections.
    • File is a collection of buckets. Bucket = primary page plus zero or more overflow pages.
    • Hashing function h: h(r) = bucket in which record r belongs. h looks at only some of the fields of r, called the search fields.
Index Classification
• Primary vs. secondary: If search key contains primary key, then called primary index.
• Unique index: Search key contains a candidate key.
• Clustered vs. unclustered: If order of data records is the same as, or `close to', order of data entries, then called clustered index.
  • Alternative 1 implies clustered, but not vice-versa.
  • A file can be clustered on at most one search key.
  • Cost of retrieving data records through index varies greatly based on whether index is clustered or not!
Clustered vs. Unclustered Index
• Suppose that Alternative (2) is used for data entries, and that the data records are stored in a Heap file.
• To build a clustered index, first sort the Heap file (with some free space on each page for future inserts).
• Overflow pages may be needed for inserts. (Thus, order of data records is `close to', but not identical to, the sort order.)
[Figure: CLUSTERED vs. UNCLUSTERED. In both cases, index entries direct search to data entries (the index file), which point to data records (the data file); in the clustered case the data records follow the order of the data entries.]
Index Classification (Contd.)
• Dense vs. Sparse: If there is at least one data entry per search key value (in some data record), then dense.
  • Alternative 1 always leads to dense index.
  • Every sparse index is clustered!
  • Sparse indexes are smaller; however, some useful optimizations are based on dense indexes.
[Figure: a sparse index on Name and a dense index on Age over the same data file. Data records: Ashby, 25, 3000; Smith, 44, 3000; Bristow, 30, 2007; Basu, 33, 4003; Cass, 50, 5004; Tracy, 44, 5004; Daniels, 22, 6003; Jones, 40, 6003. The sparse index holds entries Ashby, Cass, Smith (one per page); the dense index holds entries 22, 25, 30, 33, 40, 44, 44, 50 (one per record).]
Index Classification (Contd.)
• Composite Search Keys: Search on a combination of fields.
  • Equality query: Every field value is equal to a constant value. E.g. wrt <sal,age> index: age=20 and sal=75.
  • Range query: Some field value is not a constant. E.g.: age=20; or age=20 and sal > 10.
• Data entries in index sorted by search key to support range queries.
  • Lexicographic order (see the small sketch after the figure below), or
  • Spatial order.
[Figure: examples of composite key indexes using lexicographic order. Data records (name, age, sal), sorted by name: bob, 12, 10; cal, 11, 80; joe, 12, 20; sue, 13, 75. Data entries sorted by <sal, age>: 10,12; 20,12; 75,13; 80,11. Data entries sorted by <age, sal>: 11,80; 12,10; 12,20; 13,75. Data entries sorted by <age>: 11, 12, 12, 13; sorted by <sal>: 10, 20, 75, 80.]
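To make the lexicographic ordering concrete, here is a small Python sketch (not from the slides; the (age, sal) values follow the figure, and the exact pairing of names to values is an assumption based on the usual version of this example):

records = [("bob", 12, 10), ("cal", 11, 80), ("joe", 12, 20), ("sue", 13, 75)]  # (name, age, sal)

# Data entries of a <sal, age> index in lexicographic order:
sal_age = sorted((sal, age) for (name, age, sal) in records)
print(sal_age)                                  # [(10, 12), (20, 12), (75, 13), (80, 11)]

# An equality query fixes every field of the key:
print([e for e in sal_age if e == (75, 13)])    # [(75, 13)]

# A range query leaves some field unconstrained; e.g. age=12 and sal > 10
# matches a prefix-plus-range of the <age, sal> ordering:
age_sal = sorted((age, sal) for (name, age, sal) in records)
print([e for e in age_sal if e[0] == 12 and e[1] > 10])     # [(12, 20)]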
Physical Database Design for Relational Databases
1. Select storage structures (determine how the particular relation is physically stored).
2. Select index structures (to speed up certain queries).
3. Select … to minimize the runtime for a certain workload (e.g. a given set of queries).
Introduction: Indexing Techniques
• As for any index, 3 alternatives for data entries k*:
  1. Data record with key value k
  2. <k, rid of data record with search key value k>
  3. <k, list of rids of data records with search key k>
• Hash-based indexes are best for equality selections. Cannot support range searches.
• B+-trees are best for sorted access and range queries.
Static Hashing
• # primary pages fixed, allocated sequentially, never de-allocated; overflow pages if needed.
• h(k) mod M = bucket to which data entry with key k belongs. (M = # of buckets)
[Figure: h(key) mod N maps a key to one of N primary bucket pages (0 .. N-1), each of which may have a chain of overflow pages.]
Static Hashing (Contd.)
• Buckets contain data entries.
• Hash fn works on search key field of record r. Must distribute values over range 0 ... M-1.
  • h(key) = (a * key + b) usually works well.
  • a and b are constants; lots known about how to tune h.
• Long overflow chains can develop and degrade performance. Two approaches:
  • Global overflow area
  • Individual overflow areas for each bucket (assumed in the following)
• Extendible and Linear Hashing: dynamic techniques to fix this problem.
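As a rough illustration of the static scheme above (a sketch only; the constants a and b, the bucket count M, and the page capacity are arbitrary choices, not values from the slides):

M = 8                               # number of primary buckets, fixed up front
BUCKET_CAPACITY = 4                 # data entries per primary page
a, b = 31, 7                        # constants of the hash function

primary  = [[] for _ in range(M)]   # primary pages
overflow = [[] for _ in range(M)]   # individual overflow area per bucket

def h(key):
    return a * key + b              # h(key) = a*key + b, as on the slide

def insert(key):
    i = h(key) % M                  # bucket to which the data entry belongs
    if len(primary[i]) < BUCKET_CAPACITY:
        primary[i].append(key)
    else:
        overflow[i].append(key)     # long chains here degrade performance

def lookup(key):
    i = h(key) % M
    return key in primary[i] or key in overflow[i]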
Range Searches
• ``Find all students with gpa > 3.0''
• If data is in sorted file, do binary search to find first such student, then scan to find others.
  • Cost of binary search can be quite high.
• Simple idea: Create an `index' file. Can do binary search on (smaller) index file!
[Figure: an index file with entries k1, k2, ..., kN over a data file with pages Page 1, Page 2, Page 3, ..., Page N.]
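A hedged sketch of the idea on this slide: binary search over a small sorted index of page-boundary keys locates the first page that can contain gpa > 3.0, and a scan continues from there. The page contents and names below are made up for illustration.

import bisect

# pages[i] is a sorted run of gpa values; index[i] is the smallest key on page i.
pages = [[2.1, 2.4, 2.7], [2.8, 2.9, 3.1], [3.2, 3.5, 3.8], [3.9, 4.0]]
index = [p[0] for p in pages]               # the (much smaller) index file

def range_search(threshold):
    """Return all keys > threshold, using binary search on the index file."""
    start = max(bisect.bisect_right(index, threshold) - 1, 0)   # first candidate page
    result = []
    for page in pages[start:]:              # scan forward from that page
        result.extend(k for k in page if k > threshold)
    return result

print(range_search(3.0))                    # [3.1, 3.2, 3.5, 3.8, 3.9, 4.0]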
B+ Tree: The Most Widely Used Index
• Insert/delete at log_F N cost; keep tree height-balanced. (F = fanout, N = # leaf pages)
• Minimum 50% occupancy (except for root).
• Supports equality and range-searches efficiently.
[Figure: index entries in the upper levels support direct search; data entries (the "sequence set") sit at the leaf level.]
Example B+ Tree (order p=5, m=4)
• Search begins at root, and key comparisons direct it to a leaf (as in ISAM).
• Search for 5*, 15*, all data entries >= 24* ...
Based on the search for 15*, we know it is not in the tree!
[Figure: root node with keys 7, 16, 22, 29 over leaf pages 2* 3* 5* 7* | 14* 16* | 19* 20* 22* | 24* 27* 29* | 33* 34* 38* 39*.]
p=5 because an intermediate node of the tree can have at most 5 pointers; m=4 because a leaf node can hold at most 4 entries.
B+ Trees in Practice
• Typical order: 200. Typical fill-factor: 67%.
  • Average fanout = 133
• Typical capacities:
  • Height 4: 133^4 = 312,900,721 records
  • Height 3: 133^3 = 2,352,637 records
• Can often hold top levels in buffer pool:
  • Level 1 = 1 page = 8 KBytes
  • Level 2 = 133 pages = 1 MByte
  • Level 3 = 17,689 pages = 133 MBytes
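The capacities above follow directly from the average fanout; a quick check (assuming fanout 133 at every level):

fanout = 133                        # average fanout at 67% fill of an order-200 tree
for height in (2, 3, 4):
    print(height, fanout ** height) # 2 -> 17,689; 3 -> 2,352,637; 4 -> 312,900,721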
Inserting a Data Entry into a B+ Tree
• Find correct leaf L.
• Put data entry onto L.
  • If L has enough space, done!
  • Else, must split L (into L and a new node L2):
    • Redistribute entries evenly, copy up middle key.
    • Insert index entry pointing to L2 into parent of L.
• This can happen recursively.
  • To split an index node, redistribute entries evenly, but push up the middle key. (Contrast with leaf splits.)
• Splits "grow" the tree; a root split increases the height.
  • Tree growth: gets wider or one level taller at top.
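To make the copy-up vs. push-up distinction concrete, here is a minimal Python sketch of just the two split routines (finding the leaf and updating parents is omitted). It assumes the convention used in the drawn trees, where an intermediate node stores the maximum key of each of its first q-1 subtrees; the function names and the list-based node layout are illustrative assumptions, not the slides' code.

def split_leaf(entries):
    # Leaf split: COPY up a key; it must continue to appear in a leaf,
    # because the leaf level holds all data entries.
    mid = (len(entries) + 1) // 2
    left, right = entries[:mid], entries[mid:]
    return left, right, left[-1]            # copied-up key = max key of left leaf

def split_index(keys, children):
    # Index-node split: PUSH up the middle key; it is only a separator,
    # so it moves to the parent and stays in neither half.
    mid = len(keys) // 2
    left  = (keys[:mid], children[:mid + 1])
    right = (keys[mid + 1:], children[mid + 1:])
    return left, right, keys[mid]           # pushed-up key leaves this level

print(split_leaf([2, 3, 4, 5, 7]))          # ([2, 3, 4], [5, 7], 4)

For the slides' example, splitting the overfull leaf 2* 3* 4* 5* 7* copies 4 up while 4* stays in a leaf, and splitting the overfull index node with keys 4, 7, 16, 22, 29 pushes 16 up, leaving key groups 4, 7 and 22, 29 — matching the figures that follow.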
Inserting 4* into Example B+ Tree
• Observe how minimum occupancy is guaranteed in both leaf and intermediate node splits.
• Note difference between copy-up and push-up; be sure you understand the reasons for this.
[Figure: inserting 4* makes leaf 2* 3* 5* 7* split; entry 4 is to be inserted in the parent node (note that 4 is copied up and continues to appear in the leaf). The parent, now overfull, splits as well; entry 16 is to be inserted in its parent node (note that 16 is pushed up and only appears once in the index; contrast this with a leaf split), leaving index nodes with keys 4, 7 and 22, 29.]
Example B+ Tree After Inserting 4*
Notice that root was split, leading to increase in height.
In this example, we can avoid split by re-distributing entries; however, this is usually not done in practice.
[Figure: after inserting 4*, the root holds key 16; its left child has keys 4, 7 over leaves 2* 3* 4* | 5* 7* | 14* 16*, and its right child has keys 22, 29 over leaves 19* 20* 22* | 24* 27* 29* | 33* 34* 38* 39*.]
Deleting a Data Entry from a B+ Tree
• Start at root, find leaf L where entry belongs.
• Remove the entry.
  • If L is at least half-full, done!
  • If L has only d-1 entries (d = minimum number of entries in a leaf):
    • Try to re-distribute, borrowing from sibling (adjacent node with same parent as L).
    • If re-distribution fails, merge L and sibling.
• If merge occurred, must delete entry (pointing to L or sibling) from parent of L.
• Merge could propagate to root, decreasing height.
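A compact sketch of the decision at an underflowing leaf (re-distribute if a sibling can spare an entry, otherwise merge). Here d is the minimum number of entries in a leaf, the separator key follows the max-key convention of the drawn trees, and the function name and list representation are assumptions for illustration only.

def handle_leaf_underflow(leaf, sibling, sibling_is_right, d):
    """leaf holds fewer than d entries after a delete; sibling is an
    adjacent node with the same parent."""
    if len(sibling) > d:
        # Re-distribute: borrow one entry, then report the new separator
        # key the parent must store between the two leaves.
        if sibling_is_right:
            leaf.append(sibling.pop(0))
        else:
            leaf.insert(0, sibling.pop())
        left_node = leaf if sibling_is_right else sibling
        return "redistribute", max(left_node)
    # Re-distribution fails: merge. The caller must then delete the entry
    # (pointing to leaf or sibling) from the parent; this may propagate
    # upward and can decrease the height at the root.
    return "merge", sorted(leaf + sibling)

print(handle_leaf_underflow([22], [24, 27, 29], True, 2))   # ('redistribute', 24)
print(handle_leaf_underflow([22], [27, 29], True, 2))       # ('merge', [22, 27, 29])

The two calls mirror the next slides: deleting 20* is handled by re-distribution (the parent key becomes 24), while deleting 24* afterwards forces a merge.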
Example Tree (after inserting 4* and deleting 19* and 20*; before deleting 24*)
• Deleting 19* is easy.
• Deleting 20* is done with re-distribution. Notice that the intermediate node key had to be changed to 24.
[Figure: root with key 16; left child with keys 4, 7 over leaves 2* 3* 4* | 5* 7* | 14* 16*; right child with keys 24, 29 over leaves 22* 24* | 27* 29* | 33* 34* 38* 39*.]
... And Then Deleting 24*
• Must merge.
• Observe `toss' of index entry (on right), and `pull down' of index entry (below).
[Figure, right: after the leaf merge, the index node holds only key 29 over leaves 22* 27* 29* | 33* 34* 38* 39* (the entry 24 is tossed). Figure, below: the index-level merge pulls the root key 16 down, leaving a single root with keys 4, 7, 16, 29 over leaves 2* 3* 4* | 5* 7* | 14* 16* | 22* 27* 29* | 33* 34* 38* 39*; the height decreases.]
Example of Non-leaf Re-distribution
• Tree is shown below during deletion of 24*. (What could be a possible initial tree?)
• In contrast to previous example, can re-distribute entry from left child of root to right child.
[Figure: root with key 21; left child with keys 4, 7, 16, 18 over leaves 2* 3* 4* | 5* 7* | 14* 16* | 17* 18* | 20* 21*; right child with key 29 over leaves 22* 27* 29* | 33* 34* 38* 39*.]
After Re-distribution
• Intuitively, entries are re-distributed by `pushing through' the splitting entry in the parent node.
• It suffices to re-distribute index entry with key 20; we've re-distributed 17 as well for illustration.
[Figure: root with key 16; left child with keys 4, 7 over leaves 2* 3* 4* | 5* 7* | 14* 16*; right child with keys 18, 21, 29 over leaves 17* 18* | 20* 21* | 22* 27* 29* | 33* 34* 38* 39*.]
Clarifications B+ Tree
• B+ trees can be used to store relations as well as index structures.
• In the drawn B+ trees we assume (this is not the only scheme) that an intermediate node with q pointers stores the maximum keys of each of the first q-1 subtrees it is pointing to; that is, it contains q-1 keys.
• Before a B+-tree can be generated the following parameters have to be chosen (based on the available block size; it is assumed one node is stored in one block):
  • the order p of the tree (p is the maximum number of pointers an intermediate node might have; if it is not a root it must have between (p+1)/2 and p pointers; `/' is integer division)
  • the maximum number m of entries a leaf node can hold (in general leaf nodes, except the root, must hold between (m+1)/2 and m entries)
• Intermediate nodes usually store more entries than leaf nodes.
Why is the minimum number of pointers in an intermediate node (p+1)/2 and not p/2 + 1?
• (p+1)/2: Assume p=10; then the number of pointers is between 5 and 10; in the case of underflow without borrowing, 4 pointers have to be merged with 5 pointers, yielding a node with 9 pointers.
• p/2 + 1: Assume p=10; then the number of pointers is between 6 and 10; in the case of underflow without borrowing, 5 pointers have to be merged with 6 pointers, yielding 11 pointers, which is one too many.
• If p is odd: Assume p=11; then the number of pointers is between 6 and 11; in the case of an underflow without borrowing, a 5-pointer node has to be merged with a 6-pointer node, yielding an 11-pointer node.
Conclusion: We infer from the discussion that the minimum and maximum numbers of entries for a tree
• of height 2 are: 2*((p+1)/2)*((m+1)/2) and p*p*m
• of height 3 are: 2*((p+1)/2)*((p+1)/2)*((m+1)/2) and p*p*p*m
• of height n+1 are: 2*((p+1)/2)^n*((m+1)/2) and p^(n+1)*m
Remark: Therefore the correct answer for the homework problem (p=10; m=100) should be: minimum 2*5*50 = 500 and maximum 10*10*100 = 10,000.
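The same bounds as a small calculation (a sketch; the function name and the height argument, which follows the slide's counting, are mine). For the homework parameters p=10, m=100 and height 2 it reproduces the minimum 2*5*50 = 500 and maximum 10*10*100 = 10,000 entries.

def capacity_bounds(p, m, height):
    """Minimum and maximum number of leaf entries for the given height,
    using the formulas above ('//' is integer division)."""
    minimum = 2 * ((p + 1) // 2) ** (height - 1) * ((m + 1) // 2)
    maximum = p ** height * m
    return minimum, maximum

print(capacity_bounds(10, 100, 2))          # (500, 10000)
print(capacity_bounds(10, 100, 3))          # (2500, 100000)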
What order p and leaf entry maximum m should I choose?
Idea: One B+-tree node is stored in one block; choose maximal m and p without exceeding the block size!
Example 1: Want to store tuples of a relation E(ssn, name, salary) in a B+-tree using ssn as the search key; ssn and salary take 4 bytes each, name takes 12 bytes. B+-tree pointers take 2 bytes; the block size is 2048 bytes and the available space inside a block for B+-tree entries is 2000 bytes. Choose p and m!
p*2 + (p-1)*4 <= 2000, so p <= 2004/6 = 334; each leaf entry takes 20 bytes, so m <= 2000/20 = 100.
Answer: Choose p=334 and m=100!
[Figure: a block holds B+-tree block meta data (neighbor pointers, #entries, parent pointer, sibling bits, ...) and the storage for the B+-tree node entries.]
Choosing p and m (continued)
Example 2: Want to store an index for a relation E(ssn, name, salary) in a B+-tree using ssn as the search key; ssn's take 4 bytes; index pointers take 4 bytes; B+-tree pointers take 4 bytes; the block size is 2048 bytes and the available space inside the block for B+-tree entries is 2000 bytes. Choose p and m!
p*4 + (p-1)*4 <= 2000, so p <= 2004/8 = 250; each leaf entry (ssn plus index pointer) takes 8 bytes, so m <= 2000/8 = 250.
Answer: Choose p=250 and m=250.
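The same block-size arithmetic written out as a small sketch (the function and parameter names are mine; the sizes come from the two examples above):

def choose_p_and_m(space, tree_ptr, key, leaf_entry):
    """Largest p and m that fit in `space' bytes: an intermediate node holds
    p tree pointers and p-1 keys; a leaf holds m entries of leaf_entry bytes."""
    p = (space + key) // (tree_ptr + key)   # from p*tree_ptr + (p-1)*key <= space
    m = space // leaf_entry
    return p, m

# Example 1: whole tuples (ssn, name, salary) in the leaves, 20 bytes each.
print(choose_p_and_m(2000, tree_ptr=2, key=4, leaf_entry=20))   # (334, 100)
# Example 2: <ssn, index pointer> entries in the leaves, 8 bytes each.
print(choose_p_and_m(2000, tree_ptr=4, key=4, leaf_entry=8))    # (250, 250)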
Coping with Duplicate Keys in B+ Trees
Possible approaches:
1. Just allow duplicate keys. Consequences:
  • Search is still efficient.
  • Insertion is still efficient (but could create "hot spots").
  • Deletion faces a lot of problems: we have to follow the leaf pointers to find the entry to be deleted, and then updating the intermediate nodes might get quite complicated (can partially be solved by creating two-way node pointers).
2. Just create unique keys by using key+data (key*). Consequences:
  • Deletion is no longer a problem.
  • p (because of the larger key size) is significantly lower, and therefore the height of the tree is likely higher.
Summary B+ Tree
• Most widely used index in database management systems because of its versatility. One of the most optimized components of a DBMS.
• Tree-structured indexes are ideal for range-searches, also good for equality searches (log_F N cost).
• Inserts/deletes leave tree height-balanced; log_F N cost.
• High fanout (F) means depth rarely more than 3 or 4.
• Almost always better than maintaining a sorted file.
  • Self-reorganizing data structure
  • Typically 67%-full pages on average
Extendible Hashing
• Situation: Bucket (primary page) becomes full. Why not re-organize file by doubling # of buckets?
  • Reading and writing all pages is expensive!
• Idea: Use directory of pointers to buckets, double # of buckets by doubling the directory, splitting just the bucket that overflowed!
  • Directory much smaller than file, so doubling it is much cheaper. Only one page of data entries is split. No overflow page!
  • Trick lies in how hash function is adjusted!
Example
• Directory is array of size 4.
• To find bucket for r, take last `global depth' # bits of h(r); we denote r by h(r).
  • If h(r) = 5 = binary 101, it is in bucket pointed to by 01.
• Insert: If bucket is full, split it (allocate new page, re-distribute).
• If necessary, double the directory. (As we will see, splitting a bucket does not always require doubling; we can tell by comparing global depth with local depth for the split bucket.)
[Figure: global depth 2; directory entries 00, 01, 10, 11 point to data pages Bucket A (4* 12* 32* 16*), Bucket B (1* 5* 21* 13*), Bucket C (10*), Bucket D (15* 7* 19*), each with local depth 2.]
Insert h(r)=20 (Causes Doubling)
[Figure: inserting 20* causes Bucket A to split into Bucket A (32* 16*) and its `split image' Bucket A2 (4* 12* 20*), both with local depth 3, and the directory to double from global depth 2 (entries 00-11) to global depth 3 (entries 000-111); Buckets B (1* 5* 21* 13*), C (10*), and D (15* 7* 19*) keep local depth 2.]
Points to Note
• 20 = binary 10100. Last 2 bits (00) tell us r belongs in A or A2. Last 3 bits needed to tell which.
• Global depth of directory: Max # of bits needed to tell which bucket an entry belongs to.
• Local depth of a bucket: # of bits used to determine if an entry belongs to this bucket.
• When does a bucket split cause directory doubling?
  • Before insert, local depth of bucket = global depth. Insert causes local depth to become > global depth; directory is doubled by copying it over and `fixing' the pointer to the split image page. (Use of least significant bits enables efficient doubling via copying of directory!)
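A hedged sketch of the directory mechanics described above — least-significant bits, bucket splitting, and doubling by copying when the local depth would exceed the global depth. Bucket capacity, the use of Python's built-in hash, and the class layout are simplifying assumptions, not the slides' definitions.

BUCKET_CAPACITY = 4

class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.entries = []

class ExtendibleHash:
    def __init__(self):
        self.global_depth = 1
        self.directory = [Bucket(1), Bucket(1)]     # indexed by last global_depth bits

    def _bucket(self, key):
        return self.directory[hash(key) & ((1 << self.global_depth) - 1)]

    def insert(self, key):
        bucket = self._bucket(key)
        if len(bucket.entries) < BUCKET_CAPACITY:
            bucket.entries.append(key)
            return
        if bucket.local_depth == self.global_depth:
            # Doubling: copy the directory over (least-significant bits make
            # this a plain copy) and increase the global depth.
            self.directory = self.directory + self.directory
            self.global_depth += 1
        self._split(bucket)
        self.insert(key)                            # retry; may split again on skew

    def _split(self, bucket):
        bucket.local_depth += 1
        image = Bucket(bucket.local_depth)          # `split image' of the bucket
        old, bucket.entries = bucket.entries, []
        # Fix pointers: directory slots whose new distinguishing bit is 1
        # now point to the split image.
        for i, b in enumerate(self.directory):
            if b is bucket and (i >> (bucket.local_depth - 1)) & 1:
                self.directory[i] = image
        for k in old:                               # re-distribute old entries
            self._bucket(k).entries.append(k)

eh = ExtendibleHash()
for k in [32, 16, 4, 12, 13, 1, 21, 5, 10, 15, 7, 19, 20]:
    eh.insert(k)
print(eh.global_depth)                              # 3, as in the final figure above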
Directory Doubling
• Why use least significant bits in directory? Allows for doubling via copying!
[Figure: a global-depth-2 directory (00, 01, 10, 11) doubled to depth 3, shown once using least significant bits (000 ... 111; the old directory is simply copied over) and once using most significant bits, illustrated with an entry 6* where 6 = binary 110.]
Comments on Extendible Hashing
• If directory fits in memory, equality search answered with one disk access; else two.
  • 100MB file, 100 bytes/rec, 4K pages contains 1,000,000 records (as data entries) and 25,000 directory elements; chances are high that directory will fit in memory.
  • Directory grows in spurts and, if the distribution of hash values is skewed, directory can grow large.
  • Multiple entries with same hash value cause problems!
• Delete: If removal of data entry makes bucket empty, can be merged with `split image'. If each directory element points to same bucket as its split image, can halve directory.
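The directory-size figure follows from simple arithmetic (approximating 100MB as 100*10^6 bytes and a 4K page as 4000 bytes, and assuming each data entry is a full 100-byte record):

file_size   = 100 * 10**6            # 100MB file
record_size = 100                    # bytes per record (= per data entry)
page_size   = 4 * 10**3              # 4K pages

records          = file_size // record_size       # 1,000,000 data entries
entries_per_page = page_size // record_size       # 40 entries per bucket page
buckets          = records // entries_per_page    # 25,000 pages, hence roughly
print(records, entries_per_page, buckets)         # 25,000 directory elements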
Linear Hashing
• This is another dynamic hashing scheme, an alternative to Extendible Hashing.
• LH handles the problem of long overflow chains without using a directory, and handles duplicates.
• Idea: Use a family of hash functions h0, h1, h2, ...
  • h_i(key) = h(key) mod (2^i * N); N = initial # buckets
  • h is some hash function (its range is not 0 to N-1)
  • If N = 2^d0, for some d0, then h_i consists of applying h and looking at the last d_i bits, where d_i = d0 + i.
  • h_{i+1} doubles the range of h_i (similar to directory doubling)
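A small sketch of the hash-function family only (the bucket-splitting schedule of Linear Hashing is not shown); N, d0, and the use of Python's built-in hash are illustrative choices.

N, d0 = 4, 2                          # initial # buckets, with N = 2**d0

def h(key):
    return hash(key)                  # some hash function with a large range

def h_i(i, key):
    """i-th member of the family: h(key) mod (2**i * N), i.e. the last
    d0 + i bits of h(key). h_{i+1} doubles the range of h_i."""
    return h(key) % (2**i * N)

# The two readings coincide, since 2**i * N = 2**(d0 + i):
assert all(h_i(i, 12345) == h(12345) & ((1 << (d0 + i)) - 1) for i in range(5))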