
Page 1:

Disk Storage, Basic File Structures, and Hashing

Chapter 13

Byung-Hyun Ha

bhha@pusan.ac.kr

Page 2:

Table of Contents

Disk Storage Devices

Files of Records

Operations on Files

Unordered Files

Ordered Files

Hashed Files

RAID Technology

Page 3:

Introduction

Computer storage media
• Primary storage
• Secondary storage

Memory hierarchies and storage devices
Primary storage level
• cache memory – static RAM
• DRAM
Secondary storage level
• magnetic disks
• CD-ROM, optical jukebox memories, DVD
• magnetic tapes, tape jukeboxes
Flash memory

Storage of databases

Page 4:

Disk Storage Devices

Preferred secondary storage device for its high storage capacity and low cost

Magnetic disk surfaces
• data stored as magnetized areas
A disk pack contains several magnetic disks connected to a rotating spindle
Each disk surface is divided into concentric circular tracks
• track capacities vary, typically from 4 to 50 Kbytes
• each track is divided into smaller blocks or sectors

Page 5:

Disk Storage Devices

A single-sided disk (a) and a disk pack (b)

Page 6:

Disk Storage Devices

Different sector organizations on disk
(a) Sectors subtending a fixed angle
(b) Sectors maintaining a uniform recording density

Page 7:

Disk Storage Devices

Blocks
Because a track usually contains a large amount of information, it is divided into smaller blocks
Block size B is fixed for each system
Typical block sizes range from B = 512 bytes to B = 4096 bytes
Whole blocks are transferred between disk and main memory for processing

Transferring disk blocks
A read-write head moves to the track that contains the block to be transferred
Disk rotation moves the block under the read-write head for reading or writing
Typically 12 to 60 msec in total (seek time + rotational delay + block transfer time)
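As a rough illustration of these figures, a single random block access can be estimated as seek time + rotational delay + block transfer time. The sketch below is not from the slides; the parameter values (8 ms seek, 7200 rpm, 4 KB blocks, 100 MB/s transfer) are assumed, typical numbers.

# Minimal sketch: estimating the time to read one random disk block.
def block_access_time_ms(seek_ms=8.0, rpm=7200, block_bytes=4096,
                         transfer_rate_mb_s=100.0):
    rotational_delay_ms = 0.5 * (60_000 / rpm)      # half a revolution on average
    transfer_ms = block_bytes / (transfer_rate_mb_s * 1_000_000) * 1000
    return seek_ms + rotational_delay_ms + transfer_ms

print(round(block_access_time_ms(), 2))             # about 12 ms with these assumed values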

Page 8:

Disk Storage Devices

Double buffering
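As a hedged illustration of the idea behind the figure, the sketch below overlaps reading the next block with processing the current one, using two buffers and a background thread; read_block and process are hypothetical placeholders, not an actual I/O API.

import threading

def read_block(blocks, i, buf):
    buf[:] = blocks[i]                    # stands in for filling a buffer from disk

def process(buf):
    pass                                  # stands in for processing the records in a block

def scan_with_double_buffering(blocks):
    # While block i is processed from one buffer, block i+1 is read into the other.
    buffers = [bytearray(), bytearray()]
    read_block(blocks, 0, buffers[0])
    for i in range(len(blocks)):
        reader = None
        if i + 1 < len(blocks):
            reader = threading.Thread(target=read_block,
                                      args=(blocks, i + 1, buffers[(i + 1) % 2]))
            reader.start()                # I/O for block i+1 proceeds in the background
        process(buffers[i % 2])           # CPU works on block i in the meantime
        if reader:
            reader.join()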

Page 9:

Placing File Records on Disk

Records
Fixed-length and variable-length records
Records contain fields which have values of a particular type (e.g., amount, date, time, age)
Fields themselves may be fixed length or variable length
Variable-length fields can be mixed into one record
• separator characters or length fields are needed so that the record can be “parsed”
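To make the difference concrete, here is a small sketch (not from the slides; the field names and sizes are illustrative) laying out a record once with fixed-length fields and once with variable-length fields delimited by a separator character.

import struct

# Fixed-length record: NAME (30 bytes), SSN (9 bytes), SALARY (4-byte int), AGE (1 byte)
fixed_fmt = "<30s9sib"
fixed_rec = struct.pack(fixed_fmt, b"Smith, John", b"123456789", 55000, 34)
print(struct.calcsize(fixed_fmt), len(fixed_rec))   # every record is exactly 44 bytes

# Variable-length record: fields separated by '|', so the record must be parsed
var_rec = b"Smith, John|123456789|55000|34"
print(var_rec.split(b"|"))                          # length depends on the field values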

Page 10:

Placing File Records on Disk

(a) A fixed-length record with six fields and size of 71 bytes

(b) A record with variable-length fields and fixed-length fields

(c) A variable-field record with three types of separator characters

Page 11:

Files of Records

Block: the unit of data transfer

File: a sequence of records
• each record is a collection of data values (or data items)

File descriptor
• includes information that describes the file, such as the field names and their data types, and the addresses of the file blocks on disk

Records are stored on disk blocks
• the blocking factor bfr for a file is the (average) number of file records stored in a disk block

A file can have fixed-length records or variable-length records
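The blocking factor calculation can be written out directly; the numbers below are chosen for illustration and are not from the slides.

import math

B = 4096                      # block size in bytes (assumed)
R = 100                       # fixed record size in bytes (assumed)
r = 30_000                    # number of records in the file (assumed)

bfr = B // R                  # blocking factor: 40 records per block
b = math.ceil(r / bfr)        # 750 blocks needed for the file
unused = B - bfr * R          # 96 bytes left unused per block (unspanned records)
print(bfr, b, unused)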

Page 12:

Files of Records

Unspanned or spanned records

Page 13:

Operations on Files

Typical file operations: OPEN, FIND, FINDNEXT, READ, INSERT, DELETE, MODIFY, CLOSE, REORGANIZE, READ_ORDERED

Page 14:

Files of Unordered Records

Also called a heap or a pile file

New records are inserted at the end of the file

Searching for a record requires a linear search through the file blocks
• this requires reading and searching half the file blocks on the average, and is hence quite expensive

Record insertion is quite efficient

Reading the records in order of a particular field requires sorting the file records

Page 15:

Files of Ordered Records

Also called a sequential file
File records are kept sorted by the values of an ordering field
Insertion is expensive
• records must be inserted in the correct order
• it is common to keep a separate unordered overflow (or transaction) file for new records to improve insertion efficiency; this is periodically merged with the main ordered file
A binary search can be used to search for a record on its ordering field value
• this requires reading and searching about log2(b) of the file blocks on the average (b = number of file blocks), an improvement over linear search
Reading the records in order of the ordering field is quite efficient

Page 16:

Files of Ordered Records

An example

Page 17:

Files of Ordered Records

Binary search for a key value K of a disk file

Algorithm: Binary search on an ordering key of a disk file

l ← 1; u ← b;  (* b is the number of file blocks *)
while (u ≥ l) do
begin
  i ← (l + u) div 2;
  read block i of the file into the buffer;
  if K < (ordering key field value of the first record in block i)
    then u ← i – 1
  else if K > (ordering key field value of the last record in block i)
    then l ← i + 1
  else if the record with ordering key field value = K is in the buffer
    then goto found
  else goto notfound
end;
goto notfound;
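A runnable version of the algorithm above, with the disk file simulated as a list of blocks, each block a sorted list of ordering-key values; this is a sketch for illustration, not the textbook's code.

def binary_search_blocks(blocks, K):
    l, u = 1, len(blocks)                 # b = number of file blocks, 1-based as above
    while u >= l:
        i = (l + u) // 2
        buffer = blocks[i - 1]            # "read block i of the file into the buffer"
        if K < buffer[0]:                 # K is before the first record in block i
            u = i - 1
        elif K > buffer[-1]:              # K is after the last record in block i
            l = i + 1
        else:
            return (i, K in buffer)       # K belongs in block i; found, or not in the file
    return (None, False)

blocks = [[2, 5, 8], [11, 14, 19], [23, 27, 31]]
print(binary_search_blocks(blocks, 14))   # (2, True)
print(binary_search_blocks(blocks, 15))   # (2, False)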

Page 18:

Average Access Times
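The comparison usually summarized here is roughly b/2 block reads on average for a linear search on an unordered file versus about log2(b) for a binary search on an ordered file; the sketch below works this out for an assumed file size.

import math

b = 1000                                  # number of file blocks (assumed)
print(b / 2)                              # 500.0 average block reads, unordered (heap) file
print(math.ceil(math.log2(b)))            # 10 block reads, ordered file with binary search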

Page 19:

Hashing Technique

Hashing
Search uses a hash function that yields the address for a given hash key value
The search condition must be an equality condition on the hash field
Collision: two different keys hash to the same address (e.g., mango and honeydew both hash to 6 below)

hash("apple") = 5hash("watermelon") = 3hash("grapes") = 8hash("cantaloupe") = 7hash("kiwi") = 0hash("strawberry") = 9hash("mango") = 6hash("banana") = 2hash("honeydew") = 6

Hash table with addresses 0–9 (figure): 0 kiwi, 2 banana, 3 watermelon, 5 apple, 6 mango (honeydew collides at 6), 7 cantaloupe, 8 grapes, 9 strawberry

Page 20:

Hashing Technique

Collision resolution
• open addressing
• chaining
• multiple hashing
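A minimal in-memory sketch of two of these strategies, chaining and open addressing with linear probing; the table size and the use of Python's built-in hash() are assumptions for illustration, not the hash function of the figure above.

M = 10
h = lambda key: hash(key) % M              # placeholder hash function

# Chaining: each address holds a list of the colliding keys
chained = [[] for _ in range(M)]
def chain_insert(key):
    chained[h(key)].append(key)

# Open addressing (linear probing): on collision, try the next address
table = [None] * M
def probe_insert(key):
    i = h(key)
    while table[i] is not None:            # assumes the table never fills up completely
        i = (i + 1) % M
    table[i] = key

for k in ["apple", "mango", "honeydew"]:
    chain_insert(k)
    probe_insert(k)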

Page 21:

External Hashing for Disk Files

Hashing for disk files is called external hashing
The file blocks are divided into M equal-sized buckets
• typically, a bucket corresponds to one (or a fixed number of) disk block(s)
One of the file fields is designated to be the hash key of the file
The record with hash key value K is stored in bucket i, where i = h(K) and h is the hashing function
Search is very efficient on the hash key
Collisions occur when a new record hashes to a bucket that is already full
• an overflow file is kept for storing such records; overflow records that hash to each bucket can be linked together
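A minimal sketch of static external hashing (an illustration, not the textbook's code): M buckets of fixed capacity, a common choice of h(K) = K mod M, and an overflow chain per bucket; M, CAPACITY, and the key values are assumed.

M = 4                                      # number of buckets (fixed)
CAPACITY = 2                               # records per bucket, i.e. roughly one disk block

buckets = [[] for _ in range(M)]
overflow = [[] for _ in range(M)]          # overflow records linked per bucket

def h(K):
    return K % M

def insert(K):
    i = h(K)
    (buckets[i] if len(buckets[i]) < CAPACITY else overflow[i]).append(K)

def search(K):
    i = h(K)
    return K in buckets[i] or K in overflow[i]

for K in [4, 8, 12, 5, 9, 6]:
    insert(K)
print(buckets, overflow)                   # 12 collides into the overflow chain of bucket 0
print(search(12), search(7))               # True False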

Page 22:

External Hashing for Disk Files

Matching bucket numbers to disk block addresses

Page 23:

External Hashing for Disk Files

To reduce overflow records, a hash file is typically kept 70-80% full

The hash function h should distribute the records uniformly among the buckets
• otherwise, search time will be increased because many overflow records will exist

Main disadvantages of static external hashing
• the fixed number of buckets M is a problem if the number of records in the file grows or shrinks
• ordered access on the hash key is quite inefficient (requires sorting the records)

Page 24:

External Hashing for Disk Files

Overflow handling

Page 25:

Dynamic Hashing Technique

Hashing techniques can be adapted to allow dynamic growth and shrinking of the number of file records

These techniques include extendible hashing and linear hashing

Extendible hashing
• uses the binary representation of the hash value h(K) to access a directory
• the directory is an array of size 2^d, where d is called the global depth
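A compact sketch of the directory mechanics described above, indexing by the leading d bits of h(K); BITS, CAPACITY, the key values, and the simplified overflow handling are assumptions, so this is an illustration rather than a full implementation.

BITS = 8                                   # width of the hash values (assumed)
CAPACITY = 2                               # records per bucket (assumed)

class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.keys = []

d = 1                                      # global depth
directory = [Bucket(1), Bucket(1)]         # array of size 2^d

def h(K):
    return K % (1 << BITS)

def index(K):
    return h(K) >> (BITS - d)              # leading d bits of h(K)

def insert(K):
    global d, directory
    b = directory[index(K)]
    if len(b.keys) < CAPACITY:
        b.keys.append(K)
        return
    if b.local_depth == d:                 # no spare directory bit left: double the directory
        directory = [e for e in directory for _ in (0, 1)]
        d += 1
    b.local_depth += 1                     # split bucket b into b and new
    new = Bucket(b.local_depth)
    old_keys, b.keys = b.keys + [K], []
    for i, e in enumerate(directory):      # repoint half of b's directory entries to new
        if e is b and (i >> (d - b.local_depth)) & 1:
            directory[i] = new
    for k in old_keys:                     # redistribute; a full implementation
        directory[index(k)].keys.append(k) # would split again on a fresh overflow

for K in [0x25, 0xC1, 0x7A, 0x33, 0xE8, 0x5F]:
    insert(K)
print(d, [b.keys for b in directory])      # shared buckets appear once per directory entry

When a bucket whose local depth equals the global depth overflows, the directory doubles; otherwise only the bucket splits and half of its directory entries are repointed.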

Page 26:

Dynamic Hashing Technique

Extendible hashing

Page 27:

Dynamic Hashing Technique

Linear hashing
Another dynamic hashing scheme, an alternative to extendible hashing
Motivation: extendible hashing uses a directory that grows by doubling. Can we do better? (smoother growth)
LH: split buckets from left to right, regardless of which one overflowed (simple, but it works!)

Page 28:

Example of Linear Hashing

Initially: h(x) = x mod N (N = 4 here); assume 3 records per bucket
Insert 17: 17 mod 4 = 1

Bucket id:  0       1          2    3
Contents:   4, 8    5, 9, 13   6    7, 11

Page 29:

Example of Linear Hashing

Initially: h(x) = x mod N (N = 4 here); assume 3 records per bucket
Insert 17: 17 mod 4 = 1

Bucket id:  0       1          2    3
Contents:   4, 8    5, 9, 13   6    7, 11

Overflow for bucket 1
Split bucket 0, anyway!

Page 30:

Example of Linear Hashing

To split bucket 0, use another function h1(x): h0(x) = x mod N, h1(x) = x mod (2*N)

Bucket id:  0       1          2    3
Contents:   4, 8    5, 9, 13   6    7, 11
Overflow of bucket 1: 17
Split pointer at bucket 0

Page 31:

Example of Linear Hashing

To split bucket 0, use another function h1(x): h0(x) = x mod N, h1(x) = x mod (2*N)

Bucket id:  0    1          2    3       4
Contents:   8    5, 9, 13   6    7, 11   4
Overflow of bucket 1: 17
Split pointer at bucket 1

Page 32:

Example of Linear Hashing

To split bucket 0, use another function h1(x): h0(x) = x mod N, h1(x) = x mod (2*N)

Bucket id:  0    1          2    3       4
Contents:   8    5, 9, 13   6    7, 11   4
Overflow of bucket 1: 17

Page 33:

Example of Linear Hashing

h0(x) = x mod N, h1(x) = x mod (2*N)
Insert 15 and 3

Bucket id:  0    1          2    3       4
Contents:   8    5, 9, 13   6    7, 11   4
Overflow of bucket 1: 17

Page 34:

Example of Linear Hashing

h0(x) = x mod N, h1(x) = x mod (2*N)

Bucket id:  0    1        2    3           4    5
Contents:   8    9, 17    6    7, 11, 15   4    5, 13
Overflow of bucket 3: 3

Page 35:

Example of Linear Hashing

h0(x) = x mod N (for the un-split buckets)
h1(x) = x mod (2*N) (for the split ones)

Bucket id:  0    1        2    3           4    5
Contents:   8    9, 17    6    7, 11, 15   4    5, 13
Overflow of bucket 3: 3
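The example above can be played back with a short sketch of linear hashing: h0(x) = x mod N, h1(x) = x mod 2N, and an overflow always splits the bucket at the split pointer, left to right. The code itself is an illustration, not the slides' implementation.

N = 4                     # initial number of buckets, as in the example
CAPACITY = 3              # records per bucket, as in the example
buckets = [[4, 8], [5, 9, 13], [6], [7, 11]]
overflow = {}             # bucket id -> overflow records
split = 0                 # split pointer

def address(x):
    i = x % N                             # h0
    if i < split:                         # bucket already split: rehash with h1
        i = x % (2 * N)
    return i

def insert(x):
    global split
    i = address(x)
    if len(buckets[i]) < CAPACITY:
        buckets[i].append(x)
        return
    overflow.setdefault(i, []).append(x)  # record goes to the overflow chain...
    buckets.append([])                    # ...and the bucket at the split pointer is split
    moved = buckets[split] + overflow.pop(split, [])
    buckets[split] = []
    for y in moved:                       # redistribute the split bucket with h1
        j = y % (2 * N)
        (buckets[j] if len(buckets[j]) < CAPACITY else
         overflow.setdefault(j, [])).append(y)
    split += 1                            # when split reaches N, N doubles and split resets

for x in [17, 15, 3]:
    insert(x)
print(buckets)            # [[8], [9, 17], [6], [7, 11, 15], [4], [5, 13]]
print(overflow, split)    # {3: [3]} 2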

Page 36:

Other Primary File Organizations

Files of mixed records
• connecting fields
• logical vs. physical file reference
• physical clustering

B-trees and other data structures

Page 37:

Parallelizing Disk Access

Secondary storage technology must take steps to keep up in performance and reliability with processor technology

A major advance in secondary storage technology is represented by the development of RAID, which originally stood for Redundant Arrays of Inexpensive Disks

The main goal of RAID is to even out the widely different rates of performance improvement of disks against those in memory and microprocessors

Page 38:

RAID Technology

A natural solution is a large array of small independent disks acting as a single higher-performance logical disk. A concept called data striping is used, which utilizes parallelism to improve disk performance.

Data striping distributes data transparently over multiple disks to make them appear as a single large, fast disk.
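A minimal sketch (not from the slides) of block-level striping: logical block i of a file goes to disk (i mod n), at position (i div n) on that disk; the array size n is assumed.

n = 4                                     # number of disks in the array (assumed)

def place(logical_block):
    return logical_block % n, logical_block // n    # (disk number, block within that disk)

print([place(i) for i in range(8)])
# [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]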

Page 39:

Improving Reliability with RAID

Likelihood of failure: Mean Time To Failure (MTTF)
• one disk: 200,000 hours
• n disks: (200,000 / n) hours

Improving reliability by introducing redundancy
• mirroring or shadowing
• storing extra information: parity bits or Hamming codes
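Working the MTTF figure above through for an assumed array of 100 disks:

mttf_one_disk = 200_000                   # hours, from the slide
n = 100                                   # disks in the array (assumed)
print(mttf_one_disk / n)                  # 2000.0 hours, i.e. a failure roughly every 12 weeks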

Page 40:

RAID Technology

Based on striping and redundant information
RAID level 0 uses no redundant data and hence has the best write performance
RAID level 1 uses mirrored disks
RAID level 2 uses memory-style redundancy by using Hamming codes, which contain parity bits for distinct overlapping subsets of components
RAID level 3 uses a single parity disk, relying on the disk controller to figure out which disk has failed
RAID levels 4 and 5 use block-level data striping, with level 5 distributing data and parity information across all disks
RAID level 6 applies the so-called P + Q redundancy scheme, using Reed-Solomon codes to protect against up to two disk failures with just two redundant disks
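As a hedged illustration of the parity idea behind levels 3-5, the sketch below computes a stripe's parity block as the XOR of its data blocks and reconstructs a lost block from the survivors; the block contents are made up.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

stripe = [b"AAAA", b"BBBB", b"CCCC"]       # data blocks of one stripe, on three disks
parity = xor_blocks(stripe)                # parity block, stored on a fourth disk

lost = stripe[1]                           # suppose the disk holding the second block fails
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == lost                     # the failed block is recovered by XOR-ing the rest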

Page 41:

RAID Technology

Multiple levels of RAID