
Distributed File System Review

Schubert Zhang

May 2008


Slide 2

File Systems

• Google File System (GFS)

• Kosmos File System (KFS)

• Hadoop Distributed File System (HDFS)

• GlusterFS

• Red Hat Global File System

• Lustre

• Summary


Slide 3

Google File System (GFS)


Slide 4

Google File System (GFS)

• An application-oriented file system, designed for specific workloads:
  – Search engines
  – Grid computing applications
  – Data mining applications
  – Other applications that generate and process large data sets

• Workload characteristics:
  – Performance, scalability, reliability, and availability requirements
  – Large, distributed, data-intensive applications
  – Large/huge files (tens of MB to tens of GB in size)
  – Primarily write-once/read-many
  – Appending rather than overwriting
  – Mostly sequential access
  – Emphasis on high sustained throughput of data access rather than low latency

• System requirements:
  – Inexpensive commodity hardware that may often fail
  – Adequate memory on the master server
  – GbE network interfaces

• Architecture:
  – A client and a chunkserver often run on the same machine
  – Fixed-size chunks (usually 64 MB); chunk metadata is held in the master's memory
  – Files are replicated at chunk granularity (usually 3 replicas per chunk)
  – A single master and multiple chunkservers, accessed by multiple clients


Slide 5

Google File System (GFS)

• Single master server, the metadata server:
  – Namespaces (files and chunks)
  – File access control information
  – Mapping from files to chunks
  – Locations of chunk replicas
  – Metadata kept in memory
  – Namespaces and mappings persisted to disk via checkpoints and an operation log
  – Namespace management and locking
  – Metadata HA and fault tolerance
  – Replica placement: rack-aware replica placement policy (a placement sketch follows below)
  – Chunk creation, re-replication, and rebalancing
  – Chunkserver management (heartbeats and control)
  – Chunk lease management
  – Garbage collection
  – Minimizes the master's involvement in all operations
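
Note: the rack-aware placement policy above is described only at a high level in the GFS paper. The Java snippet below is a hypothetical illustration of one common heuristic (first replica near the writer, the other two sharing one remote rack); the class and method names are invented, and this is not GFS master code.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a rack-aware placement heuristic for three replicas:
// one replica near the writer, the other two sharing a single remote rack.
// All class and method names are invented for illustration only.
public class ReplicaPlacer {
    private final Map<String, String> rackOf;   // chunkserver id -> rack id

    public ReplicaPlacer(Map<String, String> rackOf) { this.rackOf = rackOf; }

    public List<String> choose(String writerRack) {
        List<String> servers = new ArrayList<>(rackOf.keySet());
        Collections.shuffle(servers);           // crude load spreading
        List<String> chosen = new ArrayList<>();

        // 1st replica: a server on the writer's rack, if one exists.
        addFirstMatching(servers, chosen, writerRack, true);
        // 2nd replica: a server on any other rack.
        addFirstMatching(servers, chosen, writerRack, false);
        // 3rd replica: prefer the 2nd replica's rack to limit cross-rack traffic.
        if (chosen.size() == 2) {
            addFirstMatching(servers, chosen, rackOf.get(chosen.get(1)), true);
        }
        // Fallback: fill up to three replicas with any remaining servers.
        for (String s : servers) {
            if (chosen.size() >= 3) break;
            if (!chosen.contains(s)) chosen.add(s);
        }
        return chosen;
    }

    // Adds the first not-yet-chosen server whose rack does (or does not) match.
    private void addFirstMatching(List<String> servers, List<String> chosen,
                                  String rack, boolean sameRack) {
        for (String s : servers) {
            if (chosen.contains(s)) continue;
            if (rackOf.get(s).equals(rack) == sameRack) { chosen.add(s); return; }
        }
    }
}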



Slide 6

Google File System (GFS)

• Large number of chunkservers:
  – No caching of file data
  – Lazy chunk allocation
  – Leases; data replication chains
  – Block checksums
  – Chunk state reports to the master
  – P2P replication: replication pipelining and cloning

• Large number of clients:
  – The client library is linked into each application
  – Clients interact with the master only for metadata operations
  – Data-bearing communication goes directly to the chunkservers
  – No caching of file data, but metadata is cached
  – Clients translate a file offset into a chunk index (see the sketch below)
  – Applications/clients must work around the limitations of the GFS implementation
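
Note: the offset-to-chunk translation mentioned above is plain arithmetic on the fixed 64 MB chunk size. The Java sketch below is illustrative only; the real GFS client library is not public, so these names are invented.

// Illustrative sketch of how a GFS-style client turns a byte offset
// into (chunk index, offset within chunk) for a fixed 64 MB chunk size.
public final class ChunkMath {
    public static final long CHUNK_SIZE = 64L * 1024 * 1024;  // 64 MB

    public static long chunkIndex(long fileOffset) {
        return fileOffset / CHUNK_SIZE;          // which chunk holds this byte
    }

    public static long offsetInChunk(long fileOffset) {
        return fileOffset % CHUNK_SIZE;          // position inside that chunk
    }

    public static void main(String[] args) {
        long offset = 200L * 1024 * 1024;        // byte 200 MB of the file
        System.out.println(chunkIndex(offset));      // 3 (chunks 0..2 cover 0..192 MB)
        System.out.println(offsetInChunk(offset));   // 8388608 (8 MB into chunk 3)
    }
}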


Slide 7

Google File System (GFS)

• Cluster scale and performance:
  – Thousands of disks on over a thousand machines
  – Hundreds of TB or several PB of storage
  – Hundreds or thousands of clients

• Limitations:
  – No standard API such as POSIX.
  – File system operations are not fully integrated.
  – Some performance issues depend on the application and client implementation.
  – GFS does not guarantee that all replicas are byte-wise identical; it only guarantees that the data is written at least once as an atomic unit. Record append is atomic "at least once", so GFS may insert padding or duplicate records in between (see the reader sketch below).
  – Applications/clients may read from a stale chunk replica; readers must deal with it.
  – If an application's write is large or straddles a chunk boundary, fragments from different clients may be interleaved with it.
  – Requires tight cooperation from applications.
  – No support for hard links or soft links.
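
Note: because record append is only atomic "at least once", the GFS paper suggests that writers embed checksums and unique record IDs so readers can discard padding and duplicates. The Java sketch below illustrates such a reader-side filter with an invented record framing (length, record id, CRC32, payload); it is an assumption-laden illustration, not GFS code.

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashSet;
import java.util.Set;
import java.util.zip.CRC32;

// Hypothetical reader-side filter for a GFS-style record-append stream.
// Assumed framing per record: 4-byte length, 8-byte record id, 4-byte CRC32
// of the payload, then the payload. Padding fails the length/CRC checks and
// is skipped; duplicated appends are dropped using the record id.
public class RecordReader {
    private final DataInputStream in;
    private final Set<Long> seenIds = new HashSet<>();

    public RecordReader(InputStream in) { this.in = new DataInputStream(in); }

    public byte[] nextRecord() throws IOException {
        while (true) {
            int len;
            try { len = in.readInt(); } catch (EOFException eof) { return null; }
            if (len <= 0 || len > 16 * 1024 * 1024) continue;   // padding / garbage
            long id = in.readLong();
            long crc = in.readInt() & 0xffffffffL;
            byte[] payload = new byte[len];
            in.readFully(payload);
            CRC32 c = new CRC32();
            c.update(payload);
            if (c.getValue() != crc) continue;                  // corrupted region, skip
            if (!seenIds.add(id)) continue;                     // duplicate append, skip
            return payload;
        }
    }
}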


Slide 8

Google File System (GFS)

• Needs further components for completeness:
  – Chubby (distributed lock service and consistency)
  – BigTable (a distributed storage system for structured data)
  – etc.


Slide 9

Kosmos File System (KFS)

• An open-source implementation of the Google File System

[Architecture diagram: many clients (applications linked with the KFS client library) for distributed computing; a KFS meta-data server (with HA); and many KFS block servers, each storing blocks on a local Linux FS, for distributed storage. Clients send FS operations and location/signaling traffic to the meta-data server, exchange block signaling ("block talk") with the block servers, and stream block data directly to/from the block servers; the block servers also signal each other ("block team talk") and replicate block data among themselves.]


Slide 10

Kosmos File System (KFS)

• Architecture:
  – Meta-data server = Google FS master
  – Block server = Google FS chunkserver
  – Client library = Google FS client

• Workload characteristics:
  – Primarily write-once/read-many workloads
  – A few million large files, where each file is on the order of a few tens of MB to a few tens of GB in size
  – Mostly sequential access

• Implemented in C++
  – Client APIs for C++, Java, and Python


Slide 11

Kosmos File System (KFS)

• Valued stuff:
  – Client write cache (Google says it is not necessary)
  – FUSE support: KFS exports a POSIX file interface; Hadoop does not (GFS does not, either)
  – Monitoring tools and a shell
  – Deployment scripts
  – Job placement and local-read optimization
  – Can be integrated with Hadoop: replace HDFS and use Hadoop's map-reduce (patch to Hadoop JIRA-1963); a configuration sketch follows below
  – KFS supports atomic append; HDFS does not
  – KFS supports rebalancing; HDFS does not
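
Note: the Hadoop integration above plugs KFS in as a Hadoop FileSystem implementation. The sketch below shows how a job configuration might be pointed at KFS; the property names (fs.kfs.impl, fs.kfs.metaServerHost, fs.kfs.metaServerPort), the glue class name, and the host/port are recalled assumptions, not verified against a particular Hadoop or KFS release.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hedged sketch: point a Hadoop Configuration at a KFS meta-data server so that
// file system calls go through the KFS client library instead of HDFS.
// Property names and the glue class are assumptions, not verified settings.
public class KfsOnHadoop {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed glue class contributed by the HADOOP-1963 patch.
        conf.set("fs.kfs.impl", "org.apache.hadoop.fs.kfs.KosmosFileSystem");
        conf.set("fs.kfs.metaServerHost", "metaserver.example.com");  // placeholder host
        conf.set("fs.kfs.metaServerPort", "20000");                   // placeholder port
        conf.set("fs.default.name", "kfs://metaserver.example.com:20000");

        FileSystem fs = FileSystem.get(conf);              // resolves to the KFS client
        System.out.println(fs.exists(new Path("/data")));  // simple sanity check
    }
}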

• Status and limitations:
  – Not well implemented yet
  – No real users
  – We failed to build a usable program from it
  – Similar limitations to Google FS


Slide 12

Kosmos File System (KFS)

• Client support via FUSE

[Client implementation diagram: a client application (e.g. the shell command ls) calls through glibc into the kernel VFS; operations on the local file system (/) go to ext3, while operations under /mnt/kfs (the FUSE mount) go through the FUSE kernel module and libfuse (the FUSE user-space programming library) to the KFS client, which forwards FS operations to the KFS meta-data server and block servers and passes the operation results back up the same path.]


Slide 13

Hadoop Distributed File System (HDFS)

• An open-source implementation of the Google File System
• HDFS relaxes a few POSIX requirements to enable streaming access to file system data
• Grew out of infrastructure for the Apache Nutch project
• "Moving computation is cheaper than moving data"
• Portability across heterogeneous hardware and software platforms; implemented in Java
  – Java client API (a minimal usage sketch follows at the end of this slide)
  – C language wrapper for the Java API
  – HTTP browser interface

• Architecture (master/slave):
  – Namenode = Google FS master server
  – Datanodes = Google FS chunkservers
  – Clients = Google FS clients
  – Blocks = Google FS chunks

• Namenode safe mode
• Persistence of file system metadata, like Google FS
  – Periodic checkpoints not yet supported
• Communication protocols
  – RPCs
• Staging: client-side data buffering (like a POSIX implementation)
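
Note: the Java client API above is the primary way applications use HDFS. The snippet below is a minimal usage sketch against the org.apache.hadoop.fs API (FileSystem, Path, FSDataInputStream/FSDataOutputStream); the namenode URI and file path are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal HDFS client sketch: write a file, then read it back.
// "hdfs://namenode:9000" and "/tmp/hello.txt" are placeholder values.
public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://namenode:9000"); // configuration key of this era

        FileSystem fs = FileSystem.get(conf);
        Path p = new Path("/tmp/hello.txt");

        FSDataOutputStream out = fs.create(p, true);   // overwrite if it exists
        out.writeUTF("hello hdfs");
        out.close();

        FSDataInputStream in = fs.open(p);
        System.out.println(in.readUTF());
        in.close();
        fs.close();
    }
}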


Slide 14

Hadoop Distributed File System (HDFS)


Slide 15

Hadoop Distributed File System (HDFS)

• Status and limitations:
  – Similar limitations to Google FS
  – Appending writes to files not yet supported
  – User quotas and access permissions not yet implemented
  – Replica placement policy not yet complete
  – Periodic checkpoints of metadata not yet supported
  – Rebalancing not yet supported
  – Snapshots not yet supported

• Who's using HDFS:
  – Facebook (implements a read-only FUSE layer over HDFS; 300 nodes)
  – Yahoo! (1000 nodes)
  – Various non-commercial uses (log analysis, search, etc.)


Slide 16

GlusterFS

• Gluster targets specific tasks such as HPC clustering, storage clustering, enterprise provisioning, database clustering, etc.
  – GlusterFS
  – GlusterHPC


Slide 17

GlusterFS


Slide 18

GlusterFS


Slide 19

GlusterFS

[Architecture diagram: an application (e.g. shell ls) issues POSIX calls through the VFS, fuse.ko, and libfuse to the GlusterFS client; the client talks to a storage server cluster made up of namespace bricks (replicated with AFR) and file data bricks (AFR, stripe, etc.).]


Slide 20

GlusterFS

• Architecture:
  – Different from the GoogleFS family
  – No meta-data server, no master server
  – A user-space logical volume management approach
  – Server node machines export disk storage as bricks; the brick nodes store the distributed files in an underlying Linux file system
  – The file namespace is also stored on storage bricks, just like the file data bricks, except that the files are zero-sized
  – Bricks (file data or namespace) support replication
  – NFS-like disk layout

• Interconnect:
  – InfiniBand RDMA (high throughput)
  – TCP/IP

• Features:
  – FUSE support, complete POSIX interface
  – AFR (mirroring)
  – Self-heal
  – Stripe (note: not well implemented)


Slide 21

GlusterFS

• Valued stuff:
  – Easy to set up for a moderate-sized cluster
  – FUSE and POSIX
  – Scheduler modules for balancing
  – Flexible performance tuning
  – Design:
    • Stackable modules (translators), implemented as run-time .so plugins
    • Not tied to I/O profiles, hardware, or OS
  – Well tested, with different representative benchmarks
  – Performance and simplicity are better than Lustre's

• Limitations:
  – Lacks a global management function; no master
  – The AFR function depends on configuration and lacks automation and flexibility
  – Currently cannot add new bricks automatically
  – If a master component were added, it would be a better cluster FS

• Who's using GlusterFS:
  – Indian Institute of Technology Kanpur: 24-brick GlusterFS storage over InfiniBand
  – Other small cluster projects


Slide 22

Red Hat Global File System

• Red Hat Cluster Suite

• It is a shared-storage solution, i.e. a traditional approach.

• Depends on:
  – Red Hat Cluster Suite components
  – Configuration and management functions
    • Conga (luci and ricci)
  – CLVM
  – DLM
  – GNBD
  – SAN/NAS/DAS


Slide 23

Red Hat Global File System

• Deployment options:
  – GFS with a SAN (superior performance and scalability)
  – GFS and GNBD with a SAN (performance, scalability, moderate price)
  – GFS and GNBD with directly connected storage (economy and performance)


Slide 24

Red Hat Global File System

• GFS functions:
  – Making a file system
  – Mounting a file system
  – Unmounting a file system
  – GFS quota management
  – Growing a file system
  – Adding journals to a file system
  – Direct I/O
  – Data journaling
  – Configuring atime updates
  – Suspending activity on a file system
  – Displaying extended GFS information and statistics
  – Repairing a file system
  – Context-Dependent Path Names (CDPN)

• Cluster volume management:
  – Aggregates multiple physical volumes into a single logical device across all nodes in a cluster
  – Provides a logical view of the storage to GFS

• Lock management
• Cluster management, fencing, and recovery
• Cluster configuration management


Slide 25

Red Hat Global File System

• Status:
  – It is a shared-storage solution
  – The solution is far from our target
  – A little too complicated and not easy to manage
  – High performance and scalability require high-end storage hardware and network (e.g. a SAN)
  – The implementation is not simple


Slide 26

Lustre

• Sun Microsystems

• Targets 10,000s of nodes, petabytes of storage, and 100 GB/s of throughput.

• Lustre is kernel software that interacts with storage devices; a deployment must be correctly installed, configured, and administered to reduce the risk of security issues or data loss.

• It uses Object-Based Storage Devices (OSDs) to manage entire file objects (inodes) instead of blocks.

• Components:
  – Meta Data Servers (MDSs)
  – Object Storage Targets (OSTs)
  – Lustre clients

• Lustre is a little too complex for us to use.

• But it appears to be a proven and reliable file system.


Slide 27

Lustre OSD Architecture


Slide 28

Summary

• Shared

• Cluster

• Parallel

• Cloud


Slide 29

Summary

• Cluster Volume Managers

• SAN File Systems

• Cluster File Systems

• Parallel NFS (pNFS)

• Object-based Storage Devices (OSD)

• Global/Parallel File System

• Distribution/cluster/parallel level:
  – Volume level (block based)
  – File or file system level (file, block, or object based, the latter for OSD)
  – Database or application level

• Directly at the storage or in the network


Slide 30

Summary

• Traditional/historical:
  – Block level: volume management
    • EMC PowerPath (PPVM)
    • HP Shared LVM
    • IBM LVM
    • MACROIMPACT SAN CVM
    • Red Hat LVM
    • SANBOLIC LaScala
    • VERITAS
  – File/file system level:
    • Local disk FS
    • Distributed: NAS, Samba, AFP, DFS, AFS, RFS, Coda…
    • SAN FS
  – App/DB level: RDBMS, email systems

• Advanced/recent: file/FS level
  – Distributed: WAFS (NAS extension), NFM, GlobalFS, SANFS, ClusterFS