
Page 1: XFUSE: An Infrastructure for Running Filesystem Services

XFUSE: An Infrastructure for Running Filesystem Services in User Space

Qianbo Huai∗, Windsor Hsu∗, Jiwei Lu∗, Hao Liang†, Haobo Xu∗ and Wei Chen∗

∗Alibaba Group, Sunnyvale, California, USA

†Alibaba Group, Shenzhen, Guangdong, China

Page 2: XFUSE: An Infrastructure for Running Filesystem Services

User Space Filesystem

Benefits
• Higher development efficiency and velocity
• Decreased dependency on the OS

Concerns
• Performance
• RAS (Reliability, Availability and Serviceability)
• Application and build changes may be required

Page 3: XFUSE: An Infrastructure for Running Filesystem Services

Related Work
• FUSE: an interface for user-space programs to export a filesystem to the Linux kernel.
• FUSE-based filesystems are accessible through the standard kernel interfaces.
• Large body of work on improving FUSE performance:
  • E.g. ExtFUSE allows applications to register "thin" specialized request handlers in the kernel to improve performance.
• AVFS uses LD_PRELOAD to intercept libc POSIX API entry points and invoke filesystem ops without a context switch.
• ZUFS leverages persistent memory, having its kernel module copy data directly between source and destination and eliminating the extra copy to/from the buffer cache.
• NVFUSE is an embeddable user-space filesystem library built on SPDK that submits I/O requests directly to NVMe SSDs.
• Re-FUSE is a framework that provides support for restartable user space filesystems.

Page 4: XFUSE: An Infrastructure for Running Filesystem Services

Our Contribution: XFUSE

XFUSE
• Backward compatible with FUSE
• Improves performance and RAS for XFUSE-optimized filesystems
• Facilitates large-scale and gradual rollout in production

Designed for user space filesystems that
• Use high speed storage devices
  • PMEM, fast SSDs, distributed storage systems based on high-performance networks
• Are deployed in production environments
  • With strict RAS requirements

Page 5: XFUSE: An Infrastructure for Running Filesystem Services

Agenda

• Request Flow in FUSE
• XFUSE Improvements
  • Adaptive waiting to reduce latency
  • Increased parallelism to improve throughput
  • Online upgrade for better RAS
• Performance Evaluation
  • Parametric analysis
  • System-level performance

• Conclusion

Page 6: XFUSE: An Infrastructure for Running Filesystem Services

FUSE Request Flow
Request flow
• Application makes a syscall (e.g. via the POSIX API) to a FUSE-mounted filesystem
• The FUSE request travels from the app thread (via fuse.ko) to a filesystem daemon thread
• The FUSE reply travels, in the reverse direction, from the daemon thread back to the app thread

A synchronous FUSE request may involve two event waits in the kernel
• Daemon thread: waits for pending requests if none is available at the time
• App thread: waits for request completion

Notes:

- Certain details (such as the background queue and async I/O) are omitted; the omission does not affect our discussion

[Figure: FUSE request flow. The user application's syscall is processed by fuse.ko, whose device handler places the request on the pending queue; a daemon thread in the filesystem process (libfuse, FUSE API, filesystem handler) fetches and processes it and sends the reply back, with event waits on both the daemon side (for pending requests) and the application side (for completion).]
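To ground the picture: at the lowest level the daemon fetches requests with read() on the fuse device and answers them with write(), with the kernel-side event wait hidden inside the blocking read(). The sketch below is illustrative only and is not how libfuse/libxfuse structure their loops; the header structs come from <linux/fuse.h>, while the function itself and its blanket ENOSYS reply are assumptions for illustration.

```c
/* Illustrative sketch of a FUSE daemon's fetch/reply loop (not a real
 * filesystem): the daemon blocks in read() on /dev/fuse until a request is
 * pending, then answers it with a single write(). */
#include <errno.h>
#include <linux/fuse.h>
#include <unistd.h>

static void daemon_loop(int fuse_fd)
{
    char buf[FUSE_MIN_READ_BUFFER];

    for (;;) {
        /* The kernel-side event wait happens inside this read() whenever
         * no request is pending on the queue. */
        ssize_t n = read(fuse_fd, buf, sizeof(buf));
        if (n < (ssize_t)sizeof(struct fuse_in_header))
            continue;

        struct fuse_in_header *in = (struct fuse_in_header *)buf;

        /* Hypothetical dispatch: a real daemon would switch on in->opcode
         * (FUSE_LOOKUP, FUSE_READ, ...) and build an opcode-specific reply. */
        struct fuse_out_header out = {
            .len    = sizeof(out),
            .error  = -ENOSYS,     /* "not implemented" for every opcode */
            .unique = in->unique,  /* ties the reply to the request      */
        };
        (void)write(fuse_fd, &out, sizeof(out));  /* wakes the waiting app thread */
    }
}
```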

Page 7: XFUSE: An Infrastructure for Running Filesystem Services

XFUSE improvements

Page 8: XFUSE: An Infrastructure for Running Filesystem Services

Adaptive Waiting
Problem
• Kernel event-wait and notification take a few µs to deliver
• With high-performance storage, data may become available sooner

Add an initial busy-wait period
• End-to-end latency can be as low as 3~4 µs (vs. 8~9 µs under event-wait)

Effectiveness of busy-wait depends on
• Performance characteristics of the filesystem and storage
• Thread placement (same vs. different CPUs)
• Workload

Adaptive busy-event wait (or adaptive waiting)
• Dynamically predicts whether busy-wait is beneficial, and
• Turns busy-wait on/off accordingly

[Flowchart: wait for condition with adaptive busy-wait. If the condition is already true, return. Otherwise busy-wait, yielding the CPU between checks, until either the condition becomes true or the busy-wait period expires; then fall back to an event wait. On completion, record the observed latency and recompute the busy-wait period.]
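The flow above can be sketched in user-space C as follows. The real implementation lives in xfuse.ko and libxfuse and is not public; the struct layout, the 5 µs event-wait overhead constant, and the pthread condition variable standing in for the kernel event wait are all assumptions.

```c
/* Sketch of adaptive busy-event wait: busy-poll a condition for a short,
 * dynamically chosen period, then fall back to a blocking event wait. */
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

struct waiter {
    atomic_bool     ready;           /* the condition being waited on          */
    pthread_mutex_t lock;
    pthread_cond_t  cond;            /* stands in for the kernel event wait    */
    uint64_t        last_latency_ns; /* latency observed on the previous wait  */
    uint64_t        busy_period_ns;  /* current busy-wait budget (e.g. 10 us)  */
};

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Busy-wait only if the previous wait completed within
 * busy_wait_period + event_wait_overhead (the wait-decision rule). */
static bool busy_wait_predicted_useful(const struct waiter *w)
{
    const uint64_t event_wait_overhead_ns = 5000;   /* ~5 us, assumed */
    return w->last_latency_ns < w->busy_period_ns + event_wait_overhead_ns;
}

static void wait_for_ready(struct waiter *w)
{
    uint64_t start = now_ns();

    if (busy_wait_predicted_useful(w)) {
        while (now_ns() - start < w->busy_period_ns) {
            if (atomic_load(&w->ready))
                goto done;
            sched_yield();           /* be polite to other runnable threads */
        }
    }

    /* Busy-wait budget exhausted (or disabled): block on the event. */
    pthread_mutex_lock(&w->lock);
    while (!atomic_load(&w->ready))
        pthread_cond_wait(&w->cond, &w->lock);
    pthread_mutex_unlock(&w->lock);

done:
    /* Feeds the next prediction; a fuller version would also recompute
     * w->busy_period_ns here, as in the flowchart. */
    w->last_latency_ns = now_ns() - start;
}
```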

Page 9: XFUSE: An Infrastructure for Running Filesystem Services

Increased Parallelism
FUSE
• New request → pending queue (one per mount)
• Request fetched → processing queue (one per FD)

XFUSE
• Introduces multiple request pending queues
• Groups each pair of pending and processing queues as a channel
• New request → channel (per selection policy)

Benefits
• Daemon threads work on their own pending and processing queues
• Reduced kernel lock contention between daemon threads

[Figure: XFUSE request flow. Same structure as the FUSE diagram, except that xfuse.ko maintains multiple channels (pending/processing queue pairs); each daemon thread in the filesystem process (libxfuse, XFUSE API, filesystem handler) serves its own channel, and adaptive waiting replaces plain event waits on both sides.]
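A minimal user-space sketch of the channel idea, assuming a layout of per-channel pending/processing queue pairs each guarded by its own lock. The actual xfuse.ko data structures are not public, so every name here is hypothetical.

```c
/* Sketch: each channel owns a pending queue (requests not yet fetched) and a
 * processing queue (requests fetched by a daemon thread, awaiting a reply),
 * with its own lock so daemon threads on different channels don't contend. */
#include <pthread.h>
#include <sys/queue.h>   /* BSD-style TAILQ macros, available in glibc */

struct xfuse_req {
    TAILQ_ENTRY(xfuse_req) link;
    /* opcode, unique id, payload ... */
};

struct xfuse_channel {
    pthread_mutex_t lock;                 /* per-channel, not per-mount */
    TAILQ_HEAD(, xfuse_req) pending;      /* new requests land here     */
    TAILQ_HEAD(, xfuse_req) processing;   /* fetched, awaiting reply    */
};

struct xfuse_mount {
    unsigned int          nr_channels;    /* e.g. 24 in the evaluation  */
    struct xfuse_channel *channels;
};

/* Enqueue a new request on the channel chosen by the selection policy. */
static void enqueue_request(struct xfuse_mount *m, struct xfuse_req *req,
                            unsigned int channel_index)
{
    struct xfuse_channel *ch = &m->channels[channel_index % m->nr_channels];

    pthread_mutex_lock(&ch->lock);
    TAILQ_INSERT_TAIL(&ch->pending, req, link);
    pthread_mutex_unlock(&ch->lock);
}
```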

Page 10: XFUSE: An Infrastructure for Running Filesystem Services

Online Upgrade
Business needs
• Fast-paced rollout of new features and bug fixes for user space filesystems
• Minimal disruption to the tens or hundreds of mounts and apps on each host during an upgrade

Online upgrade solution
• Extend libfuse to support the online upgrade workflow and state transition
• Monitor service
  • Coordinates the interactions between the two filesystem daemons
  • Assists the transfer of filesystem internal state, including FDs to the special fuse device

[Figure: online upgrade workflow between the previous and upgraded filesystem instances (both linked against libxfuse, on top of xfuse.ko), coordinated by the monitor service, which holds the FDs to the xfuse device. Legible steps: 1. CLI installs the new package; 2. upgraded instance starts; 5. previous instance stops fetching requests; 6. previous instance finishes the requests it has already fetched and saves its state; 9. upgraded instance loads the state; 10. upgraded instance starts fetching requests.]

[Figure: throughput (MB/s) vs. time (s) over a ~90 s run spanning an online upgrade, annotated at the points where the upgrade is initiated, the package is prepared, request fetching pauses, and the upgrade completes.]
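The talk does not specify how the monitor service transfers the open descriptors to the xfuse device between instances; one standard mechanism would be SCM_RIGHTS fd passing over a Unix domain socket, sketched below (sending side only; the function name and the one-byte framing are assumptions).

```c
/* Sketch: pass an open fd (e.g. the xfuse device fd) to another process over
 * a connected AF_UNIX socket using an SCM_RIGHTS control message. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int send_fd(int unix_sock, int fd_to_pass)
{
    char dummy = 'x';                     /* must send at least one data byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union {                               /* correctly aligned control buffer */
        struct cmsghdr hdr;
        char buf[CMSG_SPACE(sizeof(int))];
    } ctrl;
    memset(&ctrl, 0, sizeof(ctrl));

    struct msghdr msg = {
        .msg_iov        = &iov,
        .msg_iovlen     = 1,
        .msg_control    = ctrl.buf,
        .msg_controllen = sizeof(ctrl.buf),
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;        /* kernel installs a dup of the fd
                                             in the receiving process */
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return sendmsg(unix_sock, &msg, 0) < 0 ? -1 : 0;
}
```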

Page 11: XFUSE: An Infrastructure for Running Filesystem Services

Performance Evaluation

Page 12: XFUSE: An Infrastructure for Running Filesystem Services

Parametric Analysis
Objectives
• Understand the effects of policy choices and tuning parameters
• Project potentially achievable performance

Method
• Explore aspects of XFUSE individually

Aspects
• Waiting strategy in adaptive waiting
• Placement of app and daemon threads
• Channel selection for new FUSE requests

Test setup
• Dedicated Linux 4.19.91 servers on Alibaba Cloud
• 24 channels in XFUSE
• 24 threads in TimingFS
• Threads can be configured to be affined to specific CPUs

TimingFS
• User space filesystem, via the FUSE lowlevel API
• Optimized to probe aspects of XFUSE individually
• Can emulate the timing characteristics of storage systems
  • E.g. READ copies 4 KB randomly from a large file
  • PMEM-like: replies to xfuse.ko immediately
  • SSD-like: delays 100 µs before replying

[Figure: test setup. A reader application in user space issues requests through xfuse.ko in the kernel to TimingFS via libxfuse.]
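TimingFS itself is not public, but its storage emulation amounts to optionally delaying each reply; a libfuse lowlevel read handler in that spirit might look like the sketch below. All names are assumptions, the 100 µs delay matches the SSD-like setting, and the backing buffer is presumed preallocated elsewhere.

```c
/* Sketch of a TimingFS-style read handler: serve 4 KB from memory and, in
 * SSD-like mode, delay the reply by ~100 us before sending it. */
#define FUSE_USE_VERSION 35
#include <fuse_lowlevel.h>
#include <stdlib.h>
#include <unistd.h>

static int    ssd_like = 1;        /* 0: PMEM-like (reply immediately)        */
static char  *backing;             /* large in-memory file, preallocated      */
static size_t backing_size;        /* assumed to be a multiple of 4096, > 0   */

static void timingfs_read(fuse_req_t req, fuse_ino_t ino, size_t size,
                          off_t off, struct fuse_file_info *fi)
{
    (void)ino; (void)off; (void)fi;

    if (ssd_like)
        usleep(100);               /* emulate ~100 us device latency */

    /* Copy from a random 4 KB-aligned offset in the backing buffer, so the
     * data itself is irrelevant and only the timing path is measured. */
    size_t chunk = size < 4096 ? size : 4096;
    size_t pos   = ((size_t)rand() % (backing_size / 4096)) * 4096;
    fuse_reply_buf(req, backing + pos, chunk);
}

static const struct fuse_lowlevel_ops timingfs_ops = {
    .read = timingfs_read,
    /* lookup, getattr, open, ... omitted in this sketch */
};
```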

Page 13: XFUSE: An Infrastructure for Running Filesystem Services

Parametric Analysis: Waiting Strategy
How I/O performance is affected by
• Varying the busy-wait period (note: "0 µs" disables busy-wait and is essentially event-wait only)
• The wait-decision algorithm; the threshold for turning busy-wait on/off

Findings
• PMEM-like: a 10 µs busy-wait gives a good balance between performance and CPU usage
• SSD-like: the last latency value is sufficient to predict the latency of the current request
• SSD-like: adaptive waiting outperforms busy-wait-only when the system is under load

[Figures: latency (µs) vs. throughput (MB/s) for the SSD-like (two panels) and PMEM-like configurations, with busy-wait periods of 0, 5, 10, 15 and 20 µs; the panels compare performance with busy-event wait and with adaptive busy-event wait.]

Wait-decision rule:
• threshold = busy_wait_period + event_wait_overhead = 10 µs + 5 µs = 15 µs
• if observed_latency < threshold: do busy-event wait; else: do event wait

Page 14: XFUSE: An Infrastructure for Running Filesystem Services

Parametric Analysis: Thread Placement
In production environments where thread placement can be controlled, place the app thread and its corresponding daemon thread as follows:
• PMEM-like storage: different CPUs
  • Two threads affined to the same CPU cannot busy-wait for each other
• SSD-like storage: same CPU
  • Event notification on the local CPU is faster than across CPUs

[Figures: latency (µs) vs. throughput (MB/s) for the PMEM-like and SSD-like configurations under event-wait and adaptive-wait, with the app thread and its daemon thread on the same CPU vs. across CPUs. Insets illustrate the two placements: both threads sharing a channel on CPUi, or the app thread on CPUi and the daemon thread on CPUj.]
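Where placement can be controlled, pinning each thread takes only a few lines with the pthread affinity API; a sketch follows (the CPU numbers in the usage comment are arbitrary examples).

```c
/* Sketch: pin the application thread and the daemon thread either to the same
 * CPU (the SSD-like recommendation) or to different CPUs (the PMEM-like
 * recommendation, so they can busy-wait for each other). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static int pin_to_cpu(pthread_t thread, int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(thread, sizeof(set), &set);
}

/* e.g. PMEM-like: pin_to_cpu(app_thread, 2); pin_to_cpu(daemon_thread, 3);
 *      SSD-like:  pin_to_cpu(app_thread, 2); pin_to_cpu(daemon_thread, 2); */
```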

Page 15: XFUSE: An Infrastructure for Running Filesystem Services

Parametric Analysis: Channel Selection
Findings
• Best strategy: evenly distribute requests across all channels
• Avoid policies that keep switching to an idle channel, which renders busy-wait ineffective (see the RR line in the PMEM-like figure)
• The PID and HASH policies perform well in repeated tests
  • The PID policy is computationally cheaper; the HASH policy consistently avoids skewed request distribution

[Figures: latency (µs) vs. throughput (MB/s) for the PMEM-like and SSD-like configurations under the channel selection policies PID, CPUID, RR, ST and HASH.]

Channel selection
• channel_index = val % channel_num, where val is:
  • PID: thread id
  • CPUID: id of the current CPU
  • RR: round-robin, i.e. val = ++channel_index
  • ST: thread start time
  • HASH: hash of the thread id; compute 3 different hashes and select the channel with the shortest queue
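Putting the policies together, selection reduces to how val is computed before the modulo; below is a sketch of the PID policy and of the HASH policy's three-hash, shortest-queue choice. The hash function and the queue-length accessor are placeholders, not the actual xfuse.ko code.

```c
/* Sketch of two channel-selection policies:
 *   PID:  cheap, index by the calling thread id.
 *   HASH: compute 3 different hashes of the thread id and pick the candidate
 *         channel with the shortest pending queue. */
#define _GNU_SOURCE
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

extern unsigned int channel_queue_len(unsigned int idx);   /* placeholder */

static uint64_t mix(uint64_t x, uint64_t seed)              /* placeholder hash */
{
    x ^= seed;
    x *= 0x9e3779b97f4a7c15ull;
    return x ^ (x >> 32);
}

static unsigned int select_channel_pid(unsigned int nr_channels)
{
    return (unsigned int)syscall(SYS_gettid) % nr_channels;
}

static unsigned int select_channel_hash(unsigned int nr_channels)
{
    uint64_t tid = (uint64_t)syscall(SYS_gettid);
    unsigned int best = mix(tid, 1) % nr_channels;

    /* Power-of-d-choices: 3 hashes, keep the candidate with the shortest queue. */
    for (uint64_t seed = 2; seed <= 3; seed++) {
        unsigned int cand = mix(tid, seed) % nr_channels;
        if (channel_queue_len(cand) < channel_queue_len(best))
            best = cand;
    }
    return best;
}
```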

Page 16: XFUSE: An Infrastructure for Running Filesystem Services

Parametric Analysis: XFUSE vs FUSE
• Project the best-case performance that XFUSE can achieve
• XFUSE configuration:
  • Adaptive busy-event wait: busy-wait period 10 µs, event-wait overhead 5 µs
  • 24 channels; 24 threads in TimingFS, one for each physical core

[Figures: latency (µs) vs. throughput (MB/s) for the PMEM-like and SSD-like configurations, comparing XFUSE (same CPU), XFUSE (cross CPU) and FUSE.]

Page 17: XFUSE: An Infrastructure for Running Filesystem Services

[Figure: system-level test setups, each running Filebench in user space over RAMDisk, FastDisk or SlowDisk storage. Setup 1: Filebench on kernel-mode EXT4. Setup 2: Filebench on StackFS through fuse.ko and libfuse, with StackFS backed by EXT4. Setup 3: Filebench on StackFS through xfuse.ko and libxfuse, with StackFS backed by EXT4.]

System-Level Performance
Set up a common basis for comparing XFUSE, FUSE and regular kernel-mode EXT4
• Err on the side of being conservative for XFUSE

Evaluate the performance potential of XFUSE
• In cases where FUSE has a significant gap with EXT4

Filebench simulates workloads

• Web-Server, Random-Read, File-Create

Storage types

• RAMDisk: PMEM-like

• FastDisk: SSD-like cloud disk. Avg 4 KB read latency: 115 µs. Max 80K IOPS

• SlowDisk: cloud disk. Avg 4 KB read latency: 250 µs. Max 5K IOPS

Page 18: XFUSE: An Infrastructure for Running Filesystem Services

System-Level Performance: Results

[Figures: latency vs. throughput (1000 ops/s) comparing EXT4, FUSE and XFUSE for the Random-Read and Web-Server workloads on RAMDisk, FastDisk and SlowDisk, and for the File-Create workload (4 threads) on RAMDisk and SlowDisk.]

RAMDisk (PMEM-like)
• XFUSE closes the performance gap with kernel-mode EXT4
• For random-read, XFUSE achieves 3x the throughput of FUSE

FastDisk (SSD-like)
• XFUSE offers significant benefit over FUSE
• For random-read, XFUSE delivers the full throughput of the FastDisk, maxing out at 80K IOPS

SlowDisk
• Performance is bottlenecked more by the storage than by the conduit to user space

File-Create
• XFUSE outperforms FUSE for RAMDisk and FastDisk, but by a smaller margin
• The benefit of XFUSE over FUSE is limited by the scalability of StackFS and EXT4


Page 19: XFUSE: An Infrastructure for Running Filesystem Services

XFUSE: a FUSE-compatible framework for filesystems in user space

Enables significantly higher-performing user space filesystems
• Delivers round-trip latency in the 4 µs range and throughput exceeding 8 GB/s

Supports filesystems with strict RAS requirements in production

Page 20: XFUSE: An Infrastructure for Running Filesystem Services

Questions: [email protected]

WE ARE HIRING IN SUNNYVALE, CA

Thank You