

KRACE: Data Race Fuzzing for Kernel File Systems

Meng Xu, Sanidhya Kashyap, Hanqing Zhao, Taesoo Kim

Georgia Institute of Technology

Abstract—Data races occur when two threads fail to use proper synchronization when accessing shared data. In kernel file systems, which are highly concurrent by design, data races are common mistakes and often wreak havoc on the users, causing inconsistent states or data losses. Prior fuzzing practices on file systems have been effective in uncovering hundreds of bugs, but they mostly focus on the sequential aspect of file system execution and do not comprehensively explore the concurrency dimension and hence, forgo the opportunity to catch data races.

In this paper, we bring coverage-guided fuzzing to the concurrency dimension with three new constructs: 1) a new coverage tracking metric, alias coverage, specially designed to capture the exploration progress in the concurrency dimension; 2) an evolution algorithm for generating, mutating, and merging multi-threaded syscall sequences as inputs for concurrency fuzzing; and 3) a comprehensive lockset and happens-before modeling for kernel synchronization primitives for precise data race detection. These components are integrated into KRACE, an end-to-end fuzzing framework that has discovered 23 data races in ext4, btrfs, and the VFS layer so far, and 9 are confirmed to be harmful.

I. INTRODUCTION

In the current multi-core era, concurrency has been a major thrust for performance improvements, especially for system software. As is evident in kernel and file system evolutions [1–4], a whole zoo of programming paradigms is introduced to exploit multi-core computation, including but not limited to asynchronous work queues, read-copy-update (RCU), and optimistic locking such as sequence locks. However, alongside performance improvements, concurrency bugs also find their ways to the code base and have become particularly detrimental to the reliability and security of file systems due to their devastating effects such as deadlocks, kernel panics, data inconsistencies, and privilege escalations [5–12].

In the broad spectrum of concurrency bugs, data races are an important class in which two threads erroneously access a shared memory location without proper synchronization or ordering. Obstructed by the non-determinism in thread interleavings, data races are notoriously difficult to detect and diagnose, as they only show up in rare interleavings that require precise timing to trigger. Even worse, unlike memory errors that tend to crash the system immediately upon triggering, data races do not usually raise visible signals in the short term and are often identified retrospectively when analyzing assertion failures or warnings in production logs [13].

As the state of the practice, file system developers often rely on stress testing to find data races proactively [14, 15]. By saturating a file system with intensive workloads, the chance of triggering uncommon thread interleavings, and thus data races, can be increased. However, while useful, stress testing has significant shortcomings: handwritten test suites are far from sufficient to cover the enormous state space in file system execution, not to mention keeping up with the rapid increase in file system size and complexity.

More recently, coverage-guided fuzzing has proven to be a useful complement to handwritten test suites, with thousands of vulnerabilities found in userspace programs [16–20]. Without a doubt, kernel file systems can be fuzzed, and generic OS fuzzers [21–23] have demonstrated their viability with over 200 bugs found. In addition, file system-specific fuzzers, Janus [5] and Hydra [6], have extended the scope of file system fuzzing from memory errors into a broad set of semantic bugs, while the data race-specific fuzzer, Razzer [24], has shed light on data race detection by combining fuzzing and static analysis. At the core of these fuzzers is the coverage measurement scheme, which summarizes unique program behaviors triggered by a given input in bitmaps. The fuzzer compares per-input coverage against the accumulated coverage bitmaps to measure the “novelty” of the input and determines whether it should serve as the seed for future fuzzing rounds.
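The novelty check these fuzzers perform can be sketched minimally as follows. This is a simplified Python illustration of the bitmap comparison described above, not the actual code of any of the cited fuzzers; the function and variable names are hypothetical:

```python
def is_novel(input_bitmap, global_bitmap):
    """Compare one execution's coverage bitmap against the accumulated
    bitmap, merge it in, and report whether any new slot was hit."""
    novel = False
    for slot, hit in enumerate(input_bitmap):
        if hit and not global_bitmap[slot]:
            global_bitmap[slot] = 1
            novel = True
    return novel
```

An input whose bitmap lights up no new slot is deemed non-interesting and is typically discarded rather than kept as a seed.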

However, almost all existing coverage-guided fuzzers focus on tracking the sequential aspect of program execution only and fail to treat concurrency as a first-class citizen. To illustrate, branch coverage (i.e., control flow transition between basic blocks) has been the predominant coverage measurement metric. But such a metric captures little information about thread interleavings: different interleavings are likely to result in the same branch coverage (Figure 2), while only a small fraction may trigger a data race (Figure 3).

With the sequential view of program execution, existing kernel fuzzers have been very effective in mutating and synthesizing single-threaded syscall sequences based on seed traces [25, 26] to maximize branch coverage. But no heuristics have been proposed for synthesizing multi-threaded sequences to maximize thread interleaving coverage. Last but not least, given that data races often lead to silent failures, treating only kernel panics or assertions as bug signals is not sufficient: a data race checker that handles kernel complexity is needed.

To bring coverage-guided fuzzing to the concurrency dimension, in this paper, we present KRACE, an end-to-end fuzzing framework that fills the gap with new components in three fundamental aspects of kernel file system fuzzing:

Coverage tracking [§III]. KRACE adopts two coverage tracking mechanisms. Branch coverage is tracked as usual to capture code exploration in the sequential dimension, analogous to the line coverage metric used in unit testing. In addition, to approximate exploration progress in the concurrency domain, KRACE proposes a novel coverage metric: alias instruction pair coverage, short for alias coverage. Conceptually, if we could collect all pairs of memory access instructions X↔Y such that X in one thread may interleave against Y in another thread, alias coverage tracks how many such interleaving points have been covered in execution. Consequently, if the growth of alias coverage stalls, it signals the fuzzer to stop probing for new interleavings in the current multi-threaded seed input.

Input generation [§IV]. KRACE generates and mutates individual syscalls according to a specification [21, 27]. The novel part of KRACE lies in evolving multi-threaded seeds and merging them in an interleaved manner to preserve already-found coverage as well as to maximize the chances of inducing new interleavings. Another job of the input generator is to produce thread schedules, to explore this hidden input space. Although enforcing fine-grained control over thread scheduling is possible [7], such a scheduling algorithm does not scale to whole-kernel concurrency, as the latter consists of not only user threads, but also background threads internally forked by file systems, work queues, the block layer, loop devices, RCUs, etc., and the total number of contexts often exceeds 60 at runtime. As a result, KRACE adopts a lightweight delay injection scheme and relies on the alias coverage metric as feedback to determine whether more delay schedules are needed.
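A delay injection scheme of this flavor can be sketched as follows. This is a hypothetical userspace simplification (KRACE's actual injector operates on instrumented memory accesses inside the kernel); the names are ours:

```python
import random

def make_delay_schedule(num_hooks, max_delay_us, seed=None):
    """Assign a random delay (in microseconds) to each instrumented
    memory-access hook. Executing a test case under different schedules
    perturbs thread interleavings without a fine-grained scheduler."""
    rng = random.Random(seed)
    return [rng.randrange(max_delay_us + 1) for _ in range(num_hooks)]
```

The fuzzer would keep drawing fresh schedules for a seed as long as alias coverage keeps growing, and move on once it stalls.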

Bug manifestation [§V]. KRACE incorporates an in-house developed detector to reason about data races given an execution trace. In essence, KRACE hooks every memory access, and for each pair of accesses to the same memory address, KRACE checks whether 1) they belong to two threads and at least one is a memory write; 2) these two accesses are strictly ordered (i.e., the happens-before relation); and 3) at least one shared lock exists that guards such accesses (i.e., lockset analysis). The challenges for KRACE lie in modeling the diverse set of kernel synchronization mechanisms comprehensively, especially those uncommon primitives such as optimistic locking, RCU, and ad-hoc schemes implemented in each file system.
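The three checks above can be sketched as one predicate. This is a minimal Python sketch of the stated conditions, not KRACE's detector; the access representation and names are our assumptions:

```python
def is_data_race(a, b, strictly_ordered):
    """Flag a pair of memory accesses to the same address as a race iff:
    1) they come from two different threads and at least one writes,
    2) no happens-before edge orders them, and
    3) their locksets share no common lock."""
    if a["thread"] == b["thread"]:
        return False                      # condition 1: two threads
    if not (a["write"] or b["write"]):
        return False                      # condition 1: at least one write
    if strictly_ordered(a, b):
        return False                      # condition 2: happens-before
    return not (a["locks"] & b["locks"])  # condition 3: lockset analysis
```

Here `strictly_ordered` stands in for the happens-before oracle that the rest of the paper builds.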

KRACE adopts the software rejuvenation strategy to avoid the aging OS problem, i.e., every execution is a fresh run from a clean-slate kernel and an empty file system image. Doing so trades performance for trackability and debuggability but is worthwhile for data race detection. As shown in §VII-B, the exploration gradually catches up with and bypasses conventional speed-oriented fuzzers (e.g., Syzkaller) upon saturation. KRACE also decouples data race checking from state exploration. Unlike prior works where the bug checker runs inline in each execution, in KRACE, the checker only kicks in when new coverage (either branch or alias) is reported. This prevents the expensive data race checking from slowing down the state exploration while still preserving the opportunity to test every new execution state found through fuzzing. The checking progress will eventually catch up as the coverage growth approaches saturation.

We evaluated KRACE by fuzzing two popular and heavily tested kernel file systems (ext4 and btrfs) on recent kernel versions, and we found 23 data races, nine of which are confirmed as potentially harmful races, and 11 are benign races (for performance or allowed by the POSIX specification).

Fig. 1: A data race found by KRACE. This figure shows the complete call stack, thread ordering information, and locking information when the data race happens and the inconsistency it may cause (1–4).

Summary. This paper makes the following contributions:

• Concept: The alias coverage metric and interleaved multi-threaded syscall sequence merging are novel concepts that make coverage-guided fuzzing more effective in highly concurrent programs, possibly as a first step toward fuzzing for a wide range of concurrency bugs.

• Implementation: KRACE’s data race checker encodes a comprehensive model of kernel synchronization mechanisms in the form of over 100 kernel patches (for code instrumentation), which are regularly updated as the kernel upgrades.

• Impact: KRACE has found 23 data races and will be continuously running to find new cases. We will open-source KRACE as well as the collection of syscall primitives for multi-threaded execution as quality seeds for future concurrent file system fuzzing research.

II. BACKGROUND AND RELATED WORK

The past three decades have witnessed several efforts to find data races using various techniques. In this section, we show a data race example, discuss the types of approaches that prior works have taken, and introduce coverage-guided fuzzing as a generic bug-finding technique.

Example. Intuitively, a data race is caused by two threads trying to perform unordered and unprotected memory operations on the same address. Figure 1 shows two data races found by KRACE that happen to make a complete scenario. The read of full races with both writes, as the read is not protected by the corresponding delayed_rsv->lock as is done on the writers’ side. According to btrfs developers, this results in ineffective management of the reserve space internally used by btrfs, in particular, delays in releasing the reserved space, or space releasing followed by reservation instead of migration from one reserve to another. Reflected in the call stack, if the execution takes the order 1 → 2 → 3 → 4, then block_rsv_release_bytes inadvertently releases bytes that will be used by the fsync. Such a case might eventually cause integer overflows in the reserve space but would probably require thousands of concurrent file operations to trigger.

A data race is a special type of race condition, and hunting data races in complex software involves two facets: 1) how to confirm that an execution is racy and 2) how to produce meaningful executions by exploring code paths and thread schedules.

Dynamic data race detection algorithms. Most of the initial works [28] found race conditions by relying on happens-before analysis [29]. However, one of the prime issues with this approach is that it leads to false negatives. To improve detection accuracy, Eraser [30] proposed the lockset analysis, in which users annotate the common lock/unlock methods and find atomicity violations. Later, several works [31, 32] proposed optimizations to either mitigate the overhead or minimize false positives. To further improve the effectiveness of dynamic data race detection, several works [33, 34] combined the idea of the happens-before relation with lockset analysis.
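Eraser-style lockset refinement can be sketched minimally as follows. This Python sketch only illustrates the idea described above; it is not Eraser's implementation, and the names are ours:

```python
def on_access(candidate, addr, held):
    """Eraser-style lockset refinement: the candidate lockset of an
    address is intersected with the locks held at each access; an
    empty intersection signals a potential atomicity violation."""
    if addr not in candidate:
        candidate[addr] = set(held)   # first access: all held locks
    else:
        candidate[addr] &= set(held)  # refine by intersection
    return len(candidate[addr]) == 0  # True => no consistent lock guards addr
```

If two accesses are protected by disjoint locks, the intersection empties out and the address is flagged.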

Unfortunately, most of these works target userspace programs using simple synchronization primitives (e.g., those provided by pthread or the Java runtime), which only represent a small subset of the synchronization mechanisms available in the Linux kernel. KRACE follows the same trend in combining happens-before and lockset analysis, but unlike prior works, KRACE provides a comprehensive framework that includes not only simple locking methods, such as pessimistic locks (e.g., mutex, readers-writer lock, spinlock, etc.), but also optimistic locking protocols, such as sequence locks, and other forms of synchronization mechanisms that imply more than just mutual exclusion, e.g., RCU [35] and other publisher-subscriber models.

Both lockset and happens-before analysis require code annotations and suffer from incompleteness, i.e., a missing lock model leads to false positives. Several works overcome this issue with timing-based detection, i.e., a thread is delayed for a certain duration at some memory accesses while the system observes whether there are conflicting accesses to the same memory during the delay [13, 36, 37]. Moreover, most of these works resort to sampling [13, 34, 37–39], as an optimization over completeness, to further minimize the runtime overhead caused by tracking memory accesses or code paths.

However, complete timing-based detection relies on precise control of thread execution speed and results in an enormous search space (both in where to delay and how long to delay), which again is not scalable at the kernel scope. As a result, in terms of race detection, KRACE resorts to a trial-and-error approach and fixes false positives introduced by ad-hoc mechanisms along with the development. Fortunately, due to the high coding standard and strict code review practice, ad-hoc synchronization is not common in kernel file systems.

Code/thread-schedule exploration. The effectiveness of a data race checker depends not only on the detection algorithm but also on how well the checker can explore execution states and cover as many code paths and thread interleavings as possible. For code path exploration, prior detectors mostly rely on manually written test suites [7, 36, 37] that do not capture complicated cases. As shown in Figure 1, triggering the data race would require a user thread to mkdir on the same block the background uuid_rescan thread is working on, which (almost) in no way can be specified in manually written test cases. An alternative is to enumerate code paths statically [40–44], but this is not scalable. Recent OS fuzzers adopt specification-based syscall synthesis [5, 6, 21, 27]. However, these fuzzers mostly focus on generating sequential programs instead of multi-threaded programs and are not intended to explore interleavings in syscall execution. KRACE adopts a similar synthesis approach, but instead of focusing on single-threaded sequences, KRACE evolves multi-threaded programs.

In the case of thread-schedule exploration, prior approaches fall into three categories, in decreasing order of scalability but increasing order of completeness: 1) stressing the random scheduler with multiple trials [14]; 2) injecting delays at runtime [13, 36, 37]; and 3) enumerating every possible thread interleaving [7, 24]. KRACE uses delay injection, a trade-off among scalability, practicality, and completeness.

Data race detection in kernels. KRACE shares its design ideology with four prominent works [7, 24, 45, 46]. DataCollider [45] is the first work that tackles this problem by using randomized sampling of a small number of memory accesses in conjunction with code breakpoint and data breakpoint facilities for efficient sampling. DataCollider is simple enough to detect several bugs in the Windows kernel modules. A similar strategy is used by Syzkaller [21] with its Kernel Concurrency Sanitizer [46] (KCSan) module. KCSan is a dynamic data race detector that uses compiler instrumentation, i.e., software watchpoints instead of hardware watchpoints, to detect bugs on non-atomic accesses that violate the Linux kernel memory model [47] using happens-before analysis.

SKI [7] focuses on comprehensive enumeration of thread schedules with the PCT algorithm [48] and hardware breakpoints. However, SKI permutes user threads only to find data races in the syscall handlers and thus forgoes the opportunities to find data races in kernel background threads. Furthermore, even with user threads only, the number of permutations can be too large to test thoroughly. In addition, the test suites used by SKI may be too small to explore an OS for bugs.

Razzer [24] combines static analysis with fuzzing for data race detection. In particular, Razzer first runs a points-to analysis across the whole kernel code base to identify potentially aliasing instruction pairs, i.e., memory accesses that may point to the same memory location. After that, for each alias pair identified, Razzer tries to generate syscalls that reach the racy instructions at runtime. It does so with fuzzy syscall generation [21, 27]: sequential syscall traces are generated first, and once the alias relation is confirmed in the sequential execution, the trace is then parallelized into multi-threaded traces for actual data race detection.

Razzer presents an elegant pipeline for data race fuzzing, but it can be further improved: 1) running points-to analysis [49] on kernel file systems produces millions of may-alias pairs, which is almost impossible to enumerate one by one; 2) even for one alias pair, how to generate syscalls that may reach the racy instructions is less clear. KRACE aims to improve both aspects with the novel notion of alias coverage. Instead of pre-calculating the search space with points-to analysis, KRACE relies on coverage-guided fuzzing to expand the search in the concurrency dimension gradually. By analogy, this is similar to not enumerating every path in the control-flow graph but instead using an edge-coverage bitmap to capture the search progress. Doing so also eliminates the concern of how to generate syscalls that lead execution to specific locations.

Fuzzing in general. Fuzzing has proven to be a practical approach to find bugs in today’s software stack, both in userspace [16, 20, 50–54] and in kernel space [5, 6, 21, 22, 27, 55]. Unfortunately, existing works cannot be trivially adopted for data race fuzzing. One reason is that the main focus of fuzzing has been on finding memory corruptions or triggering assertions. Although Hydra [6] extends the scope beyond memory errors into semantic bugs in file systems, it does not provide any insight into finding data races.

Moreover, since modern coverage-guided fuzzing originates and prospers from testing single-threaded programs such as binutils, encoders/decoders, and the CGC and LAVA-M fuzzing benchmarks, recent fuzzing efforts have focused on optimizing fuzzers’ performance on single-threaded executions too, such as approximating sequential execution with neural networks [51]. Not surprisingly, when the fuzzing practice is carried down to the OS level [21, 22, 27, 55–59], the same sequential view of program execution is inherited.

Although generating structured inputs has been a challenge for kernel fuzzing, many improvements have been proposed. For example, MoonShine [25] captures dependencies between syscalls and DIFUZE [26] generates interface-aware inputs. However, lacking a coverage metric and a seed evolution algorithm to handle state exploration in the concurrency dimension, existing OS fuzzers miss the opportunities to find the broad spectrum of concurrency bugs, including data races. The motivation behind KRACE is to fill this gap and to bring coverage-guided fuzzing to the concurrency dimension.

Static and symbolic analysis on kernels. Although KRACE is a dynamic analysis system, we are also aware of works that aim to find concurrency bugs with static analysis [40–44]. Most of these approaches rely on static lockset analysis and, hence, suffer from the high false-positive rate caused by missing the happens-before relation in the execution as well as the inherent limitations of points-to analysis. For instance, RacerX [41] suffers from 50% false positives on the Linux kernel.

Beyond concurrency bugs, static analysis has proven effective in finding many security issues in kernel drivers. For example, SymDrive [60] uses symbolic execution to emulate devices and verify the properties of kernel drivers; DrChecker [61] is capable of finding eight types of security issues by relaxing the completely sound analysis on unbounded loops with mostly sound versions. However, a major challenge in applying these works to data race detection in file systems is their lack of statefulness, i.e., although extremely effective in finding bugs

[Figure 2 (diagram): two threads T1 and T2 executing SYS_symlink, SYS_readlink, and SYS_truncate on the same inode; T1 runs i1: A=1 and i2: B=A+1, then writes G[B]=false; T2 runs i3: A=0 and i4: C=A*2, then reads cond=G[C]; branch edges e1–e3 include if(flag & DIR) B=1 and if(size >= res).]

Fig. 2: A data race found by KRACE when symlink, readlink, and truncate on the same inode run in parallel (simplified for illustration). The race is on the indexed accesses to a global array G and occurs only when B==C. A is lock-protected. This is one example showing that branch coverage is not sufficient in approximating execution states of highly concurrent programs. It is not difficult to cover all branches in this case with existing fuzzers, but to trigger the data race, merely covering branches e1–e3 is not enough. The thread interleavings between the four instructions i1–i4 are equally important. The valid interleavings that may trigger the data race are shown in Figure 3.

[Figure 3 (diagram): the six possible interleavings ①–⑥ of i1: A=1 and i2: B=A+1 (T1) against i3: A=0 and i4: C=A*2 (T2), annotated with the recorded alias pair (<nil>, i3→i2, or i1→i4) and the resulting values of B and C.]

Fig. 3: Possible thread interleavings among the four instructions shown in Figure 2. Out of the 6 interleavings, only 3 interleavings (①/④, ②/③, ⑤/⑥) are effective depending on A’s value when B and C read it. Each effective interleaving results in different alias coverage. Only ⑤/⑥ may trigger the data race.

within one syscall execution, they miss bugs that occur because of the interaction between multiple syscalls, which happen to be the majority of cases in file system operations.

III. A COVERAGE METRIC FOR CONCURRENT PROGRAMS

In this section, we show why branch coverage, the golden metric for fuzzing, might be insufficient to represent the exploration in the concurrency dimension, and why alias coverage, our new proposal, fits this purpose.

A. Branch coverage for the sequential dimension

Branch coverage originates from the program control-flow graph (CFG), which is inherently a sequential view of program execution. As shown in Figure 2, in CFGs, execution flows through basic blocks, and user-controllable inputs, e.g., size in SYS_truncate, determine the set of edges that join the basic blocks. For a branch coverage-guided fuzzer: given an input (e.g., a list of syscalls), it tracks the set of edges that are hit at runtime and leverages this feedback to decide whether this input is “useful” and should be kept for more mutations. Intuitively, the fuzzer expects to probe more branches by mutating the seed, and not surprisingly, once the branch coverage growth stalls, the fuzzer will shift focus to other seeds.
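Edge tracking of this kind is commonly implemented AFL-style, by hashing each (previous block, current block) transition into a fixed-size bitmap. The sketch below is a generic illustration of that scheme, not KRACE's instrumentation; the names are ours:

```python
def record_edge(bitmap, prev_block, cur_block):
    """AFL-style branch (edge) coverage: hash the basic-block transition
    into a fixed-size bitmap slot and bump its saturating hit counter."""
    slot = (cur_block ^ (prev_block >> 1)) % len(bitmap)
    bitmap[slot] = min(bitmap[slot] + 1, 255)
    return slot
```

The right shift of the previous block id keeps the A→B and B→A edges in distinct slots even though XOR is symmetric.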

In the case shown in Figure 2, exhausting all branches sequentially will only yield the states B==1, B==2, and C==0. After that, these execution paths (represented by the seeds covering them) will be de-prioritized and considered non-interesting by the fuzzer. However, this is not the end of the story. To trigger the data race when B==C==2, the execution of the four critical instructions (i1–i4) has to be interleaved in a special way, as shown in Figure 3. Unfortunately, all six interleavings yield the same branch coverage, and the fuzzer is likely to give up on the seed after hitting a few of them.

Further note that this is an extremely simplified example that involves only six possible interleavings between two threads. In actual executions, the concurrency dimension can be huge, as the instructions executed by each thread usually number in the thousands or even millions, while there will be tens of threads running at the same time. As a result, when fuzzing highly concurrent programs, we need to pay attention not only to the code paths explored, but also to the meaningful thread interleavings explored that yield the same branch coverage. In other words, if the fuzzer believes that there could be unexplored thread interleavings in a seed, the seed should not be de-prioritized.

B. Alias coverage for the concurrency dimension

Intuition. At first thought, recording the exploration of thread interleavings can seem futile. A realistic kernel file system at its peak time may use over 60 internal threads, where each thread may execute over 100,000 instructions. The total possible number of thread interleavings is on the order of 60^100,000, an enormous search space that no bitmap can ever approximate.

However, it is worth noting that not all interleaved executions are useful. In fact, only interleavings of memory-accessing instructions on the same memory address matter. As shown in Figure 2, interleaving instructions apart from i1–i4 has no effect on the final results of B and C, as well as the manifestation of the data race. This is true in the actual code, where hundreds and thousands of instructions sit between i1, i3 and i2, i4.

In other words, based on the crucial observation that dataraces, and even in the broader term, concurrency bugs, typicallyinvolve unexpected interactions among a few instructionsexecuted by a small number of threads [7, 62, 63], if KRACE isable to track how many interactions among these few memory-accessing instructions have been explored, it is sufficient torepresent thread interleaving coverage and to find data races.This is precisely what gets tracked by alias coverage.A formal definition. First, suppose all memory-accessinginstructions in a program are uniquely labeled: i1, i2, ...., iN.

At runtime, each memory address M keeps track of its last define operation, i.e., the last instruction that writes to it as well as the context (thread) that issues the write, represented by A ← <ix, tx>. Now, in the case in which a new access to M is observed, carried by instruction iy from context ty: if iy is a write instruction, update A ← <iy, ty> to reflect the fact that A is redefined. Otherwise:

• if tx == ty, i.e., a same-context memory access, do nothing,
• or else, record the directed pair ix→iy in the alias coverage.
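As a sketch, the tracking rule above amounts to a small state machine per memory address. The Python below is illustrative only; the event-tuple format and function name are our assumptions, standing in for KRACE's instrumented memory accesses.

```python
# Sketch of the alias coverage tracking rule described above.
# Each memory address remembers its last define <instruction, thread>;
# a read from a different thread records a directed alias pair.

def track_alias_coverage(events):
    """events: iterable of (op, addr, inst, thread), with op in {'R', 'W'}."""
    last_define = {}    # addr -> (inst, thread) of the last write
    alias_cov = set()   # directed pairs (i_define -> i_use)
    for op, addr, inst, thread in events:
        if op == 'W':
            last_define[addr] = (inst, thread)  # A <- <iy, ty>: redefined
        elif addr in last_define:
            d_inst, d_thread = last_define[addr]
            if d_thread != thread:              # inter-context define-then-use
                alias_cov.add((d_inst, inst))
    return alias_cov
```

For instance, a write by thread t1 at i1 followed by a read of the same address by t2 at i2 yields the single pair (i1, i2), while a read back from t1 itself adds nothing.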

Figure 3 is a working example of this alias coverage tracking rule. In cases 1 and 4, there is no inter-context define-then-use of memory address A, and hence, the alias coverage map is empty. On the other hand, in cases 2 and 3, the calculation of B in T1 relies on A defined in T2, hence the pair i3→i2. The same rule applies to cases 5 and 6.

Feedback mechanism. Essentially, alias coverage provides a signal to the fuzzer on whether it should expect more useful thread interleavings out of the current test case, i.e., a multi-threaded syscall sequence. If the alias coverage keeps growing, the fuzzer should come up with more delay schedules to inject at the memory-accessing instructions (detailed in §IV-B) in the hope of probing unseen interleavings. Otherwise, if the coverage growth stalls, it is a sign that the concurrency dimension of the current test case is toward saturation, and the most economical choice is to switch to other seeds for further exploration.

Coverage sensitivity fine-tuning. Finding one-suits-all coverage criteria has been a never-ending quest in software engineering [64]. Even branch coverage has several variations, such as N-gram branch coverage, context-sensitive coverage [52], etc., which are well documented and compared in a recent survey [65]. However, despite the fact that branch coverage is always subsumed by whole-program path coverage, branch coverage is still preferred over path coverage, as the latter is overly sensitive to input changes and thus requires a much larger bitmap to hold and compare. On the other hand, branch coverage strikes a balance among effectiveness, execution speed, and bitmap accounting overhead.

Similarly, alias coverage strives to find such a balance point in the concurrency dimension. In our experiments with kernel file system fuzzing, KRACE observed 63,590 unique pairs of alias instructions (directed access). Based on the data, for an empirical estimation, a bitmap of size 128KB should be sufficient to avoid heavy collisions, which is close to AFL's branch coverage bitmap size (64KB). In addition, if more sensitivity is needed for alias coverage, KRACE can easily be adapted from 1st-order alias pairs (alias coverage) to 2nd-order alias pairs, Nth-order alias pairs, and up to total interleaving coverage. We leave this for future exploration.
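To make the bitmap accounting concrete, a directed alias pair could be folded into a fixed-size bitmap in the same spirit as AFL's branch bitmap. The hashing scheme below is our assumption for illustration, not KRACE's actual implementation.

```python
BITMAP_SIZE = 128 * 1024  # 128KB, matching the empirical estimate above

def record_alias_pair(bitmap, i_define, i_use):
    """Fold the directed pair into the bitmap; return True on new coverage."""
    slot = hash((i_define, i_use)) % BITMAP_SIZE
    is_new = bitmap[slot] == 0
    bitmap[slot] = 1
    return is_new
```

A seed is then considered "interesting" whenever any recorded pair flips a previously zero slot.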

IV. INPUT GENERATION FOR CONCURRENCY FUZZING

In this section, we present how to synthesize and merge multi-threaded syscall sequences for file system fuzzing, as well as how to exploit a hidden input domain—thread delay schedules—to accelerate thread interleaving probing.


Fig. 4: Illustration of the four basic syscall sequence evolution strategies supported in KRACE: mutation, addition, deletion, and shuffling. For KRACE, each seed contains multi-threaded syscall sequences and each thread trace is highlighted in a different shade of grayscale.

A. Multi-threaded syscall sequences

Specification-based synthesization. The goal of syscall generation and mutation is to generate diverse and complex file operations that are otherwise difficult for human developers to contemplate. Given that syscalls are highly structured data, it is almost fruitless to mutate their arguments blindly. As a result, we use a specification to guide the generation and mutation of syscall arguments. A feature worth highlighting in KRACE's specification is the encoding of inter-dependencies among syscalls, especially path components and file descriptors (fd), which are most relevant to file system fuzzing. To illustrate, as shown in Figure 5, the open syscall in seed 1 reuses the same path component as the mkdir syscall, while the write syscall in seed 2 relies on the return value of creat.

Seed format. The seed input for KRACE is a multi-threaded syscall sequence. Internally, it is represented by a single list of syscalls (a.k.a. the main list) and a configurable number of sub-lists (3 in KRACE) in which each sub-list contains a disjoint sequence of syscalls in the main list. Each sub-list represents what will be executed by each thread at runtime. To illustrate, as shown in Figure 5, seed 1 has three threads, where each thread will be executing mkdir-close, mknod-open-close, and dup2-symlink, respectively, marked in different grayscale.

Evolution strategies. KRACE uses four strategies to evolve a seed for both branch and alias coverage, as shown in Figure 4.

• Mutation: a randomly picked argument in one syscall will be modified according to the specification. If a path component is mutated, the change is cascaded to all its dependencies.

• Addition: a new syscall can be added to any part of the trace in any thread, but must be placed after its origins.

• Deletion: a random syscall is kicked out of the main list and its sub-list. In case a file descriptor is deleted, its dependencies are forced to re-select another valid file.

• Shuffling: syscalls in the main list are redistributed to sub-lists, but their orders in the main list are preserved.
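As an illustration, the shuffling strategy can be sketched as a random, order-preserving redistribution of the main list over the sub-lists. The three-thread default follows the seed format described above; the function itself is a simplification that ignores dependency bookkeeping.

```python
import random

def shuffle_seed(main_list, num_threads=3, rng=random):
    """Redistribute syscalls across sub-lists (one per thread) while
    preserving their relative order from the main list."""
    sub_lists = [[] for _ in range(num_threads)]
    for syscall in main_list:
        # Each syscall lands in a random thread; appending in main-list
        # order keeps every sub-list a subsequence of the main list.
        sub_lists[rng.randrange(num_threads)].append(syscall)
    return sub_lists
```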

Merging multi-threaded seeds. The power of fuzzing lies not only in evolving a single seed but also in joining two seeds to produce more interesting test cases. To enable seed merging in KRACE, a naive solution might be simply to concatenate two traces. However, this is not the most economical use of seeds, as it forgoes the opportunities to find new coverage by further interleaving these high-quality executions.

Fig. 5: Semantic-preserving combination of two seeds. For KRACE, each seed contains multi-threaded syscall sequences and each thread trace is highlighted in a different shade of grayscale.

KRACE adopts a more advanced merging scheme: upon merging, the main lists of the two seeds are interweavingly joined, i.e., the relative orders of syscalls are preserved in the resulting main list as well as in the sub-lists. As a result, the syscall inter-dependencies are preserved too. As shown in Figure 5, all the dependencies on paths and fds are properly preserved after merging (highlighted in corresponding colors).

Primitive collection. Successful syscalls are valuable assets out of the file system fuzzing practice, not only because they lead to significantly broader coverage than failed syscalls, but also because they can be difficult, and sometimes even fortunate, to generate due to the dependencies among them. This is especially true for long traces of closely related syscalls. As a result, upon discovering a new seed, KRACE first prunes it to retain only successful syscalls and further splits these syscalls into non-disjoint primitives where each primitive is self-contained, i.e., for any syscall, all its path and fd dependencies (also syscalls) are captured in the same primitive.
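The interweaving merge scheme described above behaves like a random order-preserving interleaving of the two main lists. A minimal sketch, ignoring the sub-list bookkeeping and dependency renaming (function name is ours):

```python
import random

def merge_main_lists(main_a, main_b, rng=random):
    """Interweave two main lists so that the relative order of syscalls
    within each original seed is preserved in the merged result."""
    merged, ia, ib = [], 0, 0
    while ia < len(main_a) or ib < len(main_b):
        # Pick the next syscall from either seed, never reordering within one.
        if ib == len(main_b) or (ia < len(main_a) and rng.random() < 0.5):
            merged.append(main_a[ia]); ia += 1
        else:
            merged.append(main_b[ib]); ib += 1
    return merged
```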

Over the course of fuzzing, KRACE has accumulated a pool of around 10,000 primitives covering 68 file-system-related syscalls for which KRACE has a specification. In each primitive, file operations span across 3 threads, with each thread containing 1-10 syscalls, and most importantly, all syscalls succeed. We will open-source this collection in the hope that these primitives may serve as quality seeds for future concurrent file system fuzzing.

B. Thread scheduling control (weak form)

Thread scheduling is a hidden input domain for concurrent programs. Unfortunately, there is no way to control kernel scheduling by merely mutating syscall traces. Hooking the scheduling implementation (or using a hypervisor) and systematically permuting the schedules might be possible for small-scale programs [63] or for a few user threads in the kernel [7, 24]. But these algorithms are far from being scalable enough to cover all kernel threads. For a taste of the scalability requirement, Figure 14 shows the level of concurrency introduced by the btrfs module alone, not to mention other background threads forked by the block layer, loop device, timers, and RCU.

Fig. 6: The delay injection scheme in KRACE. In this example, white and black circles represent the memory access points before and after delay injection. Injecting delays uncovers new interleavings in this case, as the read and write order to the memory address x is reversed.

Runtime delay injection. KRACE resorts to delay injection to achieve a weak (and indirect) control of kernel scheduling, based on the observation that only shared memory accesses matter in thread interleavings. KRACE's delay injection scheme is extremely simple, as shown in Figure 6. Before launching the kernel, KRACE generates a ring buffer of random numbers and maps it to the kernel address space. At every memory access point, the instrumented code fetches a random number from the ring buffer, say T, and delays for T memory accesses observed by KRACE system-wide (i.e., in other threads).

A ring buffer is used to hold the random numbers, as KRACE cannot pre-determine how many injection points are needed for each execution, not to mention that such a number may be extremely large. Injecting delays at memory access points is the finest granularity for delay injection. Although this works well in file system fuzzing, it might nevertheless be too fine-grained and introduce too much overhead. The injection points can instead be at the granularity of basic blocks or functions, or even customized locations such as locking operations.
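The ring-buffer side of this scheme can be sketched as follows; the class name and sizes are hypothetical, and the actual KRACE buffer lives in kernel-mapped memory rather than a Python list.

```python
import random

class DelayRing:
    """Pre-generated ring buffer of delay values. KRACE maps such a buffer
    into the kernel before boot; instrumented memory accesses consume it
    round-robin, so the number of injection points need not be known."""

    def __init__(self, size, max_delay, rng=random):
        self.values = [rng.randrange(max_delay + 1) for _ in range(size)]
        self.cursor = 0

    def next_delay(self):
        value = self.values[self.cursor % len(self.values)]
        self.cursor += 1  # wraps around when injection points exceed size
        return value
```

Mutating the seed for this hidden input domain then amounts to regenerating the buffer contents for each repeated run.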

V. A DATA RACE CHECKER FOR KERNEL COMPLEXITY

Although the definition of data races is simple, finding them in a kernel execution trace can be difficult, primarily because of the variety of synchronization primitives available in the kernel code base as well as the ad-hoc mechanisms implemented by each individual file system. In this section, we enumerate the major categories of kernel synchronization primitives and describe how they can be modeled in KRACE.

A. Data race detection procedure

Overview. We say a pair of memory operations, <ix, iy>, is a data race candidate if, at runtime, we observed that:

• they access the same memory location,
• they are issued from different contexts tx and ty,
• at least one of them is a write operation.

Such information is trivial to obtain dynamically by simply hooking every memory access. The difficulty lies in confirming whether a data race candidate is a true race. For this, we need two more analysis steps to check that:

• no locks are commonly held by both contexts, tx and ty, at the time when memory operations ix and iy are issued from them, respectively. [lockset (§V-B)]

• no ordering between ix and iy can be inferred from the execution: i.e., there is no reason ix must happen-before iy or the other way around, regardless of how tx and ty are scheduled. [happens-before (§V-C)]

Conceptually, lockset analysis produces no false negatives, i.e., if there is a data race in the execution trace, it is guaranteed to be flagged by the lockset analysis. But lockset analysis is prone to false positives, as it ignores the ordering information. Happens-before analysis helps in filtering these false positives.

Kernel complexity. Although conceptually simple, lockset analysis requires a complete model of all locking mechanisms available in the kernel, and similarly, happens-before analysis requires all thread ordering primitives to be annotated. Otherwise, false positives will arise. However, after nearly 30 years of development, the Linux kernel has accumulated a rich set of synchronization mechanisms. KRACE takes a best-effort approach in modeling all major synchronization primitives as well as ad-hoc ones if we encounter them in our experiments. Due to space constraints, we present some representative ad-hoc schemes modeled by KRACE in appendix §C.

Besides the variety of synchronization events, the number of ordering points in the kernel execution is enormous. To get a taste of the complexity in real-world executions, Figure 18 shows a snippet of the ordering relations (e.g., task queuing, waiting for conditions, etc.) across all user and kernel threads.

B. Lockset analysis

Most kernel locking primitives differentiate between reader and writer roles. The major difference is that a reader-lock can be acquired by multiple threads at the same time, as long as its corresponding writer-lock is not held, while a writer-lock can be held by at most one thread. KRACE follows this distinction and tracks the acquisitions and releases of both reader- and writer-locks for each thread at runtime. Formally, such information is stored in the form of a lockset: denoted by LS^R<t,i> for the reader-side lockset of thread t at instruction i, as well as LS^W<t,i> for the writer-side lockset. Both locksets are cached and attached to a memory cell whenever a memory access on that thread is observed, as shown in Figure 7.

The lockset analysis is then simple: for each data race candidate <tx, ix> and <ty, iy>, if any of the following conditions holds, this candidate cannot be a true data race.

LS^R<tx,ix> ∩ LS^W<ty,iy> ≠ ∅   (1)
LS^W<tx,ix> ∩ LS^R<ty,iy> ≠ ∅   (2)
LS^W<tx,ix> ∩ LS^W<ty,iy> ≠ ∅   (3)

On the other hand, if none of the conditions holds for a data race candidate, then the executions of tx and ty can be interleaved without restrictions around those memory accesses, as shown in the reading and writing of addresses 0x34 and 0x46 in Figure 7, hence leading to data races.
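Conditions (1)-(3) translate directly into set intersections. A minimal sketch, with the cached locksets represented as plain Python sets (the function name is ours):

```python
def survives_lockset_filter(lsr_x, lsw_x, lsr_y, lsw_y):
    """Return True iff the candidate <tx,ix>, <ty,iy> survives the lockset
    filter, i.e., none of conditions (1)-(3) holds for the cached locksets."""
    if lsr_x & lsw_y:    # (1) reader lock of x held as writer lock by y
        return False
    if lsw_x & lsr_y:    # (2) the symmetric case
        return False
    if lsw_x & lsw_y:    # (3) a writer lock held by both contexts
        return False
    return True          # no common protection: still a race candidate
```

Candidates that survive this filter are then passed on to the happens-before analysis.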


Fig. 7: Illustration of lockset analysis in KRACE. This example shows almost all locking mechanisms commonly used in the kernel, including:
1) spin locks and mutexes—[un]lock(RW, -),
2) reader/writer locks—[un]lock(R/W, *),
3) RCU locks—specially denoted with the symbol ∆, and
4) sequence locks—begin/end/retry(R/W, *).
The left column shows the content of the reader lockset at the time of a memory operation or changes to the lockset caused by other operations (/ denotes no change). The right column shows the writer counterpart. The two data races are highlighted in red and blue squares.

Pessimistic locking. Most of the kernel locking primitives are pessimistic, i.e., whoever tries to acquire the lock will be blocked from further execution until the lock holder releases it. As a result, their APIs always come in lock and unlock pairs that mark the start and end of a critical section. Examples of such locks include the spin lock, reader/writer spin lock, mutex, reader/writer semaphore, and bit locks.

A slightly trickier primitive is the RCU lock, in which only reader-side critical sections are marked with rcu_read_[un]lock, and the writer-side critical section is not marked by any lock/unlock APIs; instead, it is guaranteed by the RCU grace period waiting. More specifically, when __rcu_reclaim schedules an RCU callback into execution, it is guaranteed that there is no RCU reader-side critical section running. Hence, in KRACE, we hook the RCU callback dispatcher and mark the RCU writer lock and unlock before and after the callback execution.

Optimistic locking. The Linux kernel is gradually shifting toward lock-free designs, and the most prominent evidence in recent years is the wide adoption of sequence locks [66]. A sequence lock is, in fact, more similar to a transaction than to a conventional lock. The reader is allowed to run optimistically into the critical section, hoping that the data it reads will not be modified during the transaction (hence the optimism), and aborts and retries if the data does get modified.

While boosting performance, a challenge brought by the sequence lock is that there is no clear end of the reader-side critical section. As shown in Figure 7, after a transaction begins, retry can be called multiple times, perhaps once for mid-of-progress checking and again for before-commit checking; in theory, each retry could be an unlock-equivalent that marks the end of the critical section. If the lockset analysis is performed online (i.e., during execution), the lockset states should fork to capture that the retry may or may not be an unlock. For KRACE, since it uses offline lockset analysis, it may simply read ahead in the execution trace to learn whether there are more retries and behave correspondingly.

Fig. 8: Illustration of happens-before reasoning in KRACE. This example shows a very typical execution pattern in kernel file systems where the user thread schedules two asynchronous works on the work queue and checks for their results later in the execution. In particular, one of the asynchronous works is a delayed work that also goes through the timer thread. Fork-style, join-style, and publisher-subscriber relations are represented by dashed, dotted, and solid arrows, respectively. The only data race is highlighted in the red square.

C. Happens-before analysis

Intuitively, happens-before analysis tries to find the causal relations between specific execution points in the threads. For example, a kernel thread only gets into running if another thread forks it; as a result, there is no way to schedule the spawned thread before the parent thread creates it. This implies that whatever happens before the thread creation point cannot be data racing against anything in the spawned thread. In the example shown in Figure 8, there is no way for i2 to be racing against i6, as without queuing the work on the work queue (c2→c8), i6 won't even be executed in the first place. Similarly, scheduling a thread that is waiting for a condition to be true will not make it run past the barrier. Therefore, it is not possible for i4 to race against i8, as only when the wake_up call is reached (c12→c5) can i4 be executed.

This intuition shows how a happens-before relation can be formally checked: by hooking kernel synchronization APIs, e.g., when a callback function is queued and when it is executed, we can find the synchronization points (nodes) between threads as well as the causality events (represented by edges), as shown in Figure 8. Since the nodes in one thread are already inherently connected according to program order, the whole execution becomes a directed acyclic graph. Consequently, determining whether two points, <tx, ix> and <ty, iy>, may race is translated into a graph reachability problem. If a path exists from <tx, ix> to <ty, iy>, it means that point X happens-before Y and thus cannot be racing. The same applies if we can establish that Y happens-before X. On the other hand, if no such path can be found, a happens-before relation cannot be established and the pair should be flagged, as in the case of i3 and i8. All other accesses are reachable in the graph, and hence, they cannot be racing even without lock protections.

Fig. 9: An overview of KRACE's architecture and major components. Components in italic fonts are either new proposals from KRACE or existing techniques customized to meet KRACE's purpose.
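The reachability check described above is a plain graph traversal over the causality DAG. In the sketch below, nodes and edges are hypothetical labels standing in for the synchronization points and causality events collected at runtime.

```python
from collections import defaultdict

def happens_before(edges, x, y):
    """True if a path exists from node x to node y in the causality DAG."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    stack, seen = [x], set()
    while stack:                 # depth-first search from x
        node = stack.pop()
        if node == y:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return False

def should_flag(edges, x, y):
    """Flag the candidate pair only if neither ordering can be established."""
    return not happens_before(edges, x, y) and not happens_before(edges, y, x)
```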

The happens-before relations commonly found in kernel file systems can be broadly categorized into three types:

Fork-style relations include RCU callbacks registered with call_rcu, work queues and kthread-simulated work queues, direct kthread forking, timers, software interrupts (softirq), as well as inter-processor interrupts (IPI). Hooking their kernel APIs is as easy as finding the corresponding functions that register the callback and dispatch the callback.

Join-style relations include the completion API and a wide variety of wait_* primitives such as wait_event, wait_bit, and wait_page. Hooking their kernel APIs requires locating their corresponding wake_up calls besides the wait calls.

The publisher-subscriber model mainly refers to the RCU pointer assignment and dereference procedure [35]. For example, if one user thread retrieves a file descriptor (fd) from the fdtable, which is RCU-guarded, the new fd must have been published first, hence the causality ordering. The object allocate-and-use pattern also falls into this realm: the publisher thread allocates memory space for an object, initializes its fields, and inserts the pointer into a global or heap-based data structure (usually a list or hashtable), while the subscriber thread later dereferences the pointer and uses the object. As a result, KRACE also tracks the memory allocation APIs and monitors when the allocated pointer is first stored into a public memory slot and when it is used again, to establish the ordering automatically.

VI. PUTTING EVERYTHING TOGETHER

A. Architecture

Figure 9 shows the overall architecture of KRACE. The primary purpose of the compile-time preparation is to embed a KRACE runtime into the kernel such that alias coverage (as well as branch coverage) can be collected dynamically. The runtime is also responsible for collecting information for data race checking, leveraging the kernel API hooking. On the other hand, the fuzzing loop is still conventional, covering seed selection, mutation, and execution, with the exception that in KRACE, a test case is considered "interesting" as long as new progress is found in either of the coverage bitmaps. In addition, all components are updated to handle the new seed format for concurrency fuzzing: multi-threaded syscall sequences.

Code instrumentation. Since the focus of KRACE is file systems, we only instrument memory access instructions in the target file system module and its related components, such as the virtual file system (VFS) layer or the journaling module, e.g., jbd2 for ext4. On the other hand, API annotations are performed on the main kernel code base and have an effect even when the execution goes out of the functions in our target file system: the locks acquired and released, as well as the ordering primitives (e.g., queuing a timer), will be faithfully recorded. In this way, KRACE does not suffer from false positives in cases where, e.g., the block layer calls into a callback in the file system layer but the prior locking contexts are unknown.

Fuzzing loop. Figure 15 shows the fuzzing evolution algorithm in KRACE. Fuzzing starts with producing a new program by merging two existing seeds. The seed selection criterion used in KRACE so far is simply a frequency count, i.e., less used seeds receive priority. We expect more advanced seed selection algorithms to be developed later. After merging, each program goes through several extension loops in which the program structure is altered with syscalls added and deleted. Each structurally changed program will further go through several modification loops in which the syscall arguments and their distribution among the threads are mutated. Finally, each modified program runs repeatedly several times, each with a different delay schedule, to probe for alias coverage.

Several implicit parameters can be used to fine-tune the process, e.g., how many times to loop at each stage (see §B for details). In general, we give preference to alias coverage exploration over growing the multi-threaded syscall sequences, as we prefer to explore the concurrency domain as much as possible while the number of syscalls executed is small, making it easier for kernel developers to debug a reported data race.

Offline checking. Data race checking is conducted offline, i.e., only when new coverage, either branch or alias, is found. The reason is that data race checking is slow (several minutes) and significantly hinders the fast fuzzing experience (which only requires a few seconds to finish one execution). As a result, we allow the fuzzers to quickly expand coverage and only dump execution logs without checking them. A few background threads check the execution logs for data races whenever they have free capacity. The checking progress has difficulty keeping up with seed generation in the beginning but will gradually catch up, especially when the coverage is toward saturation.

B. Benign vs harmful data races

An unexpected problem we encountered when reporting the data races found by KRACE is differentiating benign and harmful data races. Despite the common belief that being data-race free is one of the coding practices in the kernel, benign data races are not totally uncommon. One major category is statistics accounting, such as __part_stat_add in the block layer. These statistics are meant for information and hints only and do not provide any accuracy guarantees. Another example is the reading and writing of different bits in the same 2-, 4-, or 8-byte variable, especially bit-flags such as inode->i_flag or flags in file system control structures like fs_info.

Based on our experience, checking whether a data race is benign or harmful is often time consuming, as it requires careful analysis of the code and documentation to infer developers' intentions. In the worst cases, it may require consulting the file system developers, who may not even agree among themselves. One possibility to confirm a harmful data race is to keep the system running until the data race causes visible effects such as violated assertions or memory errors. However, this is not always feasible, as shown in the case in Figure 1: it might need thousands of file operations running in parallel to trigger an integer overflow, and by then, debugging such an execution trace will be another problem.

To avoid reporting benign data races to developers, KRACE uses several simple heuristics to filter the reports. In particular, a data race is most likely benign if:

• the race involves variables that have stat in their names or occurs within functions for statistics accounting;

• the race involves reading and writing to different bits of the same variable;

• the race involves kernel functions that can tolerate being racy, e.g., list_empty_careful.
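These heuristics could be sketched as a simple predicate over a race report; the report fields below are hypothetical and stand in for whatever metadata the checker attaches to each finding.

```python
def looks_benign(report):
    """report: hypothetical dict with 'variable' (racing variable name),
    'functions' (function names involved in the race), and 'same_bits'
    (whether the two accesses touch the same bits of the variable)."""
    if 'stat' in report['variable'] or \
            any('stat' in fn for fn in report['functions']):
        return True                      # statistics accounting
    if not report['same_bits']:
        return True                      # different bits of the same word
    if 'list_empty_careful' in report['functions']:
        return True                      # functions that tolerate races
    return False
```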

Unfortunately, these heuristics typically offer limited help for the more complicated cases.

C. The aging OS problem

When fuzzing file systems, most generic OS fuzzers do not reload a fresh copy of the kernel instance or file system image [21–23] for a new fuzzing session. Instead, they directly issue the syscall sequence on the old kernel state. The intention is to remove the overhead of kernel booting, as a VM emulator might take seconds to load and boot the kernel, as is evident in our evaluations as well (§VII-B). However, this also means that any bugs found in this approach might come from the accumulated effects of hundreds or even thousands of prior runs, making them extremely difficult for kernel developers to debug and confirm, as is evident in the case where many bugs found by Syzkaller cannot be confirmed [67].

The aging OS problem is already difficult for fuzzing in the sequential domain, and bringing in the concurrency dimension further complicates the story. Moreover, for KRACE, the aging OS situation creates more problems, as the lengthy thread interleaving traces are not only difficult to debug but also render analysis impossible. Slicing the execution traces does not seem feasible either, as cutting the trace at the wrong points means losing the locking and happens-before context, ultimately leading to false alarms. As a result, KRACE is forced to use a clean-slate execution for every fuzzing run, i.e., a fresh kernel and a clean file system image.

The aging OS problem is also reported by Janus [5], which uses a library OS—LKL [68]—to enable quick reloading. But unfortunately, LKL does not support the symmetric multi-processing (SMP) architecture, which is the prerequisite for multi-threading (e.g., without SMP, all spin_locks become no-ops). As a result, LKL is mostly suitable for sequential fuzzing, not for concurrency fuzzing.

D. Discussion and limitations

Deterministic replay. Being able to replay an execution deterministically is extremely helpful for debugging and also opens the door for advanced data race triaging techniques such as controlled re-interleaving of thread executions. Unfortunately, we are sorry to report that even with a totally linearized trace of basic block enters/exits, memory accesses, lock acquisitions/releases, and kernel synchronization API calls, KRACE is unable to deterministically replay an execution end-to-end. Part of the reason is the missing instrumentation in other kernel components, including the kernel core (such as the task and I/O schedulers), memory management, device drivers (except the block device), and most of the library routines. We expect that deterministic replay may be possible if we instrument all kernel components, but at the expense of huge execution footprints (e.g., GB-level logs) as well as significant performance drops. We are unaware of a system that permits deterministic replay of over 60 kernel threads, but we are eager to integrate one if possible.

Debuggability. To partially compensate for not being able to replay a found data race deterministically, KRACE tries to generate a comprehensive report for each data race, including 1) the conflicting lines in source code, 2) the full call stack for each thread, and 3) the callback graph. Since each instruction is labeled with a compile-time random number, KRACE is able to pinpoint the conflicting lines in the source code when a data race occurs. Further coupled with the basic block branching information, KRACE is able to recover the full call trace, up to the syscall entry point or the thread creation point, for all involved threads during the race condition. The report may also include the callback graph derived from the happens-before analysis, to further assist the developers with the origin of the threads. In fact, kernel developers have never asked for a deterministic replay of the trace and are able to judge whether a race is harmful or benign based on the information provided.

Missing bugs. Offlining the data race checker means that KRACE might miss data race bugs. As discussed in §III-B, alias coverage is just an approximation of state exploration progress in the concurrency dimension, and there might be new program states explored at runtime that do not show up as new coverage, i.e., meaningful interleavings missed by alias coverage. KRACE forgoes the opportunity to check data races in those cases; this is a trade-off made in favor of expanding the coverage with efficiency.


Fig. 10: Implementation of the QEMU VM-based fuzzing executor in KRACE. The VM instance and the host have three communication channels: 1) a private memory mapping, which contains the test case program to be executed by the VM and the seed quality report generated by the KRACE runtime; 2) a globally shared memory mapping, which contains the coverage bitmaps globally available to the host and all VM instances; 3) file sharing under the 9p protocol for sharing of large files, including the file system image and the execution log.

E. Implementation

KRACE's code base is divided into two parts: 1) compile-time preparation, including annotations to the kernel source code (in the form of kernel patches), an LLVM instrumentation pass, and the KRACE library compiled into the kernel that provides coverage tracking and logging at runtime; and 2) a VM-based fuzzing loop that evolves test cases, executes them in QEMU VMs, and checks for data races. The complexity of each component is described in Table III and an overview of the runtime executor is shown in Figure 10. Due to space constraints, more details can be found in §D.

VII. EVALUATION

In this section, we evaluate KRACE as a whole as well as each component. In particular, we show the overall effectiveness of KRACE by listing the previously unknown data races found (§VII-A); provide a comprehensive view of KRACE's performance characteristics, e.g., speed, scalability, etc., as a file system fuzzer (§VII-B); justify major design decisions with controlled experiments (§VII-C); and compare KRACE against recent OS and data race fuzzers (§VII-D).

Experiment setup. We evaluate KRACE on a two-socket, 24-core machine running Fedora 29 with Intel Xeon E5-2687W (3.0GHz) CPUs and 256GB memory. All performance evaluations are done on Linux v5.4-rc5, although the main fuzzer runs intermittently across versions from v5.3. We build the kernel core with minimal components but enable as many features as possible for the btrfs and ext4 file system modules. For all evaluations, the fuzzing starts with an empty file system image created from the mkfs.* utilities. We run 24 VM instances in parallel for fuzzing and each VM runs a three-thread seed.

ID  FS     Racing access                                 Status
1   btrfs  heap struct: cur_trans->state                 pending
2   btrfs  heap struct: cur_trans->aborted               harmful
3   btrfs  heap struct: delayed_rsv->full                harmful
4   btrfs  heap struct: sb->s_flags                      benign
5   btrfs  global variable: buffers                      harmful
6   btrfs  heap struct: inode->i_mode                    benign
7   btrfs  heap struct: inode->i_atime                   harmful
8   btrfs  heap struct: BTRFS_I(inode)->disk_i_size      harmful
9   btrfs  heap struct: root->last_log_commit            harmful
10  btrfs  heap struct: free_space_ctl->free_space       benign
11  btrfs  heap struct: cache->item.used                 harmful
12  ext4   heap struct: inode->i_mtime                   benign
13  ext4   heap struct: inode->i_state                   benign
14  ext4   heap struct: ext4_dir_entry_2->inode          benign
15  ext4   heap array: ei->i_data[block]                 harmful
16  VFS    heap string: name in link_path_walk           pending
17  VFS    heap struct: inode->i_state                   benign
18  VFS    heap struct: inode->i_wb_list                 benign
19  VFS    heap struct: inode->i_flag                    benign
20  VFS    heap struct: inode->i_opflags                 benign
21  VFS    heap struct: file->f_mode                     benign*
22  VFS    heap struct: file->f_pos                      pending
23  VFS    heap struct: file->f_ra.ra_pages              harmful

TABLE I: List of data races found and reported by KRACE so far. A status of benign* means that it is a benign race according to the execution paths we submitted, but the kernel developers suspect that there might be other paths leading to potentially harmful cases.

A. Data races in popular file systems

Across intermittent fuzzing runs on two popular kernel file systems (btrfs and ext4) over two months, KRACE found and reported 23 new data races, of which nine have been confirmed to be harmful, 11 are benign, and the rest are still under investigation, as listed in Table I. Note that besides bugs in concrete file systems, KRACE also finds data races in the virtual file system (VFS) layer, which might affect all file systems in the kernel.

Consequence. Based on our preliminary investigation, only one bug (#5) is likely to cause immediate effects (a null-pointer dereference) when triggered. Others are likely to cause performance degradation or specification violations, but we do not see a simple path toward memory errors. This also means that relying on bug signals such as KASan reports or kernel panics might not be sufficient to find data races.

B. Fuzzing characteristics

Coverage growth. The growth patterns for both branch and alias coverage are plotted in Figure 11 (for btrfs) and Figure 12 (for ext4). There are several interesting observations:

Alias coverage size. Although branch coverage for the two file systems grows to roughly the same level (25K vs 20K), btrfs has a significantly larger alias coverage bitmap than ext4 (60K vs 9K). Given that the number of user threads is the same (3 threads), the difference is caused by the level of concurrency inherent in the btrfs and ext4 designs. As shown in Figure 14, btrfs uses at least 22 background threads, and each thread may additionally fork more helper threads, while the only background thread for ext4 is the jbd2 journaling thread. In other words, btrfs is inherently more concurrent than ext4, and dividing work among more


[Plot omitted: # CFG branches (branch coverage) and # aliased instruction pairs (alias coverage) over 0–175 hours of fuzzing; series: Branch, Branch (w/o alias feedback), Branch (Syzkaller), Alias, Alias (w/o delay injection), Alias (w/o seed merging).]

Fig. 11: Evaluation of the coverage growth of KRACE when fuzzing the btrfs file system for a week (168 hours) with various settings.

[Plot omitted: # CFG branches (branch coverage) and # aliased instruction pairs (alias coverage) over 0–175 hours of fuzzing; series: Branch, Branch (w/o alias feedback), Branch (Syzkaller), Alias, Alias (w/o delay injection), Alias (w/o seed merging).]

Fig. 12: Evaluation of the coverage growth of KRACE when fuzzing the ext4 file system for a week (168 hours) with various settings.

threads naturally leads to more alias pairs. The same logic also explains why alias coverage saturates much faster in ext4, the less concurrent file system.
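To make the alias coverage metric concrete, the following is a minimal, illustrative sketch (not KRACE’s actual implementation) of how an alias coverage bitmap could be maintained: a bit is set when an instruction in one thread touches a memory location last touched by a different thread, and setting a previously unset bit is the feedback signal for a new interleaving. The class, the hash, and the bitmap size are all assumptions for illustration.

```python
# Illustrative sketch: track "alias coverage" as hashed pairs of
# instructions that access the same memory cell from different threads.
BITMAP_SIZE = 1 << 16  # hypothetical bitmap size

class AliasCoverage:
    def __init__(self):
        self.bitmap = bytearray(BITMAP_SIZE)
        self.last_access = {}  # mem_addr -> (thread_id, inst_addr)

    def on_access(self, thread_id, inst_addr, mem_addr):
        """Record a memory access; return True if a new alias pair is seen."""
        new = False
        prev = self.last_access.get(mem_addr)
        if prev is not None and prev[0] != thread_id:
            # Hash the (previous, current) instruction pair into the bitmap.
            slot = (prev[1] * 31 + inst_addr) % BITMAP_SIZE
            if not self.bitmap[slot]:
                self.bitmap[slot] = 1
                new = True  # feedback: a new cross-thread pair was observed
        self.last_access[mem_addr] = (thread_id, inst_addr)
        return new
```

In this model, a more concurrent file system (more kernel background threads touching shared state) naturally populates more slots, matching the btrfs-vs-ext4 gap described above.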

Growth synchronization. In general, the two coverage metrics grow in synchronization. It is expected that progress in branch coverage will yield new alias coverage too, because new code paths mean new memory-accessing instructions and hence new alias pairs. However, it is the other direction that matters more: branch coverage saturates but alias coverage keeps growing, e.g., starting from hour 75 in the btrfs case or hour 25 in the ext4 case. In other words, KRACE keeps finding new execution states (thread interleavings) that would otherwise be missed if only branch coverage were tracked.

Instrumentation overhead. The code instrumentation from KRACE is heavy, and we expect it to cause significant overhead in execution. To show this, we present aggregated statistics on the execution time for seeds bearing different numbers of syscalls. For comparison, we also run these seeds on a bare-metal kernel built without KRACE instrumentation. The results are plotted in Figure 13. In summary, in the zero-syscall case, i.e., by merely loading (file system module) → mounting (image) → unmounting → unloading, KRACE already incurs 47.6% and 34.3% overhead, and the more syscalls KRACE executes, the more overhead it accumulates.

[Plot omitted: average seed execution time (seconds) and average seed analysis time (seconds) over 0–30 syscalls in the seed; series: btrfs execution time, ext4 execution time, btrfs baseline, ext4 baseline, btrfs analysis time, ext4 analysis time.]

Fig. 13: Evaluation of seed execution and analysis time in KRACE with a varying number of syscalls in the seed.

The overhead mainly comes from memory access instrumentation, as every memory access is now turned into a function call where atomic operations are performed and synchronized, not only with respect to all other threads on the VM, but also against all threads across all VMs, as the thread is updating the global bitmap on the host directly (implicitly handled by the QEMU ivshmem module). As a result, further optimizations are possible. For example, a VM instance may accumulate coverage locally and update the global bitmap in batches instead of on every memory access.
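The batching optimization suggested above can be sketched as follows. This is a hypothetical illustration, not KRACE’s code: new bits are first recorded in a VM-local buffer, and only a periodic flush touches the shared (host-wide) bitmap, replacing per-access synchronization with one synchronized pass per batch.

```python
# Sketch of batched coverage updates (hypothetical API): accumulate
# locally, flush to the globally shared bitmap every `flush_every` new bits.
class BatchedCoverage:
    def __init__(self, global_bitmap, flush_every=4096):
        self.global_bitmap = global_bitmap        # shared across all VMs
        self.local = bytearray(len(global_bitmap))  # VM-private buffer
        self.pending = 0
        self.flush_every = flush_every

    def record(self, slot):
        if not self.local[slot]:
            self.local[slot] = 1
            self.pending += 1
        if self.pending >= self.flush_every:
            self.flush()

    def flush(self):
        # One synchronized pass instead of an atomic op per memory access.
        for i, bit in enumerate(self.local):
            if bit:
                self.global_bitmap[i] |= bit
        self.pending = 0
```

The trade-off is staleness: other VMs see new coverage only after a flush, which is acceptable for a feedback signal but delays deduplication across instances.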

It is, however, debatable whether the overhead is detrimental to KRACE as a fuzzer, since lower overhead simply means that the coverage growth will converge and saturate faster. In our opinion, the overhead caused by tracking more coverage (including alias coverage) is a trade-off between execution speed and seed quality. A fuzzer with fast executions may waste resources on non-interesting test cases, while a fuzzer with slow executions but finer-grained tracking might eventually have higher chances to explore more states.

Data race checking cost. Another limiting factor for KRACE is the time needed to analyze the execution logs for data race detection, which also depends on the length of the execution trace. The trend is also plotted in Figure 13. In summary, the analysis time ranges from 4–7 minutes (0–30 syscalls per seed) for btrfs and 2–6 minutes for ext4. Such a time cost is obviously not feasible for online checking (even after optimization) but can be tolerated for offline checking, i.e., KRACE schedules a data race check only when a new seed is discovered. This strategy works especially well when fuzzing saturates, as the bottleneck for making further progress then becomes finding new execution states instead of checking the trace. Based on our experience, running four checker processes alongside 24 fuzzing VM instances is more than sufficient to keep up with the progress within 96 hours in both cases.

C. Component evaluations

Coverage effectiveness. Although the two coverage metrics represent different aspects of program execution, we are also curious whether tracking explorations in the concurrency dimension may help in finding new code paths (represented by


branch coverage). To check this, we disabled the alias coverage feedback and let KRACE explore the states, mimicking the feedback loop of existing OS and file system fuzzers. The results (Figure 11 and Figure 12) show that exploring the concurrency domain also helps to find new code coverage. Most notably, without alias coverage feedback, branch coverage grows much faster at the beginning, because no fuzzing effort is spent on exploring thread interleavings, but it saturates at a lower number (7.2% and 4.0% less). Moreover, if we count only the new branches explored (beyond the branches in the initial seed), the coverage reduces by 20.4% and 10.7%, respectively. The more concurrent the file system is, the more branch coverage will be explored by enabling alias coverage feedback. This is not surprising, as certain code paths exist to handle contention in the system, such as the paths executed when a try_lock fails or when a sequence lock retries. Exploring the concurrency dimension helps to reveal these paths and boost branch coverage.

Delay injection effectiveness. To test whether injecting delays helps exploration in the concurrency dimension, we disabled delay injection in this fuzzing experiment; the resulting alias coverage growth is shown in Figure 11 and Figure 12. With delay injection disabled, KRACE found 28.7% and 12.3% less alias coverage in btrfs and ext4, respectively. This shows that delay injection is important for finding more alias coverage. In particular, when branch coverage saturates, delay injection becomes the leading force in finding alias coverage, as shown by the widening gap between the growth curves. The more concurrent the file system is, the more important delay injection becomes.

Seed merging effectiveness. To test whether reusing seeds helps exploration in the concurrency dimension, we disabled seed merging in this fuzzing experiment, i.e., KRACE only adds, deletes, and mutates syscalls but never reuses found seeds. The alias coverage growth is shown in Figure 11 and Figure 12. With seed merging disabled, KRACE found 37.7% and 14.2% less alias coverage in btrfs and ext4, respectively. This experiment shows that reusing seeds is important for quickly expanding the coverage. More importantly, preserving the semantics among the syscalls and interleaving the seeds help find more alias coverage.
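One way to picture the semantics-preserving merge discussed above is the following toy sketch. It is an assumption about the merge shape, not KRACE’s actual algorithm: each multi-threaded seed is a list of per-thread syscall sequences, and the merged seed keeps the relative order of syscalls within each original seed (so dependencies such as open-before-read survive) while placing both seeds’ syscalls into the same threads, where they can interleave at runtime.

```python
# Hypothetical sketch of merging two multi-threaded seeds. A seed is a
# list of per-thread syscall sequences; merging concatenates per thread,
# preserving each seed's internal syscall order (its semantics).
def merge_seeds(seed_a, seed_b):
    assert len(seed_a) == len(seed_b)  # assume the same number of threads
    return [a + b for a, b in zip(seed_a, seed_b)]
```

Under this model, disabling merging forfeits exactly the cross-seed interleavings that the experiment above shows to be valuable.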

Components in the data race checker. To show that it is important to have both happens-before and lockset analysis (and their sub-components) in the data race checker, we sampled a simple fuzzing run: load the btrfs module, mount an empty image, execute two syscalls × three threads, unmount the image, and unload the btrfs module. The following shows the filtering effect of each component in the data race checker:

• data race candidates: 35,658
  + after lockset analysis on pessimistic locks: 13,347
  + after lockset analysis on optimistic locks: 8,903
  + after tracking fork-style happens-before relations: 6,275
  + after tracking join-style happens-before relations: 3,509
  + after handling the publisher-subscriber model: 103
  + after handling ad-hoc schemes: 7 (all benign races)
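The filtering stages above can be viewed as a pipeline of predicates, each discarding candidate pairs it can prove race-free. The sketch below is illustrative only: the stage names mirror the list above, but the candidates are toy records with precomputed per-stage verdicts rather than real trace analysis.

```python
# Sketch: the checker's filtering stages as a pipeline of predicates.
# Each stage keeps only candidates it cannot prove race-free; the
# returned counts show the shrinking candidate set after each stage.
def filter_candidates(candidates, stages):
    counts = [len(candidates)]
    for _name, keep_if_maybe_racy in stages:
        candidates = [c for c in candidates if keep_if_maybe_racy(c)]
        counts.append(len(candidates))
    return candidates, counts

# Toy stage definitions (names mirror the stages listed above).
stages = [
    ("pessimistic locksets", lambda c: not c["common_lock"]),
    ("optimistic locksets",  lambda c: not c["seqlock_protected"]),
    ("fork happens-before",  lambda c: not c["fork_ordered"]),
]
```

The order matters for cost, not correctness: cheap lockset checks run first so that the more expensive happens-before tracking only sees the survivors.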

D. Comparison with related fuzzers

Execution speed vs coverage. In terms of efficiency, KRACE is not comparable to other OS and file system fuzzers, as one execution takes at least seven seconds in KRACE, while the number can be as low as 10 milliseconds for libOS-based fuzzers [5, 6] or never-refreshing VM-based fuzzers like Syzkaller. However, the effectiveness of a fuzzer is not solely decided by fuzzing speed. A more important metric is the coverage size, especially when saturated. Intuitively, if the saturated coverage is low, being fast in execution only implies that the coverage will converge faster and mostly stall afterward.

On the metric of saturated coverage, KRACE outperforms Syzkaller for both btrfs and ext4, by 12.3% and 5.5%, respectively, as shown in Figure 11 and Figure 12. Even without the alias coverage feedback, the branch coverage from KRACE still outperforms Syzkaller, showing the effectiveness of KRACE’s seed evolution strategies, especially the merging strategy for multi-threaded seeds, which is currently not available in Syzkaller. In fact, KRACE is able to catch up to Syzkaller’s branch coverage progress within 30 hours and eight hours for btrfs and ext4, respectively.

Data race detection. Razzer [24] reports four data races in file systems, and we found the patches for two of them, both in the VFS layer. To check that KRACE can detect these cases, we manually reverted the patches in the kernel and confirmed that both cases are found. We would like to do the same for SKI [7], but the data races found by SKI are too old (in 3.13 kernels), and locating and reverting the patches is not easy.

VIII. CONCLUSION AND FUTURE WORK

This paper presents KRACE, an end-to-end fuzzing framework that brings concurrency aspects into coverage-guided file system fuzzing. KRACE achieves this with three new constructs: 1) the alias coverage metric for tracking exploration progress in the concurrency dimension; 2) the algorithm for evolving and merging multi-threaded syscall sequences; and 3) a comprehensive lockset and happens-before modeling for kernel synchronization primitives. KRACE has uncovered 23 new data races so far and will keep running for more reports.

Looking forward, we plan to extend KRACE in at least three directions: 1) data race detection in other kernel components; 2) semantic checking for more types of concurrency bugs; and 3) fuzzing distributed file systems, which involve not only thread interleavings but also network event ordering and hence require completely new coverage metrics to capture.

IX. ACKNOWLEDGMENT

We thank the anonymous reviewers and our shepherd, Yan Shoshitaishvili, for their insightful feedback. This research was supported, in part, by NSF under awards CNS-1563848, CNS-1704701, CRI-1629851, and CNS-1749711; ONR under grants N00014-18-1-2662, N00014-15-1-2162, and N00014-17-1-2895; DARPA TC (No. DARPA FA8650-15-C-7556); ETRI IITP/KEIT [B0101-17-0644]; and gifts from Facebook, Mozilla, Intel, VMware, and Google.


REFERENCES

[1] L. Lu, A. C. Arpaci-Dusseau, R. H. Arpaci-Dusseau, and S. Lu, "A study of Linux file system evolution," Trans. Storage, vol. 10, no. 1, pp. 3:1–3:32, Jan. 2014. [Online]. Available: http://doi.acm.org/10.1145/2560012

[2] J. Huang, M. K. Qureshi, and K. Schwan, "An Evolutionary Study of Linux Memory Management for Fun and Profit," in Proceedings of the 2016 USENIX Annual Technical Conference (ATC), Berkeley, CA, USA, Jun. 2016, pp. 465–478.

[3] A. Aghayev, S. Weil, M. Kuchnik, M. Nelson, G. R. Ganger, and G. Amvrosiadis, "File Systems Unfit As Distributed Storage Backends: Lessons from 10 Years of Ceph Evolution," in Proceedings of the 27th ACM Symposium on Operating Systems Principles (SOSP). New York, NY, USA: ACM, Oct. 2019, pp. 353–369.

[4] C. Min, S. Kashyap, S. Maass, W. Kang, and T. Kim, "Understanding Manycore Scalability of File Systems," in Proceedings of the 2016 USENIX Annual Technical Conference (ATC), Denver, CO, Jun. 2016.

[5] W. Xu, H. Moon, S. Kashyap, P.-N. Tseng, and T. Kim, "Fuzzing File Systems via Two-Dimensional Input Space Exploration," in Proceedings of the 40th IEEE Symposium on Security and Privacy (Oakland), San Francisco, CA, May 2019.

[6] S. Kim, M. Xu, S. Kashyap, J. Yoon, W. Xu, and T. Kim, "Finding Semantic Bugs in File Systems with an Extensible Fuzzing Framework," in Proceedings of the 27th ACM Symposium on Operating Systems Principles (SOSP), Ontario, Canada, Oct. 2019.

[7] P. Fonseca, R. Rodrigues, and B. B. Brandenburg, "SKI: Exposing Kernel Concurrency Bugs Through Systematic Schedule Exploration," in Proceedings of the 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI), Broomfield, Colorado, Oct. 2014.

[8] MITRE Corporation, "CVE-2009-1235," https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-1235, 2009.

[9] J. Corbet, "Unprivileged filesystem mounts, 2018 edition," https://lwn.net/Articles/755593, 2018.

[10] Kernel.org Bugzilla, "Btrfs bug entries," https://bugzilla.kernel.org/buglist.cgi?component=btrfs, 2018.

[11] MITRE Corporation, "F2FS CVE entries," http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=f2fs, 2018.

[12] Kernel.org Bugzilla, "ext4 bug entries," https://bugzilla.kernel.org/buglist.cgi?component=ext4, 2018.

[13] G. Li, S. Lu, M. Musuvathi, S. Nath, and R. Padhye, "Efficient Scalable Thread-safety-violation Detection: Finding Thousands of Concurrency Bugs During Testing," in Proceedings of the 27th ACM Symposium on Operating Systems Principles (SOSP). New York, NY, USA: ACM, Oct. 2019, pp. 162–180.

[14] Silicon Graphics Inc. (SGI), "(x)fstests is a filesystem testing suite," https://github.com/kdave/xfstests, 2018.

[15] SGI, OSDL and Bull, "Linux Test Project," https://github.com/linux-test-project/ltp, 2018.

[16] M. Zalewski, "American Fuzzy Lop (2.52b)," http://lcamtuf.coredump.cx/afl, 2019.

[17] S. Rawat, V. Jain, A. Kumar, L. Cojocar, C. Giuffrida, and H. Bos, "VUzzer: Application-aware Evolutionary Fuzzing," in Proceedings of the 24th ACM Conference on Computer and Communications Security (CCS), Dallas, TX, Oct.–Nov. 2017.

[18] Google Inc., "honggfuzz," http://honggfuzz.com/, 2019.

[19] M. Böhme, V.-T. Pham, M.-D. Nguyen, and A. Roychoudhury, "Directed Greybox Fuzzing," in Proceedings of the 24th ACM Conference on Computer and Communications Security (CCS), Dallas, TX, Oct.–Nov. 2017.

[20] Google, "OSS-Fuzz - Continuous Fuzzing for Open Source Software," https://github.com/google/oss-fuzz, 2018.

[21] Google Inc., "Syzkaller is an Unsupervised, Coverage-guided Kernel Fuzzer," https://github.com/google/syzkaller, 2019.

[22] S. Schumilo, C. Aschermann, R. Gawlik, S. Schinzel, and T. Holz, "kAFL: Hardware-Assisted Feedback Fuzzing for OS Kernels," in Proceedings of the 26th USENIX Security Symposium (Security), Vancouver, Canada, Aug. 2017.

[23] NCC Group, "AFL/QEMU Fuzzing with Full-system Emulation," https://github.com/nccgroup/TriforceAFL, 2017.

[24] D. R. Jeong, K. Kim, B. A. Shivakumar, B. Lee, and I. Shin, "Razzer: Finding Kernel Race Bugs through Fuzzing," in Proceedings of the 40th IEEE Symposium on Security and Privacy (Oakland), San Francisco, CA, May 2019.

[25] S. Pailoor, A. Aday, and S. Jana, "MoonShine: Optimizing OS Fuzzer Seed Selection with Trace Distillation," in Proceedings of the 27th USENIX Security Symposium (Security), Baltimore, MD, Aug. 2018.

[26] J. Corina, A. Machiry, C. Salls, Y. Shoshitaishvili, S. Hao, C. Kruegel, and G. Vigna, "DIFUZE: Interface Aware Fuzzing for Kernel Drivers," in Proceedings of the 24th ACM Conference on Computer and Communications Security (CCS), Dallas, TX, Oct.–Nov. 2017.

[27] D. Jones, "Linux system call fuzzer," https://github.com/kernelslacker/trinity, 2018.

[28] R. N. Netzer and B. P. Miller, "Detecting Data Races in Parallel Program Executions," in Advances in Languages and Compilers for Parallel Computing, 1990 Workshop. MIT Press, pp. 109–129.

[29] L. Lamport, "Time, clocks, and the ordering of events in a distributed system," Commun. ACM, vol. 21, no. 7, pp. 558–565, Jul. 1978. [Online]. Available: http://doi.acm.org/10.1145/359545.359563

[30] S. Savage, M. Burrows, G. Nelson, P. Sobalvarro, and T. Anderson, "Eraser: A Dynamic Data Race Detector for Multithreaded Programs," ACM Trans. Comput. Syst., vol. 15, no. 4, pp. 391–411, Nov. 1997.

[31] M. D. Bond, K. E. Coons, and K. S. McKinley, "PACER: Proportional Detection of Data Races," in Proceedings of the 2010 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). New York, NY, USA: ACM, Jun. 2010, pp. 255–268.

[32] Z. Anderson, D. Gay, R. Ennals, and E. Brewer, "SharC: Checking Data Sharing Strategies for Multithreaded C," in Proceedings of the 2008 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). New York, NY, USA: ACM, Jun. 2008, pp. 149–158.

[33] E. Pozniansky and A. Schuster, "Efficient On-the-fly Data Race Detection in Multithreaded C++ Programs," in Proceedings of the 9th ACM Symposium on Principles and Practice of Parallel Programming (PPOPP). New York, NY, USA: ACM, Jun. 2003, pp. 179–190.

[34] D. Marino, M. Musuvathi, and S. Narayanasamy, "LiteRace: Effective Sampling for Lightweight Data-race Detection," in Proceedings of the 2009 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). New York, NY, USA: ACM, Jun. 2009, pp. 134–143.

[35] P. McKenney, "The RCU API, 2019 edition," https://lwn.net/Articles/777036/, 2019.

[36] S. Park, S. Lu, and Y. Zhou, "CTrigger: Exposing Atomicity Violation Bugs from Their Hiding Places," in Proceedings of the 14th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). New York, NY, USA: ACM, Mar. 2009, pp. 25–36.

[37] K. Sen, "Race Directed Random Testing of Concurrent Programs," in Proceedings of the 2008 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). New York, NY, USA: ACM, Jun. 2008, pp. 11–21.

[38] Y. Cai, J. Zhang, L. Cao, and J. Liu, "A Deployable Sampling Strategy for Data Race Detection," in Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering, ser. FSE 2016. New York, NY, USA: ACM, 2016, pp. 810–821.

[39] K. Serebryany and T. Iskhodzhanov, "ThreadSanitizer: Data Race Detection in Practice," in Proceedings of the Workshop on Binary Instrumentation and Applications, ser. WBIA '09. New York, NY, USA: ACM, 2009, pp. 62–71.

[40] P. Deligiannis, A. F. Donaldson, and Z. Rakamaric, "Fast and Precise Symbolic Analysis of Concurrency Bugs in Device Drivers (T)," in Proceedings of the 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE), ser. ASE '15. Washington, DC, USA: IEEE Computer Society, 2015, pp. 166–177.

[41] D. Engler and K. Ashcraft, "RacerX: Effective, Static Detection of Race Conditions and Deadlocks," in Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP), Bolton Landing, NY, Oct. 2003.

[42] S. Hong and M. Kim, "Effective Pattern-driven Concurrency Bug Detection for Operating Systems," J. Syst. Softw., vol. 86, no. 2, pp. 377–388, Feb. 2013.

[43] S. Lu, S. Park, C. Hu, X. Ma, W. Jiang, Z. Li, R. A. Popa, and Y. Zhou, "MUVI: Automatically Inferring Multi-variable Access Correlations and Detecting Related Semantic and Concurrency Bugs," in Proceedings of the 21st ACM Symposium on Operating Systems Principles (SOSP). Stevenson, WA: ACM, Oct. 2007, pp. 103–116.

[44] J. W. Voung, R. Jhala, and S. Lerner, "RELAY: Static Race Detection on Millions of Lines of Code," in Proceedings of the 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on The Foundations of Software Engineering, ser. ESEC-FSE '07. New York, NY, USA: ACM, 2007, pp. 205–214.

[45] J. Erickson, M. Musuvathi, S. Burckhardt, and K. Olynyk, "Effective Data-race Detection for the Kernel," in Proceedings of the 9th USENIX Symposium on Operating Systems Design and Implementation (OSDI). Berkeley, CA, USA: USENIX Association, Oct. 2010, pp. 151–162.

[46] M. Elver, "Add Kernel Concurrency Sanitizer (KCSAN)," https://lwn.net/Articles/802402/, 2019.

[47] J. Alglave, W. Deacon, B. Feng, D. Howells, D. Lustig, L. Maranget, P. E. McKenney, A. Parri, N. Piggin, A. Stern, A. Yokosawa, and P. Zijlstra, "Who's afraid of a big bad optimizing compiler?" https://lwn.net/Articles/793253/, 2019.

[48] S. Burckhardt, P. Kothari, M. Musuvathi, and S. Nagarakatte, "A Randomized Scheduler with Probabilistic Guarantees of Finding Bugs," in Proceedings of the 15th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). New York, NY, USA: ACM, Mar. 2010, pp. 167–178.

[49] Y. Sui and J. Xue, "SVF: Interprocedural Static Value-Flow Analysis in LLVM," in Proceedings of the 25th International Conference on Compiler Construction (CC), Barcelona, Spain, Mar. 2016.

[50] LLVM Project, "libFuzzer - a library for coverage-guided fuzz testing," https://llvm.org/docs/LibFuzzer.html, 2018.

[51] P. Chen and H. Chen, "Angora: Efficient Fuzzing by Principled Search," in Proceedings of the 39th IEEE Symposium on Security and Privacy (Oakland), San Francisco, CA, May 2018.

[52] S. Gan, C. Zhang, X. Qin, X. Tu, K. Li, Z. Pei, and Z. Chen, "CollAFL: Path Sensitive Fuzzing," in Proceedings of the 39th IEEE Symposium on Security and Privacy (Oakland), San Francisco, CA, May 2018.

[53] M. Böhme, V.-T. Pham, and A. Roychoudhury, "Coverage-based Greybox Fuzzing as Markov Chain," in Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS), Vienna, Austria, Oct. 2016.

[54] M. Böhme, V.-T. Pham, M.-D. Nguyen, and A. Roychoudhury, "Directed Greybox Fuzzing," in Proceedings of the 24th ACM Conference on Computer and Communications Security (CCS), Dallas, TX, Oct.–Nov. 2017.

[55] NCC Group, "AFL/QEMU fuzzing with full-system emulation," https://github.com/nccgroup/TriforceAFL, 2017.

[56] MWR Labs, "Cross Platform Kernel Fuzzer Framework," https://github.com/mwrlabs/KernelFuzzer, 2016.

[57] H. Han and S. K. Cha, "IMF: Inferred Model-based Fuzzer," in Proceedings of the 24th ACM Conference on Computer and Communications Security (CCS), Dallas, TX, Oct.–Nov. 2017.

[58] NCC Group, "A Linux system call fuzzer using TriforceAFL," https://github.com/nccgroup/TriforceLinuxSyscallFuzzer, 2017.

[59] MWR Labs, "macOS Kernel Fuzzer," https://github.com/mwrlabs/OSXFuzz, 2017.

[60] M. J. Renzelmann, A. Kadav, and M. M. Swift, "SymDrive: Testing Drivers without Devices," in Proceedings of the 10th USENIX Symposium on Operating Systems Design and Implementation (OSDI), Hollywood, CA, Oct. 2012.

[61] A. Machiry, C. Spensky, J. Corina, N. Stephens, C. Kruegel, and G. Vigna, "DR. Checker: A Soundy Analysis for Linux Kernel Drivers," in Proceedings of the 26th USENIX Security Symposium (Security), Vancouver, Canada, Aug. 2017.

[62] S. Lu, S. Park, E. Seo, and Y. Zhou, "Learning from Mistakes - A Comprehensive Study on Real World Concurrency Bug Characteristics," in Proceedings of the 13th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), Seattle, WA, Mar. 2008.

[63] S. Burckhardt, P. Kothari, M. Musuvathi, and S. Nagarakatte, "A Randomized Scheduler with Probabilistic Guarantees of Finding Bugs," in Proceedings of the 15th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), Pittsburgh, PA, Mar. 2010.

[64] P. Ammann and J. Offutt, Introduction to Software Testing, 2nd ed. New York, NY, USA: Cambridge University Press, 2016.

[65] J. Wang, Y. Duan, W. Song, H. Yin, and C. Song, "Be Sensitive and Collaborative: Analyzing Impact of Coverage Metrics in Greybox Fuzzing," in 22nd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2019). Chaoyang District, Beijing: USENIX Association, Sep. 2019, pp. 1–15. [Online]. Available: https://www.usenix.org/conference/raid2019/presentation/wang

[66] K. Owens and A. Arcangeli, "Seqlock implementation in Linux," https://github.com/torvalds/linux/blob/master/include/linux/seqlock.h, 2019.

[67] Google, "syzbot," https://syzkaller.appspot.com, 2018.

[68] O. Purdila, L. A. Grijincu, and N. Tapus, "LKL: The Linux kernel library," in Proceedings of the 9th Roedunet International Conference (RoEduNet). IEEE, 2010.

[69] W. Xiong, S. Park, J. Zhang, Y. Zhou, and Z. Ma, "Ad Hoc Synchronization Considered Harmful," in Proceedings of the 9th USENIX Symposium on Operating Systems Design and Implementation (OSDI), Vancouver, Canada, Oct. 2010.


APPENDIX

A. Level of concurrency in the btrfs file system

struct btrfs_fs_info {
    /* work queues */
    struct btrfs_workqueue *workers;
    struct btrfs_workqueue *delalloc_workers;
    struct btrfs_workqueue *flush_workers;
    struct btrfs_workqueue *endio_workers;
    struct btrfs_workqueue *endio_meta_workers;
    struct btrfs_workqueue *endio_raid56_workers;
    struct btrfs_workqueue *endio_repair_workers;
    struct btrfs_workqueue *rmw_workers;
    struct btrfs_workqueue *endio_meta_write_workers;
    struct btrfs_workqueue *endio_write_workers;
    struct btrfs_workqueue *endio_freespace_worker;
    struct btrfs_workqueue *submit_workers;
    struct btrfs_workqueue *caching_workers;
    struct btrfs_workqueue *readahead_workers;
    struct btrfs_workqueue *fixup_workers;
    struct btrfs_workqueue *delayed_workers;
    struct btrfs_workqueue *scrub_workers;
    struct btrfs_workqueue *scrub_wr_completion_workers;
    struct btrfs_workqueue *scrub_parity_workers;
    struct btrfs_workqueue *qgroup_rescan_workers;
    /* background threads */
    struct task_struct *transaction_kthread;
    struct task_struct *cleaner_kthread;
};

Fig. 14: 20 work queues and 2 background threads used by btrfs. This does not cover all asynchronous activities observable at runtime.

B. Seed evolution in KRACE

def fuzzing_loop(ext_limit, mod_limit, rep_limit):
    while True:
        program = merge_seeds(select_seed_pair())

        ext_stall = 0
        while ext_stall < ext_limit:
            ext_stall++
            [50%] program.add_syscall()
            [50%] program.del_syscall()

            mod_stall = 0
            while mod_stall < mod_limit:
                mod_stall++
                [80%] program.mutate()
                [20%] program.shuffle()

                rep_stall = 0
                while rep_stall < rep_limit:
                    rep_stall++
                    delay = randomize_delay()
                    cov, log = run(program, delay)

                    if not cov.empty():
                        rep_stall = mod_stall = ext_stall = 0
                        schedule_data_race_check(log)
                        prune_and_save_seed(program)

Fig. 15: The seed evolution process (a.k.a. the fuzzing loop) in KRACE

Three parameters tune the behavior of the seed evolution loop: ext_limit, mod_limit, and rep_limit, as shown in Figure 15. In KRACE, they take the values of 10, 10, and 5, respectively. That is,

• if any new coverage, either branch or alias, is observed in 5 consecutive runs, KRACE will continue to run the same multi-threaded seed for 5 more times, but with a new delay schedule each time;

• if no new coverage is observed for 5 consecutive runs, KRACE starts to mutate the syscall arguments in the multi-threaded trace or shuffle the syscalls;

• if no new coverage is observed for 50 consecutive runs, KRACE starts to alter the input structure by adding or deleting syscalls in the multi-threaded traces;

• if no new coverage is observed for 500 consecutive runs, KRACE starts to merge two seeds into a new seed.
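The thresholds in the bullets above (5, 50, and 500 runs) follow directly from the nesting of the three loops in Figure 15 with rep_limit=5, mod_limit=10, and ext_limit=10. A quick arithmetic check:

```python
# Quick check of the stall thresholds implied by Figure 15's loop nesting:
# with no new coverage at all, the innermost delay-retry loop exhausts
# after rep_limit runs (then mutation kicks in), a full middle loop after
# mod_limit * rep_limit runs (then add/delete kicks in), and a full outer
# loop after ext_limit * mod_limit * rep_limit runs (then merging kicks in).
def runs_before(ext_limit=10, mod_limit=10, rep_limit=5):
    runs_before_mutate = rep_limit
    runs_before_add_del = mod_limit * runs_before_mutate
    runs_before_merge = ext_limit * runs_before_add_del
    return runs_before_mutate, runs_before_add_del, runs_before_merge
```

Any new coverage resets all three stall counters at once, so a productive seed keeps cycling in the cheap inner loop instead of being restructured.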

C. Ad-hoc synchronization schemes in kernel file systems

Although ad-hoc synchronization schemes are considered harmful [69], they may still exist in kernel file systems for performance or functionality enhancements. Whenever we encounter an ad-hoc scheme (usually when analyzing false positives), we annotate it in the same way as major synchronization APIs so that subsequent runs will not report the false data races caused by it. In this section, we present two examples we encountered in btrfs.

Ad-hoc locking. An ad-hoc lock has two implications: 1) there will be data races in the lock implementation, and these data races are all benign; and 2) the lock internals should be abstracted in a way that the lockset analysis can easily understand. A representative example is the btrfs tree lock, whose purpose is to be convertible between blocking and non-blocking modes, as shown in Figure 16.

 1 /* acquire a spinning write lock, wait for both
 2  * blocking readers or writers */
 3 void btrfs_tree_lock(struct extent_buffer *eb)
 4 {
 5     u64 start_ns = 0;
 6     if (trace_btrfs_tree_lock_enabled())
 7         start_ns = ktime_get_ns();
 8
 9     WARN_ON(eb->lock_owner == current->pid);
10 again:
11     wait_event(eb->read_lock_wq,
12                atomic_read(&eb->blocking_readers) == 0);
13     wait_event(eb->write_lock_wq, eb->blocking_writers == 0);
14     write_lock(&eb->lock);
15     if (atomic_read(&eb->blocking_readers)
16         || eb->blocking_writers) {
17         write_unlock(&eb->lock);
18         goto again;
19     }
20     btrfs_assert_spinning_writers_get(eb);
21     btrfs_assert_tree_write_locks_get(eb);
22     eb->lock_owner = current->pid;
23 }
24 /* drop a spinning or a blocking write lock. */
25 void btrfs_tree_unlock(struct extent_buffer *eb)
26 {
27     int blockers = eb->blocking_writers;
28     BUG_ON(blockers > 1);
29
30     btrfs_assert_tree_locked(eb);
31     eb->lock_owner = 0;
32     btrfs_assert_tree_write_locks_put(eb);
33
34     if (blockers) {
35         btrfs_assert_no_spinning_writers(eb);
36         eb->blocking_writers--;
37         cond_wake_up(&eb->write_lock_wq);
38     } else {
39         btrfs_assert_spinning_writers_put(eb);
40         write_unlock(&eb->lock);
41     }
42 }

Fig. 16: A snippet of the btrfs tree lock (writer side only).


Tree lock API                   Lockset mapping

tree_lock                       writer-lock
tree_unlock                     writer-unlock
tree_read_lock                  reader-lock
tree_read_lock_atomic           reader-lock
tree_read_unlock                reader-unlock
tree_read_unlock_blocking       reader-unlock
tree_set_lock_blocking_read     no-op if read-locked
tree_set_lock_blocking_write    no-op if write-locked
try_tree_read_lock              reader-lock if it succeeds
try_tree_write_lock             writer-lock if it succeeds

TABLE II: Semantic mapping between the tree lock and conventional locks (in particular, the readers-writer lock).

In these functions, almost every memory access to the fields in the extent buffer, eb, could be racing against other accesses, e.g., eb->lock_owner at line 9 against eb->lock_owner = 0 at line 31. So the first annotation for KRACE is to assume that all data races within these functions are benign.
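Conceptually, once the tree-lock APIs are annotated, they can feed an ordinary lockset computation. The following is a minimal sketch under our own naming, not KRACE's actual data structures: each annotated acquire or release toggles one abstract lock bit in a per-thread lockset, and two accesses may race only if their locksets do not intersect.

```c
#include <assert.h>
#include <stdint.h>

/* Abstract lock bits; one bit per annotated lock (illustrative). */
#define TREE_LOCK_WRITER (1u << 0)
#define TREE_LOCK_READER (1u << 1)

/* Annotated btrfs_tree_lock is reduced to a plain writer-lock,
 * regardless of its internal spinning/blocking machinery. */
static uint32_t on_tree_lock(uint32_t lockset)
{
    return lockset | TREE_LOCK_WRITER;
}

static uint32_t on_tree_unlock(uint32_t lockset)
{
    return lockset & ~TREE_LOCK_WRITER;
}

/* Lockset rule: two accesses to the same location may race only if
 * the intersection of the locksets held at each access is empty. */
static int may_race(uint32_t lockset_a, uint32_t lockset_b)
{
    return (lockset_a & lockset_b) == 0;
}
```

This is exactly why the annotation pays off: the analysis never needs to reason about wait_event loops or blocking-reader counters, only about one abstract writer-lock bit.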

To further encode the locking semantics for the lockset analysis, we study the tree lock APIs and map their functionality onto a simple readers-writer lock, as shown in Table II. In other words, calling, e.g., tree_lock is treated the same as calling writer-lock in a conventional locking mechanism. Although tree_lock performs much more computation (e.g., waiting for both blocking and non-blocking readers), from the lockset perspective, it is equivalent to a writer-lock.

Ad-hoc ordering. Ad-hoc ordering implies undocumented causal relations between thread executions, and a good example is the customization of the conventional kernel work queue in btrfs, as shown in Figure 17.

 1 static inline void __btrfs_queue_work(struct __btrfs_workqueue *wq,
 2                                       struct btrfs_work *work)
 3 {
 4     unsigned long flags;
 5     work->wq = wq;
 6     if (work->ordered_func) {
 7         spin_lock_irqsave(&wq->list_lock, flags);
 8         list_add_tail(&work->ordered_list, &wq->ordered_list);
 9         spin_unlock_irqrestore(&wq->list_lock, flags);
10     }
11     queue_work(wq->normal_wq, &work->normal_work);
12 }
13 static void normal_work_helper(struct btrfs_work *work) {
14     /* ... */
15     work->func(work);
16     if (need_order)
17         set_bit(WORK_DONE_BIT, &work->flags);
18     /* ... */
19 }
20 static void run_ordered_work(struct __btrfs_workqueue *wq) {
21     /* ... */
22     work = list_entry(list->next, struct btrfs_work, ordered_list);
23     if (test_bit(WORK_DONE_BIT, &work->flags))
24         work->ordered_func(work);
25     /* ... */
26 }

Fig. 17: A snippet of the btrfs work queue implementation.

In this example, set_bit and test_bit (lines 17 and 23) establish an additional causal relation beyond the normal queue_work semantics: the ordered function only starts executing after the normal function finishes. Thus, although the observed happens-before relations are line 8 → line 24 and line 11 → line 15, the actual relation is line 8 → line 11 → line 15 → line 24.
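One way to picture the annotation is as an explicit happens-before edge added whenever a test_bit observes a flag that a set_bit published. The sketch below is our own illustration, not KRACE's implementation; the struct and function names are hypothetical, and events are identified by opaque integers.

```c
#include <assert.h>

/* A directed happens-before edge between two logged events. */
struct hb_edge {
    int from_event;  /* the event that performed set_bit   */
    int to_event;    /* the event that performed test_bit  */
    int valid;       /* 1 iff the edge actually holds      */
};

/* Model the set_bit/test_bit pairing: only a *successful* test_bit
 * (the bit was observed as set) orders the observer after the setter;
 * a failed probe establishes no causal relation. */
static struct hb_edge model_bit_sync(int set_event, int test_event,
                                     int bit_was_set)
{
    struct hb_edge e = { set_event, test_event, 0 };
    if (bit_was_set)
        e.valid = 1;
    return e;
}
```

With such an edge in place, transitivity yields the full chain line 8 → line 11 → line 15 → line 24 described above, and accesses ordered through WORK_DONE_BIT stop being reported as races.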

D. KRACE implementation details

Component                                    LoC     Language

Compile-time preparation
  Kernel annotations                         5,653   C
  LLVM instrumentation pass                  1,977   C++
  KRACE kernel runtime library               1,749   C

Fuzzing loop
  Seed evolution (including syscall spec.)   9,394   Python
  QEMU-based fuzzing executor                5,878   Python
  Initramfs and the init program             2,527   Python
  Data race checker                          6,883   Python
  Debugging tools and utilities              1,096   Python

TABLE III: Implementation complexity of KRACE in terms of the LoC of the major components shown in Figure 9.

Runtime executor. The most challenging part of KRACE's implementation is to establish information-sharing channels between the host and the VM-based fuzzing instances for seed injection, coverage tracking, and feedback collection. KRACE uses private memory mapping (PCI memory bar), public memory mapping (ivshmem), and the 9p file sharing protocol for this purpose, as shown in Figure 10.

Kernel building. Building the Linux kernel with LLVM has been straightforward since kernel v5.3 and LLVM 9.0. In addition, to get the smallest possible boot time, we opt for a minimal kernel build with only the necessary components enabled, including the block layer, the loopback device, and all other related drivers needed to support and accelerate execution in QEMU and KVM. File systems are built as modules, not built-in, and these modules are loaded by our fuzzing agent (i.e., the init program) so that we can track the modules in full, including the threads they fork on loading and their synchronization orders.

Initramfs. Again, to shorten the execution time, KRACE does not rely on full-blown OSes, not even tools like busybox, as they may interfere with the file system under test. Instead, the init program in KRACE is the fuzzing agent that takes the multi-threaded seed and interprets it. In particular, the init 1) starts tracing, 2) loads the file system modules, 3) mounts the file system image, 4) interprets the program, 5) unmounts the file system image, 6) unloads the modules, and 7) stops tracing.

Coverage tracking. Coverage tracking is handled by the instrumented code, which is essentially stub calls, e.g., on_basic_block_enter, on_memory_read, etc., into the KRACE runtime library. KRACE directly updates the coverage bitmaps maintained in host memory regions that are globally visible to all VM instances (and their threads). Effectively, each update is a test_and_set_bit operation, and the QEMU ivshmem protocol ensures atomicity.

Execution log. An execution log is simply an array of [<event-type>, <thread-id>, <arg1>, <arg2>, ...] entries filled by the KRACE runtime library and consumed by the data race checker for data race detection as well as for reporting purposes such as call trace reconstruction.
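The record layout described above can be sketched as a fixed-width C struct appended to a flat array. This is an assumption-laden illustration, not KRACE's actual layout: the field widths, the two-argument cap, and all names here are ours.

```c
#include <assert.h>
#include <stdint.h>

#define LOG_CAPACITY 1024  /* illustrative; the real buffer is larger */
#define LOG_MAX_ARGS 2

/* One [<event-type>, <thread-id>, <arg1>, <arg2>] record. */
struct log_event {
    uint32_t type;                /* e.g., memory read/write, lock op */
    uint32_t thread_id;
    uint64_t args[LOG_MAX_ARGS];  /* e.g., accessed address, code location */
};

static struct log_event g_log[LOG_CAPACITY];
static int g_log_len;

/* Append one event; returns its index, or -1 if the log is full
 * (a real runtime would flush or resize instead of failing). */
static int log_append(uint32_t type, uint32_t tid,
                      uint64_t a1, uint64_t a2)
{
    if (g_log_len >= LOG_CAPACITY)
        return -1;
    g_log[g_log_len] = (struct log_event){ type, tid, { a1, a2 } };
    return g_log_len++;
}
```

Because every record carries a thread id, the checker can replay the array sequentially, rebuild per-thread call traces, and feed the memory-access events into the lockset and happens-before analyses.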

E. A taste of the happens-before complexity in actual execution


[Figure: the happens-before graph recovered from one fuzzing execution. The rendering did not survive text extraction; only thousands of opaque node identifiers remained, and they are omitted here. The graph's density is the point of this section: even a single run of a multi-threaded syscall sequence produces an intricate web of ordering edges.]

65590-1-1418

65590-2-0

65590-1-1685

65590-3-2598

65590-2-9015

65590-4-1470

65590-4-1533

65590-5-0

65590-4-1726

65590-1-1810

65590-3-2416

65612-3-0

65590-5-1065

65590-1-1749

65590-4-0

65590-3-0

65590-3-1955

65590-2-9217

65590-4-1381

65590-1-385

65590-1-606

65590-1-607

65613-0-0

65613-1-0

65590-2-8773

65590-2-8952

65612-2-0

65611-1-667

65611-4-0

65611-0-0

65611-3-667

65611-3-0

65611-5-0

65612-0-12786

65612-0-12927

65612-2-18323

65612-2-18647

65612-2-17880

65612-1-84

65612-2-18971

65612-0-13731

65612-0-13732

65612-0-10173

65612-0-10032

65613-1-280

65613-1-39765614-1-0

65613-1-398

65614-0-0

65615-3-0

65615-3-667

65615-6-683

65615-9-667 65615-8-0

65615-5-0

65615-1-667

65615-11-0

65615-11-667

65615-4-0

65615-13-0

65615-0-0

65615-12-0

65616-0-78

65616-0-41380

65616-0-41563

65616-0-41336

65616-0-41379

16777216-13-0

16777216-14-0

16777216-12-0

16777216-17-0

65621-0-41374

65621-0-41586

65621-0-78

65621-0-41330

65621-0-41373

16973824-56-0

16973824-63-0

16973824-54-0

16973824-57-0

16777216-11-130

16777216-10-36

16777216-14-122

16777216-10-35

16777216-19-35

16777216-19-36

16777216-13-36

16777216-8-115

16777216-7-36

16777216-7-35

16777216-20-112

16777216-13-35

16842752-15-149

16842752-14-321

16842752-49-236

16842752-0-321

16842752-26-321

16842752-40-792

16842752-40-793

16842752-64-35

16842752-62-117

16842752-46-275

16842752-45-677

16842752-17-149

16842752-16-321

16842752-64-36

16842752-20-320

16842752-20-321

16842752-56-35

16842752-22-0

16842752-22-320

16842752-32-320

16842752-32-321

16842752-27-149

16842752-65-107

16842752-19-149

16842752-34-0

16842752-34-552

16842752-16-0

16842752-16-320

16842752-29-149

16842752-28-321

16842752-48-1174

16842752-24-320

16842752-24-321

16842752-47-0

16842752-47-688

16842752-18-320

16842752-18-321

16842752-44-158

16842752-26-0

16842752-26-320

16842752-54-117

16842752-53-36

16842752-47-1009

16842752-41-158

16842752-7-149

16842752-4-0

16842752-4-320

16842752-61-36

16842752-12-320

16842752-12-321

16842752-1-149

16842752-6-320

16842752-6-321

16842752-43-711

16842752-43-712

16842752-14-0

16842752-14-320

16842752-45-1266

16842752-11-149

16842752-8-0

16842752-8-320

16842752-48-620

16842752-48-621

16842752-10-321

16842752-53-35

16842752-4-321

16842752-10-320

16842752-30-321

16842752-13-149

16842752-23-149

16842752-22-321

16842752-57-116

16842752-8-321

16842752-2-321

16842752-20-0

16842752-25-149

16842752-28-320

16842752-35-0

16842752-35-552

16842752-56-36

16842752-32-0

16842752-30-0

16842752-30-320

16842752-45-0

16842752-45-676

16842752-42-0

16842752-24-0

16842752-0-320

16842752-5-149

16842752-2-0

16842752-2-320

16842752-12-0

16842752-9-149

16842752-48-0

16842752-61-35

16842752-39-0

16842752-21-139

16842752-18-0

16842752-31-149

16842752-33-149

16842752-28-0

16842752-43-0

16842752-40-0

16842752-6-0

16842752-3-149

16842752-0-0

16842752-10-0

16908288-58-1741

16908288-58-2666

16908288-67-115

16908288-26-321

16908288-64-107

16908288-63-36

16908288-69-36

16908288-70-105

16908288-59-96

16908288-56-49

16908288-22-320

16908288-32-320

16908288-32-321

16908288-37-157

16908288-19-160

16908288-34-320

16908288-16-552

16908288-5-98

16908288-5-99

16908288-24-320

16908288-24-321

16908288-61-96

16908288-58-2667

16908288-29-147

16908288-20-357

16908288-58-836

16908288-58-1740

16908288-2-490

16908288-2-491

16908288-26-320

16908288-41-147

16908288-7-361

16908288-40-321

16908288-14-516

16908288-27-157

16908288-28-321

16908288-54-1306

16908288-18-326

16908288-40-320

16908288-10-160

16908288-9-326

16908288-11-612

16908288-66-36

16908288-38-321

16908288-8-244

16908288-54-1305

16908288-75-35

16908288-75-36

16908288-30-321

16908288-7-362

16908288-63-35

16908288-21-241

16908288-60-96

16908288-36-320

16908288-36-321

16908288-57-0

16908288-23-147

16908288-25-147

16908288-3-201

16908288-22-321

16908288-38-320

16908288-20-356

16908288-28-320

16908288-35-147

16908288-17-552

16908288-30-320

16908288-34-321

16908288-54-726

16908288-15-516

16908288-39-157

16908288-6-135

16908288-58-835

16908288-9-325

16908288-69-35

16908288-54-725

16908288-55-96

16908288-66-35

16908288-18-325

16908288-76-117

16908288-58-0

16908288-33-147

16908288-31-147

16908288-13-516

16973824-61-17

16973824-67-0

16973824-66-134

16973824-65-36

16973824-84-102

16973824-77-35

16973824-77-36

16973824-48-112

16973824-47-36

16973824-28-642

16973824-28-984

16973824-59-35

16973824-25-158

16973824-24-672

16973824-56-35

16973824-22-158

16973824-5-96

16973824-4-712

16973824-4-711

16973824-59-36

16973824-37-32047

16973824-30-337

16973824-47-35

16973824-29-0

16973824-29-1044

16973824-4-1726

16973824-44-107

16973824-26-0

16973824-26-506

16973824-12-152

16973824-11-337

16973824-7-0

16973824-7-506

16973824-57-117

16973824-56-36

16973824-1-711

16973824-1-712

16973824-86-98

16973824-4-0

16973824-10-1044

16973824-1-0

16973824-8-149

16973824-7-507

16973824-26-507

16973824-26-1016

16973824-11-0

16973824-11-336

16973824-51-36

16973824-43-36

16973824-6-96

16973824-53-0

16973824-53-11397

16973824-65-35

16973824-7-1022

16973824-78-144

16973824-38-35

16973824-38-36

16973824-60-116

16973824-75-98

16973824-86-99

16973824-23-0

16973824-20-0

16973824-75-99

16973824-30-0

16973824-30-336

16973824-4-1725

16973824-27-132

16973824-2-99

16973824-24-0

16973824-24-671

16973824-61-0

16973824-87-146

16973824-40-0

16973824-39-116

16973824-9-0

16973824-9-640

16973824-21-671

16973824-21-672

16973824-51-35

16973824-76-145

16973824-46-8525

16973824-21-0

16973824-9-980

16973824-31-135

16973824-46-0

16973824-28-0

16973824-43-35

16973824-40-12837

16973824-3-0

16973824-0-0

16973824-52-105

16973824-10-0

33619968-81-32

33619968-96-32

33619968-136-32

33619968-91-32

33619968-181-32

33619968-131-32

33619968-106-32

33619968-146-32

33619968-121-32

33619968-161-32

33619968-193-32

33619968-191-32

33619968-141-32

33619968-116-32

33619968-156-32

33619968-171-32

33619968-151-32

33619968-126-32

33619968-166-32

33619968-195-32

33619968-86-32

33619968-101-32

33619968-111-32

33619968-197-32

33619968-176-32

33619968-186-32

33685504-2-32

33685504-1-32

33751040-14-32

33751040-10-32

33751040-15-32

33751040-4-32

33751040-11-32

33751040-5-32

33751040-8-32

33751040-2-32

33751040-9-32

33751040-3-32

33751040-12-32

33751040-6-32

33751040-13-32

33751040-16-32

33751040-7-32

33751040-1-32

Fig. 18: A taste of the happens-before relation tracking in btrfs file system. This snippet is only around 10% of the actualhappens-before graph tracked in this execution. Each node in the graph is a synchronization point represented by a three-tuple<thread id, context id, instruction id> and the directed edge between two nodes A and B means A happens-before B.
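The graph structure described in the caption can be sketched as follows. This is a minimal illustration, not KRACE's actual implementation: the `Node` tuple mirrors the <thread id, context id, instruction id> representation, and `happens_before` checks ordering by reachability, since the happens-before relation is transitive over the recorded edges.

```python
from collections import namedtuple, defaultdict

# Hypothetical sketch of the graph in Fig. 18: each node is the
# three-tuple <thread id, context id, instruction id>.
Node = namedtuple("Node", ["thread_id", "context_id", "instruction_id"])

class HappensBeforeGraph:
    """Directed graph; an edge A -> B records that A happens-before B."""

    def __init__(self):
        self.edges = defaultdict(set)

    def add_edge(self, a: Node, b: Node) -> None:
        self.edges[a].add(b)

    def happens_before(self, a: Node, b: Node) -> bool:
        # Happens-before is transitive, so A happens-before B iff
        # B is reachable from A; a simple DFS suffices here.
        stack, seen = list(self.edges[a]), set()
        while stack:
            n = stack.pop()
            if n == b:
                return True
            if n not in seen:
                seen.add(n)
                stack.extend(self.edges[n])
        return False
```

Two memory accesses that are unordered in such a graph (neither `happens_before(a, b)` nor `happens_before(b, a)`) are the candidates a happens-before race detector inspects further.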
