Logical Concurrency Control From Sequential Proofs
By: Deshmukh, Ramalingam, Ranganath and Vaswani
Presented by: Omer Toledano
Overview
Using sequential proofs to develop locking schemes for concurrency control.
Improving the scheme to achieve linearizability.
Example – Compute with Cache
Assume we have a function f that we are trying to compute. f is computationally intensive, so we cache the last result.
Example – Compute with Cache
Specification: we want a function called "compute" that returns f(num).
The implementation of "compute" will cache the last result to improve performance.
Example – Code

int lastNum = 0;
int lastRes = f(0);

/* @return f(num) */
int Compute(int num) {
    int res;
    if (lastNum == num) {
        res = lastRes;
    } else {
        res = f(num);
        lastNum = num;
        lastRes = res;
    }
    return res;
}
Proof Model
Proving the Specification – True Branch

int lastNum = 0;
int lastRes = f(0);

/* @return f(num) */
int Compute(int num) {
    int res;
    // lastRes == f(lastNum)
    if (lastNum == num) {
        // lastRes == f(lastNum) && lastNum == num
        res = lastRes;
        // lastRes == f(lastNum) && lastNum == num && res == lastRes
    } else { … }
    // res == f(num) && lastRes == f(lastNum)
    return res;
}
Proving the Specification – False Branch

int lastNum = 0;
int lastRes = f(0);

/* @return f(num) */
int Compute(int num) {
    int res;
    // lastRes == f(lastNum)
    if (lastNum == num) {
        …
    } else {
        // lastRes == f(lastNum) && lastNum != num
        res = f(num);
        // res == f(num)
        lastNum = num;
        // res == f(num) && lastNum == num
        lastRes = res;
        // res == f(num) && lastRes == res && lastNum == num
    }
    // res == f(num) && lastRes == f(lastNum)
    return res;
}
Is this function thread safe? No!

Consider one thread calling Compute(5) twice, with another thread calling Compute(7) in between.
Consider the interleaving: Compute(5), then Compute(7), then Compute(5) again.

int Compute(5) {
    int res;
    // lastRes == f(lastNum)
    if (lastNum == num) {
        // lastRes == f(lastNum) && lastNum == num
        // --- here the other thread runs Compute(7):
        //     res = f(7); lastNum = 7; lastRes = f(7)
        res = lastRes;
        // res == f(7), not f(5)!
    } else { … }
    return res;
}
In this scenario the result of the second Compute(5) would be wrong!
How would you fix that?

int Compute(int num) {
    int res;
    // acquire(l)
    if (lastNum == num) {
        res = lastRes;
    } else {
        // release(l)
        res = f(num);
        // acquire(l)
        lastNum = num;
        lastRes = res;
    }
    // release(l)
    return res;
}
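The fix can be sketched in runnable Python, with a threading.Lock standing in for acquire(l)/release(l); f and the cache variables are illustrative stand-ins:

```python
import threading

def f(num):
    return num * num      # stand-in for the expensive function

last_num = 0
last_res = f(0)
lock = threading.Lock()

def compute(num):
    """Thread-safe variant: the cached pair is only read or written
    while holding the lock; f(num) itself runs outside the lock."""
    global last_num, last_res
    lock.acquire()
    if last_num == num:
        res = last_res    # read the cached pair under the lock
    else:
        lock.release()    # don't hold the lock during the long call
        res = f(num)
        lock.acquire()
        last_num = num
        last_res = res
    lock.release()
    return res
```

Note that a thread may recompute f(num) redundantly, but every call still returns f(num).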
What changed in the concurrent setting?
At every step we asserted a set of predicates based on the precondition and the current command.
In the concurrent setting we saw that some of these predicates were invalidated while the command executed, which yielded a wrong answer.
Goals
We want a way to transform sequentially correct code into concurrently correct code using the same proof.
Motivation
It is much easier to write a correct sequential program than a correct concurrent one, so we would like to automate the "thread proofing" process.
Sequential proofs can also shed light on the "true" critical sections and on what makes them critical (predicate invalidation), hopefully yielding smaller critical sections.
Algorithm – Idea
Define a set of locks that correspond to the predicates generated by the sequential proof.
Think of the program as a graph where each vertex is the conjunction of predicates required at that point of the program, and the edges are program commands.
Algorithm – Idea (cont.)
Assume we are at some point of the program, with two vertices u, v and an edge e = (u, v).
We acquire all the locks corresponding to predicates that are new at v.
We release every lock whose predicate is no longer needed at v.
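This per-edge bookkeeping can be sketched with plain set operations (an illustrative simplification, not the paper's exact rule; the predicate strings are taken from the running example):

```python
def locks_for_edge(preds_u, preds_v):
    """For an edge e = (u, v), return (acquire, release):
    locks for predicates that are new at v, and locks for
    predicates at u that are no longer needed at v."""
    acquire = preds_v - preds_u
    release = preds_u - preds_v
    return acquire, release

preds_u = {"lastRes == f(lastNum)", "lastNum == num"}
preds_v = {"lastRes == f(lastNum)", "lastNum == num", "res == lastRes"}
acquire, release = locks_for_edge(preds_u, preds_v)
# acquire holds only the lock for the new predicate "res == lastRes"
```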
Algorithm – Example

int Compute(int num) {
    int res;
    // lastRes == f(lastNum)
    if (lastNum == num) {
(u)     // lastRes == f(lastNum) && lastNum == num
(e)     res = lastRes;
(v)     // lastRes == f(lastNum) && lastNum == num && res == lastRes

At v only one predicate is added (res == lastRes), so we must take its lock before executing the command e:

(u)     // lastRes == f(lastNum) && lastNum == num
(e)     /* acquire(l: res == lastRes) */ res = lastRes;
(v)     // lastRes == f(lastNum) && lastNum == num && res == lastRes
Correctness of the Algorithm
Input: a library L with embedded assertions satisfied by all sequential executions of L.
Output: a library L' obtained by augmenting L with concurrency control, such that every execution of L' is "safe".
Definitions
Proof
Proof – Cont.
Is that enough? No! What about deadlocks?

A deadlock can happen when:
1. While holding a lock on p, one thread tries to take the lock on q.
2. Another thread, holding the lock on q, tries to take the lock on p.
3. Neither can proceed, since each already holds the lock the other needs.

To solve this, we define an equivalence relation that merges all such locks into one merged lock.
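One simple way to realize this merging is a union-find structure over lock names (an illustrative sketch, not the paper's exact construction):

```python
class LockMerger:
    """Union-find over lock names: locks that could form a circular
    wait are placed in one equivalence class, and threads acquire the
    single representative lock of that class instead."""

    def __init__(self):
        self.parent = {}

    def find(self, lock):
        self.parent.setdefault(lock, lock)
        while self.parent[lock] != lock:
            # path halving keeps later lookups cheap
            self.parent[lock] = self.parent[self.parent[lock]]
            lock = self.parent[lock]
        return lock

    def merge(self, a, b):
        root_a, root_b = self.find(a), self.find(b)
        if root_a != root_b:
            self.parent[root_a] = root_b

merger = LockMerger()
# p is held while waiting for q, and q while waiting for p:
merger.merge("l:p", "l:q")
```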
Algorithm – Are All Locks Necessary?

int Compute(int num) {
    int res;
    // acquire(l: lastRes == f(lastNum))
    // lastRes == f(lastNum)
    if (lastNum == num) {
        // acquire(l: lastNum == num)
        // lastRes == f(lastNum) && lastNum == num
        res = lastRes;
    } else {
        …

The second lock is redundant: it is always acquired while another lock is already held, and released when that lock is released.
Optimizations
As the last slide showed, the algorithm can introduce redundant locking, e.g. generate a lock l that is always held whenever a lock q is held.
Also, if a predicate is never invalidated, its lock need not be acquired before executing commands.
Optimizations (cont.)
Use read-write locks: when a thread wants to "preserve" a predicate, it can acquire a read lock (shared with other threads).
If it wants to invalidate the predicate, it needs to acquire a "write" lock.
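Python's standard library has no read-write lock, so here is a minimal sketch built on threading.Condition (illustrative only; it admits writer starvation):

```python
import threading

class RWLock:
    """Minimal reader-writer lock: many readers may hold it at once to
    'preserve' a predicate; a writer gets exclusive access so it may
    invalidate the predicate."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers > 0:
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```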
Another problem?

int x = 0;

/* increments x and returns the new value */
int Increment() {
    int tmp;
    // x == x_in
    tmp = x;
    tmp = tmp + 1;
    // about to invalidate x == x_in
    x = tmp;
    return tmp;
}
With the per-statement locking scheme:

int x = 0;

int Increment() {
    int tmp;
    // acquire(l)
    tmp = x;
    // release(l)
    tmp = tmp + 1;
    // acquire(l)
    x = tmp;
    // release(l)
    return tmp;
}
What can happen?
Increment() – returns 1
Increment() – returns 1
After both increments, x equals one.
In general we can have "dirty reads" and "lost updates".
Improvement
We change the locking scheme to solve the previous problem: if some execution path starting at a program point will later falsify a predicate, we acquire that predicate's lock at that point too.

int x = 0;

int Increment() {
    int tmp;
    // acquire(l)
    tmp = x;
    tmp = tmp + 1;
    x = tmp;
    // release(l)
    return tmp;
}
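A runnable Python version of the improved scheme; because the lock is now held across the whole read-modify-write, concurrent increments can no longer lose updates:

```python
import threading

x = 0
lock = threading.Lock()

def increment():
    """Increment x and return the new value; the lock covers the
    entire read-modify-write, so no update can be lost."""
    global x
    with lock:
        tmp = x
        tmp = tmp + 1
        x = tmp
    return tmp

threads = [threading.Thread(target=increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# after 100 concurrent increments, x is exactly 100
```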
Is that enough? What about return values?

Concurrently, IncX() can return (1, 1) and IncY() can also return (1, 1):

int x = 0, y = 0;

IncX() {
    // acquire(l: x == x_in)
    x = x + 1;
    (ret11, ret12) = (x, y);
    // release(l: x == x_in)
}

IncY() {
    // acquire(l: y == y_in)
    y = y + 1;
    (ret21, ret22) = (x, y);
    // release(l: y == y_in)
}
This is not linearizable: in any sequential order of the two calls, the snapshots returned by the two calls would differ, yet here both return (1, 1).
Solution
We must determine whether the execution of a statement s can potentially affect the return value of another procedure invocation.
We do so by computing whether a statement s can break some procedure's return value, and lock accordingly.
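For the IncX/IncY example, one illustrative fix (coarser than what the paper's analysis would produce) is to note that the snapshot (x, y) is a return value that the other procedure's increment can affect, so each procedure takes both locks, in a fixed order, around its increment and snapshot:

```python
import threading

x, y = 0, 0
lock_x = threading.Lock()   # protects x == x_in
lock_y = threading.Lock()   # protects y == y_in

def inc_x():
    global x
    with lock_x, lock_y:    # fixed order lock_x -> lock_y avoids deadlock
        x = x + 1
        return (x, y)       # snapshot taken while both values are stable

def inc_y():
    global y
    with lock_x, lock_y:
        y = y + 1
        return (x, y)
```

With this scheme the two calls can never both return (1, 1): whichever increment is serialized second observes the first one's effect.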
Results
On real-world examples and benchmarks, the generated programs achieved the same or better results than human-written synchronization.
The improvement came from introducing more locks, which helped minimize the critical sections and separate them under different locks.
Results (cont.)
In the last section, the authors present an extension that guarantees linearizability with respect to a sequential specification, a weaker requirement than atomicity that permits more concurrency.
This achieves linearizability without two-phase locking.
Conclusions
This algorithm helps automate the "thread proofing" process and achieves good results.
It gives us a better understanding of the root cause of critical sections, and lets us separate them under different locks for more concurrency.
Conclusions (cont.)
The logical point of view also helped us understand which invariants need to be preserved.
Questions?