


Proceedings on Privacy Enhancing Technologies ; 2020 (4):434–460

Nathan Manohar*, Abhishek Jain, and Amit Sahai

Self-Processing Private Sensor Data via Garbled Encryption

Abstract: We introduce garbled encryption, a relaxation of secret-key multi-input functional encryption (MiFE) where a function key can be used to jointly compute upon only a particular subset of all possible tuples of ciphertexts. We construct garbled encryption for general functionalities based on one-way functions.
We show that garbled encryption can be used to build a self-processing private sensor data system where, after a one-time trusted setup phase, sensors deployed in the field can periodically broadcast encrypted readings of private data that can be computed upon by anyone holding function keys to learn processed output, without any interaction. Such a system can be used to periodically check, e.g., whether a cluster of servers is in an “alarm” state.
We implement our garbled encryption scheme and find that it performs quite well, with function evaluations in the microseconds. The performance of our scheme was tested on a standard commodity laptop.

Keywords: Secure computation, Functional encryption, Garbled circuits

DOI 10.2478/popets-2020-0081
Received 2020-02-29; revised 2020-06-15; accepted 2020-06-16.

*Corresponding Author: Nathan Manohar: UCLA, E-mail: [email protected]
Abhishek Jain: Johns Hopkins University, E-mail: [email protected]
Amit Sahai: UCLA, E-mail: [email protected]

1 Introduction

Self-Processing Private Sensor Data. Suppose a coalition of countries (Coalition A) negotiates a 25-year treaty with Country B, which proposes the deployment of a set of trusted sensors in Country B, where each sensor provides an hourly readout of radiation levels. However, to protect its privacy, Country B is unwilling to have all raw sensor readings transmitted in their entirety to Coalition A. Country B is only willing to allow Coalition A to learn some processed form of these sensor readings – for example, to learn the maximum radiation level observed at any of 16 sensor locations. Furthermore, Coalition A and Country B are willing to execute a once-and-for-all ceremony to build these trusted sensors. Indeed, since the sensors are creating the “inputs” to our system, it is crucial that they be built in a way that guarantees trust. This could involve physical protections, as well as computer security tools like secure multi-party computation. But after this ceremony, and after the trusted sensors are deployed, how can we ensure that Coalition A, in order to protect Country B’s privacy, learns only the hourly maximum radiation values, and not more?

The most straightforward solution would be to trust a third party to honestly process the sensor readings. However, even if a third party seems trustworthy when the treaty is signed, it may be unreasonable to assume that such trust would be maintained over the 25-year span of the treaty. A more complex solution would be to have the trusted sensors communicate among themselves to compute the maximum every hour and only transmit this processed information. But this would require each trusted sensor to be able to reliably receive communication from other sensors; this may be infeasible (especially over long durations of time) or simply undesirable, as being able to receive communication opens up each sensor to new attack vectors. In this work, we ask: is it possible to build practical solutions to this problem where each trusted sensor is only required to be able to broadcast a short message? Somewhat surprisingly, we show that this is achievable without any communication between sensors, where the sensors are only required to store an AES key and broadcast several AES evaluations each time step.

Hardware Assumptions for Sensors vs. Hardware Assumptions for Receivers. Because sensors would need to be physically deployed to collect readings, some level of physical trust and verification is essential for both sensor hardware and deployment. This is reasonable because Country B, as the country being monitored, would be subject to inspections that would verify (only!) that sensors have not been tampered with and remain correctly deployed. However, it is not reasonable to assume any kind of physical trust for the receivers that should obtain only processed sensor readings: Indeed, there may be many users – hundreds or even millions of them – that need to be able to monitor processed sensor readings. It would be undesirable and perhaps infeasible to have inspectors from Country B verify that receiver hardware has not been tampered with. Such “reverse inspections” – where the target country inspects the monitoring countries – would also likely be politically problematic. For these reasons, we seek solutions to this problem that do not involve tamper-proof hardware assumptions for receivers.

Real World Applicability. Although the above discussion dealt with a hypothetical treaty scenario, such situations arise in the real world. The Iran nuclear deal, for example, requires that Iran “allow in international inspectors” [7] to directly observe the state of affairs on the ground, including obtaining raw measurements, a prospect that it would undoubtedly prefer to avoid. International environmental treaties, such as the Basel Convention [1] and the Kyoto Protocol [2], include commitments from many nations to limit their pollution and other environmentally harmful activities, but without nations agreeing to direct monitoring of raw data, it may be difficult to ascertain the extent to which a country is or is not following the treaty. Such issues are not only applicable to agreements between nations, but also extend to situations between governments and companies as well. After the Volkswagen emissions scandal [6], a country may wish to sanction Volkswagen in some manner and have a guarantee that its cars meet the emissions standards. A sensor network could be deployed to ensure that Volkswagen was complying with emissions regulations, while simultaneously ensuring that the sanctioning government does not gain access to details of Volkswagen’s operating procedures that it wishes to remain secret.

An Impractical Theoretical Solution. A theoretical solution to this problem can be obtained via the notion of multi-input functional encryption (MiFE) [19]. In a MiFE scheme, parties can query a key generation authority for multi-input function keys that can be used to jointly evaluate the function on a tuple of encrypted plaintexts. MiFE generalizes the notion of (single-input) functional encryption [13, 29, 31] and can be viewed as a non-interactive analogue of secure multiparty computation [18, 33]. If such a primitive could be practically realized, then in the above scenario, Coalition A and Country B could generate a multi-input function key that will output the maximum radiation level observed in a given hour given appropriate ciphertexts as input.

Fig. 1. The two steps of a self-processing sensor data scheme. First, there is the trusted setup, after which A has function key data and B has sensors. Once B deploys the sensors, any user U with access to the function key data can monitor the sensor broadcasts and learn the function evaluations.

The deployed sensors could then encrypt their observed radiation levels along with the time and broadcast these ciphertexts. Using the MiFE function key, it would be possible for Coalition A to learn only the hourly maximum radiation level and nothing more.

Unfortunately, despite extensive research, achieving full security for MiFE while maintaining usable efficiency has remained quite elusive. In fact, a construction of secret-key MiFE that supports an unbounded number of ciphertexts would imply indistinguishability obfuscation (iO)¹ [19], which is only currently known through heavy-duty cryptography and nonstandard assumptions. Furthermore, known constructions of secret-key MiFE for bounded ciphertexts from standard assumptions [4, 14] require non-black-box use of cryptographic primitives and are therefore unexplored with respect to implementability and efficiency. Constructions of secret-key MiFE from nonstandard multilinear maps [25] take 4 minutes running on a 32-core server for function evaluations on 4.7 GB ciphertexts, while only achieving security with respect to security parameter λ = 80. As a result, despite its immense potential, MiFE has so far had limited impact on the security world and does not give a practical solution to the proposed problem.

Garbled Encryption. In this work, we give a practical solution to this problem by introducing, constructing, and implementing a new primitive called garbled encryption. Garbled encryption is built from one-way functions (in particular, AES) and can support function evaluations in a few microseconds on modest hardware, while achieving security with respect to the standard security parameter value of λ = 128.

¹ Even a construction of secret-key MiFE that supports only a bounded number of ciphertexts in each “position,” but an exponential number of ciphertext combinations, implies iO.

Our starting observation is that, in practice, one might not always need the full power of secret-key MiFE. For example, a function key in secret-key MiFE needs to be compatible with all ciphertexts, but there are many natural scenarios where it may only be necessary to evaluate a function on some of the ciphertexts. To illustrate this, in the treaty problem, we only need the function key to allow for evaluation on ciphertexts all encrypted during the same hour. In fact, using MiFE is actually disadvantageous in this instance, since the function key will be inherently compatible with all ciphertexts, which would require the sensors to encrypt not only the sensor reading, but also the time of the reading, and the function key to check that the times of all the ciphertexts are equal.

In order to determine which ciphertexts a function key is compatible with, it is necessary that the ciphertexts be labelled. Therefore, in garbled encryption, every ciphertext has an associated index, and encryptors are required to satisfy a promise that no two ciphertexts have the same index. Instead of querying a key generation (keygen) authority for a key corresponding to a function, in garbled encryption, a party queries the keygen authority for a key corresponding to a function and a tuple of indices. The function key issued by the keygen authority can then be used to evaluate the function on the tuple of ciphertexts corresponding to those indices. By introducing this relaxation, we are able to achieve garbled encryption that supports an unbounded number of ciphertexts and function keys, using only OWFs (in particular, AES).
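To make the index restriction concrete, here is a toy sketch of the interface shape. All names are hypothetical, plaintexts stand in for ciphertexts, and the restriction is enforced by a runtime check; the actual scheme enforces it cryptographically via garbled circuits.

```python
class ToyGarbledEncryption:
    """Toy model of the garbled-encryption interface: a function key is
    bound to a fixed tuple of ciphertext indices. Illustrative only --
    no cryptography is performed here."""

    def __init__(self):
        self.used = set()    # indices already consumed (index dependence)
        self.store = {}      # index -> plaintext (stands in for a ciphertext)

    def enc(self, index, msg):
        if index in self.used:   # the encryptor's promise: indices never repeat
            raise ValueError("index reused")
        self.used.add(index)
        self.store[index] = msg

    def keygen(self, f, indices):
        # The issued key works only on the ciphertexts at `indices`.
        return (f, tuple(indices))

    def dec(self, fk, indices):
        f, allowed = fk
        if tuple(indices) != allowed:
            raise ValueError("function key not issued for these indices")
        return f(*(self.store[i] for i in indices))

ge = ToyGarbledEncryption()
for i, reading in enumerate([3, 7, 5]):
    ge.enc(i, reading)
fk = ge.keygen(max, [0, 1, 2])
print(ge.dec(fk, [0, 1, 2]))   # maximum over the permitted tuple -> 7
```

In the treaty scenario, one key per hour would be issued for exactly the tuple of indices the sensors will use during that hour, so no "same hour" check needs to be encrypted alongside the readings.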

From Garbled Encryption to Self-Processing Private Sensor Data. Using our new primitive, garbled encryption, we are able to obtain a practical solution to the self-processing private sensor data problem. After Coalition A and Country B agree on the treaty, they execute a one-time trusted setup that generates all the garbled encryption function keys needed for the duration of the treaty and provides sensors to be deployed in Country B. After these sensors have been appropriately deployed, they will periodically broadcast ciphertexts that can be used by Coalition A to learn the output of the desired functionality (for example, the hourly maximum radiation level). Note that once the sensors have been deployed, no additional work is needed on the part of Coalition A or Country B to keep the system running (they can go “offline”). In fact, Coalition A can make all the function key information public, and any interested user could monitor the system and obtain the function evaluation results. Furthermore, the ciphertexts in our construction will correspond to several pseudorandom function (PRF) evaluations, implemented via AES, so the sensors can be computationally extremely weak and need only support the most basic of cryptographic tools. The functionalities (determined at setup) that are supported by the scheme are quite general. A function evaluation at time T′ may be on data generated at any time from setup until T′, with any number of sensor readings as input coming from as many or as few different sensors as desired.

1.1 Our Results

Garbled Encryption. We formally define garbled encryption and provide a construction from one-way functions (in particular, a PRF that can be instantiated using AES) that supports an unbounded number of ciphertexts. We, in fact, provide two constructions: a selectively secure scheme in the standard model, and an adaptively secure scheme in the random oracle model. In the adaptive security setting, the adversary can adaptively observe ciphertexts and function keys in any order, and thus realistically models scenarios where an adversary may observe broadcast ciphertexts before obtaining key material, and then observe additional broadcast ciphertexts afterwards. We also provide an implementation of our adaptively secure garbled encryption scheme.

A simplified version of our adaptively secure garbled encryption scheme also yields an adaptively secure garbling scheme [10] where the wire keys are the same size as in non-adaptive garbling schemes. To the best of our knowledge, such a garbling scheme was not known previously. In particular, all previously known adaptive garbling schemes required larger wire keys (even in the random oracle model). We believe that our adaptive garbling scheme with short wire keys may be applicable to other scenarios where such a property is desirable.

Our scheme requires the encryption algorithm to have a state in a manner which we call index dependence. This is the constraint that each ciphertext is labeled with an index and that no two ciphertexts share the same index. Since index dependence still supports parallelism, we don’t view this as a major limitation.



Table 1. Self-Processing Private Sensor Data via Garbled Encryption Performance Summary

Function   # Inputs   ℓ        GE.KeyGen   GE.Dec   |GE.ct|   |GE.sk|
DNF        64         1-bit    27 µs       14 µs    16 B      8.7 kB
Thresh     16         32-bit   523 µs      319 µs   512 B     42.8 kB
Max        16         32-bit   613 µs      385 µs   512 B     58.2 kB

* For all settings of parameters, running GE.Setup and GE.Enc took <1 µs, and therefore this information is omitted from the table.

However, by introducing this natural relaxation, we are able to achieve a practical construction of this primitive from standard assumptions.

Self-Processing Private Sensor Data and Implementation. Finally, we give evaluation results for our adaptively secure garbled encryption scheme. We find that it performs quite well, with decryptions in the tens to hundreds of microseconds when run on a standard commodity laptop. Table 1 gives some of our evaluation results for self-processing private sensor data via garbled encryption for a variety of functionalities. Based on our evaluation results, we observe that our garbled encryption scheme could be applied to an automated warning system on 64 sensors that lasts for 10 years and issues an all clear vs. anomaly detected reading every 10 minutes. In such a setup, we find that the trusted authority need only be online for 14 seconds and that users who wish to monitor the warning system can do so by downloading a few kilobytes of data and performing a 14 µs computation every 10 minutes. Furthermore, for Coalition A and Country B to realize their 25-year treaty, they would need to perform a 2.2 minute computation during the setup ceremony to generate the necessary garbled circuits, which, when handed over to Coalition A, would allow each country in Coalition A to learn the maximum radiation levels observed every hour for the next 25 years by performing a 385 µs computation every hour.
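The totals quoted above follow from the Table 1 timings by simple arithmetic; as a sanity check (assuming 365.25-day years, which reproduces the quoted figures):

```python
US = 1e-6  # one microsecond, in seconds

# 10-year warning system: 64-input DNF, one evaluation every 10 minutes.
steps_10yr = int(10 * 365.25 * 24 * 6)         # 525,960 time steps
setup_s = steps_10yr * 27 * US                 # GE.KeyGen: 27 us per step
monitor_per_step_s = 14 * US                   # GE.Dec: 14 us per step
print(f"warning-system setup: {setup_s:.0f} s")        # ~14 s

# 25-year treaty: 16-input 32-bit Max, one evaluation per hour.
steps_25yr = int(25 * 365.25 * 24)             # 219,150 hourly evaluations
treaty_setup_min = steps_25yr * 613 * US / 60  # GE.KeyGen: 613 us per step
print(f"treaty setup: {treaty_setup_min:.1f} min")     # ~2.2 min
```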

In summary, we believe that garbled encryption yields a practical solution to the self-processing private sensor data problem, as well as to other situations that require computing on encrypted data without compromising data privacy through leakage or the use of nonstandard assumptions.

Practical Deployment Challenges. Even though our performance results show the practicality of our self-processing private sensor data scheme, there are various challenges that one would need to address before being confident in deploying such a system. These include issues such as sensor failure, sensor compromise, and clock synchronization between sensors. We discuss how to address such challenges in Section 8.

2 System Setting and Threat Model

A self-processing private sensor data system consists of the following components that are created during a one-time trusted ceremony.
– Sensors: These are weak computational devices with small storage (enough to hold an AES key) that are deployed in the field. Their job is, at each time step, to take sensor readings, broadcast encryptions of the sensor readings, and update their stored key for the next time step. They do not communicate with each other or any other parties.
– Function Key Data: This information is needed by any user that wishes to monitor the sensor system. For scenarios where anybody is allowed to monitor the system, these data can be stored in a publicly accessible location. Observe that the function key data is generated before any sensor readings have taken place, and the sensors do not have access to the function key data.

The following entities make use of the self-processing private sensor data system.
– Private Data Owner: The private data owner appropriately places the sensors in the field. To protect their privacy, they are unwilling to divulge the sensor readings in the clear. However, they have agreed to allow monitoring users to learn various functions of the sensor readings.
– Monitoring Users: A monitoring user is any interested party with access to the function key data that wishes to monitor the sensor system. At each time step, they receive the encrypted broadcasts from the sensors and use these, along with the function key data, to learn predetermined functions of sensor readings, potentially taken during different time steps.

2.1 Threat Model

The goal of a self-processing private sensor data system is to allow monitoring users to learn agreed-upon functions of the sensor readings taken at various time steps, without violating the privacy of the data owner. An attacker is allowed to observe all broadcasts from sensors and is given all the function key data. Such an attacker should not be able to “learn anything” about the data owner’s private data beyond what they have agreed to disclose as part of the monitoring process. This notion is captured formally via an indistinguishability security game. We also allow an attacker to compromise sensors at any point during the lifetime of the system and learn the secret key stored on the sensor. In this event, we require that the past privacy of the data owner be upheld. In particular, the adversary should not be able to “learn anything” about previous sensor readings beyond what is allowed. We discuss how to handle deployment challenges outside the threat model in Sec. 8.
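One standard way to meet this past-privacy requirement is for each sensor to ratchet its stored key through a one-way function at every time step, matching the key-update step in the sensor description above. The following is a minimal illustration of the property (SHA-256 as the one-way function; an illustration of the idea, not the paper's actual update procedure):

```python
import hashlib

def ratchet(key: bytes) -> bytes:
    """Advance the sensor key one time step. The update is one-way, so a
    key captured at compromise time does not reveal earlier keys."""
    return hashlib.sha256(b"ratchet" + key).digest()

key = hashlib.sha256(b"initial sensor key").digest()
history = [key]
for _ in range(3):        # three time steps
    key = ratchet(key)
    history.append(key)

# An attacker who compromises the sensor now holds only `key`; recovering
# any earlier key in `history` would require inverting SHA-256.
print(all(k != key for k in history[:-1]))   # True: every past key differs
```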

3 Preliminaries

Throughout the paper, we denote the security parameter by λ. For an integer n ∈ N, we use [n] to denote the set {1, 2, . . . , n}. We use D_0 ≈_c D_1 to denote that the distributions D_0, D_1 are computationally indistinguishable. Let bin(n) denote the binary representation of n and bin(n)_i denote the i-th bit of the binary representation. We use the symbol O to denote random oracles and O^(t)(x) to mean O(O(. . . (x))), where O is applied t times.

3.1 Garbling Schemes

We recall the definition of garbling schemes [11, 33].

Definition 1 (Garbling Schemes [11, 33]). A garbling scheme GC = (Gen, GrbI, GrbC, EvalGC) defined for a class of circuits C consists of the following polynomial time algorithms:
– Setup, Gen(1^λ): On input security parameter λ, it generates the secret parameters gcsk.
– Generation of Wire Keys, GrbI(gcsk): On input secret parameters gcsk, it generates the wire keys ~k = (k_1, . . . , k_ℓ), where k_i = (k_i^0, k_i^1). (Throughout, we will use the terms wire keys and labels interchangeably when referring to keys given out for each input bit to a garbled circuit.)
– Garbled Circuit Generation, GrbC(gcsk, C, ~k): On input secret parameters gcsk, circuit C ∈ C, and wire keys ~k, it generates the garbled circuit C̃.
– Evaluation, EvalGC(C̃, (k_1^{x_1}, . . . , k_ℓ^{x_ℓ})): On input garbled circuit C̃ and wire keys (k_1^{x_1}, . . . , k_ℓ^{x_ℓ}), it generates the output out.

It satisfies the following properties:
– Correctness: For every circuit C ∈ C of input length ℓ, x ∈ {0, 1}^ℓ, and every security parameter λ ∈ N, it should hold that:

  Pr[ C(x) ← EvalGC(C̃, (k_1^{x_1}, . . . , k_ℓ^{x_ℓ})) :
      gcsk ← Gen(1^λ),
      ((k_1^0, k_1^1), . . . , (k_ℓ^0, k_ℓ^1)) ← GrbI(gcsk),
      C̃ ← GrbC(gcsk, C, ((k_1^0, k_1^1), . . . , (k_ℓ^0, k_ℓ^1))) ] = 1

– Security: There exists a PPT simulator SimGC such that the following holds for every circuit C ∈ C of input length ℓ and x ∈ {0, 1}^ℓ:

  (C̃, k_1^{x_1}, . . . , k_ℓ^{x_ℓ}) ≅_c SimGC(1^λ, φ(C), C(x)),

  where:
  – gcsk ← Gen(1^λ)
  – ((k_1^0, k_1^1), . . . , (k_ℓ^0, k_ℓ^1)) ← GrbI(gcsk)
  – C̃ ← GrbC(gcsk, C, ((k_1^0, k_1^1), . . . , (k_ℓ^0, k_ℓ^1)))
  – φ(C) is the topology of C.

Theorem 1 ([11, 26, 33]). Assuming the existence of one-way functions, there exists a secure garbling scheme GC where GrbI outputs wire keys by choosing uniformly random values over their domain. Furthermore, the scheme satisfies correctness regardless of the choice of wire keys provided that k_i^0 and k_i^1 are distinct for all i ∈ [ℓ].
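To fix intuition for readers new to garbling, here is a toy garbling of a single gate in the classic encrypt-the-output-key style. SHA-256 stands in for the PRF and a zero tag replaces the point-and-permute optimization; this illustrates the idea behind Theorem 1, not the exact construction of [11, 33].

```python
import hashlib, itertools, os, random

KLEN = 16  # wire-key length in bytes

def H(a, b):
    # Hash-based "encryption" pad, modeling a PRF/random oracle.
    return hashlib.sha256(a + b).digest()  # 32 bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble(gate):
    """Garble one 2-input gate. Returns the garbled table plus the
    input/output wire keys (index 0 encodes bit 0, index 1 encodes bit 1)."""
    ka = (os.urandom(KLEN), os.urandom(KLEN))
    kb = (os.urandom(KLEN), os.urandom(KLEN))
    kc = (os.urandom(KLEN), os.urandom(KLEN))
    table = []
    for x, y in itertools.product((0, 1), repeat=2):
        # Each row encrypts the correct output key, with a zero tag
        # appended so the evaluator can recognize a valid decryption.
        msg = kc[gate(x, y)] + b"\x00" * KLEN
        table.append(xor(H(ka[x], kb[y]), msg))
    random.shuffle(table)  # hide which row corresponds to which inputs
    return table, ka, kb, kc

def evaluate(table, wa, wb):
    """Evaluate with one key per input wire; only the matching row
    decrypts to a plaintext ending in the zero tag."""
    for row in table:
        pt = xor(H(wa, wb), row)
        if pt[KLEN:] == b"\x00" * KLEN:
            return pt[:KLEN]
    raise ValueError("no row decrypted")

table, ka, kb, kc = garble(lambda x, y: x & y)   # AND gate
out = evaluate(table, ka[1], kb[1])
print(out == kc[1])   # True: the key for AND(1, 1) = 1 is recovered
```

An evaluator holding exactly one key per input wire learns exactly one output key and nothing about the other rows, which is the behavior the simulator in Definition 1 formalizes.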

In our proof of selective security of garbled encryption, we actually need a stronger notion of security for garbling schemes, which we will refer to as chosen-wire key security. In this notion of security, the adversary is additionally allowed to choose the wire keys that correspond to its chosen input x in the garbled circuit.

Definition 2 (Chosen-Wire Key Security). A garbling scheme GC is said to be chosen-wire key secure if there exists a PPT simulator SimGC such that the following holds for every circuit C ∈ C of input length ℓ, x ∈ {0, 1}^ℓ, and wire keys k = (k_1, . . . , k_ℓ):

  (C̃, k) ≅_c (SimGC(1^λ, φ(C), C(x), k), k),

where:
– gcsk ← Gen(1^λ)
– ((k_1^0, k_1^1), . . . , (k_ℓ^0, k_ℓ^1)) ← GrbI(gcsk)
– C̃ ← GrbC(gcsk, C, ~k)
– φ(C) is the topology of C
and where ~k = (k_1, . . . , k_ℓ) with k_i = (k_i, k_i^1) if x_i = 0 and k_i = (k_i^0, k_i) if x_i = 1.

Note that in the above definition, the wire keys corresponding to the input x are fixed ahead of time and the other wire keys (those corresponding to the bitwise negation of x) are generated by GrbI.

Theorem 2. Assuming the existence of one-way functions, there exists a chosen-wire key secure garbling scheme GC where GrbI outputs wire keys by choosing uniformly random values over their domain. Furthermore, the scheme satisfies correctness regardless of the choice of wire keys provided that k_i^0 and k_i^1 are distinct for all i ∈ [ℓ].

We give a proof of Thm. 2 in Appendix B.1.

3.2 Adaptive Garbling Schemes

In order to obtain adaptively secure garbled encryption, we will need to make use of adaptively secure garbling schemes. Intuitively, an adaptively secure garbling scheme is secure even if an adversary chooses the input to evaluate the circuit on after it sees the garbled circuit. We require a strong notion of an adaptively secure garbling scheme (referred to as “fine-grained adaptive security” [10]) where the adversary may submit input queries bit by bit and choose future input queries after it receives some of the input labels.

Definition 3 (Adaptively Secure Garbling Schemes). A garbling scheme GC for circuit class C is adaptively secure if for any PPT adversary A, there exists a PPT simulator SimGC and a negligible function µ(·) such that for all sufficiently large λ ∈ N, the advantage of A is

  Adv_A^GC = | Pr[Expt_A^GC(1^λ, 0) = 1] − Pr[Expt_A^GC(1^λ, 1) = 1] | ≤ µ(λ),

where for each b ∈ {0, 1} and λ ∈ N, the experiment Expt_A^GC(1^λ, b) is defined below:
1. Circuit query: A submits a circuit query C ∈ C to the challenger Chal. Let ℓ denote the length of the input to C.
2. If b = 0, Chal computes
   – gcsk ← Gen(1^λ)
   – ((k_1^0, k_1^1), . . . , (k_ℓ^0, k_ℓ^1)) ← GrbI(gcsk)
   – C̃ ← GrbC(gcsk, C, ((k_1^0, k_1^1), . . . , (k_ℓ^0, k_ℓ^1)))
   and sends C̃ to A.
   If b = 1, Chal computes
   – C̃ ← SimGC(1^λ, φ(C)), where φ(C) is the topology of C,
   and sends C̃ to A.
3. Input queries: The following is repeated at most ℓ times. A submits an index i and a bit value v to the challenger. Chal keeps a set of queried indices S, initially set to ∅, and a value y, initially set to ⊥. If i ∉ [ℓ]\S, Chal returns ⊥. Else, Chal sets x_i to v and sets S to S ∪ {i}. If |S| = ℓ, then Chal sets x = x_1 . . . x_ℓ and sets y = C(x).
   If b = 0, then Chal returns k_i = k_i^{x_i}.
   If b = 1, then Chal returns k_i ← SimGC(⊥, i, |S|) if |S| < ℓ and k_i ← SimGC(y, i, |S|) if |S| = ℓ.
4. The output of the experiment is then set to the output of A.

Theorem 3 ([10]). Assuming the existence of one-way functions, there exists an adaptive garbling scheme GC in the random oracle model. This can be obtained by applying a transformation to any garbling scheme.

Remark 1. Our construction of adaptively secure garbled encryption uses the adaptive garbling scheme from Thm. 3 in a non-black-box manner. The adaptive garbling scheme from Thm. 3 shows how to transform a selectively secure garbling scheme into an adaptively secure one. This works as follows. Let C̃ and (X^b_i)_{i∈[n],b∈{0,1}} be the garbled circuit and wire keys, respectively, of the selectively secure scheme, with each X^b_i ∈ {0,1}^λ. For each i ∈ [n], a uniformly random mask Z_i ← {0,1}^λ is sampled. Set Z = Z_1 ⊕ . . . ⊕ Z_n. Let Y_i = Z||i. The adaptively secure garbled circuit is C̃′ = C̃ ⊕ O′(Y_0) and the wire keys are of length 2λ and are (X^b_i ⊕ O(Y_i), Z_i) (assume the output lengths of the random oracles are set appropriately). Correctness follows since once one has a wire key for each of the n input wires, one can compute Z and unmask the garbled circuit and wire keys of the selectively secure scheme. Adaptive security follows from the fact that until one learns all n wire keys, Z is information-theoretically hidden, and so the garbled circuit and wire keys look uniform. In the proof of security, the simulator also samples C̃′, all the X^b_i's, and the Z_i's uniformly at random.
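The masking transformation of Remark 1 can be sketched in a few lines of Python. This is an illustration only, not the paper's implementation: SHA-256 stands in for the random oracles O and O′, and the helper names `adaptify` and `unmask` are hypothetical.

```python
import hashlib
import secrets

LAMBDA = 16  # λ in bytes (128-bit security)

def oracle(tag: bytes, x: bytes, outlen: int) -> bytes:
    """Random-oracle stand-in: expand SHA-256(tag || ctr || x) to outlen bytes."""
    out = b""
    ctr = 0
    while len(out) < outlen:
        out += hashlib.sha256(tag + ctr.to_bytes(4, "big") + x).digest()
        ctr += 1
    return out[:outlen]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def adaptify(garbled_circuit: bytes, wire_keys):
    """Selective-to-adaptive transform of Remark 1.
    wire_keys: list of (X0, X1) label pairs, one per input wire."""
    n = len(wire_keys)
    Zs = [secrets.token_bytes(LAMBDA) for _ in range(n)]
    Z = bytes(LAMBDA)
    for Zi in Zs:
        Z = xor(Z, Zi)
    # Y_i = Z || i; the circuit is masked under Y_0, wire i under Y_i.
    C_masked = xor(garbled_circuit,
                   oracle(b"O'", Z + (0).to_bytes(4, "big"), len(garbled_circuit)))
    new_keys = []
    for i, (X0, X1) in enumerate(wire_keys, start=1):
        mask = oracle(b"O", Z + i.to_bytes(4, "big"), LAMBDA)
        # Each 2λ-bit wire key is (masked label, Z_i).
        new_keys.append(((xor(X0, mask), Zs[i - 1]), (xor(X1, mask), Zs[i - 1])))
    return C_masked, new_keys

def unmask(C_masked: bytes, chosen_keys):
    """Given one (masked label, Z_i) per wire, recover C̃ and the labels."""
    Z = bytes(LAMBDA)
    for _, Zi in chosen_keys:
        Z = xor(Z, Zi)
    C = xor(C_masked, oracle(b"O'", Z + (0).to_bytes(4, "big"), len(C_masked)))
    Xs = [xor(Xm, oracle(b"O", Z + i.to_bytes(4, "big"), LAMBDA))
          for i, (Xm, _) in enumerate(chosen_keys, start=1)]
    return C, Xs
```

Note that `unmask` succeeds only once it holds a key for every input wire, mirroring the argument that Z stays information-theoretically hidden until all n wire keys are known.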

3.3 Index Dependence

Throughout, we will require various algorithms to satisfy a notion of statefulness, which we call index dependence.

Definition 4 (Index Dependence). An algorithm A is said to be index dependent if it maintains a state S of the form {i_1, i_2, . . . , i_ℓ} with each i_j ∈ N. S begins as the empty set ∅, and on each call to the algorithm, A is given an index i such that i ∉ S. It then adds i to S and runs with i as one of its inputs.

An algorithm is said to be q-index dependent if the indices in the state are all in [q].

Index dependence is necessary in order to ensure that no two ciphertexts have the same index. In implementations of our schemes, an encryptor can simply keep a counter of the next free index and encrypt sequentially.
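A minimal sketch of this counter-based discipline might look as follows; the class and method names are hypothetical, not part of the paper's scheme.

```python
class IndexDependent:
    """Wraps an algorithm so it satisfies Definition 4: each index
    may be used at most once; reuse (or an out-of-range index for a
    q-index-dependent algorithm) returns None, playing the role of ⊥."""

    def __init__(self, algorithm, q=None):
        self.algorithm = algorithm   # called as algorithm(i, *args)
        self.used = set()            # the state S, initially ∅
        self.q = q                   # if set, enforce q-index dependence
        self.next_free = 1           # counter for sequential encryption

    def call(self, i, *args):
        if i in self.used or (self.q is not None and not 1 <= i <= self.q):
            return None              # ⊥
        self.used.add(i)
        return self.algorithm(i, *args)

    def call_next(self, *args):
        """Convenience: use the next free index, as the paper suggests
        an encryptor do in practice."""
        while self.next_free in self.used:
            self.next_free += 1
        return self.call(self.next_free, *args)
```

Wrapping GE.Enc this way guarantees that no two ciphertexts ever share an index, even if the caller supplies indices out of order.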

3.4 Garbled Encryption

Garbled encryption can be thought of as an indexed version of secret-key MiFE where every ciphertext has an associated index and the key generation algorithm takes sets of indices as an additional input and generates a function key that can only evaluate on ciphertexts with corresponding indices. As such, we model our definition of garbled encryption in a manner similar to the definition of MiFE found in the literature [19].

Syntax.
Let X = {X_λ}_{λ∈N} and Y = {Y_λ}_{λ∈N} be ensembles where X_λ, Y_λ are sets each with size dependent on λ. Let F = {F_λ}_{λ∈N} be an ensemble where each F_λ is a finite collection of n-ary functions. Each function f ∈ F_λ takes as input strings x_1, . . . , x_n, where each x_i ∈ X_λ, and outputs f(x_1, . . . , x_n) ∈ Y_λ. Let Q denote a set of sets, where each set Q ∈ Q is a subset of N^n. (We note that our constructions of garbled encryption do not require all the functions in the function class to have the same arity. That is, our construction can simultaneously handle, say, a 2-ary and a 3-ary function. However, we define garbled encryption for fixed-arity functions for notational simplicity throughout.)

A garbled encryption scheme GE for n-ary functions F and query pattern Q consists of four algorithms (GE.Setup, GE.KeyGen, GE.Enc, GE.Dec) described below:
– Setup. GE.Setup(1^λ) is a PPT algorithm that takes as input a security parameter λ and outputs the master secret key GE.msk.
– Key Generation. GE.KeyGen(GE.msk, f, Q) is a PPT algorithm that takes as input the master secret key GE.msk, a function f ∈ F_λ, and a set Q ∈ Q. It outputs a functional key GE.sk_{f,Q}.
– Encryption. GE.Enc(GE.msk, m, i) is a PPT algorithm that takes as input the master secret key GE.msk, a message m ∈ X_λ, and an index i ∈ N. It outputs a ciphertext GE.ct. GE.Enc is index dependent and the ciphertext GE.ct has an associated index i. If GE.Enc is asked to encrypt to an index j to which it has previously encrypted, it will output ⊥.
– Decryption. GE.Dec(GE.sk_{f,Q}, GE.ct_1, . . . , GE.ct_n) is a deterministic algorithm that takes as input a functional key GE.sk_{f,Q} and n ciphertexts GE.ct_1, . . . , GE.ct_n. It outputs a value y ∈ Y_λ ∪ {⊥}.

Correctness.
There exists a negligible function negl(·) such that for all sufficiently large λ ∈ N, every n-ary function f ∈ F_λ, set Q ∈ Q, point (j_1, . . . , j_n) ∈ Q, and input tuple (x_1, . . . , x_n) ∈ X_λ^n,

Pr[ GE.msk ← GE.Setup(1^λ); GE.sk_{f,Q} ← GE.KeyGen(GE.msk, f, Q); GE.Dec(GE.sk_{f,Q}, (GE.Enc(GE.msk, x_i, j_i))^n_{i=1}) ≠ f(x_1, . . . , x_n) ] ≤ negl(λ)

where the probability is taken over the random coins of all the algorithms.

Selective Security.
We model indistinguishability-based selective security for garbled encryption in a similar manner as that for MiFE. The difference is that a function query in the game for garbled encryption consists of both a function f ∈ F_λ and a set Q ∈ Q and returns a function key GE.sk_{f,Q} corresponding to (f, Q). We defer the full description of this game to Appendix A.1.

Adaptive Security.
One can consider a stronger notion of security, called adaptive security, where the adversary can interleave the challenge messages and the function queries in any arbitrary order. We will construct adaptively secure garbled encryption in the random oracle model. We defer the full description of this game to Appendix A.1.

In this paper, we will focus on garbled encryption where Q is taken to be the set containing all sets of singletons. That is, a valid function query consists of a function f and a tuple of indices (j_1, . . . , j_n), and the resulting function key allows for the function evaluation on the tuple of ciphertexts corresponding to these indices. We leave constructions of garbled encryption where Q is defined differently to future work.

3.5 Time-Based Garbled Encryption

Using garbled encryption as defined above, it is possible to build a self-processing private sensor data system. However, the system that results does not allow sensors to update their stored keys, which causes all sensor readings to be compromised if a sensor is compromised by the attacker. To address this, we propose a more general version of garbled encryption called time-based garbled encryption that supports key updates. This will improve the security of the resulting self-processing private sensor data system by preserving the privacy of past sensor readings in the event of a sensor compromise.

The syntax of time-based garbled encryption is the same as that of garbled encryption except that it additionally has the key generation and encryption algorithms take as input a time offset t ∈ N. Intuitively, a time offset t corresponds to the number of times that the sensor will have updated its stored key (by hashing) before encrypting a particular message. A function key with time offset t is compatible with encryptions generated with time offset t. We define and construct time-based garbled encryption in Appendix C.

4 Selectively Secure Garbled Encryption

We construct a selectively secure garbled encryption scheme that supports an unbounded number of ciphertexts in the standard model from one-way functions. The idea behind the construction relies heavily on garbled circuits, hence the name garbled encryption. In our construction, a ciphertext is simply a set of wire keys corresponding to the message, which can be used to evaluate garbled circuits. These wire keys will be derived by a PRF evaluation that depends on the secret key and the index j ∈ N of the encryption. Intuitively, there are wire keys associated with every index j, and when we encrypt to this index, we select the wire keys that correspond to the message. Since no two ciphertexts have the same index, only one set of wire keys will ever be revealed for each index. A function key for a function f and a tuple of indices (j_1, . . . , j_n) is then simply a garbled circuit that computes f with wire keys corresponding to the indices (j_1, . . . , j_n).

4.1 Construction

Let PRF = (PRF.Gen, PRF.Eval) be a pseudorandom function family with λ-bit keys that outputs in {0,1}^λ, and let GC = (Gen, GrbI, GrbC, EvalGC) be a garbling scheme. Our garbled encryption scheme GE for n-ary functions F and query pattern Q, where Q is the set containing all sets of singletons, is defined as follows:

Let ℓ = n · log |X_λ| denote the input length to circuits C representing functions f ∈ F. We note that the indices (i−1) · log |X_λ| + 1, . . . , i · log |X_λ| correspond to the indices of the input x_i to C. Letting r_i = (i−1) · log |X_λ| and ℓ_m = log |X_λ|, we denote these indices as r_i + 1, . . . , r_i + ℓ_m.
– Setup. On input the security parameter 1^λ, GE.Setup runs PRF.Gen(1^λ) to obtain a PRF key K and outputs GE.msk = K.

– Key Generation. On input the master secret key GE.msk, a function f ∈ F_λ, and a set Q = {(j_1, . . . , j_n)} ∈ Q, GE.KeyGen runs as follows: Let C be a circuit for f. GE.KeyGen runs the garbled circuit generation algorithm Gen(1^λ) to obtain the secret parameters gcsk. It then sets the garbling keys k_j = (k^0_j, k^1_j) for j ∈ [ℓ] as follows: For i ∈ [n], it sets the garbling keys k_{r_i+α} = (k^0_{r_i+α}, k^1_{r_i+α}) for α ∈ [ℓ_m] to be

k^0_{r_i+α} = PRF.Eval(GE.msk, j_i||α||0)
k^1_{r_i+α} = PRF.Eval(GE.msk, j_i||α||1).

Setting ~k = (k_1, . . . , k_ℓ), it then runs GrbC(gcsk, C, ~k) to obtain a garbled circuit C̃. GE.KeyGen outputs

(C̃, Q)

as GE.sk_{f,Q}.

– Encryption. On input the master secret key GE.msk, a message m, and an index j ∈ N, GE.Enc runs as follows: GE.Enc is index dependent and maintains a state S initialized to the empty set ∅. It first checks that j ∉ S. If j ∈ S, it returns ⊥. Otherwise, it adds j to S and proceeds. Defining k^0_α and k^1_α for α ∈ [ℓ_m] by

k^0_α = PRF.Eval(GE.msk, j||α||0)
k^1_α = PRF.Eval(GE.msk, j||α||1)

as above, GE.Enc computes

k = (k^{bin(m)_1}_1, . . . , k^{bin(m)_{ℓ_m}}_{ℓ_m})

and outputs

(j, k)

as GE.ct. We refer to j as the index of this ciphertext.

– Decryption. On input a functional key GE.sk_{f,Q} = (C̃, Q) and n ciphertexts GE.ct_1, . . . , GE.ct_n, GE.Dec first parses each GE.ct_i as (j_i, k_i) and asserts that (j_1, . . . , j_n) ∈ Q. If not, it outputs ⊥. GE.Dec runs

EvalGC(C̃, (k_1, . . . , k_n))

to obtain out and outputs this value.
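The wire-key derivation and index-dependent encryption above can be sketched as follows. This is an illustrative sketch only: HMAC-SHA256 stands in for the PRF (the paper's implementation uses AES128), the garbling step is abstracted away, the toy message length ℓ_m = 8, and all names (`GEEnc`, `wire_key`, `keygen_wire_keys`) are hypothetical.

```python
import hashlib
import hmac

LAMBDA_BYTES = 16
ELL_M = 8  # ℓ_m = log|X_λ|: message length in bits (toy choice)

def prf(key: bytes, data: bytes) -> bytes:
    """PRF.Eval stand-in: HMAC-SHA256 truncated to λ bits."""
    return hmac.new(key, data, hashlib.sha256).digest()[:LAMBDA_BYTES]

def wire_key(msk: bytes, j: int, alpha: int, b: int) -> bytes:
    """k^b_α = PRF.Eval(GE.msk, j || α || b)."""
    return prf(msk, j.to_bytes(8, "big") + alpha.to_bytes(4, "big") + bytes([b]))

class GEEnc:
    """Index-dependent GE.Enc: outputs (j, wire keys selecting bin(m))."""
    def __init__(self, msk: bytes):
        self.msk = msk
        self.used = set()  # state S

    def encrypt(self, m: int, j: int):
        if j in self.used:
            return None  # ⊥: index reuse
        self.used.add(j)
        bits = [(m >> (ELL_M - 1 - a)) & 1 for a in range(ELL_M)]
        return j, [wire_key(self.msk, j, a + 1, bits[a]) for a in range(ELL_M)]

def keygen_wire_keys(msk: bytes, indices):
    """GE.KeyGen re-derives BOTH labels per input bit from the PRF,
    then garbles the circuit for f under them (garbling omitted here)."""
    return [[(wire_key(msk, j, a + 1, 0), wire_key(msk, j, a + 1, 1))
             for a in range(ELL_M)] for j in indices]
```

Because key generation and encryption both re-derive the same PRF outputs, the ciphertext for index j consists of exactly the labels that open the garbled circuit for any function key issued on j.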

4.2 Correctness

The correctness of our garbled encryption scheme follows directly from the correctness of the garbling scheme GC and Thm. 2. Consider any input tuple (x_1, . . . , x_n) ∈ X_λ^n, n-ary function f ∈ F_λ, and set Q = {(j_1, . . . , j_n)}. Let GE.sk_{f,Q} denote the function key for (f, Q) and GE.ct_1, . . . , GE.ct_n denote encryptions of (x_1, j_1), . . . , (x_n, j_n), respectively. GE.sk_{f,Q} is of the form (C̃, Q) and ciphertexts GE.ct_i are of the form (j_i, k_i) for i ∈ [n]. If we let

k = (k_1, . . . , k_n),

we see that k is a vector of wire keys of length ℓ, and for i ∈ [n] and α ∈ [ℓ_m], we note that

k_{r_i+α} = PRF.Eval(GE.msk, j_i||α||b_α)

where b_α is the αth bit of x_i. But C̃ is the garbled circuit obtained by running

gcsk ← Gen(1^λ),
C̃ ← GrbC(gcsk, C, ((k^0_1, k^1_1), . . . , (k^0_ℓ, k^1_ℓ)))

where

k^b_{r_i+α} = PRF.Eval(GE.msk, j_i||α||b)

for i ∈ [n], α ∈ [ℓ_m]. Since PRF is pseudorandom, it follows that k^0_α and k^1_α will be distinct for all α ∈ [ℓ] with all but negl(λ) probability, and therefore, by the correctness of GC and Thm. 2, it follows that GE.Dec will output

EvalGC(C̃, k) = C(x_1, . . . , x_n) = f(x_1, . . . , x_n)

with overwhelming probability.

4.3 Security

Theorem 4. Assuming that PRF is a pseudorandom function family and GC is a chosen-wire-key secure garbling scheme as implied by Thm. 2, GE is a selectively secure garbled encryption scheme.

We give a proof of Thm. 4 in Appendix B.2.

5 Adaptively Secure Garbled Encryption

We now describe our construction of adaptively secure garbled encryption in the random oracle model. A natural idea for obtaining such a scheme is to instantiate our construction of selectively secure garbled encryption in Section 4 with an adaptively secure garbling scheme. We note, however, that all known adaptive garbling schemes require larger wire key sizes than non-adaptive garbling schemes. For example, wire keys in the scheme of [10] have size twice that of keys in non-adaptive garbling. If used naively, this would double the size of the ciphertexts in the garbled encryption scheme, which in turn increases the amount of information that the sensors need to broadcast in our self-processing sensor data application.

In order to avoid an increase in the requirements on the sensors, we build an adaptive garbled encryption scheme where the ciphertext size remains the same as in our selectively secure construction; however, the function key sizes are larger.2 We achieve this effect by using the adaptive garbling scheme of [10] in a slightly non-black-box manner as well as by making some modifications to our selectively secure garbled encryption scheme.

Recall that each wire key in the scheme of [10] is a tuple of the form (X^b_i, Z_i), where X^b_i is a “masked” label, and the Z_i's denote the information that is used for computing the masks. (See Remark 1 in Section 3.2.) A potential way to ensure that wire keys are of size λ and not 2λ is to decouple the tuple (X^b_i, Z_i) such that the wire key now consists of X^b_i and the Z_i value (for every wire key) is added to the garbled circuit description (thereby increasing its size). This, however, has the effect of prematurely “fixing” the masks, which breaks the proof of adaptive security. To address this issue, we use another masking layer, i.e., we carefully mask the Z_i values themselves such that the evaluator can obtain the corresponding masks only once it has fixed its input to the garbled circuit. Now, we can once again rely on the adaptive security of the underlying garbling scheme, with the added benefit that the wire keys are short. We refer the reader to the formal construction below for more details.

We note that our adaptively secure garbled encryption scheme, for the simplified case of a single function key and without any indexing, implies an adaptively secure garbling scheme where the wire key sizes are the same as in non-adaptive garbling schemes. To the best of our knowledge, such a garbling scheme was not known previously. To compare the tradeoff in the size of the wire keys vs. the size of the garbled circuit, we observe that for a circuit C with ℓ inputs, the adaptively secure garbling scheme of [10] has wire keys of size 2λ. Our adaptively secure garbling scheme reduces the size of the wire keys to λ, but increases the size of the garbled circuit by an additive amount of λ + (4λ)ℓ compared to the garbled circuit of [10]. If both the garbled circuit and wire keys are given out together, then this tradeoff is not worth it. However, in a self-processing sensor data scheme, all the garbled circuits are generated during trusted setup and stored. The sensors then generate the appropriate wire keys over the lifetime of the system and broadcast them. In our situation, we view this tradeoff as desirable since the pro of decreasing the amount of transmitted data outweighs the con of increasing the amount of initial storage. We believe that there may be other settings where adaptively secure garbling schemes with short wire keys are also desirable, in which case our adaptive garbling scheme would come in handy.

2 For the self-processing sensor data application, we find this to be a desirable trade-off.

5.1 Construction

We now proceed to give a formal description. Let O1 : {0,1}^{3λ} → {0,1}^λ and O2 : {0,1}^{2λ} → {0,1}^{2λ} be random oracles. Let PRF = (PRF.Gen, PRF.Eval) be a pseudorandom function family with λ-bit keys that outputs in the range {0,1}^λ, and let GC = (Gen, GrbI, GrbC, EvalGC) be the adaptively secure garbling scheme specified by Thm. 3. Let O be the random oracle used by the adaptively secure garbling scheme. Our garbled encryption scheme GE for n-ary functions F and query pattern Q, where Q is the set containing all sets of singletons, is defined as follows:

Recall our notation from Section 4. Let ℓ = n · log |X_λ| denote the input length to circuits C representing functions f ∈ F. We note that the indices (i−1) · log |X_λ| + 1, . . . , i · log |X_λ| correspond to the indices of the input x_i to C. Letting r_i = (i−1) · log |X_λ| and ℓ_m = log |X_λ|, we denote these indices as r_i + 1, . . . , r_i + ℓ_m.
– Setup. On input the security parameter 1^λ, GE.Setup runs PRF.Gen(1^λ) to obtain a PRF key K and outputs GE.msk = K.

– Key Generation. On input the master secret key GE.msk, a function f ∈ F_λ, and a set Q = {(j_1, . . . , j_n)} ∈ Q, GE.KeyGen runs as follows: Let C be a circuit for f. GE.KeyGen then runs the garbled circuit generation algorithm Gen(1^λ) to obtain the secret parameters gcsk. It then generates a uniformly random string V ∈ {0,1}^λ. It then sets k_α = (k^0_α, k^1_α) for α ∈ [ℓ] as follows: For i ∈ [n], it sets the garbling keys k_{r_i+α} = (k^0_{r_i+α}, k^1_{r_i+α}) for α ∈ [ℓ_m] to be

k^0_{r_i+α} = PRF.Eval(GE.msk, j_i||α||0)
k^1_{r_i+α} = PRF.Eval(GE.msk, j_i||α||1).

For α ∈ [ℓ], it sets Z_α uniformly at random. Set Z = Z_1 ⊕ . . . ⊕ Z_ℓ. For α ∈ [ℓ], it sets Y_α according to the procedure specified by the garbling scheme of Thm. 3 and then sets X^b_α as O1(k^b_α||V||Z) ⊕ O(Y_α). Let ~k′ denote the garbling keys as determined by the X^b_α's, Y_α's, and Z_α's. It then runs GrbC(gcsk, C, ~k′) to obtain a garbled circuit C̃. For α ∈ [ℓ], b ∈ {0,1}, let

T^b_α = (Z_α||0^λ) ⊕ O2(k^b_α||V).

For each α, with probability 1/2, swap the values of T^0_α and T^1_α. GE.KeyGen outputs

(C̃, V, {T^0_α, T^1_α}_{α∈[ℓ]}, Q)

as GE.sk_{f,Q}.

– Encryption. On input the master secret key GE.msk, a message m, and an index j ∈ N, GE.Enc runs as follows: GE.Enc is index dependent and maintains a state S initialized to the empty set ∅. It first checks that j ∉ S. If j ∈ S, it returns ⊥. Otherwise, it adds j to S and proceeds. Defining k^0_α and k^1_α for α ∈ [ℓ_m] by

k^0_α = PRF.Eval(GE.msk, j||α||0)
k^1_α = PRF.Eval(GE.msk, j||α||1)

as above, GE.Enc computes

k = (k^{bin(m)_1}_1, . . . , k^{bin(m)_{ℓ_m}}_{ℓ_m})

and outputs

(j, k)

as GE.ct. We refer to j as the index of this ciphertext.

– Decryption. On input a functional key GE.sk_{f,Q} = (C̃, V, {T^0_α, T^1_α}_{α∈[ℓ]}, Q) and n ciphertexts GE.ct_1, . . . , GE.ct_n, GE.Dec first parses each GE.ct_i as (j_i, k_i) and asserts that (j_1, . . . , j_n) ∈ Q. If not, it outputs ⊥. Let k = (k_1, . . . , k_n). For α ∈ [ℓ], GE.Dec recovers Z_α by computing O2(k_α||V) ⊕ T^b_α for b ∈ {0,1} and setting Z_α to be the λ-bit prefix of the recovered value whose last λ bits are all 0. Once all the Z_α's are recovered, GE.Dec computes the Y_α's and Z = ⊕ Z_α, and then sets X_α as O1(k_α||V||Z) ⊕ O(Y_α). It then runs EvalGC on C̃ with the X_α's to recover the output.
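The Z_α masking and its recovery during decryption can be sketched as follows. This is an illustration only: SHA-256 stands in for the random oracle O2 (the paper's implementation uses fixed-key AES), and the helper names are hypothetical.

```python
import hashlib
import secrets

LAM = 16  # λ in bytes

def O2(x: bytes) -> bytes:
    """Random oracle O2 : {0,1}^2λ -> {0,1}^2λ (SHA-256 as a stand-in)."""
    return hashlib.sha256(b"O2" + x).digest()  # 32 bytes = 2λ

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_masks(k0: bytes, k1: bytes, V: bytes, Z_alpha: bytes):
    """T^b_α = (Z_α || 0^λ) ⊕ O2(k^b_α || V), handed out in random order."""
    T0 = xor(Z_alpha + bytes(LAM), O2(k0 + V))
    T1 = xor(Z_alpha + bytes(LAM), O2(k1 + V))
    if secrets.randbits(1):  # swap with probability 1/2, as in KeyGen
        T0, T1 = T1, T0
    return T0, T1

def recover_Z(k: bytes, V: bytes, T0: bytes, T1: bytes):
    """Decryption: try both masks; the correct one reveals a 0^λ suffix."""
    for T in (T0, T1):
        cand = xor(O2(k + V), T)
        if cand[LAM:] == bytes(LAM):
            return cand[:LAM]  # the λ-bit prefix is Z_α
    return None  # failure (happens only with negligible probability)
```

The all-zero suffix is what lets the evaluator tell which of the two (randomly ordered) T values matches its single wire key, without learning anything about the other label.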

5.2 Correctness

The correctness of our garbled encryption scheme follows directly from the correctness of the garbling scheme GC and Thm. 3. Consider any input tuple (x_1, . . . , x_n) ∈ X_λ^n, n-ary function f ∈ F_λ, and set Q = {(j_1, . . . , j_n)}, where x_i = x_{i′} if j_i = j_{i′}. Let GE.sk_{f,Q} denote the function key for (f, Q) and GE.ct_1, . . . , GE.ct_n denote encryptions of (x_1, j_1), . . . , (x_n, j_n), respectively (where GE.ct_i = GE.ct_{i′} if (x_i, j_i) = (x_{i′}, j_{i′})). GE.sk_{f,Q} is of the form (C̃, V, {T^0_α, T^1_α}_{α∈[ℓ]}, Q) and ciphertexts GE.ct_i are of the form (j_i, k_i) for i ∈ [n]. If we let

k = (k_1, . . . , k_n),

we see that k is a vector of wire keys of length ℓ. For α ∈ [ℓ], decryption proceeds by computing O2(k_α||V) ⊕ T^0_α and O2(k_α||V) ⊕ T^1_α. For b ∈ {0,1}, we see that

O2(k_α||V) ⊕ T^b_α = O2(k_α||V) ⊕ (Z_α||0^λ) ⊕ O2(k^{b′}_α||V)

and thus decryption obtains (Z_α||0^λ) when k_α = k^{b′}_α and a uniformly random looking value otherwise. Thus, we see decryption correctly recovers Z_α with overwhelming probability. Then, we note that O1(k_α||V||Z) ⊕ O(Y_α) is the X_α used by the garbling algorithm. So, decryption correctly recovers the labels X_α and therefore, by the correctness of the garbling scheme, EvalGC run on C̃ and the X_α's will give us f(x_1, . . . , x_n).

5.3 Security

Theorem 5. Assuming that PRF is a pseudorandom function family and GC is the adaptively secure garbling scheme of Thm. 3, GE is an adaptively secure garbled encryption scheme.

We give a proof of Thm. 5 in Appendix B.3.

Note on Concrete Security.
For a practical deployment of garbled encryption, one might be interested in the concrete security of our scheme. The concrete security depends on and follows immediately from the concrete security of the underlying garbling scheme and the number/size of garbled circuits released. In our implementation, we used the garbled circuit implementation of [21], which improves on the JustGarble [9] garbled circuit implementation by implementing half-gates [34]. In order to obtain adaptively secure garbled circuits, we applied the random oracle model transformation of [10]. Both [9] and [10] provide thorough concrete security analyses in the random oracle model of the garbled circuit constructions, and we refer an interested reader to these papers for further details.

6 From Garbled Encryption to a Self-Processing Private Sensor Data System

The garbled encryption constructions of Secs. 4 and 5 immediately give rise to a self-processing private sensor data system.

Let GE = (GE.Setup, GE.KeyGen, GE.Enc, GE.Dec) be a garbled encryption scheme. Let n denote the number of sensors, let T denote the number of time steps for the sensors, and let f_t for t ∈ [T] denote the function to be computed on the sensor readings during the t-th time step (for notational simplicity, we only consider one function and one sensor reading per time step, but the scheme can naturally be extended to support multiple functions and sensor readings per time step). During the trusted ceremony, a key K ← GE.Setup(1^λ) is generated and stored on each of the sensors. For t ∈ [T], compute GE.sk_t ← GE.KeyGen(K, f_t, (i_1, i_2, . . . , i_n)), where (i_1, i_2, . . . , i_n) are the indices of the sensor readings that f_t is computed on. The function key data is set to be (GE.sk_t)_{t∈[T]}.

Once the trusted setup is completed, at time step t, sensor i takes a measurement m_i and broadcasts GE.Enc(K, m_i, i||t). By the correctness of garbled encryption, any monitoring user with access to the function key data is able to learn the desired function evaluation of the sensor readings at every time step. By the security of garbled encryption, the private data owner's privacy is protected against an attacker that does not compromise the sensors.

Unfortunately, the above self-processing private sensor data system is rendered completely insecure in the event of a sensor compromise, as an attacker that learns the key K is able to decrypt all encrypted readings and learn all the private sensor data. To address this, we propose having the sensors update their keys at each time step by hashing them. We formalize this intuition by defining, constructing, and proving adaptive security for time-based garbled encryption in Appendix C. In the security notion for time-based garbled encryption, the adversary is able to adaptively choose a time t at which to compromise the sensors and learn the key stored on them. In a similar manner, we can use time-based garbled encryption to instantiate a self-processing private sensor data system. Let TGE = (TGE.Setup, TGE.KeyGen, TGE.Enc, TGE.Dec) be a time-based garbled encryption scheme. Let B be a bound on the maximum number of time steps from the current time in which a sensor reading will be used in a function computation. As before, a key K ← TGE.Setup(1^λ) is generated and stored on each of the sensors. For t ∈ [T], function keys TGE.sk_t ← TGE.KeyGen(K, f_t, (i_1, i_2, . . . , i_n), t) are generated. Once this setup is complete, at time step t, the sensor i storing key K_i takes measurement m_i and broadcasts TGE.Enc(K_i, m_i, i||t+j, j) for j = 0, . . . , B. After that time step has elapsed, it updates its stored key K_i to be O(K_i), where O is the random oracle used in the time-based garbled encryption construction to ratchet keys forward. Observe that since the key stored on a sensor at time t is O^(t)(K), sensor i's outputs at time t are equivalent to if it had run TGE.Enc(K, m_i, i||t+j, t+j). However, the sensor only knows O^(t)(K) and not K, so an attacker that compromises the sensor only recovers O^(t)(K). By adaptive security of time-based garbled encryption (see Appendix C), the private data owner's privacy for past sensor readings is protected against an attacker that compromises the sensors.
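The sensor-side key ratcheting described above can be sketched as follows. SHA-256 plays the role of the hash O (the paper suggests SHA256 for ratcheting); the `Sensor` class and the `encrypt` callback are hypothetical illustration names.

```python
import hashlib

def ratchet(key: bytes) -> bytes:
    """One key update step: K <- O(K), with SHA-256 as the hash O."""
    return hashlib.sha256(key).digest()

def key_at_time(K: bytes, t: int) -> bytes:
    """The key a sensor holds at time step t is O^(t)(K)."""
    for _ in range(t):
        K = ratchet(K)
    return K

class Sensor:
    def __init__(self, K: bytes, sensor_id: int):
        self.key = K
        self.id = sensor_id
        self.t = 0

    def step(self, measurement, B, encrypt):
        """Broadcast the reading under offsets j = 0..B, then ratchet,
        so a compromise later reveals only O^(t)(K), not K."""
        cts = [encrypt(self.key, measurement, (self.id, self.t + j), j)
               for j in range(B + 1)]
        self.key = ratchet(self.key)
        self.t += 1
        return cts
```

Since `ratchet` is one-way, recovering the time-t key gives an attacker no way back to the earlier keys under which past readings were encrypted.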

Remark 2. Observe that in the above scheme, if the attacker compromises one sensor, all sensors are compromised. Ideally, one would want only the single sensor to be compromised. Unfortunately, our approach cannot achieve this due to the fact that garbled circuits do not provide any security guarantee if the adversary is able to learn both wire keys for one of the inputs. Thus, compromising a single sensor allows the adversary to learn both wire keys for a single input, and we can no longer appeal to garbled circuit security to argue that the other sensors' readings remain hidden. Similarly, we require that all function computations are performed on ciphertexts that were computed using the same underlying key (even if the sensor readings were taken at different time steps). The reason for this is that if ciphertexts were generated with respect to different keys, then if the adversary compromises the sensors and learns one of the keys, we will be in a similar situation where the adversary can learn both labels for some of the input wires, and we will be unable to appeal to garbled circuit security. To avoid requiring computation only on sensor readings taken during the same time step, we introduce a bound B and have the sensors encrypt the same reading


Table 2. Garbled Encryption Evaluation Results∗

Function  # Inputs  ℓ       GE.KeyGen  GE.Dec  |GE.ct|  |GE.sk|  |selGE.sk|
DNF       64        1-bit   27 µs      14 µs   16 B     8.7 kB   2.0 kB
DNF       128       1-bit   54 µs      28 µs   16 B     17.4 kB  4.1 kB
DNF       256       1-bit   107 µs     56 µs   16 B     34.8 kB  8.2 kB
Thresh    8         32-bit  129 µs     78 µs   512 B    20.9 kB  7.2 kB
Thresh    16        32-bit  523 µs     319 µs  512 B    42.8 kB  15.4 kB
Max       8         16-bit  77 µs      48 µs   256 B    14.2 kB  7.2 kB
Max       8         32-bit  171 µs     107 µs  512 B    28.0 kB  14.3 kB
Max       16        32-bit  613 µs     385 µs  512 B    58.2 kB  30.7 kB

∗ For all settings of parameters, running GE.Setup and GE.Enc took < 1 µs and therefore this information is omitted from the table. The results in this table are for adaptively secure garbled encryption except for |selGE.sk|, which is the size of function keys if we only require selective security.

under future keys in order to use this reading in future computations. We note that introducing this bound is only one solution; if the functions f_t follow some known pattern, then the sensors could be programmed to easily know which time step the sensor reading will be computed upon and encrypt under the key for that time step by hashing the stored key.

7 Implementation and Evaluation

In order to determine the practical performance of garbled encryption for our self-processing private sensor data application, we implemented it and ran several tests on a variety of parameter choices. We implemented both the selectively secure and adaptively secure variants, but report on the performance numbers for the adaptively secure variant.

Our starting point was the garbled circuit implementation of [21], which improves on JustGarble [9], an open source library for garbling and evaluating boolean circuits, by implementing half-gates [34], among other improvements.

7.1 Implementation Details

The entire implementation was done in C. We used the standard 128-bit value for the security parameter, setting λ = 128. We viewed messages as binary strings and allowed the length of messages, ℓ, and the number of inputs to be specified by the user. We instantiated the underlying PRF in our construction using AES128 and, for adaptively secure garbled encryption, used fixed-key AES as the random oracle. To ratchet keys forward, one can use SHA256 as the underlying hash function. We used the JustGarble library to build all circuits needed in SCD (Simple Circuit Description) format and garble them. We used the circuits of [23] as building blocks. Although our scheme is easily parallelizable (different garbled circuits and wire keys can be computed independently), we did not utilize multiple threads and ran on a single core. Further implementation details of JustGarble can be found in [9], and the updated library that supports half-gates can be found in [21].

The garbling libraries of [21] and JustGarble [9] implement selectively secure garbled circuits. They do not support chosen wire keys since the 0 and 1 labels must XOR to some fixed secret block. For adaptively secure garbled encryption, it was necessary to use our chosen wire-key transformation and the transformations of [10] to obtain a fine-grained adaptively secure garbled circuit implementation. We then used this adaptively secure garbled circuit scheme as the building block for our garbled encryption implementation. In our construction, when we padded by 0's to ensure correctness of encryption, we used a correctness parameter of 80. With this parameter setting, a crude union bound for the Coalition A-Country B treaty example gives around a 1/2^53 chance


of a single decryption error at any point over the course of the 25 years of the treaty.

7.2 Evaluation

All evaluations were run on a 2012 MacBook Pro laptop running Ubuntu 16.04 LTS that supports the AES-NI instruction set. The laptop has a 2.6 GHz Intel Core i7 processor and 16 GB RAM. Since we did not leverage parallelism, all evaluations were performed using a single core. The evaluation results for our adaptively secure garbled encryption scheme for various functionalities, numbers of inputs, and input lengths can be found in Table 2. All timings were taken with microsecond resolution and were the average of 100,000 trials. We give timings for the functions DNF, Thresh, and Max. DNF is the function that takes an ℓ-bit input, breaks it into 8 blocks of ℓ/8 bits each, computes the AND of all the bits in each block, and then computes the OR of all of the AND computations. Thresh is the standard threshold function that sums all the inputs and outputs whether the resulting sum is larger than some specified threshold value. Max is the standard maximum function that outputs the largest value in a series of input values. We include the sizes of all ciphertexts and function keys to give a sense of the amount of data that may need to be transmitted. Furthermore, we also include the size of function keys for the selectively secure variant in Table 2 in order to give a sense of the amount of space that can be saved if one is satisfied with the weaker notion of selective security.

7.3 Self-Processing Private Sensor Data Performance Analysis

Our garbled encryption implementation can be immediately used to build self-processing private sensor data schemes (see Sec. 6). Looking at the performance of our implementation (Table 2), we see that setup and encryption are extremely fast (< 1 µs) for all parameter settings and functionalities. Additionally, since the ciphertext is simply ℓ 128-bit wire keys, the ciphertexts are very small, with sizes in the bytes. This is particularly useful if the data collectors have limited computational power and cryptographic capabilities, as they need only be able to run several AES evaluations and transmit a few hundred bytes of data. Key generation and decryption, which correspond to garbling a circuit and evaluating it, respectively, are also both extremely fast, with timings in the tens to hundreds of microseconds. With function keys in the kilobytes, downloading a function key from a public source is feasible for users wishing to monitor the system.
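The cost model behind these numbers can be made concrete with a minimal sketch: encrypting an ℓ-bit input is just ℓ PRF evaluations, one 128-bit wire label per input bit. Here HMAC-SHA256 (truncated to 16 bytes) stands in for the paper's AES-based PRF, and the label-derivation format is our hypothetical choice, not the paper's exact scheme:

```python
# Sketch of why GE.Enc is so cheap: one PRF call per input bit, each
# producing a 128-bit wire label. HMAC-SHA256 is an illustrative stand-in
# for an AES-based PRF; the (index, position, bit) encoding is hypothetical.
import hmac, hashlib

def prf(msk: bytes, data: bytes) -> bytes:
    return hmac.new(msk, data, hashlib.sha256).digest()[:16]  # 128-bit label

def ge_enc(msk: bytes, x_bits, index: int):
    """Ciphertext = one wire label per input bit, selected by the bit's value."""
    return [prf(msk, b"%d|%d|%d" % (index, pos, bit))
            for pos, bit in enumerate(x_bits)]

ct = ge_enc(b"master-secret", [1, 0, 1, 1], index=42)
# 4 input bits -> 4 labels of 16 bytes each, i.e. 64 bytes of ciphertext
```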

From our evaluation results, we observe the practicality of garbled encryption in the following self-processing sensor data scenarios:

Scenario 1: Suppose there are 64 sensors, 8 each in 8 different locations, taking pressure readings every 10 minutes. The sensors either report normal pressure (0) or abnormal pressure (1). We would like to issue a warning if all of the sensors in an area report abnormal pressure for 4 consecutive measurements, while keeping the readings hidden, as we do not want the public to know which location has an issue, and the sensors may be prone to false positives, which may cause undue panic if the data is released in the clear.

This corresponds to evaluating DNF on an input x_1|| . . . ||x_8 with each x_i ∈ {0, 1}^32, where each x_i corresponds to a location and contains the last 4 readings of all 8 sensors at that location. So, we will need to compute wire keys for these inputs, garble the DNF function, and then evaluate the garbled circuit to determine the function output on these inputs.
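This input encoding can be sketched as follows (our illustration; the `warning` helper is hypothetical and models only the plaintext predicate that the garbled DNF circuit computes):

```python
# Scenario 1's input layout: 8 locations, each contributing a 32-bit block
# holding the last 4 readings of its 8 sensors. The warning fires iff some
# location's block is all ones (all sensors abnormal for 4 readings).

def warning(readings):
    """readings[loc][t][s]: bit reported by sensor s at location loc, time t."""
    blocks = [[readings[loc][t][s] for t in range(4) for s in range(8)]
              for loc in range(8)]
    return int(any(all(block) for block in blocks))
```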

Based on our results in Table 2, we see that if we wanted such a warning system to remain online for 10 years and output a signal ("everything fine" or "anomaly detected") every 10 minutes, this would require generating 525,600 garbled circuits, which could be done in 56.2 seconds and would take 18.3 GB of storage. Moreover, once the trusted authority is done generating these garbled circuits (56.2 seconds into the 10 years), it could go offline forever after it uploaded the 18.3 GB of garbled circuits, and the system would function correctly for the next 10 years. Furthermore, an interested party who wishes to monitor this warning system needs to download only 34.8 kB of garbled circuit data and 4.1 kB of ciphertext data every 10 minutes and perform a 56 µs computation. Note that the sensors can be very weak computational devices and need only be capable of performing a single AES evaluation and broadcasting 16 bytes every 10 minutes. Additionally, anyone wishing to shut down the warning system would need to tamper with the sensors deployed in the field, since after 56.2 seconds, all information necessary to perform the evaluation (apart from the still to be generated sensor data and its published AES evaluations) is publicly available.
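These figures can be sanity-checked with simple arithmetic (ignoring leap years; the per-item sizes are taken from the text above):

```python
# Back-of-the-envelope check of Scenario 1's numbers.
evaluations = 10 * 365 * 24 * 6        # one evaluation every 10 min for 10 years
gc_kb, ct_kb = 34.8, 4.1               # garbled-circuit and ciphertext sizes
print(evaluations)                          # 525600 garbled circuits at setup
print(round(evaluations * gc_kb / 1e6, 1))  # 18.3 (GB of garbled circuits)
print(round(gc_kb + ct_kb, 1))              # 38.9 (kB downloaded per evaluation)
```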

Scenario 2: Recalling the example in the introduc-tion, we saw that a coalition of countries (Coalition A)


Table 3. Self-Processing Private Sensor Data Performance Summary

           | # Sensors | Evaluation Frequency | System Duration | Trusted Setup Time | Data Size/Evaluation | Evaluation Time
Scenario 1 | 64        | 10 min.              | 10 years        | 56.2 seconds       | 38.9 kB              | 56 µs
Scenario 2 | 16        | 1 hour               | 25 years        | 2.2 minutes        | 66.4 kB              | 385 µs

Scenario 1 corresponds to the DNF functionality on 256 1-bit inputs. Scenario 2 corresponds to the Max functionality on 16 32-bit inputs.

wanted to negotiate a 25-year treaty with Country B

that would allow each country in Coalition A to learn the maximum radiation level observed every hour at any of the 16 sensors deployed throughout Country B. Assuming the radiation measurement can be given as a 32-bit integer, we see from our results in Table 2 that at the trusted setup ceremony, Country B would need to generate 219,000 garbled circuits, which could be done in 2.2 minutes and would take 12.7 GB of storage. Once the garbled circuits have been given to Coalition A and the sensors appropriately deployed in Country B, each member of Coalition A could learn the maximum radiation level observed every hour for the next 25 years by downloading 8.2 kB of ciphertext data every hour and performing a 385 µs computation.
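As with Scenario 1, the setup count and per-circuit storage follow from simple arithmetic (ignoring leap years; per-circuit size is derived here from the 12.7 GB total, so it is approximate):

```python
# Back-of-the-envelope check of Scenario 2's numbers.
evaluations = 25 * 365 * 24            # one evaluation per hour for 25 years
print(evaluations)                     # 219000 garbled circuits at setup
per_gc_kb = 12.7e6 / evaluations       # roughly 58 kB of garbled circuit each
print(round(per_gc_kb + 8.2))          # roughly 66 kB downloaded per hour (cf. Table 3)
```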

We summarize these results in Table 3. We observe that, in general, the trusted setup time scales linearly with the total system duration divided by the evaluation frequency, as this corresponds to the number of garbled circuits that must be generated at setup time. The data size/evaluation corresponds to the size of the function key and ciphertext data that will need to be downloaded to perform a function evaluation. The evaluation time is simply the time to run GE.Dec. Neither of these values depends on the evaluation frequency or the system duration, but rather only on the complexity of the function to be evaluated (which, in turn, is dependent on the number of sensors times the bit-length of the sensor readings, as this is the length of the input to the circuit). We view this as a significant positive, as increasing the system duration only increases the initial trusted setup time and the amount of function key data to be stored, but does not affect the performance of the system at each time step.

8 Practical Deployment Challenges

Our garbled encryption implementation is intended as a proof of concept, and its performance suggests that our construction could be used to build practical self-processing sensor data schemes. However, there are various challenges that would need to be addressed before one would be confident in deploying such a system.

One desirable property would be for our system to be robust to sensor failures. As currently described, a single sensor failure can render the entire system unusable. To mitigate this, in practice, one can either (i) run several trusted setups and deploy the entire system several times in parallel or (ii) deploy the sensors in a redundant manner. If N copies of the system are deployed according to solution (i), then the amount of stored function key data increases by a factor of N and the system stays intact unless one sensor from each of the N copies fails. If N copies of each sensor are deployed (with different labels, but measuring the same data), then the amount of stored function key data increases by N^m, where m is the arity of the functions the scheme supports. However, this system remains intact unless all N copies of the same sensor fail. Which method to employ must be determined based on the likelihood of sensor failure and the arity of functions that the scheme intends to support. Additionally, for systems intended to remain functional for a long period of time, it may be necessary to periodically replace sensors. This can be done by executing the trusted setup again every so often, or preferably, having the initial trusted setup also procure "replacement" sensors to be installed at an appropriate point in the future.
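The trade-off between the two redundancy strategies can be sketched numerically; the failure model below (each sensor fails independently with probability p) and the helper names are our assumptions, not part of the paper:

```python
# Comparing the two redundancy strategies: replicating the whole system N
# times multiplies function-key data by N; replicating each sensor N times
# multiplies it by N**m for arity-m functions. Failure probabilities assume
# independent per-sensor failure probability p (an illustrative model).

def blowup_parallel(N, m):
    return N          # N independent copies of the whole system

def blowup_redundant(N, m):
    return N ** m     # every m-tuple of sensor replicas needs its own keys

def p_fail_parallel(N, num_sensors, p):
    # a copy fails if any of its sensors fails; all N copies must fail
    p_copy_fails = 1 - (1 - p) ** num_sensors
    return p_copy_fails ** N

def p_fail_redundant(N, num_sensors, p):
    # a sensor slot fails only if all N replicas fail; any slot failing breaks it
    return 1 - (1 - p ** N) ** num_sensors
```

For the same N, sensor-level redundancy tolerates failures much better but pays exponentially in the function arity, matching the discussion above.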

Another deployment consideration is the potential compromise of sensors. Indeed, each sensor must be equipped with a secret key, which, if extracted, compromises the entire system from that point forward. Our first observation is that the sensors are deployed by the data owner. That is, in our treaty scenario, Country


B deploys the sensors in its own territory and should be greatly incentivized to protect them from outside tampering. However, in situations where Country B is incentivized to lie about the sensor readings, we want protection from tampering by Country B itself as well. To this end, the sensors can be equipped with a tamper-resistant module [5] to prevent Country B from tampering with the sensor without destroying the key in the process. Moreover, the sensors can be equipped with a trusted platform module (TPM) or a secure hardware enclave to protect the key, with all encryption operations carried out in the trusted execution environment. We can also require Country B to be subject to periodic inspections that would verify that the sensors remain correctly deployed and have not been tampered with if the protection provided by the tamper-resistant module is not sufficient.

A final concern is that of clock synchronization. In particular, in order for the system to remain accurate, we need noncommunicating sensors to have the same system time. Over the course of the lifetime of the system, the various clocks on the different sensors may fall out of sync. A first observation is that in a variety of scenarios (for example, in cases where we are taking a reading every hour), taking the reading exactly on the hour is not of importance, and we would not expect the clocks to go out of sync by more than an hour. For situations with much more frequent readings, clock synchronization may be an issue. In such cases, one could use the various techniques on clock synchronization found in the literature, which can handle even a limited connectivity network between the sensors (see, for example, [16]).

9 Conclusions and Related Work

Conclusions
Our garbled encryption implementation was intended as a proof of concept to show that our construction is practical for various self-processing sensor data scenarios. We believe that garbled encryption is itself a useful primitive that may find other applications beyond self-processing sensor data.

Garbled Circuits
Garbled circuits were first introduced by Yao [33] and have been studied extensively over the years. [26] provides a formal proof and a rigorous treatment of Yao's construction, and [11] gives abstractions of garbling schemes and formal proofs of security. There have been many works, such as [8, 22–24, 27, 30, 34], dedicated to reducing the amount of data that must be transmitted per garbled gate. Additionally, there have been various implementations (for example, [9, 20, 21, 32, 34]) of garbled circuits.

Multi-input Functional Encryption and Practical FE
MiFE was first introduced in [19]. Constructing such schemes and variants has been a topic of much research in recent years [3, 4, 12, 14, 15, 17, 25]. [28] introduced a relaxation of functional encryption, called controlled functional encryption, in order to obtain an implementable variant of FE. While our study of garbled encryption comes from a similar motivation (namely, efficiency improvements), our approach is quite different. Specifically, unlike garbled encryption, controlled FE requires a trusted authority to remain online at all times and issue function keys for a function and ciphertext pair that depend on the ciphertext. Furthermore, their work focuses on the single-input setting, whereas we focus on the multi-input setting.

10 Acknowledgements

We thank David J. Wu for pointing us to the garbled circuit library of [21] and the anonymous PETS reviewers for suggesting key evolution to protect the data owner's past privacy in the event that a sensor is compromised.

Nathan Manohar and Amit Sahai are supported in part by DARPA SAFEWARE and SIEVE awards, NTT Research, NSF Frontier Award 1413955, NSF grant 1619348, BSF grant 2012378, a Xerox Faculty Research Award, a Google Faculty Research Award, an equipment grant from Intel, and an Okawa Foundation Research Grant. This material is based upon work supported by the Defense Advanced Research Projects Agency through Award HR00112020024 and the ARL under Contract W911NF-15-C-0205. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense, the National Science Foundation, NTT Research, or the U.S. Government.

Abhishek Jain is supported in part by DARPA Safeware grant W911NF-15-C-0213, NSF grant 1814919, a Samsung research award, a Johns Hopkins University Catalyst award, and NSF CAREER award 1942789.


References
[1] [n.d.]. http://www.basel.int/
[2] [n.d.]. https://unfccc.int/process/the-kyoto-protocol
[3] Michel Abdalla, Romain Gay, Mariana Raykova, and Hoeteck Wee. 2017. Multi-input Inner-Product Functional Encryption from Pairings. In Advances in Cryptology - EUROCRYPT 2017 - 36th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Paris, France, April 30 - May 4, 2017, Proceedings, Part I. 601–626.
[4] Prabhanjan Ananth and Abhishek Jain. 2015. Indistinguishability Obfuscation from Compact Functional Encryption. In Advances in Cryptology – CRYPTO 2015: 35th Annual Cryptology Conference, Santa Barbara, CA, USA, August 16-20, 2015, Proceedings, Part I, Rosario Gennaro and Matthew Robshaw (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 308–326. https://doi.org/10.1007/978-3-662-47989-6_15
[5] Ross J. Anderson. 2008. Security Engineering: A Guide to Building Dependable Distributed Systems (2 ed.). Wiley Publishing.
[6] BBC. [n.d.]. https://www.bbc.com/news/business-34324772
[7] BBC. [n.d.]. Iran nuclear deal: Key details. https://www.bbc.com/news/world-middle-east-33521655
[8] D. Beaver, S. Micali, and P. Rogaway. 1990. The Round Complexity of Secure Protocols. In Proceedings of the Twenty-second Annual ACM Symposium on Theory of Computing (STOC '90). ACM, New York, NY, USA, 503–513. https://doi.org/10.1145/100216.100287
[9] Mihir Bellare, Viet Tung Hoang, Sriram Keelveedhi, and Phillip Rogaway. 2013. Efficient Garbling from a Fixed-Key Blockcipher. In Proceedings of the 2013 IEEE Symposium on Security and Privacy (SP '13). IEEE Computer Society, Washington, DC, USA, 478–492. https://doi.org/10.1109/SP.2013.39
[10] Mihir Bellare, Viet Tung Hoang, and Phillip Rogaway. 2012. Adaptively Secure Garbling with Applications to One-Time Programs and Secure Outsourcing. In Advances in Cryptology – ASIACRYPT 2012, Xiaoyun Wang and Kazue Sako (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 134–153.
[11] Mihir Bellare, Viet Tung Hoang, and Phillip Rogaway. 2012. Foundations of Garbled Circuits. In Proceedings of the 2012 ACM Conference on Computer and Communications Security (CCS '12). ACM, New York, NY, USA, 784–796. https://doi.org/10.1145/2382196.2382279
[12] Dan Boneh, Kevin Lewi, Mariana Raykova, Amit Sahai, Mark Zhandry, and Joe Zimmerman. 2015. Semantically Secure Order-Revealing Encryption: Multi-input Functional Encryption Without Obfuscation. In Advances in Cryptology - EUROCRYPT 2015: 34th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Sofia, Bulgaria, April 26-30, 2015, Proceedings, Part II. Springer Berlin Heidelberg, Berlin, Heidelberg, 563–594. https://doi.org/10.1007/978-3-662-46803-6_19
[13] Dan Boneh, Amit Sahai, and Brent Waters. 2011. Functional Encryption: Definitions and Challenges. In Theory of Cryptography - 8th Theory of Cryptography Conference, TCC 2011, Providence, RI, USA, March 28-30, 2011. Proceedings. 253–273.
[14] Zvika Brakerski, Ilan Komargodski, and Gil Segev. 2016. Multi-input Functional Encryption in the Private-Key Setting: Stronger Security from Weaker Assumptions. In Advances in Cryptology - EUROCRYPT 2016 - 35th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Vienna, Austria, May 8-12, 2016, Proceedings, Part II. 852–880.
[15] Brent Carmer, Alex J. Malozemoff, and Mariana Raykova. 2017. 5Gen-C: Multi-input Functional Encryption and Program Obfuscation for Arithmetic Circuits. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, October 30 - November 03, 2017. 747–764.
[16] Flaviu Cristian. 1989. Probabilistic Clock Synchronization. Distrib. Comput. 3, 3 (Sept. 1989), 146–158. https://doi.org/10.1007/BF01784024
[17] Ben Fisch, Dhinakaran Vinayagamurthy, Dan Boneh, and Sergey Gorbunov. 2017. IRON: Functional Encryption using Intel SGX. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, October 30 - November 03, 2017. 765–782. https://doi.org/10.1145/3133956.3134106
[18] Oded Goldreich, Silvio Micali, and Avi Wigderson. 1987. How to Play any Mental Game or A Completeness Theorem for Protocols with Honest Majority. In STOC.
[19] Shafi Goldwasser, S. Dov Gordon, Vipul Goyal, Abhishek Jain, Jonathan Katz, Feng-Hao Liu, Amit Sahai, Elaine Shi, and Hong-Sheng Zhou. 2014. Multi-input Functional Encryption. In Advances in Cryptology – EUROCRYPT 2014: 33rd Annual International Conference on the Theory and Applications of Cryptographic Techniques, Copenhagen, Denmark, May 11-15, 2014. Proceedings. Springer Berlin Heidelberg, Berlin, Heidelberg, 578–602. https://doi.org/10.1007/978-3-642-55220-5_32
[20] Adam Groce, Alex Ledger, Alex J. Malozemoff, and Arkady Yerukhimovich. 2016. CompGC: Efficient Offline/Online Semi-honest Two-party Computation. Cryptology ePrint Archive, Report 2016/458. https://eprint.iacr.org/2016/458.
[21] Karthik A. Jagadeesh, David J. Wu, Johannes A. Birgmeier, Dan Boneh, and Gill Bejerano. 2017. Deriving genomic diagnoses without revealing patient genomes. Science 357, 6352 (2017), 692–695. https://doi.org/10.1126/science.aam9710
[22] Vladimir Kolesnikov, Payman Mohassel, and Mike Rosulek. 2014. FleXOR: Flexible Garbling for XOR Gates That Beats Free-XOR. In Advances in Cryptology - CRYPTO 2014 - 34th Annual Cryptology Conference, Santa Barbara, CA, USA, August 17-21, 2014, Proceedings, Part II. 440–457.
[23] Vladimir Kolesnikov, Ahmad-Reza Sadeghi, and Thomas Schneider. 2009. Improved Garbled Circuit Building Blocks and Applications to Auctions and Computing Minima. In Proceedings of the 8th International Conference on Cryptology and Network Security (CANS '09). Springer-Verlag, Berlin, Heidelberg, 1–20. https://doi.org/10.1007/978-3-642-10433-6_1


[24] Vladimir Kolesnikov and Thomas Schneider. 2008. Improved Garbled Circuit: Free XOR Gates and Applications. In Proceedings of the 35th International Colloquium on Automata, Languages and Programming, Part II (ICALP '08). Springer-Verlag, Berlin, Heidelberg, 486–498. https://doi.org/10.1007/978-3-540-70583-3_40
[25] Kevin Lewi, Alex J. Malozemoff, Daniel Apon, Brent Carmer, Adam Foltzer, Daniel Wagner, David W. Archer, Dan Boneh, Jonathan Katz, and Mariana Raykova. 2016. 5Gen: A Framework for Prototyping Applications Using Multilinear Maps and Matrix Branching Programs. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS '16). ACM, New York, NY, USA, 981–992. https://doi.org/10.1145/2976749.2978314
[26] Yehuda Lindell and Benny Pinkas. 2009. A Proof of Security of Yao's Protocol for Two-Party Computation. J. Cryptol. 22, 2 (April 2009), 161–188. https://doi.org/10.1007/s00145-008-9036-8
[27] Moni Naor, Benny Pinkas, and Reuban Sumner. 1999. Privacy Preserving Auctions and Mechanism Design. In Proceedings of the 1st ACM Conference on Electronic Commerce (EC '99). ACM, New York, NY, USA, 129–139. https://doi.org/10.1145/336992.337028
[28] Muhammad Naveed, Shashank Agrawal, Manoj Prabhakaran, XiaoFeng Wang, Erman Ayday, Jean-Pierre Hubaux, and Carl Gunter. 2014. Controlled Functional Encryption. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security (CCS '14). ACM, New York, NY, USA, 1280–1291. https://doi.org/10.1145/2660267.2660291
[29] Adam O'Neill. 2010. Definitional Issues in Functional Encryption. IACR Cryptology ePrint Archive 2010 (2010), 556. http://eprint.iacr.org/2010/556
[30] Benny Pinkas, Thomas Schneider, Nigel P. Smart, and Stephen C. Williams. 2009. Secure Two-Party Computation Is Practical. In Proceedings of the 15th International Conference on the Theory and Application of Cryptology and Information Security: Advances in Cryptology (ASIACRYPT '09). Springer-Verlag, Berlin, Heidelberg, 250–267. https://doi.org/10.1007/978-3-642-10366-7_15
[31] Amit Sahai and Brent Waters. 2005. Fuzzy Identity-Based Encryption. In Advances in Cryptology - EUROCRYPT 2005, 24th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Aarhus, Denmark, May 22-26, 2005, Proceedings. 457–473.
[32] Ebrahim M. Songhori, Siam U. Hussain, Ahmad-Reza Sadeghi, Thomas Schneider, and Farinaz Koushanfar. 2015. TinyGarble: Highly Compressed and Scalable Sequential Garbled Circuits. In Proceedings of the 2015 IEEE Symposium on Security and Privacy (SP '15). IEEE Computer Society, Washington, DC, USA, 411–428. https://doi.org/10.1109/SP.2015.32
[33] Andrew Chi-Chih Yao. 1986. How to Generate and Exchange Secrets. In Proceedings of the 27th Annual Symposium on Foundations of Computer Science (SFCS '86). IEEE Computer Society, Washington, DC, USA, 162–167. https://doi.org/10.1109/SFCS.1986.25
[34] Samee Zahur, Mike Rosulek, and David Evans. 2015. Two Halves Make a Whole - Reducing Data Transfer in Garbled Circuits Using Half Gates. In Advances in Cryptology - EUROCRYPT 2015 - 34th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Sofia, Bulgaria, April 26-30, 2015, Proceedings, Part II. 220–250.

A Deferred Definitions

A.1 Security Definition for Garbled Encryption

Definition 5 (IND-secure Garbled Encryption). A garbled encryption scheme GE for n-ary functions F = {F_λ}_{λ∈N}, message space X = {X_λ}_{λ∈N}, and query pattern Q is selectively secure if for any PPT adversary A, there exists a negligible function µ(·) such that for all sufficiently large λ ∈ N, the advantage of A is

Adv^GE_A = |Pr[Expt^GE_A(1^λ, 0) = 1] − Pr[Expt^GE_A(1^λ, 1) = 1]| ≤ µ(λ),

where for each b ∈ {0, 1} and λ ∈ N, the experiment Expt^GE_A(1^λ, b) is defined below:
1. Chal computes GE.msk ← GE.Setup(1^λ).
2. Challenge message queries: The following is repeated at most a polynomial number of times: A submits queries (x_{i,0}, x_{i,1}, j_i), with x_{i,0}, x_{i,1} ∈ X_λ and j_i ∈ N, to the challenger Chal. Chal computes GE.ct_i ← GE.Enc(GE.msk, x_{i,b}, j_i). The challenger Chal then sends GE.ct_i to the adversary A.
3. Function queries: The following is repeated at most a polynomial number of times: A submits a function query (f, Q) ∈ F_λ × Q to Chal. The challenger Chal computes GE.sk_{f,Q} ← GE.KeyGen(GE.msk, f, Q) and sends it to A.
4. If there exists a function query (f, Q) and challenge message queries ((x_{1,0}, x_{1,1}, j_1), . . . , (x_{n,0}, x_{n,1}, j_n)) such that f(x_{1,0}, . . . , x_{n,0}) ≠ f(x_{1,1}, . . . , x_{n,1}) and (j_1, . . . , j_n) ∈ Q for j_1, . . . , j_n ∈ N, then the output of the experiment is set to ⊥. Otherwise, the output of the experiment is set to b′, where b′ is the output of A.

Definition 6 (Adaptive IND-secure Garbled Encryption). A garbled encryption scheme GE for n-ary functions F = {F_λ}_{λ∈N}, message space X = {X_λ}_{λ∈N}, and query pattern Q is adaptively secure if for any PPT adversary A, there exists a negligible function µ(·) such that for all sufficiently large λ ∈ N, the advantage of A is

Adv^GE_A = |Pr[Expt^GE_A(1^λ, 0) = 1] − Pr[Expt^GE_A(1^λ, 1) = 1]| ≤ µ(λ),

where for each b ∈ {0, 1} and λ ∈ N, the experiment Expt^GE_A(1^λ, b) is defined below:
1. Chal computes GE.msk ← GE.Setup(1^λ).
2. The adversary is allowed to make the following two types of queries in any arbitrary order.
   – Challenge message query: A submits a query (i, x_{i,0}, x_{i,1}), with x_{i,0}, x_{i,1} ∈ X_λ and i ∈ N, to the challenger Chal. Chal computes GE.ct_i ← GE.Enc(GE.msk, x_{i,b}, i). The challenger Chal then sends GE.ct_i to the adversary A.
   – Function query: A submits a function query (f, Q) ∈ F_λ × Q to Chal. The challenger Chal computes GE.sk_{f,Q} ← GE.KeyGen(GE.msk, f, Q) and sends it to A.
3. If there exists a function query (f, Q) and challenge message queries ((i_1, x_{1,0}, x_{1,1}), . . . , (i_n, x_{n,0}, x_{n,1})) such that f(x_{1,0}, . . . , x_{n,0}) ≠ f(x_{1,1}, . . . , x_{n,1}) and (i_1, . . . , i_n) ∈ Q for i_1, . . . , i_n ∈ N, then the output of the experiment is set to ⊥. Otherwise, the output of the experiment is set to b′, where b′ is the output of A.

B Deferred Proofs

B.1 Proof of Thm. 2

By Thm. 1, there exists GC = (Gen, GrbI, GrbC, EvalGC), a secure garbling scheme. Let Π = (KeyGen, Enc, Dec) be an IND-CPA secure symmetric key encryption scheme. Consider the following garbling scheme GC′ = (Gen′, GrbI′, GrbC′, EvalGC′) that is chosen-wire key secure.
– Gen′ = Gen and GrbI′ = GrbI.
– GrbC′(gcsk, C, k⃗): GrbC′ first generates wire keys k⃗′ uniformly at random. It then runs GrbC(gcsk, C, k⃗′) to obtain C̃. For all i ∈ [ℓ] and j ∈ {0, 1}, GrbC′ computes ct_i^j = Enc(k_i^j, k′_i^j || 0 . . . 0), where there are λ 0's padded at the end of the message. For each i ∈ [ℓ], it generates a random bit b_i ∈ {0, 1} and sets ct′_i^0 = ct_i^{b_i} and ct′_i^1 = ct_i^{¬b_i}. It then outputs

(C̃, {ct′_i^j}_{i∈[ℓ], j∈{0,1}})

as its garbled circuit.
– EvalGC′((C̃, {ct_i^j}_{i∈[ℓ], j∈{0,1}}), (k_1^{x_1}, . . . , k_ℓ^{x_ℓ})): For all i ∈ [ℓ], EvalGC′ runs Dec(k_i^{x_i}, ct_i^0) and Dec(k_i^{x_i}, ct_i^1). If neither or both of these resulting decryptions have λ 0's padded at the end, EvalGC′ returns ⊥. Otherwise, it removes the λ 0's from the decryption that has them and sets the result to be k′_i^{x_i}. EvalGC′ then runs EvalGC(C̃, (k′_1^{x_1}, . . . , k′_ℓ^{x_ℓ})) and outputs the result.

Correctness:
This follows immediately from the correctness of the original garbling scheme GC and the correctness and security of the encryption scheme Π. If we are given a garbled circuit (C̃, {ct_i^j}_{i∈[ℓ], j∈{0,1}}) and wire keys (k_1^{x_1}, . . . , k_ℓ^{x_ℓ}), then by construction, for each i ∈ [ℓ], one of the ct_i^j's will decrypt using k_i^{x_i} to give k′_i^{x_i} || 0 . . . 0. With all but negl(λ) probability, none of the others will decrypt to a value that has λ 0's at the end. Therefore, with overwhelming probability, EvalGC′ will correctly recover (k′_1^{x_1}, . . . , k′_ℓ^{x_ℓ}) and, by the correctness of GC, the EvalGC computation will output C(x).

Chosen Wire-Key Security:
The simulator SimGC′(1^λ, φ(C), C(x), k) for GC′ runs as follows: It first runs the simulator for GC, SimGC(1^λ, φ(C), C(x)), to obtain (C̃, (k′_1, . . . , k′_ℓ)). It then computes, for all i ∈ [ℓ], ct_i = Enc(k_i, k′_i || 0 . . . 0), where k′_i is padded by λ 0's. It then generates wire keys r_1, . . . , r_ℓ and r′_1, . . . , r′_ℓ uniformly at random and computes ct′_i = Enc(r_i, r′_i || 0 . . . 0) for all i ∈ [ℓ]. Then, for all i ∈ [ℓ], it uniformly generates a bit b_i ∈ {0, 1} and sets ct_i^{b_i} = ct_i and ct_i^{¬b_i} = ct′_i. It then outputs

(C̃, {ct_i^j}_{i∈[ℓ], j∈{0,1}})

as its garbled circuit.

We will now show that the output of SimGC′ is indistinguishable from the honestly generated garbled circuit.

Let H_0 denote the "real" game where, on input a circuit C ∈ C of input length ℓ, x ∈ {0, 1}^ℓ, and wire keys k = (k_1, . . . , k_ℓ), the garbled circuit is generated normally as in Definition 2.


Let H_{ℓ+1} denote the "fake" game where, on the above input, the garbled circuit is generated by running SimGC′(1^λ, φ(C), C(x), k).

For i ∈ [ℓ], let H_i denote the intermediate game where, on the above input, the garbled circuit is generated as in H_{i−1} except that when GrbC′ computes ct_i^{¬x_i} using Enc, it sets

ct_i^{¬x_i} = Enc(k_i^{¬x_i}, r_i || 0 . . . 0),

where r_i is a uniformly random wire key. So, in H_ℓ, the garbled circuit generator only encrypts the wire keys corresponding to x from C̃ ← GrbC(gcsk, C, k⃗′) using the input wire keys k = (k_1, . . . , k_ℓ) and does not encrypt the wire keys of C̃ corresponding to the bitwise negation of x.

Lemma 1. For all i ∈ [ℓ], H_{i−1} ≈c H_i.

Suppose there exists an algorithm A that can distinguish between H_{i−1} and H_i with nonnegligible probability. Then, consider the following algorithm A′ that can break the IND-CPA security of Π. A′ runs A and, when A makes an oracle query (C, x, k), A′ runs the H_{i−1} game except at the step where it would usually compute

ct_i^{¬x_i} = Enc(k_i^{¬x_i}, k′_i^{¬x_i} || 0 . . . 0).

Instead, it submits the pair of messages (k′_i^{¬x_i} || 0 . . . 0, r_i || 0 . . . 0) to its oracle and sets ct_i^{¬x_i} to be its result. A′ then outputs the response of A. Note that if the random bit b of A′'s oracle was set to 0, then it perfectly simulates H_{i−1}, and if its random bit was set to 1, it perfectly simulates H_i. Therefore, if A can distinguish these two hybrids, then A′ can break the IND-CPA security of Π. Therefore, it must be that H_{i−1} ≈c H_i.

Lemma 2. H_ℓ ≈c H_{ℓ+1}.

Suppose there exists an algorithm A that can distinguish between H_ℓ and H_{ℓ+1} with nonnegligible probability. Then, there exists an algorithm A′ that can break the security of the secure garbling scheme GC used in the construction of GC′. A′ simply runs SimGC′(1^λ, φ(C), C(x), k) and, when SimGC′ would run SimGC, the simulator for GC, A′ queries its oracle on (C, x) and uses this as the output of SimGC. Note that if A′'s oracle is the actual garbling scheme, it perfectly simulates H_ℓ, and if its oracle is SimGC, it perfectly simulates H_{ℓ+1}. Therefore, if A exists, then A′ can distinguish between the output of SimGC and the actual garbled circuit, contradicting the security of GC.

So, it follows that H_0 ≈c H_{ℓ+1} and therefore, GC′ is a chosen-wire key secure garbling scheme.

B.2 Proof of Thm. 4

We will show that Expt^GE_A(1^λ, 0) ≈c Expt^GE_A(1^λ, 1) via a series of hybrid games.

Hybrid game H_0(1^λ, b).
This game is the same as Expt^GE_A(1^λ, b) except for the following: Whenever the challenger Chal would generate wire keys corresponding to an index j or a function query (f, Q), Chal generates these wire keys uniformly at random instead of running PRF.Eval. It then stores these wire keys in a lookup table, and if it ever needs a wire key corresponding to an index or function query that was previously generated, it uses the value stored in the table instead of generating a new random value. This can be viewed as modifying Expt^GE_A(1^λ, b) by replacing PRF.Eval(GE.msk, ·) by a truly random function.

Hybrid game H_1(1^λ, b).
This game is the same as H_0(1^λ, b) except for the following. Whenever the challenger Chal would generate a garbled circuit C̃ for a function query (f, (j_1, . . . , j_n)), Chal generates the garbled circuit by running the garbled circuit simulator

SimGC(1^λ, φ(C), f(x_1, . . . , x_n), k),

where x_1, . . . , x_n are the messages corresponding to the indices j_1, . . . , j_n, respectively, and k are the wire keys corresponding to (x_1, j_1), . . . , (x_n, j_n) (by finding them in the stored lookup table if they had been generated previously, or else by choosing them uniformly at random). Note that this is possible because A must submit all challenge message queries prior to any function key queries. If a message x_i is undefined, Chal samples it uniformly at random so that f(x_1, . . . , x_n) is defined.


Lemma 3. Expt^GE_A(1^λ, b) ≈c H_0(1^λ, b).

Suppose there exists some algorithm A that can distinguish between the two games with nonnegligible advantage. Then, consider the algorithm A′ that can distinguish PRF from a truly random function with nonnegligible advantage. A′ runs A and simulates the role of the challenger Chal in the game Expt^GE_A(1^λ, b) with the following modification. Whenever Chal would compute PRF.Eval(GE.msk, ·) on some input x, Chal instead queries its oracle F on x. If A guesses that it is playing game H_0(1^λ, b), A′ guesses that F is a truly random function, and if A guesses that it is playing Expt^GE_A(1^λ, b), A′ guesses that F is a PRF. Note that if F is a PRF, then A′ simulates the game Expt^GE_A(1^λ, b) exactly. Similarly, if F is a truly random function, then A′ simulates the game H_0(1^λ, b) exactly. Therefore, it follows that A′ could distinguish between PRF and a truly random function with nonnegligible advantage, breaking the pseudorandomness property of PRF.

Lemma 4. H0(1^λ, b) ≈c H1(1^λ, b).

Suppose there exists some algorithm A that can distinguish between the two games with nonnegligible advantage. Then, consider the following algorithm A′, which breaks the chosen-wire-key security of GC.

A′ runs A and simulates the role of the challenger Chal in the game H0(1^λ, b) with the following modification. Whenever Chal would need to output a garbled circuit C in response to a function query (f, (j1, . . . , jn)), A′ instead queries its oracle on the circuit C, the input x, and the garbled circuit labels ~k, and outputs the resulting garbled circuit. Here, C is the circuit computing f, x is the input x1|| . . . ||xn, where xi is the message corresponding to index ji, and ~k are the wire keys corresponding to the indices (j1, . . . , jn). If any of the messages xi or wire keys in ~k are undefined (no message was submitted corresponding to ji), A′ generates them uniformly at random and stores them in a lookup table. If A guesses that it is playing game H0(1^λ, b), then A′ guesses that its oracle is GrbC, and if A guesses that it is playing H1(1^λ, b), then A′ guesses that its oracle is SimGC.

Note that if A′'s oracle is GrbC, then A′ simulates the game H0(1^λ, b) perfectly. Similarly, if A′'s oracle is SimGC, then A′ simulates the game H1(1^λ, b) perfectly. Therefore, A′ distinguishes the output of GrbC with specified input wire keys from the output of SimGC with nonnegligible advantage, contradicting the chosen-wire-key security of the garbling scheme GC.

Lemma 5. H1(1^λ, 0) ≈c H1(1^λ, 1).

Since the adversary's view in both of these games does not depend on the actual values of the messages, but only on the function evaluations f(x1, . . . , xn) that the adversary can compute, and since the adversary can only submit challenge message pairs that agree on all computable function evaluations, it follows that the view of any admissible adversary in these games is identical.

Combining the above lemmas, it follows that

Expt^GE_A(1^λ, 0) ≈c Expt^GE_A(1^λ, 1).

B.3 Proof of Thm. 5

We will show that Expt^GE_A(1^λ, 0) ≈c Expt^GE_A(1^λ, 1) via a series of hybrid games.

Hybrid game H0(1^λ, τ).
This game is the same as Expt^GE_A(1^λ, τ) except for the following: whenever the challenger Chal would generate wire keys corresponding to an index j, Chal generates these wire keys uniformly at random instead of running PRF.Eval. It then stores these wire keys in a lookup table, and if it ever needs a wire key corresponding to an index that was previously generated, it uses the value stored in the table instead of generating a new random value. This can be viewed as modifying Expt^GE_A(1^λ, τ) by replacing PRF.Eval(GE.msk, ·) with a truly random function.

Hybrid game H1(1^λ, τ).
This game is the same as H0(1^λ, τ) except for the following: whenever the challenger Chal would respond to a function query, it generates the X^b_α's uniformly at random and programs O1 so that O1(k^b_α||V||Z) ⊕ O(Yα) = X^b_α. Similarly, it generates the T^b_α values uniformly at random and programs O2 so that O2(k^b_α||V) ⊕ (Zα||0^λ) = T^b_α.

Hybrid game H2(1^λ, τ).
This game is the same as H1(1^λ, τ) except for the following: whenever the challenger Chal would generate a garbled circuit C for a function query (f, (j1, . . . , jn)), Chal generates the garbled circuit C by running the adaptive garbled circuit simulator

C ← SimGC(1^λ, φ(C)).

It sets T^b_α for α ∈ [ℓ], b ∈ {0, 1} to be uniformly random strings.

Whenever the challenger Chal has given out both a function key (C, V, {T^0_α, T^1_α}α∈[ℓ], Q) and an input label kα for wire α ∈ [ℓ], it runs SimGC on the same input query to obtain the simulated label k′α = (Xα ⊕ O(Yα), Zα). It then samples a uniformly random bit b′ and programs O2 so that O2(kα||V) ⊕ T^b′_α = Zα||0^λ. Once it has submitted an input query for every α ∈ [ℓ], it programs O1 so that O1(kα||V||Z) ⊕ O(Yα) = Xα.
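The oracle "programming" used in these hybrids can be pictured as maintaining a dictionary per oracle: unqueried points are lazily sampled, and the challenger may fix the output at a not-yet-queried point. A sketch under these assumptions (all names hypothetical; programming only fails if the point was already queried, a negligible-probability event in the proofs since the point contains an unpredictable wire key):

```python
import os

class ProgrammableOracle:
    """Random oracle with lazy sampling plus programming: the
    challenger may fix the output at a point not yet queried."""

    def __init__(self, out_bytes=16):
        self.out_bytes = out_bytes
        self.table = {}

    def query(self, x: bytes) -> bytes:
        # Lazily sample fresh randomness for new points.
        if x not in self.table:
            self.table[x] = os.urandom(self.out_bytes)
        return self.table[x]

    def program(self, x: bytes, y: bytes) -> None:
        # Fails only if x was already queried with a conflicting value.
        if x in self.table and self.table[x] != y:
            raise RuntimeError("point already queried; cannot program")
        self.table[x] = y

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

# e.g. program O1 so that O1(k||V||Z) xor O(Y_alpha) = X_alpha:
O1 = ProgrammableOracle()
k_V_Z, X_alpha, O_Y = os.urandom(16), os.urandom(16), os.urandom(16)
O1.program(k_V_Z, xor(X_alpha, O_Y))
assert xor(O1.query(k_V_Z), O_Y) == X_alpha
```

The final assertion mirrors the equation the hybrid enforces: after programming, the adversary's own O1 query at k||V||Z is consistent with the uniformly sampled X_α.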

Hybrid game H3(1^λ, τ).
This game is the same as H2(1^λ, τ) except for the following: whenever the challenger Chal would generate wire keys k^0_α, k^1_α for an input index α ∈ [ℓ], it instead generates a single wire key kα and uses this as the value for both k^0_α and k^1_α.

Lemma 6. Expt^GE_A(1^λ, τ) ≈c H0(1^λ, τ).

Suppose there exists some algorithm A that can distinguish between the two games with nonnegligible advantage. Then, consider the following algorithm A′, which distinguishes PRF from a truly random function with nonnegligible advantage.

A′ runs A and simulates the role of the challenger Chal in the game Expt^GE_A(1^λ, τ) with the following modification. Whenever Chal would compute PRF.Eval(GE.msk, ·) on some input x, Chal instead queries its oracle F on x. If A guesses that it is playing game H0(1^λ, τ), A′ guesses that F is a truly random function, and if A guesses that it is playing Expt^GE_A(1^λ, τ), A′ guesses that F is a PRF.

Note that if F is a PRF, then A′ simulates the game Expt^GE_A(1^λ, τ) exactly. Similarly, if F is a truly random function, then A′ simulates the game H0(1^λ, τ) exactly. Therefore, A′ distinguishes PRF from a truly random function with nonnegligible advantage, breaking the pseudorandomness property of PRF.

Lemma 7. H0(1^λ, τ) ≈c H1(1^λ, τ).

These two hybrids have identical behavior and are therefore indistinguishable.

Lemma 8. H1(1^λ, τ) ≈c H2(1^λ, τ).

Suppose there exists some algorithm A that can distinguish between the two games with nonnegligible advantage. Then, consider the following algorithm A′, which breaks the adaptive security of GC.

A′ runs A and simulates the role of the challenger Chal in the game H1(1^λ, τ) with the following modification. Whenever Chal would need to output a garbled circuit C in response to a function query (f, (j1, . . . , jn)), A′ instead queries its oracle on the circuit C to receive a garbled circuit C. Whenever A makes an input query, if a corresponding function query has been made, A′ queries its oracle on this input and sets k′α to be the received value. If not, A′ queries its oracle on this input once a corresponding function query has been made.

Note that if A′'s oracle is the real garbled circuit, then A′ perfectly simulates H1(1^λ, τ) for A. If A′'s oracle is the simulated garbled circuit, then A′ perfectly simulates H2(1^λ, τ) for A.

Lemma 9. H2(1^λ, τ) ≈c H3(1^λ, τ).

These two hybrids have identical behavior: since at most one of k^0_α or k^1_α is ever used, the fact that they are identical is irrelevant.

Lemma 10. H3(1^λ, 0) ≈c H3(1^λ, 1).

Since the adversary's view in both of these games does not depend on the actual values of the messages, but only on the function evaluations f(x1, . . . , xn) that the adversary can compute, and since the adversary can only submit challenge message pairs that agree on all computable function evaluations, it follows that the view of any admissible adversary in these games is identical. Note that the input queries to SimGC do depend on the input, but SimGC's behavior depends only on the function evaluation and not on the actual input; therefore the views are identical.

Combining the above lemmas, it follows that

Expt^GE_A(1^λ, 0) ≈c Expt^GE_A(1^λ, 1).

C Time-Based Garbled Encryption

In this section, we formally define, construct, and prove adaptive security for time-based garbled encryption.

C.1 Definition

Syntax.
Let X = {Xλ}λ∈N and Y = {Yλ}λ∈N be ensembles where Xλ, Yλ are sets each with size dependent on λ. Let F = {Fλ}λ∈N be an ensemble where each Fλ is a finite collection of n-ary functions. Each function f ∈ Fλ takes as input strings x1, . . . , xn, where each xi ∈ Xλ, and outputs f(x1, . . . , xn) ∈ Yλ. Let Q denote a set of sets, where each set Q ∈ Q is a subset of N^n.

A time-based garbled encryption scheme TGE for n-ary functions F and query pattern Q consists of four algorithms (TGE.Setup, TGE.KeyGen, TGE.Enc, TGE.Dec), described below:
– Setup. TGE.Setup(1^λ) is a PPT algorithm that takes as input a security parameter λ and outputs the master secret key TGE.msk.
– Key Generation. TGE.KeyGen(TGE.msk, f, Q, t) is a PPT algorithm that takes as input the master secret key TGE.msk, a function f ∈ Fλ, a set Q ∈ Q, and a time offset t ∈ N. It outputs a functional key TGE.skf,Q,t.
– Encryption. TGE.Enc(TGE.msk, m, i, t) is a PPT algorithm that takes as input the master secret key TGE.msk, a message m ∈ Xλ, an index i ∈ N, and a time offset t ∈ N. It outputs a ciphertext TGE.ct. TGE.Enc is index dependent, and the ciphertext TGE.ct has an associated index i. If TGE.Enc is asked to encrypt to an index j to which it has previously encrypted, it outputs ⊥.
– Decryption. TGE.Dec(TGE.skf,Q,t, TGE.ct1, . . . , TGE.ctn) is a deterministic algorithm that takes as input a functional key TGE.skf,Q,t and n ciphertexts TGE.ct1, . . . , TGE.ctn. It outputs a value y ∈ Yλ ∪ {⊥}.

Correctness.
There exists a negligible function negl(·) such that for all sufficiently large λ ∈ N, every n-ary function f ∈ Fλ, set Q ∈ Q, point (j1, . . . , jn) ∈ Q, time offset t ∈ N, and input tuple (x1, . . . , xn) ∈ X^n_λ,

Pr[ TGE.msk ← TGE.Setup(1^λ);
    TGE.skf,Q,t ← TGE.KeyGen(TGE.msk, f, Q, t);
    TGE.Dec(TGE.skf,Q,t, (TGE.Enc(TGE.msk, xi, ji, t))^n_{i=1}) ≠ f(x1, . . . , xn) ] ≤ negl(λ),

where the probability is taken over the random coins of all the algorithms.

Adaptive Security.
The adaptive security definition for time-based garbled encryption is analogous to the one for garbled encryption, updated to match the new syntax. Furthermore, we allow the adversary to make key queries for a time t ∈ N, and the adversary is given back O^(t)(TGE.msk), where O : {0, 1}^λ → {0, 1}^λ is the random oracle used in the construction to ratchet keys forward. In the sensor application, this corresponds to the situation where the adversary compromises a sensor at time t and learns the key stored on the sensor.

Definition 7 (Adaptive Security). A time-based garbled encryption scheme TGE for n-ary functions F = {Fλ}λ∈N, message space X = {Xλ}λ∈N, and query pattern Q is adaptively secure with respect to key oracle O if for any PPT adversary A, there exists a negligible function µ(·) such that for all sufficiently large λ ∈ N, the advantage of A is

Adv^TGE_A = |Pr[Expt^TGE_A(1^λ, 0) = 1] − Pr[Expt^TGE_A(1^λ, 1) = 1]| ≤ µ(λ),

where for each b ∈ {0, 1} and λ ∈ N, the experiment Expt^TGE_A(1^λ, b) is defined below:
1. Chal computes TGE.msk ← TGE.Setup(1^λ).
2. The adversary is allowed to make the following three types of queries in any arbitrary order.
– Challenge message query: A submits a query (i, xi,0, xi,1, t), with xi,0, xi,1 ∈ Xλ, i ∈ N, and t ∈ N, to the challenger Chal. Chal computes TGE.cti ← TGE.Enc(TGE.msk, xi,b, i, t) and sends TGE.cti to the adversary A.
– Function query: A submits a function query (f, Q, t) ∈ Fλ × Q × N to Chal. The challenger Chal computes TGE.skf,Q,t ← TGE.KeyGen(TGE.msk, f, Q, t) and sends it to A.
– Key query: A submits a time offset t ∈ N. The challenger Chal computes O^(t)(TGE.msk) and sends the result to A.
3. If there exists a function query (f, Q, t) and challenge message queries ((i1, x1,0, x1,1, t1), . . . , (in, xn,0, xn,1, tn)) such that f(x1,0, . . . , xn,0) ≠ f(x1,1, . . . , xn,1) and (i1, . . . , in) ∈ Q for i1, . . . , in ∈ N, then the output of the experiment is set to ⊥. Additionally, if there exists a key query t and a challenge message query (i, xi,0, xi,1, t′) with t′ ≥ t, then the output of the experiment is set to ⊥. Otherwise, the output of the experiment is set to b′, where b′ is the output of A.

C.2 Construction

The construction is the same as the adaptively secure garbled encryption construction in Section 5, except that we apply a random oracle t times to the master secret key prior to generating the wire keys for a garbled circuit, where t is the time offset for the key generation or encryption algorithm in which the wire keys are generated. We provide the full construction for completeness.
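The ratcheted key O^(t)(TGE.msk) is just t iterated applications of the key oracle. A minimal sketch, modeling the random oracle by a truncated hash (an illustrative stand-in only, not the construction's actual oracle):

```python
import hashlib

def ratchet(key: bytes, t: int) -> bytes:
    """Compute O^(t)(key) by applying the key oracle t times.
    O is modeled here by SHA-256 truncated to lambda = 128 bits."""
    for _ in range(t):
        key = hashlib.sha256(key).digest()[:16]
    return key

msk = b"\x00" * 16
k5 = ratchet(msk, 5)
# Ratcheting composes: O^(2)(O^(3)(msk)) = O^(5)(msk), so a sensor
# holding the time-3 key can derive all later keys but (assuming the
# one-wayness of O) no earlier ones.
assert ratchet(ratchet(msk, 3), 2) == k5
```

This composition property is what makes the scheme's forward security work: compromising a sensor at time t reveals only keys for times t′ ≥ t.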

Let O1 : {0, 1}^3λ → {0, 1}^λ, O2 : {0, 1}^2λ → {0, 1}^2λ, and O3 : {0, 1}^λ → {0, 1}^λ be random oracles. Let PRF = (PRF.Gen, PRF.Eval) be a pseudorandom function family with λ-bit keys that outputs in the range {0, 1}^λ, and let GC = (Gen, GrbI, GrbC, EvalGC) be the adaptively secure garbling scheme specified by Thm. 3. Let O be the random oracle used by the adaptively secure garbling scheme. Our time-based garbled encryption scheme TGE for n-ary functions F and query pattern Q, where Q is the set containing all sets of singletons, is defined as follows:

Recalling notation from before, let ℓ = n · log |Xλ| denote the input length to circuits C representing functions f ∈ F. We note that the indices (i − 1) · log |Xλ| + 1, . . . , i · log |Xλ| correspond to the indices of the input xi to C. Letting ri = (i − 1) · log |Xλ| and ℓm = log |Xλ|, we denote these indices as ri + 1, . . . , ri + ℓm.
– Setup. On input the security parameter 1^λ, TGE.Setup runs PRF.Gen(1^λ) to obtain a PRF key K and outputs TGE.msk = K.
– Key Generation. On input the master secret key TGE.msk, a function f ∈ Fλ, a set Q = {(j1, . . . , jn)} ∈ Q, and a time offset t ∈ N, TGE.KeyGen runs as follows:
Let C be a circuit for f. TGE.KeyGen runs the garbled circuit generation algorithm Gen(1^λ) to obtain the secret parameters gcsk. It then generates a uniformly random string V ∈ {0, 1}^λ.
It then sets kα = (k^0_α, k^1_α) for α ∈ [ℓ] as follows: for i ∈ [n], it sets the garbling keys k_{ri+α} = (k^0_{ri+α}, k^1_{ri+α}) for α ∈ [ℓm] to be

k^0_{ri+α} = PRF.Eval(O3^(t)(TGE.msk), ji||α||0)
k^1_{ri+α} = PRF.Eval(O3^(t)(TGE.msk), ji||α||1).

For α ∈ [ℓ], it sets Zα uniformly at random and sets Z = Z1 ⊕ . . . ⊕ Zℓ. For α ∈ [ℓ], it sets Yα according to the procedure specified by the garbling scheme of Thm. 3 and then sets X^b_α = O1(k^b_α||V||Z) ⊕ O(Yα). Let ~k′ denote the garbling keys as determined by the X^b_α's, Yα's, and Zα's. It then runs GrbC(gcsk, C, ~k′) to obtain a garbled circuit C. For α ∈ [ℓ], b ∈ {0, 1}, let

T^b_α = (Zα||0^λ) ⊕ O2(k^b_α||V).

For each α, with probability 1/2, swap the values of T^0_α and T^1_α. TGE.KeyGen outputs

(C, V, {T^0_α, T^1_α}α∈[ℓ], Q)

as TGE.skf,Q,t.

– Encryption. On input the master secret key TGE.msk, a message m, an index j ∈ N, and a time offset t ∈ N, TGE.Enc runs as follows: TGE.Enc is index dependent and maintains a state S initialized to the empty set ∅. It first checks that j ∉ S. If j ∈ S, it returns ⊥. Otherwise, it adds j to S and proceeds.
Defining k^0_α and k^1_α for α ∈ [ℓm] by

k^0_α = PRF.Eval(O3^(t)(TGE.msk), j||α||0)
k^1_α = PRF.Eval(O3^(t)(TGE.msk), j||α||1)

as above, TGE.Enc computes

k = (k_1^{bin(m)_1}, . . . , k_{ℓm}^{bin(m)_{ℓm}})

and outputs

(j, k)

as TGE.ct. We refer to j as the index of this ciphertext.

– Decryption. On input a functional key TGE.skf,Q,t = (C, V, {T^0_α, T^1_α}α∈[ℓ], Q) and n ciphertexts TGE.ct1, . . . , TGE.ctn, TGE.Dec first parses each TGE.cti as (ji, ki) and asserts that (j1, . . . , jn) ∈ Q. If not, it outputs ⊥.
Let k = (k1, . . . , kn). For α ∈ [ℓ], TGE.Dec recovers Zα by computing O2(kα||V) ⊕ T^b_α for b ∈ {0, 1} and setting Zα to be the λ-bit prefix of the recovered value whose last λ bits are all 0. Once all the Zα's are recovered, TGE.Dec computes the Yα's and Z = ⊕ Zα, and then sets Xα = O1(kα||V||Z) ⊕ O(Yα). It then runs EvalGC on C with the Xα's to recover the output.

C.3 Correctness

Correctness follows analogously to that of the adaptive garbling construction of Section 5, observing that O3^(t)(·) is applied to TGE.msk when generating the input labels for a garbled circuit in both the key generation and encryption algorithms.

C.4 Adaptive Security

We will prove adaptive security of our construction with respect to the key oracle O3. The proof follows in a similar manner to the proof of adaptive security for the garbled encryption construction in Sec. 5. The main difference is that if the adversary asks for a key query for some time t and has already asked for a functional key TGE.skf,Q,t′ for some time t′ ≥ t, then we need a way to ensure that TGE.skf,Q,t′ is compatible with all combinations of input labels, since the adversary now possesses the appropriate key for time offset t and is able to learn all the input labels corresponding to the garbled circuit in TGE.skf,Q,t′. Fortunately, this is possible because the simulator for the adaptive garbled circuit construction of Thm. 3 outputs a uniformly random string as the garbled circuit and later programs its random oracle. So, when the adversary makes a key query for time t, we will run the real garbled circuit algorithm and then program the appropriate random oracles so that the result matches the garbled circuit we already gave out. Formally, we show the following.

Theorem 6. Assuming that PRF is a pseudorandom function family and GC is the adaptively secure garbling scheme of Thm. 3, TGE is an adaptively secure time-based garbled encryption scheme with respect to O3.

Stronger Adaptive Security Definition for the Adaptive Garbling Scheme of Thm. 3.
In order to prove security, we first establish that the adaptive garbling scheme of Thm. 3 actually satisfies a stronger notion of security than Def. 3, where between steps 2 and 3, we give the adversary a choice: the adversary can either proceed with the game as in Def. 3 or ask to see all input labels. If the adversary asks for all input labels, the challenger must (using C, C) be able to give the adversary all the input labels. To see that this holds, observe that if b = 0, the challenger already has all the labels and can just give them out. If b = 1, the challenger can simply run the real garbling algorithm and then program O′ so that O′(Y0) unmasks C′ appropriately. As we observed in Remark 1, Z is information-theoretically hidden from the adversary after step 2 of the game, so Y0 = Z||0 is also information-theoretically hidden.

We will show that Expt^TGE_A(1^λ, 0) ≈c Expt^TGE_A(1^λ, 1) via a series of hybrid games.

Hybrid game H0(1^λ, τ).
This game is the same as Expt^TGE_A(1^λ, τ) except for the following:
Whenever the challenger Chal would respond to a function query with time offset t, it first checks whether a key query has been made for some time offset t′ ≤ t. If such a key query has been made, it responds to the function query as before. Otherwise, it instead samples the X^b_α's and the T^b_α values uniformly at random when outputting the function key. Then, as soon as a challenge query is made with time offset t or a key query is made with some time offset t′ ≤ t, it programs O1 so that O1(k^b_α||V||Z) ⊕ O(Yα) = X^b_α and O2 so that O2(k^b_α||V) ⊕ (Zα||0^λ) = T^b_α.

Hybrid game H1(1^λ, τ).
This game is the same as H0(1^λ, τ) except for the following: whenever the challenger Chal would generate wire keys corresponding to an index j and time offset t when running TGE.Enc, Chal generates these wire keys uniformly at random instead of running PRF.Eval. It then stores these wire keys in a lookup table, and if it ever needs a wire key corresponding to an index and time offset that was previously generated, it uses the value stored in the table instead of generating a new random value. This can be viewed as modifying Expt^TGE_A(1^λ, τ) by replacing PRF.Eval(O3^(t)(TGE.msk), ·) with a truly random function.

Hybrid game H2(1^λ, τ).
This game is the same as H1(1^λ, τ) except for the following: whenever the challenger Chal would generate a garbled circuit C for a function query (f, (j1, . . . , jn), t), it first checks whether a key query has been made for some time offset t′ ≤ t. If such a key query has been made, it responds to the function query as before. Otherwise, Chal generates the garbled circuit C by running the adaptive garbled circuit simulator

C ← SimGC(1^λ, φ(C)).

It sets T^b_α for α ∈ [ℓ], b ∈ {0, 1} to be uniformly random strings.

Then, whenever the challenger Chal has given out both a function key (C, V, {T^0_α, T^1_α}α∈[ℓ], Q) with time offset t and an input label kα for wire α ∈ [ℓ] (by responding to a challenge query with time offset t), it runs SimGC on the same input query to obtain the simulated label k′α = (Xα ⊕ O(Yα), Zα). It then samples a uniformly random bit b′ and programs O2 so that O2(kα||V) ⊕ T^b′_α = Zα||0^λ. Once it has submitted an input query for every α ∈ [ℓ], it programs O1 so that O1(kα||V||Z) ⊕ O(Yα) = Xα.

Alternatively, whenever the challenger is asked to respond to a key query for time offset t′ ≤ t, it instead runs the honest underlying selectively secure garbled circuit algorithm with the circuit corresponding to f and the labels set to the X^b_α's that were generated when computing the function key, obtaining a garbled circuit C′. It then programs the underlying oracle O of the adaptive garbling scheme so that C′ ⊕ O(Y0) = C, where C is the original garbled circuit Chal had given out when responding to the function query.

Note that by the restrictions imposed on the adversary, only one of the two above conditions can ever occur.

Hybrid game H3(1^λ, τ).
This game is the same as H2(1^λ, τ) except for the following: whenever the challenger Chal would generate wire keys k^0_α, k^1_α for an input index α ∈ [ℓ] when responding to a challenge query, it instead generates a single wire key kα and uses this as the value for both k^0_α and k^1_α.

Lemma 11. Expt^TGE_A(1^λ, τ) ≈c H0(1^λ, τ).

These hybrids are indistinguishable to an adversary unless it is able to query O1 on some k^b_α||V||Z or O2 on some k^b_α||V prior to these points being programmed. This can only be done if the adversary can determine some k^b_α prior to the programming. However, from the view of the adversary, the k^b_α's are outputs of PRF under a uniformly random key. Thus, an adversary that could do this with nonnegligible probability would be able to distinguish the PRF from uniform, a contradiction.

Lemma 12. H0(1^λ, τ) ≈c H1(1^λ, τ).

Suppose there exists some algorithm A that can distinguish between the two games with nonnegligible advantage. Then, consider the following algorithm A′, which distinguishes PRF from a truly random function with nonnegligible advantage.

A′ runs A and simulates the role of the challenger Chal in the game Expt^TGE_A(1^λ, τ) with the following modification. Whenever Chal would compute PRF.Eval(O3^(t)(TGE.msk), ·) on some input x when running TGE.Enc, Chal instead queries its oracle F on (t, x). If A guesses that it is playing game H1(1^λ, τ), A′ guesses that F is a truly random function, and if A guesses that it is playing H0(1^λ, τ), A′ guesses that F is a PRF. The oracle F on (t, x) either outputs a uniformly random value or computes PRF(Kt, x), where Kt is the t-th uniformly random key. Observe that A′ only queries F when Chal would need to respond to a challenge query.

Note that if F is a PRF, then A′ simulates the game H0(1^λ, τ) exactly. Similarly, if F is a truly random function, then A′ simulates the game H1(1^λ, τ) exactly. Therefore, A′ distinguishes PRF from a truly random function with nonnegligible advantage, breaking the pseudorandomness property of PRF.

Lemma 13. H1(1^λ, τ) ≈c H2(1^λ, τ).

Suppose there exists some algorithm A that can distinguish between the two games with nonnegligible advantage. Then, consider the following algorithm A′, which breaks the stronger adaptive security of GC detailed above.

A′ runs A and simulates the role of the challenger Chal in the game H1(1^λ, τ) with the following modification. Whenever Chal would need to output a garbled circuit C in response to a function query (f, (j1, . . . , jn), t), A′ instead queries its oracle on the circuit C to receive a garbled circuit C. Whenever A makes an input query with time offset t, if a corresponding function query has been made with time offset t, A′ queries its oracle on this input and sets k′α to be the received value. If not, A′ queries its oracle on this input once a corresponding function query has been made.

If, instead, A makes a key query for time offset t′ ≤ t, A′ asks to see all input labels after receiving C.

Note that if A′'s oracle is the real garbled circuit, then A′ perfectly simulates H1(1^λ, τ) for A. If A′'s oracle is the simulated garbled circuit, then A′ perfectly simulates H2(1^λ, τ) for A.

Lemma 14. H2(1^λ, τ) ≈c H3(1^λ, τ).

These two hybrids have identical behavior: since at most one of k^0_α or k^1_α is ever used, the fact that they are identical is irrelevant.

Lemma 15. H3(1^λ, 0) ≈c H3(1^λ, 1).

Since the adversary's view in both of these games does not depend on the actual values of the messages, but only on the function evaluations f(x1, . . . , xn) that the adversary can compute, and since the adversary can only submit challenge message pairs that agree on all computable function evaluations, it follows that the view of any admissible adversary in these games is identical. Note that the input queries to SimGC do depend on the input, but SimGC's behavior depends only on the function evaluation and not on the actual input; therefore the views are identical.

Combining the above lemmas, it follows that

Expt^TGE_A(1^λ, 0) ≈c Expt^TGE_A(1^λ, 1).