Prahladh Harsha – University of Chicago (ttic.uchicago.edu/~prahladh/applications/harsha_all.pdf)



Prahladh Harsha

Toyota Technological Institute, Chicago
University Press Building
1427 East 60th Street, Second Floor
Chicago, IL 60637

phone: +1-773-834-2549
fax: +1-773-834-9881

email: [email protected]
http://ttic.uchicago.edu/~prahladh

Research Interests

Computational Complexity, Probabilistically Checkable Proofs (PCPs), Information Theory, Property Testing, Proof Complexity, Communication Complexity.

Education

• Doctor of Philosophy (PhD)
  Computer Science, Massachusetts Institute of Technology, 2004
  Research Advisor: Professor Madhu Sudan
  PhD Thesis: Robust PCPs of Proximity and Shorter PCPs

• Master of Science (SM)
  Computer Science, Massachusetts Institute of Technology, 2000

• Bachelor of Technology (BTech)
  Computer Science and Engineering, Indian Institute of Technology, Madras, 1998

Work Experience

• Toyota Technological Institute, Chicago (September 2004 – Present)
  Research Assistant Professor

• Technion, Israel Institute of Technology, Haifa (February 2007 – May 2007)
  Visiting Scientist

• Microsoft Research, Silicon Valley (January 2005 – September 2005)
  Postdoctoral Researcher

Honours and Awards

• Summer Research Fellow 1997, Jawaharlal Nehru Center for Advanced Scientific Research, Bangalore.

• Rajiv Gandhi Science Talent Research Fellow 1997, Jawaharlal Nehru Center for Advanced Scientific Research, Bangalore.

• Award Winner in the Indian National Mathematical Olympiad (INMO) 1993, National Board of Higher Mathematics (NBHM).

• National Board of Higher Mathematics (NBHM) Nurture Program award, 1995–1998. The Nurture program (1995–1998), coordinated by Prof. Alladi Sitaram, Indian Statistical Institute, Bangalore, covers various topics in higher mathematics.


• Ranked 7th in the All India Joint Entrance Examination (JEE) for admission into the Indian Institutes of Technology (among the 100,000 candidates who appeared for the examination).

• Papers invited to special issues

– “Robust PCPs of Proximity, Shorter PCPs, and Applications to Coding” (with Eli Ben-Sasson, Oded Goldreich, Madhu Sudan, and Salil Vadhan).
  Invited to the special issue of SIAM Journal on Computing for STOC 2004 (invitation declined).
  Invited to the SIAM Journal on Computing special issue on Randomness and Computation.

Teaching Experience

• Probabilistically checkable proofs (Univ. Chicago), Fall Quarter 2007
  Taught a graduate course on probabilistically checkable proofs (PCPs).

• Expanders (Stanford), Spring Quarter 2005
  Co-taught, with Cynthia Dwork, a graduate course on expanders and their applications in Computer Science at Stanford University.

• Theory of Computation (MIT), Fall 2001, 2002, 2003
  Teaching assistant for “Theory of Computation (6.840/18.404J)”, a course taught by Prof. Mike Sipser at the Massachusetts Institute of Technology.

• Advanced Complexity (MIT), Spring 2003
  Teaching assistant for “Advanced Complexity (6.841)”, an advanced graduate course taught by Prof. Madhu Sudan at the Massachusetts Institute of Technology.

• Introduction to Algorithms (MIT), Fall 1999, Spring 2000
  Teaching assistant for “Introduction to Algorithms (6.046/18.410J)”, a course taught by Professors Michel Goemans and Daniel Spielman (Fall 1999) and Professors Shafi Goldwasser and Silvio Micali (Spring 2000) at the Massachusetts Institute of Technology.

Publications

• Theses

1. PhD Thesis: Robust PCPs of Proximity and Shorter PCPs.
   Massachusetts Institute of Technology, Sep 2004. Advisor: Prof. Madhu Sudan.

2. Master’s Thesis: Small PCPs with low query complexity.
   Massachusetts Institute of Technology, May 2000. Advisor: Prof. Madhu Sudan.

3. Undergraduate Thesis: Distributed-Automata and Simple Test Tube Systems.
   Indian Institute of Technology, Madras (Chennai), May 1998. Advisor: Prof. Kamala Krithivasan.

• Journals

1. Eli Ben-Sasson, Oded Goldreich, Prahladh Harsha, Madhu Sudan, and Salil Vadhan. “Robust PCPs of proximity, shorter PCPs and applications to coding.” SIAM Journal on Computing (special issue on Randomness and Computation), 36(4):889–974, 2006.

2. Prahladh Harsha, Yuval Ishai, Joe Kilian, Kobbi Nissim, and Srinivas Venkatesh. “Communication vs. computation.” Computational Complexity, 16(1):1–33, 2007.


3. Eli Ben-Sasson, Prahladh Harsha, and Sofya Raskhodnikova. “Some 3CNF properties are hard to test.” SIAM Journal on Computing, 35(1):1–21, 2005.

4. Prahladh Harsha and Madhu Sudan. “Small PCPs with low query complexity.” Computational Complexity, 9(3–4):157–201, 2000/2001.

5. Kamala Krithivasan, Sakthi Balan, and Prahladh Harsha. “Distributed processing in automata.” International Journal of Foundations of Computer Science, 10(4):443–463, 1999.

• Conference and Unrefereed Publications

1. Eli Ben-Sasson, Prahladh Harsha, Oded Lachish, and Arie Matsliah. “Sound 3-query PCPPs are long.” Technical Report TR07-127, Electronic Colloquium on Computational Complexity, 2007.

2. Prahladh Harsha, Tom Hayes, Hariharan Narayanan, Harald Racke, and Jaikumar Radhakrishnan. “Minimizing average latency in oblivious routing.” In Proc. 19th ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 200–207, San Francisco, California, 20–22 January 2008.

3. Prahladh Harsha, Rahul Jain, David McAllester, and Jaikumar Radhakrishnan. “The communication complexity of correlation.” In Proc. 22nd IEEE Conference on Computational Complexity, pages 10–23, San Diego, California, 13–16 June 2007.

4. Eli Ben-Sasson, Oded Goldreich, Prahladh Harsha, Madhu Sudan, and Salil Vadhan. “Short PCPs verifiable in polylogarithmic time.” In Proc. 20th IEEE Conference on Computational Complexity, pages 120–134, San Jose, California, 12–15 June 2005.

5. Prahladh Harsha, Yuval Ishai, Joe Kilian, Kobbi Nissim, and Srinivas Venkatesh. “Communication vs. computation.” In Proc. 31st International Colloquium on Automata, Languages and Programming (ICALP), volume 3142 of Lecture Notes in Computer Science, pages 745–756, Turku, Finland, 12–16 July 2004.

6. Eli Ben-Sasson, Oded Goldreich, Prahladh Harsha, Madhu Sudan, and Salil Vadhan. “Robust PCPs of proximity, shorter PCPs and applications to coding.” In Proc. 36th ACM Symp. on Theory of Computing, pages 1–10, Chicago, Illinois, 13–15 June 2004.

7. Eli Ben-Sasson and Prahladh Harsha. “Lower bounds for bounded depth Frege proofs via Buss-Pudlak games.” Technical Report TR03-004, Electronic Colloquium on Computational Complexity, 2003.

8. Eli Ben-Sasson, Prahladh Harsha, and Sofya Raskhodnikova. “Some 3CNF properties are hard to test.” In Proc. 35th ACM Symp. on Theory of Computing, pages 345–354, San Diego, California, 9–11 June 2003.

9. Prahladh Harsha and Madhu Sudan. “Small PCPs with low query complexity.” In Proc. 18th Annual Symposium on Theoretical Aspects of Computer Science (STACS), volume 2010 of Lecture Notes in Computer Science, pages 327–338, Dresden, Germany, 15–17 February 2001.

Professional Activities

• Organization Membership: ACM, IEEE

• Journals refereed: SIAM Journal on Computing (SICOMP), Theoretical Computer Science (TCS)

• Conferences refereed: ACM Symposium on Theory of Computing (STOC), IEEE Conference on Computational Complexity (CCC), IEEE Symposium on Foundations of Computer Science (FOCS), Symposium on Theoretical Aspects of Computer Science (STACS).


Other Activities

• Organised the TTI-Chicago Theory Seminar (Fall 2004, Fall 2005, Winter 2005, Fall 2006).

Citizenship

Citizen of India.
Visa Status in USA: H-1B Work Permit

References

• Prof. Madhu Sudan
  Electrical Engineering and Computer Science
  Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT
  Stata Center, 32-G640
  77 Massachusetts Avenue
  Cambridge, MA 02139, USA
  email: [email protected]

• Prof. Michael Sipser
  Department of Mathematics
  MIT
  77 Massachusetts Avenue, 2-365
  Cambridge, MA 02139, USA
  email: [email protected]

• Prof. Lance Fortnow
  EECS Department
  Northwestern University
  Ford Building, Rm 3-320
  2133 Sheridan Rd
  Evanston, IL 60208, USA
  email: [email protected]

• Dr. Eli Ben-Sasson
  Department of Computer Science
  Taub Building, Room 523
  Technion, Israel Institute of Technology
  Haifa 32000, ISRAEL
  email: [email protected]


Research Statement

Prahladh Harsha

1 Introduction

My research interests are in the area of theoretical computer science, with special emphasis on computational complexity. The primary focus of my research has been in the area of probabilistically checkable proofs, and related areas such as property testing and information theory.

The fundamental question in computational complexity is “what are the limits of feasible computation?” One articulation of this question is the famous “P vs. NP” question, where P refers to the class of problems that can be solved in polynomial time and NP to the class of problems whose solutions can be verified in polynomial time. To understand the limitations of efficient computation, we first need to understand what we mean by “efficient computation.” This natural connection between feasibility and hardness has often led to surprising consequences in complexity theory. A prime example is that of probabilistically checkable proofs. The original emphasis in the study of probabilistically checkable proofs was on program checking. Surprisingly, it was soon realized that the existence of efficient probabilistic program checkers actually implied that the approximation versions of several NP-complete optimization problems were as intractable as the original optimization problems.

Probabilistically checkable proofs provide an extremely efficient means of proof verification. The classical complexity class NP refers to the class of languages whose membership can be verified with the aid of a polynomial-sized proof. Probabilistically checkable proofs (PCPs) are a means of encoding these proofs (and more generally any mathematical proof) into a format such that the encoded proof can be checked very efficiently, although in a probabilistic manner, by looking at it at only a constant number of locations (in fact, 3 bits suffice!). The main question addressed by my research in this area is the following: “how much does this encoding blow up the original proof while retaining the constant number of queries into the proof, and how efficiently (with respect to running time) can the checking be performed?”

An important contribution of my work is the notion of a proof of proximity (also called a PCP of proximity). A PCP of proximity is a strengthening of a PCP in the sense that it helps to decide whether a statement is true, with the help of an additional proof in the form of a PCP, by merely probing the statement at a few locations. In other words, a PCP of proximity makes a constant number of probes not only to the proof but also to the statement whose truth it is checking. With such a stringent requirement, a PCP of proximity cannot distinguish true statements from false ones; however, it can distinguish true statements from ones that are far from being true (in the sense that the statement is far, in Hamming distance, from any true statement). Thus, a PCP of proximity checks whether a given statement is close to being true, without even reading the statement in its entirety! Hence the name, proof of proximity.

PCPs of proximity play a vital role in the construction of short PCPs, both in my work and in subsequent developments in the area of probabilistically checkable proofs. PCPs of proximity are also used in coding theory: all known constructions of locally testable codes are via PCPs of proximity. PCPs of proximity have also come in very handy in simplifying the original proof of the PCP Theorem, which is one of the most involved proofs in complexity theory. In fact, the recent fully combinatorial proof of the PCP Theorem (due to Dinur [Din07]) crucially relies on PCPs of proximity.

As mentioned above, the main focus of my research has been in the area of probabilistically checkable proofs. I have also worked in other areas such as property testing, information theory, proof complexity, and network routing. Below, I elaborate on my work in three of these areas – probabilistically checkable proofs, property testing, and information theory.

2 Probabilistically checkable proofs

The PCP Theorem: The PCP Theorem [AS98, ALM+98] is one of the crowning achievements of complexity theory in the last decade. Probabilistically checkable proofs [BFLS91, FGL+96, AS98], as mentioned earlier, are proofs that allow efficient probabilistic verification based on probing just a few bits of the proof. Informally speaking, the PCP Theorem states that any mathematical proof can be rewritten into a polynomially longer probabilistically checkable proof (PCP) such that its veracity can be checked very efficiently, although in a probabilistic manner, by looking at the rewritten proof at only a constant number of locations (in fact, 3 bits suffice), and furthermore proofs of false assertions are rejected with probability at least 1/2. The PCP Theorem has, since its discovery, attracted a lot of attention, motivated by its connection to the inapproximability of optimization problems [FGL+96, AS98]. This connection led to a long line of fruitful research yielding inapproximability results (many of them optimal) for several optimization problems (e.g., Set-Cover [Fei98], Max-Clique [Has99], MAX-3SAT [Has01]).

However, the significance of PCPs extends far beyond their applicability to deriving inapproximability results. The mere fact that proofs can be transformed into a format that supports super-fast probabilistic verification is remarkable. One would have naturally expected PCPs, as the name suggests, to lead to vast improvements in automated proof-checkers, theorem-provers, etc. Unfortunately, this has not been the case. The chief reason why PCPs are not used today in practice for automated proof-checking is that the blowup in proof size involved in all present constructions of PCPs makes it infeasible to do so. Just to put things in perspective, the original proof of the PCP Theorem [ALM+98] constructed PCPs of nearly cubic length with a query complexity roughly of the order of a million (in order to reject proofs of false assertions with probability at least 1/2). On the other hand, the 3-query optimal PCPs of [Has01, GLST98] have length nearly n^(10^6), which is still a polynomial!

Even with respect to inapproximability results, though the PCP Theorem has been extremely successful in proving tight hardness results, the quantitative nature of these results has been rather unsatisfactory, once again due to the blowup involved in PCP constructions. To understand this, it is instructive to compare the inapproximability hardness results obtained from the PCP Theorem with the optimization hardness results obtained from the usual textbook NP-completeness reductions. For example, the NP-completeness reduction from Satisfiability (SAT) to Clique transforms a Boolean formula on n variables to a graph with at most 10n vertices. On the other hand, the PCP reductions which show optimal inapproximability of Max-Clique transform a Boolean formula of size n to a graph of size at least n^(10^6). What these results imply in quantitative terms is that if one assumes solving satisfiability on formulae with 1,000 variables is intractable, then NP-completeness reductions imply that solving Clique is intractable on graphs with 10,000 vertices, while the PCP reductions would imply that the optimal inapproximability hardness for Max-Clique sets in only on graphs of size at least 1000^(10^6).

2.1 My research

Short PCPs: Most of my work in the area of PCPs has focused on constructing short PCPs. In work done as part of my master’s thesis [HS00], I examine the size and query complexity of PCPs jointly and obtain a construction with reasonable performance in both parameters (more precisely, n^3-sized proofs with a query complexity of 16 bits). In more recent work with Ben-Sasson, Goldreich, Sudan and Vadhan [BGH+06], I take a closer look at the PCP Theorem, simplify several parameters and obtain shorter PCPs. In quantitative terms, we obtain PCPs that are at most n · exp(log^ε n) in the size of the original proof, for any ε > 0.

PCPs of proximity and composition: Besides constructing short PCPs, the chief contribution of our work [BGH+06] is the simplification of PCP constructions. Previous constructions of PCPs were extremely involved and elaborate. One of the main reasons for this is that “proof composition,” a key ingredient in all known PCP constructions, is a very involved process. We introduce “PCPs of proximity” (the variant of PCPs mentioned earlier in Section 1), which facilitate very smooth composition; in fact, composition becomes almost definitional and syntactic given this variant. This new variant of PCPs and the corresponding composition have played a critical role in subsequent improvements in short PCP constructions (due to Ben-Sasson and Sudan [BS05] and Dinur [Din07]). Furthermore, these simplifications of the original proof of the PCP Theorem, in the guise of PCPs of proximity and the new composition, led to an alternate purely combinatorial proof of the PCP Theorem, due to Dinur [Din07]. This work [BGH+06] was invited to the special issue of SIAM Journal on Computing on Randomness and Computation as well as the special issue of SIAM Journal on Computing for STOC 2004.

Efficient PCPs: In the context of efficient proof verifiers, the running time of the verification process is as important a parameter as the length of the PCP. In fact, the emphasis of the initial work of Babai et al. [BFLS91] in the area of PCPs was on the time taken by the verifier and the length of the proof in the new format. In contrast, most succeeding works on PCPs have focused on the query complexity of the verifier and derived many strong inapproximability results for a wide variety of optimization problems; however, no later work seems to have returned to the question of the extreme efficiency of the verifier. This is unfortunate because the efficiency parameters are significant in the context of proof verification. Furthermore, all short PCP constructions after the work of Babai et al. [BFLS91] achieved their improvements with respect to PCP size by sacrificing the efficiency of the verifier.

In subsequent work with Ben-Sasson, Goldreich, Sudan and Vadhan [BGH+05], I show that this need not be the case: all existing short PCP constructions can be accompanied by an efficient verifier (where the verifier’s efficiency is with respect to running time). More formally, we show that every language in NP has a probabilistically checkable proof where the verifier’s running time is polylogarithmic in the input size and the length of the probabilistically checkable proof is only polylogarithmically larger than the length of the classical proof.

3 Property Testing and Locally Testable Codes

Property Testing: Today’s world abounds with massive data sets, and one often needs to perform computations on these data sets where even reading the entire data set can be prohibitively expensive. This has led to the study of sublinear algorithms, where one performs computations in time sublinear in the size of the object. An important subclass of sublinear algorithms is property testing. In property testing, one is given the ability to probe a particular object, and the task is to determine whether the object satisfies a given predetermined property by making only very few probes to the object. Since one is not allowed to read the object in its entirety but only to make a few probes, it is not possible to determine exactly whether the object satisfies the property. However, one can decide whether the object satisfies the property or is far from having the property (in the sense that the object has to be changed at a considerable number of locations in order to make it satisfy the property). Property testing has proven useful in several areas of computer science in recent years, especially coding theory.

Coding Theory: An error-correcting code is a set of strings, called codewords, such that any two of them disagree on many positions. The minimum number of positions in which two distinct codewords differ is called the distance of the error-correcting code. The ratio of the logarithm of the number of codewords to the dimension of the space is called the rate of the code. Error-correcting codes are considered good if they have linear distance and constant rate. Good error-correcting codes have innumerable applications, e.g., communicating over a noisy channel.
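As a concrete illustration of distance and rate (my example, not one drawn from this statement), consider the classical [7,4] Hamming code: 16 codewords of length 7. A short sketch verifies its parameters by brute force:

```python
from itertools import product

# Generator matrix of the [7,4] Hamming code (one standard choice).
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    """Encode a 4-bit message: multiply by G over GF(2)."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

codewords = [encode(m) for m in product([0, 1], repeat=4)]

# For a linear code, the distance equals the minimum weight of a nonzero
# codeword; here it is 3, so any two codewords differ in >= 3 of 7 positions.
distance = min(sum(c) for c in codewords if any(c))

# Rate = log2(#codewords) / block length = log2(16) / 7 = 4/7.
rate = 4 / 7
```

With distance 3 this code corrects any single bit error; a "good" code in the sense above would need both the distance and log(#codewords) to grow linearly with the block length.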

Locally testable codes arise naturally from the interaction of these two areas – property testing and coding theory.

Locally testable codes: A code is said to be locally testable if a constant number of queries into a string suffice to determine whether the string is a codeword or is far from all codewords. In other words, one can determine whether a given string is a codeword or is far from the code by merely probing the string at a few locations. Locally testable codes have numerous applications and are an essential part of PCP constructions. One of the big open questions in property testing is the construction of good locally testable codes.
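The textbook example of such a code (my illustration; the statement does not single it out) is the Hadamard code together with the Blum-Luby-Rubinfeld (BLR) linearity test: three probes per trial suffice to check whether a string of length 2^k is close to a codeword:

```python
import random

k = 4  # message length; codeword length is 2**k = 16

def hadamard_encode(msg):
    """Hadamard codeword: the inner product <msg, x> mod 2 for every x in {0,1}^k."""
    return [sum(m & ((x >> i) & 1) for i, m in enumerate(msg)) % 2
            for x in range(2 ** k)]

def blr_test(word, trials=100):
    """BLR linearity test: 3 queries per trial; accepts every true codeword."""
    for _ in range(trials):
        x, y = random.randrange(2 ** k), random.randrange(2 ** k)
        if word[x] ^ word[y] != word[x ^ y]:
            return False
    return True

codeword = hadamard_encode([1, 0, 1, 1])
assert blr_test(codeword)  # codewords pass every trial

# Flipping the first half of the codeword yields an affine (non-linear)
# word, far from every codeword; this particular corruption fails the
# check on every trial.
corrupted = [1 - b for b in codeword[:8]] + codeword[8:]
assert not blr_test(corrupted)
```

In general the guarantee is probabilistic: a word that is far from the code is rejected in each trial with probability growing with its distance from the code; the corruption above just happens to be caught deterministically.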

My research: In joint work with Ben-Sasson, Goldreich, Sudan and Vadhan [BGH+06], I show how to transform any PCP of proximity into a locally testable code with similar blowup properties. Thus, the short PCP constructions of [BGH+06, BS05, Din07] in turn give rise to locally testable codes of comparable blowup. However, it is to be noted that since all these constructions involve a super-linear blowup in the PCP size, we do not get good locally testable codes (in the sense that the rate of the corresponding error-correcting code is at best inverse polylogarithmic, if not inverse polynomial).

On the negative side, in work with Ben-Sasson and Raskhodnikova [BHR05], I throw light on why the construction of good locally testable codes might be difficult. Random low-density parity-check (LDPC) codes are a family of codes with extremely good error-correcting properties. We show that these codes are not locally testable in a very strong sense. More precisely, we show that most LDPC codes of constant rate are not testable even with a linear number of queries. As an immediate corollary of this result, we obtain properties which are easy to decide, but hard to test.

4 Information Theory

One of the most fundamental quantities in information theory is the notion of the Shannon entropy of a random variable. Informally, the Shannon entropy (or just entropy) of a random variable X is a measure of the amount of randomness in X. More formally, given a random variable X over a finite set, the entropy of X, denoted by H[X], is defined to be the quantity

    H[X] = ∑_x Pr[X = x] · log(1 / Pr[X = x]).

This quantity can be shown to be exactly the minimum expected number of bits required to encode a sample from X (up to a +1 additive factor). This interpretation of H[X] in terms of the encoding length of X is a very useful one.

Another important quantity in information theory is mutual information. Given two random variables X and Y, the mutual information measures the amount of information one random variable has about the other. Formally, the mutual information, denoted by I[X : Y], is defined as H[X] + H[Y] − H[XY], where H[XY] denotes the entropy of the joint random variable (X, Y).
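Both quantities are easy to compute for a small joint distribution. The sketch below (the distribution is my own illustrative choice, not taken from the work) verifies that a uniform bit has one bit of entropy and that a perfectly correlated pair shares exactly that bit:

```python
from math import log2

def entropy(dist):
    """Shannon entropy: sum of p * log2(1/p) over the support of dist."""
    return sum(p * log2(1 / p) for p in dist.values() if p > 0)

# Joint distribution of (X, Y): X is a uniform bit and Y = X.
joint = {(0, 0): 0.5, (1, 1): 0.5}

# Marginal distributions of X and Y, obtained by summing out the other variable.
marg_x, marg_y = {}, {}
for (x, y), p in joint.items():
    marg_x[x] = marg_x.get(x, 0.0) + p
    marg_y[y] = marg_y.get(y, 0.0) + p

h_x, h_y, h_xy = entropy(marg_x), entropy(marg_y), entropy(joint)
mutual_info = h_x + h_y - h_xy  # I[X:Y] = H[X] + H[Y] - H[XY]

# A uniform bit has entropy 1; since Y determines X here, I[X:Y] = 1.
assert abs(h_x - 1.0) < 1e-9 and abs(mutual_info - 1.0) < 1e-9
```

Replacing the joint distribution with an independent pair, e.g. all four outcomes with probability 0.25, drives the mutual information to zero while each marginal entropy stays at one bit.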

My research: In joint work with Jain, McAllester and Radhakrishnan [HJMR07], I give a characterization of mutual information similar to that of entropy. Our characterization is best understood in terms of the following two-player game. Let (X, Y) be a pair of random variables. Suppose Alice is given a sample x distributed according to X and needs to send a message z to Bob so that he can generate a correlated sample y distributed according to the conditional distribution Y|X=x. We show that the minimum expected number of bits that Alice needs to transmit to Bob to achieve this (in the presence of shared randomness) is the mutual information I[X : Y] (up to lower-order logarithmic terms).

As an immediate benefit of this interpretation of mutual information, we obtain a direct sum result in communication complexity, substantially improving on previously known direct sum results [CSWY01, JRS03, JRS05]. Furthermore, this simple interpretation of mutual information lends itself to simpler proofs of several known theorems, e.g., the reverse Shannon theorem [BSST02].

5 Other Research

Besides the above, I have worked in a variety of other areas in theoretical computer science – proof complexity [BH03], network routing [HHN+08], resource tradeoffs [HIK+07], and automata theory [KBH99].

Proof Complexity: Proof complexity investigates the question “Do tautologies have short proofs?” The seminal work of Cook and Reckhow [CR79] shows that this question is equivalent to the NP vs. co-NP question. In fact, a proof of NP ≠ co-NP immediately proves that NP ≠ P. A natural initial approach to attack this problem is to study whether there are tautologies that are intractable (i.e., have long proofs) under certain specific proof systems instead of all proof systems. Various proof systems such as resolution, polynomial calculus, and Frege proofs have been extensively studied in this approach. The Frege proof systems are among the strongest proof systems and to date have defeated all attempts to show the existence of a true statement that is intractable under them. Ajtai proved that the pigeonhole principle is intractable under a restricted version of this proof system, namely the bounded-depth Frege proof system [Ajt94]. This result was further strengthened by Beame et al. [BIK+92]. These proofs are some of the most involved proofs in proof complexity. In joint work [BH03] with Ben-Sasson, I provide an alternative proof of the intractability of the pigeonhole principle in the bounded-depth Frege proof system using the interpretation of proofs as 2-player games suggested by Pudlak and Buss [PB94].


Network Routing: In joint work with Hayes, Narayanan, Racke and Radhakrishnan [HHN+08], I consider the problem of minimizing the average latency cost while obliviously routing traffic in a network with linear latency functions. We show that when all routing requests are directed to a single target, there is a routing scheme with competitive ratio O(log n), where n denotes the number of nodes in the network. As a lower bound, we show that no oblivious scheme can obtain a competitive ratio better than Ω(√(log n)).

Resource Tradeoffs: In joint work with Ishai, Kilian, Nissim and Venkatesh [HIK+07], I investigate the following question: is there a computational task that exhibits a strong tradeoff between the amount of communication and the amount of time needed for local computation? Under standard cryptographic assumptions, we show that there is a function f such that f(x, y) is easy to compute (knowing x and y) and has low communication complexity (when one player knows x and the other knows y); however, every low-communication protocol for computing f(x, y) requires an infeasible amount of local computation.

6 Thoughts on Future Work

As my previous work indicates, I love to work on fundamental problems in complexity theory, and I will continue to do so in the future. An overarching goal of my research efforts is to gain a better understanding of what is tractable and what is not.

Like other major open questions in complexity theory, most of the problems I have been working on remain open despite the considerable progress of the last few years. I see myself working towards resolving these questions in the next few years. Three concrete problems that I have been working on and would like to spend time on in the near future are listed below.

Short PCPs and efficient proof-verification: The big question of whether there exist PCPs of linear size (or even size n · polylog n) with query complexity 3 that reject proofs of false assertions with probability at least 1/2 continues to remain open. A similar question for locally testable codes (LTCs) is also open. The work of Ben-Sasson and Sudan [BS05] and Dinur [Din07] only demonstrates the existence of PCPs (and LTCs) of size n · polylog n with a large but constant query complexity.

In recent unpublished work (with Ben-Sasson, Lachish and Matsliah) [BHLM07], I show that any construction of PCPs that also yields PCPs of proximity with similar properties cannot simultaneously be short and have query complexity 3 (while rejecting proofs of false assertions with probability 1/2). Since all known techniques for PCP constructions also yield PCPs of proximity, this work reveals that completely new ideas are required for short PCP constructions.

Quantum analogue of the mutual information characterization: The mutual information characterization in our paper [HJMR07], as mentioned before, gives easy proofs of several known results in information theory. The analogue of such a characterization in the quantum information world is still open. Such a quantum characterization would immediately imply the quantum reverse Shannon theorem, which is currently known only in certain special cases.

Lower bounds in Frege proof systems: As mentioned in the earlier section, no lower bounds are currently known for any tautologies in the Frege proof system. I strongly believe that the proof techniques in my work with Ben-Sasson [BH03] can be generalized, and I hope to prove the intractability of other statements, such as random 3CNFs, in the Frege proof system or some restricted form of it.

Besides the above-mentioned topics, I am also interested in other areas of theoretical computer science (e.g., derandomization, linear programming, and semidefinite-programming-based algorithms). In the coming years, I look forward to doing stimulating research: broadening my horizons, collaborating with people from different academic backgrounds, learning new techniques, and solving interesting problems.


References

[Ajt94] Miklos Ajtai. The complexity of the pigeonhole principle. Combinatorica, 14(4):417–433, 1994. (Preliminary Version in 20th STOC, 1988). doi:10.1007/BF01302964.

[ALM+98] Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy. Proof verification and the hardness of approximation problems. Journal of the ACM, 45(3):501–555, May 1998. (Preliminary Version in 33rd FOCS, 1992). doi:10.1145/278298.278306.

[AS98] Sanjeev Arora and Shmuel Safra. Probabilistic checking of proofs: A new characterization of NP. Journal of the ACM, 45(1):70–122, January 1998. (Preliminary Version in 33rd FOCS, 1992). doi:10.1145/273865.273901.

[BFLS91] Laszlo Babai, Lance Fortnow, Leonid A. Levin, and Mario Szegedy. Checking computations in polylogarithmic time. In Proceedings of the 23rd ACM Symposium on Theory of Computing (STOC), pages 21–31. New Orleans, Louisiana, 6–8 May 1991. doi:10.1145/103418.103428.

[BGH+05] Eli Ben-Sasson, Oded Goldreich, Prahladh Harsha, Madhu Sudan, and Salil Vadhan. Short PCPs verifiable in polylogarithmic time. In Proceedings of the 20th IEEE Conference on Computational Complexity, pages 120–134. San Jose, California, 12–15 June 2005. doi:10.1109/CCC.2005.27.

[BGH+06] ———. Robust PCPs of proximity, shorter PCPs and applications to coding. SIAM Journal of Computing, 36(4):889–974, 2006. (Special issue on Randomness and Computation; Preliminary Version in 36th STOC, 2004). doi:10.1137/S0097539705446810.

[BH03] Eli Ben-Sasson and Prahladh Harsha. Lower bounds for bounded depth Frege proofs via Buss-Pudlak games. Technical Report TR03-004, Electronic Colloquium on Computational Complexity, 2003. Available from: http://www.eccc.uni-trier.de/eccc-reports/2003/TR03-004/.

[BHLM07] Eli Ben-Sasson, Prahladh Harsha, Oded Lachish, and Arie Matsliah. Sound 3-query PCPPs are long. Technical Report TR07-127, Electronic Colloquium on Computational Complexity, 2007. Available from: http://eccc.hpi-web.de/eccc-reports/2007/TR07-127/index.html.

[BHR05] Eli Ben-Sasson, Prahladh Harsha, and Sofya Raskhodnikova. Some 3CNF properties are hard to test. SIAM Journal of Computing, 35(1):1–21, 2005. (Preliminary Version in 35th STOC, 2003). doi:10.1137/S0097539704445445.

[BIK+92] Paul Beame, Russell Impagliazzo, Jan Krajicek, Toniann Pitassi, Pavel Pudlak, and Alan Woods. Exponential lower bounds for the pigeonhole principle. In Proceedings of the 24th ACM Symposium on Theory of Computing (STOC), pages 200–220. Victoria, British Columbia, Canada, 4–6 May 1992. doi:10.1145/129712.129733.

[BS05] Eli Ben-Sasson and Madhu Sudan. Simple PCPs with poly-log rate and query complexity. In Proceedings of the 37th ACM Symposium on Theory of Computing (STOC), pages 266–275. Baltimore, Maryland, 21–24 May 2005. doi:10.1145/1060590.1060631.

[BSST02] Charles H. Bennett, Peter W. Shor, John A. Smolin, and Ashish V. Thapliyal. Entanglement-assisted capacity of a quantum channel and the reverse Shannon theorem. IEEE Transactions on Information Theory, 48(10):2637–2655, October 2002. (Preliminary Version in Proc. Quantum Information: Theory, Experiment and Perspectives, Gdansk, Poland, 10–18 July 2001). doi:10.1109/TIT.2002.802612.

[CR79] Stephen A. Cook and Robert A. Reckhow. The relative efficiency of propositional proof systems. Journal of Symbolic Logic, 44(1):36–50, March 1979. (Preliminary Version in 6th STOC, 1974). doi:10.2307/2273702.

[CSWY01] Amit Chakrabarti, Yaoyun Shi, Anthony Wirth, and Andrew Chi-Chih Yao. Informational complexity and the direct sum problem for simultaneous message complexity. In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science (FOCS), pages 270–278. Las Vegas, Nevada, 14–17 October 2001. doi:10.1109/SFCS.2001.959901.

[Din07] Irit Dinur. The PCP theorem by gap amplification. Journal of the ACM, 54(3):12, 2007. (Preliminary Version in 38th STOC, 2006). doi:10.1145/1236457.1236459.

[Fei98] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 45(4):634–652, July 1998. (Preliminary Version in 28th STOC, 1996). doi:10.1145/285055.285059.


[FGL+96] Uriel Feige, Shafi Goldwasser, Laszlo Lovasz, Shmuel Safra, and Mario Szegedy. Interactive proofs and the hardness of approximating cliques. Journal of the ACM, 43(2):268–292, March 1996. (Preliminary Version in 32nd FOCS, 1991). doi:10.1145/226643.226652.

[GLST98] Venkatesan Guruswami, Daniel Lewin, Madhu Sudan, and Luca Trevisan. A tight characterization of NP with 3-query PCPs. In Proceedings of the 39th IEEE Symposium on Foundations of Computer Science (FOCS), pages 18–27. Palo Alto, California, 8–11 November 1998. doi:10.1109/SFCS.1998.743424.

[Has99] Johan Hastad. Clique is hard to approximate within n^(1−ε). Acta Mathematica, 182(1):105–142, 1999. (Preliminary Version in 28th STOC, 1996 and 37th FOCS, 1996). doi:10.1007/BF02392825.

[Has01] ———. Some optimal inapproximability results. Journal of the ACM, 48(4):798–859, July 2001. (Preliminary Version in 29th STOC, 1997). doi:10.1145/502090.502098.

[HHN+08] Prahladh Harsha, Thomas Hayes, Hariharan Narayanan, Harald Racke, and Jaikumar Radhakrishnan. Minimizing average latency in oblivious routing. In Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 200–207. San Francisco, California, 20–22 January 2008.

[HIK+07] Prahladh Harsha, Yuval Ishai, Joe Kilian, Kobbi Nissim, and Srinivas Venkatesh. Communication vs. computation. Computational Complexity, 16(1):1–33, 2007. (Preliminary Version in 31st ICALP, 2004). doi:10.1007/s00037-007-0224-y.

[HJMR07] Prahladh Harsha, Rahul Jain, David McAllester, and Jaikumar Radhakrishnan. The communication complexity of correlation. In Proceedings of the 22nd IEEE Conference on Computational Complexity, pages 10–23. San Diego, California, 13–16 June 2007. doi:10.1109/CCC.2007.32.

[HS00] Prahladh Harsha and Madhu Sudan. Small PCPs with low query complexity. Computational Complexity, 9(3–4):157–201, December 2000. (Preliminary Version in 18th STACS, 2001). doi:10.1007/PL00001606.

[JRS03] Rahul Jain, Jaikumar Radhakrishnan, and Pranab Sen. A direct sum theorem in communication complexity via message compression. In Jos C. M. Baeten, Jan Karel Lenstra, Joachim Parrow, and Gerhard J. Woeginger, eds., Proceedings of the 30th International Colloquium on Automata, Languages and Programming (ICALP), volume 2719 of Lecture Notes in Computer Science, pages 300–315. Springer-Verlag, Eindhoven, Netherlands, 30 June–4 July 2003. Available from: http://link.springer.de/link/service/series/0558/bibs/2719/27190300.htm.

[JRS05] ———. Prior entanglement, message compression and privacy in quantum communication. In Proceedings of the 20th IEEE Conference on Computational Complexity, pages 285–296. San Jose, California, 12–15 June 2005. doi:10.1109/CCC.2005.24.

[KBH99] Kamala Krithivasan, Sakthi Balan, and Prahladh Harsha. Distributed processing in automata. International Journal of Foundations of Computer Science, 10(4):443–463, December 1999. doi:10.1142/S0129054199000319.

[PB94] Pavel Pudlak and Samuel R. Buss. How to lie without being (easily) convicted and the length of proofs in propositional calculus. In Leszek Pacholski and Jerzy Tiuryn, eds., Proceedings of the 8th International Workshop on Computer Science Logic (CSL), volume 933 of Lecture Notes in Computer Science, pages 151–162. Springer, Kazimierz, Poland, 25–30 September 1994. doi:10.1007/BFb0022253.


Teaching Statement

Prahladh Harsha

Teaching is an essential component of academic life. I enjoy teaching and its thrills and challenges: interacting with students, attracting young minds to computer science, equipping them with a firm grasp of the fundamentals, and disseminating recent developments in the field. As researchers, it is often possible for us to get lost in the specific details of our work; teaching provides us with a broader outlook, as it encourages us to constantly rethink our research in the general context of computer science and the real world.

Teaching Experience: I have had several teaching and mentoring opportunities with students at various levels,all of which I thoroughly enjoyed.

While a graduate student at MIT, I was a teaching assistant for six semesters: twice for the "Introduction to Algorithms (6.046)" course, once for the graduate course "Advanced Complexity Theory (6.841)", and three times for the "Theory of Computation (6.840)" course. "Introduction to Algorithms (6.046)" and "Theory of Computation (6.840)" are large classes attended by a mix of about 120-160 undergraduates and graduates. My responsibilities for these two courses involved teaching a weekly recitation section attended by 15-20 students, holding office hours, assisting the instructor in preparing homework problems and exams, grading exams, and supervising homework graders. Prof. Mike Sipser, the instructor for "Theory of Computation (6.840)", is an excellent teacher and has been consistently regarded by students as one of the best lecturers at MIT. I learnt a lot TAing three times for his class. The other course I TAed at MIT was "Advanced Complexity (6.841)", offered by Prof. Madhu Sudan. This was a smaller class attended by about 30 graduate students. My responsibilities for this course involved coming up with problem sets, grading the problem sets, holding office hours, and maintaining the course webpage. I also substituted for the instructor for a couple of lectures, in which I taught topics from derandomization.

During one of the summers at MIT, I participated in a high school mentoring program. As part of this program, I would meet twice a week with two high school students. During each of these sessions, I would pose a challenge problem and, via this problem, introduce the students to problem-solving techniques and algorithmic ideas. It was a uniquely rewarding experience, and I am very happy that one of the students went on to pursue a graduate program in theoretical computer science while the other is presently a senior in mathematics.

I had no teaching obligations during my research stints at Microsoft and the Toyota Technological Institute; however, I offered two courses of my own interest: (a) a graduate course on expanders at Stanford¹, which I co-taught with Dr. Cynthia Dwork (Microsoft Research), and (b) a graduate course on probabilistically checkable proofs at the University of Chicago². The course on expanders at Stanford was intended for graduate students doing research in algorithms and complexity, but I was pleasantly surprised to find students from other disciplines attending the lectures. Along with Dr. Dwork, I prepared the material for the course, gave 5 of the 10 two-hour lectures, and prepared detailed lecture notes for the course (available online). I am pleased to note that these lecture notes have been used by other lecturers and students elsewhere. The course on probabilistically checkable proofs (PCPs) was an advanced graduate course covering some of the most recent advances in the area of PCPs, intended for graduate students doing research in complexity theory. I was gratified to see graduate students from other backgrounds in computer science, as well as a senior-year undergraduate, enroll for the course. Accordingly, I tailored the course to suit the students' backgrounds while still conveying to them the recent, exciting developments in the field. The course involved some of the most technically involved topics in recent years in theoretical computer science. Despite this, the students actively participated in the classroom discussions, and with their help, I prepared detailed lecture notes for this material, which are now available online.

Teaching Philosophy: I believe a good teacher stimulates independent thinking in his/her students while fostering an infectious enthusiasm in them. I realize that, as teachers, we can leave very strong impressions on students, thereby strongly influencing their academic and career choices. For example, my pursuit of mathematics and my career choices have largely been shaped by the influential role played by my teachers since high school. My primary goal as a teacher is to share my passion for my subject with my students, and reaching this goal can be very rewarding.

While teaching, I prefer an interactive style. I encourage active discussions, both by posing questions myself and by soliciting questions from the students during a lecture. Such a dialogue with the students helps me gauge the class and pace my lecture accordingly. Besides the traditional classroom discussions, I would encourage the use of online discussion groups and wikis to further student participation. Though these lack the personal one-on-one touch of a classroom, they offer the advantage of anonymity, which would enable students who are either naturally reserved or from minority groups to participate freely in the discussions. I also find the use of visual aids (animations, pictures, etc.) while teaching very effective. Since my TA days at MIT, I have found it very useful to carry colored chalk/pens to lecture.

1. CS369E (Stanford, Spring 2005) – http://ttic.uchicago.edu/~prahladh/teaching/05spring/
2. CMSC39600 (Univ. Chicago, Autumn 2007) – http://ttic.uchicago.edu/~prahladh/teaching/07autumn/

Teaching Plans: I would love to teach popular computing courses along the lines of Steve Rudich's "Great Theoretical Ideas in Computer Science" at Carnegie Mellon and Sanjeev Arora's "Computational Universe" at Princeton. These courses go a long way towards dispelling the common misconception that computer science is merely the science of programming. I am excited to teach theoretical computer science courses, both introductory and advanced, such as theory of computation and algorithms, as well as courses in the mathematical foundations of computer science. I am also interested in teaching introductory computer science courses, where I would have the opportunity to attract young minds to computer science.


Short Description of Publications

Prahladh Harsha

I Probabilistically Checkable Proofs

1. P. Harsha and M. Sudan. Small PCPs with low query complexity. Computational Complexity, 9(3–4):157–201, December 2000. (Preliminary Version in 18th STACS, 2001).

Most known constructions of probabilistically checkable proofs (PCPs) either blow up the proof size by a large polynomial or have a high (though constant) query complexity. For instance, the proof size can be decreased to almost linear (i.e., n^(1+ε)), but at the expense of a large query complexity (e.g., in excess of 10^6) (see Polishchuk and Spielman). On the other hand, the PCP constructions of Hastad achieve the optimal query complexity of 3, but at the expense of blowing up the proof size by a very large exponent.

In this paper, we give a transformation with slightly-super-cubic blowup in proof size and a low query complexity. Specifically, the verifier probes the proof in 16 bits and rejects every proof of a false assertion with probability arbitrarily close to 1/2, while accepting correct proofs of theorems with probability one. The result is obtained by revisiting known constructions and improving numerous components therein.

2. E. Ben-Sasson, O. Goldreich, P. Harsha, M. Sudan, and S. Vadhan. Robust PCPs of proximity, shorter PCPs and applications to coding. SIAM Journal of Computing, 36(4):889–974, 2006. (Preliminary Version in 36th STOC, 2004).

In this paper, we study the trade-off between the length of probabilistically checkable proofs (PCPs) and their query complexity. While doing so, we revisit the proof of the PCP Theorem and introduce a new variant of PCPs that we call robust PCPs of proximity. These new PCPs facilitate proof composition, a central ingredient in the construction of PCP systems. Besides allowing for a much simpler construction of PCPs, they also naturally lend themselves to the construction of locally testable codes and a relaxed notion of locally decodable codes.

The main quantitative results of the paper are as follows (these results refer to proofs of satisfiability of circuits of size n):

(a) We present PCPs of length n · exp(o(log log n)^2) that can be verified by making o(log log n) Boolean queries.

(b) For every ε > 0, we present PCPs of length n · 2^(log^ε n) that can be verified by making a constant number of Boolean queries.

In both cases, false assertions are rejected with constant probability (which may be set to be arbitrarily close to 1). The multiplicative overhead on the length of the proof, introduced by transforming a proof into a probabilistically checkable one, is just quasi-polylogarithmic in the first case (of query complexity o(log log n)), and is 2^((log n)^ε), for any ε > 0, in the second case (of constant query complexity).

3. E. Ben-Sasson, O. Goldreich, P. Harsha, M. Sudan, and S. Vadhan. Short PCPs verifiable in polylogarithmic time. In Proceedings of the 20th IEEE Conference on Computational Complexity, pages 120–134. San Jose, California, 12–15 June 2005.


In this paper, we show that the recent improvements in the construction of short PCPs can be accompanied by efficient verifiers (efficient with respect to running time). The time complexity of the verifier and the size of the proof were the original emphases in the definition of holographic proofs, due to Babai et al. (STOC 91), and our work is the first to return to these emphases since their work. It is to be noted that all short PCP constructions after the work of Babai et al. achieved their improvements with respect to PCP size by sacrificing the efficiency of the verifier. We show that this need not be the case and give efficient (in the sense of running time) versions of the shortest known PCPs, due to Ben-Sasson et al. (STOC 04) and Ben-Sasson and Sudan (STOC 05), respectively.

More formally, we show that every language in NP has a probabilistically checkable proof of proximity (i.e., a proof asserting that an instance is "close" to a member of the language), where the verifier's running time is polylogarithmic in the input size and the length of the probabilistically checkable proof is only polylogarithmically larger than the length of the classical proof.

4. E. Ben-Sasson, P. Harsha, O. Lachish, and A. Matsliah. Sound 3-query PCPPs are long. Technical Report TR07-127, Electronic Colloquium on Computational Complexity, 2007.

In this paper, we ask whether there exists a 3-query probabilistically checkable proof (PCP) that is simultaneously short (i.e., polynomial sized) and has the maximal soundness that can be guaranteed by a 3-query verifier. We show that the answer is negative in the case of probabilistically checkable proofs of proximity (PCPPs). More precisely, our main result is that a PCPP verifier limited to querying a short proof cannot obtain the same soundness as that obtained by a verifier querying a long proof. A language is said to have a PCPP if there exists a probabilistic verifier that can distinguish inputs in the language from inputs that are far from the language by merely probing the input and an additional proof at a few locations. Since all known constructions of PCPs yield PCPPs, this negative result shows that completely new techniques are required to construct PCPs that are simultaneously short and maximally sound.

II Property Testing

5. E. Ben-Sasson, P. Harsha, and S. Raskhodnikova. Some 3CNF properties are hard to test. SIAM Journal of Computing, 35(1):1–21, 2005. (Preliminary Version in 35th STOC, 2003).

The main result of this paper demonstrates that there are some properties which are very easy to decide but very hard to test. Property testing deals with the question of how many queries are needed to distinguish (with high probability) between an input that satisfies a given property and one that is ε-far (in Hamming distance) from satisfying the given property. We show (by a probabilistic argument) the existence of a property, expressible by a 3CNF formula, such that the number of queries required for any test is the worst possible, that is, linear in the total number of variables.

We prove this result using two intermediate results which are interesting in their own right.

(a) A random low density parity check (LDPC) code is not locally testable (with high probability).

(b) For testing linear properties (i.e., properties expressible by linear constraints), adaptivity and 2-sided error do not help.
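For concreteness, the ε-far notion used above can be written out as follows (the notation δ is mine, for the relative Hamming-distance setting of the paper): an input x ∈ {0,1}^n is ε-far from a property P ⊆ {0,1}^n when

```latex
\delta(x, P) \;=\; \min_{y \in P} \frac{\bigl|\{\, i : x_i \neq y_i \,\}\bigr|}{n} \;\geq\; \varepsilon .
```

A lower bound of Ω(n) queries for such a property thus says that no tester can do substantially better than reading a constant fraction of the input.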


III Information Theory and Communication Complexity

6. P. Harsha, R. Jain, D. McAllester, and J. Radhakrishnan. The communication complexity of correlation. In Proceedings of the 22nd IEEE Conference on Computational Complexity, pages 10–23. San Diego, California, 13–16 June 2007.

In this paper, we ask the following question. Let (X, Y) be a joint distribution. Suppose Alice is given a sample x distributed according to X and needs to send a message z to Bob so that he can generate a correlated sample y distributed according to the conditional distribution Y|X=x. What is the minimum number of bits (i.e., |z|) that Alice needs to transmit in order to achieve this? Clearly, Alice needs to send at least the mutual information I[X : Y] number of bits. We show that there are distributions (X, Y) for which Alice needs to send exponentially many more bits. However, if Alice and Bob share a random string (independent of (X, Y)), then Alice need send Bob no more than approximately I[X : Y] bits. This result gives a nice alternate characterization of mutual information (up to lower-order logarithmic terms).

As an intermediate step, we prove that the greedy rejection sampling procedure for generating one distribution from another gives a similar characterization of relative entropy.

This characterization of mutual information immediately yields an improved direct sum result in communication complexity.
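In symbols, and suppressing the exact form of the lower-order terms (the notation T^R below is mine, denoting the minimum expected number of bits Alice must transmit when the two players share a public random string), the shared-randomness result can be summarized as

```latex
I[X : Y] \;\leq\; T^{R}(X : Y) \;\leq\; I[X : Y] + O\bigl(\log\,(I[X : Y] + 1)\bigr),
```

whereas without shared randomness the required communication can be exponentially larger than I[X : Y].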

IV Proof Complexity

7. E. Ben-Sasson and P. Harsha. Lower bounds for bounded depth Frege proofs via Buss-Pudlak games. Technical Report TR03-004, Electronic Colloquium on Computational Complexity, 2003.

In this paper, we give an exposition of the exponential lower bounds (of Pitassi et al. and Krajicek et al.) for the pigeonhole principle in the bounded-depth Frege proof system. This exposition uses the interpretation of proofs as two-player games, due to Pudlak and Buss, and is simpler than earlier proofs, relying on tools and intuition that are well known in the context of computational complexity.

V Resource Tradeoffs

8. P. Harsha, Y. Ishai, J. Kilian, K. Nissim, and S. Venkatesh. Communication vs. computation. Computational Complexity, 16(1):1–33, 2007. (Preliminary Version in 31st ICALP, 2004).

In this paper, we initiate a study of tradeoffs between communication and computation in well-known communication models and in other related models. The fundamental question we investigate is the following: Is there a computational task that exhibits a strong tradeoff behavior between the amount of communication and the amount of time needed for local computation? Under various standard assumptions, we prove the existence of such strong tradeoffs for the following computation models: (1) two-party randomized communication complexity; (2) query complexity; (3) property testing. For instance, we show that there is a function f such that f(x, y) is easy to compute (knowing x and y) and has low communication complexity (when one player knows x and the other knows y). However, under a reasonable complexity assumption, every low-communication protocol for computing f(x, y) requires a large amount of local computation.


VI Oblivious Network Routing

9. P. Harsha, T. Hayes, H. Narayanan, H. Racke, and J. Radhakrishnan. Minimizing average latency in oblivious routing. In Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 200–207. San Francisco, California, 20–22 January 2008.

In this paper, we consider the problem of minimizing the average latency cost while obliviously routing traffic in a network with linear latency functions. This is roughly equivalent to minimizing the function ∑_e (load(e))^2, where for a network link e, load(e) denotes the amount of traffic that has to be forwarded by the link.

We show that for the case when all routing requests are directed to a single target, there is a routing scheme with competitive ratio O(log n), where n denotes the number of nodes in the network. As a lower bound, we show that no oblivious scheme can obtain a competitive ratio better than Ω(√(log n)).
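The competitive-ratio guarantee can be spelled out as follows (notation mine: R is the oblivious routing scheme, D ranges over demand instances, load_{S,D}(e) is the traffic placed on link e when routing D with scheme S, and OPT(D) is the best achievable quadratic cost for D):

```latex
\mathrm{CR}(R) \;=\; \max_{D}\; \frac{\sum_{e} \bigl(\mathrm{load}_{R,D}(e)\bigr)^{2}}{\mathrm{OPT}(D)},
\qquad
\mathrm{OPT}(D) \;=\; \min_{\text{routings } S}\; \sum_{e} \bigl(\mathrm{load}_{S,D}(e)\bigr)^{2}.
```

The result above says CR(R) = O(log n) is achievable obliviously in the single-target case, while every oblivious R has CR(R) = Ω(√(log n)).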

VII Automata Theory

10. K. Krithivasan, S. Balan, and P. Harsha. Distributed processing in automata. International Journal of Foundations of Computer Science, 10(4):443–463, December 1999.

In this paper, we introduce the notion of distributed automata for finite state automata and push-down automata and analyze their acceptance power. Informally, a distributed automaton refers to a group of automata working in unison to accept one language. We compare the acceptance power of distributed automata to that of their centralized counterparts under various types of acceptance modes. We show that distributed finite state automata do not have any additional power over centralized ones, while distributed push-down automata with just two components are as powerful as Turing machines.
