Special Issue on Randomness and Complexity
| Content Provider | Society for Industrial and Applied Mathematics (SIAM) |
|---|---|
| Author | Goldreich, Oded; Sudan, Madhu |
| Copyright Year | 2006 |
| Abstract | The idea of a SICOMP special issue on randomness and complexity occurred to us when we were in residence at the Radcliffe Institute for Advanced Study at Harvard University during the academic year 2003-2004. We were part of a science cluster in theoretical computer science at the Radcliffe Institute whose other members were Eli Ben-Sasson, Dana Ron, Ronitt Rubinfeld, and Salil Vadhan. The focus of this cluster was randomness and computation. The extensive interaction among the cluster members, as well as with frequent visitors (most notably Irit Dinur, Shafi Goldwasser, and Tali Kaufman), made us more aware than ever of the richness of the area, and the idea of editing a special issue on randomness and complexity emerged naturally.
The interplay of randomness and complexity is at the heart of modern cryptography and plays a fundamental role in the design of algorithms and in complexity theory at large. Specifically, this interplay is pivotal to several intriguing notions of probabilistic proof systems (e.g., interactive proofs, zero-knowledge proofs, and probabilistically checkable proofs), is the focus of the computational approach to randomness, and is essential for various types of sublinear-time algorithms. All these areas have been the focus of extensive research in the last two decades, but each research generation brings its own new perspective and/or focus to them. This special issue reports some of the recent progress achieved in these related areas, where "recent" refers to spring 2004, when papers were invited for this issue. Following are some of the issue's main themes.
Cryptography. The paper of Applebaum, Ishai, and Kushilevitz provides strong evidence that many cryptographic primitives and tasks can be implemented at very low complexity. For example, they show that the existence of one-way functions that can be evaluated in ${\cal NC}^1$ implies the existence of one-way functions that can be evaluated in ${\cal NC}^0$. Whereas the former are widely believed to exist (e.g., based on the standard factoring assumption), most researchers previously believed that the latter do not exist. We stress that evaluation in ${\cal NC}^0$ means that each output bit only depends on a constant number of input bits. The new work further shows that dependence on four input bits suffices, whereas dependence on at least three input bits is definitely necessary (a toy illustration of this locality notion appears after the metadata table below).
Probabilistically checkable proofs (PCPs). Current research in the area is marked by a renewed attention to aspects such as the following: 1. Achieving constructs of almost-linear length that can be tested by very few (say, a constant number of) queries. 2. Obtaining a combinatorial proof of the PCP theorem. 3. Exploration of the relationship between PCPs and coding theory (e.g., locally testable codes). 4. Applications of PCPs to obtaining new inapproximability results regarding long-standing problems such as min-bisection. Specifically, the paper of Ben-Sasson et al. presents significant improvements to the trade-off between proof length and the number of queries. The paper of Dinur and Reingold makes a major step in the project of obtaining combinatorial proofs of the PCP theorem. Both papers share a reformulation of the proof-composition paradigm, where proximity testing and robustness play a central role. Finally, Khot's paper puts forward new PCP parameters and introduces new PCP constructions that are used to provide evidence that min-bisection is not approximable up to some constant.
Randomness extraction. The construction of randomness extractors has received much attention in the last two decades. Much of the past work (especially in the 1990s) has focused on extracting randomness from a single weak source while using an auxiliary short uniformly distributed seed. The focus was on using the weakest possible form of a source (i.e., a min-entropy source). In contrast, the current era is marked by a focus on stronger sources while disallowing the use of an auxiliary uniformly distributed seed. The paper of Gabizon, Raz, and Shaltiel studies bit-fixing sources, whereas the paper of Barak, Impagliazzo, and Wigderson studies extraction from a constant number of independent sources of linear min-entropy (which may be viewed as a single source consisting of a constant number of independent blocks). Indeed, each of these papers revisits problems raised in the mid-1980s, which were neglected in the 1990s due to the focus of that era on obtaining the best results for seed-assisted extraction from a single min-entropy source. Needless to say, we believe that the renewed interest in these problems (especially the second one) is highly justified.
We wish to seize the opportunity to say a few words regarding seed-assisted versus seedless randomness extraction. Seed-assisted randomness extraction has found many applications via direct and indirect connections to other important problems, but one may still ask what it means for the original problem of implementing a randomized procedure using a weak source of randomness. One answer is that the seed can be obtained from an expensive high-quality auxiliary source, and that one wishes to minimize the use of this source and thus uses a cheaper low-quality random source for the bulk of the randomness required. Another answer is that if the seed is short enough, then one may afford to try all possible seeds, invoke the procedure with the corresponding randomness (extracted from the same source output under varying seeds), and rule by majority; a minimal sketch of this idea appears after the metadata table below. This suggestion is adequate for the implementation of standard randomized algorithms, but not in adversarial settings (e.g., cryptography) in which a randomized procedure is invoked in order to protect against some adversarial party. Thus, seedless randomness extraction is essential in many applications.
Worst-case to average-case reductions. The question of whether worst-case to average-case reductions (or even merely hardness amplification) exist for ${\cal NP}$ has received much interest recently. The first part of the question is studied in the paper of Bogdanov and Trevisan, which provides a negative indication, restricted to nonadaptive reductions. The second part of the question is unfortunately not represented in this special issue, and the interested reader is directed to [1].
Zero-knowledge. Vadhan's paper presents an unconditional study of computational zero-knowledge, yielding valuable transformations between various forms of zero-knowledge (e.g., from a weak form of zero-knowledge to the standard form). This work builds on studies of statistical zero-knowledge that were conducted in the late 1990s, thus fulfilling a prophecy made at the time.
Low-degree tests. The celebrated low-degree tests have been revisited recently with a focus on derandomization and on low-degree tests over small finite fields. The first direction is represented by the work of Shpilka and Wigderson, which seems to provide a proof from The Book for a derandomized version of the linearity test. The second direction is unfortunately not represented in this special issue, and the interested reader is directed to [2], [3]. |
| Starting Page | ix |
| Ending Page | xi |
| Page Count | 3 |
| File Format | |
| ISSN | 0097-5397 |
| DOI | 10.1137/SMJCAT0000360000040000ix000001 |
| e-ISSN | 1095-7111 |
| Journal | SIAM Journal on Computing (SMJCAT) |
| Issue Number | 4 (Special Issue on Randomness and Complexity) |
| Volume Number | 36 |
| Language | English |
| Publisher | Society for Industrial and Applied Mathematics |
| Publisher Date | 2006-12-15 |
| Access Restriction | Subscribed |
| Content Type | Text |
| Resource Type | Article |
| Subject | Mathematics; Computer Science |
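
The following is a minimal, hypothetical sketch illustrating the locality notion mentioned under the Cryptography theme above: an ${\cal NC}^0$-style function in which every output bit reads at most four input bits. The particular 4-local predicate is an arbitrary placeholder chosen only to make the locality structure concrete; it is not the construction of Applebaum, Ishai, and Kushilevitz.

```python
def nc0_eval(x: list[int]) -> list[int]:
    """Map an n-bit input to an n-bit output where output bit i depends
    only on the four input bits at positions i, i+1, i+2, i+3 (mod n).
    This 4-locality is what "evaluation in NC^0" refers to in the text."""
    n = len(x)
    out = []
    for i in range(n):
        a, b, c, d = (x[(i + j) % n] for j in range(4))
        # an arbitrary 4-local predicate: XOR of two bits plus an AND term
        out.append((a ^ b ^ (c & d)) & 1)
    return out

if __name__ == "__main__":
    print(nc0_eval([1, 0, 1, 1, 0, 0, 1, 0]))
```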
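Below is a minimal sketch of the "try all seeds and rule by majority" idea discussed in the abstract's remarks on seed-assisted extraction. The names `extract`, `procedure`, and `seed_bits` are hypothetical placeholders for a seeded extractor and a randomized decision procedure; only the enumerate-all-seeds-and-take-a-majority-vote pattern follows the text.

```python
from collections import Counter
from typing import Callable, Sequence


def decide_with_weak_source(
    weak_sample: Sequence[int],
    extract: Callable[[Sequence[int], int], Sequence[int]],  # (source bits, seed) -> extracted bits
    procedure: Callable[[Sequence[int]], bool],               # randomized decision procedure
    seed_bits: int,
) -> bool:
    """Run the procedure once per possible seed, each time feeding it
    randomness extracted from the same weak-source sample, and output
    the majority answer."""
    votes: Counter = Counter()
    for seed in range(2 ** seed_bits):           # a short seed makes full enumeration feasible
        randomness = extract(weak_sample, seed)  # seed-assisted extraction from the weak source
        votes[procedure(randomness)] += 1
    return votes[True] >= votes[False]           # rule by majority
```

As the abstract notes, this enumeration trick suffices for standard randomized algorithms but not in adversarial settings, which is why seedless extraction remains essential there.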