NP-Hardness of Learning Programs and Partial MCSP
Author
Shuichi Hirahara
Abstract

A long-standing open question in computational learning theory is to prove NP-hardness of learning efficient programs, a setting that lies between proper learning and improper learning. Ko (COLT’90, SICOMP’91) explicitly raised this open question and demonstrated its difficulty by proving that there is no relativizing proof of NP-hardness of learning programs. In this paper, we overcome Ko’s relativization barrier and prove NP-hardness of learning programs under randomized polynomial-time many-one reductions. Our result is provably non-relativizing and comes somewhat close to the parameter range of improper learning: we observe that mildly improving our inapproximability factor would suffice to exclude Heuristica, i.e., to show the equivalence between the average-case and worst-case complexities of NP. We also make progress on another long-standing open question of showing NP-hardness of the Minimum Circuit Size Problem (MCSP). We prove NP-hardness of the partial-function variant of MCSP, as well as of other meta-computational problems such as MKTP* and MINKT*, which ask for the time-bounded Kolmogorov complexity of a given partial string, under randomized polynomial-time reductions. Our proofs are algorithmic-information-theoretic (a.k.a. Kolmogorov-complexity-theoretic). We utilize black-box pseudorandom generator constructions, such as the Nisan-Wigderson generator, as a one-time encryption scheme secure against a program that “does not know” a random function. Our key technical contribution is to quantify the “knowledge” of a program using conditional Kolmogorov complexity and to show that no small program can know many random functions.
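For orientation, the following is a minimal sketch of the two objects named in the abstract: the Nisan-Wigderson generator in its standard textbook form, and one natural conditional-Kolmogorov-complexity measure of a program’s “knowledge” of a function. The parameters d, ℓ, m, the design sets S_1, …, S_m, and the mutual-information measure I(M : f) are illustrative assumptions, not necessarily the paper’s exact formulation.

% A minimal sketch, assuming the standard NW construction: a combinatorial
% design S_1, ..., S_m \subseteq [d] of sets of size \ell with small pairwise
% intersections, and a hard function f : {0,1}^\ell -> {0,1}.
\[
  \mathrm{NW}^{f}(z) \;=\; f\bigl(z|_{S_1}\bigr)\, f\bigl(z|_{S_2}\bigr) \cdots f\bigl(z|_{S_m}\bigr),
  \qquad z \in \{0,1\}^{d}.
\]
% One illustrative way (an assumption, not necessarily the paper's exact
% measure) to quantify how much a program M "knows" about a function f,
% via conditional Kolmogorov complexity:
\[
  I(M : f) \;=\; K(f) - K(f \mid M).
\]
% Intuitively, a fixed small program M can make I(M : f) large for only a
% few functions f, so a random f is "unknown" to M with high probability.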

Year of Publication
2022
Conference Name
2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS)