On the concrete hardness of Learning with Errors

Together with Rachel Player and Sam Scott (both also from the Information Security Group at Royal Holloway, University of London), we finally managed to put out our survey on solving the Learning with Errors problem. Here’s the abstract:

The Learning with Errors (LWE) problem has become a central building block of modern cryptographic constructions. This work collects and presents hardness results for concrete instances of LWE. In particular, we discuss algorithms proposed in the literature and give the expected resources required to run them. We consider both generic instances of LWE as well as small secret variants. Since for several methods of solving LWE we require a lattice reduction step, we also review lattice reduction algorithms and use a refined model for estimating their running times. We also give concrete estimates for various families of LWE instances, provide a Sage module for computing these estimates and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.
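To make the object of study concrete, below is a minimal Python sketch of how an LWE instance (A, b = A·s + e mod q) can be generated. The parameters and the rounded-Gaussian error sampler are illustrative choices of mine, not the instances or the sampler considered in the survey.

    import numpy as np

    def gen_lwe_instance(n=64, m=128, q=4093, sigma=3.2, seed=0):
        """Return (A, b, s) with b = A*s + e mod q and e a rounded Gaussian.

        Toy parameters for illustration only; real instances use far
        larger n and a proper discrete Gaussian sampler.
        """
        rng = np.random.default_rng(seed)
        A = rng.integers(0, q, size=(m, n), dtype=np.int64)  # uniform public matrix
        s = rng.integers(0, q, size=n, dtype=np.int64)       # uniform secret
        e = np.rint(rng.normal(0.0, sigma, size=m)).astype(np.int64)  # rounded Gaussian error
        b = (A @ s + e) % q
        return A, b, s

    if __name__ == "__main__":
        A, b, s = gen_lwe_instance()
        print(A.shape, b.shape)  # (128, 64) (128,)

Search-LWE then asks to recover s from (A, b); without the error term e, this would be plain linear algebra mod q, which is why the error distribution is what makes the problem hard.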


Lattice Stuff

Together with Jean-Charles Faugère, Robert Fitzpatrick and Ludovic Perret, we managed to finish our work on the cryptanalysis of all proposed parameters of the public-key encryption scheme proposed at PKC 2012 by Huang, Liu and Yang. The key observation is that the scheme can be viewed as an easy LWE instance:

In this paper, we investigate the security of a public-key encryption scheme introduced by Huang, Liu and Yang (HLY) at PKC’12. This new scheme can be provably reduced to the hardness of solving a set of quadratic equations whose coefficients of highest degree are chosen according to a discrete Gaussian distribution, while the other terms are chosen uniformly at random. Such a problem is a variant of the classical problem of solving a system of non-linear equations (PoSSo), which is known to be hard for random systems. The main hypothesis of Huang, Liu and Yang is that their variant is not easier than solving PoSSo for random instances. In this paper, we disprove this hypothesis. To this end, we exploit the fact that the new problem proposed by Huang, Liu and Yang reduces to an easy instance of the Learning With Errors (LWE) problem. The main contribution of this paper is to show that security and efficiency are essentially incompatible for the HLY proposal. That is, one cannot find parameters which yield a secure and practical scheme. For instance, we estimate that a public-key of at least 1.03 GB is required to achieve 80-bit security against known attacks. As a proof of concept, we present practical attacks against all the parameters proposed by Huang, Liu and Yang. We have been able to recover the private-key in roughly one day for the first challenge proposed by HLY and in roughly three days for the second challenge.

Furthermore, I gave a talk yesterday at SIAM AG’13 on solving LWE with a binary secret using a variant of the BKW algorithm.

BKW: Update

We have updated our pre-print titled “On the Complexity of the BKW Algorithm on LWE” on ePrint.

There are two main changes, which are also the reasons why I am mentioning this update here.

  1. We included a more thorough comparison with other approaches, in particular with lattice reduction (reducing LWE to SIS). To our surprise, BKW is quite competitive even in relatively modest dimensions. For Regev’s and Lindner-Peikert’s parameter sets (as interpreted here), we get that BKW is at least as fast as BKZ starting in dimension n \approx 250, which I find very low (see Table 4 on page 19).
  2. We also provide an alternative approximation for the running time of BKZ. The standard estimate due to Lindner and Peikert is \log_2 T_{sec} = 1.8/\log_2 \delta_0 - 110, where \delta_0 is the targeted root-Hermite factor. Interpolating estimates from the BKZ 2.0 simulator and reflecting on the doubly exponential running time of BKZ in the blocksize \beta, we found \log_2 T_{sec} = 0.009/\log_2^2 \delta_0 - 27. However, since this might be controversial, we include estimates for both models (a small sketch evaluating both follows below).
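For concreteness, here is a short Python sketch evaluating both runtime models for a given target root-Hermite factor. The example value of \delta_0 is mine, chosen only for illustration; the two formulas are the ones quoted above.

    import math

    def bkz_runtime_lp(delta0):
        """Lindner-Peikert model: log2 of the BKZ running time in seconds."""
        return 1.8 / math.log(delta0, 2) - 110

    def bkz_runtime_bkz2(delta0):
        """BKZ 2.0-based model from the pre-print: log2 seconds."""
        return 0.009 / math.log(delta0, 2) ** 2 - 27

    if __name__ == "__main__":
        delta0 = 1.0065  # illustrative target root-Hermite factor
        print(f"Lindner-Peikert model: 2^{bkz_runtime_lp(delta0):.1f} seconds")
        print(f"BKZ 2.0 model:         2^{bkz_runtime_bkz2(delta0):.1f} seconds")

For this \delta_0, the two models give roughly 2^{82.5} and 2^{75.9} seconds respectively, which illustrates how much the choice of model can matter when comparing BKW against lattice reduction.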