On the concrete hardness of Learning with Errors

Together with Rachel Player and Sam Scott (both also from the Information Security Group at Royal Holloway, University of London), we finally managed to put out our survey on solving the Learning with Errors problem. Here’s the abstract:

The Learning with Errors (LWE) problem has become a central building block of modern cryptographic constructions. This work collects and presents hardness results for concrete instances of LWE. In particular, we discuss algorithms proposed in the literature and give the expected resources required to run them. We consider both generic instances of LWE as well as small secret variants. Since for several methods of solving LWE we require a lattice reduction step, we also review lattice reduction algorithms and use a refined model for estimating their running times. We also give concrete estimates for various families of LWE instances, provide a Sage module for computing these estimates and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.
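As a side note, the estimates in the paper concern concrete instances such as the following. Here is a minimal Sage sketch of a toy LWE instance; this is not the Sage module from the paper, and the parameters are illustrative rather than a recommendation:

    # Generate samples (a, <a,s> + e) from a toy LWE instance over GF(q).
    from sage.stats.distributions.discrete_gaussian_integer import \
        DiscreteGaussianDistributionIntegerSampler

    n, q = 128, 4093                  # toy dimension and modulus
    sigma = RR(8/sqrt(2*pi))          # standard deviation of the error
    D = DiscreteGaussianDistributionIntegerSampler(sigma=sigma)
    K = GF(q)
    s = random_vector(K, n)           # the secret

    def lwe_sample():
        a = random_vector(K, n)
        return a, a.dot_product(s) + K(D())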


Improved Parameters and an Implementation of Graded Encoding Schemes from Ideal Lattices

Our paper (with Catalin Cocis, Fabien Laguillaumie and Adeline Langlois) on picking parameters and implementing GGHLite just hit ePrint. Here’s the abstract:

We discuss how to set parameters for GGH-like graded encoding schemes approximating cryptographic multilinear maps from ideal lattices and propose a strategy which reduces parameter sizes for concrete instances. Secondly, we discuss a first software implementation of a graded encoding scheme based on GGHLite, an improved variant of Garg, Gentry and Halevi’s construction (GGH) due to Langlois, Stehlé and Steinfeld. Thirdly, we provide an implementation of non-interactive N-partite Diffie-Hellman key exchange. We discuss our implementation strategies and show that our implementation outperforms previous work.
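To give a flavour of the application, here is a toy Sage sketch of the shape of non-interactive N-partite key exchange. The “encodings” below are insecure identity stand-ins, not our library’s API: a real graded encoding hides each e_i while still permitting the top-level product to be computed and extracted.

    # Toy, INSECURE stand-in illustrating the shape of N-partite
    # non-interactive key exchange from a graded encoding scheme.
    import hashlib

    q = next_prime(2^127)             # toy modulus

    def sample():                     # a party's secret
        return randint(2, q - 1)

    def encode(e):                    # published encoding (toy: identity!)
        return e

    def mul(x, y):                    # graded multiplication (toy: mod-q product)
        return x * y % q

    def extract(x):                   # extract a shared bitstring
        return hashlib.sha256(str(x).encode()).hexdigest()

    def shared_key(my_secret, others_pub):
        x = my_secret
        for u in others_pub:
            x = mul(x, u)
        return extract(x)

    secrets = [sample() for _ in range(3)]      # N = 3 parties
    pubs = [encode(e) for e in secrets]
    k0 = shared_key(secrets[0], pubs[1:])
    k1 = shared_key(secrets[1], [pubs[0], pubs[2]])
    assert k0 == k1                   # all parties derive the same key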

Lazy Modulus Switching for the BKW Algorithm on LWE

Our paper (with Jean-Charles Faugère, Robert Fitzpatrick and Ludovic Perret) on solving small secret LWE faster just hit ePrint (and was accepted for presentation at PKC 2014).

Abstract. Some recent constructions based on LWE do not sample the secret uniformly at random but rather from some distribution which produces small entries. The most prominent of these is the binary-LWE problem where the secret vector is sampled from {0, 1}* or {-1, 0, 1}*. We present a variant of the BKW algorithm for binary-LWE and other small secret variants and show that this variant reduces the complexity for solving binary-LWE. We also give estimates for the cost of solving binary-LWE instances in this setting and demonstrate the advantage of this BKW variant over standard BKW and lattice reduction techniques applied to the SIS problem. Our variant can be seen as a combination of the BKW algorithm with a lazy variant of modulus switching which might be of independent interest.
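For intuition, here is a minimal Sage sketch of plain (i.e. non-lazy) modulus switching, which maps an LWE sample mod q to one mod a smaller p by scaling and rounding; our variant applies this lazily, and the conventions below are illustrative:

    # Scale an LWE sample (a, c) from modulus q down to modulus p < q.
    # With a small secret s, the rounding noise <round((p/q)*a) - (p/q)*a, s>
    # stays small, which is what makes this useful for binary-LWE.
    def mod_switch(a, c, q, p):
        a_p = vector(ZZ, [round(p/q * ZZ(x)) % p for x in a])
        c_p = round(p/q * ZZ(c)) % p
        return a_p, c_p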

The code used to produce the experimental data is available on Bitbucket, and so is the source code for computing our complexity estimates. Slides for a presentation discussing this work are available on Bitbucket as well.

Lattice Stuff

We (with Jean-Charles Faugère, Robert Fitzpatrick and Ludovic Perret) managed to finish our work on the cryptanalysis of all proposed parameters of the public-key encryption scheme introduced at PKC 2012 by Huang, Liu and Yang. The key observation is that the scheme can be viewed as an easy LWE instance:

In this paper, we investigate the security of a public-key encryption scheme introduced by Huang, Liu and Yang (HLY) at PKC’12. This new scheme can be provably reduced to the hardness of solving a set of quadratic equations whose coefficients of highest degree are chosen according to a discrete Gaussian distribution, the other terms being chosen uniformly at random. Such a problem is a variant of the classical problem of solving a system of non-linear equations (PoSSo), which is known to be hard for random systems. The main hypothesis of Huang, Liu and Yang is that their variant is not easier than solving PoSSo for random instances. In this paper, we disprove this hypothesis. To this end, we exploit the fact that the new problem proposed by Huang, Liu and Yang reduces to an easy instance of the Learning with Errors (LWE) problem. The main contribution of this paper is to show that security and efficiency are essentially incompatible for the HLY proposal. That is, one cannot find parameters which yield a secure and practical scheme. For instance, we estimate that a public key of at least 1.03 GB is required to achieve 80-bit security against known attacks. As a proof of concept, we present practical attacks against all the parameters proposed by Huang, Liu and Yang. We were able to recover the private key in roughly one day for the first challenge proposed by HLY, and in roughly three days for the second challenge.

Furthermore, I gave a talk yesterday at SIAM AG’13 on solving LWE with a binary secret using a variant of the BKW algorithm.

BKW: Update

We have updated our pre-print titled “On the Complexity of the BKW Algorithm on LWE” on ePrint.

There are two main changes, and they are the reasons why I am mentioning this update here.

  1. We included a more thorough comparison with other approaches, in particular, with lattice reduction (reducing LWE to SIS). To our surprise, BKW is quite competitive even in relatively modest dimensions. For Regev’s and Lindner-Peikert’s parameter sets (as interpreted here) we get that BKW is at least as fast as BKZ starting in dimension n \approx 250, which I find very low (see Table 4 on page 19).
  2. We also provide an alternative approximation for the running time of BKZ. The standard estimate due to Lindner and Peikert is \log_2 T_{sec} = 1.8/\log_2 \delta_0 - 110, where \delta_0 is the targeted root-Hermite factor. Interpolating estimates from the BKZ 2.0 simulator and reflecting the doubly exponential running time of BKZ in the blocksize \beta, we arrived at \log_2 T_{sec} = 0.009/\log_2^2 \delta_0 - 27. However, since this might be controversial, we include estimates for both models; the short script below evaluates them.
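Here is a minimal Sage sketch evaluating the two formulas above for a given target \delta_0 (the example value 1.0065 is merely illustrative):

    # Evaluate both BKZ running-time models for a target root-Hermite
    # factor delta_0; results are log_2 of the running time in seconds.
    def bkz_runtime(delta_0):
        ld = log(delta_0, 2)
        lp  = 1.8/ld - 110        # Lindner-Peikert model
        fit = 0.009/ld^2 - 27     # our BKZ 2.0 interpolation
        return lp, fit

    bkz_runtime(1.0065)           # illustrative target delta_0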

M4RI 20121224

I have just pushed the button to release M4RI 20121224. The main feature of this release is a considerable performance improvement. It all started with “Fast matrix decomposition in F2” by Enrico Bertolazzi and Anna Rimoldi showing up on the arXiv. Here’s the abstract:

In this work an efficient algorithm to perform a block decomposition (and so to compute the rank) of large dense rectangular matrices with entries in F2 is presented. Depending on the way the matrix is stored, operations acting on rows or on blocks of consecutive columns (stored as one integer) should be preferred. In this paper, an algorithm that completely avoids column permutations is given. In particular, a block decomposition is presented and its running times are compared with those of SAGE.

… and that comparison made M4RI (which realises this functionality in Sage) look pretty bad. I didn’t (and still don’t) share the implicit assumption that avoiding column swaps is the key ingredient in making their code so much faster than ours; I assume the impressive timings are due to a very efficient base-case implementation. Anyway, we sat down and looked for performance bottlenecks, the result of which is 20121224. I actually have no idea whether we caught up to the code described in Enrico’s and Anna’s pre-print, as they did not publish their sources.

Still, the performance improvements over 20120613 were worth the trouble. Below are two plots of the (normalised) leading constants for multiplication and elimination respectively (more plots on imgur). That is, each plot shows the running time divided by n^{2.807}, scaled by 10^9. In theory, these plots should all have slope 0.
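Since dense matrices over GF(2) in Sage are backed by M4RI, the constants can be reproduced along the following lines; this is a rough sketch (wall-clock timing, single run) rather than the benchmarking code behind the plots:

    # Estimate the normalised leading constant for GF(2) matrix
    # multiplication, which Sage dispatches to M4RI.
    import time

    def leading_constant(n):
        A = random_matrix(GF(2), n, n)
        B = random_matrix(GF(2), n, n)
        t0 = time.time()
        C = A * B
        t = time.time() - t0
        return t / n^2.807 * 10^9     # normalised as in the plots

    for n in (2000, 4000, 8000):
        print(n, leading_constant(n))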

[Plots: normalised leading constants for multiplication and PLE decomposition on an Intel Core i7]

Finally, here’s the plot for “Fast matrix decomposition in F2”, which starts very small but has a rather large slope. That’s why I concluded that the performance stems from a very efficient base case. I should get in touch with Enrico and Anna about this.

How often can you revisit Polly Cracker before it gets silly?

We (with Pooya Farshim, Gottfried Herold, Ludovic Perret and Jean-Charles Faugère) updated our pre-print of the full version of the Polly Cracker paper. It is a major revision based on Gottfried’s corrections and critique.