Lucky Microseconds: A Timing Attack on Amazon’s s2n Implementation of TLS

Over the summer, Kenny Paterson and I spent some time looking at Amazon's (to be precise, Amazon Web Services Labs') implementation of TLS. This implementation, called s2n, was released in June with the intent of providing a clean, easy-to-read, small implementation of a core subset of the TLS protocol.


Sage Code for GGH Cryptanalysis by Hu and Jia

Recently, Yupu Hu and Huiwen Jia put a paper on the Cryptology ePrint Archive which describes a successful attack on the GGH (and GGHLite) candidate multilinear map. The attack does not try to recover the secret g or any other secret parameter of the map. Instead, it solves the Extraction κ-graded CDH problem (Ext-GCDH) directly.


On the concrete hardness of Learning with Errors

Together with Rachel Player and Sam Scott (both also from the Information Security Group at Royal Holloway, University of London), I finally managed to put out our survey on solving the Learning with Errors problem. Here's the abstract:

The Learning with Errors (LWE) problem has become a central building block of modern cryptographic constructions. This work collects and presents hardness results for concrete instances of LWE. In particular, we discuss algorithms proposed in the literature and give the expected resources required to run them. We consider both generic instances of LWE as well as small secret variants. Since for several methods of solving LWE we require a lattice reduction step, we also review lattice reduction algorithms and use a refined model for estimating their running times. We also give concrete estimates for various families of LWE instances, provide a Sage module for computing these estimates and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.
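To give a flavour of the kind of computation behind such estimates, here is a standard back-of-the-envelope Sage estimate for the distinguishing (dual) attack under the root-Hermite-factor model for lattice reduction. This illustrates the style of estimate only; it is not the Sage module that accompanies the paper, and the toy instance and the value of δ below are made up:

    # Back-of-the-envelope estimate for the distinguishing attack on LWE,
    # under the usual root-Hermite factor model: reduction of quality delta
    # finds a vector of norm delta^m * q^(n/m), and a vector v distinguishes
    # <a, s> + e from uniform with advantage about exp(-pi*(|v|*alpha)^2).
    # (Illustration only; not the Sage module that accompanies the paper.)
    def distinguishing_advantage(n, q, alpha, delta):
        m = sqrt(n * log(q, 2) / log(delta, 2))   # optimal subdimension
        v = delta^m * q^(n / m)                   # expected norm of the short vector
        return exp(-pi * (v * alpha)^2)

    n, q = 128, 16411                             # toy Regev-style instance, q ~ n^2
    alpha = 1 / (sqrt(n) * log(n)^2)
    print(distinguishing_advantage(n, q, alpha, 1.0065).n())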


Cryptanalysis of the FHE based on GACD?

Jintai Ding and Chengdong Tao published a new preprint on the IACR’s ePrint titled A New Algorithm for Solving the Approximate Common Divisor Problem and Cryptanalysis of the FHE based on GACD.

Abstract. In this paper, we propose a new algorithm for solving the approximate common divisor problem, which is based on the LLL reduction algorithm applied to a certain special lattice and a linear-equation-solving algorithm over the integers. Through both theoretical argument and experimental data, we show that our new algorithm is a polynomial-time algorithm under reasonable assumptions on the parameters. We use our algorithm to solve concrete problems that no other algorithm could solve before. Furthermore, we show that our algorithm can break the fully homomorphic encryption schemes which are based on the approximate common divisor problem in polynomial time in terms of the system parameter λ.

It is worth emphasising that the Approximate GCD problem not only underpins one of the few fully homomorphic encryption schemes we have, but is also somewhat related to one of the two candidates for multilinear maps. So if it could be shown to be easy, this would be somewhat sad, as the choice of problems for building fancy crypto schemes would have become a lot smaller. So what is the Approximate GCD problem?

Approximate Greatest Common Divisor Problem: Given polynomially many samples x_{i} = q_{i}·p + r_{i}, where x_{i} = O(2^{γ}), r_{i} = O(2^{ρ}) and p = O(2^{η}), recover p.
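To make this concrete, generating a toy instance in Sage looks as follows; the parameter sizes here are small illustrative choices of mine, far below anything cryptographic:

    # Generate a toy AGCD instance; parameter sizes are illustrative only.
    gamma, eta, rho, t = 100, 80, 2, 10

    p = random_prime(2^eta, lbound=2^(eta - 1))            # p = O(2^eta)
    q = [ZZ.random_element(2^(gamma - eta)) for _ in range(t)]
    r = [ZZ.random_element(-2^rho, 2^rho + 1) for _ in range(t)]
    x = [q[i] * p + r[i] for i in range(t)]                # x_i = q_i*p + r_i = O(2^gamma)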

The algorithm proceeds by using the LLL algorithm to find relations ∑_{j} a_{ij}·x_{j} = ∑_{j} a_{ij}·r_{j}, i.e. integer combinations of the samples in which all multiples of p cancel.

Note that if enough such relations can be found then this gives a linear system of equations in the r_{j}, which we then merely need to solve. So how does the algorithm use LLL to recover the a_{ij}? It sets up a lattice basis of dimension (t+1) × (t+1) as follows:

    B = [ 1  0  …  0  x_1 ]
        [ 0  1  …  0  x_2 ]
        [ ⋮        ⋱   ⋮  ]
        [ 0  0  …  1  x_t ]
        [ 0  0  …  0   N  ]

Here, N is simply a random integer of size O(2^{γ}). Now, the authors claim that running LLL on the lattice spanned by B returns about t−2 of the desired relations. They are unable to rigorously argue why this should happen but offer the following intuition. Any vector in the lattice spanned by B has the form v = (a_1, …, a_t, ∑_i a_i·x_i + a_0·N). Considering the last component ∑_i a_i·x_i + a_0·N = ∑_i a_i·(q_i·p + r_i) + a_0·N = (∑_i a_i·q_i)·p + a_0·N + ∑_i a_i·r_i, they speculate that LLL would focus on the left-hand part of this expression, i.e. (∑_i a_i·q_i)·p + a_0·N, as the remaining term ∑_i a_i·r_i would be rather small anyway. Making (∑_i a_i·q_i)·p + a_0·N = 0 implies a_0 = 0 and ∑_i a_i·q_i = 0, which in turn implies ∑_i a_i·x_i = ∑_i a_i·r_i, i.e. a relation of the desired form, if I understood correctly.

An implementation of the first step of this algorithm in Sage is given here (in a Sage cell). Indeed, if you plug in the parameters from the authors' table on page 9, you do get the desired relations out.
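Since the linked cell is not inlined here, the snippet below gives a minimal version of this first step, continuing the toy instance generated above. The basis shape follows my reading of the algorithm as reconstructed above, so treat it as a sketch rather than a faithful reimplementation; the final count checks how many reduced rows are genuine relations:

    # Build the (t+1) x (t+1) basis from above and LLL-reduce it; reuses
    # p, q, r, x and the toy parameters from the previous snippet. This is
    # a sketch of my reading of the algorithm, not the authors' code.
    N = ZZ.random_element(2^(gamma - 1), 2^gamma)          # random N = O(2^gamma)

    B = matrix(ZZ, t + 1, t + 1)
    for i in range(t):
        B[i, i] = 1
        B[i, t] = x[i]
    B[t, t] = N

    M = B.LLL()

    # a reduced row (a_1, ..., a_t, s) is a relation sum_i a_i x_i = sum_i a_i r_i
    # precisely when its N-coefficient a_0 vanishes and sum_i a_i q_i = 0
    def is_relation(v):
        a0 = (v[t] - sum(v[i] * x[i] for i in range(t))) / N
        return a0 == 0 and sum(v[i] * q[i] for i in range(t)) == 0

    print(sum(1 for v in M.rows() if is_relation(v)), "relations out of", t + 1)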

Finally, let’s look at the application to parameters as they are used in cryptography. The authors consider the fully homomorphic encryption scheme due to Marten van Dijk, Craig Gentry, Shai Halevi and Vinod Vaikuntanathan, which sets γ = λ^{5}, η = λ^{2} and ρ = λ for a security level of λ, i.e. 2^{λ} operations should be needed to break it. Here is what the authors write:

We apply our algorithm to the parameters in [6] and we could break all the cases where their parameter γ < 2^{20}.

It is not really clear to me whether the authors actually ran their attack or not. On the one hand, a choice of parameters with γ < 2^{20} would correspond to λ = 16, as (2^{20})^{1/5} = 2^{4}; attacking such small instances would not mean much. On the other hand, the authors' own estimates for the cost of LLL would put their attack at around 2^{135} operations.

However, as far as I can tell, it does not work for these kinds of parameters. That is, LLL fails to find the desired relations once we choose parameters as they are suggested in the cryptographic literature (cf. the example in the Sage cell above).

There are two reasons why the algorithm might fail:

  1. The target vectors might not be among the shortest vectors in the lattice. For the parameters on page 9 this condition seems to hold, but it is not clear that it holds in general. While on page 7 the authors argue for the existence of target vectors within the approximation radius of LLL, the algorithm on page 8 expects vectors which are smaller than the Gaussian heuristic suggests, which to me seems to be what is actually needed.
  2. The target vectors are the shortest vectors in the lattice. However, LLL cannot find them. In such a case, the situation seems somewhat similar to the one discussed in this comment from [6] (http://eprint.iacr.org/2009/616.pdf):

On the other hand, when t is large, ~v likely is the shortest vector in L, but known lattice reduction algorithms will not be able to find it efficiently. Specifically, as a rule of thumb, they require time roughly 2^{t/k} to output a 2^{k} approximation of the shortest vector. Since clearly there are exponentially (in t) many vectors in L of length at most |x_0|√(t + 1) < 2^{γ}√(t + 1), which is about 2^{(η−ρ)} times longer than ~v, we need better than a 2^{(η−ρ)} approximation. For t ≥ γ/η, the time needed to guarantee a 2^{η} approximation (which is not even good enough to recover ~v) is roughly 2^{γ/η^{2}}. Thus setting γ/η^{2} = ω(log λ) foils this attack.

So if I understand this correctly, they should have a condition on t which implies that the target vectors are smaller than what the Gaussian heuristic suggests by the appropriate LLL Unique SVP factor. In particular, they ask if there are target vectors with

|μ_i| < 1/√(t+1) · 2^{γ/(t+1) + t/4}

but it should be more like

|μ_i| < τ/√(t+1) · 2^{γ/(t+1) − t/4}

i.e. within the LLL approximation radius there shouldn’t be any other vectors (where τ is the Unique-SVP factor ~0.5).
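Plugging numbers into the two bounds makes the gap visible. The sketch below evaluates both (in log_2) for DGHV-style parameters γ = λ^{5}, η = λ^{2}, taking t = γ/η as in the quoted comment; this is back-of-the-envelope arithmetic of mine, not a computation from either paper. A negative value for the second bound means no non-zero integer vector can satisfy it at all:

    # Evaluate, in log_2, the authors' bound and the Unique-SVP style bound
    # from above for DGHV-style parameters (my own back-of-the-envelope
    # arithmetic, not a computation from either paper).
    def log2_bounds(lam):
        gamma, eta = lam^5, lam^2
        t = gamma // eta               # t >= gamma/eta, as in the quoted comment
        authors = gamma / (t + 1) + t / 4 - log(sqrt(t + 1), 2)
        usvp = gamma / (t + 1) - t / 4 + log(0.5 / sqrt(t + 1), 2)  # tau ~ 0.5
        return t, authors.n(digits=6), usvp.n(digits=6)

    for lam in (16, 32, 52):
        print(lam, log2_bounds(lam))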

Update: Since the authors show that if a short vector exists it must satisfy their conditions, this argument is invalid. Still, the authors did not show that they are able to find short enough vectors for parameters as they are found in the literature.

Of course, I might have missed something.

Lazy Modulus Switching for the BKW Algorithm on LWE

Our paper on solving small-secret LWE faster, joint with Jean-Charles Faugère, Robert Fitzpatrick and Ludovic Perret, just hit ePrint (and was accepted for presentation at PKC 2014).

Abstract. Some recent constructions based on LWE do not sample the secret uniformly at random but rather from some distribution which produces small entries. The most prominent of these is the binary-LWE problem where the secret vector is sampled from {0,1}^{*} or {-1,0,1}^{*}. We present a variant of the BKW algorithm for binary-LWE and other small secret variants and show that this variant reduces the complexity for solving binary-LWE. We also give estimates for the cost of solving binary-LWE instances in this setting and demonstrate the advantage of this BKW variant over standard BKW and lattice reduction techniques applied to the SIS problem. Our variant can be seen as a combination of the BKW algorithm with a lazy variant of modulus switching which might be of independent interest.
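In case modulus switching is unfamiliar: given an LWE sample (a, c = ⟨a, s⟩ + e mod q), one scales both parts by p/q and rounds, obtaining a sample modulo a smaller p whose extra rounding noise is proportional to the size of the secret, which is why small secrets suit the technique. Here is a toy Sage illustration of plain (eager) modulus switching, with parameters of my choosing; the lazy variant from the paper defers this scaling, which the snippet does not attempt to show:

    # Toy illustration of plain modulus switching for small-secret LWE
    # (parameters made up; this is not the lazy variant from the paper).
    n, q, p = 10, 65537, 257
    s = vector(ZZ, [ZZ.random_element(0, 2) for _ in range(n)])   # binary secret

    a = vector(ZZ, [ZZ.random_element(q) for _ in range(n)])
    e = ZZ.random_element(-3, 4)                                  # small error
    c = (a * s + e) % q                                           # c = <a, s> + e mod q

    # scale from modulus q down to modulus p and round
    a_p = vector(ZZ, [round(p / q * ai) % p for ai in a])
    c_p = round(p / q * c) % p

    # the result is an LWE sample mod p; its noise is p/q * e plus a rounding
    # term <round-off, s>, which stays small because s is small
    e_p = (c_p - a_p * s) % p
    if e_p > p // 2:
        e_p -= p
    print(e_p)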

The code used to produce experimental data is available on bitbucket, source code to compute our complexity estimations is also available. Slides for a presentation discussing this work are also available on bitbucket.

libFES: Fast Exhaustive Search for Polynomial Systems over F2

Charles Bouillaguet set up a nice shiny website for libFES the library for exhaustive search on polynomial systems over \mathbb{F}_2. The library has a Sage interface, so it’s easy to get started. It’s also integrated in Charles’ upcoming one-stop boolean system solving patch.
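I will not reproduce the interface from memory here, but to see the kind of problem libFES is built for: given a system of quadratic polynomials in n variables over \mathbb{F}_2, find their common zeros. The naive stock-Sage approach below enumerates all 2^n assignments; this is exactly the exhaustive search that libFES implements orders of magnitude faster:

    # The problem libFES solves, done naively in stock Sage (this is not
    # libFES's interface): enumerate all 2^n points of GF(2)^n and keep
    # the common zeros of a random quadratic system.
    from itertools import product

    n = 8
    R = BooleanPolynomialRing(n, 'x')
    system = [R.random_element(degree=2) for _ in range(n)]

    solutions = [v for v in product([0, 1], repeat=n)
                 if all(f(*v) == 0 for f in system)]
    print(solutions)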

He calls his benchmarketing “bragging rights” … and boy has he earned those rights! Check it out, libFES is fast!

Lattice Stuff

We (with Jean-Charles Faugère, Robert Fitzpatrick and Ludovic Perret) managed to finish our work on the cryptanalysis of all proposed parameters of the public-key encryption scheme presented at PKC 2012 by Huang, Liu and Yang. The key observation is that the scheme can be viewed as an easy LWE instance:

In this paper, we investigate the security of a public-key encryption scheme introduced by Huang, Liu and Yang (HLY) at PKC’12. This new scheme can be provably reduced to the hardness of solving a set of quadratic equations whose coefficients of highest degree are chosen according to a discrete Gaussian distribution, the other terms being chosen uniformly at random. Such a problem is a variant of the classical problem of solving a system of non-linear equations (PoSSo), which is known to be hard for random systems. The main hypothesis of Huang, Liu and Yang is that their variant is not easier than solving PoSSo for random instances. In this paper, we disprove this hypothesis. To this end, we exploit the fact that the new problem proposed by Huang, Liu and Yang reduces to an easy instance of the Learning With Errors (LWE) problem. The main contribution of this paper is to show that security and efficiency are essentially incompatible for the HLY proposal. That is, one cannot find parameters which yield a secure and a practical scheme. For instance, we estimate that a public key of at least 1.03 GB is required to achieve 80-bit security against known attacks. As a proof of concept, we present practical attacks against all the parameters proposed by Huang, Liu and Yang. We have been able to recover the private key in roughly one day for the first challenge proposed by HLY and in roughly three days for the second challenge.

Furthermore, I gave a talk yesterday at SIAM AG’13 on solving LWE with binary secret using a variant of the BKW algorithm.