Our — together with Florian Göpfert, Fernando Virdia and Thomas Wunderer — paper Revisiting the Expected Cost of Solving uSVP and Applications to LWE is now available on ePrint. Here’s the abstract:
Reducing the Learning with Errors problem (LWE) to the Unique-SVP problem and then applying lattice reduction is a commonly relied-upon strategy for estimating the cost of solving LWE-based constructions. In the literature, two different conditions are formulated under which this strategy is successful. One, widely used, going back to Gama & Nguyen’s work on predicting lattice reduction (Eurocrypt 2008) and the other recently outlined by Alkim et al. (USENIX 2016). Since these two estimates predict significantly different costs for solving LWE parameter sets from the literature, we revisit the Unique-SVP strategy. We present empirical evidence from lattice-reduction experiments exhibiting a behaviour in line with the latter estimate. However, we also observe that in some situations lattice-reduction behaves somewhat better than expected from Alkim et al.’s work and explain this behaviour under standard assumptions. Finally, we show that the security estimates of some LWE-based constructions from the literature need to be revised and give refined expected solving costs.
Our work is essentially concerned with spelling out in more detail and experimentally verifying a prediction made in the New Hope paper on when lattice reduction successfully recovers an unusually short vector.
Denoting by $v$ the unusually short vector in some lattice $\Lambda$ of dimension $d$ (say, derived from some LWE instance using Kannan's embedding), $\beta$ the block size used for the BKZ algorithm and $\delta_0$ the root-Hermite factor for $\beta$, then the New Hope paper predicts that $v$ can be found if

$$\sqrt{\beta/d}\,\|v\| \leq \delta_0^{2\beta - d}\,\mathrm{Vol}(\Lambda)^{1/d},$$

under the assumption that the Geometric Series Assumption (GSA) holds (until a projection of the unusually short vector is found).
The rationale is that this condition ensures that the projection of $v$ orthogonally to the first $d-\beta$ (Gram-Schmidt) vectors (denoted as $\pi_{d-\beta+1}(v)$) is shorter than the expectation for the $(d-\beta+1)$-th Gram-Schmidt vector $b^*_{d-\beta+1}$ under the GSA and thus would be found by the SVP oracle when called on the last block of size $\beta$. Hence, for any $\beta$ satisfying the above inequality, the actual behaviour would deviate from that predicted by the GSA. Finally, the argument can be completed by appealing to the intuition that a deviation from expected behaviour on random instances — such as the GSA — leads to a revelation of the underlying structural, secret information. In any event, such a deviation would already solve Decision-LWE.
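To make the condition concrete, here is a rough Python sketch of how one might evaluate it (illustrative only, not the code used in the paper): it checks the inequality in log-space, uses the usual limiting heuristic for the root-Hermite factor $\delta_0(\beta)$, and searches for the smallest block size for which the condition holds. The numbers at the end are placeholders rather than a parameter set from the paper.

```python
# Rough sketch of the 2016 success condition (illustrative, not the paper's code):
# BKZ-beta is predicted to recover v once
#     sqrt(beta/d) * ||v|| <= delta_0^(2*beta - d) * Vol(L)^(1/d).
from math import e, log, pi

def log_delta0(beta):
    """Usual limiting heuristic for log(delta_0) achieved by BKZ-beta."""
    return log(beta / (2 * pi * e) * (pi * beta) ** (1.0 / beta)) / (2.0 * (beta - 1))

def condition_holds(beta, d, log_vol_root, log_norm_v):
    """Check sqrt(beta/d)*||v|| <= delta_0^(2*beta-d) * Vol(L)^(1/d), in logs."""
    lhs = 0.5 * log(beta / d) + log_norm_v
    rhs = (2 * beta - d) * log_delta0(beta) + log_vol_root
    return lhs <= rhs

def min_beta(d, log_vol_root, log_norm_v):
    """Smallest block size (>= 40, where the delta_0 heuristic is sensible)
    for which the condition holds; None if no beta <= d works."""
    for beta in range(40, d + 1):
        if condition_holds(beta, d, log_vol_root, log_norm_v):
            return beta
    return None

# Placeholder numbers (not a parameter set from the paper): an embedding lattice
# of dimension d with Vol(L)^(1/d) ~ sqrt(q) and ||v|| ~ sigma * sqrt(d).
d, q, sigma = 400, 2 ** 13, 3.2
print(min_beta(d, 0.5 * log(q), log(sigma) + 0.5 * log(d)))
```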
In our work, we spell out this argument in more detail (e.g. how $v$ is recovered from $\pi_{d-\beta+1}(v)$) and throw 23k core hours at the problem of checking if the predicted behaviour, e.g.

[plot: predicted behaviour]

matches the observed behaviour, e.g.

[plot: observed behaviour]
Just like for the above plots, the general answer is a clear “yes”.
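As a toy illustration of that lifting step (a sketch under simplifying assumptions, not the paper's code): once a lattice vector with the same projection as $v$ orthogonally to the first $d-\beta$ basis vectors is in hand, size-reducing it against those vectors, i.e. running Babai's nearest-plane from back to front, returns $v$ itself whenever $v$'s Gram-Schmidt coefficients with respect to them all round to zero. The numpy example below builds a synthetic block with large Gram-Schmidt norms so that this condition clearly holds.

```python
# Toy numpy sketch of the lifting step (illustrative assumptions, not the paper's code).
import numpy as np

def size_reduce(w, B):
    """Size-reduce w against the rows of B (Babai nearest-plane, back to front)."""
    Bstar = np.array(B, dtype=float)
    # Gram-Schmidt orthogonalisation of the rows of B.
    for i in range(B.shape[0]):
        for j in range(i):
            Bstar[i] -= (Bstar[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j]) * Bstar[j]
    w = np.array(w, dtype=float)
    for j in reversed(range(B.shape[0])):
        c = round((w @ Bstar[j]) / (Bstar[j] @ Bstar[j]))
        w -= c * B[j]
    return w

rng = np.random.default_rng(1)
k, d = 8, 12
# A synthetic "first k basis vectors" block with large Gram-Schmidt norms (~100),
# so that the short vector's Gram-Schmidt coefficients all round to zero.
B = np.triu(rng.integers(-2, 3, (k, d)))
B += np.hstack([100 * np.eye(k, dtype=int), np.zeros((k, d - k), dtype=int)])

v = rng.integers(-3, 4, d)          # stand-in for the unusually short vector v
w = v + rng.integers(-5, 6, k) @ B  # same projection as v orthogonally to span(B)
print(np.array_equal(size_reduce(w, B), v))  # True: v is recovered exactly
```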
Pretty Pictures or GTFO
I forgot the most important bit. The behaviour of the BKZ algorithm on uSVP(-BDD) instances can be observed in this video.
You can observe the basis approaching the GSA until the SVP oracle finds the projection $\pi_{d-\beta+1}(v)$ of the unusually short vector. From $\pi_{d-\beta+1}(v)$, $v$ is then immediately recovered using size reduction. The grey area is the block currently being worked on. The notation in the legend isn't consistent with the plots above or even internally ($n$ v $d$), but the general idea should still be apparent. In case you're wondering about the erratic behaviour of the tails (which occasionally go all over the place), this is due to a bug in fpylll which has recently been fixed.
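If you want to reproduce a (much smaller) version of this picture yourself, the fpylll sketch below is one way to start. It uses placeholder parameters and a generic random $q$-ary lattice rather than an embedded LWE instance, so it only shows the basis approaching the GSA, not the uSVP deviation, and it is not the script behind the video.

```python
# Rough fpylll sketch (placeholder parameters, not the script behind the video):
# BKZ-reduce a random q-ary lattice and compare its log Gram-Schmidt profile
# with the GSA line.
from math import e, log, pi
from fpylll import BKZ, GSO, IntegerMatrix, LLL

d, k, beta = 120, 60, 30
q = 2 ** 10 + 7
A = IntegerMatrix.random(d, "qary", k=k, q=q)   # Vol(L) = q^k

LLL.reduction(A)
param = BKZ.Param(block_size=beta, strategies=BKZ.DEFAULT_STRATEGY,
                  max_loops=8, flags=BKZ.MAX_LOOPS | BKZ.AUTO_ABORT)
A = BKZ.reduction(A, param)

M = GSO.Mat(A)
M.update_gso()
# get_r(i, i) is ||b*_i||^2, so half its log is log ||b*_i||.
profile = [log(M.get_r(i, i)) / 2.0 for i in range(d)]

# GSA line, 0-indexed: log ||b*_i|| ~ (d - 2*i) * log(delta_0) + (1/d) * log Vol(L),
# with delta_0 from the usual heuristic (rough at block sizes this small).
log_delta0 = log(beta / (2 * pi * e) * (pi * beta) ** (1.0 / beta)) / (2.0 * (beta - 1))
log_vol_root = k * log(q) / d
gsa = [(d - 2 * i) * log_delta0 + log_vol_root for i in range(d)]

for i in range(0, d, 10):
    print(f"{i:4d}   observed {profile[i]:7.3f}   GSA {gsa[i]:7.3f}")
```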