SandboxAQ Internships

You may or may not be aware that at SandboxAQ we have an internship residency programme. Residencies are typically remote but can be on-site; they can take place year round and last between three and twelve months, full-time or part-time. To take part, you’d need to be a PhD student or postdoc somewhere.

In the interest of advertising our programme, here are two example ideas I’d be interested in.

Add SIS and (overstretched-)NTRU to the Lattice Estimator

The name “lattice estimator” is at present more aspirational than factual. In particular, we cover algorithms for solving LWE but not algorithms for solving SIS or (overstretched) NTRU. Well, we implicitly cover SIS because solving SIS implies solving LWE (and we cost that: the “dual attack”), but we don’t have a nice interface to ask “how hard would this SIS instance be?”. Adding this would be a nice contribution to the community, given how widely that estimator is used.
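To make the wish concrete, here is a sketch of what such an interface might look like, mirroring the existing LWE.Parameters/LWE.estimate API. To be clear, this is purely hypothetical: the estimator has no SIS module at the time of writing, and the names and signature below are made up for illustration.

# Hypothetical sketch: the estimator has no SIS module yet; SIS.Parameters and
# SIS.estimate are made-up names mirroring the existing LWE interface.
from estimator import *

params = SIS.Parameters(
    n=512,              # number of equations, i.e. rows of the matrix A over Z_q
    q=12289,            # modulus
    length_bound=2048,  # norm bound on a short solution x with A*x = 0 mod q
    m=1024,             # number of columns of A, i.e. dimension of x
    tag="toy-SIS",
)
SIS.estimate(params)    # hypothetical entry point, analogous to LWE.estimate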

OPRFs from Lattices

Our first work on building OPRFs from lattices costs about 2MB of bandwidth if you ignore the zero-knowledge proofs and something like 128GB (yes, GB) if you count them. Since then, proving lattice statements has become a lot cheaper, so a natural project is to reconsider our construction: use newer/smaller proofs, tune the parameters, and prove it in a nicer game-based model or in UC. To give you a taste of what is possible: this work building a non-interactive key exchange (NIKE) has to solve essentially the same problem (noise drowning + ZK proofs) and achieves smaller parameters.

If you are interested, or have some other ideas, ping me and apply for a PQC resident position.

The One-More-ISIS Problem

In “Practical, Round-Optimal Lattice-Based Blind Signatures” by Shweta Agrawal, Elena Kirshanova, Damien Stehlé and Anshu Yadav, the authors introduce a new candidate hard lattice problem. They introduce this problem to build blind signatures, but in this blog post I’ll ignore the application and only talk about the cryptanalytic target: One-more-ISIS.

Continue reading “The One-More-ISIS Problem”

A Fun Bug? Help Wanted!

Over at Low BDD Estimate · Issue #33, Ben reports a bug that, at first sight and unfortunately, does not stand out much: for some unusual parameters, the Lattice Estimator reports wrong results. Given that we rely on a variety of heuristics to keep the running time down, and given that the whole thing is mostly developed on the side, this is – again, unfortunately – not surprising.

Debugging “corner cases” can often do wonders to improve the robustness of a given piece of software. For example, back in the days when I worked a lot on M4RI, as much as I dreaded fixing bugs that only showed up on Solaris boxes, those bugs always revealed some shady assumptions in my code that were bound to produce problems elsewhere down the line.

Indeed, I think this bug puts the finger on the heuristics we rely upon and where they can go wrong. The parameter sets that Ben has in mind are quite unusual in that we have to pick quite a large dimension d (or, equivalently, a large number m of LWE samples) to make the target uniquely short. That is, I suspect fixing this bug would take a bit more than increasing the precision of some numerical computation here or there, or fixing/adding some if statement to account for a corner case. This makes bugs like these a high-ish priority.

On the other hand, truth be told, between this, the estimator being “mostly developed on the side” and all the other stuff I have to do, I doubt I’ll sink significant time into fixing this bug anytime soon.

But, and this is the point of this post, perhaps someone would like to take this bug as an invitation to help improve the Lattice Estimator? While the estimator is quite widely relied upon to, well, estimate the difficulty of solving LWE and related problems, its bus factor is uncomfortably low. I’d say this bug would take whoever attempts to fix it on a whirlwind tour through the code base: a good way to learn it and to improve it.

Interested? Get in touch.

Lattice Estimator, Rebooted

We have “rebooted” the LWE Estimator as the Lattice Estimator. This was born out of frustration with the limitations of the old codebase.

  • Here is how we had to express, e.g., NIST Round 1 Kyber-512 for the “Estimate all the {LWE, NTRU} schemes!” project:

    n = 512
    sd = 1.5811388300841898
    q = 7681
    alpha = sqrt(2*pi)*sd/RR(q)
    m = n
    secret_distribution = "normal"
    primal_usvp(n, alpha, q, secret_distribution=secret_distribution, m=m)
    

    In contrast, here’s how we express NIST Round 3 Kyber-512 now:

    from estimator import *
    Kyber512 = LWE.Parameters(
        n=2 * 256,
        q=3329,
        Xs=ND.CenteredBinomial(3),
        Xe=ND.CenteredBinomial(3),
        m=2 * 256,
        tag="Kyber 512",
    )
    

    That is, the user should not have to pretend their input distributions are some sort of Gaussians; the estimator should be able to handle the standard distributions used in cryptography directly. Hopefully this makes using the estimator less error-prone.
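    To make the contrast concrete, here is a sketch of the conversion one previously had to do by hand to shoehorn a centered binomial distribution into the old interface: the centered binomial distribution with parameter η has variance η/2, and the old estimator wanted the noise rate α derived from that standard deviation (plugging in η = 5 reproduces the sd = 1.5811… in the Round 1 snippet above).

    from math import sqrt, pi

    eta, q = 3, 3329                # Round 3 Kyber-512: CenteredBinomial(3), modulus 3329
    sd = sqrt(eta / 2)              # standard deviation of CenteredBinomial(eta), ≈ 1.2247
    alpha = sqrt(2 * pi) * sd / q   # the noise rate alpha the old interface expected
    print(sd, alpha)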

  • It is well established by now that the Geometric Series Assumption can be somewhat off for “primal attacks” on the Learning with Errors problem. It is more precise to use a simulator to predict the basis shape after lattice reduction, but the old estimator did not support this. Now we do:

    lwe.primal_usvp(Kyber512, red_shape_model="GSA")
    
    rop: ≈2^141.2, red: ≈2^141.2, δ: 1.004111, β: 382, d: 973, tag: usvp
    

    lwe.primal_usvp(Kyber512, red_shape_model="CN11")
    
    rop: ≈2^144.0, red: ≈2^144.0, δ: 1.004038, β: 392, d: 976, tag: usvp
    

    The design is (hopefully) modular enough that you can plug in your favourite simulator.

  • The algorithms we costed were getting outdated. For example, our (really slow) estimates for the “decoding attack” were essentially equivalent to computing a BKZ-β reduced basis followed by calling an SVP oracle in some dimension η. This is now implemented as primal_bdd.

    lwe.primal_bdd(Kyber512, red_shape_model="CN11")
    
    rop: ≈2^140.5, red: ≈2^139.3, svp: ≈2^139.6, β: 375, η: 409, d: 969, tag: bdd
    

    Similarly, our estimates for dual and hybrid attacks hadn’t kept up with the state of the art. Michael and Ben (both now at Zama) contributed code to fix that and have blogged about it here.

    lwe.dual_hybrid(Kyber512)
    
    rop: ≈2^157.7, mem: ≈2^153.6, m: 512, red: ≈2^157.4, δ: 1.003726, β: 440, d: 1008, ↻: ≈2^116.5, ζ: 16, tag: dual_hybrid
    

    lwe.primal_hybrid(Kyber512)
    
    rop: ≈2^276.4, red: ≈2^276.4, svp: ≈2^155.3, β: 381, η: 2, ζ: 0, |S|: 1, d: 1007, prob: ≈2^-133.2, ↻: ≈2^135.4, tag: hybrid
    

    We’re still not complete (e.g. BKW with sieving is missing), but the more modular design (the one big Python file to rule them all is no more) should make it easier to update the code.

  • The rename is motivated by our ambition to add estimation modules for attacks on NTRU (not just viewing it as LWE) and SIS, too.

For most users, the usage should be fairly simple, e.g.

params = LWE.Parameters(n=700, q=next_prime(2^13), Xs=ND.UniformMod(3), Xe=ND.CenteredBinomial(8), m=1400, tag="KewLWE")
_ = LWE.estimate.rough(params)
usvp                 :: rop: ≈2^153.9, red: ≈2^153.9, δ: 1.003279, β: 527, d: 1295, tag: usvp
dual_hybrid          :: rop: ≈2^178.9, mem: ≈2^175.1, m: 691, red: ≈2^178.7, δ: 1.002943, β: 612, d: 1360, ↻: 1, ζ: 31, tag: dual_hybrid

_ = LWE.estimate(params)

bkw                  :: rop: ≈2^210.4, m: ≈2^198.0, mem: ≈2^199.0, b: 15, t1: 0, t2: 16, ℓ: 14, #cod: 603, #top: 0, #test: 98, tag: coded-bkw
usvp                 :: rop: ≈2^182.3, red: ≈2^182.3, δ: 1.003279, β: 527, d: 1295, tag: usvp
bdd                  :: rop: ≈2^178.7, red: ≈2^178.1, svp: ≈2^177.2, β: 512, η: 543, d: 1289, tag: bdd
dual                 :: rop: ≈2^207.8, mem: ≈2^167.1, m: 695, red: ≈2^207.6, δ: 1.002926, β: 617, d: 1394, ↻: ≈2^165.5, tag: dual
dual_hybrid          :: rop: ≈2^201.3, mem: ≈2^197.4, m: 676, red: ≈2^201.1, δ: 1.003008, β: 594, d: 1341, ↻: ≈2^141.9, ζ: 35, tag: dual_hybrid

If you are an attack algorithm designer, we would appreciate it if you would contribute estimates for your algorithm to the estimator. If we already have support for it implemented, we would appreciate it if you could compare our results against what you expect. If you are a scheme designer, we would appreciate it if you could check whether our results match what you expect. If you find suspicious behaviour or bugs, please open an issue on GitHub.

You can read the documentation here and play with the new estimator in your browser here (beware that Binder has a pretty low time-out, though).

Round-optimal Verifiable Oblivious Pseudorandom Functions from Ideal Lattices

PKC’21 is nearly upon us which – in this day and age – means a new YouTube playlist of talks. Eamonn and Fernando wrote a nice paper on the success probability of solving unique SVP via BKZ, which Fernando is describing here:

Alex is presenting our – with Amit and Nigel – work on round-optimal Verifiable Oblivious PseudoRandom Functions (VOPRF) from ideal lattices here:

Since Alex is doing an amazing job of walking you through our paper, I won’t attempt that here. Rather, let me point out a – in my book – cute trick in one of our appendices that may have applications elsewhere.

Continue reading “Round-optimal Verifiable Oblivious Pseudorandom Functions from Ideal Lattices”

Multicore BKZ in FPLLL

There have been a few works recently that give FPLLL a hard time when considering parallelism on multicore systems. That is, they compare FPLLL’s single-core implementation against their multi-core implementations, which is fair enough. However, support for parallel enumeration has existed for a while in the form of fplll-extenum. Motivated by these works we merged that library into FPLLL itself a year ago. However, we didn’t document the multicore performance that this gives us. So here we go.

I ran

for t in 1 2 4 8; do
  ./compare.py -j  $(expr 28 / $t)  -t $t -s 512 -u 80 ./fplll/strategies/default.json
done

where compare.py is from the strategizer and default.json is from a PR against FPLLL. Note that these strategies were not optimised for multicore performance. I ran this on a system with two Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz, i.e. 28 cores in total. The resulting speed-ups are shown below.

[Figure multicore-bkz-in-fplll-default.png: BKZ speed-ups for 1, 2, 4 and 8 threads.]

As you can see, the speed-up is okay for two cores but diminishes as we throw more cores at the problem. I assume this is partly due to the block sizes being relatively small (for larger block sizes G6K – which scales rather well on multiple cores – will be faster). I also suspect it is partly an artefact of not optimising the strategies for multiple cores, i.e. of not picking the trade-off between enumeration (multicore) and preprocessing (partly single-core due to LLL calls) correctly. If someone wants to step up and compute some strategies for multiple cores, that’d be neat.

Virtual FPLLL Days 6 aka Bounded Distance Development

The sixth FPLLL Days will be held on 19 and 20 November 2020. For obvious reasons they will be held online. (Who knows, we might end up liking the format enough to do more of these online in a post-COVID world, too).

As with previous incarnations of FPLLL Days, everyone who wishes to contribute to open-source lattice-reduction software is welcome to attend. In particular, you do not have to work on FPLLL or any of its sibling projects like FPyLLL or G6K. Work on whatever advances the state of the art or seems useful to you personally. That said, we do have the ambition to suggest projects from the universe of FPLLL, and I personally hope that some people will dig in with me to do the sort of plumbing that keeps projects like these running. To give you an idea of what people worked on in the past, you can find the list of projects from previous FPLLL Days on the wiki and a report on the PROMETHEUS blog.

The format is yet to be determined. We are going to coordinate using Zulip. I think it would also be useful to have a brief conference call, roll-call style, on the first day to break the ice and to make it easier for people to reach out to each other for help during the event. This is why we’re asking people to indicate their timezone on the wiki. If you have ideas for making this event a success, please let us know.

PS: For the record: Joe threatened a riot if I didn’t call it “Bounded Distance Development”, so credit goes to him for the name.

Postdoc at Royal Holloway on Lattice-based Cryptography

I’m looking for a postdoc to work with me and others – in the ISG and at Imperial College – on lattice-based cryptography and, ideally, its connections to coding theory.

The ISG is a nice place to work; it’s a friendly environment with strong research going on in several areas. We have people working across the field of information security, including several people working on cryptography. For example, Carlos Cid, Anamaria Costache, Lydia Garms, Jianwei Li, Sean Murphy, Rachel Player, Eamonn Postlethwaite, Joe Rowell, Fernando Virdia and I have all looked at or are looking at lattice-based cryptography.

A postdoc here is a 100% research position, i.e. you wouldn’t have teaching duties. That said, if you’d like to gain some teaching experience, we can arrange for that as well.

If you have e.g. a two-body problem and would like to discuss flexibility about being in the office (assuming we’ll all be back in the office at some post-covid19 point), feel free to get in touch.

Continue reading “Postdoc at Royal Holloway on Lattice-based Cryptography”

Postdoc at Royal Holloway on Lattice-based Cryptography

Update: 25/09/2020: New deadline: 30 October.

We are looking for a postdoc to join us to work on lattice-based cryptography. This postdoc is funded by the EU H2020 PROMETHEUS project for building privacy preserving systems from advanced lattice primitives. At Royal Holloway, the project is looked after by Rachel Player and me. Feel free to e-mail me with any queries you might have.

The ISG is a nice place to work; it’s a very friendly environment with strong research going on in several areas. We have people working across the field of information security, including several people working on cryptography. A postdoc here is a 100% research position, i.e. you wouldn’t have teaching duties. That said, if you’d like to gain some teaching experience, we can arrange for that as well.

Also, if you have e.g. a two-body problem and would like to discuss flexibility about being in the office (assuming we’ll all be back in the office at some post-covid19 point), feel free to get in touch.

Continue reading “Postdoc at Royal Holloway on Lattice-based Cryptography”

Faster Enumeration-based Lattice Reduction

Our paper “Faster Enumeration-based Lattice Reduction: Root Hermite Factor k^{1/(2k)} in Time k^{k/8 + o(k)}” – together with Shi Bai, Pierre-Alain Fouque, Paul Kirchner, Damien Stehlé and Weiqiang Wen – is now available on ePrint (the work has been accepted to CRYPTO 2020). Here’s the abstract:

We give a lattice reduction algorithm that achieves root Hermite factor k^{1/(2k)} in time k^{k/8 + o(k)} and polynomial memory. This improves on the previously best known enumeration-based algorithms which achieve the same quality, but in time k^{k/(2e) + o(k)}. A cost of k^{k/8 + o(k)} was previously mentioned as potentially achievable (Hanrot-Stehlé’10) or as a heuristic lower bound (Nguyen’10) for enumeration algorithms. We prove the complexity and quality of our algorithm under a heuristic assumption and provide empirical evidence from simulation and implementation experiments attesting to its performance for practical and cryptographic parameter sizes. Our work also suggests potential avenues for achieving costs below k^{k/8 + o(k)} for the same root Hermite factor, based on the geometry of SDBKZ-reduced bases.
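For a rough sense of scale, here is a back-of-the-envelope comparison (ignoring the o(k) terms, so purely indicative) of the previously best enumeration exponent k/(2e) and the exponent k/8 achieved in the paper, for an illustrative block size.

# Back-of-the-envelope comparison of the enumeration cost exponents from the
# abstract, ignoring o(k) terms; k = 500 is just an illustrative block size.
from math import e, log2

k = 500
old = k / (2 * e) * log2(k)  # log2 of k^(k/(2e)), previously best enumeration cost
new = k / 8 * log2(k)        # log2 of k^(k/8), the cost achieved in this work
print(f"k = {k}: previously ~2^{old:.0f}, now ~2^{new:.0f}")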

Continue reading “Faster Enumeration-based Lattice Reduction”