ERC Consolidator Grant: Advanced Practical Post-Quantum Cryptography from Lattices

My ERC Consolidator Grant application titled “Advanced Practical Post-Quantum Cryptography from Lattices” has been selected(*) for funding by the European Research Council. Here’s my blurb:

Standardisation efforts for post-quantum public-key encryption and signatures are close to completion. At the same time, the most recent decade has seen the deployment, at scale, of more advanced cryptographic algorithms for which no efficient post-quantum candidates exist. These algorithms enable, for example, strong guarantees even after some parties have been compromised, privacy-preserving contact lookups, credentials and e-cash. This project will tackle the challenge of “lifting” such constructions to the post-quantum era by pursuing three guiding questions:

  • What is the cost of solving lattice problems with and without hints on a quantum computer? Answers to this question will provide confidence in the entire stack of lattice-based cryptography from “basic” to “advanced”. Studying the presence of hints tackles side-channel attacks and advanced constructions.
  • What are the lattice assumptions that establish feature- and (near) performance-parity with pre-quantum cryptography? Standard lattice assumptions do not seem to establish feature parity with pairing-based or even some Diffie-Hellman-based pre-quantum constructions, so how can we achieve efficient and secure advanced practical post-quantum solutions?
  • How efficient is a careful composition of lattice-based cryptography with other assumptions? If we want to deploy our post-quantum solutions in practice, we will need to design hybrid schemes that are secure if either their pre-quantum or their post-quantum part is secure. Moreover, to deploy many advanced lattice-based primitives in practice, we need to carefully compose them with zero-knowledge proofs to rule out certain attacks.

Lattice-based cryptography has established itself as a key technology to realise both efficient basic primitives like post-quantum encryption and advanced solutions such as computation with encrypted data and programs. It is thus well positioned to tackle the middle ground of advanced yet practical primitives for phase 2 of the post-quantum transition.

Concretely, this grant award means that I’ll be recruiting for several postdoc and PhD student positions (international fees, i.e. not restricted to applicants from the UK) in post-quantum and lattice-based cryptography. I have some flexibility in when to put these on the market, so if you think one of these positions would fit you well, feel free to get in touch with me to discuss it informally.

In somewhat related news, we’re hiring for a lecturer (≈ assistant professor) position at King’s College London. We’re also hiring for PhD or postdoc residency (≈ intern) positions at SandboxAQ.

(*) Well, there is the tiny issue of Brexit: “As described in Annex 3 of the ERC Work Programme 2022, successful applicants established in a country in the process of associating to Horizon Europe will not be treated as established in an associated country if the association agreement does not apply by the time of the signature of the grant agreement.” See also UKRI’s guidance on the UK’s guarantee scheme.

The k-R-ISIS (of Knowledge) Assumption

Our paper – together with Valerio Cini, Russell W. F. Lai, Giulio Malavolta and Sri Aravinda Krishnan Thyagarajan – titled Lattice-Based SNARKs: Publicly Verifiable, Preprocessing, and Recursively Composable will be presented at CRYPTO’22. A pre-print is available and here’s the abstract:

A succinct non-interactive argument of knowledge (SNARK) allows a prover to produce a short proof that certifies the veracity of a certain NP-statement. In the last decade, a large body of work has studied candidate constructions that are secure against quantum attackers. Unfortunately, no known candidate matches the efficiency and desirable features of (pre-quantum) constructions based on bilinear pairings.

In this work, we make progress on this question. We propose the first lattice-based SNARK that simultaneously satisfies many desirable properties: It (i) is tentatively post-quantum secure, (ii) is publicly-verifiable, (iii) has a logarithmic-time verifier and (iv) has a purely algebraic structure making it amenable to efficient recursive composition. Our construction stems from a general technical toolkit that we develop to translate pairing-based schemes to lattice-based ones. At the heart of our SNARK is a new lattice-based vector commitment (VC) scheme supporting openings to constant-degree multivariate polynomial maps, which is a candidate solution for the open problem of constructing VC schemes with openings to beyond linear functions. However, the security of our constructions is based on a new family of lattice-based computational assumptions which naturally generalises the standard Short Integer Solution (SIS) assumption.

In this post, I want to give you a sense of our new family of assumptions, the k-M-ISIS family of assumptions, and its variants. Meanwhile, Russell has written a post focusing on building the SNARK and Aravind has written about the nice things that we can do with our lattice-based SNARKs.

Continue reading “The k-R-ISIS (of Knowledge) Assumption”

The One-More-ISIS Problem

In “Practical, Round-Optimal Lattice-Based Blind Signatures” by Shweta Agrawal, Elena Kirshanova, Damien Stehle and Anshu Yadav, the authors introduce a new candidate hard lattice problem. They introduce it in order to build blind signatures, but in this blog post I’ll ignore the application and only talk about the cryptanalytic target: One-more-ISIS.

Continue reading “The One-More-ISIS Problem”

A Fun Bug? Help Wanted!

Over at Low BDD Estimate · Issue #33, Ben reports a bug that, at first sight and unfortunately, does not stand out much: for some unusual parameters the Lattice Estimator reports wrong results. Given that we rely on a variety of heuristics to keep the running time down, and given that the whole thing is mostly developed on the side, this is – again, unfortunately – not surprising.

Debugging “corner cases” can often do wonders to improve the robustness of a given piece of software. For example, back in the days when I worked a lot on M4RI, as much as I dreaded fixing bugs that only showed up on Solaris boxes, those bugs always revealed some shady assumptions in my code that were bound to produce problems elsewhere down the line.

Indeed, I think this bug puts a finger on the heuristics we rely upon and where they can go wrong. The parameter sets that Ben has in mind are unusual in that we have to pick quite a large dimension d (or, equivalently, a large number m of LWE samples) to make the target uniquely short. That is, I suspect fixing this bug will take more than increasing the precision of some numerical computation here or there, or fixing/adding an if statement to account for a corner case. This makes bugs like this one a high-ish priority.
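To give a flavour of that regime – these are made-up toy parameters, not the ones from the issue – you would poke the estimator (using the interface described in the “Lattice Estimator, Rebooted” post further down) with something like the sketch below, where the error is deliberately wide relative to the modulus, so that the attack has to consume many samples m, i.e. work in a large dimension d, before the target becomes uniquely short:

from estimator import *

# Hypothetical toy parameters, *not* the parameter set from Issue #33: the
# error is deliberately wide relative to q, so a large number of samples m
# (equivalently, a large lattice dimension d) is needed before the BDD target
# is uniquely short.
toy = LWE.Parameters(
    n=128,
    q=3329,
    Xs=ND.UniformMod(3),
    Xe=ND.UniformMod(1024),
    m=4096,
    tag="toy",
)
lwe.primal_bdd(toy)

Whether such a toy instance trips over the same heuristics as Ben’s parameters is, of course, exactly the kind of thing that needs investigating.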

On the other hand, truth be told, between this, the estimator being “mostly developed on the side” and all the other stuff I have to do, I doubt I’ll sink significant time into fixing this bug anytime soon.

But, and this is the point of this post, perhaps someone would like to take this bug as an invitation to help improve the Lattice Estimator? While the estimator is quite widely relied upon to, well, estimate the difficulty of solving LWE and related problems, its bus factor is uncomfortably low. I’d say this bug would take whoever attempts to fix it on a whirlwind tour through the code base – a good way to learn it and to improve it.

Interested? Get in touch.

Lattice Estimator, Rebooted

We have “rebooted” the LWE Estimator as the Lattice Estimator. This was born out of frustration with the limitations of the old codebase.

  • Here is how we had to express, e.g., NIST Round 1 Kyber-512 for the “Estimate all the {LWE, NTRU} schemes!” project:

    n = 512
    sd = 1.5811388300841898
    q = 7681
    alpha = sqrt(2*pi)*sd/RR(q)
    m = n
    secret_distribution = "normal"
    primal_usvp(n, alpha, q, secret_distribution=secret_distribution, m=m)
    

    In contrast, here’s how we express NIST Round 3 Kyber-512 now:

    from estimator import *
    Kyber512 = LWE.Parameters(
        n=2 * 256,
        q=3329,
        Xs=ND.CenteredBinomial(3),
        Xe=ND.CenteredBinomial(3),
        m=2 * 256,
        tag="Kyber 512",
    )
    

    That is, the user should not have to pretend their input distributions are some sort of Gaussian; the estimator should be able to handle the standard distributions used in cryptography directly. Hopefully this makes using the estimator less error-prone.

  • It is well established by now that the Geometric Series Assumption for “primal attacks” on the Learning with Errors problem can be somewhat off. It is more precise to use a simulator to predict the basis shape after lattice reduction, but the old estimator did not support this. Now we do:

    lwe.primal_usvp(Kyber512, red_shape_model="GSA")
    
    rop: ≈2^141.2, red: ≈2^141.2, δ: 1.004111, β: 382, d: 973, tag: usvp
    

    lwe.primal_usvp(Kyber512, red_shape_model="CN11")
    
    rop: ≈2^144.0, red: ≈2^144.0, δ: 1.004038, β: 392, d: 976, tag: usvp
    

    The design is (hopefully) modular enough that you can plug in your favourite simulator.

  • The algorithms we costed were getting outdated. For example, we had (really slow) estimates for the “decoding attack” that essentially amounted to computing a BKZ-β reduced basis followed by calling an SVP oracle in some dimension η. This is now implemented as primal_bdd.

    lwe.primal_bdd(Kyber512, red_shape_model="CN11")
    
    rop: ≈2^140.5, red: ≈2^139.3, svp: ≈2^139.6, β: 375, η: 409, d: 969, tag: bdd
    

    Similarly, our estimates for dual and hybrid attacks hadn’t kept up with the state of the art. Michael and Ben (both now at Zama) contributed code to fix that and have blogged about it here.

    lwe.dual_hybrid(Kyber512)
    
    rop: ≈2^157.7, mem: ≈2^153.6, m: 512, red: ≈2^157.4, δ: 1.003726, β: 440, d: 1008, ↻: ≈2^116.5, ζ: 16, tag: dual_hybrid
    

    lwe.primal_hybrid(Kyber512)
    
    rop: ≈2^276.4, red: ≈2^276.4, svp: ≈2^155.3, β: 381, η: 2, ζ: 0, |S|: 1, d: 1007, prob: ≈2^-133.2, ↻: ≈2^135.4, tag: hybrid
    

    We’re still not complete (e.g. BKW with sieving is missing), but the more modular design (the one-big-Python-file-to-rule-them-all is no more) should make it easier to update the code.

  • The rename is motivated by our ambition to add estimation modules for attacks on NTRU (not just viewing it as LWE) and SIS, too.

For most users, the usage should be fairly simple, e.g.

params = LWE.Parameters(
    n=700,
    q=next_prime(2^13),
    Xs=ND.UniformMod(3),
    Xe=ND.CenteredBinomial(8),
    m=1400,
    tag="KewLWE",
)
_ = LWE.estimate.rough(params)
usvp                 :: rop: ≈2^153.9, red: ≈2^153.9, δ: 1.003279, β: 527, d: 1295, tag: usvp
dual_hybrid          :: rop: ≈2^178.9, mem: ≈2^175.1, m: 691, red: ≈2^178.7, δ: 1.002943, β: 612, d: 1360, ↻: 1, ζ: 31, tag: dual_hybrid

_ = LWE.estimate(params)

bkw                  :: rop: ≈2^210.4, m: ≈2^198.0, mem: ≈2^199.0, b: 15, t1: 0, t2: 16, ℓ: 14, #cod: 603, #top: 0, #test: 98, tag: coded-bkw
usvp                 :: rop: ≈2^182.3, red: ≈2^182.3, δ: 1.003279, β: 527, d: 1295, tag: usvp
bdd                  :: rop: ≈2^178.7, red: ≈2^178.1, svp: ≈2^177.2, β: 512, η: 543, d: 1289, tag: bdd
dual                 :: rop: ≈2^207.8, mem: ≈2^167.1, m: 695, red: ≈2^207.6, δ: 1.002926, β: 617, d: 1394, ↻: ≈2^165.5, tag: dual
dual_hybrid          :: rop: ≈2^201.3, mem: ≈2^197.4, m: 676, red: ≈2^201.1, δ: 1.003008, β: 594, d: 1341, ↻: ≈2^141.9, ζ: 35, tag: dual_hybrid
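If you only care about a particular family of attacks, you can also call the individual estimators on your parameter set directly, using the same functions shown above. A small sketch (outputs omitted; they will of course differ from the Kyber numbers above):

# cost only the primal uSVP attack, here under the CN11 BKZ simulator
lwe.primal_usvp(params, red_shape_model="CN11")

# or only the primal BDD ("decoding") attack
lwe.primal_bdd(params)

# or only the dual hybrid attack
lwe.dual_hybrid(params)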

If you are an attack algorithm designer, we would appreciate it if you would contribute estimates for your algorithm to the estimator. If we already have support for it implemented, we would appreciate it if you could compare our results against what you expect. If you are a scheme designer, we would appreciate it if you could check whether our results match what you expect. If you find suspicious behaviour or bugs, please open an issue on GitHub.

You can read the documentation here and play with the new estimator in your browser here (beware that Binder has a pretty low time-out, though).

10 PhD Positions at Royal Holloway’s Centre for Doctoral Training in Cyber Security for the Everyday

At Royal Holloway we are again taking applications for ten fully-funded PhD positions in Information Security. See the CDT website and the ISG website for what kind of research we do. Also, check out our past and current CDT students and our research seminar schedule to get an idea of how broad and diverse the areas of information security are in which the ISG works.

More narrowly, to give you some idea of cryptographic research (and thus supervision capacity) in the Cryptography Group at Royal Holloway: currently, we are nine permanent members of staff: Simon Blackburn (Maths), Saqib A. Kakvi, Keith Martin, Sean Murphy, Siaw-Lynn Ng, Rachel Player, Liz Quaglia and me. In addition, there are three postdocs working on cryptography and roughly 14 PhD students. Focus areas of cryptographic research currently are: lattice-based cryptography and applications, post-quantum cryptography, symmetric cryptography, statistics, access control, information-theoretic security and protocols.

To give you a better sense of what is possible, here are some example projects. These are in no way prescriptive and serve to give some ideas:

  1. I am, as always, interested in exploring lattice-based and post-quantum cryptography: algorithms for solving the underlying hard problems, efficient implementations, and lifting pre-quantum constructions to the post-quantum era.
  2. Together with my colleague Rikke Jensen, I want to explore security needs and practices in large-scale protests using ethnographic methods. We’ve done an interview-based (i.e. not ethnography-based) pilot with protesters in Hong Kong and think grounding cryptographic security notions in the needs, erm, on the ground will prove rather fruitful.
  3. My colleague Rachel Player is looking at privacy-preserving outsourced computation, with a focus on (fully) homomorphic encryption.
  4. My (new) colleague Guido Schmitz uses formal methods to study cryptographic protocols.

Note that most of these positions are reserved for UK residents, which, however, does not mean UK nationality (see the CDT website for details), and that we can award three of our scholarships without any such constraint, i.e. to international applicants. The studentship covers tuition fees and maintenance (£21,285 for each academic year).

To apply, go here. Feel free to get in touch if you have questions about whether this is right for you. Official announcement follows.

Continue reading “10 PhD Positions at Royal Holloway’s Centre for Doctoral Training in Cyber Security for the Everyday”

Collective Information Security in Large-Scale Urban Protests: the Case of Hong Kong

Our work – with Jorge Blasco, Rikke Bjerg Jensen and Lenka Mareková – on the use of digital communication technologies in large-scale protests in Hong Kong was accepted at USENIX ’21. A pre-print is available on arXiv. Here’s the abstract:

The Anti-Extradition Law Amendment Bill protests in Hong Kong present a rich context for exploring information security practices among protesters due to their large-scale urban setting and highly digitalised nature. We conducted in-depth, semi-structured interviews with 11 participants of these protests. Research findings reveal how protesters favoured Telegram and relied on its security for internal communication and organisation of on-the-ground collective action; were organised in small private groups and large public groups to enable collective action; adopted tactics and technologies that enable pseudonymity; and developed a variety of strategies to detect compromises and to achieve forms of forward secrecy and post-compromise security when group members were (presumed) arrested. We further show how group administrators had assumed the roles of leaders in these ‘leaderless’ protests and were critical to collective protest efforts.

Our work can be seen in the tradition of “Can Johnny Build a Protocol? Co-ordinating developer and user intentions for privacy-enhanced secure messaging protocols”, which documented the divergence between what higher-risk users – such as those in conflict with the authorities of a nation state – need and want and what secure messaging developers design for. This divergence is noteworthy because “human-rights activists” are a common point of reference in discussions around secure messaging.

However, our focus is not activists but participants in large-scale protests, i.e. it is more closely tied to specific needs in moments of heightened conflict, confrontation and mass mobilisation. In particular, we interviewed people who were in some shape or form involved in the Anti-ELAB protests in Hong Kong in 2019/2020. Several of our participants described themselves as “frontliners”, which roughly means they were present in areas where direct confrontations with law enforcement took place.

As the title suggests, our data speaks to how security needs and practices in this population are collective in nature: how decisions about security are made, what security features are deemed important and how people learn to understand security technologies. As an example, take post-compromise security and forward secrecy:

Continue reading “Collective Information Security in Large-Scale Urban Protests: the Case of Hong Kong”

Round-optimal Verifiable Oblivious Pseudorandom Functions from Ideal Lattices

PKC’21 is nearly upon us, which – in this day and age – means a new YouTube playlist of talks. Eamonn and Fernando wrote a nice paper on the success probability of solving unique SVP via BKZ, which Fernando describes here:

Alex is presenting our work – joint with Amit and Nigel – on round-optimal Verifiable Oblivious PseudoRandom Functions (VOPRF) from ideal lattices here:

Since Alex is doing an amazing job at walking you through our paper I won’t attempt this here. Rather, let me point out a – in my book – cute trick in one of our appendices that may have applications elsewhere.

Continue reading “Round-optimal Verifiable Oblivious Pseudorandom Functions from Ideal Lattices”

On BDD with Predicate: Breaking the “Lattice Barrier” for the Hidden Number Problem

Nadia and I have put our pre-print and our source code online for solving bounded distance decoding augmented with some predicate f(·) that evaluates to true on the target and false (almost) everywhere else. Here’s the abstract:

Lattice-based algorithms in cryptanalysis often search for a target vector satisfying integer linear constraints as a shortest or closest vector in some lattice. In this work, we observe that these formulations may discard non-linear information from the underlying application that can be used to distinguish the target vector even when it is far from being uniquely close or short.

We formalize lattice problems augmented with a predicate distinguishing a target vector and give algorithms for solving instances of these problems. We apply our techniques to lattice-based approaches for solving the Hidden Number Problem, a popular technique for recovering secret DSA or ECDSA keys in side-channel attacks, and demonstrate that our algorithms succeed in recovering the signing key for instances that were previously believed to be unsolvable using lattice approaches. We carried out extensive experiments using our estimation and solving framework, which we also make available with this work.
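To make the notion of a predicate a little more concrete, here is a toy example of my own devising – it is not taken from the paper, and it swaps the elliptic-curve setting of ECDSA for plain modular exponentiation – of a check that is cheap to evaluate and rules out (almost) every wrong candidate:

# Toy illustration (not the paper's framework): a predicate that accepts a
# candidate secret x if and only if it reproduces a known public value
# h = g^x mod p. The lattice side of an attack may only narrow the secret
# down; the predicate then distinguishes the true target among candidates.
p = 2**127 - 1                  # a Mersenne prime, purely for illustration
g = 3
x_secret = 1234567890123456789  # the (unknown) target
h = pow(g, x_secret, p)         # public data

def predicate(x_candidate):
    """True on the target, False (almost) everywhere else."""
    return pow(g, x_candidate, p) == h

assert predicate(x_secret)
assert not predicate(x_secret + 1)

Roughly speaking, in the Hidden Number Problem setting of the paper the analogous check is whether a candidate signing key reproduces the known (EC)DSA public key.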

Continue reading “On BDD with Predicate: Breaking the “Lattice Barrier” for the Hidden Number Problem”

Multicore BKZ in FPLLL

There have been a few works recently that give FPLLL a hard time when considering parallelism on multicore systems. That is, they compare FPLLL’s single-core implementation against their multi-core implementations, which is fair enough. However, support for parallel enumeration has existed for a while in the form of fplll-extenum. Motivated by these works, we merged that library into FPLLL itself a year ago, but we never documented the multicore performance this gives us. So here we go.

I ran

# sweep the number of enumeration threads t; run 28/t instances in parallel
# so that roughly all 28 cores stay busy
for t in 1 2 4 8; do
  ./compare.py -j $(expr 28 / $t) -t $t -s 512 -u 80 ./fplll/strategies/default.json
done

where compare.py is from the strategizer and default.json is from a PR against FPLLL. Note that these strategies were not optimised for multicore performance. I ran this on a system with two Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz, i.e. 28 cores. The resulting speed-ups are shown below.

[Figure: BKZ speed-ups for t ∈ {1, 2, 4, 8} enumeration threads – multicore-bkz-in-fplll-default.png]

As you can see, the speed-up is okay for two cores but diminishes as we throw more cores at the problem. I assume this is partly due to the block sizes being relatively small (for larger block sizes G6K – which scales rather well on multiple cores – will be faster). I also suspect it is partly an artefact of the strategies not being optimised for multiple cores, i.e. of not picking the right trade-off between enumeration (multicore) and preprocessing (partly single-core due to LLL calls). If someone wants to step up and compute some strategies for multiple cores, that’d be neat.