# NTT Considered Harmful?

In a typical Ring-LWE-based public-key encryption scheme, Alice publishes

$(a, b) = (a, a \cdot s + e) \in \mathbb{Z}_q[x]/(x^n+1)$

(with $n$ a power of two) as the public key, where $s, e$ are both “small” and secret. To encrypt, Bob computes

$(c_{0}, c_{1}) = (v \cdot a + e', v \cdot b + e'' + \textnormal{Encode}(m))$

where $v, e', e''$ are small, $m$ is the message $\in \{0,1\}^n$ and $\textnormal{Encode}(\cdot)$ some encoding function, e.g. $\sum_{i=0}^{n-1} \lfloor \frac{q}{2} \rfloor m_i x^i$ . To decrypt, Alice computes

$c_{1} - c_{0} \cdot s = v \cdot (a\cdot s + e) + e'' + \textnormal{Encode}(m) - (v \cdot a + e')\cdot s,$

which is equal to $v \cdot e - e' \cdot s + e'' + \textnormal{Encode}(m)$. Finally, Alice recovers $m$ from this noisy encoding of $m$, where $v \cdot e - e' \cdot s + e''$ is the noise. In the Module-LWE variant the elements essentially live in $\left(\mathbb{Z}_q[x]/(x^n+1)\right)^k$, i.e. $a$ is not a single polynomial but a vector of polynomials.
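The scheme above can be sketched in a few lines of Python. This is a toy illustration with made-up, insecure parameters ($n=8$, $q=97$, ternary small elements), not any submitted scheme:

```python
import random

n, q = 8, 97  # toy parameters, far too small to be secure; q ≡ 1 (mod 2n)

def small():
    """Sample a 'small' polynomial with ternary coefficients."""
    return [random.choice([-1, 0, 1]) for _ in range(n)]

def mul(f, g):
    """Schoolbook multiplication in Z_q[x]/(x^n + 1), using x^n ≡ -1."""
    h = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = i + j
            if k < n:
                h[k] = (h[k] + fi * gj) % q
            else:
                h[k - n] = (h[k - n] - fi * gj) % q  # wrap around with sign flip
    return h

def add(f, g):
    return [(fi + gi) % q for fi, gi in zip(f, g)]

def sub(f, g):
    return [(fi - gi) % q for fi, gi in zip(f, g)]

# Key generation: Alice publishes (a, b = a*s + e).
a = [random.randrange(q) for _ in range(n)]
s, e = small(), small()
b = add(mul(a, s), e)

# Encryption of m in {0,1}^n with Encode(m) = sum floor(q/2) * m_i * x^i.
m = [random.randint(0, 1) for _ in range(n)]
v, e1, e2 = small(), small(), small()
c0 = add(mul(v, a), e1)
c1 = add(add(mul(v, b), e2), [(q // 2) * mi % q for mi in m])

# Decryption: c1 - c0*s is a noisy encoding of m.
noisy = sub(c1, mul(c0, s))
decoded = [1 if q // 4 <= x <= 3 * q // 4 else 0 for x in noisy]
assert decoded == m
```

With these parameters each noise coefficient has magnitude at most $17 < q/4$, so rounding each coefficient to the nearest multiple of $\lfloor q/2 \rfloor$ always recovers $m$.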

Thus, both encryption and decryption involve polynomial multiplication modulo $x^n+1$. Using schoolbook multiplication this costs $\mathcal{O}(n^2)$ operations. However, when selecting parameters for Ring-LWE, we can choose $q \equiv 1 \bmod 2n$, which permits using an NTT to realise this multiplication (we require $q \equiv 1 \bmod 2n$, rather than just $\bmod\ n$, to use the negacyclic NTT, which has the modular reduction modulo $x^n+1$ baked in). Using the NTT, we can then implement multiplication by

1. evaluation (perform NTT),
2. pointwise multiplication,
3. interpolation (perform inverse NTT).

Steps (1) and (3) take $\mathcal{O}(n \log n)$ operations by using specially chosen evaluation points (roots of unity). Step (2) costs $\mathcal{O}(n)$ operations.
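Steps (1)–(3) can be sketched as follows. For clarity this toy code evaluates naively at the $2n$-th roots of unity, so each transform still costs $\mathcal{O}(n^2)$; a fast implementation organises the same evaluations into $\mathcal{O}(n \log n)$ butterflies:

```python
n, q = 8, 97  # toy parameters; q ≡ 1 (mod 2n), so Z_q has a primitive 2n-th root of unity

# Find psi with psi^n = -1; since n is a power of two, psi has order exactly 2n
# and its odd powers psi, psi^3, ..., psi^(2n-1) are precisely the roots of x^n + 1.
psi = next(g for g in range(2, q) if pow(g, n, q) == q - 1)

def ntt(f):
    """Step (1): evaluate f at psi^(2i+1) for i = 0..n-1."""
    return [sum(fj * pow(psi, (2 * i + 1) * j, q) for j, fj in enumerate(f)) % q
            for i in range(n)]

def intt(fhat):
    """Step (3): interpolate, using psi^(-1) and 1/n mod q."""
    n_inv = pow(n, -1, q)
    psi_inv = pow(psi, -1, q)
    return [n_inv * sum(fi * pow(psi_inv, (2 * i + 1) * j, q)
                        for i, fi in enumerate(fhat)) % q
            for j in range(n)]

def negacyclic_mul(f, g):
    """Multiply in Z_q[x]/(x^n + 1): evaluate, multiply pointwise, interpolate."""
    return intt([fi * gi % q for fi, gi in zip(ntt(f), ntt(g))])

# Sanity check: x * x^(n-1) = x^n ≡ -1 (mod x^n + 1).
x = [0, 1] + [0] * (n - 2)
x_pow = [0] * (n - 1) + [1]
assert negacyclic_mul(x, x_pow) == [q - 1] + [0] * (n - 1)
```

Because the evaluation points are exactly the roots of $x^n+1$, the pointwise product interpolates directly to the product modulo $x^n+1$, with no separate reduction step.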

This trick is very popular. For example, many (but not all!) Ring-LWE based schemes submitted to the NIST PQC competition process use it, namely NewHope, LIMA (go LIMA!), LAC, KCL, HILA5, R.EMBLEM, Ding Key-Exchange, CRYSTALS-KYBER and CRYSTALS-DILITHIUM (sorry if I forgot one). Note that since steps (1) and (3) are the expensive steps, it makes sense to remain in the NTT domain (i.e. after applying the NTT) and only to convert back at the very end. For example, it is faster for Alice to store $s, e$ in the NTT domain and, since the NTT maps the uniform distribution to the uniform distribution, to sample $a$ in the NTT domain directly, i.e. to just assume that a random vector $a$ is already the output of an NTT on some other random vector.

This post is about two recent results I was involved in suggesting that this is not necessarily always the best choice (depending on your priorities).

Warning: This is going to be one of those clickbait-y pieces where the article doesn’t live up to the promise in the headline. The NTT is fine. Some of my best friends use the NTT. In fact I’ve implemented and used the NTT myself.

# London-ish Lattice Coding & Crypto Meeting: 10 May 2017

Lattice-based approaches are emerging as a common theme in modern cryptography and coding theory. In communications, they are indispensable mathematical tools to construct powerful error-correction codes achieving the capacity of wireless channels. In cryptography, they are used to build lattice-based schemes with provable security, better asymptotic efficiency, resilience against quantum attacks and new functionalities such as fully homomorphic encryption.

This meeting — on 10 May 2017 — is aimed at connecting the two communities in the UK with a common interest in lattices, with a long-term goal of building a synergy of the two fields. It will consist of several talks on related topics, with a format that will hopefully encourage interaction.

# On dual lattice attacks against small-secret LWE and parameter choices in HElib and SEAL

My paper on solving small, sparse secret instances is now on ePrint. Here’s the abstract:

We present novel variants of the dual-lattice attack against LWE in the presence of an unusually short secret. These variants are informed by recent progress in BKW-style algorithms for solving LWE. Applying them to parameter sets suggested by the homomorphic encryption libraries HElib and SEAL yields revised security estimates. Our techniques scale the exponent of the dual-lattice attack by a factor of $(2\,L)/(2\,L+1)$ when $\log q = \Theta(L \log n)$, when the secret has constant Hamming weight $h$ and where $L$ is the maximum depth of supported circuits. They also allow us to halve the dimension of the lattice under consideration at a multiplicative cost of $2^{h}$ operations. Moreover, our techniques yield revised concrete security estimates. For example, both libraries promise 80 bits of security for LWE instances with $n=1024$ and $\log_2 q \approx 47$, while the techniques described in this work lead to estimated costs of 68 bits (SEAL) and 62 bits (HElib).

If you want to see what its effect would be on your favourite small, sparse secret instance of LWE, the code for estimating the running time is included in our LWE estimator. The integration into the main function estimate_lwe is imperfect, though. To get you started, here’s the code used to produce the estimates for the running example in the paper.

• Our instance’s secret is ternary with Hamming weight $h=64$. We always use sieving as the SVP oracle in BKZ:

sage: n, alpha, q = fhe_params(n=2048, L=2)
sage: kwds = {"optimisation_target": "sieve", "h":64, "secret_bounds":(-1,1)}

• We establish a base line:

sage: print cost_str(sis(n, alpha, q, optimisation_target="sieve"))

• We run the scaled normal form approach from Section 4 and enable amortising costs from Section 3 by setting use_lll=True:

sage: print cost_str(sis_small_secret_mod_switch(n, alpha, q, use_lll=True, **kwds))

• We run the approach from Section 5 for sparse secrets. Setting postprocess=True enables the search for solutions $\mathbf{s}_1$ with very low Hamming weight (page 17):

sage: print cost_str(drop_and_solve(sis, n, alpha, q, postprocess=True, **kwds))

• We combine everything:

sage: f = sis_small_secret_mod_switch
sage: print cost_str(drop_and_solve(f, n, alpha, q, postprocess=True, **kwds))


# London-ish Lattice Coding & Crypto Meeting: 21 September 2016

The next London-ish Lattice Coding & Crypto Meeting is coming up on September 21.

## Programme

• 11:00–12:30 | Jean-Claude Belfiore: Ideal Lattices: Connections between number fields and coding constructions
• 13:30–15:00 | Dan Shepherd: Rings and Modules for Identity-Based Post-Quantum Public-Key Cryptography
• 15:30–16:30 | Antonio Campello: Sampling Algorithms for Lattice Gaussian Codes
• 16:30–17:00 | Cong Ling: Lattice Gaussian Sampling with Markov Chain Monte Carlo (MCMC)
• 17:00–18:30 | Daniel Dadush: Solving SVP and CVP in 2^n Time via Discrete Gaussian Sampling

## Venue

Arts Building Ground Floor Room 24
Royal Holloway, University of London
Egham Hill
Egham
Surrey TW20 0EX

See meeting website for details.

# fplll 5.0

Fplll 4.0.4 was released in 2013. Fplll 5.0.0, whose development started in autumn 2014, came out today. About 600 commits by 13 contributors went into this release. Overall, fplll 5.0 is quite a significant improvement over the 4.x series.

# fpylll

fpylll is a Python library for performing lattice reduction on lattices over the integers. It is based on fplll, a C++ library which describes itself as follows:

fplll contains several algorithms on lattices that rely on floating-point computations. This includes implementations of the floating-point LLL reduction algorithm, offering different speed/guarantees ratios. It contains a ‘wrapper’ choosing the estimated best sequence of variants in order to provide a guaranteed output as fast as possible. In the case of the wrapper, the succession of variants is oblivious to the user. It also includes a rigorous floating-point implementation of the Kannan-Fincke-Pohst algorithm that finds a shortest non-zero lattice vector, and the BKZ reduction algorithm.

In short, fplll is your best bet for a publicly available fast lattice-reduction library, and fpylll provides a convenient interface to it — for experimentation, development and extension — from Python.

For the rest of this post, I’ll give you a tour of the features currently implemented in fpylll and point out some areas where we could do with some help.

# London-ish Lattice Coding & Crypto Meetings

Lattice-based approaches are emerging as a common theme in modern cryptography and coding theory. In communications, they are an indispensable mathematical tool to construct powerful error-correction codes achieving the capacity of wireless channels. In cryptography, they are used to build lattice-based schemes with provable security, better asymptotic efficiency, resilience against quantum attacks and new functionalities such as fully homomorphic encryption.

We are setting up meetings on lattices in cryptography and coding in the London area.[1] These meetings are inspired by similar meetings held in Lyon and are aimed at connecting the two communities in the UK with a common interest in lattices, with a long-term goal of building a synergy of the two fields.

The meetings will consist of several talks on related topics, with a format that will hopefully encourage interaction (e.g. longer than usual time slots).

## Tentative program

For details (as they become available) see website.

• 11:00–12:30 | Cong Ling: Achieving Channel Capacity with Lattice Codes
• 13:30–15:00 | Nigel Smart: Post-Quantum Cryptography
• 15:00–16:30 | Alister Burr: Lattice Coding with Applications to Compute-and-Forward
• 16:30–18:00 | Martin Albrecht: A Subfield Lattice Attack on Overstretched NTRU Assumptions

## Venue

Room 611
(Dennis Gabor Seminar Room)
Department of Electrical and Electronic Engineering
Imperial College London
South Kensington London
SW7 2AZ

## Registration

Everyone is welcome. Two caveats:

1. Speakers are told the audience is somewhat familiar with lattices.
2. Please send us an email at c.ling@imperial.ac.uk, so that the size of the room fits with the number of participants.

## Footnotes

[1] Our definition of London includes Egham, where Royal Holloway’s main campus is located.

# GSW13: 3rd Generation Homomorphic Encryption from Learning with Errors

This week our reading group studied Homomorphic Encryption from Learning with Errors: Conceptually-Simpler, Asymptotically-Faster, Attribute-Based by Craig Gentry, Amit Sahai and Brent Waters: a 3rd generation fully homomorphic encryption scheme.

The paper is partly motivated by the fact that multiplication in previous schemes was complicated or at least not natural. Let’s take the BGV scheme, where ciphertexts are simply LWE samples $a_i, b_i = -\langle a_i, s\rangle + \mu_i \cdot \lceil q/2\rfloor + e_i$ for $a_i \in \mathbb{Z}_q^n$ and $b_i \in \mathbb{Z}_q$, with $\mu_i \in \{0,1\}$ being the message bit and $e_i$ some “small” error. Let’s write this as $c_i = (a_i, b_i) \in \mathbb{Z}_q^{n+1}$ because it simplifies some notation down the line. In this notation, multiplication can be accomplished by $c_1 \otimes c_2$ because $\langle c_1 \otimes c_2, s \otimes s\rangle \approx \mu_1 \cdot \mu_2$. However, we then need to map $s \otimes s$ back to $s$ using “relinearisation”; this is the “unnatural” step.
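The identity behind this can be checked numerically: $\langle c_1 \otimes c_2, s \otimes s\rangle = \langle c_1, s\rangle \cdot \langle c_2, s\rangle$ holds exactly modulo $q$; the “$\approx \mu_1 \cdot \mu_2$” then comes from rescaling the product of the two noisy encodings. A toy Python check with made-up parameters, padding the secret with a $1$ so that $\langle (a_i, b_i), (s, 1)\rangle = \mu_i \cdot \lceil q/2 \rfloor + e_i$:

```python
import random

n, q = 4, 2**13  # made-up toy parameters, for illustration only

def tensor(u, v):
    """Tensor product of two vectors over Z_q, flattened."""
    return [ui * vj % q for ui in u for vj in v]

def inner(u, v):
    """Inner product over Z_q."""
    return sum(ui * vi for ui, vi in zip(u, v)) % q

s = [random.randrange(q) for _ in range(n)]
s1 = s + [1]  # pad with 1 so that <(a, b), s1> = <a, s> + b

def encrypt(mu):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-2, 2)
    b = (-inner(a, s) + mu * (q // 2) + e) % q
    return a + [b]  # c = (a, b) in Z_q^{n+1}

c1, c2 = encrypt(1), encrypt(0)

# <c1 (x) c2, s1 (x) s1> = <c1, s1> * <c2, s1>  (mod q)
lhs = inner(tensor(c1, c2), tensor(s1, s1))
rhs = inner(c1, s1) * inner(c2, s1) % q
assert lhs == rhs
```

Note that the tensored ciphertext decrypts, under $s_1 \otimes s_1$, to roughly $\mu_1 \mu_2 \cdot \lceil q/2\rfloor^2$, so besides relinearisation one also has to rescale.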

However, this is only unnatural in this particular representation. To see this, let’s rewrite $a_i, b_i$ as a linear multivariate polynomial $f_i = b_i - \sum_{j=1}^n a_{ij} \cdot x_j \in \mathbb{Z}_q[x_1,\dots,x_n]$. This polynomial evaluates to $\approx \mu$ on the secret $s = (s_1,\dots,s_n)$. Note that evaluating a polynomial on $s$ is the same as reducing it modulo the set of polynomials $G = (x_1 - s_1,\dots, x_n - s_n)$.
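A tiny worked check of this equivalence, with made-up numbers: for a linear $f$, reduction modulo $G$ never produces new monomials, so each division step simply substitutes $x_j \mapsto s_j$.

```python
# Evaluating the linear polynomial f = b - sum_j a_j x_j at s is the same as
# computing its normal form modulo G = (x_1 - s_1, ..., x_n - s_n): dividing
# the term -a_j x_j by x_j - s_j leaves the remainder -a_j s_j.
q = 97
s = [3, 5, 7]          # the evaluation point (the "secret")
a, b = [4, 6, 9], 11   # f = 11 - (4 x_1 + 6 x_2 + 9 x_3)

remainder = b
for a_j, s_j in zip(a, s):  # reduce modulo x_j - s_j, one generator at a time
    remainder = (remainder - a_j * s_j) % q

evaluation = (b - sum(a_j * s_j for a_j, s_j in zip(a, s))) % q
assert remainder == evaluation  # the normal form mod G equals f(s)
```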

# ENS Lyon Monthly Lattice and Crypto Meetings

Fabien Laguillaumie, Benoît Libert and Damien Stehlé are organising “soft-monthly” (that’s a word!) meetings on lattice cryptography which look very nice. Below is the announcement from the c2 list, slightly edited for style.

Dear all,

We are setting up a “soft-monthly” meeting on lattices and cryptography, at ENS Lyon. The meetings will consist of several talks on related topics, with a format that will hopefully encourage interactions (blackboard, long time slots).

The first meeting, on September 30th and October 1st, will be on Lattice-based and code-based group signatures.

Tentative program

We 30/09, 2:30pm – 3:30pm: Benoît Libert

Tutorial on group signatures

We 30/09, 4:00pm – 5:30pm: Khoa Nguyen

Group signatures from lattices: simpler, tighter, shorter, ring-based

Th 01/10, 10:15am – 11:15am: Khoa Nguyen

A provably secure group signature scheme from code-based assumptions

Th 01/10, 1:30pm – 3:00pm: Philippe Gaborit

Dynamic traceable signature on lattices with non-frameability property

Th 01/10, 3:30pm – 5:00pm: Fabrice Mouhartem

Lattice-based group signatures for dynamic groups

Location

ENS de Lyon, Monod campus, main building, level 1, room 116.

Registration

Everyone is welcome. Two caveats:

1. speakers are told the audience is somewhat familiar with lattices and crypto
2. please send me (damien.stehle_AT_ens-lyon.fr) an email, so that the size of the room fits with the number of participants.

Housing


Best regards,
Damien

# Sage Code for GGH Cryptanalysis by Hu and Jia

Recently, Yupu Hu and Huiwen Jia put a paper on the Cryptology ePrint Archive which describes a successful attack of the GGH (and GGHLite) candidate multilinear map. The attack does not try to recover the secret $g$ or any other secret parameter of the map. Instead, it solves the Extraction $\kappa$-graded CDH (Ext-GCDH) problem directly.