## Chen & Nguyen’s algorithm and Arora & Ge’s algorithm

In Faster Algorithms for Approximate Common Divisors: Breaking Fully-Homomorphic-Encryption Challenges over the Integers (preprint here), Yuanmi Chen and Phong Q. Nguyen propose a new algorithm for solving the approximate GCD problem. It drops the complexity from $2^{2\rho}$ to $2^{3\rho/2}$ in the general case and from $2^{\rho}$ to $2^{\rho/2}$ in the partial case (where one multiple of $p$ is given noise-free), which is a pretty big deal.

The algorithm is based on two key ideas (explained using the partial approximate GCD problem):

1. Noisy solving reduced to noise-free solving

Similar to Arora & Ge’s algorithm for solving LWE, Chen and Nguyen reduce the noisy solving problem to a noise-free solving problem. In fact, the strategy is exactly the same (cf. also this post). Given noisy ideal elements $f_i = \sum h_i g_i + r_i$, where the $g_i$ are generators of the ideal, the $h_i$ are ring elements and the $r_i$ are small noise terms, then

$F_i = f_i \cdot \prod_j (f_i + j)(f_i - j)$

will be elements of the ideal $I$ spanned by the $g_i$ if the range of $j$ is big enough (depending on the exact setup we may drop the $-j$ part). In the approximate GCD case $g_0$ is simply a small odd integer (often denoted $p$). Additionally, if we are given some sufficient “description” of some sufficiently big ideal $\langle G_1,\dots,G_s \rangle = J \supset I$ (i.e., all elements of $I$ are in $J$ but not vice versa, and $J$ is considerably bigger than $I$), then we can compute

$F_i = f_i \cdot \prod_j (f_i + j)(f_i - j) \mod J$

which keeps the size of the $F_i$ small-ish. This is the role of $x_0$, the noise-free multiple of $p$ in the partial approximate GCD problem. Now one simply solves the noise-free system $F_1,\dots,F_m$: in the PAGCD case this means computing a single GCD; in the multivariate polynomial case (including LWE) it means computing a Gröbner basis (or linearising, which is the same thing for the cases we are concerned with). Hence, so far Arora & Ge and Chen & Nguyen are really the same idea (it should be mentioned that this idea, due to Nguyen, was already mentioned in this paper) applied to different rings.
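Under my own toy parameters (not the paper’s), the reduction for the partial case can be sketched in a few lines: given the noise-free $x_0 = pq_0$ and a noisy $x_1 = pq_1 + r_1$ with $0 \le r_1 < 2^\rho$, one factor of $\prod_{i=0}^{2^\rho-1}(x_1 - i)$ is exactly divisible by $p$, and reducing mod $x_0$ keeps the numbers small:

```python
from math import gcd
from random import randrange

# Toy parameters (mine, not the paper's): p is a small odd secret,
# x0 = p*q0 is the noise-free multiple, x1 = p*q1 + r1 is noisy.
rho = 8
p = randrange(2**20, 2**21) | 1
x0 = p * randrange(2**10, 2**11)
x1 = p * randrange(2**10, 2**11) + randrange(2**rho)

# One factor x1 - i hits i = r1, so p divides the whole product;
# reducing mod x0 (itself a multiple of p) preserves divisibility by p.
acc = 1
for i in range(2**rho):
    acc = acc * (x1 - i) % x0

g = gcd(acc, x0)
assert g % p == 0   # the gcd recovers (a multiple of) p
```

Note that the gcd may be a strict multiple of $p$ if $q_0$ happens to share factors with the product; in practice one would strip those off.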

However, this is not really why the Chen & Nguyen algorithm is efficient (although this already provides a speed-up by a factor of 5).

2. Efficient multiplication

The key idea which drops the exponent from $2^{\rho}$ to $2^{\rho/2}$ is as follows. Instead of computing with integers, we compute with univariate polynomials mod $x_0$, i.e. one defines

$f_j(x) = \prod_{i=0}^{j-1} (x_1 - (x + i)) \in (\mathbb{Z}/x_0\mathbb{Z})[x]$

and notices that for $\rho' = \lfloor \rho/2 \rfloor$:

$\prod_{i=0}^{2^\rho-1} (x_1 - i) = \prod_{k=0}^{2^{\rho - \rho'} -1} f_{2^{\rho'}}(2^{\rho'}k)$

i.e., we can reduce $2^\rho - 1$ multiplications to $2^{\rho - \rho'} - 1$ multiplications and $2^{\rho - \rho'}$ polynomial evaluations. It turns out this can be done in $\tilde{\mathcal{O}}(2^{\rho'})$ operations. For the details, read the paper.
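The splitting itself is easy to check numerically. This sketch (with my own variable names) performs the evaluations naively and thus gains nothing; the speed-up in the paper comes from evaluating $f_{2^{\rho'}}$ at all points at once via fast multipoint evaluation:

```python
from random import randrange

rho, rhop = 8, 4                       # rho' = rho/2
x0 = randrange(2**30, 2**31) | 1
x1 = randrange(x0)

def f(y):
    # f(y) = prod_{i=0}^{2^rho' - 1} (x1 - (y + i)) mod x0
    acc = 1
    for i in range(2**rhop):
        acc = acc * (x1 - (y + i)) % x0
    return acc

lhs = 1
for i in range(2**rho):                # the full product: 2^rho - 1 multiplications
    lhs = lhs * (x1 - i) % x0

rhs = 1
for k in range(2**(rho - rhop)):       # 2^(rho - rho') evaluations of f
    rhs = rhs * f(2**rhop * k) % x0

assert lhs == rhs                      # both sides agree mod x0
```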

But to get back to my previous point: it turns out the Arora & Ge perspective on noisy system solving is also useful for approximate GCDs, which provides further evidence that it is useful to generalise LWE and AGCD to ideal-theoretic problems in multivariate polynomial rings.

## Slides: Introduction to Algebraic Techniques in Block Cipher Cryptanalysis

This morning I delivered my talk titled “Algebraic Techniques in Cryptanalysis (of block ciphers with a bias towards Gröbner bases)” at the ECRYPT PhD Summer School here in Albena, Bulgaria. I covered:

1. Why bother
2. Setting up equation systems
3. Solving (GBs, SAT solvers, MIP, Cube Testers)

Well, here are the slides, which perhaps spend too much time explaining F4.

PS: This is as good an opportunity as any to point to the paper “Algebraic Techniques in Differential Cryptanalysis Revisited” by Meiqin Wang, Yue Sun, Nicky Mouha and Bart Preneel, accepted at ACISP 2011. I don’t agree with every statement in the paper – which revisits techniques Carlos and I proposed in 2009 – but our FSE 2009 paper does deserve a good whipping: we were way too optimistic about our attack.

## Postdoc Position at LIP6

My team is hiring:

One-year post-doctoral position announcement

A 12-month postdoctoral position is available at the INRIA/LIP6/UPMC SALSA team on Campus Jussieu (located in the “Quartier Latin” of Paris, France). We are seeking candidates to apply for this position. The postdoc will work in the joint ANR/NSFC EXACTA project on polynomial system solving and its applications in cryptography, computational geometry, and biology.

## On the Relation Between the Mutant Strategy and the Normal Selection Strategy in Gröbner Basis Algorithms

We finally (sorry for the delay!) finished our paper on the Mutant strategy. Here’s the abstract:

The computation of Gröbner bases remains one of the most powerful methods for tackling the Polynomial System Solving (PoSSo) problem. The most efficient known algorithms reduce the Gröbner basis computation to Gaussian eliminations on several matrices. However, several degrees of freedom are available to generate these matrices. It is well known that the particular strategies used can drastically affect the efficiency of the computations.
In this work we investigate a recently-proposed strategy, the so-called Mutant strategy, on which a new family of algorithms is based (MXL, MXL2 and MXL3). By studying and describing the algorithms based on Gröbner basis concepts, we demonstrate that the Mutant strategy can be understood to be equivalent to the classical Normal Selection strategy currently used in Gröbner basis algorithms. Furthermore, we show that the partial enlargement technique can be understood as a strategy for restricting the number of S-polynomials considered in an iteration of the F4 Gröbner basis algorithm, while the new termination criterion used in MXL3 does not lead to termination at a lower degree than the classical Gebauer-Möller installation of Buchberger’s criteria.
We claim that our results map all novel concepts from the MXL family of algorithms to their well-known Gröbner basis equivalents. Using previous results that had shown the relation between the original XL algorithm and F4, we conclude that the MXL family of algorithms can be fundamentally reduced to redundant variants of F4.

## Cryptanalysis of “Fully Homomorphic Encryption over the Binary Polynomials”

Turns out, I’m not the only one who was inspired to adapt the Fully Homomorphic Encryption over the Integers scheme by van Dijk, Gentry, Halevi and Vaikuntanathan to polynomials. Gu Chunsheng posted a pre-print on the IACR eprint server this week which essentially instantiates the integer scheme over univariate polynomials over $\mathbb{F}_2$. Below is my implementation (in Sage) of his somewhat homomorphic scheme:

```python
class BinPolySHE:
    def __init__(self, n):
        tau = n  # choice here
        P = PolynomialRing(GF(2), 'x')

        x = P.gen()

        s = P.random_element(degree=2*n+1)
        while not (s.is_irreducible() and s.degree() == 2*n+1):
            s = P.random_element(degree=2*n+1)

        b = []

        a0 = P.random_element(2*n+1)
        if a0.degree() < 2*n+1:
            a0 += x**(2*n+1)
        e0 = P.random_element(degree=n-1)
        b0 = a0*s + x*e0  # deg: 4*n+2

        b.append(b0)

        for i in range(1, tau):
            ai = P.random_element(degree=n)  # choice here
            ei = P.random_element(degree=n-1)
            bi = ai*s + x*ei  # deg: 3*n+1
            bi = bi % b0
            b.append(bi)

        self.n = n
        self.pk = b
        self.sk = s
        self.P = P

    def encrypt(self, m):
        T = []

        for i in range(1, len(self.pk)):
            if random() <= 0.5:  # choice here
                T.append(i)

        c = self.P(m % 2)

        x = self.P.gen()

        for i in T:
            e = self.P.random_element(degree=self.n-1)
            c += self.pk[i] + x*e

        return c % self.pk[0]

    def decrypt(self, c):
        x = self.P.gen()
        return (c % self.sk) % x
```


Regular readers of this blog might have noticed that the scheme looks a bit like a univariate specialisation of this PollyCracker scheme. Indeed, just like that first PollyCracker scheme, Gu’s scheme is badly broken. Below is the source code of my attack:

```python
    def attack(self, c):
        A = Matrix(GF(2), len(self.pk), 4*self.n+2)

        x = self.P.gen()

        for i, b in enumerate(self.pk):
            for j in b.exponents():
                A[i, A.ncols()-j-1] = 1
        E = A.echelon_form(reduced=False)

        pk2 = []
        for i, r in enumerate(A.rows()):
            b = 0
            for j in range(A.ncols()):
                b += E[i, A.ncols()-j-1] * x**j
            pk2.append(b)

        for b in pk2:
            if c[b.degree()]:
                c -= b
        return c % x
```


The attack proceeds pretty much as discussed here: we can compute a triangular basis for the span of the public key and use that to perform all eliminations. Since the noise does not grow with each addition and does not affect the constant coefficient (which holds the message), we can essentially ignore it.
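A stripped-down toy of my own (GF(2)[x] polynomials encoded as Python integers, bit $i$ holding the coefficient of $x^i$) illustrates the invariant this relies on: as long as every polynomial we eliminate with has constant coefficient 0, the message bit is never touched.

```python
# Toy model (mine): GF(2)[x] polynomials as Python ints, bit i = coeff of x^i.
# Elimination XORs basis elements to kill leading terms; if every basis
# element has constant coefficient 0 (like the x*e noise terms), the
# constant coefficient -- the message bit -- survives all eliminations.

def triangularise(polys):
    basis = {}                          # leading degree -> polynomial
    for f in polys:
        while f:
            d = f.bit_length() - 1
            if d in basis:
                f ^= basis[d]           # cancel the leading term
            else:
                basis[d] = f
                break
    return basis

def reduce_poly(f, basis):
    while f and (f.bit_length() - 1) in basis:
        f ^= basis[f.bit_length() - 1]
    return f

span = [0b110100, 0b101100, 0b011010]   # "key" elements, constant coeff 0
B = triangularise(span)
m = 1                                   # the message bit
c = m ^ span[0] ^ span[2] ^ 0b10        # message + key elements + noise x
assert reduce_poly(c, B) & 1 == m       # elimination leaves the message intact
```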

## Cryptanalysis of my Somewhat Homomorphic PollyCracker Scheme

About 5 months ago I wrote about a homomorphic scheme based on integers and an adaptation of this scheme to use multivariate polynomials. At the very end I wrote:

“Of course, I assume that my adaptation above can still be broken somehow since that’s what tends to happen with multivariate crypto schemes. Also, I’m really really not an expert on public-key cryptography. Hey, this is a blog post, not a research paper … so break it in the comments.”

Well, with some delay, here is how to break it. Recall that the secret is some Gröbner basis $g_0,\dots,g_{n-1}$ in $\mathbb{F}[x_0,\dots,x_{n-1}]$ (assume $\mathbb{F}_2$ for simplicity) and that a ciphertext is constructed as

$c = \sum_{i=0}^{n-1} f_ig_i + m' + m$

where $m'$ has constant coefficient 0 and $f_i$ are random polynomials. Now, let $\deg(f_i) = Q, \deg(g_i) = P, \deg(m') = N$ with $N < P$ to ensure correct decryption (i.e. no interference of the noise with the normal form computation).

To break the scheme, simply request $n^{P+Q}$ encryptions of zero and compute the row echelon form of the linearisation of those polynomials of degree $P+Q$, without elimination above the pivots (i.e. don’t compute the reduced row echelon form). Throw away any element of degree $\leq N$ and call the resulting list $S$. This list $S$ allows us to decrypt any ciphertext $c$ by reducing $c$ modulo $S$.

Below, the attack in Sage:

```python
class PolynomialHomomorphic:
    def __init__(self, l):
        self.l = l
        K = GF(2)  # pick some field
        # we choose a ring with l unknowns, which
        # should make any GB computation at least
        # as hard as 2^l if we pick the ideal sufficiently
        # random.
        R = PolynomialRing(K, l, 'x', order='degrevlex')
        self.K = K
        self.R = R
        # our small ideal, defines the message space.
        self.b = Ideal([x for x in R.gens()])

        # these parameters are pretty arbitrary from a
        # security perspective!
        self.N = 1
        self.P = 2
        self.Q = 1
        self.key_gen()

    def key_gen(self):
        b, l = self.b, self.l
        K, R = self.K, self.R
        # we pick a random ideal with l elements of degree P
        p = [R.gen(i)**self.P + R.random_element(degree=self.P-1)
             for i in range(l)]
        self.p = Ideal(p)

    def encrypt(self, m):
        # we pick some m' which encodes the
        # same plaintext but is bigger
        m = self.R(m)
        mprime = self.R.random_element(degree=self.N)
        mprime -= mprime.constant_coefficient()
        mprime += m.constant_coefficient()

        # adding a random ideal element
        for f in self.p.gens():
            mprime += self.R.random_element(degree=self.Q)*f
        return mprime

    def decrypt(self, c):
        # decryption is just as in the integer case.
        return c.reduce(self.p).reduce(self.b)


def break_cpa(pc, challenge):
    ciphertexts = [pc.encrypt(0) for _ in range(2*pc.l**(pc.Q+pc.P))]
    F = Sequence(ciphertexts)
    A, v = F.coefficient_matrix(sparse=False)
    E = A.echelon_form(full=False)
    s = dict([(f.lm(), f) for f in (E*v).list() if f.degree() >= pc.P])

    while True:
        if challenge.lm() in s:
            challenge = challenge - s[challenge.lm()]
        else:
            if challenge.degree() > pc.P:
                raise ValueError("Ah, snap! It didn't work.")
            else:
                return challenge.constant_coefficient()
```


This works because the noise is constructed in such a way as to never interfere with elimination: it does not affect any leading monomial of the ideal, ever. Thus, we don’t need to consider it during the elimination and can simply ignore the lower-order terms once we are done.

Note that this attack does not work against the integer homomorphic scheme by van Dijk et al. because there additions are not free: when we perform elimination of the higher-order bits in the integer scheme, the noise accumulates as well; eventually it surpasses the size of the prime $p$, leaving us with no information. Put differently, we can attack schemes inspired by the integer-based scheme whenever additions are free. Thus, while it might seem tempting to replace the integers by, say, univariate polynomials – which are often considered essentially computationally equivalent to integers and which would provide free additions – doing so would break the security of the scheme.
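To see the contrast, here is a toy version (my own numbers, far too small for security) of noise accumulation in the integer scheme: a handful of additions keeps the noise below $p$ and decryption still works, but enough additions push it past $p$:

```python
from random import randrange

# Toy numbers (mine): p is the secret odd key, ciphertexts of 0 are p*q + 2*r.
p = randrange(2**30, 2**31) | 1

def enc_zero(rho=20):
    q = randrange(1, 2**30)
    r = randrange(2**rho)          # nonnegative noise, for simplicity
    return p*q + 2*r, 2*r          # ciphertext and its noise

few = [enc_zero() for _ in range(10)]
c = sum(ct for ct, _ in few)
noise = sum(nz for _, nz in few)
assert noise < p and (c % p) % 2 == 0   # noise still below p: decrypts to 0

many = [enc_zero() for _ in range(2**13)]
noise = sum(nz for _, nz in many)
assert noise > p   # after ~2^13 additions the noise has outgrown p;
                   # c % p no longer reveals the message
```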

## Mutants are people too

Despite being proven to be a redundant variant of the F4 algorithm, the XL algorithm still receives a lot of attention from the cryptographic community. This is partly because XL is considered to be conceptually much simpler than Gröbner basis algorithms. However, by sticking to this perspective, the wealth of theory available for understanding algorithms for polynomial system solving is largely ignored.

The most recent and perhaps most promising variant of the XL algorithm is the family of MXL algorithms, which is based around the concept of Mutants. Assume that in some iteration the XL algorithm finds elements of degree $k$ while considering degree $D > k$. In a nutshell, the idea of the MutantXL algorithm is to continue the XL algorithm at degree $k+1$ instead of $D+1$, which is what the plain XL algorithm would do. The natural question to ask is thus what Mutants are in terms of Gröbner basis theory: are they something new, or are they a concept already known in the symbolic computation world under a different name?
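A minimal GF(2) example (my own toy, with polynomials as sets of monomials) of what a mutant is: a linear combination of degree-2 inputs whose degree drops to 1.

```python
# Polynomials over GF(2), represented as sets of monomials (a monomial is a
# frozenset of variable names); addition is symmetric difference of the sets.
f1 = {frozenset({'x', 'y'}), frozenset({'x'})}   # x*y + x
f2 = {frozenset({'x', 'y'}), frozenset({'y'})}   # x*y + y

mutant = f1 ^ f2                                 # x + y: degree 1, not 2
assert max(len(mon) for mon in mutant) == 1

# MutantXL would now multiply this degree-1 element by the variables,
# continuing at degree 2 instead of enlarging the whole system to degree 3.
```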

I was in Darmstadt this week visiting the group which mainly drives the effort behind the MXL family of algorithms. As part of my visit I gave a talk about the relation of the Mutant strategy to the standard strategy used in Gröbner basis algorithms for selecting critical pairs, called … the Normal Selection Strategy. In the talk we show that the Mutant strategy is a redundant variant of the Normal Selection Strategy. Also, I talked quite a bit about S-polynomials and how they can be used to account for every single reduction that happens in XL-style algorithms. Finally, I briefly touched on the “partial enlargement strategy” which was introduced with MXL2, showing that it is equivalent to selecting a subset of S-polynomials in each iteration of F4.

Unfortunately, there’s no full paper yet, so the presentation has to suffice for now.

Update: It was pointed out to me that a better way of phrasing the relationship is to state that the Mutant selection strategy can be understood as a redundant variant of the Normal selection strategy when used in F4. This phrasing is better because our statement is strictly about an algorithmic relation and not about who did what first knowing what … which is how one could read the original phrasing.