Faugère-Lachartre implementation for linear algebra for Gröbner bases

Fayssal’s code, which implements the Faugère-Lachartre approach to linear algebra for Gröbner bases, is now available on GitHub. Fayssal did a Master’s project on linear algebra for Gröbner bases in the team of Jean-Charles Faugère.

 


SAT Solvers for Sage

One of the most efficient techniques for solving polynomial systems over \mathbb{F}_2 is to convert the problem to a satisfiability problem and to use a standard SAT solver. In the past, I have used CryptoMiniSat and either my own ANF to CNF converter scripts based on Gregory Bard’s ideas or PolyBoRi’s script.

However, this setup leaves much to be desired:

  1. It’s all based on string parsing, which has some overhead.
  2. Usually the instances produced using PolyBoRi’s conversion method are faster to solve. However, as the number of variables per equation increases, this method becomes essentially exponentially more expensive. Hence, a compromise between the two techniques is needed.
  3. We don’t have access to learnt clauses and conflict clauses.
  4. It all feels a bit duct-taped and fragile, partly because the code is not shipped with Sage.

At #418 I just finished a much nicer interface to various SAT solvers. Here are some features. Continue reading “SAT Solvers for Sage”
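For a flavour of what such an interface enables, here is a rough usage sketch (not taken from the post; the module path sage.sat.boolean_polynomials and the exact return format are assumptions on my part):

```python
# Hypothetical sketch: solve a small Boolean polynomial system over GF(2)
# by handing it to a SAT solver from within Sage.  Module path and return
# format are assumed, and a SAT backend (e.g. CryptoMiniSat) must be present.
from sage.all import BooleanPolynomialRing
from sage.sat.boolean_polynomials import solve as solve_sat

B = BooleanPolynomialRing(3, 'a,b,c')
a, b, c = B.gens()

# The system in algebraic normal form (ANF).
F = [a*b + c + 1, a + b + c]

# ANF -> CNF conversion plus SAT solving, returning satisfying assignments.
solutions = solve_sat(F)
print(solutions)   # e.g. [{c: 1, b: 0, a: 1}]
```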

Call for Papers: 3rd International Conference on Symbolic Computation and Cryptography

CIEM – Castro Urdiales, Spain, 11-13 July 2012, http://scc2012.unican.es/

CALL FOR PAPERS

IMPORTANT DATES

  • Deadline for submission: April 28, 2012
  • Notification of acceptance or rejection: May 18, 2012
  • Deadline for final version: May 30, 2012
  • Deadline for registration: June 12, 2012
  • Deadline for special issue JSC: September 30, 2012

SCC 2012 is the third edition of a new series of conferences where research and development in symbolic computation and cryptography may be presented and discussed. It is organized in response to the growing interest in applying and developing methods, techniques, and software tools of symbolic computation for cryptography. The use of Lattice Reduction algorithms in cryptology and the application of Groebner bases in the context of algebraic attacks are typical examples of explored applications. The SCC 2012 conference is co-located with the third Workshop on Mathematical Cryptology (WMC 2012, http://wmc2012.unican.es/), an event also organized by the research group Algorithmic Mathematics And Cryptography (AMAC), which will be held on 9-11 July 2012.

Continue reading “Call for Papers: 3rd International Conference on Symbolic Computation and Cryptography”

Ring-LWE and the GB(N) Problem

Over at the Bristol Cryptography Blog, Martijn Stam writes about our “Polly Cracker, Revisited” paper:

We did not discuss the paper in great detail, but Jake did mention one interesting avenue for continued research. Given that this new approach allows one to cast both LWE and approximate GCD in the same framework, can one also capture ring-LWE? If so, this might enable a better comparison of the various fully homomorphic encryption (FHE) schemes out there. The hope expressed by Jake was that this might allow a reduction to standard LWE (for the current batch of ring-LWE schemes), which would boost our confidence in those schemes.

This motivated me to express the Ring-LWE problem in the language of Gröbner bases; here is what I could come up with so far. Continue reading “Ring-LWE and the GB(N) Problem”

Chen & Nguyen’s algorithm and Arora & Ge’s algorithm

In Faster Algorithms for Approximate Common Divisors: Breaking Fully-Homomorphic-Encryption Challenges over the Integers (preprint here), Yuanmi Chen and Phong Q. Nguyen propose a new algorithm for solving the approximate GCD problem. It drops the complexity from 2^{2\rho} to 2^{3\rho/2} in the general case and from 2^{\rho} to 2^{\rho/2} in the partial case (where one multiple of p is given noise-free), which is a pretty big deal.

The algorithm is based on two key ideas (explained using the partial approximate GCD problem):

1. Noisy solving reduced to noise-free solving

Similar to Arora & Ge’s algorithm for solving LWE, Chen and Nguyen reduce the approximate solving problem to a noise-free solving problem. In fact, the strategy is exactly the same (cf. also this post). Given noisy ideal elements f_i = \sum h_i g_i + r_i, where the g_i are generators of the ideal, the h_i are ring elements and the r_i are small noise terms, then

F_i = f_i \cdot \prod (f_i + j)(f_i - j)

will be elements of the ideal I spanned by the g_i if j runs over a large enough range to cover all possible noise values (depending on the exact setup we may drop the -j part). In the approximate GCD case, g_0 is simply a small odd integer (often denoted p). Additionally, if we are given some sufficient “description” of a sufficiently big ideal \langle G_1,\dots,G_s \rangle = J \supset I (i.e., all elements of I are in J but not vice versa, and J is considerably bigger than I), then we can compute

F_i = f_i \cdot \prod (f_i + j)(f_i - j) \mod J

which keeps the size of the F_i small-ish. This is the role of x_0, the noise-free multiple of p in the partial approximate GCD problem. Now one simply solves the noise-free system F_1,\dots,F_m. In the PAGCD case this means computing a single GCD; in the multivariate polynomial case (including LWE) it means computing a Gröbner basis (or linearising, which is the same thing for the cases we are concerned with). Hence, so far Arora & Ge and Chen & Nguyen are really the same thing (it should be mentioned that this idea, due to Nguyen, was already mentioned in this paper) applied to different rings.
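To make the reduction concrete, here is a toy sketch for the partial approximate GCD case (all numbers and names below are my own illustrative choices, not taken from the paper; the noise is taken non-negative, so only the f_i - j factors are used):

```python
# Toy illustration of the noisy -> noise-free reduction for partial
# approximate GCD.  All parameters are tiny, made-up values.
from math import gcd

p = 10007                      # the secret small odd integer
x0 = p * 4999                  # noise-free multiple of p (plays the role of J)
rho = 4                        # noise bound: 0 <= r_i < 2^rho

# Noisy samples x_i = q_i * p + r_i with small r_i.
samples = [p * 773 + 5, p * 1291 + 11, p * 2027 + 3]

def noise_free(x):
    # F = prod_{j=0}^{2^rho - 1} (x - j) mod x0: for j = r_i the factor
    # equals q_i * p, so F is a multiple of p, and reducing mod x0
    # (itself a multiple of p) preserves that while keeping F small-ish.
    F = 1
    for j in range(2 ** rho):
        F = F * (x - j) % x0
    return F

g = x0
for x in samples:
    g = gcd(g, noise_free(x))
print(g)   # p with high probability (a small cofactor may survive on unlucky toys)
```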

However, this is not really why the Chen & Nguyen algorithm is efficient (although this already provides a speed-up by a factor of 5).

2. Efficient multiplication

The key idea for dropping the complexity from 2^{\rho} to 2^{\rho/2} is as follows. Instead of computing with integers, we compute with univariate polynomials mod x_0, i.e., one defines

f_j(x) = \prod_{i=0}^{j-1} (x_1 - (x + i)) \in (\mathbb{Z}/x_0\mathbb{Z})[x]

and notices that for \rho' = \lfloor \rho/2 \rfloor:

\prod_{i=0}^{2^\rho-1} (x_1 - i) = \prod_{k=0}^{2^{\rho - \rho'} -1} f_{2^{\rho'}}(2^{\rho'}k)

i.e., we can reduce 2^\rho - 1 multiplications to 2^{\rho - \rho'} - 1 multiplications and 2^{\rho - \rho'} polynomial evaluations. It turns out this can be done in \mathcal{O}(2^{\rho'}) operations. For the details, read the paper.
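As a quick sanity check of this regrouping, here is a toy numeric sketch (my own made-up parameters; it uses naive quadratic polynomial arithmetic, whereas the paper relies on fast multiplication and multi-point evaluation to reach the claimed complexity):

```python
# Toy check of the regrouping behind Chen & Nguyen's speed-up: build
# f(x) = prod_{i=0}^{2^{rho'}-1} (x_1 - (x + i)) over Z/x0 once, then
# evaluate it at the points 2^{rho'} * k instead of multiplying all
# 2^rho linear factors one by one.  All parameters are illustrative.

rho = 6                    # noise size (toy value)
rho_p = rho // 2           # rho' = floor(rho / 2)
p = 977
x0 = p * 100003            # stand-in for the noise-free multiple q_0 * p
x1 = p * 12345 + 17        # stand-in for a noisy sample q_1 * p + r_1

# Polynomials over Z/x0 as coefficient lists, lowest degree first.
def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % x0
    return out

def poly_eval(a, x):
    return sum(c * pow(x, i, x0) for i, c in enumerate(a)) % x0

# f_{2^{rho'}}(x) = prod_{i=0}^{2^{rho'}-1} ((x1 - i) - x) in (Z/x0)[x]
f = [1]
for i in range(2 ** rho_p):
    f = poly_mul(f, [(x1 - i) % x0, x0 - 1])   # linear factor (x1 - i) - x

batched = 1
for k in range(2 ** (rho - rho_p)):
    batched = batched * poly_eval(f, (2 ** rho_p) * k) % x0

naive = 1
for i in range(2 ** rho):
    naive = naive * (x1 - i) % x0

assert batched == naive   # same product, just regrouped
```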

But to get back to my previous point: it turns out the Arora & Ge perspective on noisy system solving is also useful for approximate GCDs, which provides further evidence that it is useful to generalise LWE and AGCD to ideal-theoretic problems in multivariate polynomial rings.

Slides: Introduction to Algebraic Techniques in Block Cipher Cryptanalysis

This morning I delivered my talk titled “Algebraic Techniques in Cryptanalysis (of block ciphers with a bias towards Gröbner bases)” at the ECRYPT PhD Summer School here in Albena, Bulgaria. I covered:

  1. Why bother
  2. Setting up equation systems
  3. Solving (GBs, SAT solvers, MIP, Cube Testers)
  4. “Advanced” Techniques

Well, here are the slides, which perhaps spend too much time explaining F4.

PS: This is as good an opportunity as any to point to the paper “Algebraic Techniques in Differential Cryptanalysis Revisited” by Meiqin Wang, Yue Sun, Nicky Mouha and Bart Preneel, accepted at ACISP 2011. I don’t agree with every statement in the paper (which revisits techniques Carlos and I proposed in 2009), but our FSE 2009 paper does deserve a good whipping, i.e., we were way too optimistic about our attack.

On the Relation Between the Mutant Strategy and the Normal Selection Strategy in Gröbner Basis Algorithms

We finally (sorry for the delay!) finished our paper on the Mutant strategy. Here’s the abstract:

The computation of Gröbner bases remains one of the most powerful methods for tackling the Polynomial System Solving (PoSSo) problem. The most efficient known algorithms reduce the Gröbner basis computation to Gaussian eliminations on several matrices. However, several degrees of freedom are available to generate these matrices. It is well known that the particular strategies used can drastically affect the efficiency of the computations.
In this work we investigate a recently-proposed strategy, the so-called Mutant strategy, on which a new family of algorithms is based (MXL, MXL2 and MXL3). By studying and describing the algorithms based on Gröbner basis concepts, we demonstrate that the Mutant strategy can be understood to be equivalent to the classical Normal Selection strategy currently used in Gröbner basis algorithms. Furthermore, we show that the partial enlargement technique can be understood as a strategy for restricting the number of S-polynomials considered in an iteration of the F4 Gröbner basis algorithm, while the new termination criterion used in MXL3 does not lead to termination at a lower degree than the classical Gebauer-Möller installation of Buchberger’s criteria.
We claim that our results map all novel concepts from the MXL family of algorithms to their well-known Gröbner basis equivalents. Using previous results that had shown the relation between the original XL algorithm and F4, we conclude that the MXL family of algorithms can be fundamentally reduced to redundant variants of F4.