# Tag: posso

## Faugère-Lachartre implementation for linear algebra for Gröbner bases


Fayssal’s code which implements the Faugère-Lachartre approach to linear algebra for Gröbner bases is available on Github now. Fayssal did a Master’s project on linear algebra for Gröbner bases in the team of Jean-Charles Faugère.

## SAT Solvers for Sage

One of the most efficient techniques for solving polynomial systems over \(\mathbb{F}_2\) is to convert the problem to a satisfiability problem and to use a standard SAT solver. In the past, I have used CryptoMiniSat together with either my own ANF-to-CNF converter scripts based on Gregory Bard’s ideas or PolyBoRi’s script.

However, this setup leaves much to be desired:

- It’s all based on string parsing which has some overhead.
- Usually the instances produced using PolyBoRi’s conversion method are faster to solve. However, as the number of variables per equation increases, this method becomes essentially exponentially more expensive. Hence, a compromise between the two techniques is needed.
- We don’t have access to learnt clauses and conflict clauses.
- It all feels a bit duct taped and fragile, partly because the code is not shipped with Sage.
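To make the conversion concrete, here is a minimal ANF-to-CNF sketch along the lines of Bard’s method (the variable numbering and function names are my own illustration, not the actual Sage or PolyBoRi code): each nonlinear monomial gets a fresh variable tied to its factors by AND clauses, and each equation becomes an XOR constraint, which expands into \(2^{k-1}\) clauses for \(k\) literals — the blow-up responsible for the exponential cost noted above.

```python
from itertools import product

def and_clauses(t, factors):
    # Tseitin clauses for t <-> AND(factors); literals are DIMACS-style signed ints
    return [[-t, x] for x in factors] + [[t] + [-x for x in factors]]

def xor_clauses(lits, rhs):
    # CNF for XOR(lits) == rhs; note the 2^(k-1) clause blow-up in len(lits)
    keep = 0 if rhs else 1
    return [[s * l for s, l in zip(signs, lits)]
            for signs in product((1, -1), repeat=len(lits))
            if signs.count(-1) % 2 == keep]

# Encode x*y + z = 0 over GF(2): x=1, y=2, z=3, fresh variable t=4 for x*y
cnf = and_clauses(4, [1, 2]) + xor_clauses([4, 3], 0)
```

A satisfying assignment of `cnf` restricted to x, y, z is exactly a solution of the original equation.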

At #418 I just finished a much nicer interface to various SAT solvers. Here are some features. Continue reading “SAT Solvers for Sage”

## Call for Papers: 3rd International Conference on Symbolic Computation and Cryptography

CIEM – Castro Urdiales, Spain, 11-13 July 2012, http://scc2012.unican.es/

### IMPORTANT DATES

- Deadline for submission: April 28, 2012
- Notification of acceptance or rejection: May 18, 2012
- Deadline for final version: May 30, 2012
- Deadline for registration: June 12, 2012
- Deadline for special issue JSC: September 30, 2012

SCC 2012 is the third edition of a new series of conferences where research and development in symbolic computation and cryptography may be presented and discussed. It is organized in response to the growing interest in applying and developing methods, techniques, and software tools of symbolic computation for cryptography. The use of lattice reduction algorithms in cryptology and the application of Gröbner bases in the context of algebraic attacks are typical examples of explored applications. The SCC 2012 conference is co-located with the third Workshop on Mathematical Cryptology (WMC 2012, http://wmc2012.unican.es/), an event also organized by the research group Algorithmic Mathematics And Cryptography (AMAC), which will be held on 9-11 July 2012.

## Ring-LWE and the GB(N) Problem

Over at the Bristol Cryptography Blog, Martijn Stam writes about our “Polly Cracker, Revisited” paper:

> We did not discuss the paper in great detail, but Jake did mention one interesting avenue for continued research. Given that this new approach allows one to cast both LWE and approximate GCD in the same framework, can one also capture ring-LWE? If so, this might enable a better comparison of the various fully homomorphic encryption (FHE) schemes out there. The hope expressed by Jake was that this might allow a reduction to standard LWE (for the current batch of ring-LWE schemes), which would boost our confidence in those schemes.

This motivated me to express the Ring-LWE problem in a language of Gröbner bases, here’s what I could come up with so far. Continue reading “Ring-LWE and the GB(N) Problem”

## Chen & Nguyen’s algorithm and Arora & Ge’s algorithm

In *Faster Algorithms for Approximate Common Divisors: Breaking Fully-Homomorphic-Encryption Challenges over the Integers* Yuanmi Chen and Phong Q. Nguyen (preprint here) propose a new algorithm for solving the approximate GCD problem. It drops the complexity from \(2^{2\rho}\) to \(2^{3\rho/2}\) in the general case and from \(2^{\rho}\) to \(2^{\rho/2}\) in the partial case (one multiple of \(p\) is given noise-free), which is a pretty big deal.

The algorithm is based on two key ideas (explained using the partial approximate GCD problem):

**1. Noisy solving reduced to noise-free solving**

Similar to Arora & Ge’s algorithm for solving LWE, Chen and Nguyen reduce the approximate solving problem to a noise-free solving problem. In fact, the strategy is exactly the same (cf. also this post). Given noisy ideal elements \(f_i = \sum_j h_{ij}\,g_j + r_i\), where the \(g_j\) are generators of the ideal, the \(h_{ij}\) are ring elements and the \(r_i\) are small noise terms bounded by \(N\), then

\[F_i = \prod_{j=-N}^{N} (f_i - j)\]

will be elements of the ideal spanned by the \(g_j\) if \(N\) is big enough (depending on the exact setup we may drop the negative half of the product). In the approximate GCD case the ideal is generated by a single small odd integer (often denoted \(p\)). Additionally, if we are given some sufficient “description” of some sufficiently big ideal \(I' \supset I\) (i.e., all elements of \(I\) are in \(I'\) but not vice versa, and \(I'\) is considerably bigger than \(I\)) then we can compute

\[F_i' = F_i \bmod I',\]

which keeps the size of the \(F_i'\) small-ish. This is the role of \(x_0\), the noise-free multiple of \(p\) in the partial approximate GCD problem. Now, one simply solves the noise-free system \(F_1', \dots, F_m'\). In the PAGCD case this means computing a single GCD; in the multivariate polynomial case (including LWE) it means computing a Gröbner basis (or linearising, which is the same thing for the cases we are concerned with). Hence, so far Arora & Ge and Chen & Nguyen are really the same thing (it should be mentioned that this idea, due to Nguyen, was already mentioned in this paper) applied to different rings.

However, this is not really why the Chen & Nguyen algorithm is efficient (although this already provides a speed-up by a factor of 5).
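As a toy illustration of this noise-free reduction for PAGCD (tiny, insecure parameters of my own choosing; nothing here reflects the paper’s actual implementation):

```python
from math import gcd

def pacd_to_gcd(x0, x1, rho):
    # x0 = q0*p is noise-free, x1 = q1*p + r with 0 <= r < 2^rho.
    # One factor of the product is exactly x1 - r, a true multiple of p,
    # so p divides F; reducing mod x0 at each step keeps F small.
    F = 1
    for i in range(2 ** rho):
        F = F * (x1 - i) % x0
    return gcd(F, x0)

# toy example: p = 101, x0 = 97 * 101, x1 = 55 * 101 + 3, noise < 2^3
print(pacd_to_gcd(9797, 5558, 3))  # prints 101
```

Written like this the loop still performs \(2^{\rho}\) multiplications; it is the second idea below that cuts the exponent in half.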

**2. Efficient multiplication**

The key idea to drop the exponent from \(\rho\) to \(\rho/2\) is as follows. Instead of computing the product \(\prod_{j=0}^{2^{\rho}-1}(x_1 - j) \bmod x_0\) factor by factor over the integers, we compute with univariate polynomials mod \(x_0\), i.e. one defines

\[f(x) = \prod_{i=0}^{2^{\rho/2}-1} \big(x_1 - (x + i)\big) \bmod x_0\]

and notices that

\[\prod_{j=0}^{2^{\rho}-1}(x_1 - j) = \prod_{k=0}^{2^{\rho/2}-1} f\big(2^{\rho/2}\,k\big) \bmod x_0,\]

i.e., we can reduce \(2^{\rho}\) multiplications to \(2^{\rho/2}\) multiplications and \(2^{\rho/2}\) polynomial evaluations. It turns out this can be done in \(\tilde{\mathcal{O}}(2^{\rho/2})\). For the details read the paper.
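A naive rendering of this baby-step/giant-step product (with schoolbook polynomial arithmetic rather than the quasi-linear multipoint evaluation the paper relies on; the parameters are toy values I picked):

```python
from math import gcd

def polymul_mod(a, b, n):
    # schoolbook product of coefficient lists (constant term first), mod n
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % n
    return out

def polyeval_mod(f, x, n):
    # Horner evaluation of f at x, mod n
    acc = 0
    for c in reversed(f):
        acc = (acc * x + c) % n
    return acc

def pacd_bsgs(x0, x1, rho_half):
    # baby steps: build f(X) = prod_{i<m} (x1 - (X + i)) mod x0 once
    m = 2 ** rho_half
    f = [1]
    for i in range(m):
        f = polymul_mod(f, [(x1 - i) % x0, x0 - 1], x0)  # factor (x1 - i) - X
    # giant steps: the full product over 2^rho factors is prod_k f(m*k)
    F = 1
    for k in range(m):
        F = F * polyeval_mod(f, m * k, x0) % x0
    return gcd(F, x0)

# same toy instance as before, now with rho = 4 (m = 4 baby/giant steps)
print(pacd_bsgs(9797, 5558, 2))  # prints 101
```

With fast multipoint evaluation both phases cost \(\tilde{\mathcal{O}}(2^{\rho/2})\); the explicit loops here are only meant to show the arithmetic.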

But to get back to my previous point: It turns out the Arora&Ge perspective on noisy system solving is also useful for approximate GCDs. Which provides further evidence that it is useful to generalise LWE and AGCD to ideal theoretic problems in multivariate polynomial rings.

## Slides: Introduction to Algebraic Techniques in Block Cipher Cryptanalysis

This morning I delivered my talk titled “Algebraic Techniques in Cryptanalysis (of block ciphers with a bias towards Gröbner bases)” at the ECRYPT PhD Summer School here in Albena, Bulgaria. I covered:

- Why bother
- Setting up equation systems
- Solving (GBs, SAT solvers, MIP, Cube Testers)
- “Advanced” Techniques

Well, here are the slides, which perhaps spend too much time explaining F4.

**PS:** This is as good as any opportunity to point to the paper “Algebraic Techniques in Differential Cryptanalysis Revisited” by Meiqin Wang, Yue Sun, Nicky Mouha and Bart Preneel accepted at ACISP 2011. I don’t agree with every statement in the paper – which revisits techniques Carlos and I proposed in 2009 – but our FSE 2009 paper does deserve a good whipping, i.e., we were way too optimistic about our attack.

## On the Relation Between the Mutant Strategy and the Normal Selection Strategy in Gröbner Basis Algorithms

We finally (sorry for the delay!) finished our paper on the Mutant strategy. Here’s the abstract:

The computation of Gröbner bases remains one of the most powerful methods for tackling the Polynomial System Solving (PoSSo) problem. The most efficient known algorithms reduce the Gröbner basis computation to Gaussian eliminations on several matrices. However, several degrees of freedom are available to generate these matrices. It is well known that the particular strategies used can drastically affect the efficiency of the computations.

In this work we investigate a recently-proposed strategy, the so-called Mutant strategy, on which a new family of algorithms is based (MXL, MXL2 and MXL3). By studying and describing the algorithms based on Gröbner basis concepts, we demonstrate that the Mutant strategy can be understood to be equivalent to the classical Normal Selection strategy currently used in Gröbner basis algorithms. Furthermore, we show that the partial enlargement technique can be understood as a strategy for restricting the number of S-polynomials considered in an iteration of the F4 Gröbner basis algorithm, while the new termination criterion used in MXL3 does not lead to termination at a lower degree than the classical Gebauer-Möller installation of Buchberger’s criteria.

We claim that our results map all novel concepts from the MXL family of algorithms to their well-known Gröbner basis equivalents. Using previous results that had shown the relation between the original XL algorithm and F4, we conclude that the MXL family of algorithms can be fundamentally reduced to redundant variants of F4.
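For readers unfamiliar with the normal selection strategy the abstract refers to: in each iteration one processes the critical pairs whose lcm of leading monomials has minimal degree. A toy sketch over GF(2) (my own illustrative code, unrelated to the MXL or F4 implementations), representing a polynomial as a set of exponent tuples:

```python
def lm(f):
    # leading monomial of a GF(2) polynomial (a set of exponent tuples), deglex order
    return max(f, key=lambda m: (sum(m), m))

def lcm(m1, m2):
    return tuple(max(a, b) for a, b in zip(m1, m2))

def spoly(f, g):
    # S-polynomial over GF(2); addition of polynomials is symmetric difference
    L = lcm(lm(f), lm(g))
    def shift(h, m):
        return {tuple(a + b for a, b in zip(mon, m)) for mon in h}
    u = tuple(a - b for a, b in zip(L, lm(f)))
    v = tuple(a - b for a, b in zip(L, lm(g)))
    return shift(f, u) ^ shift(g, v)

def normal_selection(pairs):
    # keep the critical pairs whose lcm of leading monomials has minimal degree
    deg = lambda pair: sum(lcm(lm(pair[0]), lm(pair[1])))
    d = min(deg(p) for p in pairs)
    return [p for p in pairs if deg(p) == d]

# f = x^2 + y, g = x*y + x in GF(2)[x, y]; spoly(f, g) = x^2 + y^2
f = {(2, 0), (0, 1)}
g = {(1, 1), (1, 0)}
```

The paper’s claim, in these terms, is that the pairs the Mutant strategy privileges are exactly those a degree-minimal selection like `normal_selection` would pick anyway.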