Fayssal’s code which implements the Faugère-Lachartre approach to linear algebra for Gröbner bases is available on Github now. Fayssal did a Master’s project on linear algebra for Gröbner bases in the team of Jean-Charles Faugère.
One of the most efficient techniques for solving polynomial systems over $\mathbb{F}_2$ is to convert the problem to a satisfiability problem and to use a standard SAT solver. In the past, I have used CryptoMiniSat and either my own ANF-to-CNF converter scripts based on Gregory Bard’s ideas or PolyBoRi’s script.
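The core of such a conversion is standard: introduce a fresh CNF variable for every nonlinear monomial and turn each polynomial into an XOR constraint over those variables. The following is a minimal, unoptimised Python sketch of that textbook construction; it is not the actual scripts mentioned above, and it does not cut long XORs into chunks, which any practical converter must do.

```python
from itertools import product

def anf_to_cnf(polys, nvars):
    """Convert Boolean polynomials in ANF to CNF clauses (DIMACS-style integer literals).
    A polynomial is a set of monomials; a monomial is a frozenset of variable
    indices in 1..nvars, with frozenset() denoting the constant 1."""
    clauses, mon_var = [], {}
    next_var = nvars + 1

    def var_for(mon):
        nonlocal next_var
        if len(mon) == 1:                      # a plain variable
            return next(iter(mon))
        if mon not in mon_var:                 # fresh variable m with m <-> AND(mon)
            m, next_var = next_var, next_var + 1
            mon_var[mon] = m
            for v in mon:
                clauses.append([-m, v])
            clauses.append([m] + [-v for v in mon])
        return mon_var[mon]

    for poly in polys:
        rhs = 1 if frozenset() in poly else 0  # move the constant term to the right-hand side
        lits = [var_for(m) for m in poly if m]
        # XOR(lits) = rhs: forbid every assignment with the wrong parity
        # (fine for short XORs; real converters first cut long XORs into chunks)
        for bits in product([0, 1], repeat=len(lits)):
            if sum(bits) % 2 != rhs:
                clauses.append([l if b == 0 else -l for b, l in zip(bits, lits)])
    return clauses, next_var - 1

# example: x1*x2 + x1 + 1 = 0 over GF(2)
clauses, n = anf_to_cnf([{frozenset({1, 2}), frozenset({1}), frozenset()}], 2)
print(n, clauses)
```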
However, this setup leaves much to be desired.
At #418 I just finished a much nicer interface to various SAT solvers. Here are some features. Continue reading “SAT Solvers for Sage”
CIEM – Castro Urdiales, Spain, 11-13 July 2012, http://scc2012.unican.es/
SCC 2012 is the third edition of a new series of conferences where research and development in symbolic computation and cryptography may be presented and discussed. It is organized in response to the growing interest in applying and developing methods, techniques, and software tools of symbolic computation for cryptography. The use of lattice reduction algorithms in cryptology and the application of Groebner bases in the context of algebraic attacks are typical examples of explored applications. The SCC 2012 conference is co-located with the third Workshop on Mathematical Cryptology (WMC 2012, http://wmc2012.unican.es/), an event also organized by the research group Algorithmic Mathematics And Cryptography (AMAC), which will be held on 9-11 July 2012.
Over at the Bristol Cryptography Blog Martijn Stam writes about our “Polly Cracker, Revisited” paper:
We did not discuss the paper in great detail, but Jake did mention one interesting avenue for continued research. Given that this new approach allows one to cast both LWE and approximate GCD in the same framework, can one also capture Ring-LWE? If so, this might enable a better comparison of the various fully homomorphic encryption (FHE) schemes out there. The hope expressed by Jake was that this might allow a reduction to standard LWE (for the current batch of Ring-LWE schemes), which would boost our confidence in those schemes.
This motivated me to express the Ring-LWE problem in the language of Gröbner bases; here’s what I could come up with so far. Continue reading “Ring-LWE and the GB(N) Problem”
In Faster Algorithms for Approximate Common Divisors: Breaking Fully-Homomorphic-Encryption Challenges over the Integers Yuanmi Chen and Phong Q. Nguyen (preprint here) propose a new algorithm for solving the approximate GCD problem. It drops the complexity from $\tilde{\mathcal{O}}(2^{2\rho})$ to $\tilde{\mathcal{O}}(2^{3\rho/2})$ in the general case and from $\tilde{\mathcal{O}}(2^{\rho})$ to $\tilde{\mathcal{O}}(2^{\rho/2})$ in the partial case (one multiple of $p$ is given noise-free), which is a pretty big deal.
The algorithm is based on two key ideas (explained using the partial approximate GCD problem):
1. Noisy solving reduced to noise-free solving
Similar to Arora & Ge’s algorithm for solving LWE, Chen and Nguyen reduce the approximate solving problem to a noise-free solving problem. In fact, the strategy is exactly the same (cf. also this post). Given noisy ideal elements $f_i = \sum_j h_{ij} g_j + r_i$, where the $g_j$ are generators of the ideal, the $h_{ij}$ are ring elements and the $r_i$ are small noise terms, the products $F_i = \prod_{z \in S} (f_i - z)$, with $S$ the set of possible noise values, will be elements of the ideal spanned by the $g_j$ if $S$ is big enough (depending on the exact setup we may drop the $-z$ part). In the approximate GCD case the ideal is generated by a single small odd integer, often denoted $p$. Additionally, if we are given some sufficient “description” of some sufficiently big ideal $J$ (i.e., all elements of $\langle g_0,\dots,g_{n-1} \rangle$ are in $J$ but not vice versa, and $J$ is considerably bigger than $\langle g_0,\dots,g_{n-1} \rangle$), then we can reduce the $F_i$ modulo $J$, which keeps their size small-ish. This is the role of $x_0$, the noise-free multiple of $p$ in the partial approximate GCD problem. Now, one simply solves the noise-free system given by the $F_i$. In the PAGCD case this means computing a single GCD; in the multivariate polynomial case (including LWE) this means computing a Gröbner basis (or linearising, which is the same thing for the cases we are concerned with). Hence, so far Arora & Ge and Chen & Nguyen are really the same thing (it should be mentioned that this idea, due to Nguyen, was already mentioned in this paper) applied to different rings.
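To make step 1 concrete for PAGCD, here is a minimal Python sketch; the parameters below are toy values invented purely for illustration and far too small to mean anything cryptographically.

```python
from math import gcd

p, q0, q1 = 1009, 1000003, 2000003   # p is the secret small odd integer
rho, r1 = 4, 11                      # noise size and one noise value, 0 <= r1 < 2**rho
x0 = p * q0                          # noise-free multiple of p (the "big ideal")
x1 = p * q1 + r1                     # noisy sample

# prod_{i=0}^{2^rho - 1} (x1 - i), reduced mod x0, is a noise-free element of <p>
acc = 1
for i in range(2**rho):
    acc = acc * (x1 - i) % x0

print(gcd(acc, x0))                  # 1009, i.e. p (with high probability for random q0)
```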
However, this is not really why the Chen & Nguyen algorithm is efficient (although this already provides a speed-up by a factor of 5).
2. Efficient multiplication
The key idea to drop the exponent from $\rho$ to $\rho/2$ is as follows. Instead of computing with integers we compute with univariate polynomials mod $x_0$: writing $x_1$ for a noisy sample, one defines

$f_j(x) = \prod_{i=0}^{j-1} (x_1 - (x + i)) \bmod x_0$

and notices that for $j = 2^{\rho/2}$:

$\prod_{i=0}^{2^{\rho}-1} (x_1 - i) \equiv \prod_{k=0}^{2^{\rho/2}-1} f_j(2^{\rho/2} \cdot k) \pmod{x_0},$

i.e., we can reduce $2^{\rho}$ multiplications to $2^{\rho/2}$ multiplications and $2^{\rho/2}$ polynomial evaluations. It turns out this can be done in $\tilde{\mathcal{O}}(2^{\rho/2})$ operations. For the details read the paper.
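Here is a naive Python continuation of the toy example above illustrating this baby-step/giant-step idea. Note that the real algorithm replaces the schoolbook polynomial multiplication and the one-by-one evaluations below with quasi-linear multiplication and fast multipoint evaluation, which is where the $\tilde{\mathcal{O}}(2^{\rho/2})$ bound comes from.

```python
from math import gcd

def polymul_mod(a, b, m):
    """Schoolbook product of two coefficient lists (low degree first), coefficients mod m."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % m
    return res

def polyeval_mod(f, t, m):
    """Evaluate the coefficient list f at t, mod m (Horner's rule)."""
    acc = 0
    for c in reversed(f):
        acc = (acc * t + c) % m
    return acc

# same hypothetical toy parameters as before
p, q0, q1, rho, r1 = 1009, 1000003, 2000003, 4, 11
x0, x1 = p * q0, p * q1 + r1
j = 2 ** (rho // 2)                             # 2^{rho/2}

# f_j(x) = prod_{i=0}^{j-1} (x1 - (x + i)) mod x0, built with 2^{rho/2} multiplications
f = [1]
for i in range(j):
    f = polymul_mod(f, [(x1 - i) % x0, x0 - 1], x0)   # the factor (x1 - i) - x

# prod_{i=0}^{2^rho - 1} (x1 - i) = prod_{k=0}^{j-1} f_j(j*k), all mod x0
acc = 1
for k in range(j):
    acc = acc * polyeval_mod(f, j * k, x0) % x0

print(gcd(acc, x0))                             # 1009 again, now with ~2^{rho/2} "big" steps
```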
But to get back to my previous point: it turns out the Arora & Ge perspective on noisy system solving is also useful for approximate GCDs, which provides further evidence that it is useful to generalise LWE and AGCD to ideal-theoretic problems in multivariate polynomial rings.
This morning I delivered my talk titled “Algebraic Techniques in Cryptanalysis (of block ciphers with a bias towards Gröbner bases)” at the ECrypt PhD Summerschool here in Albena, Bulgaria. I covered:
Well, here are the slides, which perhaps spend too much time explaining F4.
PS: This is as good as any opportunity to point to the paper “Algebraic Techniques in Differential Cryptanalysis Revisited” by Meiqin Wang, Yue Sun, Nicky Mouha and Bart Preneel accepted at ACISP 2011. I don’t agree with every statement in the paper – which revisits techniques Carlos and I proposed in 2009 – but our FSE 2009 paper does deserve a good whipping, i.e., we were way too optimistic about our attack.
We finally (sorry for the delay!) finished our paper on the Mutant strategy. Here’s the abstract:
The computation of Gröbner bases remains one of the most powerful methods for tackling the Polynomial System Solving (PoSSo) problem. The most efficient known algorithms reduce the Gröbner basis computation to Gaussian eliminations on several matrices. However, several degrees of freedom are available to generate these matrices. It is well known that the particular strategies used can drastically affect the efficiency of the computations.
In this work we investigate a recently-proposed strategy, the so-called Mutant strategy, on which a new family of algorithms is based (MXL, MXL2 and MXL3). By studying and describing the algorithms based on Gröbner basis concepts, we demonstrate that the Mutant strategy can be understood to be equivalent to the classical Normal Selection strategy currently used in Gröbner basis algorithms. Furthermore, we show that the partial enlargement technique can be understood as a strategy for restricting the number of S-polynomials considered in an iteration of the F4 Gröbner basis algorithm, while the new termination criterion used in MXL3 does not lead to termination at a lower degree than the classical Gebauer-Möller installation of Buchberger’s criteria.
We claim that our results map all novel concepts from the MXL family of algorithms to their well-known Gröbner basis equivalents. Using previous results that had shown the relation between the original XL algorithm and F4, we conclude that the MXL family of algorithms can be fundamentally reduced to redundant variants of F4.
Now that version 4.6.2 of Sage is out which contains the new Mixed Integer Programming interface that Nathann (mainly) and I (a little bit) wrote, I took the time to get my SCIP interface enough into shape to open a ticket on Sage’s bugtracker for it. Continue reading “Constraint Integer Programming in Sage”
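For readers who have not seen the generic interface, here is a small, hypothetical example; the knapsack instance and the GLPK backend choice are made up for illustration, and keyword names have shifted slightly between Sage versions.

```python
from sage.all import MixedIntegerLinearProgram

# a toy 0/1 knapsack solved through Sage's generic MIP interface
p = MixedIntegerLinearProgram(maximization=True, solver="GLPK")
x = p.new_variable(binary=True)                 # x[i] in {0, 1}

weights, values, capacity = [3, 4, 5, 6], [4, 5, 6, 7], 10
p.add_constraint(sum(weights[i] * x[i] for i in range(4)) <= capacity)
p.set_objective(sum(values[i] * x[i] for i in range(4)))

print(p.solve())        # optimal objective value
print(p.get_values(x))  # chosen items
```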
I just gave a talk in the ISG seminar at Royal Holloway, University of London about this Polly Cracker business I’ve been thinking about lately. I’ve also decided to publish the slides. However, I’d like to stress that everything in there is preliminary, i.e. this is yet another of those presentations presenting work in progress (which I personally think is a good thing to do). Anyway, here’s the abstract:
“Since Gentry’s seminal work on homomorphic encryption, this area has received considerable attention from the cryptographic community. Perhaps one of the most natural schemes conceivable is Polly Cracker, which is naturally homomorphic. However, almost all Polly Cracker-inspired schemes that have been proposed so far have been badly broken. In fact, it was conjectured about 15 years ago in “Why you cannot even hope to use Gröbner Bases in Public Key Cryptography: an open letter to a scientist who failed and a challenge to those who have not yet failed.” that it was impossible to construct a secure Polly Cracker-style scheme.
In this work we initiate a formal treatment of cryptosystems based on the hardness of Gröbner basis computations for random systems of equations, discuss their limitations, why standard techniques from homomorphic encryption research fail in this area, and propose a Polly Cracker variant based on polynomial system solving with noise which is a first step towards a provably secure Polly Cracker public-key scheme.”
Despite being proven to be a redundant variant of the F4 algorithm, the XL algorithm still receives a lot of attention from the cryptographic community. This is partly because XL is considered to be conceptually much simpler than Gröbner basis algorithms. However, this perspective largely ignores the wealth of theory available for understanding algorithms for polynomial system solving.
The most recent and perhaps promising variant of the XL algorithm is the family of MXL algorithms which are based around the concept of Mutants. Assume in some iteration the XL algorithm finds elements of degree k while considering degree D > k. In a nutshell, the idea of the MutantXL algorithm is to continue the XL algorithm at the degree k+1 instead of D+1 which is what the XL algorithm would do. The natural question to ask is thus what Mutants are in terms of Gröbner basis theory; are they something new or are they a concept which is already known in the symbolic computing world under a different name?
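To make the mechanics concrete, here is a hypothetical Sage sketch of one such step: extend a made-up quadratic system over $\mathbb{F}_2$ to degree D, eliminate, and collect any new elements of degree smaller than D, i.e. the mutants. A real implementation would also add the field equations and use sparse linear algebra throughout; this is only an illustration of the idea.

```python
from sage.all import GF, PolynomialRing, PolynomialSequence

R = PolynomialRing(GF(2), 'x', 4, order='degrevlex')
x = R.gens()
F = [x[0]*x[1] + x[2]*x[3] + x[1],
     x[1]*x[2] + x[0]*x[3] + x[3] + 1,
     x[0]*x[2] + x[1]*x[3] + x[0]]       # a made-up quadratic system

D = 3
# XL step at degree D: multiply every (quadratic) f by all monomials of
# degree <= D - 2, i.e. here by 1 and by each variable
multipliers = [R.one()] + list(x)
extended = [m * f for f in F for m in multipliers]

A, v = PolynomialSequence(extended).coefficient_matrix()  # Macaulay matrix at degree D
reduced = (A.echelon_form() * v).list()                   # eliminate, back to polynomials

# mutants: new nonzero elements of degree < D found while working at degree D
mutants = [f for f in reduced if f != 0 and f.degree() < D]
print(mutants)  # may well be empty for this toy system; MutantXL would multiply any
                # mutants by all variables and continue at degree k+1 instead of D+1
```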
I was in Darmstadt this week visiting the group which mainly drives the effort behind the MXL family of algorithms. As part of my visit I gave a talk about the relation of the Mutant strategy and the normal strategy used in Gröbner basis algorithms for selecting critical pairs called … the Normal Selection Strategy. In the talk we show that the Mutant strategy is a redundant variant of the Normal Selection Strategy. Also, I talked quite a bit about S-polynomials and how they can be used to account for every single reduction that happens in XL-style algorithms. Finally, I briefly touched on the “partial enlargement strategy” which was introduced with MXL2 showing that it is equivalent to selecting a subset of S-polynomials in each iteration of F4.
Unfortunately, there’s no full paper yet, so the presentation has to suffice for now.
Update: It was pointed out to me that a better way of phrasing the relationship is to state that the Mutant selection strategy can be understood as a redundant variant of the Normal selection strategy when used in F4. This phrasing is better because our statement is strictly about an algorithmic relation and not about who did what first knowing what … which is how one could read the original phrasing.