Algorithms for LWE and the Approximate GCD Problem over the Integers

Let n be the number of variables in some polynomial ring \mathbb{R}[x_1,\dots,x_n] (with \mathbb{R} a ring) and let d be the degree of a bunch of polynomials F = [f_1,\dots,f_m] in \mathbb{R}[x_1,\dots,x_n], i.e., \deg(f_i) = d for all i. Of course, we can “solve” F by computing a Gröbner basis on F. Furthermore, it is well known that if n=1 then computing a GB is equivalent to computing the GCD of F and that if d=1 then computing a GB is equivalent to Gaussian elimination, i.e., finding a triangular basis for a module. In a nutshell, Gröbner bases generalise GCDs and Gaussian elimination. So far, so classical.
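To see the two special cases in action, here is a quick Sage illustration (my own toy example, not from any paper):

# n = 1: a Gröbner basis computation boils down to a GCD.
R.<x> = PolynomialRing(QQ, 1)  # multivariate ring with one variable
f = (x^2 - 1)*(x + 2); g = (x^2 - 1)*(x - 3)
print(Ideal([f, g]).groebner_basis())  # [x^2 - 1]
print(gcd(f, g))                       # x^2 - 1

# d = 1: a Gröbner basis computation boils down to Gaussian elimination.
R.<x,y,z> = PolynomialRing(QQ, order='degrevlex')
F = [x + y + z - 1, x - y + 2*z, y + 3*z - 2]
print(Ideal(F).groebner_basis())  # triangular, i.e. an echelon form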

It is no secret that I spent some time looking into a problem which we call Gröbner Bases with Noise (GBN), which again can be seen — with the appropriate choice of parameters — as a generalisation of the LWE problem (cf. these slides for some details. Sorry, the paper is still not done). Similarly, we may consider GBN as a generalisation of an approximate GCD problem over \mathbb{R}[x].

In our work (you know, the one which isn’t done yet), we define GBN over \mathbb{F}_q to keep things simple, but the notion can easily be extended to, for example, \mathbb{Z}. Hence, one could say GBN over \mathbb{Z} is a generalisation of GCDs over \mathbb{Z}[x] and in particular over \mathbb{Z} itself, viewing \mathbb{Z} as \mathbb{Z}[x] restricted to constant polynomials (cf. this paper which constructs a homomorphic encryption scheme based on the approximate GCD problem over the integers). So, we have a connection between GBN, LWE and the approximate GCD problem.
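To fix ideas, here is what a toy approximate GCD instance over the integers looks like in Sage (parameters purely illustrative and much too small for anything):

p = random_prime(2^32)                      # the secret
samples = [p * randint(1, 2^32) + randint(0, 2^8)
           for _ in range(10)]              # noisy multiples of p
# given only `samples`, recovering p is the approximate GCD problem;
# without the noise terms, gcd(samples) would reveal p immediately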

Now, as my tag cloud gives away, I like linear algebra and have a tendency to think about problems in terms of linear algebra and triangular bases. Hence, the above connection made me think about the applicability of algorithms for solving LWE to the approximate GCD problem over the integers. It turns out they are applicable (kinda).


On the Relation Between the Mutant Strategy and the Normal Selection Strategy in Gröbner Basis Algorithms

We finally (sorry for the delay!) finished our paper on the Mutant strategy. Here’s the abstract:

The computation of Gröbner bases remains one of the most powerful methods for tackling the Polynomial System Solving (PoSSo) problem. The most efficient known algorithms reduce the Gröbner basis computation to Gaussian eliminations on several matrices. However, several degrees of freedom are available to generate these matrices. It is well known that the particular strategies used can drastically affect the efficiency of the computations.
In this work we investigate a recently proposed strategy, the so-called Mutant strategy, on which a new family of algorithms is based (MXL, MXL2 and MXL3). By studying and describing the algorithms based on Gröbner basis concepts, we demonstrate that the Mutant strategy can be understood to be equivalent to the classical Normal Selection strategy currently used in Gröbner basis algorithms. Furthermore, we show that the partial enlargement technique can be understood as a strategy for restricting the number of S-polynomials considered in an iteration of the F4 Gröbner basis algorithm, while the new termination criterion used in MXL3 does not lead to termination at a lower degree than the classical Gebauer-Möller installation of Buchberger’s criteria.
We claim that our results map all novel concepts from the MXL family of algorithms to their well-known Gröbner basis equivalents. Using previous results that had shown the relation between the original XL algorithm and F4, we conclude that the MXL family of algorithms can be fundamentally reduced to redundant variants of F4.

Polly Cracker Revisited – Slides

I just gave a talk in the ISG seminar at Royal Holloway, University of London about this Polly Cracker business I’ve been thinking about lately. I’ve also decided to publish the slides. However, I’d like to stress that everything in there is preliminary, i.e. this is yet another presentation of work in progress (which I personally think is a good thing to do). Anyway, here’s the abstract:

“Since Gentry’s seminal work on homomorphic encryption, this area has received considerable attention from the cryptographic community. Perhaps one of the most natural homomorphic schemes conceivable is Polly Cracker, which is homomorphic by construction. However, almost all Polly Cracker inspired schemes that have been proposed so far have been badly broken. In fact, it was conjectured about 15 years ago in “Why you cannot even hope to use Gröbner Bases in Public Key Cryptography: an open letter to a scientist who failed and a challenge to those who have not yet failed.” that it was impossible to construct a secure Polly Cracker-style scheme.

In this work we initiate a formal treatment of cryptosystems based on the hardness of Gröbner basis computations for random systems of equations, discuss their limitations and why standard techniques from homomorphic encryption research fail in this area, and propose a Polly Cracker variant based on polynomial system solving with noise which is a first step towards a provably secure Polly Cracker public-key scheme.”

Cryptanalysis of my Somewhat Homomorphic PollyCracker Scheme

About 5 months ago I wrote about a homomorphic scheme based on integers and an adaptation of this scheme to use multivariate polynomials. At the very end I wrote:

“Of course, I assume that my adaptation above can still be broken somehow since that’s what tends to happen with multivariate crypto schemes. Also, I’m really really not an expert on public-key cryptography. Hey, this is a blog post, not a research paper … so break it in the comments :)”

Well, with some delay, here is how to break it. Recall that the secret is some Gröbner basis g_0,\dots,g_{n-1} in \mathbb{F}[x_0,\dots,x_{n-1}] (assume \mathbb{F}_2 for simplicity) and that a ciphertext is constructed as

c = \sum_{i=0}^{n-1} f_ig_i + m' + m

where m' has constant coefficient 0 and f_i are random polynomials. Now, let \deg(f_i) = Q, \deg(g_i) = P, \deg(m') = N with N < P to ensure correct decryption (i.e. no interference of the noise with the normal form computation).

To break the scheme simply request about n^{P+Q} encryptions of zero and compute the row echelon form of the linearisation of those polynomials of degree P+Q without elimination above the pivots (i.e. don’t compute the reduced row echelon form). Throw away any element of degree \leq N and call the resulting list S. This list S allows us to decrypt any ciphertext c by reducing c modulo S.

Below, the attack in Sage:

class PolynomialHomomorphic:
    def __init__(self, l):
        self.l = l
        K = GF(2) # pick some field
        # we choose a ring with l unknowns, which
        # should make any GB computation at least
        # as hard as 2^l if we pick the ideal sufficiently
        # random.
        R = PolynomialRing(K, l, 'x', order='degrevlex')
        self.K = K
        self.R = R
        # our small ideal, defines the message space.
        self.b = Ideal([x for x in R.gens()])

        # these parameters are pretty arbitrary from a
        # security perspective!
        self.N = 1
        self.P = 2
        self.Q = 1
        self.key_gen()

    def key_gen(self):
        b,l = self.b, self.l
        K, R = self.K, self.R
        # we pick a random ideal with l element of degree P
        p = [R.gen(i)**self.P + R.random_element(degree=self.P-1)
              for i in range(l)]
        self.p = Ideal(p)

    def encrypt(self, m):
        # we pick some m' which encodes the
        # same plaintext but is bigger
        m = self.R(m)
        mprime = self.R.random_element(degree=self.N)
        mprime -= mprime.constant_coefficient()
        mprime += m.constant_coefficient()

        # adding a random ideal element
        for f in self.p.gens():
            mprime += self.R.random_element(degree=self.Q)*f
        return mprime

    def decrypt(self, c):
        # decryption is just as in the integer case.
        return c.reduce(self.p).reduce(self.b)

def break_cpa(pc, challenge):
    # collect enough encryptions of zero to span the degree-(P+Q) part
    # of the secret ideal
    ciphertexts = [pc.encrypt(0) for _ in range(2*pc.l**(pc.Q+pc.P))]
    F = Sequence(ciphertexts)
    A,v = F.coefficient_matrix(sparse=False)
    # non-reduced row echelon form: no elimination above the pivots
    E = A.echelon_form(full=False)
    # keep only the rows whose degree lies above the noise
    s = dict([(f.lm(),f) for f in (E*v).list() if f.degree() >= pc.P])

    # reduce the challenge ciphertext modulo S until only the noise
    # plus the message remains
    while True:
        print(challenge)
        if challenge.lm() in s:
            challenge = challenge - s[challenge.lm()]
        else:
            if challenge.degree() > pc.P:
                raise ValueError("Ah, snap! It didn't work.")
            else:
                return challenge.constant_coefficient()
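For what it’s worth, here is a toy invocation of the attack (my own, with a deliberately small number of variables so that the 2l^{P+Q} encryptions of zero stay cheap):

pc = PolynomialHomomorphic(4)   # toy size, far too small for security
c = pc.encrypt(1)
print(pc.decrypt(c))            # 1, using the secret ideal
print(break_cpa(pc, c))         # 1 again, using only encryptions of 0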

This works because the noise is constructed in such a way as to never interfere with elimination: it never affects the leading monomials of the ideal. Thus, we don’t need to consider it during the elimination and can simply ignore the lower-order terms once we are done.

Note that this attack does not work against the integer homomorphic scheme by van Dijk et al. because there additions are not free: when we perform elimination on the higher-order bits in the integer scheme, the noise also accumulates; eventually it surpasses the size of the prime p, leaving us with no information. Put differently, we can attack schemes that are inspired by the integer-based scheme if additions are free. Thus, while it might seem tempting to replace integers by, say, univariate polynomials (which are often considered essentially computationally equivalent to integers and which would provide free additions), doing so would break the security of the scheme.
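Here is a toy illustration of that noise growth over the integers (numbers picked arbitrarily): a single near-multiple of p reveals its noise upon reduction modulo p, but summing many of them accumulates noise beyond p.

p = 10007                                   # a toy secret prime
cs = [p*randint(1, 2^10) + randint(0, 10)   # near-multiples of p
      for _ in range(5000)]
print(cs[0] % p)    # the noise of cs[0], a number in [0, 10]
print(sum(cs) % p)  # ~5000*5 > p worth of noise: junk, no information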

Mutants are people too

Despite having been proven to be a redundant variant of the F4 algorithm, the XL algorithm still receives a lot of attention from the cryptographic community. This is partly because XL is considered to be conceptually much simpler than Gröbner basis algorithms. However, this means that the wealth of theory available for understanding algorithms for polynomial system solving is largely ignored.

The most recent and perhaps most promising variant of the XL algorithm is the family of MXL algorithms, which are based around the concept of Mutants. Assume that in some iteration the XL algorithm finds elements of degree k while considering degree D > k. In a nutshell, the idea of the MutantXL algorithm is to continue the XL algorithm at degree k+1 instead of D+1, which is what the XL algorithm would do. The natural question to ask is thus what Mutants are in terms of Gröbner basis theory; are they something new or are they a concept which is already known in the symbolic computation world under a different name?
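To make the degree handling concrete, here is a schematic sketch in Python (my own rendering, not taken from the MXL papers; extend_linearise_eliminate and is_solved are hypothetical helpers):

def mutant_xl_schematic(F, D):
    # F: list of polynomials, D: current operating degree
    while True:
        # multiply F up to degree D, linearise and eliminate; return
        # the new elements found together with the set of their degrees
        new, degrees = extend_linearise_eliminate(F, D)
        F = F + new
        if is_solved(F):
            return F
        k = min(degrees, default=D)
        if k < D:
            D = k + 1  # the Mutant strategy: drop back to degree k+1
        else:
            D = D + 1  # plain XL: proceed to the next degree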

I was in Darmstadt this week visiting the group which mainly drives the effort behind the MXL family of algorithms. As part of my visit I gave a talk about the relation between the Mutant strategy and the strategy normally used in Gröbner basis algorithms for selecting critical pairs, called … the Normal Selection Strategy. In the talk we show that the Mutant strategy is a redundant variant of the Normal Selection Strategy. Also, I talked quite a bit about S-polynomials and how they can be used to account for every single reduction that happens in XL-style algorithms. Finally, I briefly touched on the “partial enlargement strategy” which was introduced with MXL2, showing that it is equivalent to selecting a subset of S-polynomials in each iteration of F4.

Unfortunately, there’s no full paper yet, so the presentation has to suffice for now.

Update: It was pointed out to me that a better way of phrasing the relationship is to state that the Mutant selection strategy can be understood as a redundant variant of the Normal selection strategy when used in F4. This phrasing is better because our statement is strictly about an algorithmic relation and not about who did what first knowing what … which is how one could read the original phrasing.

Sage, Degrees and Gröbner Bases

For one reason or another I found myself computing Gröbner bases over fields which are not \mathbb{F}_2. Thus, my interest in Sage’s interfaces to Singular and Magma increased quite a bit again recently (before, I was mainly using PolyBoRi). Hence I wrote two patches which need review (hint, hint):

#10331

A natural question when it comes to Gröbner basis computations is of course how many resources a particular input system is going to require. For many systems (in fact, it is conjectured that this holds for most systems) it turns out one can estimate the degree which will be reached during the computation by computing the degree of semi-regularity of the homogenisation of the system, i.e. one computes the index of the first non-positive coefficient c_k of the following power series:

\sum_{k\geq 0} c_k z^k = \frac{\prod_{i=0}^{m-1} (1-z^{d_i})}{(1-z)^n}

where the d_i are the respective degrees of the input polynomials. While there is a Magma script readily available for computing the degree of semi-regularity (by Luk Bettale), we didn’t have one for Sage.
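For illustration, here is a minimal standalone Sage sketch of what such a computation boils down to (my own toy version, not the code of the patch):

def deg_semi_regularity(degrees, n):
    # degrees: the list of the d_i; n: the number of variables
    R.<z> = PowerSeriesRing(ZZ, default_prec=sum(degrees)+1)
    s = prod((1 - z^d) for d in degrees) / (1 - z)^n
    # the degree of semi-regularity is the index of the first
    # non-positive coefficient of the series
    return next(k for k, c in enumerate(s.list()) if c <= 0)

deg_semi_regularity([2]*10, 5)  # ten quadrics in five variables: 3

With the patch applied, this is available as a method on ideals: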

sage: sr = mq.SR(1,2,1,4)
sage: F,s = sr.polynomial_system()
sage: I = F.ideal()
sage: I.degree_of_semi_regularity()
3

Thus, we expect to reach only degree three when computing a Gröbner basis for I:

sage: I = F.ideal()
sage: _ = I.groebner_basis('singular:slimgb',prot='sage')
Leading term degree: 1.
Leading term degree: 1. Critical pairs: 56.
Leading term degree: 2.
Leading term degree: 2. Critical pairs: 69.
Leading term degree: 3.
Leading term degree: 3. Critical pairs: 461.
Leading term degree: 3.
Leading term degree: 3. Critical pairs: 535.
Leading term degree: 2.
Leading term degree: 2. Critical pairs: 423.
Leading term degree: 2.
Leading term degree: 2. Critical pairs: 367.
Leading term degree: 3.
Leading term degree: 3. Critical pairs: 0.
Leading term degree: 1.
Leading term degree: 1. Critical pairs: 39.
Leading term degree: 1. Critical pairs: 38.
Leading term degree: 1. Critical pairs: 37.
Leading term degree: 1. Critical pairs: 36.
Leading term degree: 1. Critical pairs: 35.
Leading term degree: 1. Critical pairs: 34.
Leading term degree: 1. Critical pairs: 33.
Leading term degree: 1. Critical pairs: 32.
Leading term degree: 1. Critical pairs: 31.
Leading term degree: 1. Critical pairs: 30.
Leading term degree: 1. Critical pairs: 29.
Leading term degree: 1. Critical pairs: 28.
Leading term degree: 1. Critical pairs: 27.
Leading term degree: 1. Critical pairs: 26.
Leading term degree: 1. Critical pairs: 25.
Leading term degree: 1. Critical pairs: 24.
Leading term degree: 1. Critical pairs: 23.
Leading term degree: 1. Critical pairs: 22.
Leading term degree: 1. Critical pairs: 21.
Leading term degree: 1. Critical pairs: 20.
Leading term degree: 1. Critical pairs: 19.
Leading term degree: 1. Critical pairs: 18.
Leading term degree: 1. Critical pairs: 17.
Leading term degree: 1. Critical pairs: 16.
Leading term degree: 1. Critical pairs: 15.
Leading term degree: 1. Critical pairs: 14.
Leading term degree: 1. Critical pairs: 13.
Leading term degree: 1. Critical pairs: 12.
Leading term degree: 1. Critical pairs: 11.
Leading term degree: 1. Critical pairs: 10.
Leading term degree: 1. Critical pairs: 9.
Leading term degree: 1. Critical pairs: 8.
Leading term degree: 1. Critical pairs: 7.
Leading term degree: 1. Critical pairs: 6.
Leading term degree: 1. Critical pairs: 5.
Leading term degree: 1. Critical pairs: 4.
Leading term degree: 1. Critical pairs: 3.
Leading term degree: 1. Critical pairs: 2.

Highest degree reached during computation: 3.

Which leads me to my next point.

#10571

It’s nice to see how far a particular computation has progressed and to get some summary information about the computation at the end. Hence, a new patch which enables live logs from Singular and Magma in Sage.

sage: _ = I.groebner_basis('singular:slimgb',prot=True) # Singular native
1461888602
> def sage584=slimgb(sage581);
def sage584=slimgb(sage581);
1M[23,23](56)2M[56,eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeebbbbb32](69)3M[69,bbb69](461)3M[100,bbbbbb74](535)2M[100,beebbbbbb3](423)2M[23,eeeebbbbbb0](367)3M[28,eeeeebb0](0)
NF:399 product criterion:16403, ext_product criterion:0
[7:3]1(39)s(38)s(37)s(36)s(35)s(34)s(33)s(32)s(31)s(30)s(29)s(28)s(27)s(26)s(25)s(24)s(23)s(22)s(21)s(20)s(19)s(18)s(17)s(16)s(15)s(14)s(13)s(12)s(11)s(10)s(9)s(8)s(7)s(6)s(5)s(4)s(3)s(2)sss
(S:39)---------------------------------------
>

sage: _ = I.groebner_basis('magma', prot=True) # Magma native
Append(~_sage_, 0);
Append(~_sage_, 0);
>>>_sage_[126]:=_sage_[128];
_sage_[126]:=_sage_[128];
>>>Append(~_sage_, 0);
Append(~_sage_, 0);
>>>_sage_[86]:=GroebnerBasis(_sage_[126]);
_sage_[86]:=GroebnerBasis(_sage_[126]);
Homogeneous weights search
Number of variables: 40, nullity: 0
Exact search time: 0.000
********************
FAUGERE F4 ALGORITHM
********************
Coefficient ring: GF(2^4)
Rank: 40
Order: Graded Reverse Lexicographical
NEW hash table
Matrix kind: Zech short
Datum size: 2
No queue sort
Initial length: 80
Inhomogeneous

Initial queue setup time: 0.009
Initial queue length: 48

*******
STEP 1
Basis length: 80, queue length: 48, step degree: 1, num pairs: 8
Basis total mons: 264, average length: 3.300
Number of pair polynomials: 8, at 25 column(s), 0.000
Average length for reductees: 6.00 [8], reductors: 10.00 [8]
Symbolic reduction time: 0.000, column sort time: 0.000
8 + 8 = 16 rows / 25 columns, 32% / 37.641% (8/r)
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [8]
Delete 1 memory chunk(s); time: 0.000
After ech memory: 7.9MB
Queue insertion time: 0.000
Step 1 time: 0.000, [0.001], mat/total: 0.000/0.009 [0.005], mem: 7.9MB

*******
STEP 2
Basis length: 88, queue length: 48, step degree: 2, num pairs: 32
Basis total mons: 340, average length: 3.864
Number of pair polynomials: 32, at 169 column(s), 0.000
Average length for reductees: 3.88 [32], reductors: 7.12 [192]
Symbolic reduction time: 0.000, column sort time: 0.000
32 + 192 = 224 rows / 293 columns, 2.2733% / 5.8884% (6.6607/r)
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [8]
Delete 1 memory chunk(s); time: 0.000
After ech memory: 7.9MB
Queue insertion time: 0.000
Step 2 time: 0.000, [0.003], mat/total: 0.000/0.009 [0.008], mem: 7.9MB

*******
STEP 3
Basis length: 96, queue length: 69, step degree: 3, num pairs: 69
Basis total mons: 472, average length: 4.917
Number of pair polynomials: 69, at 540 column(s), 0.000
Average length for reductees: 13.20 [69], reductors: 5.50 [343]
Symbolic reduction time: 0.000, column sort time: 0.000
69 + 343 = 412 rows / 719 columns, 0.94387% / 2.4223% (6.7864/r), bv
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [69]
Delete 1 memory chunk(s); time: 0.000
After ech memory: 7.9MB
Queue insertion time: 0.010
Step 3 time: 0.010, [0.015], mat/total: 0.000/0.019 [0.023], mem: 7.9MB

*******
STEP 4
Basis length: 165, queue length: 736, step degree: 3, num pairs: 461
Basis total mons: 1368, average length: 8.291
Number of pair polynomials: 461, at 802 column(s), 0.000
Average length for reductees: 10.35 [461], reductors: 7.49 [654]
Symbolic reduction time: 0.000, column sort time: 0.000
461 + 654 = 1115 rows / 804 columns, 1.0789% / 2.7301% (8.6744/r)
Before ech memory: 7.9MB
Row sort time: 0.000
0.010 + 0.000 = 0.010 [129]
Delete 1 memory chunk(s); time: 0.000
Number of unused reductors: 2
After ech memory: 7.9MB
Queue insertion time: 0.020
Step 4 time: 0.030, [0.022], mat/total: 0.010/0.049 [0.045], mem: 7.9MB

*******
STEP 5
Basis length: 294, queue length: 2094, step degree: 2, num pairs: 132
Basis total mons: 1669, average length: 5.677
Number of pair polynomials: 132, at 149 column(s), 0.000
Average length for reductees: 2.00 [132], reductors: 5.22 [148]
Symbolic reduction time: 0.000, column sort time: 0.000
132 + 148 = 280 rows / 149 columns, 2.4856% / 7.1242% (3.7036/r)
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [0]
Delete 1 memory chunk(s); time: 0.000
After ech memory: 7.9MB
Queue insertion time: 0.000
Step 5 time: 0.000, [0.001], mat/total: 0.010/0.049 [0.046], mem: 7.9MB

*******
STEP 6
Basis length: 294, queue length: 1962, step degree: 3, num pairs: 848
Basis total mons: 1669, average length: 5.677
Number of pair polynomials: 57, at 85 column(s), 0.000
Average length for reductees: 2.00 [57], reductors: 2.51 [84]
Symbolic reduction time: 0.000, column sort time: 0.000
57 + 84 = 141 rows / 85 columns, 2.7117% / 8.4127% (2.305/r)
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [0]
Delete 1 memory chunk(s); time: 0.000
After ech memory: 7.9MB
Queue insertion time: 0.000
Step 6 time: 0.000, [0.000], mat/total: 0.010/0.049 [0.046], mem: 7.9MB

*******
STEP 7
Basis length: 294, queue length: 1114, step degree: 4, num pairs: 1098
Basis total mons: 1669, average length: 5.677
Number of pair polynomials: 0, at 0 column(s), 0.000
Symbolic reduction time: 0.000, column sort time: 0.000
0 + 0 = 0 rows / 0 columns, 0% / 0% (0/r), bv
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [0]
After ech memory: 7.9MB
Queue insertion time: 0.000
Step 7 time: 0.000, [0.000], mat/total: 0.010/0.049 [0.046], mem: 7.9MB

*******
STEP 8
Basis length: 294, queue length: 16, step degree: 5, num pairs: 16
Basis total mons: 1669, average length: 5.677
Number of pair polynomials: 0, at 0 column(s), 0.000
Symbolic reduction time: 0.000, column sort time: 0.000
0 + 0 = 0 rows / 0 columns, 0% / 0% (0/r), bv
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [0]
After ech memory: 7.9MB
Queue insertion time: 0.000
Step 8 time: 0.000, [0.000], mat/total: 0.010/0.049 [0.046], mem: 7.9MB

Reduce 294 final polynomial(s) by 294
16 redundant polynomial(s) removed; time: 0.000
Interreduce 40 (out of 294) polynomial(s)
Symbolic reduction time: 0.000
Column sort time: 0.000
40 + 0 = 40 rows / 41 columns, 10.976% / 24.736% (4.5/r)
Row sort time: 0.000
0.000 + 0.000 = 0.000 [40]
Delete 1 memory chunk(s); time: 0.000
Total reduction time: 0.000
Reduction time: 0.000
Final number of polynomials: 278

Number of pairs: 759
Total pair setup time: 0.000
Max num entries matrix: 1115 by 804
Max num rows matrix: 1115 by 804
Total symbolic reduction time: 0.000
Total column sort time: 0.000
Total row sort time: 0.000
Total matrix time: 0.010
Total new polys time: 0.000
Total queue update time: 0.030
Total Faugere F4 time: 0.049, real time: 0.046
>>>_sage_[126]:=0;
_sage_[126]:=0;
>>>

sage: _ = I.groebner_basis('singular:slimgb',prot='sage') # Singular parsed
Leading term degree: 1.
Leading term degree: 1. Critical pairs: 56.
Leading term degree: 2.
Leading term degree: 2. Critical pairs: 69.
Leading term degree: 3.
Leading term degree: 3. Critical pairs: 461.
Leading term degree: 3.
Leading term degree: 3. Critical pairs: 535.
Leading term degree: 2.
Leading term degree: 2. Critical pairs: 423.
Leading term degree: 2.
Leading term degree: 2. Critical pairs: 367.
Leading term degree: 3.
Leading term degree: 3. Critical pairs: 0.
Leading term degree: 1.
Leading term degree: 1. Critical pairs: 39.
Leading term degree: 1. Critical pairs: 38.
Leading term degree: 1. Critical pairs: 37.
Leading term degree: 1. Critical pairs: 36.
Leading term degree: 1. Critical pairs: 35.
Leading term degree: 1. Critical pairs: 34.
Leading term degree: 1. Critical pairs: 33.
Leading term degree: 1. Critical pairs: 32.
Leading term degree: 1. Critical pairs: 31.
Leading term degree: 1. Critical pairs: 30.
Leading term degree: 1. Critical pairs: 29.
Leading term degree: 1. Critical pairs: 28.
Leading term degree: 1. Critical pairs: 27.
Leading term degree: 1. Critical pairs: 26.
Leading term degree: 1. Critical pairs: 25.
Leading term degree: 1. Critical pairs: 24.
Leading term degree: 1. Critical pairs: 23.
Leading term degree: 1. Critical pairs: 22.
Leading term degree: 1. Critical pairs: 21.
Leading term degree: 1. Critical pairs: 20.
Leading term degree: 1. Critical pairs: 19.
Leading term degree: 1. Critical pairs: 18.
Leading term degree: 1. Critical pairs: 17.
Leading term degree: 1. Critical pairs: 16.
Leading term degree: 1. Critical pairs: 15.
Leading term degree: 1. Critical pairs: 14.
Leading term degree: 1. Critical pairs: 13.
Leading term degree: 1. Critical pairs: 12.
Leading term degree: 1. Critical pairs: 11.
Leading term degree: 1. Critical pairs: 10.
Leading term degree: 1. Critical pairs: 9.
Leading term degree: 1. Critical pairs: 8.
Leading term degree: 1. Critical pairs: 7.
Leading term degree: 1. Critical pairs: 6.
Leading term degree: 1. Critical pairs: 5.
Leading term degree: 1. Critical pairs: 4.
Leading term degree: 1. Critical pairs: 3.
Leading term degree: 1. Critical pairs: 2.

Highest degree reached during computation: 3.

Note that these logs are all live, i.e. they are displayed during the computation as soon as the respective system makes them available.

Both patches are up for review on Sage’s trac server.

An MSc thesis on implementing F4

Daniel Cabarcas’ MSc thesis has the title An Implementation of Faugère’s F4 Algorithm for Computing Gröbner Bases and the following abstract:

“Gröbner bases are an important tool for analyzing systems of polynomial equations. They allow the system of equations to be solved exactly and therefore have gained popularity in many areas of science and technology. However, finding Gröbner bases is a computationally intensive task; thus, several algorithms have been developed for this goal. Faugère invented an elaborate algorithm to compute Gröbner bases in 1999 called F4, which has become a benchmark due to its efficiency.
We have implemented F4 from scratch in C++. In this thesis we revisit the theoretical foundation of the algorithm, provide details of our implementation, and compare it with other software that computes Gröbner bases.”

I probably know about 10 people who have tried to implement the algorithm in such a way that its performance is reasonable; most didn’t succeed. It’s nice to see that somebody has succeeded to some extent (see Experimental Results, p. 46ff) and wrote about it.

PS: Daniel posted about F4 on [sage-devel] before, which sparked a short discussion about F4.