Now that version 4.6.2 of Sage is out, which contains the new Mixed Integer Programming interface that Nathann (mainly) and I (a little bit) wrote, I took the time to get my SCIP interface into good enough shape to open a ticket for it on Sage's bugtracker.

# Tag: posso

## Polly Cracker Revisited – Slides

I just gave a talk in the ISG seminar at Royal Holloway, University of London, about this Polly Cracker business I've been thinking about lately. I've also decided to publish the slides. However, I'd like to stress that everything in there is preliminary, i.e. this is yet another of those presentations about work in progress (which I personally think is a good thing to do). Anyway, here's the abstract:

"Since Gentry's seminal work on homomorphic encryption, this area has received considerable attention from the cryptographic community. Perhaps one of the most natural such schemes conceivable is Polly Cracker, which is naturally homomorphic. However, almost all Polly Cracker inspired schemes that have been proposed so far have been badly broken. In fact, it was conjectured about 15 years ago in "Why you cannot even hope to use Gröbner Bases in Public Key Cryptography: an open letter to a scientist who failed and a challenge to those who have not yet failed." that it was impossible to construct a secure Polly Cracker-style scheme.

In this work we initiate a formal treatment of cryptosystems based on the hardness of Gröbner basis computations for random systems of equations, discuss their limitations, why standard techniques from homomorphic encryption research fail in this area, and propose a Polly Cracker variant based on polynomial system solving with noise which is a first step towards a provably secure Polly Cracker public-key scheme.”
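For intuition, here is a toy sketch of the classical Polly Cracker construction the abstract alludes to: the public key is a set of polynomials sharing a secret common root, and a ciphertext is a random combination of them plus the message. All parameter choices and names below are illustrative (and, as the abstract notes, such naive instantiations are insecure).

```python
# Toy Polly Cracker over GF(P); illustrative and insecure, not the scheme from
# the talk. Polynomials in two variables are dicts {(e1, e2): coeff mod P}.
import random

P = 101  # a small prime field; chosen arbitrarily for the demo

def poly_add(f, g):
    h = dict(f)
    for m, c in g.items():
        h[m] = (h.get(m, 0) + c) % P
        if h[m] == 0:
            del h[m]
    return h

def poly_mul(f, g):
    h = {}
    for (a1, a2), c in f.items():
        for (b1, b2), d in g.items():
            m = (a1 + b1, a2 + b2)
            h[m] = (h.get(m, 0) + c * d) % P
    return {m: c for m, c in h.items() if c}

def poly_eval(f, x):
    return sum(c * pow(x[0], e1, P) * pow(x[1], e2, P)
               for (e1, e2), c in f.items()) % P

def keygen(rng):
    s = (rng.randrange(P), rng.randrange(P))  # secret common root
    pk = []
    for _ in range(3):
        f = {(rng.randrange(3), rng.randrange(3)): rng.randrange(1, P)
             for _ in range(4)}
        # adjust the constant term so that f(s) = 0
        f[(0, 0)] = (f.get((0, 0), 0) - poly_eval(f, s)) % P
        pk.append(f)
    return s, pk

def encrypt(pk, m, rng):
    # c = sum_i r_i * f_i + m, with random low-degree r_i
    c = {(0, 0): m % P}
    for f in pk:
        r = {(rng.randrange(2), rng.randrange(2)): rng.randrange(P)}
        c = poly_add(c, poly_mul(r, f))
    return c

def decrypt(s, c):
    return poly_eval(c, s)  # all f_i(s) = 0, so only m survives
```

The homomorphic property mentioned in the abstract is immediate: `poly_add` of two ciphertexts decrypts to the sum of the messages, and `poly_mul` to their product (at the cost of ciphertext growth).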

## Mutants are people too

Despite having been proven to be a redundant variant of the F4 algorithm, the XL algorithm still receives a lot of attention from the cryptographic community. This is partly because XL is considered to be conceptually much simpler than Gröbner basis algorithms. However, by treating it separately, the wealth of theory available for understanding algorithms for polynomial system solving is largely ignored.

The most recent and perhaps most promising variant of the XL algorithm is the family of MXL algorithms, which are based around the concept of Mutants. Assume that in some iteration the XL algorithm finds elements of degree *k* while considering degree *D > k*. In a nutshell, the idea of the MutantXL algorithm is to continue the XL algorithm at degree *k+1* instead of *D+1*, which is what the plain XL algorithm would do. The natural question to ask is thus what Mutants are in terms of Gröbner basis theory; are they something new, or are they a concept already known in the symbolic computing world under a different name?

I was in Darmstadt this week visiting the group which mainly drives the effort behind the MXL family of algorithms. As part of my visit I gave a talk about the relation of the Mutant strategy and the normal strategy used in Gröbner basis algorithms for selecting critical pairs called … the *Normal Selection Strategy*. In the talk we show that the Mutant strategy is a redundant variant of the Normal Selection Strategy. Also, I talked quite a bit about S-polynomials and how they can be used to account for every single reduction that happens in XL-style algorithms. Finally, I briefly touched on the “partial enlargement strategy” which was introduced with MXL2 showing that it is equivalent to selecting a subset of S-polynomials in each iteration of F4.
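As a tiny illustration of the S-polynomials mentioned above, here is a minimal computation over a small prime field. The field, monomial order and example are illustrative choices of mine, not the setting of the talk.

```python
# Minimal S-polynomial computation over GF(Q) in two variables.
# Polynomials are dicts {(e1, e2): coeff mod Q}; the order is graded lex.
Q = 7

def lm(f):
    # leading monomial under graded lex: compare (total degree, exponents)
    return max(f, key=lambda m: (sum(m), m))

def scale_shift(f, mono, coeff):
    # multiply f by coeff * x^mono
    return {(m[0] + mono[0], m[1] + mono[1]): (c * coeff) % Q
            for m, c in f.items()}

def sub(f, g):
    h = dict(f)
    for m, c in g.items():
        h[m] = (h.get(m, 0) - c) % Q
        if h[m] == 0:
            del h[m]
    return h

def s_polynomial(f, g):
    # S(f,g) = (lcm/LT(f))*f - (lcm/LT(g))*g, so leading terms cancel
    mf, mg = lm(f), lm(g)
    lcm = (max(mf[0], mg[0]), max(mf[1], mg[1]))
    tf = (lcm[0] - mf[0], lcm[1] - mf[1])  # cofactor for f
    tg = (lcm[0] - mg[0], lcm[1] - mg[1])  # cofactor for g
    inv_f = pow(f[mf], Q - 2, Q)           # inverse of leading coefficient
    inv_g = pow(g[mg], Q - 2, Q)
    return sub(scale_shift(f, tf, inv_f), scale_shift(g, tg, inv_g))
```

For example, with f = x² + y and g = xy + x, the least common multiple of the leading monomials is x²y, and S(f,g) = y·f − x·g = y² − x²: the degree-3 leading terms cancel and a lower-degree element (a "mutant", in MXL terminology) appears.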

Unfortunately, there’s no full paper yet, so the presentation has to suffice for now.

**Update:** It was pointed out to me that a better way of phrasing the relationship is to state that the Mutant selection strategy can be understood as a redundant variant of the Normal selection strategy when used in F4. This way is better because our statement is strictly about an algorithmic relation and not about who did what first knowing what … which is how one could read the original phrasing.

## Cold Boot Key Recovery by Solving Polynomial Systems with Noise

Carlos and I finally managed to put our paper on polynomial system solving with noise and its application to the cold boot problem out.

**Abstract:** A method for extracting cryptographic key material from DRAM used in modern computers has been recently proposed in [9]; the technique was called Cold Boot attacks. When considering block ciphers, such as the AES and DES, simple algorithms were also proposed in [9] to recover the cryptographic key from the observed set of round subkeys in memory (computed via the cipher's key schedule operation), which were however subject to errors due to memory bit decay. In this work we extend this analysis to consider key recovery for other ciphers used in Full Disk Encryption (FDE) products. Our algorithms are also based on closest code word decoding methods, however apply a novel method for solving a set of non-linear algebraic equations with noise based on Integer Programming. This method should have further applications in cryptology, and is likely to be of independent interest. We demonstrate the viability of the Integer Programming method by applying it against the Serpent block cipher, which has a much more complex key schedule than AES. Furthermore, we also consider the Twofish key schedule, to which we apply a dedicated method of recovery.

Btw. an older version of our code for Sage for solving polynomial systems with errors is available on bitbucket.org (… yes, I should update it to the most recent version). Here’s an example from my talk at the Tools for cryptanalysis workshop 2010:

```
sage: p = PRESENT(Nr=1,sbox_representation='lex')
sage: F = present_dc(p,r=1,return_system=True,characteristic=True)
sage: H = F.gens()[:-64]
sage: S = F.gens()[-64:]
sage: S[:9]
(Y00100 + Y10100, Y00101 + Y10101, Y00102 + Y10102,
 Y00103 + Y10103, Y00104 + Y10104, Y00105 + Y10105,
 Y00106 + Y10106, Y00107 + Y10107 + 1, Y00108 + Y10108)
sage: F_prob = ProbabilisticMPolynomialSystem(F.ring(),H,S)
sage: s,t = F_prob.solve_mip(solver='SCIP')
Writing problem data to '/home/malb/.sage//temp/road/16007//tmp_1.mps'
6605 records were written
CPU Time: 0.20 Wall time: 25.95, Obj: 3.00
```

Not that this was a good way of attacking a block cipher, but you get the idea.

## Sage, Degrees and Gröbner Bases

For one reason or another I found myself computing Gröbner bases over fields which are not $latex \mathbb{F}_2$. Thus, my interest in Sage's interfaces to Singular and Magma increased quite a bit again recently (before, I was mainly using PolyBoRi). Hence I wrote two patches which need review (hint, hint).

A natural question when it comes to Gröbner basis computations is how many resources a particular input system will require. For many systems (in fact, it is conjectured that this holds for *most* systems) one can estimate the degree reached during the computation by computing the degree of semi-regularity of the homogenisation of the system, i.e. the index of the first non-positive coefficient of the power series

$latex \sum_{k \geq 0} c_k z^k = \frac{\prod_{i=0}^{m-1} (1 - z^{d_i})}{(1-z)^n},$

where $latex d_0,\dots,d_{m-1}$ are the respective degrees of the input polynomials and $latex n$ is the number of variables. While there is a Magma script readily available for computing the degree of semi-regularity (by Luk Bettale), we didn't have one for Sage.
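For a rough check without Sage, the truncated power series above can be computed in a few lines of plain Python (a sketch; the function name is my own):

```python
# Degree of semi-regularity: the index of the first non-positive coefficient
# of prod_i (1 - z^{d_i}) / (1 - z)^n, computed on truncated coefficient lists.

def degree_of_semi_regularity(n, degrees, N=256):
    # numerator prod (1 - z^d) as a truncated coefficient list
    num = [0] * N
    num[0] = 1
    for d in degrees:
        new = num[:]
        for i in range(N - d):
            new[i + d] -= num[i]
        num = new
    # dividing by (1 - z)^n is the same as taking n cumulative sums
    for _ in range(n):
        for i in range(1, N):
            num[i] += num[i - 1]
    for k, c in enumerate(num):
        if c <= 0:
            return k
    raise ValueError("increase the truncation bound N")
```

As a sanity check, for a regular system of n quadratic equations in n variables the series collapses to (1+z)^n, so the first non-positive coefficient sits at index n+1, matching the Macaulay bound.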

```
sage: sr = mq.SR(1,2,1,4)
sage: F,s = sr.polynomial_system()
sage: I = F.ideal()
sage: I.degree_of_semi_regularity()
3
```

Thus, we expect to reach only degree three when computing a Gröbner basis for this system:

```
sage: I = F.ideal()
sage: _ = I.groebner_basis('singular:slimgb',prot='sage')
Leading term degree: 1. Leading term degree: 1. Critical pairs: 56.
Leading term degree: 2. Leading term degree: 2. Critical pairs: 69.
Leading term degree: 3. Leading term degree: 3. Critical pairs: 461.
Leading term degree: 3. Leading term degree: 3. Critical pairs: 535.
Leading term degree: 2. Leading term degree: 2. Critical pairs: 423.
Leading term degree: 2. Leading term degree: 2. Critical pairs: 367.
Leading term degree: 3. Leading term degree: 3. Critical pairs: 0.
Leading term degree: 1. Leading term degree: 1. Critical pairs: 39.
Leading term degree: 1. Critical pairs: 38.
Leading term degree: 1. Critical pairs: 37.
Leading term degree: 1. Critical pairs: 36.
Leading term degree: 1. Critical pairs: 35.
Leading term degree: 1. Critical pairs: 34.
Leading term degree: 1. Critical pairs: 33.
Leading term degree: 1. Critical pairs: 32.
Leading term degree: 1. Critical pairs: 31.
Leading term degree: 1. Critical pairs: 30.
Leading term degree: 1. Critical pairs: 29.
Leading term degree: 1. Critical pairs: 28.
Leading term degree: 1. Critical pairs: 27.
Leading term degree: 1. Critical pairs: 26.
Leading term degree: 1. Critical pairs: 25.
Leading term degree: 1. Critical pairs: 24.
Leading term degree: 1. Critical pairs: 23.
Leading term degree: 1. Critical pairs: 22.
Leading term degree: 1. Critical pairs: 21.
Leading term degree: 1. Critical pairs: 20.
Leading term degree: 1. Critical pairs: 19.
Leading term degree: 1. Critical pairs: 18.
Leading term degree: 1. Critical pairs: 17.
Leading term degree: 1. Critical pairs: 16.
Leading term degree: 1. Critical pairs: 15.
Leading term degree: 1. Critical pairs: 14.
Leading term degree: 1. Critical pairs: 13.
Leading term degree: 1. Critical pairs: 12.
Leading term degree: 1. Critical pairs: 11.
Leading term degree: 1. Critical pairs: 10.
Leading term degree: 1. Critical pairs: 9.
Leading term degree: 1. Critical pairs: 8.
Leading term degree: 1. Critical pairs: 7.
Leading term degree: 1. Critical pairs: 6.
Leading term degree: 1. Critical pairs: 5.
Leading term degree: 1. Critical pairs: 4.
Leading term degree: 1. Critical pairs: 3.
Leading term degree: 1. Critical pairs: 2.
Highest degree reached during computation: 3.
```

Which leads me to my next point.

It's nice to see how far a particular computation has progressed and to get some summary information about the computation at the end. Hence, a new patch which enables live logs from Singular and Magma in Sage.

```
sage: _ = I.groebner_basis('singular:slimgb',prot=True) # Singular native
1461888602
> def sage584=slimgb(sage581);
def sage584=slimgb(sage581);
1M[23,23](56)2M[56,eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeebbbbb32](69)3M[69,bbb69](461)3M[100,bbbbbb74](535)2M[100,beebbbbbb3](423)2M[23,eeeebbbbbb0](367)3M[28,eeeeebb0](0)
NF:399 product criterion:16403, ext_product criterion:0
[7:3]1(39)s(38)s(37)s(36)s(35)s(34)s(33)s(32)s(31)s(30)s(29)s(28)s(27)s(26)s(25)s(24)s(23)s(22)s(21)s(20)s(19)s(18)s(17)s(16)s(15)s(14)s(13)s(12)s(11)s(10)s(9)s(8)s(7)s(6)s(5)s(4)s(3)s(2)sss
(S:39)---------------------------------------
>
sage: _ = I.groebner_basis('magma', prot=True) # Magma native
Append(~_sage_, 0);
Append(~_sage_, 0);
>>>_sage_[126]:=_sage_[128];
_sage_[126]:=_sage_[128];
>>>Append(~_sage_, 0);
Append(~_sage_, 0);
>>>_sage_[86]:=GroebnerBasis(_sage_[126]);
_sage_[86]:=GroebnerBasis(_sage_[126]);
Homogeneous weights search
Number of variables: 40, nullity: 0
Exact search time: 0.000
********************
FAUGERE F4 ALGORITHM
********************
Coefficient ring: GF(2^4)
Rank: 40
Order: Graded Reverse Lexicographical
NEW hash table
Matrix kind: Zech short
Datum size: 2
No queue sort
Initial length: 80
Inhomogeneous
Initial queue setup time: 0.009
Initial queue length: 48
******* STEP 1
Basis length: 80, queue length: 48, step degree: 1, num pairs: 8
Basis total mons: 264, average length: 3.300
Number of pair polynomials: 8, at 25 column(s), 0.000
Average length for reductees: 6.00 [8], reductors: 10.00 [8]
Symbolic reduction time: 0.000, column sort time: 0.000
8 + 8 = 16 rows / 25 columns, 32% / 37.641% (8/r)
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [8]
Delete 1 memory chunk(s); time: 0.000
After ech memory: 7.9MB
Queue insertion time: 0.000
Step 1 time: 0.000, [0.001], mat/total: 0.000/0.009 [0.005], mem: 7.9MB
******* STEP 2
Basis length: 88, queue length: 48, step degree: 2, num pairs: 32
Basis total mons: 340, average length: 3.864
Number of pair polynomials: 32, at 169 column(s), 0.000
Average length for reductees: 3.88 [32], reductors: 7.12 [192]
Symbolic reduction time: 0.000, column sort time: 0.000
32 + 192 = 224 rows / 293 columns, 2.2733% / 5.8884% (6.6607/r)
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [8]
Delete 1 memory chunk(s); time: 0.000
After ech memory: 7.9MB
Queue insertion time: 0.000
Step 2 time: 0.000, [0.003], mat/total: 0.000/0.009 [0.008], mem: 7.9MB
******* STEP 3
Basis length: 96, queue length: 69, step degree: 3, num pairs: 69
Basis total mons: 472, average length: 4.917
Number of pair polynomials: 69, at 540 column(s), 0.000
Average length for reductees: 13.20 [69], reductors: 5.50 [343]
Symbolic reduction time: 0.000, column sort time: 0.000
69 + 343 = 412 rows / 719 columns, 0.94387% / 2.4223% (6.7864/r), bv
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [69]
Delete 1 memory chunk(s); time: 0.000
After ech memory: 7.9MB
Queue insertion time: 0.010
Step 3 time: 0.010, [0.015], mat/total: 0.000/0.019 [0.023], mem: 7.9MB
******* STEP 4
Basis length: 165, queue length: 736, step degree: 3, num pairs: 461
Basis total mons: 1368, average length: 8.291
Number of pair polynomials: 461, at 802 column(s), 0.000
Average length for reductees: 10.35 [461], reductors: 7.49 [654]
Symbolic reduction time: 0.000, column sort time: 0.000
461 + 654 = 1115 rows / 804 columns, 1.0789% / 2.7301% (8.6744/r)
Before ech memory: 7.9MB
Row sort time: 0.000
0.010 + 0.000 = 0.010 [129]
Delete 1 memory chunk(s); time: 0.000
Number of unused reductors: 2
After ech memory: 7.9MB
Queue insertion time: 0.020
Step 4 time: 0.030, [0.022], mat/total: 0.010/0.049 [0.045], mem: 7.9MB
******* STEP 5
Basis length: 294, queue length: 2094, step degree: 2, num pairs: 132
Basis total mons: 1669, average length: 5.677
Number of pair polynomials: 132, at 149 column(s), 0.000
Average length for reductees: 2.00 [132], reductors: 5.22 [148]
Symbolic reduction time: 0.000, column sort time: 0.000
132 + 148 = 280 rows / 149 columns, 2.4856% / 7.1242% (3.7036/r)
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [0]
Delete 1 memory chunk(s); time: 0.000
After ech memory: 7.9MB
Queue insertion time: 0.000
Step 5 time: 0.000, [0.001], mat/total: 0.010/0.049 [0.046], mem: 7.9MB
******* STEP 6
Basis length: 294, queue length: 1962, step degree: 3, num pairs: 848
Basis total mons: 1669, average length: 5.677
Number of pair polynomials: 57, at 85 column(s), 0.000
Average length for reductees: 2.00 [57], reductors: 2.51 [84]
Symbolic reduction time: 0.000, column sort time: 0.000
57 + 84 = 141 rows / 85 columns, 2.7117% / 8.4127% (2.305/r)
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [0]
Delete 1 memory chunk(s); time: 0.000
After ech memory: 7.9MB
Queue insertion time: 0.000
Step 6 time: 0.000, [0.000], mat/total: 0.010/0.049 [0.046], mem: 7.9MB
******* STEP 7
Basis length: 294, queue length: 1114, step degree: 4, num pairs: 1098
Basis total mons: 1669, average length: 5.677
Number of pair polynomials: 0, at 0 column(s), 0.000
Symbolic reduction time: 0.000, column sort time: 0.000
0 + 0 = 0 rows / 0 columns, 0% / 0% (0/r), bv
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [0]
After ech memory: 7.9MB
Queue insertion time: 0.000
Step 7 time: 0.000, [0.000], mat/total: 0.010/0.049 [0.046], mem: 7.9MB
******* STEP 8
Basis length: 294, queue length: 16, step degree: 5, num pairs: 16
Basis total mons: 1669, average length: 5.677
Number of pair polynomials: 0, at 0 column(s), 0.000
Symbolic reduction time: 0.000, column sort time: 0.000
0 + 0 = 0 rows / 0 columns, 0% / 0% (0/r), bv
Before ech memory: 7.9MB
Row sort time: 0.000
0.000 + 0.000 = 0.000 [0]
After ech memory: 7.9MB
Queue insertion time: 0.000
Step 8 time: 0.000, [0.000], mat/total: 0.010/0.049 [0.046], mem: 7.9MB
Reduce 294 final polynomial(s) by 294
16 redundant polynomial(s) removed; time: 0.000
Interreduce 40 (out of 294) polynomial(s)
Symbolic reduction time: 0.000
Column sort time: 0.000
40 + 0 = 40 rows / 41 columns, 10.976% / 24.736% (4.5/r)
Row sort time: 0.000
0.000 + 0.000 = 0.000 [40]
Delete 1 memory chunk(s); time: 0.000
Total reduction time: 0.000
Reduction time: 0.000
Final number of polynomials: 278
Number of pairs: 759
Total pair setup time: 0.000
Max num entries matrix: 1115 by 804
Max num rows matrix: 1115 by 804
Total symbolic reduction time: 0.000
Total column sort time: 0.000
Total row sort time: 0.000
Total matrix time: 0.010
Total new polys time: 0.000
Total queue update time: 0.030
Total Faugere F4 time: 0.049, real time: 0.046
>>>_sage_[126]:=0;
_sage_[126]:=0;
>>>
sage: _ = I.groebner_basis('singular:slimgb',prot='sage') # Singular parsed
Leading term degree: 1. Leading term degree: 1. Critical pairs: 56.
Leading term degree: 2. Leading term degree: 2. Critical pairs: 69.
Leading term degree: 3. Leading term degree: 3. Critical pairs: 461.
Leading term degree: 3. Leading term degree: 3. Critical pairs: 535.
Leading term degree: 2. Leading term degree: 2. Critical pairs: 423.
Leading term degree: 2. Leading term degree: 2. Critical pairs: 367.
Leading term degree: 3. Leading term degree: 3. Critical pairs: 0.
Leading term degree: 1. Leading term degree: 1. Critical pairs: 39.
Leading term degree: 1. Critical pairs: 38.
Leading term degree: 1. Critical pairs: 37.
Leading term degree: 1. Critical pairs: 36.
Leading term degree: 1. Critical pairs: 35.
Leading term degree: 1. Critical pairs: 34.
Leading term degree: 1. Critical pairs: 33.
Leading term degree: 1. Critical pairs: 32.
Leading term degree: 1. Critical pairs: 31.
Leading term degree: 1. Critical pairs: 30.
Leading term degree: 1. Critical pairs: 29.
Leading term degree: 1. Critical pairs: 28.
Leading term degree: 1. Critical pairs: 27.
Leading term degree: 1. Critical pairs: 26.
Leading term degree: 1. Critical pairs: 25.
Leading term degree: 1. Critical pairs: 24.
Leading term degree: 1. Critical pairs: 23.
Leading term degree: 1. Critical pairs: 22.
Leading term degree: 1. Critical pairs: 21.
Leading term degree: 1. Critical pairs: 20.
Leading term degree: 1. Critical pairs: 19.
Leading term degree: 1. Critical pairs: 18.
Leading term degree: 1. Critical pairs: 17.
Leading term degree: 1. Critical pairs: 16.
Leading term degree: 1. Critical pairs: 15.
Leading term degree: 1. Critical pairs: 14.
Leading term degree: 1. Critical pairs: 13.
Leading term degree: 1. Critical pairs: 12.
Leading term degree: 1. Critical pairs: 11.
Leading term degree: 1. Critical pairs: 10.
Leading term degree: 1. Critical pairs: 9.
Leading term degree: 1. Critical pairs: 8.
Leading term degree: 1. Critical pairs: 7.
Leading term degree: 1. Critical pairs: 6.
Leading term degree: 1. Critical pairs: 5.
Leading term degree: 1. Critical pairs: 4.
Leading term degree: 1. Critical pairs: 3.
Leading term degree: 1. Critical pairs: 2.
Highest degree reached during computation: 3.
```

Note that these logs are all live, i.e. they are displayed during the computation as soon as the respective system makes them available.
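The general mechanism behind such live logs (reading a subprocess's output line by line and forwarding it immediately, rather than waiting for the process to finish) can be sketched as follows. This is a generic sketch, not Sage's actual implementation:

```python
# Generic live-log sketch: run a command and forward each stdout line to a
# sink (e.g. the user's terminal) as soon as it is produced.
import subprocess
import sys

def stream_command(cmd, sink=print):
    """Run cmd, forwarding each stdout line to sink as it appears."""
    lines = []
    with subprocess.Popen(cmd, stdout=subprocess.PIPE,
                          bufsize=1, text=True) as proc:
        for line in proc.stdout:      # yields output line by line, live
            line = line.rstrip("\n")
            sink(line)                # display immediately
            lines.append(line)        # and keep for later processing
    return lines
```

In the `prot='sage'` case above, the sink would additionally parse each raw Singular/Magma line into the uniform "Leading term degree / Critical pairs" format before displaying it.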

Both patches are up for review on Sage’s trac server.

## Polynomial System Solving with Noise

The reason I became somewhat interested in mixed integer programming is that it is one way of solving polynomial systems which are noisy. These kinds of polynomial systems pop up in crypto all over the place, but so far (to my knowledge) they have not been studied extensively. In order to clarify what is meant by "polynomial system solving with noise", we can define a family of *Max-PoSSo* problems similar to the Max-SAT family of problems (in fact, you can reduce *Max-PoSSo* to *Max-SAT*).

Polynomial system solving (*PoSSo*) is the problem of finding a solution to a system of polynomial equations over some field $latex \mathbb{F}$. We consider the set $latex F = \{f_0,\dots,f_{m-1}\}$, where each $latex f_i \in \mathbb{F}[x_0,\dots,x_{n-1}]$. A solution to $latex F$ is any point $latex x \in \mathbb{F}^n$ such that $latex \forall f \in F$, we have $latex f(x) = 0$. Note that we restrict ourselves to solutions in the base field in this context.

Then by *Max-PoSSo* we denote the problem of finding any point $latex x \in \mathbb{F}^n$ which satisfies the maximum number of polynomials in $latex F$. Likewise, by *Partial Max-PoSSo* we denote the problem of returning a point $latex x \in \mathbb{F}^n$ such that for two sets of polynomials $latex \mathcal{H}$ and $latex \mathcal{S}$ in $latex \mathbb{F}[x_0,\dots,x_{n-1}]$, we have $latex f(x) = 0$ for all $latex f \in \mathcal{H}$, and the number of polynomials $latex f \in \mathcal{S}$ with $latex f(x) = 0$ is maximised. Max-PoSSo is Partial Max-PoSSo with $latex \mathcal{H} = \varnothing$.

Finally, by *Partial Weighted Max-PoSSo* we denote the problem of returning a point $latex x \in \mathbb{F}^n$ such that $latex f(x) = 0$ for all $latex f \in \mathcal{H}$, and $latex \sum_{f \in \mathcal{S}} \mathcal{C}(f,x)$ is minimised, where $latex \mathcal{C}(f,x)$ is a cost function which returns $latex 0$ if $latex f(x) = 0$ and some value $latex v > 0$ if $latex f(x) \neq 0$. Partial Max-PoSSo is Partial Weighted Max-PoSSo where $latex \mathcal{C}(f,x)$ returns $latex 1$ whenever $latex f(x) \neq 0$.
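To make these definitions concrete, here is a brute-force reference implementation over $latex \mathbb{F}_2$, with polynomials given as Python functions. This is illustrative only (the names are mine); real instances are far too large for enumeration, which is where MIP and SAT solvers come in.

```python
# Brute-force (Partial Weighted) Max-PoSSo over GF(2); illustrative only.
# Polynomials are Python callables on 0/1 tuples, evaluated modulo 2.
from itertools import product

def partial_weighted_max_posso(n, hard, soft, cost):
    """Return (x, c): x in GF(2)^n with f(x)=0 for all f in `hard`,
    minimising c = sum of cost(f, x) over `soft`."""
    best, best_cost = None, None
    for x in product((0, 1), repeat=n):
        if any(f(x) % 2 for f in hard):   # hard constraints must hold
            continue
        c = sum(cost(f, x) for f in soft)
        if best_cost is None or c < best_cost:
            best, best_cost = x, c
    return best, best_cost

def max_posso(n, polys):
    # Max-PoSSo: empty hard set, unit cost for each violated polynomial
    return partial_weighted_max_posso(
        n, [], polys, lambda f, x: 0 if f(x) % 2 == 0 else 1)
```

For instance, the (unsatisfiable) system x0 + 1 = 0, x0 = 0, x1 + 1 = 0 over $latex \mathbb{F}_2$ has Max-PoSSo optimum 1 violated polynomial; adding x0 + 1 as a hard constraint forces x0 = 1 and singles out (1, 1).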

It turns out one can solve Partial Weighted Max-PoSSo over $latex \mathbb{F}_2$ using mixed integer programming. Recently, Carlos and I have been busy applying these ideas to the cold boot problem, with reasonable success. An extended abstract of our work is available here. Two sets of slides discussing our techniques are also available here and here.
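One ingredient of such a conversion is standard: each monomial gets a fresh 0/1 variable tied to the product of its variables by linear constraints. The sketch below shows this textbook linearisation only; it is not our full Max-PoSSo-to-MIP conversion, which in particular also has to handle addition modulo 2.

```python
# Standard linearisation of a 0/1 monomial z = x_{i1} * ... * x_{ik} into
# linear constraints, as used when modelling boolean systems as MIPs.
from itertools import product

def monomial_constraints(z, xs):
    """Constraints forcing z = prod(xs) for 0/1 variables.

    Each constraint is (coeffs, rhs), meaning
    sum(c * value(v) for c, v in coeffs) <= rhs.
    """
    cons = [([(1, z), (-1, x)], 0) for x in xs]   # z <= x_i for each i
    # z >= sum(x_i) - (k - 1), i.e. -z + sum(x_i) <= k - 1
    cons.append(([(-1, z)] + [(1, x) for x in xs], len(xs) - 1))
    return cons

def feasible_z(values, cons, zname):
    """All z in {0,1} satisfying cons under the given 0/1 assignment."""
    out = []
    for zval in (0, 1):
        assign = dict(values, **{zname: zval})
        if all(sum(c * assign[v] for c, v in lhs) <= rhs
               for lhs, rhs in cons):
            out.append(zval)
    return out
```

For every 0/1 assignment of the x variables, the only feasible value of z is exactly their product, so an optimal MIP solution respects the nonlinear structure while the solver only ever sees linear constraints.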

## Geometric XL

The paper by Sean Murphy and Maura Paterson has been around for quite some time now (as has my toy implementation). Gröbner basis algorithms and related methods like XL are algebraic in nature; in particular, their complexity is not invariant under a linear change of coordinates. For example, consider Cyclic-6:

```
sage: P.<a,b,c,d,e,f,h> = PolynomialRing(GF(32003))
sage: I = sage.rings.ideal.Cyclic(P,6).homogenize(h)
sage: J = Ideal(I.groebner_basis())
```

The generators of **J** are a Gröbner basis and we can use this property to find a common root for these generators. Now, consider the same equations but permute the variables in the ring.

```
sage: P.<a,b,c,d,e,f,h> = PolynomialRing(GF(32003),order='lex')
sage: I = sage.rings.ideal.Cyclic(P,6).homogenize(h)
sage: J = Ideal(I.groebner_basis())
sage: R = PolynomialRing(GF(32003),P.ngens(),list(reversed(P.variable_names())),order='lex')
sage: H = Ideal([R(str(f)) for f in J.gens()])
```

The generators of **H** do not form a Gröbner basis in **R**, which is just **P** with its variables reversed. If we are only trying to solve a system of equations, choosing the right permutation of variables might have a significant impact on the performance of our Gröbner basis algorithm:

```
sage: t = cputime()
sage: gb = H.groebner_basis('libsingular:std')
sage: gb[-1].degree()
19
sage: cputime(t) # output random-ish
25.36...
```

While in this example it is easy to see which variable permutation is the cheapest one, this is not necessarily the case in general. The GeometricXL algorithm is invariant under any linear change of coordinates and has the following property:

*Let D be the degree reached by the XL algorithm to solve a given system of equations under the optimal linear change of coordinates. Then GeometricXL will also solve this system of equations for the degree D, without applying this optimal linear change of coordinates first.*

The above behaviour holds under two assumptions:

- the characteristic of the base field **K** is bigger than **D**, and
- the system of equations has one or "very few" solutions.

To demonstrate this behaviour, consider the synthetic benchmark available here, which is a Gröbner basis under a linear change of coordinates:

```
sage: e,h = random_example(n=6)
```

**e** is the original easy system, while **h** is the "rotated" system:

```
sage: e.basis_is_groebner()
True
sage: max([f.total_degree() for f in e.gens()])
2
sage: h.basis_is_groebner()
False
sage: max([f.total_degree() for f in h.gens()])
2
```

GeometricXL recovers linear factors, and thus candidates for common roots, at **D=2**:

```
sage: hH = h.homogenize()
sage: f = GeometricXL(hH, D=2); f.factor(False)
0.0...s -- 1. D: 2
...
(-2684)
* (-1056*x5 - 2964*x4 - 177*x3 + 6206*x2 + 376*x1 + 6257*x0 + h)
* (2957*x5 - 792*x4 - 4323*x3 - 14408*x2 - 2750*x1 - 8823*x0 + h)
```

Any Gröbner basis algorithm, by contrast, would have to reach at least degree 64:

```
sage: gb = h.groebner_basis('libsingular:slimgb')
sage: gb[-1].degree()
64
```

While my toy implementation is neither robust nor efficient, the following table (using the same synthetic benchmark as above) should give some indication that GeometricXL is a good choice for some problems:

n | GeometricXL degree | GeometricXL time in seconds | Singular 3-0-4 time in seconds | Magma 2.14 time in seconds | Gröbner basis degree |
---|---|---|---|---|---|
2 | 2 | 0.01 | 0.00 | 0.00 | 4 |
3 | 2 | 0.07 | 0.00 | 0.00 | 8 |
4 | 2 | 0.18 | 0.00 | 0.00 | 16 |
5 | 2 | 0.37 | 0.02 | 0.01 | 32 |
6 | 2 | 0.73 | 0.12 | 0.04 | 64 |
7 | 2 | 1.36 | 0.94 | 0.09 | 128 |
8 | 2 | 2.35 | 9.59 | 0.56 | 256 |
9 | 2 | 4.24 | 117.43 | 3.79 | 512 |
10 | 2 | 7.34 | — | 28.68 | 1024 |
11 | 2 | 12.08 | — | 205.05 | 2048 |
12 | 2 | 20.03 | — | — | — |
13 | 2 | 35.53 | — | — | — |
14 | 2 | 53.98 | — | — | — |
15 | 2 | 88.36 | — | — | — |
16 | 2 | 143.16 | — | — | — |

While the benchmark is synthetic and unrealistic, there are a few problems in multivariate quadratic public-key cryptography which might be tackled using an idea similar to the GeometricXL algorithm. Also, note that constructing a GeometricMatrixF5 algorithm is straightforward by replacing the first step of the algorithm.