Despite having been proven to be a redundant variant of the F4 algorithm, the XL algorithm still receives a lot of attention from the cryptographic community. This is partly because XL is considered conceptually much simpler than Gröbner basis algorithms. However, by adopting this simpler perspective, the wealth of theory available for understanding algorithms for polynomial system solving is largely ignored.

The most recent and perhaps most promising variant of the XL algorithm is the family of MXL algorithms, which is based on the concept of Mutants. Assume that in some iteration the XL algorithm finds elements of degree k while considering degree D > k. In a nutshell, the idea of the MutantXL algorithm is to continue the XL algorithm at degree k+1 instead of D+1, which is what plain XL would do. The natural question to ask is thus what Mutants are in terms of Gröbner basis theory: are they something new, or are they a concept already known in the symbolic computation world under a different name?
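The degree-fall phenomenon behind Mutants is easy to demonstrate in a toy computation over GF(2). The sketch below is my own illustration (not code from any MXL implementation): a monomial is a frozenset of variable indices (square-free, since we work modulo the field equations x_i² = x_i), a polynomial is a set of monomials, and addition over GF(2) is symmetric difference. We enlarge a small quadratic system up to degree D, echelonize the resulting Macaulay matrix, and count new elements whose degree fell below D — the Mutant candidates at which MutantXL would continue working.

```python
from itertools import combinations

def mul(poly, mono):
    """Multiply a polynomial by a monomial over GF(2) (mod x_i^2 = x_i)."""
    result = set()
    for m in poly:
        result ^= {m | mono}   # XOR handles coefficient arithmetic mod 2
    return frozenset(result)

def degree(poly):
    return max((len(m) for m in poly), default=-1)

def macaulay_rows(system, nvars, D):
    """The XL enlargement step: all products f*m with deg(m) <= D - deg(f)."""
    rows = []
    for f in system:
        for k in range(D - degree(f) + 1):
            for mono in combinations(range(nvars), k):
                rows.append(mul(f, frozenset(mono)))
    return rows

def echelonize(rows):
    """Gaussian elimination over GF(2), pivoting on the largest monomial
    under a graded order (degree first, then on the variable indices)."""
    key = lambda m: (len(m), sorted(m, reverse=True))
    basis = {}   # leading monomial -> row
    for row in rows:
        row = set(row)
        while row:
            lead = max(row, key=key)
            if lead not in basis:
                basis[lead] = frozenset(row)
                break
            row ^= set(basis[lead])   # cancel the leading monomial
    return list(basis.values())

# Example system in GF(2)[x0, x1, x2]:
#   x0*x1 + x2,  x1*x2 + x0 + 1,  x0*x2 + x1
x0, x1, x2, one = frozenset([0]), frozenset([1]), frozenset([2]), frozenset()
system = [frozenset({x0 | x1, x2}),
          frozenset({x1 | x2, x0, one}),
          frozenset({x0 | x2, x1})]

D = 3
reduced = echelonize(macaulay_rows(system, 3, D))
low_before = len(echelonize(system))                 # inputs all have degree < D
low_after = sum(1 for p in reduced if degree(p) < D)
print(f"new elements of degree < {D} (Mutant candidates): {low_after - low_before}")
```

Here, for instance, x0·(x0x1 + x2) = x0x1 + x0x2 already has degree 2 because of the field equations, and combining it with the other rows produces genuinely new low-degree elements.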

I was in Darmstadt this week visiting the group which mainly drives the effort behind the MXL family of algorithms. As part of my visit I gave a talk about the relation between the Mutant strategy and the standard strategy used in Gröbner basis algorithms for selecting critical pairs, called … the Normal Selection Strategy. In the talk we show that the Mutant strategy is a redundant variant of the Normal Selection Strategy. I also talked quite a bit about S-polynomials and how they can be used to account for every single reduction that happens in XL-style algorithms. Finally, I briefly touched on the “partial enlargement strategy” which was introduced with MXL2, showing that it is equivalent to selecting a subset of S-polynomials in each iteration of F4.
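To make the S-polynomial connection concrete, here is a small SymPy example over the rationals (the polynomials f and g are my own illustration, not taken from the talk). In Macaulay-matrix terms, y·f and x·g are two rows sharing the same leading column, and the S-polynomial is exactly the row operation that cancels that column:

```python
from sympy import symbols, LT, lcm, expand

x, y = symbols('x y')
f = x**2*y - 1
g = x*y**2 - x

# Leading terms under a graded (degree-compatible) monomial order.
ltf = LT(f, order='grlex')   # x**2*y
ltg = LT(g, order='grlex')   # x*y**2

# S-polynomial: scale f and g so their leading terms match, then subtract.
m = lcm(ltf, ltg)            # x**2*y**2
s = expand(m/ltf*f - m/ltg*g)
print(s)                     # the leading terms cancel: x**2 - y
```

The degree drop from 3 to 2 in the result is precisely the kind of event that, in XL language, produces a Mutant.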

Unfortunately, there’s no full paper yet, so the presentation has to suffice for now.

Update: It was pointed out to me that a better way of phrasing the relationship is to state that the Mutant selection strategy can be understood as a redundant variant of the Normal selection strategy when used in F4. This phrasing is better because it makes a statement strictly about an algorithmic relation, and not about who did what first knowing what, which is how one could read the original phrasing.

Martin,

since you claim that the MXL family of algorithms is just a redundant variant of the F4 algorithm, that must mean that the MXL algorithms are just repeating what F4 is doing, and no better. But how do you explain the fact that experimental results show the best MXL family algorithms beating the best available F4 implementation in Magma by a very substantial margin in terms of both time and memory, despite the fact that the MXL family of algorithms uses a linear solver by you which is normally 2-3 or more times slower than the linear solver in Magma? Maybe you should add a paragraph explaining why?

Also, please go to check the paper by Enrico Thomae and Christopher Wolf

http://eprint.iacr.org/2010/596

which claims:

“We confirm in a theoretical way what Buchmann \etAl observed on the connection between F$_4$ and MutantXL on the \MQ-system Hidden Field Equations (HFE), i.e. in some cases MutantXL is faster than F$_4$, respectively F$_5$.”

Are you saying that the paper of C. Wolf is totally baseless and wrong as well?

Anyway, I suggest that you really should finish your paper first before you blog about it.

> since you claim that MXL family of algorithms are just redundant of F4
> algorithm, that must mean that the MXL algorithms are just repeating what
> F4 is doing, and no better. But how do you explain the fact that the results
> of experiments showing that the best MXL family algorithms beat the best
> available F4 implementation in Magma by a very substantial margin in
> terms of both time and memory, despite the fact that the MXL family of
> algorithms are using a linear solver by you which is normally 2-3 or more
> times slower than the linear solver in Magma? Maybe you should add a
> paragraph explain why?

a) that linear solver you refer to is our M4RI library, and the claim that we’re usually 2-3 times slower than Magma is not clear to me. Where do you get this information from? Are you comparing Magma’s dense linear algebra with M4RI, or have you managed to call Magma’s linear algebra used in GB computations somehow?

b) I would suggest that the performance difference is due to different choices made by Allan, which I cannot know since I don’t have inside knowledge of Magma.

> Also, please go to check the paper by Enrico Thomae and Christopher
> Wolf http://eprint.iacr.org/2010/596 which claims:
> “We confirm in a theoretical way what Buchmann \etAl observed on the
> connection between F$_4$ and MutantXL on the \MQ-system Hidden
> Field Equations (HFE), i.e. in some cases MutantXL is faster than F$_4$,
> respectively F$_5$. ”
> Are you saying that the paper of C. Wolf is totally baseless and wrong as
> well?

a) I never said your work was baseless. All we did was to point out that it turns out that the Mutant strategy can be understood as a redundant variant of the Normal Selection strategy when used in F4.

b) But yes, I think the claim you cite above is wrong.

> Anyway, I suggest that you really should finish your paper first before you
> blog your paper.

I blogged about a presentation I gave. This way I can gather feedback early and if I made a mistake I’m happy to correct it here as well.

This is to respond to the following comment above:

“a) that linear solver you refer to is our M4RI library, and the claim that we’re usually 2-3 times slower than Magma is not clear to me. Where do you get this information from? Are you comparing Magma’s dense linear algebra with M4RI, or have you managed to call Magma’s linear algebra used in GB computations somehow?”

Yes, we experimented on matrices of the same size and Magma is at least 2 times faster than your M4RI. This is also said in our papers, which, I am very surprised, you did not know.

“b) I would suggest that the performance difference is due to different choices made by Allan, which I cannot know since I don’t have inside knowledge of Magma.”

Are you suggesting that he made bad choices and that therefore his F4 implementation is no good?

“Yes, we experimented on matrices of the same size and Magma is at least 2 times faster than your M4RI. This is also said in our papers, which, I am very surprised, you did not know.”

As I said above, you have to be careful what you compare. Did you compare the M4RI algorithm (of complexity O(n^3/log n)) with Magma’s EchelonForm() command (which is asymptotically fast: O(n^2.807))? In that case the difference is asymptotically more than a factor of 2-3. Furthermore, I assume that Magma isn’t using dense but sparse linear algebra techniques for F4, and thus I don’t think this comparison is very meaningful. I also believe that you cannot call the linear algebra used in F4 in Magma from the Magma command line, which makes it harder to compare. Or did you take the time from the GB protocol of Magma? Also, for sparse matrices these algorithms sometimes tend to behave quite strangely (cf. our preprint on elimination over GF(2)). For this reason it would be interesting for me to get the matrices that you used for comparison, to see if we can improve our library for this use-case. Finally, I should mention that we did implement asymptotically fast matrix elimination (O(n^2.807)), which may or may not improve your timings further.
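For readers wondering what “dense linear algebra over GF(2)” looks like concretely, here is a minimal sketch (my own illustration, unrelated to M4RI’s actual code) of word-packed Gaussian elimination, with each row stored as a Python int so that row addition becomes a single XOR. This is the basic representation such libraries build on; it does not implement the Method of Four Russians table trick that gives M4RI its O(n^3/log n) complexity.

```python
def echelonize_gf2(rows, ncols):
    """Row echelon form over GF(2).  Each row is an int whose bit
    (ncols - 1 - j) is the entry in column j; returns (rows, rank)."""
    rows = list(rows)
    rank = 0
    for col in range(ncols):
        mask = 1 << (ncols - 1 - col)
        # find a pivot row having a 1 in this column
        pivot = next((i for i in range(rank, len(rows)) if rows[i] & mask), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] & mask:
                rows[i] ^= rows[rank]  # row addition over GF(2) is XOR
        rank += 1
    return rows, rank

# 3x4 example: the third row is the XOR of the first two, so the rank is 2.
matrix = [0b1011, 0b0110, 0b1101]
reduced_rows, rank = echelonize_gf2(matrix, 4)
print(rank)  # 2
```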

“Are you suggesting that he made bad choices and that therefore his F4 implementation is no good?”

Clearly, Magma’s F4 is excellent, as witnessed by its performance and by how many people have failed to beat it. Yet MXL3 managed to improve upon it, and the obvious question to ask is how you did it. Perhaps Magma’s default choices are not good for the kind of systems you are considering, or even in most or all cases? I don’t know, but I claim that I have a negative result:

We claim to have shown that this is not because of a better algorithm. The way we attempt to prove this is by reducing the Mutant strategy to the Normal Selection strategy. Also, we claim to have demonstrated that the Partial Enlargement strategy is closely related to, e.g., Magma’s option for reducing the number of S-polynomials. Whether we are right or wrong can only be settled by others engaging with the argument we made, as outlined for instance in the presentation linked in this blog post.
