Faugère-Lachartre implementation for linear algebra for Gröbner bases

Fayssal’s code implementing the Faugère-Lachartre approach to linear algebra for Gröbner bases is now available on GitHub. Fayssal did a Master’s project on linear algebra for Gröbner bases in the team of Jean-Charles Faugère.

LPN and SVP

I am currently attending ESC 2013 in Mondorf, Luxembourg. Over dinner someone mentioned that there is no known reduction from LPN to lattice reduction, i.e., it is not known that you can solve LPN with LLL and friends. This seems rather strange to me, because the standard lattice attack on LWE seems to carry over as is:

sage: n = 100 # number of variables
sage: m = 400 # number of samples
sage: A = random_matrix(GF(2), m, n)
sage: s = random_vector(GF(2), n) # our secret
sage: p = 0.25 # our error rate

sage: v = A*s + vector(GF(2), [1 if random() < p else 0 for _ in range(m)])

# we are searching for a short vector in the dual lattice ...
sage: B = A.kernel().matrix() # left kernel, i.e. B*A == 0 over GF(2)
sage: L = B.change_ring(ZZ).LLL()

# ... because a short vector there means few additions, which means a higher
# bias in the noisy inner product <L[0], v> = <L[0], e>
sage: Av = A.augment(v)
sage: sum(map(lambda x: abs(x) % 2, L[0])), (L[0]*Av)[-1]

Of course, this means running lattice reduction many times, but still: what am I missing?
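
For concreteness, here is a rough sketch (my own illustration, with arbitrary subset size and trial count, not a tuned attack) of what “many times” could look like: rerun the reduction on fresh random subsets of the samples and tally how often the noisy inner product comes out zero.

sage: def trial(A, v, k=200): # k is an illustrative choice; it must exceed n
....:     I = sample(range(A.nrows()), k) # fresh random subset of the samples
....:     Ak = A.matrix_from_rows(I)
....:     vk = vector(GF(2), [v[i] for i in I])
....:     B = Ak.kernel().matrix() # dual lattice of the subsystem
....:     L = B.change_ring(ZZ).LLL()
....:     return (L[0]*Ak.augment(vk))[-1] # equals <L[0], e> restricted to I
sage: sum(1 for _ in range(50) if trial(A, v) == 0) # clearly above 25 suggests a bias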

PS: Obligatory Sage cell here.

Matrix Multiplication over GF(p^e)

After my talk at Sage Days 35 in Warwick (that was in winter 2011), David Harvey had an idea on how to speed up matrix multiplication over \mathbb{F}_{p^n}. We spent some time on this in Warwick and developed the idea further (adding fun stuff like Mixed Integer Programming in the process), but did not get around to doing much on this project in the meantime (I did explain the idea at the end of my talk in Mykonos, though).

Just now, in a conversation with Richard Parker, I was reminded of this dormant project, i.e., the question of how many multiplications in \mathbb{F}_p it takes to do a multiplication in \mathbb{F}_{p^n}. In particular, I recalled having written some Sage code which gives an upper bound on this count that is better than Karatsuba.
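
To make the Karatsuba baseline concrete, here is a minimal sketch (my own illustration, not the code mentioned above) for n = 2: the product of a_0 + a_1x and b_0 + b_1x takes three multiplications in \mathbb{F}_p instead of the schoolbook four, and the subsequent reduction modulo the field polynomial only costs multiplications by fixed constants.

sage: p = 17 # any prime; illustrative choice
sage: P.<x> = GF(p)[]
sage: a0, a1, b0, b1 = [GF(p).random_element() for _ in range(4)]
sage: m0 = a0*b0 # multiplication 1
sage: m2 = a1*b1 # multiplication 2
sage: m1 = (a0 + a1)*(b0 + b1) - m0 - m2 # multiplication 3; equals a0*b1 + a1*b0
sage: m0 + m1*x + m2*x^2 == (a0 + a1*x)*(b0 + b1*x)
True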

Well, here’s an interactive demo … gosh, I love the Sage cell server.

M4RI 20121224

I have just pushed the button to release M4RI 20121224. The main feature of this release is a considerable performance improvement. It all started with Fast matrix decomposition in F2 by Enrico Bertolazzi and Anna Rimoldi showing up on the arXiv. Here’s the abstract:

In this work an efficient algorithm to perform a block decomposition (and so to compute the rank) of large dense rectangular matrices with entries in F2 is presented. Depending on the way the matrix is stored, the operations acting on rows or block of consecutive columns (stored as one integer) should be preferred. In this paper, an algorithm that completely avoids the column permutations is given. In particular, a block decomposition is presented and its running times are compared with the ones adopted into SAGE.

… and that comparison made M4RI (which realises this functionality in Sage) look pretty bad. I didn’t (and still don’t) share the implicit assumption that avoiding column swaps was the key ingredient in making this code so much faster than ours. I assume the impressive timings are due to a very efficient base case implementation. Anyway, we sat down and looked for performance bottlenecks, the result of which is 20121224. I actually have no idea whether we caught up with the code described in Enrico’s and Anna’s preprint, as they did not publish their sources.

Still, the performance improvements over 20120613 were worth the trouble. Below are two plots of the (normalised) leading constants for multiplication and elimination, respectively (more plots on imgur). That is, they plot the running time divided by n^{2.807}, scaled by 10^9. In theory, these plots should all have slope 0.
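
For reference, this normalisation is straightforward to reproduce; the timings below are made-up placeholder values, not measurements:

sage: timings = [(2000, 0.2), (4000, 1.3), (8000, 9.1)] # (n, seconds); placeholders
sage: [t/n^2.807 * 10^9 for n, t in timings] # normalised leading constants; flat means slope 0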

[Plot: Multiplication on Intel Core i7]
[Plot: PLE on Intel Core i7]

Finally, here’s the plot for Fast matrix decomposition in F2, which starts very small but has a rather large slope. That’s why I concluded that the performance stems from a very efficient base case. I should get in touch with Enrico and Anna about this.

M4RIE: support for finite fields up to degree 16 added

I committed support for finite fields up to degree 16 to M4RIE a few days ago. Furthermore, the dependency on Givaro for constructing finite fields was dropped.

Don’t get me wrong: Givaro is a fine library, much better than what I wrote for M4RIE. However, it is a C++ library while M4RIE is a C library, and the little finite field arithmetic functionality needed in M4RIE was not that hard to add natively. In the past, M4RIE relied on Givaro for running tests and benchmarks; the core library was always free of C++. However, as we plan to add support for high-degree polynomials over matrices over \mathbb{F}_2, we need the ability to create finite extensions of \mathbb{F}_2 on the fly in the core library.
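
As an aside, here is what “polynomials over matrices over \mathbb{F}_2” means, illustrated in Sage (my own sketch, not M4RIE code): a matrix over \mathbb{F}_4 = \mathbb{F}_2(a) splits into two \mathbb{F}_2 coefficient matrices with M = M_0 + a \cdot M_1, and the same works degree by degree for higher extensions.

sage: K.<a> = GF(4)
sage: M = random_matrix(K, 3, 3)
sage: M0 = matrix(GF(2), 3, 3, [c.polynomial()[0] for c in M.list()]) # constant terms
sage: M1 = matrix(GF(2), 3, 3, [c.polynomial()[1] for c in M.list()]) # coefficients of a
sage: M == M0.change_ring(K) + a*M1.change_ring(K)
True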


Linear Algebra for Gröbner Bases over GF(2): M4RI

Two days ago I wrote about LELA’s implementation of Gaussian elimination for Gröbner basis computations over \mathbb{F}_2. Yesterday, I implemented LELA’s algorithm (which is from the Faugère-Lachartre paper) in M4RI.

Linear Algebra for Gröbner Bases over GF(2): LELA

The Efficient Linear Algebra for Gröbner Basis Computations workshop in Kaiserslautern two weeks ago was a welcome opportunity to finally test out LELA, a library specifically written for linear algebra for Gröbner basis computations, including over GF(2). The library implements the “Faugère-Lachartre” algorithm (a similar trick, though less developed, appeared earlier in PolyBoRi) and uses M4RI for dense parts over GF(2). A rough sketch of the idea is given below.
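
Roughly, and as a sketch under my own simplifications rather than LELA’s actual code, the trick exploits that in an F4 matrix most rows come with an a-priori known pivot: pick one pivot row per leading column, split the matrix into blocks [A B; C D] along those rows and columns (A is then triangular with a nonzero diagonal, hence invertible), and only the Schur complement D - CA^{-1}B, which is small in practice, still needs generic elimination.

sage: def fl_sketch(M):
....:     pivot_of = {} # leading column -> index of its pivot row
....:     for i in range(M.nrows()):
....:         nz = M.nonzero_positions_in_row(i)
....:         if nz and nz[0] not in pivot_of:
....:             pivot_of[nz[0]] = i
....:     piv_cols = sorted(pivot_of)
....:     piv_rows = [pivot_of[j] for j in piv_cols]
....:     piv_set = set(piv_rows)
....:     oth_rows = [i for i in range(M.nrows()) if i not in piv_set]
....:     oth_cols = [j for j in range(M.ncols()) if j not in pivot_of]
....:     A = M.matrix_from_rows_and_columns(piv_rows, piv_cols) # triangular, invertible
....:     B = M.matrix_from_rows_and_columns(piv_rows, oth_cols)
....:     C = M.matrix_from_rows_and_columns(oth_rows, piv_cols)
....:     D = M.matrix_from_rows_and_columns(oth_rows, oth_cols)
....:     return D - C*A.inverse()*B # the only block that still needs elimination

sage: M = random_matrix(GF(2), 20, 30)
sage: S = fl_sketch(M)
sage: M.rank() == (M.nrows() - S.nrows()) + S.rank() # pivots found plus rank of the rest
True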

So, I ran my benchmark matrices through LELA and discovered a bug in the process; then Bradford returned the favour and discovered a bug in M4RI … Finally, below are the timings. The column PLE is the PLE algorithm as implemented in M4RI, M4RI is the M4RI algorithm as implemented in M4RI, GB is a very naive variant of the algorithm LELA uses, and LELA is, well, LELA.

problem         m       n       density  PLE (s)  M4RI (s)  GB (s)  LELA (s)
HFE 25          12307   13508   0.076    1.0      0.5       0.8     0.56
HFE 30          19907   29323   0.067    4.7      2.7       4.7     3.42
HFE 35          29969   55800   0.059    19.3     9.2       19.5    13.92
Mutant          26075   26407   0.184    5.7      3.9       2.1     12.07
n=24, m=26      37587   38483   0.038    20.6     21.0      19.3    7.72
n=24, m=26      37576   32288   0.040    18.6     28.4      17.0    4.09
SR(2,2,2,4) c   5640    14297   0.003    0.4      0.2       0.1     0.40
SR(2,2,2,4) c   13665   17394   0.013    2.1      3.0       2.0     1.78
SR(2,2,2,4) c   11606   16282   0.035    1.9      4.4       1.5     0.81
SR(2,2,2,4)     13067   17511   0.008    1.9      2.0       1.3     1.45
SR(2,2,2,4)     12058   16662   0.015    1.5      1.9       1.6     1.01
SR(2,2,2,4)     115834  118589  0.003    528.2    578.5     522.9   48.39

What this table means is that one can expect roughly an order of magnitude of speed-up (528.2s vs. 48.39s for the largest SR matrix) when using LELA, which is dedicated to these computations, instead of M4RI, which does not implement the specialised algorithm yet. For very small matrices M4RI/PLE sometimes win, but then not by a large margin. The only row where LELA doesn’t do so well is Mutant, which, by the way, is not an F4 matrix but comes from the MXL2 algorithm. It is possible that LELA’s sparse data structures are not that well equipped to deal with this rather dense matrix.

I am in the process of implementing the algorithm LELA uses in M4RI and will report updated timings here.