The M4RIE library for dense linear algebra over small fields with even characteristic

I finally uploaded a pre-print of the M4RIE paper to the arXiv:

Abstract: In this work, we present the M4RIE library which implements efficient algorithms for linear algebra with dense matrices over \mathbb{F}_{2^e} for 2 \leq e \leq 10. As the name of the library indicates, it makes heavy use of the M4RI library both directly (i.e., by calling it) and indirectly (i.e., by using its concepts). We provide an open-source GPLv2+ C library for efficient linear algebra over \mathbb{F}_{2^e} for e small. In this library we implemented an idea due to Bradshaw and Boothby which reduces matrix multiplication over \mathbb{F}_{p^k} to a series of matrix multiplications over \mathbb{F}_p. Furthermore, we propose a caching technique – Newton-John tables – to avoid finite field multiplications which is inspired by Kronrod’s method (“M4RM”) for matrix multiplication over \mathbb{F}_2. Using these two techniques we provide asymptotically fast triangular solving with matrices (TRSM) and PLE-based Gaussian elimination. As a result, we are able to significantly improve upon the state of the art in dense linear algebra over \mathbb{F}_{2^e} with 2 \leq e \leq 10.
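
To give a flavour of the Bradshaw–Boothby idea mentioned in the abstract, here is a small worked example of my own (not taken from the paper) for e=2. Represent a matrix A over \mathbb{F}_4 = \mathbb{F}_2[x]/(x^2+x+1) as A = A_0 + A_1x with A_0, A_1 matrices over \mathbb{F}_2. Then

AB = A_0B_0 + (A_0B_1 + A_1B_0)x + A_1B_1x^2 = (A_0B_0 + A_1B_1) + (A_0B_1 + A_1B_0 + A_1B_1)x

using x^2 = x + 1. With M = (A_0 + A_1)(B_0 + B_1) we have A_0B_1 + A_1B_0 + A_1B_1 = M + A_0B_0 in characteristic 2, so three \mathbb{F}_2 matrix multiplications (M, A_0B_0 and A_1B_1) suffice instead of the naive four.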

Efficient dense Gaussian elimination over the field with two elements

Finally, we finished our paper about Gaussian elimination in the M4RI library.

Abstract: In this work we describe an efficient implementation of a hierarchy of algorithms for Gaussian elimination upon dense matrices over the field with two elements (\mathbb{F}_2). We discuss both well-known and new algorithms as well as our implementations in the M4RI library, which has been adopted into Sage. The focus of our discussion is a block iterative algorithm for PLE decomposition which is inspired by the M4RI algorithm. The implementation presented in this work provides considerable performance gains in practice when compared to the previously fastest implementation. We provide performance figures on x86_64 CPUs to demonstrate the alacrity of our approach.

The sources of this document are available on bitbucket. But I also compiled a PDF.

Update: arXiv link.

Challenge matrices

Now that we have a decent PNG reader/writer in M4RI, it’s much easier to get some challenge matrices out of the library. Below, I list and link a few such matrices as they appear during Gröbner basis computations.

file                                     matrix dimensions   density   PLE (s)   M4RI (s)   GB (s)
HFE 25 matrix 5 (5.1M)                   12307 x 13508       0.07600      1.03       0.59     0.81
HFE 30 matrix 5 (16M)                    19907 x 29323       0.06731      4.79       2.70     4.76
HFE 35 matrix 5 (37M)                    29969 x 55800       0.05949     19.33       9.28    19.51
Mutant matrix (39M)                      26075 x 26407       0.18497      5.71       3.98     2.10
random n=24, m=26 matrix 3 (30M)         37587 x 38483       0.03832     20.69      21.08    19.36
random n=24, m=26 matrix 4 (24M)         37576 x 32288       0.04073     18.65      28.44    17.05
SR(2,2,2,4) compressed, matrix 2 (328K)   5640 x 14297       0.00333      0.40       0.29     0.18
SR(2,2,2,4) compressed, matrix 4 (2.4M)  13665 x 17394       0.01376      2.18       3.04     2.04
SR(2,2,2,4) compressed, matrix 5 (2.8M)  11606 x 16282       0.03532      1.94       4.46     1.59
SR(2,2,2,4) matrix 6 (1.4M)              13067 x 17511       0.00892      1.90       2.09     1.38
SR(2,2,2,4) matrix 7 (1.7M)              12058 x 16662       0.01536      1.53       1.93     1.66
SR(2,2,2,4) matrix 9 (36M)              115834 x 118589      0.00376    528.21     578.54   522.98

The first three rows are from GB computations for the hidden field equations cryptosystem (those matrices were provided by Michael Brickenstein). The “mutant” row is a matrix as it appears during a run of the MXL2 algorithm on a random system (I believe); it was contributed by Wael Said. The rows “random n=24, m=26” are matrices as they appear during a GB computation with PolyBoRi for a random system of equations in 24 variables and 26 equations. The remaining rows are matrices from PolyBoRi computations on small scale AES instances. Those rows which have “compressed” in their description correspond to systems where “linear variables” were eliminated before running the Gröbner basis algorithm.

The last three columns give running times (quite rough ones!) in seconds for computing an echelon form (not reduced) using PLE decomposition, the M4RI algorithm and a first implementation of the TRSM for trivial pivots trick, respectively. As you can see, it’s currently not straightforward to pick a strategy for eliminating matrices appearing during Gröbner basis computations: the best algorithm varies between problems and the differences can be dramatic.

PNG Images FTW

I had the pleasure of attending a meeting of the LinBox developers this week in Raleigh, NC. One of the questions that came up was how to exchange and store matrices, for example, matrices for whose “class” one wants to find or implement dedicated algorithms. To give a more concrete example: matrices appearing during Gröbner basis computations have a special structure which allows them to be reduced faster than random matrices (cf. this paper or this code). Hence, we’d like to have some sort of format for storing such matrices so that we can work on these dedicated algorithms. Of course, such a format should be flexible, human readable (preferably human writable as well) and reasonably simple. I’m not sure how the LinBox crew would feel about me blogging about the details of their meeting, hence I’ll only use this opportunity to plug my own proposal (at least for reasonably dense matrices): PNG images.

The PNG file format is definitely readable, since there exist many viewers for it (you are probably using one right now). The format can also be edited thanks to GIMP and friends; that is, we can draw matrices! Furthermore, PNG provides lossless compression, i.e., it compresses the actual image data using DEFLATE, the same algorithm used by GZIP. Finally, it is pretty flexible: it supports between one (grayscale) and four (RGB + alpha) channels and various bit depths per channel (1, 2, 4, 8, 16). Hence, it can store between 1 and 64 bits per pixel. Thus, for finite fields up to cardinality 2^{64}, we can simply store each entry as a pixel. If we pick our colour assignment right, the pictures even make sense (using the convention that darker is larger as an integer, for example).

Of course, the whole format is fundamentally biased towards dense matrices. In fact, we’ve been using 1-bit PNG images as data storage for M4RI matrices for a while now: both Sage and PolyBoRi use the GD library to write M4RI matrices to disk. However, using GD has some shortcomings, such as high memory requirements (the whole image is constructed in RAM before being written to disk). This week, I implemented reading/writing of 1-bit PNG images in the M4RI library using libpng directly. This saves a lot of memory and allows for some other cool features, such as control over the GZIP compression level, custom comments, “unknown chunks” as attachments, etc.
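
As an illustration of the libpng route, here is a minimal sketch of writing a dense \mathbb{F}_2 matrix as a 1-bit grayscale PNG, one pixel per entry. This is not the actual M4RI code: mat_entry is a hypothetical accessor standing in for M4RI’s packed rows, and error handling (setjmp(png_jmpbuf(png)) etc.) is omitted for brevity.

    #include <png.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* hypothetical accessor: entry (r, c) of the matrix, 0 or 1 */
    extern int mat_entry(const void *M, long r, long c);

    int write_matrix_png(const char *fn, const void *M, long nrows, long ncols) {
      FILE *fp = fopen(fn, "wb");
      if (!fp) return -1;

      png_structp png = png_create_write_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
      png_infop info = png_create_info_struct(png);

      png_init_io(png, fp);
      /* one pixel per matrix entry: 1-bit grayscale, ncols x nrows;
       * note PNG shows a 1 bit as white, call png_set_invert_mono()
       * if set entries should be black instead */
      png_set_IHDR(png, info, ncols, nrows, 1, PNG_COLOR_TYPE_GRAY,
                   PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT,
                   PNG_FILTER_TYPE_DEFAULT);
      png_set_compression_level(png, 9);  /* the GZIP/DEFLATE level discussed above */
      png_write_info(png, info);

      /* pack each matrix row into bytes, most significant bit first */
      png_bytep row = malloc((ncols + 7) / 8);
      for (long r = 0; r < nrows; r++) {
        memset(row, 0, (ncols + 7) / 8);
        for (long c = 0; c < ncols; c++)
          if (mat_entry(M, r, c))
            row[c / 8] |= (png_byte)(0x80 >> (c % 8));
        png_write_row(png, row);
      }
      png_write_end(png, info);

      free(row);
      png_destroy_write_struct(&png, &info);
      fclose(fp);
      return 0;
    }

Note that only a single packed row buffer lives in memory at any time, which is exactly what avoids GD’s construct-the-whole-image-in-RAM problem.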

I’ve also conducted some experiments to get an impression of how well this format works in terms of storage space and loading time:

file size   matrix dimensions   density   loading time on a 2.6GHz i7
2.8M         11606 x 16282      0.03532    0.190s
5.1M         12307 x 13508      0.07600    0.205s
16M          19907 x 29323      0.06731    0.619s
30M          37587 x 38483      0.03832    1.565s
36M         115834 x 118589     0.00376   12.132s
37M          29969 x 55800      0.05949    1.685s
39M          26075 x 26407      0.18497    0.847s

To me, the above table – which lists some matrices from Gröbner basis computations over GF(2) – suggests that the format is reasonably efficient. However, I don’t really have anything to compare it with, so my sense might be off. Still, compared to some ASCII-based formats out there, it’s pretty competitive, as far as I can tell. Note, however, that the above file sizes were produced using GZIP compression level 9, which takes pretty long to write. Using a lower level (such as the default) produces slightly larger files (about 10%–20%, depending on the structure).

Finally, wouldn’t it be very awesome if we could use these pictures when debugging our code? So who speaks GDB’s macro language?

GAP for dense linear algebra over GF(2^e)

For some strange reason it hardly ever occurs to me to benchmark GAP when it comes to linear algebra over GF(2^e). Turns out that’s a really bad idea, since GAP seems to be the best open-source implementation for such things. For example, computing the reduced row echelon form of a 4,000 x 4,000 random matrix over \mathbb{F}_{16} takes 16.94s in GAP 4.4.12, but 3267s in Sage and 1005s in NTL 5.4.2. That is, GAP is much, much faster than other established open-source projects out there.

I hope libgap becomes a reality at some point in the future.

PS: For comparison, Magma does the same task in 4.67s and M4RIE in 0.71s.

Asymptotically fast Gaussian elimination for M4RIE

I guess one perk of being on vacation is that one can get more work done. Hence, I finished implementing PLE decomposition (formerly known as PLS decomposition) and with it asymptotically fast Gaussian elimination. The implementation uses two matrix representations: mzd_slice_t and mzed_t. The former is optimised for implementing Karatsuba multiplication (cf. here) and the latter is optimised for using Travolta tables (cf. here). That is, multiplication is based on Karatsuba, while the PLE base case is based on Travolta tables. The same applies to TRSM, where the base case is also implemented using Travolta tables.
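
For illustration, the recursive structure looks roughly as follows. This is my own sketch with hypothetical mat_* helpers and types, not M4RIE’s actual API, and the permutation and column book-keeping around the second recursive call is glossed over.

    /* Sketch of block-recursive PLE decomposition: A is reduced in place so
     * that A = P * L * E with P a row permutation, L unit lower triangular
     * and E an echelon form; returns the rank.  All names are hypothetical.
     * Windows use the convention (start_row, start_col, end_row, end_col). */
    long ple(mat_t *A, perm_t *P, long cutoff) {
      if (A->ncols <= cutoff)
        return ple_base_case(A, P);     /* Travolta-table base case */

      long n2 = A->ncols / 2;
      mat_t *A0 = mat_window(A, 0, 0,  A->nrows, n2);        /* left columns  */
      mat_t *A1 = mat_window(A, 0, n2, A->nrows, A->ncols);  /* right columns */

      long r = ple(A0, P, cutoff);      /* recurse on the left half */
      mat_apply_row_permutation(A1, P); /* replay the row swaps on the right */

      mat_t *L00 = mat_window(A0, 0, 0, r, r);                  /* unit lower */
      mat_t *L10 = mat_window(A0, r, 0, A0->nrows, r);
      mat_t *A10 = mat_window(A1, 0, 0, r, A1->ncols);          /* top right    */
      mat_t *A11 = mat_window(A1, r, 0, A1->nrows, A1->ncols);  /* bottom right */

      mat_trsm_lower_left(L00, A10);    /* A10 <- L00^{-1} * A10 (Karatsuba TRSM) */
      mat_addmul(A11, L10, A10);        /* Schur complement: A11 += L10 * A10     */

      r += ple(A11, P, cutoff);         /* recurse on the Schur complement;
                                           permutation book-keeping omitted */
      return r;
    }

The point being: outside the base case, all work happens in TRSM and matrix multiplication, which is exactly where the Karatsuba-based mzd_slice_t routines pay off.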

There is still a lot to be done:

  • Both the TRSM and PLE base cases use only one Travolta table, while Gaussian elimination and multiplication use six and eight in parallel, respectively.
  • There is way too much copying going on. For example, I was lazy and implemented upper-left TRSM for matrices which do not start at offset 0 within a machine word by simply copying the whole matrix out. Hence, we’re wasting memory where we shouldn’t.
  • PLE isn’t cache efficient yet, and I assume the code isn’t very good for sparse-ish matrices (cf. the journey in M4RI improving this).
Still, the results are quite nice already. Below, the left-hand side plots the wall time of computing the reduced row echelon form of m \times (m + 1000) random matrices over \mathbb{F}_{2^e}. The right-hand side expresses efficiency by dividing the running time by 2mnr^{\omega-2}, i.e., the number of operations (fingers crossed I didn’t screw up the constant). The fact that we’re not there yet is clearly expressed by the poor performance of the asymptotically fast code (“PLE”) for small dimensions (< 3,000).
Finally, it should be mentioned that the green plot (“Travolta”) is already considerably faster than Magma, which – as far as I know – is the fastest other implementation of this kind of operation over these fields. Below, Travolta-based elimination is compared with Magma’s implementation, where blueness b expresses a speed-up factor of 2^b (redness the other way around). The code is on bitbucket.

Dense matrices mod n via FFLAS/FFPACK

During last week’s Sage Days 32 I worked on finishing a patch by Clément Pernet and Burcin Eröcal which switches Sage’s native matrices mod n (for n < 2^{23}) to FFLAS/FFPACK. The purpose of this exercise was to make operations with dense matrices mod n both faster and more memory efficient (currently, Sage does a lot of copying around when calling any LinBox function). Well, we almost succeeded.