Sage Development with Emacs

A while back I described my (then-current) setup for developing C code with Emacs. The other programming language I spend a lot of time with is Python, specifically Sage’s Python. Here’s my Emacs setup for writing Sage code. For starters, it makes sense to highlight indentation in Python.

(use-package highlight-indentation
  :ensure t)
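
Note that highlight-indentation only defines the minor mode; it still needs to be switched on in the relevant buffers. A minimal sketch, assuming Sage files open in python-mode:

;; sketch: turn on indentation highlighting in Python buffers
(add-hook 'python-mode-hook #'highlight-indentation-mode)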

I use anaconda-mode for auto-completion, documentation lookup and code navigation; it runs jedi for me. In particular, it offers:

  • M-. Go to the definition of the thing at point.
  • M-, Switch back to the buffer of the most recent marker.
  • M-? Show documentation for the context at point.
  • M-r Show usages of the thing at point.

(use-package anaconda-mode
  :ensure t
  :diminish anaconda-mode
  :config (bind-key "M-," #'anaconda-nav-pop-marker anaconda-mode-map))
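
Likewise, anaconda-mode still has to be enabled in the buffers where it should run. A minimal sketch, again assuming Sage code is edited in python-mode (anaconda-eldoc-mode, where your version of the package provides it, additionally shows call signatures in the echo area):

;; sketch: enable anaconda-mode for completion, navigation and docs,
;; plus eldoc-style signature hints, in Python buffers
(add-hook 'python-mode-hook #'anaconda-mode)
(add-hook 'python-mode-hook #'anaconda-eldoc-mode)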

Continue reading “Sage Development with Emacs”

On the concrete hardness of Learning with Errors

Together with Rachel Player and Sam Scott (both also from the Information Security Group at Royal Holloway, University of London), I finally managed to put out our survey on solving the Learning with Errors problem. Here’s the abstract:

The Learning with Errors (LWE) problem has become a central building block of modern cryptographic constructions. This work collects and presents hardness results for concrete instances of LWE. In particular, we discuss algorithms proposed in the literature and give the expected resources required to run them. We consider both generic instances of LWE and small secret variants. Since for several methods of solving LWE we require a lattice reduction step, we also review lattice reduction algorithms and propose a refined model for estimating their running times. We also give concrete estimates for various families of LWE instances, provide a Sage module for computing these estimates and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.

Continue reading “On the concrete hardness of Learning with Errors”

Looking Back and Forward for Open-Source Mathematics Software (2014)

When a year ends, people make lists. I can only guess that several people are currently busy writing “The 5 most revised papers on eprint” and “The 8 best IACR flagship conference rump session presentations of 2014”. Since all the good lists are taken, my list has to be a little bit more personal. Alas, here is my list of stuff that happened in open-source computational mathematics in 2014 around me. That is, below I list which developments happened in 2014 and try to provide an outlook for 2015 (so that I can come back in a year to notice that nothing played out as planned).

If you are interested in any of the projects below, feel invited to get involved. Also, if you are a student and interested in working on one of the (bigger) projects listed below over the summer, get in touch: we could try to turn it into a Google Summer of Code 2015 project.

Continue reading “Looking Back and Forward for Open-Source Mathematics Software (2014)”

Improved Parameters and an Implementation of Graded Encoding Schemes from Ideal Lattices

Our paper (with Catalin Cocis, Fabien Laguillaumie and Adeline Langlois) on picking parameters and implementing GGHLite just hit eprint. Here’s the abstract:

We discuss how to set parameters for GGH-like graded encoding schemes approximating cryptographic multilinear maps from ideal lattices and propose a strategy which reduces parameter sizes for concrete instances. Secondly, we discuss a first software implementation of a graded encoding scheme based on GGHLite, an improved variant of Garg, Gentry and Halevi’s construction (GGH) due to Langlois, Stehlé and Steinfeld. Thirdly, we provide an implementation of non-interactive N-partite Diffie-Hellman key exchange. We discuss our implementation strategies and show that our implementation outperforms previous work.

Continue reading “Improved Parameters and an Implementation of Graded Encoding Schemes from Ideal Lattices”

C Development with Emacs

Recently I spent some time customising my Emacs config again. Playing around with different ways of improving your productivity by adjusting Emacs is a great way of losing any and all productivity you might have had. It is such a fun way of wasting your time that there’s even a little scene around just that activity. This can take quite elaborate forms, with people posting their Emacs init.el configuration files in literate programming style based on org-mode’s org-babel. This is more useful than it might sound, e.g., I stole a lot from Sacha’s config.

I also recently spent a bit more time writing C99 code making many calls to FLINT – Fast Library for Number Theory. The FLINT coding style requires that pretty much every function has its own file, which I am sure is great for something. However, for me it means that I have to open a lot of files all over the FLINT library when I care about implementation details and not just definitions. Also, a vanilla Emacs setup won’t display function signatures when you write a call to one of these functions, or give you auto-completion for all functions starting with, say, fmpz_poly_set_.

Alas, here is my current setup, which rectifies most of my grievances.
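
The full post has the details; to give a flavour of the kind of setup that addresses this, here is an illustrative sketch (not necessarily the exact configuration described there) pairing company-mode with irony-mode, which uses libclang to provide completion and signature information for C code:

(use-package company
  :ensure t
  :config (add-hook 'c-mode-hook #'company-mode))

(use-package irony
  :ensure t
  ;; irony needs a one-off M-x irony-install-server and picks up include
  ;; paths from a compile_commands.json or .clang_complete file
  :config (add-hook 'c-mode-hook #'irony-mode))

(use-package company-irony
  :ensure t
  :config (add-to-list 'company-backends 'company-irony))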

Continue reading “C Development with Emacs”

meataxe64

Meataxe64 is a large software development project to produce programs for working at high performance with large matrices over finite fields.

At the lowest level, the aim is to work modulo primes (only), using grease (much like “four Russians”) to reduce the amount of work, to use vectorized code in x86 assembler (SSE/AVX) to do the basic operations and to have short rows and few columns so that matrices fit suitably into the various levels of cache.  The objective is to run as fast as possible with as little use of real-memory bandwidth as possible.

At a middle level, the aim is to use linear functions to work with extension fields, and to chop the matrices up so that the lowest level can operate.

At a higher level, the aim is to make effective use of a multi-core environment, building on the advantage that the cache-friendly lower level provides to ensure that many cores can be used effectively.  The thread-farm looks after the messy signals, locks and thread handling.

It is hoped that a layer will soon be added to take a matrix that fits on disk but not in memory, extending the possible scale of operations further.

Finally, I dream that a fault-tolerant distributed system can be built on top of this to handle matrices of gargantuan proportions, but this lies some considerable way into the future.

Go read the development blog; I certainly learned a lot from Richard Parker whenever we talked.