We’ll have an fplll coding sprint, aka “FPLLL Days”, in July. This time around, we plan a slightly modified format compared to previous instances: in order to encourage new developers to get involved, we plan to hold a 2-day tutorial session (shorter or longer depending on participants/interest) before the start of FPLLL Days proper.
fplll 5.1 and fpylll 0.2.4dev
New versions of fplll and fpylll were released today. I’ve reproduced the release notes below for greater visibility. The biggest user-visible changes in fplll are probably that
- CVP enumeration is no longer experimental,
- support for external enumeration libraries (go write that GPU implementation of enumeration) was added, and
- support for OS X was greatly improved.
On the fpylll side, the biggest user-visible changes are probably various API updates and a much nicer strategy/framework for gathering statistics about BKZ.
The next version of fplll will contain support for LLL reduction on Gram matrices.
On dual lattice attacks against small-secret LWE and parameter choices in HElib and SEAL
My paper on solving small, sparse secret instances is now on ePrint. Here’s the abstract:
We present novel variants of the dual-lattice attack against LWE in the presence of an unusually short secret. These variants are informed by recent progress in BKW-style algorithms for solving LWE. Applying them to parameter sets suggested by the homomorphic encryption libraries HElib and SEAL yields revised security estimates. Our techniques scale the exponent of the dual-lattice attack by a factor of when , when the secret has constant Hamming weight and where is the maximum depth of supported circuits. They also allow halving the dimension of the lattice under consideration at a multiplicative cost of operations. Moreover, our techniques yield revised concrete security estimates. For example, both libraries promise 80 bits of security for LWE instances with and , while the techniques described in this work lead to estimated costs of 68 bits (SEAL) and 62 bits (HElib).
If you want to see what its effect would be on your favourite small, sparse secret instance of LWE, the code for estimating the running time is included in our LWE estimator. The integration into the main function estimate_lwe is imperfect, though. To get you started, here’s the code used to produce the estimates for the rolling example in the paper.

Our instance has a ternary secret of Hamming weight 64. We always use sieving as the SVP oracle in BKZ:
sage: n, alpha, q = fhe_params(n=2048, L=2)
sage: kwds = {"optimisation_target": "sieve", "h": 64, "secret_bounds": (-1, 1)}

We establish a base line:
sage: print cost_str(sis(n, alpha, q, optimisation_target="sieve"))

We run the scaled normal form approach from Section 4 and enable amortising costs from Section 3 by setting use_lll=True:

sage: print cost_str(sis_small_secret_mod_switch(n, alpha, q, use_lll=True, **kwds))

We run the approach from Section 5 for sparse secrets. Setting postprocess=True enables the search for solutions with very low Hamming weight (page 17):

sage: print cost_str(drop_and_solve(sis, n, alpha, q, postprocess=True, **kwds))

We combine everything:
sage: f = sis_small_secret_mod_switch
sage: print cost_str(drop_and_solve(f, n, alpha, q, postprocess=True, **kwds))
fplll 5.0
Fplll 4.0.4 was released in 2013. Fplll 5.0.0, whose development started in autumn 2014, came out today. About 600 commits by 13 contributors went into this release. Overall, fplll 5.0 is quite a significant improvement over the 4.x series.
fpylll
fpylll is a Python library for performing lattice reduction on lattices over the integers. It is based on fplll, a C++ library which describes itself as follows:
fplll contains several algorithms on lattices that rely on floating-point computations. This includes implementations of the floating-point LLL reduction algorithm, offering different speed/guarantees ratios. It contains a ‘wrapper’ choosing the estimated best sequence of variants in order to provide a guaranteed output as fast as possible. In the case of the wrapper, the succession of variants is oblivious to the user. It also includes a rigorous floating-point implementation of the Kannan-Fincke-Pohst algorithm that finds a shortest non-zero lattice vector, and the BKZ reduction algorithm.
fplll is distributed under the GNU Lesser General Public License (either version 2.1 of the License, or, at your option, any later version) as published by the Free Software Foundation.
In short, fplll is your best bet at a publicly available fast lattice-reduction library, and fpylll provides a convenient interface to it from Python, for experimentation, development and extension.
For the rest of this post, I’ll give you a tour of the features currently implemented in fpylll and point out some areas where we could do with some help.
Cysignals
If you’ve written a fair amount of Cython code in your time, chances are that you got frustrated by
- buggy C/C++ code crashing your Python shell, and
- the fact that you cannot interrupt C/C++ functions.
For example, the following Cython code cannot be interrupted:

while True:
    pass
On the other hand, if you have written Cython code in Sage, then you might have come across its nifty sig_on(), sig_off() and sig_check() macros, which prevent crashes and allow your calls to C/C++ code to be interrupted. Sage has had signal handling (catching crashes, handling interrupts) forever (see below).
Cysignals is Sage’s signal handling reborn as a standalone module, i.e. it allows wrapping C/C++ code blocks in sig_on() and sig_off() pairs which catch signals such as SIGSEGV. Using it is straightforward. Simply add

include "cysignals/signals.pxi"

to each Cython module and then wrap long-running computations in sig_on()/sig_off() pairs, or check for signals with sig_check(). See the cysignals documentation for details.
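As a rough sketch of how this looks in practice — assuming a Cython module compiled against cysignals; the function names here are hypothetical stand-ins for any long-running computation:

```cython
include "cysignals/signals.pxi"

cdef extern from "my_library.h":
    # hypothetical long-running C function
    double heavy_computation(unsigned long n)

def run(unsigned long n):
    """Call into C code; Ctrl-C and crashes are now caught."""
    cdef double result
    sig_on()                      # install the signal handler
    result = heavy_computation(n) # SIGSEGV/SIGINT here raise a Python error
    sig_off()                     # restore the previous handler
    return result

def busy_loop(unsigned long n):
    """A pure-Cython loop made interruptible via sig_check()."""
    cdef unsigned long i
    for i in range(n):
        sig_check()  # raises KeyboardInterrupt if SIGINT was received
```

With sig_on()/sig_off(), a segfault inside the C call becomes a catchable Python exception instead of killing the interpreter; sig_check() is the lighter-weight choice for loops that stay in Cython.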
We have a first pre-release out. Pre-release because we haven’t switched Sage to the new, standalone code yet. Once this is done, we’ll publish version 1.0, since some version of this code has been in use on many different systems for at least a decade.
First CoDiMa Training School in Computational Discrete Mathematics
This winter school sounds excellent:
We have just finalised the date and location for the First CoDiMa Training School in Computational Discrete Mathematics, which will take place at the University of Manchester on November 16th-20th, 2015. This school is intended for postgraduate students and researchers from UK institutions. It will start with a 2-day hands-on Software Carpentry workshop covering basic concepts and tools, including working with the command line, version control and task automation, continue with introductions to the GAP and SageMath systems, and conclude with a series of lectures and exercise classes on a selection of topics in computational discrete mathematics.
The School’s website and the registration details will be announced shortly. In the meantime, if you’re interested in attending, please keep the dates in your diary and check for updates on our website and on our Twitter @codima_project, or tell us about your interest by writing to contact at codima.ac.uk so that we can notify you when registration begins.