We’ll have an fplll coding sprint aka “FPLLL Days” in July. This time around, we plan a slightly modified format compared to previous instances: to encourage new developers to get involved, there will be a 2-day tutorial session (shorter or longer depending on participants/interest) before the start of FPLLL Days proper.
On dual lattice attacks against small-secret LWE and parameter choices in HElib and SEAL
My paper on solving small, sparse secret instances is now on ePrint. Here’s the abstract:
We present novel variants of the dual-lattice attack against LWE in the presence of an unusually short secret. These variants are informed by recent progress in BKW-style algorithms for solving LWE. Applying them to parameter sets suggested by the homomorphic encryption libraries HElib and SEAL yields revised security estimates. Our techniques scale the exponent of the dual-lattice attack by a factor of 2L/(2L+1) when log q = Θ(L log n), when the secret has constant Hamming weight h and where L is the maximum depth of supported circuits. They also allow halving the dimension of the lattice under consideration at a multiplicative cost of 2^h operations. Moreover, our techniques yield revised concrete security estimates. For example, both libraries promise 80 bits of security for LWE instances with n = 1024 and q ≈ 2^47, while the techniques described in this work lead to estimated costs of 68 bits (SEAL) and 62 bits (HElib).
If you want to see what its effect would be on your favourite small, sparse secret instance of LWE, the code for estimating the running time is included in our LWE estimator. The integration into the main function estimate_lwe is imperfect, though. To get you started, here’s the code used to produce the estimates for the rolling example in the paper.

Our instance has a ternary secret of Hamming weight 64. We always use sieving as the SVP oracle in BKZ:
sage: n, alpha, q = fhe_params(n=2048, L=2)
sage: kwds = {"optimisation_target": "sieve", "h": 64, "secret_bounds": (-1, 1)}

We establish a base line:
sage: print cost_str(sis(n, alpha, q, optimisation_target="sieve"))

We run the scaled normal form approach from Section 4 and enable amortising costs from Section 3 by setting use_lll=True:

sage: print cost_str(sis_small_secret_mod_switch(n, alpha, q, use_lll=True, **kwds))

We run the approach from Section 5 for sparse secrets. Setting postprocess=True enables the search for solutions with very low Hamming weight (page 17):

sage: print cost_str(drop_and_solve(sis, n, alpha, q, postprocess=True, **kwds))

We combine everything:
sage: f = sis_small_secret_mod_switch
sage: print cost_str(drop_and_solve(f, n, alpha, q, postprocess=True, **kwds))
fplll 5.0
Fplll 4.0.4 was released in 2013. Fplll 5.0.0, whose development started in autumn 2014, came out today. About 600 commits by 13 contributors went into this release. Overall, fplll 5.0 is quite a significant improvement over the 4.x series.
GSW13: 3rd Generation Homomorphic Encryption from Learning with Errors
This week our reading group studied Homomorphic Encryption from Learning with Errors: Conceptually-Simpler, Asymptotically-Faster, Attribute-Based by Craig Gentry, Amit Sahai and Brent Waters: a 3rd generation fully homomorphic encryption scheme.
The paper is partly motivated by the observation that multiplication in previous schemes was complicated, or at least not natural. Let’s take the BGV scheme, where ciphertexts are simply LWE samples (a, b) with b = ⟨a, s⟩ + 2e + m for a ∈ Z_q^n, with m being the message bit and e being some “small” error. Let’s write this as c = (b, −a), because it simplifies some notation down the line: decryption is then ⟨c, (1, s)⟩ = m + 2e. In this notation, multiplication can be accomplished by the tensor product c ⊗ c′, because ⟨c ⊗ c′, (1, s) ⊗ (1, s)⟩ = ⟨c, (1, s)⟩ · ⟨c′, (1, s)⟩. However, we now need to map the result, which decrypts under (1, s) ⊗ (1, s), back to something that decrypts under (1, s) using “relinearisation”; this is the “unnatural” step.
However, this is only unnatural in this particular representation. To see this, let’s rewrite c as a linear multivariate polynomial f_c(x_1, …, x_n) = b − Σ_i a_i · x_i. This polynomial evaluates to m + 2e on the secret s. Note that evaluating a polynomial on s is the same as reducing it modulo the set of polynomials G = {x_1 − s_1, …, x_n − s_n}.
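To make the polynomial view concrete, here is a toy sketch in plain Python (with made-up, utterly insecure parameters chosen only for illustration): a ciphertext is the linear polynomial f_c(x) = b − ⟨a, x⟩, it evaluates to m + 2e on the secret, and multiplying ciphertexts multiplies these evaluations, so the product decrypts to m · m′ modulo 2.

```python
import random

# Toy, insecure parameters (assumptions for illustration only).
q, n = 2**13, 8

random.seed(0)
s = [random.randrange(2) for _ in range(n)]      # secret vector

def encrypt(m):
    """Return an LWE-style sample (a, b) with b = <a, s> + 2e + m mod q."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(4)                      # "small" error
    b = (sum(ai * si for ai, si in zip(a, s)) + 2 * e + m) % q
    return a, b

def evaluate(a, b):
    """Evaluate the linear polynomial f_c(x) = b - <a, x> at x = s."""
    return (b - sum(ai * si for ai, si in zip(a, s))) % q

c0 = encrypt(1)
c1 = encrypt(1)

# Each ciphertext evaluates to m + 2e, i.e. to m modulo 2.
assert evaluate(*c0) % 2 == 1
assert evaluate(*c1) % 2 == 1

# Multiplying the evaluations multiplies the messages modulo 2
# (relinearisation would be needed to keep the result linear in x).
assert (evaluate(*c0) * evaluate(*c1)) % 2 == 1
```

The errors are kept small enough here that no reduction modulo q interferes, which is exactly the correctness condition real schemes must maintain under multiplication.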
fplll days 1
We’ll have a first fplll coding sprint aka “fplll days” from June 20 to June 24 at ENS Lyon.
The idea of fplll days is inspired by and might follow the format of Sage Days which are semi-regularly organised by the SageMath community. The idea is simply to get a bunch of motivated developers in a room to work on code. Judging from experience in the SageMath community, lots of interesting projects get started and completed.
We intend to combine the coding sprint with the lattice meeting (to be confirmed), so we’d be looking at 3 days of coding plus 2 days of regular lattice meeting. We might organise one talk per coding day, to give people a reason to gather at a given time of the day, but the focus would be very much on working on fplll together.
If you’d like to attend, please send an email to one of the maintainers, e.g. me.
Cysignals
If you’ve written a fair amount of Cython code in your time, chances are that you got frustrated by
 buggy C/C++ code crashing your Python shell and
 the fact that you cannot interrupt C/C++ functions.
For example, the following Cython code cannot be interrupted:
while True: pass
On the other hand, if you have written Cython code in Sage, then you might have come across its nifty sig_on(), sig_off() and sig_check() macros, which prevent crashes and allow your calls to C/C++ code to be interrupted. Sage has had signal handling (for crashes and interrupts) more or less forever (see below).
Cysignals is Sage’s signal handling reborn as a standalone module, i.e. it allows wrapping C/C++ code blocks in sig_on() and sig_off() pairs which catch signals such as SIGSEGV. Using it is straightforward. Simply add

include "cysignals/signals.pxi"

to each Cython module and then wrap long-running computations in sig_on()/sig_off() pairs or check for signals with sig_check(). See the cysignals documentation for details.
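As a minimal sketch, a Cython module using cysignals might look like this (the function name and loop are made up for illustration):

```cython
include "cysignals/signals.pxi"   # as described above

def compute(long n):
    # A long-running C-level loop; without sig_on()/sig_off() it
    # could not be interrupted with Ctrl-C from the Python shell.
    cdef long i, total = 0
    sig_on()
    for i in range(n):
        total += i
    sig_off()
    return total
```

Here sig_on() installs cysignals’ handler and sig_off() removes it again; a Ctrl-C arriving inside the pair is translated into a Python KeyboardInterrupt instead of being ignored, and a crash such as a SIGSEGV raises an exception instead of taking down the shell.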
We have a first prerelease out. It is a prerelease because we haven’t switched Sage to the new, standalone code yet. Once this is done, we’ll publish version 1.0, since some version of this code has been in use on many different systems for at least a decade.
First CoDiMa Training School in Computational Discrete Mathematics
This winter school sounds excellent:
We have just finalised the date and location for the First CoDiMa Training School in Computational Discrete Mathematics, which will take place at the University of Manchester on November 16th–20th, 2015. This school is intended for postgraduate students and researchers from UK institutions. It will start with a 2-day hands-on Software Carpentry workshop covering basic concepts and tools, including working with the command line, version control and task automation, continue with introductions to the GAP and SageMath systems, and conclude with a series of lectures and exercise classes on a selection of topics in computational discrete mathematics.
The School’s website and registration details will be announced shortly. In the meantime, if you’re interested in attending, please keep the dates in your diary and check for updates on our website and on our Twitter @codima_project, or tell us about your interest by writing to contact at codima.ac.uk so that we can notify you when registration begins.