## Adventures in Cython Templating

Fpylll makes heavy use of Cython to expose Fplll’s functionality to Python. Fplll, in turn, makes use of C++ templates. For example, double, long double, dd_real (http://crd.lbl.gov/~dhbailey/mpdist/) and mpfr_t (http://www.mpfr.org/) are supported as floating-point types. While Cython supports C++ templates, we still have to generate code for all possible instantiations of the C++ templates for Python to use/call. The way I implemented these bindings is showing its limitations. For example, here is what attribute access to the dimension of the Gram-Schmidt object looks like:

    @property
    def d(self):
        """
        Number of rows of B (dimension of the lattice).

        >>> from fpylll import IntegerMatrix, GSO, set_precision
        >>> A = IntegerMatrix(11, 11)
        >>> M = GSO.Mat(A)
        >>> M.d
        11

        """
        if self._type == gso_mpz_d:
            return self._core.mpz_d.d
        IF HAVE_LONG_DOUBLE:
            if self._type == gso_mpz_ld:
                return self._core.mpz_ld.d
        if self._type == gso_mpz_dpe:
            return self._core.mpz_dpe.d
        IF HAVE_QD:
            if self._type == gso_mpz_dd:
                return self._core.mpz_dd.d
            if self._type == gso_mpz_qd:
                return self._core.mpz_qd.d
        if self._type == gso_mpz_mpfr:
            return self._core.mpz_mpfr.d

        if self._type == gso_long_d:
            return self._core.long_d.d
        IF HAVE_LONG_DOUBLE:
            if self._type == gso_long_ld:
                return self._core.long_ld.d
        if self._type == gso_long_dpe:
            return self._core.long_dpe.d
        IF HAVE_QD:
            if self._type == gso_long_dd:
                return self._core.long_dd.d
            if self._type == gso_long_qd:
                return self._core.long_qd.d
        if self._type == gso_long_mpfr:
            return self._core.long_mpfr.d

        raise RuntimeError("MatGSO object '%s' has no core."%self)



In the code above, uppercase IF and ELSE are compile-time conditionals, lowercase if and else are run-time checks. If we wanted to add Z_NR<double> to the list of supported integer types (yep, Fplll supports that), then the above Python approximation of a switch/case statement would grow by 50%. The same would have to be repeated for every member function or attribute. There must be a better way.
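One escape hatch, sketched below, is to stop writing the dispatch chain by hand and generate it instead: keep a single table of type tags (plus their compile-time guards) and emit the if-chain from it, once per attribute. The type tags mirror the snippet above; the generator itself is hypothetical and only meant to illustrate the idea:

```python
# Hypothetical sketch: generate the per-type dispatch chain from a single
# table of (type tag, compile-time guard) pairs, instead of maintaining
# a hand-written if-chain for every member function or attribute.

TYPES = [
    ("mpz_d", None), ("mpz_ld", "HAVE_LONG_DOUBLE"), ("mpz_dpe", None),
    ("mpz_dd", "HAVE_QD"), ("mpz_qd", "HAVE_QD"), ("mpz_mpfr", None),
    ("long_d", None), ("long_ld", "HAVE_LONG_DOUBLE"), ("long_dpe", None),
    ("long_dd", "HAVE_QD"), ("long_qd", "HAVE_QD"), ("long_mpfr", None),
]

def dispatch_chain(attribute):
    """Emit the body of a property that dispatches on ``self._type``."""
    lines = []
    for tag, guard in TYPES:
        indent = ""
        if guard:
            # Wrap optional types in a Cython compile-time conditional.
            lines.append("IF %s:" % guard)
            indent = "    "
        lines.append("%sif self._type == gso_%s:" % (indent, tag))
        lines.append("%s    return self._core.%s.%s" % (indent, tag, attribute))
    return "\n".join(lines)

print(dispatch_chain("d"))
```

Adding a new integer type then means adding one entry to the table rather than touching every accessor.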

## Fplll Days 3: July 6 – 14, Amsterdam

We’ll have an fplll coding sprint aka “FPLLL Days” in July. This time around, we plan a slightly modified format compared to previous instances. That is, in order to encourage new developers to get involved, we plan to have a 2 day tutorial session (shorter or longer depending on participants/interest) before the start of FPLLL Days proper.

## fplll 5.1 and fpylll 0.2.4dev

New versions of fplll and fpylll were released today. I’ve reproduced release notes below for greater visibility. The biggest user-visible changes for fplll are probably that

• CVP enumeration is not experimental any more,
• support for external enumeration libraries (go write that GPU implementation of enumeration) was added and
• support for OSX was greatly improved.

On the fpylll side, the biggest user-visible changes are probably various API updates and a much nicer strategy/framework for gathering statistics about BKZ.

The next version of fplll will contain support for LLL reduction on Gram matrices.

## On dual lattice attacks against small-secret LWE and parameter choices in HElib and SEAL

My paper on solving small, sparse secret instances is now on ePrint. Here’s the abstract:

We present novel variants of the dual-lattice attack against LWE in the presence of an unusually short secret. These variants are informed by recent progress in BKW-style algorithms for solving LWE. Applying them to parameter sets suggested by the homomorphic encryption libraries HElib and SEAL yields revised security estimates. Our techniques scale the exponent of the dual-lattice attack by a factor of $(2\,L)/(2\,L+1)$ when $\log q = \Theta{\left(L \log n\right)}$, when the secret has constant Hamming weight $h$ and where $L$ is the maximum depth of supported circuits. They also allow us to halve the dimension of the lattice under consideration at a multiplicative cost of $2^{h}$ operations. Moreover, our techniques yield revised concrete security estimates. For example, both libraries promise 80 bits of security for LWE instances with $n=1024$ and $\log_2 q \approx {47}$, while the techniques described in this work lead to estimated costs of 68 bits (SEAL) and 62 bits (HElib).

If you want to see what its effect would be on your favourite small, sparse secret instance of LWE, the code for estimating the running time is included in our LWE estimator. The integration into the main function estimate_lwe is imperfect, though. To get you started, here’s the code used to produce the estimates for the rolling example in the paper.

• Our instance’s secret is ternary with Hamming weight $h=64$. We always use sieving as the SVP oracle in BKZ:

    sage: n, alpha, q = fhe_params(n=2048, L=2)
    sage: kwds = {"optimisation_target": "sieve", "h":64, "secret_bounds":(-1,1)}

• We establish a base line:

    sage: print cost_str(sis(n, alpha, q, optimisation_target="sieve"))

• We run the scaled normal form approach from Section 4 and enable amortising costs from Section 3 by setting use_lll=True:

    sage: print cost_str(sis_small_secret_mod_switch(n, alpha, q, use_lll=True, **kwds))

• We run the approach from Section 5 for sparse secrets. Setting postprocess=True enables the search for solutions $\mathbf{s}_1$ with very low Hamming weight (page 17):

    sage: print cost_str(drop_and_solve(sis, n, alpha, q, postprocess=True, **kwds))

• We combine everything:

    sage: f = sis_small_secret_mod_switch
    sage: print cost_str(drop_and_solve(f, n, alpha, q, postprocess=True, **kwds))


## fpylll

fpylll is a Python library for performing lattice reduction on lattices over the integers. It is based on fplll, a C++ library which describes itself as follows:

fplll contains several algorithms on lattices that rely on floating-point computations. This includes implementations of the floating-point LLL reduction algorithm, offering different speed/guarantees ratios. It contains a ‘wrapper’ choosing the estimated best sequence of variants in order to provide a guaranteed output as fast as possible. In the case of the wrapper, the succession of variants is oblivious to the user. It also includes a rigorous floating-point implementation of the Kannan-Fincke-Pohst algorithm that finds a shortest non-zero lattice vector, and the BKZ reduction algorithm.

fplll is distributed under the GNU Lesser General Public License (either version 2.1 of the License, or, at your option, any later version) as published by the Free Software Foundation.

In short, fplll is your best bet for a fast, publicly available lattice-reduction library, and fpylll provides a convenient interface to it — for experimentation, development and extension — from Python.

For the rest of this post, I’ll give you a tour of the features currently implemented in fpylll and point out some areas where we could do with some help.

## Cysignals

If you’ve written a fair amount of Cython code in your time, chances are that you got frustrated by

1. buggy C/C++ code crashing your Python shell and
2. the fact that you cannot interrupt C/C++ functions.

For example, the following Cython code cannot be interrupted:

    while True:
        pass


On the other hand, if you have written Cython code in Sage, then you might have come across its nifty sig_on(), sig_off() and sig_check() macros, which prevent crashes and allow your calls to C/C++ code to be interrupted. Sage has had signal handling — for crashes and interrupts — practically forever (see below).

Cysignals is Sage’s signal handling reborn as a stand-alone module, i.e. it allows wrapping C/C++ code blocks in sig_on() and sig_off() pairs which catch signals such as SIGSEGV. Using it is straightforward. Simply add

    include "cysignals/signals.pxi"


to each Cython module and then wrap long-running computations in sig_on() / sig_off() pairs or check for signals with sig_check(). See the cysignals documentation for details.
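A minimal Cython sketch of the pattern might look as follows; do_work() here stands in for any long-running C/C++ call and is purely hypothetical:

```cython
# Hypothetical sketch: do_work() represents any long-running C/C++ call.
# Wrapping it in sig_on()/sig_off() makes it interruptible and turns
# signals such as SIGSEGV into Python exceptions instead of crashes.

include "cysignals/signals.pxi"

cdef extern from "work.h":
    void do_work() nogil

def run():
    sig_on()      # install cysignals' signal handler
    do_work()     # Ctrl-C during this call raises KeyboardInterrupt
    sig_off()     # restore the previous handler
```

For short-running code inside long loops, calling sig_check() periodically achieves the same effect without the overhead of a sig_on()/sig_off() pair per iteration.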

We have a first pre-release out. Pre-release because we haven’t switched Sage to the new, stand-alone code yet. Once this is done, we’ll publish version 1.0, since some version of this code has been in use on many different systems for at least a decade.