We present novel variants of the dual-lattice attack against LWE in the presence of an unusually short secret. These variants are informed by recent progress in BKW-style algorithms for solving LWE. Applying them to parameter sets suggested by the homomorphic encryption libraries HElib and SEAL yields revised security estimates. Our techniques scale the exponent of the dual-lattice attack by a factor of when , when the secret has constant Hamming weight and where is the maximum depth of supported circuits. They also allow halving the dimension of the lattice under consideration at a multiplicative cost of operations. Moreover, our techniques yield revised concrete security estimates. For example, both libraries promise 80 bits of security for LWE instances with and , while the techniques described in this work lead to estimated costs of 68 bits (SEAL) and 62 bits (HElib).
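To put the revised estimates into perspective, going from a claimed 80 bits of security to an estimated 62 bits (HElib) means the attack is cheaper by this factor:

```python
# Claimed vs estimated security for the rolling example (HElib)
claimed_bits = 80
estimated_bits = 62

# Cost is measured on a log2 scale, so the gap is a power of two
print(2**(claimed_bits - estimated_bits))  # 262144
```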
If you want to see what its effect would be on your favourite small, sparse-secret instance of LWE, the code for estimating the running time is included in our LWE estimator. The integration into the main function estimate_lwe is imperfect, though. To get you started, here’s the code used to produce the estimates for the rolling example in the paper.
Our instance has a ternary secret of Hamming weight 64. We always use sieving as the SVP oracle in BKZ:
sage: n, alpha, q = fhe_params(n=2048, L=2)
sage: kwds = {"optimisation_target": "sieve", "h": 64, "secret_bounds": (-1, 1)}
We establish a base line:
sage: print cost_str(sis(n, alpha, q, optimisation_target="sieve"))
We run the scaled normal form approach from Section 4 and enable amortising costs from Section 3 by setting use_lll=True:
sage: print cost_str(sis_small_secret_mod_switch(n, alpha, q, use_lll=True, **kwds))
We run the approach from Section 5 for sparse secrets. Setting postprocess=True enables the search for solutions with very low Hamming weight (page 17):
sage: print cost_str(drop_and_solve(sis, n, alpha, q, postprocess=True, **kwds))
We combine everything:
sage: f = sis_small_secret_mod_switch
sage: print cost_str(drop_and_solve(f, n, alpha, q, postprocess=True, **kwds))
Location: Egham
Salary: £33,789 to £39,902 per annum – including London Allowance
Closing Date: Monday 10 October 2016
Interview Date: To be confirmed
Reference: 0916-295

The ISG is seeking to recruit a post-doctoral research assistant to work in the area of Cryptography. The position is available now for the period of one year.
The PDRA will work alongside Prof. Kenny Paterson and other cryptographic researchers at Royal Holloway on topics in Cryptography. Our current areas of interest include lattice-based cryptography, multilinear maps, indistinguishability obfuscation, and applied cryptography.
Applicants should have already completed (essential if COS required), or be close to completing, a PhD in a relevant discipline. Applicants should have an outstanding research track record in Cryptography. Applicants should be able to demonstrate scientific creativity, research independence, and the ability to communicate their ideas effectively in written and verbal form.
This is a full time post, available as soon as possible for a fixed term period of 12 months. This post is based in Egham, Surrey, where the College is situated in a beautiful, leafy campus near to Windsor Great Park and within commuting distance from London.
Informal enquiries can be made to Kenny Paterson at kenny.paterson@rhul.ac.uk.
The Human Resources Department can be contacted with queries by email at: recruitment@rhul.ac.uk.
Arts Building Ground Floor Room 24
Royal Holloway, University of London
Egham Hill
Egham
Surrey TW20 0EX
Perhaps the most significant change is the change of development model. Previous versions of fplll, while open-source, were developed behind closed doors with tarballs being made available more or less regularly. Reporting a bug meant dropping Damien an e-mail.
Starting in autumn 2014, development is now coordinated publicly on GitHub. Developers send pull requests, reporting a bug means opening an issue, etc. Hence, development is more transparent and, most importantly, inviting. Additionally, for those who wish to get involved, we collect how-to information in our contributing guidelines. There is now also a public mailing list for all things fplll development and the occasional joint coding sprint.
Fplll 5.0.0 switches from C++98 to C++11. While we haven’t upgraded all code to take advantage of C++11’s features, such as rvalue references, we try to make use of them when touching code. Marc Stevens helped a lot here by educating the rest of us. I, personally, also found Effective Modern C++ quite a good read.
Fplll now also has a test suite, testing basic arithmetic, LLL, BKZ, SVP, sieving and the pruner. Test coverage is not complete, but this is quite an improvement over the 4.x series. We run these tests on every pull request and on every commit to master.
Writing code using fplll as a library instead of as a command-line program used to be hit and miss: did the compiler instantiate that template? We now force instantiation of templates and link against fplll as a library ourselves during testing. We also added pkg-config support and improved the build system so that make -j8 actually runs faster than make.
Finally, we also provide API documentation online. You will notice that we adopted a naming convention inspired by PEP 8: ClassName.function_name.
In LLL land not much has changed in fplll 5.0.0, e.g. HPLLL wasn’t merged into fplll yet. However, Shi Bai added optional support for double-double (106 bits) and quad-double (212 bits) precision using the qd library. In our experience, double-double provides a speed-up, but quad-double does not.
Marc contributed a new implementation of enumeration. This implementation is recursive but avoids the usual performance drawback of recursive enumeration by making the compiler untangle it at compile time. The new implementation is not as fast as it could be, but it is noticeably faster than what was in the 4.x series. In the process, we also made enumeration on different objects thread-safe by eliminating global variables.
fplll 5.0 is the first public (open-source or not) complete implementation of BKZ 2.0 (see https://github.com/Tzumpi1/BKZ_2 for a previous but incomplete implementation). As mentioned in a previous post, the collection of techniques known as BKZ 2.0 is used in lattice-based cryptography to estimate the cost of strong lattice reduction. This led to the somewhat strange situation where everybody was essentially relying on a table in the BKZ 2.0 paper to predict the cost of certain cryptanalytical attacks, without being able to reproduce or verify these numbers.
BKZ 2.0’s biggest improvement is due to the use of extreme pruning (Section 4.1 of the BKZ 2.0 paper). This, first of all, entails computing optimal pruning coefficients. The implementation in fplll for computing these coefficients — the pruner — was contributed by Léo Ducas. He also wrote the first implementation in Python for using these parameters in BKZ, i.e. by adding re-randomisation. I then re-implemented that part in C++ for fplll (and in Python for fpylll).
BKZ 2.0 also uses recursive preprocessing with BKZ in a smaller block size (Section 4.2 of the BKZ 2.0 paper). The implementation in fplll was written by me back in 2014.
Around the same time, Joop van der Pol contributed using the Gaussian heuristic bound in enumeration (Section 4.3 of the BKZ 2.0 paper).
Fplll also ships with strategies for BKZ reduction up to block size 90. These strategies provide pruning parameters and block sizes for recursive preprocessing. These strategies were computed using the strategizer discussed below.
Michael Walter contributed implementations of the Self-Dual BKZ algorithm and Slide Reduction. We don’t ship dedicated reduction strategies for these algorithms, but the default strategies should work reasonably well (I haven’t tried). Hence, these algorithms can now easily be compared against each other and will all benefit from future improvements to fplll such as faster enumeration etc.
C++11 has made writing C++ a lot easier. Still, C++ might not be for everyone. Python, however, is for everyone. In particular, with Sage and SciPy, Python has become a central language for computational mathematics. To make it easy for researchers to try out new algorithmic ideas, tweak algorithms or simply experiment with existing algorithms, there is now fpylll, which provides an interface to fplll’s API from Python and implements a few algorithms using that API in pure Python. See my previous post on fpylll for details.
The set of strategies shipped with fplll was computed using a Python library built on top of fpylll. This transparency allows others to reproduce and verify our choices, or to improve them.
Shi Bai also contributed implementations of the GaussSieve as well as the TupleSieve. However, at present these cannot be used as SVP oracles inside BKZ-style algorithms.
To get an impression of the difference between fplll 4.x and 5.x, consider the q-ary lattice generated by calling
latticegen q 100 50 30 b -randseed 1337
In the table below, t is the time in seconds it takes to run 10 tours of BKZ with the given block size, and ‖v‖² is the square of the Euclidean norm of the shortest vector in the reduced lattice.

software | block size | strategy | t | ‖v‖²
---|---|---|---|---
fplll 4.0.4 | 40 | – | 326.43s | 1.10e10
fplll 5.0.0 | 40 | – | 75.71s | 1.22e10
fplll 5.0.0 | 40 | default | 3.64s | 1.17e10
fplll 5.0.0 | 60 | default | 120.67s | 8.85e9
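A quick back-of-the-envelope computation of the speed-ups, using the block size 40 timings from the table:

```python
# Timings (in seconds) for 10 tours of BKZ-40 on the same lattice
t_404 = 326.43          # fplll 4.0.4, no pruning strategy
t_500 = 75.71           # fplll 5.0.0, no pruning strategy
t_500_default = 3.64    # fplll 5.0.0, default (pruned) strategy

print(round(t_404 / t_500, 1))          # speed-up from 4.x to 5.x alone: 4.3
print(round(t_404 / t_500_default, 1))  # overall speed-up with strategies: 89.7
```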
Now that fplll 5.0.0 is out, we’ll work on integrating it into Sage (discussion and ticket).
Also, a lot of email is repetitive and boring but necessary, such as asking seminar speakers for their titles and abstracts, giving people advice on how to claim reimbursement when they visit Royal Holloway, or responding to requests from people who’d like to pursue a PhD.
Here is my attempt to semi-automate some of the boring steps in Emacs.
I use mbsync for syncing my e-mail to my local hard disk as I often work offline, e.g. during my commute or while working on airplanes.^{1} Mbsync does not speak IMAP IDLE, aka push notifications, so I use imapnotify for this; here’s my (sanitised) imapnotify config file:
var child_process = require('child_process');

function getStdout(cmd) {
  var stdout = child_process.execSync(cmd);
  return stdout.toString().trim();
}

exports.host = "imap.gmail.com";
exports.port = 993;
exports.tls = true;
exports.username = "martinralbrecht@gmail.com";
exports.password = // whatever needs doing, e.g. call getStdout()
exports.onNewMail = "mbsync googlemail-minimal";
exports.onNewMailPost = "emacsclient -e '(mu4e-update-index)'";
exports.boxes = [ "INBOX" ];
I only need imapnotify in Emacs, so I use prodigy to start/stop it.
(use-package prodigy
  :ensure t
  :init
  (prodigy-define-tag
    :name 'email
    :ready-message "Checking Email using IMAP IDLE. Ctrl-C to shutdown.")
  (prodigy-define-service
    :name "imapnotify"
    :command "imapnotify"
    :args (list "-c" (expand-file-name ".config/imapnotify.gmail.js" (getenv "HOME")))
    :tags '(email)
    :kill-signal 'sigkill))
Once it has arrived, email is indexed by Mu, which provides fast, powerful full-text search. Finally, Mu4e provides that email client experience™ in Emacs.
On the other end, I’m not relying on Emacs’ built-in support for sending email but use opensmtpd.^{2} Using Emacs’ built-in functionality means that it will hang while sending email (due to lack of multithreading), especially on slow connections, which defeats the purpose of getting email out of the way quickly.
Mu shines at search: it’s fast and expressive. For example, to search for messages between 2 kilobytes and 2 megabytes in size, written in December 2009, with an attachment, from Bill, search

size:2k..2m date:20091201..20091231 flag:attach from:bill
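Such query strings are easy to assemble programmatically, too. Here is a small, hypothetical Python helper (not part of mu) illustrating the field:value and range syntax:

```python
def mu_query(**terms):
    """Assemble a mu search query string.

    Range values (e.g. size, date) are given as (start, end) tuples,
    everything else as plain strings.
    """
    parts = []
    for field, value in terms.items():
        if isinstance(value, tuple):  # range query -> start..end
            value = "{0}..{1}".format(*value)
        parts.append("{0}:{1}".format(field, value))
    return " ".join(parts)

print(mu_query(size=("2k", "2m"), date=("20091201", "20091231"), flag="attach"))
# size:2k..2m date:20091201..20091231 flag:attach
```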
Below are some examples of how I use it. Other ideas include filtering for your project students, by your course number, etc.
(add-to-list 'mu4e-bookmarks
             '("flag:unread NOT flag:trashed AND (flag:list OR from:trac@sagemath.org)"
               "Unread bulk messages" ?l))
(add-to-list 'mu4e-bookmarks
             '("flag:unread NOT flag:trashed AND NOT flag:list AND (maildir:\"/royal holloway\" OR maildir:/INBOX)"
               "Unread messages addressed to me" ?i))
(add-to-list 'mu4e-bookmarks
             '("mime:application/* AND NOT mime:application/pgp* AND (maildir:\"/royal holloway\" OR maildir:/INBOX)"
               "Messages with attachments for me." ?d) t)
(add-to-list 'mu4e-bookmarks
             '("flag:flagged" "Flagged messages" ?f) t)
(add-to-list 'mu4e-bookmarks
             '("(maildir:\"/[Google Mail]/.Sent Mail\" OR maildir:\"/royal holloway/.sent\") AND date:7d..now"
               "Sent in last 7 days" ?s) t)
By default, Mu’s search is REPL-style: you type a query, press <enter> and look at the results. Sometimes you want real-time updates as you type, e.g. to refine your search quickly. In this case, helm-mu has you covered. Helm adds a generic search-as-you-type interface to Emacs; here’s a nice intro.
(use-package helm-mu :ensure t :config (progn (bind-key "S" #'helm-mu mu4e-main-mode-map)))
By the way, enabling helm-follow-mode via C-c C-f allows you to preview emails as you search.
Sometimes, you might want to file an email with some project notes to be able to find it later without any effort or you might want to refer to it directly from your TODO list. I use Org-Mode for my TODOs and notes. Mu4e comes with Org-Mode support which provides links for messages and search queries. First, enable it
(use-package org-mu4e :config (setq org-mu4e-link-query-in-headers-mode nil))
and then add some org-capture templates to make filing an email or creating a TODO based on an email easy:
(use-package org-capture
  :bind ("<f9>" . org-capture)
  :config
  (setq org-capture-templates
        '(("r" "respond to email (mu4e)" entry
           (file+headline malb/inbox-org "Email")
           "* REPLY to [[mailto:%:fromaddress][%:fromname]] on %a\nDEADLINE: %(org-insert-time-stamp (org-read-date nil t \"+1d\"))\n%U\n\n"
           :immediate-finish t
           :prepend t)
          ("f" "file email (mu4e)" entry
           (file+headline malb/inbox-org "Email")
           "* %a by [[mailto:%:fromaddress][%:fromname]]\n%U\n\n%i%?\n"
           :immediate-finish nil
           :prepend nil))))
First, let’s make finding that email address easier. For this, I want an automatically maintained database holding at least names and email addresses, which is then used for autocompletion as I type. “Automatically maintained“ here means that this database should be built from our email correspondence, similar to e.g. what Gmail does. Adding email addresses and whatever else is in the From: field to some database isn’t difficult per se, and many clients do it. For example, Mu4e comes with this built-in.
However, there are a few different conventions out there for how people write names in a From: field, so this needs a bit of tidying up. For example, Royal Holloway likes “Lastname, Firstname (Year)” for students; some people like to YELL their LASTNAME and then write the first name; some people misspell their own name. The code below canonicalises this.
(defun malb/canonicalise-contact-name (name)
  (let ((case-fold-search nil))
    (setq name (or name ""))
    (if (string-match-p "^[^ ]+@[^ ]+\.[^ ]" name)
        ""
      (progn
        ;; drop email address
        (setq name (replace-regexp-in-string "^\\(.*\\) [^ ]+@[^ ]+\.[^ ]" "\\1" name))
        ;; strip quotes
        (setq name (replace-regexp-in-string "^\"\\(.*\\)\"" "\\1" name))
        ;; deal with YELL’d last names
        (setq name (replace-regexp-in-string "^\\(\\<[[:upper:]]+\\>\\) \\(.*\\)" "\\2 \\1" name))
        ;; Foo, Bar becomes Bar Foo
        (setq name (replace-regexp-in-string "^\\(.*\\), \\([^ ]+\\).*" "\\2 \\1" name))
        ;; look up names and replace from static table, TODO look this up by email
        (setq name (or (cdr (assoc name malb/mu4e-name-replacements)) name))))))

(defun malb/mu4e-contact-rewrite-function (contact)
  (let* ((name (or (plist-get contact :name) ""))
         (mail (plist-get contact :mail))
         (case-fold-search nil))
    (plist-put contact :name (malb/canonicalise-contact-name name))
    contact))

(setq mu4e-contact-rewrite-function #'malb/mu4e-contact-rewrite-function)
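For readers who don’t speak Elisp, the same canonicalisation idea can be sketched in Python (an illustration only, with simplified regular expressions, not a drop-in replacement for the code above):

```python
import re

def canonicalise_contact_name(name):
    """Normalise a From:-style display name into "Firstname Lastname"."""
    name = (name or "").strip().strip('"')
    # "Lastname, Firstname (Year)" -> "Firstname Lastname"
    m = re.match(r"^(?P<last>[^,]+), (?P<first>\S+)", name)
    if m:
        return "{0} {1}".format(m.group("first"), m.group("last").strip())
    # "LASTNAME Firstname" -> "Firstname Lastname"
    m = re.match(r"^(?P<last>[A-Z]+)\s+(?P<rest>.+)$", name)
    if m:
        return "{0} {1}".format(m.group("rest"), m.group("last").capitalize())
    return name

print(canonicalise_contact_name("Doe, John (2016)"))  # John Doe
print(canonicalise_contact_name("DOE John"))          # John Doe
print(canonicalise_contact_name("John Doe"))          # John Doe
```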
Now that our addresses are canonicalised, I can use those to fill in a few more bits. Given an email starting with “To: John Doe <john@example.com>”, there is no point in typing the name “John” again when I do the customary “Hi …,”. Here, YASnippet comes in. YASnippet is a templating system for Emacs inspired by TextMate, which allows mapping short sequences of characters to other sequences of characters, potentially by asking for more user input and/or calling some arbitrary Emacs Lisp function. For example, here’s the template we use to advertise the ISG seminar
# -*- mode: snippet -*-
# name: Announce ISG Research Seminar
# key: isg-announce
# --
${1:Thu}, $2 @ ${3:11:00} in ${4:HLT2}: $5
---
When: $1, $2, 2016 @ $3
Where: $4
Why: Because… reasons!
Who: $5 ($6)

# Title #

$0

# Abstract #

# Bio #

Cheers,
Lorenzo & Martin
and here’s my “hi” template
# -*- mode: snippet -*-
# name: Say "hi"
# key: Hi
# --
Hi ${1:`(malb/yas-get-names-from-to-fields)`},

$0

Cheers,
Martin
Using this snippet, typing Hi<Tab> triggers the email boilerplate to be inserted, with the cursor eventually placed at the position of $0. The name used in the greeting is computed using the following function:
(defun malb/yas-get-names-from-fields (fields)
  (let (names ret name point-end-of-line
        (search-regexp (mapconcat (lambda (arg) (concat "^" arg ": "))
                                  fields "\\|"))
        (case-fold-search nil))
    (save-excursion
      (goto-char (point-min))
      (while (re-search-forward search-regexp nil t)
        (save-excursion
          (setq point-end-of-line (re-search-forward "$")))
        (setq name (buffer-substring-no-properties (point) point-end-of-line))
        (setq name (split-string name "[^ ]+@[^ ]+," t " ")) ;; split on email@address,
        (setq names (append names name)))
      (dolist (name names)
        (setq name (malb/canonicalise-contact-name name))
        (if (string-match "\\([^ ,]+\\)" name)
            (progn
              (setq name (match-string 1 name))
              (setq name (capitalize name))
              (if ret
                  (setq ret (concat ret ", " name))
                (setq ret name)))))
      (if ret ret "there"))))

(defun malb/yas-get-names-from-to-fields ()
  (interactive)
  (malb/yas-get-names-from-fields '("To")))
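The same idea can be sketched in Python (a simplified illustration which assumes the To: header is available as a string; display names containing commas are not handled here):

```python
import re

def greeting_names(to_header):
    """Extract first names from a To: header for use in a greeting."""
    names = []
    for part in to_header.split(","):
        # drop the <email@address> part, keep the display name
        part = re.sub(r"<[^>]*>", "", part).strip().strip('"')
        if not part or "@" in part:  # bare address without a display name
            continue
        # take the first word as the first name
        names.append(part.split()[0].capitalize())
    return ", ".join(names) if names else "there"

print(greeting_names("John Doe <john@example.com>, Jane Roe <jane@example.com>"))
# John, Jane
print(greeting_names("bob@example.com"))
# there
```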
Of course, you can create much more elaborate snippets calling all kinds of functions to respond to all kinds of email. Once you have created so many snippets that you’re at risk of losing track, I recommend helm-yasnippet as a nice interactive interface for selecting the right snippet.
To simplify adding attachments — because traversing directory trees is boring — I wrote a small interface to Baloo, which is KDE’s version of OSX’s Spotlight, i.e. desktop search:
(defcustom helm-baloo-file-limit 100
  "Limit number of entries returned by baloo to this number."
  :group 'helm-baloo
  :type '(integer :tag "Limit"))

(defun baloo-search (pattern)
  (start-process "baloosearch" nil "baloosearch"
                 (format "-l %d " helm-baloo-file-limit) pattern))

(defun helm-baloo-search ()
  (baloo-search helm-pattern))

(defun helm-baloo-transform (cs)
  (let '(helm-baloo-clean-up-regexp (rx (or control
                                            (seq "[0;31m" (+ (not (any "["))) "[0;0m")
                                            "[0;32m"
                                            "[0;0m")))
    (mapcar (function (lambda (c)
                        (replace-regexp-in-string
                         (rx (seq bol (+ space))) ""
                         (replace-regexp-in-string helm-baloo-clean-up-regexp "" c))))
            cs)))

(defvar helm-source-baloo
  (helm-build-async-source "Baloo"
    :candidates-process #'helm-baloo-search
    :candidate-transformer #'helm-baloo-transform
    :action '(("Open" . (lambda (x) (find-file x)))
              ("Attach to Email" . (lambda (x) (mml-attach-file x))))))

(defun helm-baloo ()
  (interactive)
  (helm :sources helm-source-baloo
        :buffer "*helm baloo*"))
The line ("Attach to Email" . (lambda (x) (mml-attach-file x))) adds an option to attach any file to an email by pressing <F2>. If you prefer good ol’ locate, you can add this option to helm-locate, too:
(helm-add-action-to-source "Attach to Email" #'mml-attach-file helm-source-locate)
Finally, a few more nice-to-have tweaks: spell checking, typographic niceties, automatic dictionary guessing and footnotes. Also, \lambda gets replaced by λ because, again, UTF-8 is a thing.

(add-hook 'message-mode-hook #'flyspell-mode)
(add-hook 'message-mode-hook #'typo-mode)
(add-hook 'message-mode-hook #'adict-guess-dictionary)
(add-hook 'message-mode-hook #'footnote-mode)
GMail takes care of all sorting into folders aka labels.
Debian GNU/Linux comes with exim4 by default, which isn’t easy to configure. OpenSMTPD, on the other hand, is rather straightforward.
Location: Egham
Salary: £41,030 to £48,548 per annum – including London Allowance
Closing Date: Sunday 12 June 2016
Interview Date: To be confirmed
Reference: 0516-162

Royal Holloway, University of London. Note that this position is roughly equivalent to a tenure-track Assistant Professor in North America or a Junior Professor in Europe.
Applications are invited from researchers whose interests are related to, or complement, current strengths of the ISG. We are particularly interested in applicants with outstanding research achievements and/or potential in relevant information/cyber security areas.
Applicants should have a Ph.D. in a relevant subject or equivalent, be a self-motivated researcher, and have a strong publication record. Applicants should be able to demonstrate an enthusiasm for teaching and communicating with diverse audiences, as well as show an awareness of contemporary issues relating to cyber security.
This is a full time and permanent post, with an intended start date of 1st September, 2016, although an earlier or slightly later start may be possible. This post is based in Egham, Surrey, where the College is situated in a beautiful, leafy campus near to Windsor Great Park and within commuting distance from London.
For an informal discussion about the post, please contact Prof. Keith Mayes on keith.mayes@rhul.ac.uk.
The Human Resources Department can be contacted with queries by email at: recruitment@rhul.ac.uk.
We particularly welcome female applicants as they are under-represented at this level in the Department of Information Security within Royal Holloway, University of London.
Location: Egham
Salary: £41,030 to £48,548 per annum – including London Allowance
Closing Date: Sunday 12 June 2016
Interview Date: To be confirmed
Reference: 0516-161

Applications are invited for the post of Lecturer in the Information Security Group at Royal Holloway, University of London.
The post holder will contribute to the creation and/or revision, delivery and assessment of postgraduate (MSc) and undergraduate teaching modules across a wide range of topics in the field of information/cyber security; including Security Management.
Applicants should have a Ph.D. in a relevant subject or equivalent and have a sound knowledge of information/cyber security. Applicants should be able to demonstrate an enthusiasm for teaching and communicating with diverse audiences, as well as show an awareness of contemporary issues relating to cyber security.
This is a five year post, available from 1st September 2016 although an earlier start may be possible. This post is based in Egham, Surrey, where the College is situated in a beautiful, leafy campus near to Windsor Great Park and within commuting distance from London.
For an informal discussion about the post, please contact Prof. Keith Mayes on keith.mayes@rhul.ac.uk.
The Human Resources Department can be contacted with queries by email at: recruitment@rhul.ac.uk.
We particularly welcome female applicants as they are under-represented at this level in the Department of Electronic Engineering within Royal Holloway, University of London.
As usual, feel free to get in touch if you have questions.
fplll contains several algorithms on lattices that rely on floating-point computations. This includes implementations of the floating-point LLL reduction algorithm, offering different trade-offs between speed and guarantees. It contains a ‘wrapper’ choosing the estimated best sequence of variants in order to provide a guaranteed output as fast as possible. In the case of the wrapper, the succession of variants is oblivious to the user. It also includes a rigorous floating-point implementation of the Kannan-Fincke-Pohst algorithm that finds a shortest non-zero lattice vector, and the BKZ reduction algorithm.
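The succession-of-variants idea behind the wrapper can be pictured with a short sketch (pure Python with hypothetical function names; the real wrapper’s selection logic also depends on the dimension and the size of the basis entries):

```python
def lll_wrapper(basis, variants):
    """Try LLL variants from fastest (may fail) to slowest (proven precision)."""
    for variant in variants:
        try:
            return variant(basis)
        except ArithmeticError:
            # insufficient floating-point precision: fall back to the next variant
            continue
    raise RuntimeError("no LLL variant succeeded")

def fast_but_fragile(basis):
    raise ArithmeticError("precision exhausted")  # stand-in for a failed fast run

def slow_but_sure(basis):
    return sorted(basis)  # stand-in for a proven-precision reduction

print(lll_wrapper([3, 1, 2], [fast_but_fragile, slow_but_sure]))  # [1, 2, 3]
```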
fplll is distributed under the GNU Lesser General Public License (either version 2.1 of the License, or, at your option, any later version) as published by the Free Software Foundation.
In short, fplll is your best bet at a publicly available fast lattice-reduction library and fpylll provides a convenient interface for it — for experimentation, development and extension — from Python.
For the rest of this post, I’ll give you a tour of the features currently implemented in fpylll and point out some areas where we could do with some help.
First of all, fpylll is a thin wrapper around fplll. In the example below, we first generate an NTRU-like matrix and consider the norm of the first row:
from fpylll import IntegerMatrix, LLL

q = 1073741789
A = IntegerMatrix.random(30, "ntrulike", bits=30, q=q)
A[0].norm()
3294809651.09
We then call LLL reduction, i.e. we perform integer-linear combinations of the rows to make shorter rows and observe the output:
LLL.reduction(A)
A[0].norm()
82117.5815888
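To see what “integer-linear combinations of the rows” buys you, here is a toy Lagrange–Gauss reduction for two-dimensional lattices (pure Python, for illustration only; fplll’s LLL is of course far more general):

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a 2-dimensional integer lattice basis."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    while True:
        if dot(u, u) > dot(v, v):  # keep u the shorter vector
            u, v = v, u
        # subtract the integer multiple of u closest to v
        m = round(dot(u, v) / dot(u, u))
        if m == 0:
            return u, v
        v = [vi - m * ui for vi, ui in zip(v, u)]

print(gauss_reduce([1, 0], [7, 2]))  # ([1, 0], [0, 2])
```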
If LLL reduction isn’t strong enough, we can call the BKZ algorithm with some block size:
from fpylll import BKZ

BKZ.reduction(A, o=BKZ.Param(block_size=10))
A[0].norm()
71600.8858744
We may want to increase the block size to find shorter vectors.
from fpylll import BKZ

BKZ.reduction(A, o=BKZ.Param(block_size=20))
A[0].norm()
68922.4558181
Or, we just go for the shortest vector. Solving the Shortest Vector Problem (SVP) is supposed to be a hard problem, so we only attempt it for a small lattice.
from fpylll import SVP

q = 127
B = IntegerMatrix.random(10, "ntrulike", bits=7, q=q)
SVP.shortest_vector(B)
(1, 2, -3, 5, -2, 4, 5, -1, 1, 1, 0, -1, 6, -3, 0, 2, 6, -8, 0, 1)
Of course, fpylll being a Python library means you can use your favourite Python libraries with it. For example, say, we want to LLL reduce many matrices in parallel, using all our cores, and to compute the norm of the shortest vector across all matrices after LLL reduction. We’ll make use of Python’s multiprocessing:
from multiprocessing import Pool
If we want to recover the reduced matrix, we have to return it. However, LLL.reduction returns nothing, and its input A will live in its own process. So we add a small function which returns A.
def f(A):
    from fpylll import LLL
    LLL.reduction(A)
    return A
For this example, we want dimension 20, four worker processes and 32 matrices:
from fpylll import IntegerMatrix

d = 20
workers = 4
tasks = 32

A = [IntegerMatrix.random(d, "ntrulike", bits=30, q=1073741789) for _ in range(tasks)]
Let’s get to work: we create a pool of workers and kick off the computation:
pool = Pool(workers)
A = pool.map(f, A)
Finally, we output the minimal norm found:
min([A_[0].norm() for A_ in A])
7194.545155880252
The main objective of fpylll is to make developing and experimenting with the kind of algorithms implemented in fplll easier. For example, there are a few variants of the BKZ algorithm in the literature which essentially re-combine the same building blocks — LLL and an SVP oracle — in some way. These kinds of algorithms should be easy to implement. The code below is an implementation of the plain BKZ algorithm in 70 lines of Python.
from fpylll import IntegerMatrix, GSO, LLL, BKZ
from fpylll import Enumeration as Enum
from fpylll import gso


class BKZReduction:

    def __init__(self, A):
        wrapper = LLL.Wrapper(A)
        wrapper()
        self.A = A
        self.M = GSO.Mat(A, flags=gso.GSO.ROW_EXPO)
        self.lll_obj = LLL.Reduction(self.M)

    def __call__(self, block_size):
        self.M.discover_all_rows()
        while True:
            clean = self.bkz_tour(block_size, 0, self.A.nrows)
            if clean:
                break

    def bkz_tour(self, block_size, min_row, max_row):
        clean = True
        for kappa in range(min_row, max_row-1):
            bs = min(block_size, max_row - kappa)
            clean &= self.svp_reduction(kappa, bs)
        return clean

    def svp_reduction(self, kappa, block_size):
        clean = True
        self.lll_obj(0, kappa, kappa + block_size)
        if self.lll_obj.nswaps > 0:
            clean = False

        max_dist, expo = self.M.get_r_exp(kappa, kappa)
        delta_max_dist = self.lll_obj.delta * max_dist

        solution, max_dist = Enum.enumerate(self.M, max_dist, expo, kappa, kappa + block_size, None)

        if max_dist >= delta_max_dist:
            return clean

        nonzero_vectors = len([x for x in solution if x])

        if nonzero_vectors == 1:
            first_nonzero_vector = None
            for i in range(block_size):
                if abs(solution[i]) == 1:
                    first_nonzero_vector = i
                    break
            self.M.move_row(kappa + first_nonzero_vector, kappa)
            self.lll_obj.size_reduction(kappa, kappa + 1)
        else:
            d = self.M.d
            self.M.create_row()
            with self.M.row_ops(d, d+1):
                for i in range(block_size):
                    self.M.row_addmul(d, kappa + i, solution[i])
            self.M.move_row(d, kappa)
            self.lll_obj(kappa, kappa, kappa + block_size + 1)
            self.M.move_row(kappa + block_size, d)
            self.M.remove_last_row()

        return False
In the meantime, fpylll has gained a contrib module which implements additional algorithms. As of writing, it contains a simple demo implementation of BKZ (see above), a simple implementation of Dual BKZ and a slightly feature-enhanced re-implementation of fplll’s BKZ: it collects additional statistics compared to fplll’s implementation of the same algorithm. Let’s run it to see what that means:
from copy import copy
from fpylll.contrib.bkz import BKZReduction

C = copy(A)
b = BKZReduction(C)
b(BKZ.Param(block_size=30, flags=BKZ.AUTO_ABORT|BKZ.VERBOSE))
stats = b.stats; stats
{"i": 5, "total": 1.02, "time": 0.16, "preproc": 0.03, "svp": 0.05, "r_0": 4.7503e+09, "slope": -0.0541, "enum nodes": 19.29, "max(kappa)": 10}
That output isn’t that different from fplll’s output. However, in contrast to fplll (because I didn’t bother to implement it over there yet), we also get access to a stats object after the computation has finished. Let’s use it to find out how many nodes were visited during enumeration
stats.enum_nodes
4085856
and how much time we spent in enumeration:
stats.svp_time
0.32868
fpylll also offers a few additional utility functions which go beyond what fplll offers such as copying submatrices and modular reduction.
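fpylll implements these directly on IntegerMatrix. As a plain-Python illustration of what modular reduction of a basis means (a toy sketch on lists of lists, not fpylll’s API):

```python
def mod_reduce(A, q):
    """Reduce every entry of the matrix A into the range [0, q)."""
    return [[a % q for a in row] for row in A]

A = [[10, -3], [7, 12]]
print(mod_reduce(A, 7))  # [[3, 4], [0, 5]]
```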
fpylll integrates reasonably nicely with Sage (once #20291 is merged, that is): converting back and forth between data types is seamless. For example:
sage: A = random_matrix(ZZ, 10, 10)
sage: from fpylll import IntegerMatrix, LLL
sage: B = IntegerMatrix.from_matrix(A)
sage: LLL.reduction(B)
sage: B.to_matrix(A)
[ -1   1  -1   0   1   0  -1  -2   0  -3]
[  4   0   0   0  -1   0  -1  -1   0   1]
[ -1   1   0   2  -3  -1  -2   0   0   3]
[  0  -1  -3  -1  -1   0  -3   0   2   3]
[ -2   2   0   0  -1   2  -1   2  -5   0]
[ -1   0   3   0   4   2   1  -2   1   2]
[ -1   6  -4   1   2  -1  -2   4   2   0]
[ -1   1  -7  -3   2  -3   6  -2  -4   3]
[  0  -7  -2   8   7  -9  -4   1  -4  -1]
[ -1   5   6 -12   4 -14  -4  -1  -2   5]
In fact, when installed inside Sage, element access for IntegerMatrix accepts and returns sage.rings.integer.Integer directly, instead of Python integers.
sage: type(B[0,0])
<type 'sage.rings.integer.Integer'>
fpylll also integrates somewhat with NumPy. To see how, let’s create a small NTRU-like matrix again:
from fpylll import *

A = IntegerMatrix.random(4, "ntrulike", bits=7, q=127)
We’d like to do some analysis on its Gram-Schmidt matrix, so let’s compute it:
sage: M = GSO.Mat(A)
sage: M.update_gso()
True
Let’s dump it into a NumPy array and spot check that the result is reasonably close:
sage: import numpy
sage: from fpylll.numpy import dump_mu
sage: N = numpy.ndarray(dtype="double", shape=(8,8))
sage: dump_mu(N, M, 0, 8)
sage: N[1,0] - M.get_mu(1,0)
0.0
Finally, let’s do something more or less useful with our output:
sage: numpy.linalg.eigvals(N) [ -2.14381988e-39 +0.00000000e+00j -1.51590958e-39 +1.51590958e-39j -1.51590958e-39 -1.51590958e-39j 3.26265223e-55 +2.14381988e-39j 3.26265223e-55 -2.14381988e-39j 2.14381988e-39 +0.00000000e+00j 1.51590958e-39 +1.51590958e-39j 1.51590958e-39 -1.51590958e-39j]
fpylll runs tests on every check-in for Python 2 and 3. As an added benefit, this extends test coverage for fplll as well, which only has a few high-level tests.
“This is all nice and well”, I hear you say, “but I prefer to do my computations in Lisp, so thanks, but no thanks”.
No worries, Hy has you covered:
=> (import [fpylll [*]])
=> (setv q 1073741789)
=> (setv A (.random IntegerMatrix 30 "ntrulike" :bits 30 :q q))
=> (car A)
row 0 of <IntegerMatrix(60, 60) at 0x7f1cbbfbf888>
=> (get A 1)
row 1 of <IntegerMatrix(60, 60) at 0x7f1cbbfbf888>
=> (-> (car A) (.norm))
4019682565.5285482
=> (.reduction LLL A)
=> (.norm (car A))
6937.9845776709535
fpylll isn’t quite done yet. Besides more testing and documentation, it would be nice if someone attempted to re-implement fplll’s LLL wrapper in pure Python. This would serve as a test case to check that everything that’s needed really is exposed, and as a starting point for others who would like to tweak the strategy. Speaking of LLL, fpylll is currently somewhat biased towards playing with BKZ; it would be nice to see how useful it is for trying out tweaks to the LLL algorithm as well.
Lattice-based approaches are emerging as a common theme in modern cryptography and coding theory. In communications, they are an indispensable mathematical tool for constructing powerful error-correction codes that achieve the capacity of wireless channels. In cryptography, they are used to build schemes with provable security, better asymptotic efficiency, resilience against quantum attacks and new functionalities such as fully homomorphic encryption.
We are setting up meetings on lattices in cryptography and coding in the London area. ^{1} These meetings are inspired by similar meetings held in Lyon ^{2} and are aimed at connecting the two communities in the UK with a common interest in lattices, with a long-term goal of building a synergy of the two fields.
The meetings will consist of several talks on related topics, with a format that will hopefully encourage interaction (e.g. longer than usual time slots).
For details (as they become available) see the website.
11:00 – 12:30: Achieving Channel Capacity with Lattice Codes – Cong Ling
13:30 – 15:00: Post-Quantum Cryptography – Nigel Smart
15:00 – 16:30: Lattice Coding with Applications to Compute-and-Forward – Alister Burr
16:30 – 18:00: A Subfield Lattice Attack on Overstretched NTRU Assumptions – Martin Albrecht
Room 611
(Dennis Gabor Seminar Room)
Department of Electrical and Electronic Engineering
Imperial College London
South Kensington London
SW7 2AZ
Everyone is welcome. Two caveats:
Our definition of London includes Egham, where Royal Holloway’s main campus is located.
The paper is partly motivated by the fact that multiplication in previous schemes was complicated, or at least not natural. Take the BGV scheme, where ciphertexts are simply LWE samples $(\mathbf{a}, b)$ with $b = \langle\mathbf{a}, \mathbf{s}\rangle + 2e + \mu$, with $\mu$ being the message bit and $e$ some “small” error. Let’s write this as $\mathbf{c} = (b, -\mathbf{a})$ because it simplifies some notation down the line: decryption is then the inner product $\langle\mathbf{c}, (1, \mathbf{s})\rangle = \mu + 2e$. In this notation, multiplication can be accomplished by $\mathbf{c}_1 \otimes \mathbf{c}_2$ because $\langle\mathbf{c}_1 \otimes \mathbf{c}_2, (1,\mathbf{s}) \otimes (1,\mathbf{s})\rangle = \langle\mathbf{c}_1, (1,\mathbf{s})\rangle \cdot \langle\mathbf{c}_2, (1,\mathbf{s})\rangle$. However, we now need to map $\mathbf{c}_1 \otimes \mathbf{c}_2$ back to $\mathbb{Z}_q^{n+1}$ using “relinearisation”; this is the “unnatural” step.
However, this is only unnatural in this particular representation. To see this, let’s rewrite $\mathbf{c}$ as a linear multivariate polynomial $f_{\mathbf{c}}(x_1, \ldots, x_n) = b - \sum_{i=1}^{n} a_i\,x_i$. This polynomial evaluates to $\mu + 2e$ on the secret $\mathbf{s}$. Note that evaluating a polynomial on $\mathbf{s}$ is the same as reducing it modulo the set of polynomials $\{x_1 - s_1, \ldots, x_n - s_n\}$.
Now, multiplying $f_{\mathbf{c}_1} \cdot f_{\mathbf{c}_2}$ produces a quadratic polynomial. Its coefficients are produced from all the terms in the tensor product $\mathbf{c}_1 \otimes \mathbf{c}_2$. In other words, the tensor product is just another way of writing the quadratic polynomial $f_{\mathbf{c}_1} \cdot f_{\mathbf{c}_2}$. Evaluating it on $\mathbf{s}$ gives $(\mu_1 + 2e_1)\cdot(\mu_2 + 2e_2)$. To map this operation to vectors, note that evaluating a polynomial $f$ on $\mathbf{s}$ is the same as taking the inner product of the vector of coefficients of $f$ and the vector of monomials evaluated at $\mathbf{s}$. For example, evaluating $f = f_0 + f_1\,x_1 + f_2\,x_2$ at $(s_1, s_2)$ is the same as taking the inner product of $(f_0, f_1, f_2)$ and $(1, s_1, s_2)$. That is, if we precompute all the products up to degree two of $s_1, \ldots, s_n$, the remaining operations are just an inner product.
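To see the tensor-product step concretely, here is a quick plain-Python check (toy numbers, not from any scheme) that the inner product of tensored vectors equals the product of the individual inner products:

```python
def inner(u, v):
    # plain inner product
    return sum(a * b for a, b in zip(u, v))

def tensor(u, v):
    # all pairwise products, i.e. the coefficients of the product polynomial
    return [a * b for a in u for b in v]

c1 = [5, -2, 1]   # coefficients of a linear polynomial f1
c2 = [3, 4, -1]   # coefficients of a linear polynomial f2
s = [1, 2, 3]     # evaluation point in the style of (1, s1, s2)

# <c1 (x) c2, s (x) s> == <c1, s> * <c2, s>
assert inner(tensor(c1, c2), tensor(s, s)) == inner(c1, s) * inner(c2, s)
```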
Still, $f_{\mathbf{c}_1} \cdot f_{\mathbf{c}_2}$ is a quadratic polynomial, and we’d want a linear polynomial. In BGV this reduction is referred to as “relinearisation” (a name which is not helpful if you’re coming from a commutative algebra background). Phrased in terms of commutative algebra, what we’re doing is reducing modulo elements in the ideal generated by $x_1 - s_1, \ldots, x_n - s_n$.
Let’s look at what happens when we reduce $f = f_{\mathbf{c}_1} \cdot f_{\mathbf{c}_2}$ by some element in the ideal generated by $x_1 - s_1, \ldots, x_n - s_n$. We start with the leading term of $f$, which is, say, $f_{11}\,x_1^2$. To reduce this term we’d add $-f_{11}\,x_1\cdot(x_1 - s_1)$ to $f$, which is an element in the ideal generated by $x_1 - s_1, \ldots, x_n - s_n$ since it is a multiple of $x_1 - s_1$. This produces $f - f_{11}\,x_1^2 + f_{11}\,s_1\,x_1$, which has a smaller leading term. Now, we’d add the appropriate multiple of the element with leading term $x_1 x_2$, and so on.
Of course, this process, as just described, assumes access to the polynomials $x_i - s_i$, which is the same as giving out our secret $\mathbf{s}$. Instead, we want to precompute multiples with leading terms $x_1^2$, $x_1 x_2$, $x_2^2$ and so on, and publish those. This is still insecure, but by adding some noise, assuming circular security and doing a bit more trickery we can make this as secure as the original scheme. This, then, is what “relinearisation” does. There is also an issue with noise blow-up, e.g. multiplying a sample by a large scalar like $f_{11}$ makes its noise much bigger. Hence, we essentially publish elements with leading terms $x_1^2$, $2\,x_1^2$, $4\,x_1^2$, $\ldots$, $x_1 x_2$, $2\,x_1 x_2$, $4\,x_1 x_2$, $\ldots$ and so on, which allows us to avoid those multiplications. Before I move on to GSW13 proper, I should note that all this has been pointed out in 2011.
Ciphertexts in the GSW13 scheme are matrices over $\mathbb{Z}_q$ with $\{0, 1\}$ entries. The secret key is a vector $\mathbf{v}$ of dimension $N$ over $\mathbb{Z}_q$ with at least one big coefficient $v_i$. Let’s restrict $\mu \in \{0, 1\}$. We say that $C$ encrypts $\mu$ when $C\cdot\mathbf{v} = \mu\cdot\mathbf{v} + \mathbf{e}$ for some small $\mathbf{e}$. To decrypt we simply compute $C\cdot\mathbf{v}$ and check if there are large coefficients in the result.
Ciphertexts in fully homomorphic encryption schemes are meant to be added and multiplied. Starting with addition, consider $C_1 + C_2$. We have $$(C_1 + C_2)\cdot\mathbf{v} = (\mu_1 + \mu_2)\cdot\mathbf{v} + (\mathbf{e}_1 + \mathbf{e}_2).$$
Moving on to multiplication, we first observe that if matrices $C_1$ and $C_2$ have a common exact eigenvector $\mathbf{v}$ with eigenvalues $\mu_1$ and $\mu_2$, then their product $C_1\cdot C_2$ has eigenvector $\mathbf{v}$ with eigenvalue $\mu_1\cdot\mu_2$. But what about the noise?
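A tiny plain-Python sanity check of this eigenvector fact, with made-up 2×2 integer matrices sharing the eigenvector (1, 1):

```python
def matmul(A, B):
    # product of two small integer matrices
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, v):
    # matrix-vector product
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

C1 = [[3, -1], [0, 2]]  # C1 . s = 2 . s
C2 = [[4, -1], [0, 3]]  # C2 . s = 3 . s
s = [1, 1]

assert matvec(C1, s) == [2, 2]
assert matvec(C2, s) == [3, 3]
# the product has the same eigenvector, with eigenvalue 2 * 3 = 6
assert matvec(matmul(C1, C2), s) == [6, 6]
```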
Considering $C_1\cdot C_2\cdot\mathbf{v}$: $$C_1\cdot C_2\cdot\mathbf{v} = C_1\cdot(\mu_2\,\mathbf{v} + \mathbf{e}_2) = \mu_2\,(\mu_1\,\mathbf{v} + \mathbf{e}_1) + C_1\,\mathbf{e}_2 = \mu_1\,\mu_2\,\mathbf{v} + \mu_2\,\mathbf{e}_1 + C_1\,\mathbf{e}_2.$$
In the above expression $\mu_2$, $\mathbf{e}_1$, $\mathbf{e}_2$ and the entries of $C_1$ are all small, $\mathbf{v}$ is not. In other words, the signal is not drowned by the noise and we have achieved multiplication. How far can we go with this, though?
Assume $\mu_i$, the entries of $\mathbf{e}_i$ and the entries of $C_i$ are all bounded by $B$ in absolute value. After multiplication the noise is bounded by $(N+1)\,B^2$. Hence, if we have $L$ levels, the noise grows worse than $((N+1)\,B)^L$. We require $((N+1)\,B)^L < q/2$, otherwise the wrap-around modulo $q$ will kill us. Also, $q$ must be sub-exponential in $N$ for security reasons. The bigger this gap, the easier the lattice problem we are basing security on, and if the gap is exponential then there is a polynomial-time algorithm for solving this lattice problem: the famous LLL algorithm. As a consequence, the scheme as-is allows us to evaluate polynomials of degree sublinear in $N$.
To improve on this, we need a new notion. We call $C$ strongly bounded if the entries of $C$ are bounded by $1$ and the error $\mathbf{e}$ is bounded by $B$.
In what follows, we will only consider a NAND gate: $\mathrm{NAND}(\mu_1, \mu_2) = 1 - \mu_1\cdot\mu_2$. NAND is a universal gate, so we can build any circuit with it. However, in this context its main appeal is that it ensures that the messages stay in $\{0, 1\}$. Note the term $C_1\,\mathbf{e}_2$ above, which is big if $C_1$ is big. If $C_1$ and $C_2$ are strongly bounded then the error vector of $\mathrm{NAND}(C_1, C_2) = I_N - C_1\cdot C_2$ is bounded by $(N+1)\,B$ instead of $(N+1)\,B^2$. Now, if we could make the entries of $I_N - C_1\cdot C_2$ bounded by $1$ again, it would itself be strongly bounded and we could repeat the above argument. In other words, this would enable circuits of depth roughly $\log q / \log N$, i.e. polynomial-depth circuits (instead of merely polynomial degree) when $q$ is sub-exponential as above.
We’ll make use of an operation BitDecomp which splits a vector of integers into a longer vector of the bits of those integers. For example, over $\mathbb{Z}_8$ the vector $(5, 2)$ becomes $(1, 0, 1, 0, 1, 0)$ (least significant bits first), which has length $2\cdot 3$. Here’s a simple implementation in Sage:
def bit_decomp(v):
    R = v.base_ring()
    k = ZZ((R.order()-1).nbits())
    w = vector(R, len(v)*k)
    for i in range(len(v)):
        for j in range(k):
            if 2**j & ZZ(v[i]):
                w[k*i+j] = 1
            else:
                w[k*i+j] = 0
    return w
We also need a function which reverses the process, i.e. adds up the appropriate powers of two: $(w_0, w_1, \ldots, w_{k-1}) \mapsto w_0 + 2\,w_1 + \cdots + 2^{k-1}\,w_{k-1}$, block by block. It is called $\mathrm{BitDecomp}^{-1}$ in the GSW13 paper, but I’ll simply call it … BitComp.
def bit_comp(v):
    R = v.base_ring()
    k = ZZ((R.order()-1).nbits())
    assert(k.divides(len(v)))
    w = vector(R, len(v)//k)
    for i in range(len(v)//k):
        for j in range(k):
            w[i] += 2**j * ZZ(v[k*i+j])
    return w
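As a quick cross-check outside Sage, here is a plain-Python sketch of the same round trip (the helper names simply mirror the Sage functions above; k = 3 bits per entry, i.e. q = 8):

```python
def bit_decomp(v, k):
    # split each entry into its k bits, least significant bit first
    return [(x >> j) & 1 for x in v for j in range(k)]

def bit_comp(w, k):
    # add the bits back up with the appropriate powers of two
    return [sum(w[k * i + j] << j for j in range(k))
            for i in range(len(w) // k)]

v = [2, 3, 0]               # entries mod q = 8, so k = 3 bits each
w = bit_decomp(v, 3)
assert w == [0, 1, 0, 1, 1, 0, 0, 0, 0]
assert bit_comp(w, 3) == v  # BitComp inverts BitDecomp
```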
Actually, the definition of BitComp is a bit more general than just adding up bits. As defined above (following the GSW13 paper), it will add up any integer entries of its input, with the appropriate powers of two multiplied in. This is relevant for the next function we define, namely Flatten, which we define as BitDecomp(BitComp(⋅)).
flatten = lambda v: bit_decomp(bit_comp(v))
Finally, we also define PowersOf2, which produces a new vector from $\mathbf{v} = (v_0, \ldots, v_{n-1})$ as $(v_0, 2\,v_0, \ldots, 2^{k-1}\,v_0, v_1, 2\,v_1, \ldots, 2^{k-1}\,v_{n-1})$:
def powers_of_two(v):
    R = v.base_ring()
    k = ZZ((R.order()-1).nbits())
    w = vector(R, len(v)*k)
    for i in range(len(v)):
        for j in range(k):
            w[k*i+j] = 2**j * v[i]
    return w
For example, the output of PowersOf2 on $(v_0, v_1)$ for $q = 8$ is $(v_0, 2v_0, 4v_0, v_1, 2v_1, 4v_1)$. Having defined these functions, let’s look at some identities. It holds that $$\langle \mathrm{BitDecomp}(\mathbf{v}), \mathrm{PowersOf2}(\mathbf{w})\rangle = \langle \mathbf{v}, \mathbf{w}\rangle,$$ which can be verified by recalling integer multiplication: each product $v_i\cdot w_i$ is the sum over $j$ of $\mathrm{bit}_j(v_i)\cdot 2^j\,w_i$. For example, $5\cdot 3 = (1 + 4)\cdot 3 = 1\cdot 3 + 1\cdot(4\cdot 3)$.
Or in the form of some Sage code:
q = 8
R = IntegerModRing(q)
v = random_vector(R, 10)
w = random_vector(R, 10)
v.dot_product(w) == bit_decomp(v).dot_product(powers_of_two(w))
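The same identity can also be checked without Sage; a minimal plain-Python sketch (illustrative helper names, working modulo q = 8):

```python
q, k = 8, 3   # modulus and number of bits per entry

def bit_decomp(v):
    # split each entry into k bits, least significant bit first
    return [(x >> j) & 1 for x in v for j in range(k)]

def powers_of_two(w):
    # replace each entry x by (x, 2x, 4x, ...) mod q
    return [(x << j) % q for x in w for j in range(k)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % q

v = [5, 2, 7]
w = [3, 6, 1]

# <BitDecomp(v), PowersOf2(w)> == <v, w>  (mod q)
assert dot(v, w) == dot(bit_decomp(v), powers_of_two(w))
```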
Furthermore, let $\mathbf{a}$ be an $n\cdot k$ dimensional vector and $\mathbf{b}$ be an $n$ dimensional vector. Then it holds that $$\langle \mathrm{BitComp}(\mathbf{a}), \mathbf{b}\rangle = \langle \mathbf{a}, \mathrm{PowersOf2}(\mathbf{b})\rangle$$ because $\sum_{i}\big(\sum_{j} 2^j\,a_{k i + j}\big)\,b_i = \sum_{i}\sum_{j} a_{k i + j}\,\big(2^j\,b_i\big)$.
Finally, we have $$\langle \mathrm{Flatten}(\mathbf{a}), \mathrm{PowersOf2}(\mathbf{b})\rangle = \langle \mathrm{BitComp}(\mathbf{a}), \mathbf{b}\rangle$$ by combining the previous two statements.
For example, let $q = 8$, $\mathbf{a} = (2, 3, 0)$ and $\mathbf{b} = (3)$. BitComp on $\mathbf{a}$ gives $(2^0\cdot 2 + 2^1\cdot 3 + 2^2\cdot 0) = (8) \equiv (0) \bmod 8$, so $\langle \mathrm{BitComp}(\mathbf{a}), \mathbf{b}\rangle = 0$. BitDecomp on $(0)$ gives $\mathrm{Flatten}(\mathbf{a}) = (0, 0, 0)$, and $\mathrm{PowersOf2}(\mathbf{b}) = (3, 6, 4)$, so $\langle \mathrm{Flatten}(\mathbf{a}), \mathrm{PowersOf2}(\mathbf{b})\rangle = 0$ as well. The same example in Sage:
q = 8
R = IntegerModRing(q)
a = vector(R, (2,3,0))
b = vector(R, (3,))
bit_comp(a).dot_product(b) == flatten(a).dot_product(powers_of_two(b))
Hence, running Flatten on a ciphertext $C$ produces a strongly bounded matrix $\mathrm{Flatten}(C)$ with $\mathrm{Flatten}(C)\cdot\mathbf{v} = C\cdot\mathbf{v}$, provided the secret key has the form $\mathbf{v} = \mathrm{PowersOf2}(\mathbf{s})$. Boom.^{1}
It remains to sample a key and to argue why this construction is secure if LWE is secure for the choice of parameters.
To generate a public key, pick LWE parameters $n$, $m$ and $q$, and sample an LWE instance $A'$, $\mathbf{b} = A'\cdot\mathbf{t} + \mathbf{e}$ as usual. The secret key is $\mathbf{v} = \mathrm{PowersOf2}((1, -\mathbf{t}))$, which is of dimension $N = (n+1)\cdot k$ for $k = \lceil\log_2 q\rceil$. The public key is $A = (\mathbf{b} \,\|\, A')$, where we roll $\mathbf{b}$ into $A'$, which now has dimension $m \times (n+1)$; note that $A\cdot(1, -\mathbf{t}) = \mathbf{e}$.
To encrypt, sample an $N \times m$ matrix $R$ with $\{0, 1\}$ entries and compute $R\cdot A$, which is an $N \times (n+1)$ matrix. That is, we do exactly what we would do with Regev’s original public-key scheme based on LWE: taking random linear combinations of the rows of $A$ to produce fresh samples. Run BitDecomp on the output to get an $N \times N$ matrix. Finally, set $$C = \mathrm{Flatten}\big(\mu\cdot I_N + \mathrm{BitDecomp}(R\cdot A)\big).$$
For correctness, observe: $$C\cdot\mathbf{v} = \mu\cdot\mathbf{v} + \mathrm{BitDecomp}(R\cdot A)\cdot\mathbf{v} = \mu\cdot\mathbf{v} + R\cdot A\cdot(1, -\mathbf{t}) = \mu\cdot\mathbf{v} + R\cdot\mathbf{e}.$$
The security argument is surprisingly simple. Consider $C = \mathrm{Flatten}(\mu\cdot I_N + \mathrm{BitDecomp}(R\cdot A))$. Because its argument is already (up to the $\mu\cdot I_N$ term) the output of a bit decomposition, $C$ is determined by $\mathrm{BitComp}(\mu\cdot I_N + \mathrm{BitDecomp}(R\cdot A)) = \mu\cdot\mathrm{BitComp}(I_N) + R\cdot A$, so it reveals nothing more than $R\cdot A$. Now, $R\cdot A$ is statistically uniform by the leftover hash lemma for uniform $A$. Finally, $A$ is indistinguishable from a uniform matrix by the decisional LWE assumption. Boom.
So far, this scheme is not more efficient than previous ring-based schemes such as BGV, even asymptotically. However, an observation by Zvika Brakerski and Vinod Vaikuntanathan in Lattice-Based FHE as Secure as PKE changed this. The observation is that the order of multiplications matters. Let’s multiply four ciphertexts $C_1, C_2, C_3, C_4$.
The traditional approach would be: $$(C_1\cdot C_2)\cdot(C_3\cdot C_4).$$
In this approach the noise grows as follows: $$\mu_3\,\mu_4\,(\mu_2\,\mathbf{e}_1 + C_1\,\mathbf{e}_2) + C_1 C_2\,(\mu_4\,\mathbf{e}_3 + C_3\,\mathbf{e}_4).$$
Note the $C_1 C_2$ factor in front of the accumulated noise $\mu_4\,\mathbf{e}_3 + C_3\,\mathbf{e}_4$: at every level of the tree the noise of one subproduct is multiplied by a matrix of dimension $N$. That is, for $L$ multiplications we get a noise of size roughly $(N+1)^{\log_2 L}\cdot B$.
In contrast, consider this sequential approach: $$\big((C_1\cdot C_2)\cdot C_3\big)\cdot C_4.$$
Under this order, the noise grows as: $$\mu_4\,\big(\mu_3\,(\mu_2\,\mathbf{e}_1 + C_1\,\mathbf{e}_2) + C_1 C_2\,\mathbf{e}_3\big) + C_1 C_2 C_3\,\mathbf{e}_4.$$
Note that each matrix product is multiplied by some fresh $\mathbf{e}_i$ only. Hence, here the noise grows linearly in the number of multiplications, i.e. as $L\cdot N\cdot B$.
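Here is a toy plain-Python model of the two multiplication orders. It uses the simplified per-multiplication bound e_{C·D} ≤ e_C + N·e_D (all messages set to 1, entries of C bounded by 1 after flattening); the concrete numbers N, B and L are made up purely for illustration:

```python
N = 100   # ciphertext matrix dimension (illustrative)
B = 1     # fresh noise bound
L = 16    # number of ciphertexts, all messages mu = 1

def mult(e_C, e_D):
    # simplified bound on the noise of C*D: mu_D*e_C + ||C*e_D|| <= e_C + N*e_D
    return e_C + N * e_D

# balanced tree: (C1*C2)*(C3*C4), then pair up the results, and so on
level = [B] * L
while len(level) > 1:
    level = [mult(level[i], level[i + 1]) for i in range(0, len(level), 2)]
tree_noise = level[0]

# sequential: ((C1*C2)*C3)*C4..., accumulator on the left, fresh term on the right
seq_noise = B
for _ in range(L - 1):
    seq_noise = mult(seq_noise, B)

# the tree grows like (N+1)^log2(L), the sequential order like 1 + (L-1)*N
assert tree_noise == (N + 1) ** 4
assert seq_noise == 1 + (L - 1) * N
```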
Lecturer in Information Security
[…]
Applications are invited for the post of Lecturer in the Information Security Group at Royal Holloway, University of London.
Applications are invited from researchers whose interests are related to, or complement, current strengths of the ISG. We are particularly interested in applicants who will be able to help drive forward research related to Internet of Things (IoT) security.
Applicants should have a Ph.D. in a relevant subject or equivalent, be a self-motivated researcher, and have a strong publication record. Applicants should be able to demonstrate an enthusiasm for teaching and communicating with diverse audiences, as well as show an awareness of contemporary issues relating to cyber security.
This is a full time and permanent post, with an intended start date of 1st September, 2016, although an earlier or slightly later start may be possible. This post is based in Egham, Surrey, where the College is situated in a beautiful, leafy campus near to Windsor Great Park and within commuting distance from London.
For an informal discussion about the post, please contact Prof. Keith Mayes on keith.mayes@rhul.ac.uk.
To view further details of this post and to apply please visit https://jobs.royalholloway.ac.uk/. The Human Resources Department can be contacted with queries by email at: recruitment@rhul.ac.uk or via telephone on: +44 (0)1784 41 4241.
Please quote the reference: 0216-068
Closing Date: Midnight, 1st April 2016
Interview Date: To be confirmed
We particularly welcome female applicants as they are under-represented at this level in the Department of Information Security within Royal Holloway, University of London.