Collective Information Security in Large-Scale Urban Protests: the Case of Hong Kong

Our work – with Jorge Blasco, Rikke Bjerg Jensen and Lenka Mareková – on the use of digital communication technologies in large-scale protests in Hong Kong was accepted at USENIX Security ’21. A pre-print is available on arXiv. Here’s the abstract:

The Anti-Extradition Law Amendment Bill protests in Hong Kong present a rich context for exploring information security practices among protesters due to their large-scale urban setting and highly digitalised nature. We conducted in-depth, semi-structured interviews with 11 participants of these protests. Research findings reveal how protesters favoured Telegram and relied on its security for internal communication and organisation of on-the-ground collective action; were organised in small private groups and large public groups to enable collective action; adopted tactics and technologies that enable pseudonymity; and developed a variety of strategies to detect compromises and to achieve forms of forward secrecy and post-compromise security when group members were (presumed) arrested. We further show how group administrators had assumed the roles of leaders in these ‘leaderless’ protests and were critical to collective protest efforts.

Our work can be seen in the tradition of “Can Johnny Build a Protocol? Co-ordinating developer and user intentions for privacy-enhanced secure messaging protocols”, which documented the divergence between what higher-risk users – such as those in conflict with the authorities of a nation state – need and want, and what secure messaging developers design for. This divergence is noteworthy because “human-rights activists” are a common point of reference in discussions around secure messaging.

However, our focus is not on activists but on participants in large-scale protests, i.e. it is more closely tied to specific needs in moments of heightened conflict, confrontation and mass mobilisation. In particular, we interviewed people who were in some shape or form involved in the Anti-ELAB protests in Hong Kong in 2019/2020. Several of our participants described themselves as “frontliners”, which roughly means they were present in areas where direct confrontations with law enforcement took place.

As the title suggests, our data speaks to how security needs and practices in this population are collective in nature: how decisions about security are made, what security features are deemed important and how people learn to understand security technologies. As an example, take post-compromise security and forward secrecy:

Continue reading “Collective Information Security in Large-Scale Urban Protests: the Case of Hong Kong”

On the concrete hardness of Learning with Errors

Together with Rachel Player and Sam Scott (both also from the Information Security Group at Royal Holloway, University of London), I finally managed to put out our survey on solving the Learning with Errors problem. Here’s the abstract:

The Learning with Errors (LWE) problem has become a central building block of modern cryptographic constructions. This work collects and presents hardness results for concrete instances of LWE. In particular, we discuss algorithms proposed in the literature and give the expected resources required to run them. We consider both generic instances of LWE as well as small secret variants. Since for several methods of solving LWE we require a lattice reduction step, we also review lattice reduction algorithms and use a refined model for estimating their running times. We also give concrete estimates for various families of LWE instances, provide a Sage module for computing these estimates and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.
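To make the object being estimated concrete, here is a minimal, illustrative LWE instance generator in Python. This is not the Sage module from the paper, and the rounded continuous Gaussian is only a stand-in for a proper discrete Gaussian sampler.

```python
import random

def lwe_samples(n, q, sigma, m, small_secret=False):
    """Return a secret s and m LWE samples (a, <a, s> + e mod q).

    A rounded continuous Gaussian stands in for a discrete Gaussian;
    small_secret=True gives a binary secret, as in the small secret
    variants discussed in the survey.
    """
    if small_secret:
        s = [random.choice((0, 1)) for _ in range(n)]
    else:
        s = [random.randrange(q) for _ in range(n)]
    samples = []
    for _ in range(m):
        a = [random.randrange(q) for _ in range(n)]
        e = round(random.gauss(0, sigma))
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples
```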

Continue reading “On the concrete hardness of Learning with Errors”

Improved Parameters and an Implementation of Graded Encoding Schemes from Ideal Lattices

Our paper (with Catalin Cocis, Fabien Laguillaumie and Adeline Langlois) on picking parameters and implementing GGHLite just hit ePrint. Here’s the abstract:

We discuss how to set parameters for GGH-like graded encoding schemes approximating cryptographic multilinear maps from ideal lattices and propose a strategy which reduces parameter sizes for concrete instances. Secondly, we discuss a first software implementation of a graded encoding scheme based on GGHLite, an improved variant of Garg, Gentry and Halevi’s construction (GGH) due to Langlois, Stehlé and Steinfeld. Thirdly, we provide an implementation of non-interactive N-partite Diffie-Hellman key exchange. We discuss our implementation strategies and show that our implementation outperforms previous work.
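To illustrate what non-interactive N-partite key exchange looks like on top of a graded encoding scheme, here is a schematic Python sketch. The ToyGradedEncoding class is a deliberately insecure, noise-free stand-in (an encoding of a at level i is simply a·z^i mod q); all names are illustrative and none of this is the GGHLite API from the paper, it only serves to make the protocol flow executable.

```python
import secrets

class ToyGradedEncoding:
    """Insecure, noise-free stand-in for a graded encoding over Z_q
    (q prime): an encoding of a at level i is a * z**i mod q for a
    hidden z. A real scheme adds noise and extracts a canonical string."""

    def __init__(self, q, kappa):
        self.q, self.kappa = q, kappa
        self._z = secrets.randbelow(q - 2) + 2

    def sample_secret(self):
        return secrets.randbelow(self.q - 1) + 1

    def encode(self, a, level):
        return a * pow(self._z, level, self.q) % self.q

    def mult(self, e1, e2):  # multiplying encodings adds their levels
        return e1 * e2 % self.q

    def extract(self, e):  # extraction at the top level kappa
        return e * pow(self._z, -self.kappa, self.q) % self.q

class Party:
    def __init__(self, scheme):
        self.scheme = scheme
        self.secret = scheme.sample_secret()         # a_i
        self.public = scheme.encode(self.secret, 1)  # level-1 encoding of a_i

    def shared_key(self, other_publics):
        # own secret at level 0 times N-1 level-1 encodings = level N-1
        e = self.scheme.encode(self.secret, 0)
        for u in other_publics:
            e = self.scheme.mult(e, u)
        return self.scheme.extract(e)  # every party recovers prod_i a_i

N = 3
scheme = ToyGradedEncoding(q=2**61 - 1, kappa=N - 1)  # kappa = N-1 levels
parties = [Party(scheme) for _ in range(N)]
pubs = [p.public for p in parties]
keys = [p.shared_key(pubs[:i] + pubs[i + 1:]) for i, p in enumerate(parties)]
assert len(set(keys)) == 1  # all parties derive the same key
```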

Continue reading “Improved Parameters and an Implementation of Graded Encoding Schemes from Ideal Lattices”

Lazy Modulus Switching for the BKW Algorithm on LWE

Our paper (with Jean-Charles Faugère, Robert Fitzpatrick and Ludovic Perret) on solving small secret LWE faster just hit ePrint (and was accepted for presentation at PKC 2014).

Abstract. Some recent constructions based on LWE do not sample the secret uniformly at random but rather from some distribution which produces small entries. The most prominent of these is the binary-LWE problem where the secret vector is sampled from \{0, 1\}^* or \{-1, 0, 1\}^*. We present a variant of the BKW algorithm for binary-LWE and other small secret variants and show that this variant reduces the complexity for solving binary-LWE. We also give estimates for the cost of solving binary-LWE instances in this setting and demonstrate the advantage of this BKW variant over standard BKW and lattice reduction techniques applied to the SIS problem. Our variant can be seen as a combination of the BKW algorithm with a lazy variant of modulus switching which might be of independent interest.
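For intuition, plain modulus switching maps an LWE sample mod q to one mod p < q by rounding; our variant applies this lazily inside BKW, but the basic map, sketched here in Python, is the standard one. With a small secret s, the rounding error only contributes a small additional noise term.

```python
def mod_switch(a, b, q, p):
    """Map an LWE sample (a, b) mod q to one mod p < q by rounding.

    For a small secret s, the inner product of the rounding error with
    s stays small, which is what makes the switch (and its lazy variant
    inside BKW) worthwhile.
    """
    a_p = [round(p * ai / q) % p for ai in a]
    b_p = round(p * b / q) % p
    return a_p, b_p
```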

The code used to produce the experimental data is available on bitbucket; the source code to compute our complexity estimates is also available. Slides for a presentation discussing this work can be found on bitbucket as well.

Lattice Stuff

We – with Jean-Charles Faugère, Robert Fitzpatrick and Ludovic Perret – managed to finish our work on the cryptanalysis of all proposed parameters of the public-key encryption scheme proposed at PKC 2012 by Huang, Liu and Yang. The key observation is that the scheme can be viewed as an easy LWE instance:

In this paper, we investigate the security of a public-key encryption scheme introduced by Huang, Liu and Yang (HLY) at PKC’12. This new scheme can be provably reduced to the hardness of solving a set of quadratic equations whose coefficients of highest degree are chosen according to a discrete Gaussian distribution, the other terms being chosen uniformly at random. Such a problem is a variant of the classical problem of solving a system of non-linear equations (PoSSo), which is known to be hard for random systems. The main hypothesis of Huang, Liu and Yang is that their variant is not easier than solving PoSSo for random instances. In this paper, we disprove this hypothesis. To this end, we exploit the fact that the new problem proposed by Huang, Liu and Yang reduces to an easy instance of the Learning With Errors (LWE) problem. The main contribution of this paper is to show that security and efficiency are essentially incompatible for the HLY proposal. That is, one cannot find parameters which yield a secure and a practical scheme. For instance, we estimate that a public-key of at least 1.03 GB is required to achieve 80-bit security against known attacks. As a proof of concept, we present practical attacks against all the parameters proposed by Huang, Liu and Yang. We have been able to recover the private-key in roughly one day for the first challenge proposed by HLY and in roughly three days for the second challenge.

Furthermore, yesterday I gave a talk at SIAM AG’13 on solving LWE with a binary secret using a variant of the BKW algorithm.

BKW: Update

We have updated our pre-print titled “On the Complexity of the BKW Algorithm on LWE” on ePrint.

There are two main changes, and they are the reasons why I am mentioning this update here.

  1. We included a more thorough comparison with other approaches, in particular, with lattice reduction (reducing LWE to SIS). To our surprise, BKW is quite competitive even in relatively modest dimensions. For Regev’s and Lindner-Peikert’s parameter sets (as interpreted here) we get that BKW is at least as fast as BKZ starting in dimension n \approx 250, which I find very low (see Table 4 on page 19).
  2. We also provide an alternative approximation for the running time of BKZ. The standard estimate due to Lindner-Peikert is \log_2 T_{sec} = 1.8/\log_2 \delta_0 - 110, where \delta_0 is the targeted root Hermite factor. Interpolating estimates from the BKZ 2.0 simulator and reflecting the doubly exponential running time of BKZ in the blocksize \beta, we found: \log_2 T_{sec} = 0.009/\log_2^2 \delta_0 - 27. However, since this might be controversial, we include estimates for both models; a short snippet evaluating both follows after this list.
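For concreteness, here is a small Python snippet that simply evaluates the two models above for a few root Hermite factors:

```python
from math import log2

def bkz_time_lp(delta0):
    """Lindner-Peikert model: log_2 T_sec = 1.8/log_2(delta0) - 110."""
    return 1.8 / log2(delta0) - 110

def bkz_time_bkz2(delta0):
    """BKZ 2.0 based model: log_2 T_sec = 0.009/log_2(delta0)^2 - 27."""
    return 0.009 / log2(delta0) ** 2 - 27

for delta0 in (1.012, 1.010, 1.008, 1.006):
    print(delta0, round(bkz_time_lp(delta0), 1), round(bkz_time_bkz2(delta0), 1))
```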

M4RI 20121224

I have just pushed the button to release M4RI 20121224. The main feature of this release is a considerable performance improvement. It all started with “Fast matrix decomposition in F2” by Enrico Bertolazzi and Anna Rimoldi showing up on the arXiv. Here’s the abstract:

In this work an efficient algorithm to perform a block decomposition (and so to compute the rank) of large dense rectangular matrices with entries in F2 is presented. Depending on the way the matrix is stored, the operations acting on rows or blocks of consecutive columns (stored as one integer) should be preferred. In this paper, an algorithm that completely avoids the column permutations is given. In particular, a block decomposition is presented and its running times are compared with the ones adopted into SAGE.

… and that comparison made M4RI (which realises this functionality in Sage) look pretty bad. I didn’t (and still don’t) share the implicit assumption that avoiding column swaps was the key ingredient in making this code so much faster than ours; I assume the impressive timings are due to a very efficient base case implementation. Anyway, we sat down and looked for performance bottlenecks, the result of which is 20121224. I actually have no idea whether we caught up with the code described in Enrico’s and Anna’s pre-print, as they did not publish their sources.
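To give a flavour of the storage layout at issue, here is a toy Python sketch (nothing like M4RI’s actual block-recursive algorithms): each row is stored as one integer, so adding one row to another is a single XOR, which is why the base case of such eliminations can be made very fast.

```python
# Toy rank computation over F2 with each row stored as one integer.
def rank_f2(rows, ncols):
    rank = 0
    for col in reversed(range(ncols)):  # pivot on the most significant bit
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1),
                     None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]  # row addition over F2 is XOR
        rank += 1
    return rank

assert rank_f2([0b101, 0b011, 0b110], 3) == 2  # third row = XOR of first two
```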

Still, the performance improvements over 20120613 were worth the trouble. Below are two plots of the (normalised) leading constants for multiplication and elimination respectively (more plots on imgur); that is, they plot the running time divided by n^{2.807} \cdot 10^9. In theory these plots should all have slope 0.
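In case it is useful, here is the normalisation spelled out as a tiny Python snippet, under my reading that the time is divided by n^{2.807} and scaled by 10^9; the timings below are made up for illustration.

```python
timings = {1000: 0.01, 2000: 0.08, 4000: 0.55}  # hypothetical n -> seconds

for n, t in sorted(timings.items()):
    print(n, t / n**2.807 * 1e9)  # normalised leading constant; flat if Strassen-like
```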

[Plots: Multiplication on Intel Core i7; PLE on Intel Core i7]

Finally, the corresponding plot for “Fast matrix decomposition in F2” starts very small but has a rather large slope. That’s why I concluded that the performance stems from a very efficient base case. I should get in touch with Enrico and Anna about this.

Rank-profile revealing Gaussian elimination and the CUP matrix decomposition

by Claude-Pierre Jeannerod, Clément Pernet and Arne Storjohann is now available on the arXiv. I like this paper a lot and we also referenced it in both the M4RI elimination paper and the M4RIE paper, so three cheers that it’s now available.

Abstract: Transforming a matrix over a field to echelon form, or decomposing the matrix as a product of structured matrices that reveal the rank profile, is a fundamental building block of computational exact linear algebra. This paper surveys the well known variations of such decompositions and transformations that have been proposed in the literature. We present an algorithm to compute the CUP decomposition of a matrix, adapted from the LSP algorithm of Ibarra, Moran and Hui (1982), and show reductions from the other most common Gaussian elimination based matrix transformations and decompositions to the CUP decomposition. We discuss the advantages of the CUP algorithm over other existing algorithms by studying time and space complexities: the asymptotic time complexity is rank sensitive, and comparing the constants of the leading terms, the algorithms for computing matrix invariants based on the CUP decomposition are always at least as good except in one case. We also show that the CUP algorithm, as well as the computation of other invariants such as transformation to reduced column echelon form using the CUP algorithm, all work in place, allowing for example to compute the inverse of a matrix on the same storage as the input matrix.

http://arxiv.org/abs/1112.5717
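As a toy illustration of the invariant these decompositions reveal, the following Python sketch computes the column rank profile of a matrix over F_p by plain Gaussian elimination. It is of course nothing like the paper’s in-place, rank-sensitive CUP algorithm.

```python
# Toy sketch: the column rank profile of A over F_p is the
# lexicographically smallest list of column indices whose columns
# are linearly independent.
def column_rank_profile(A, p):
    A = [row[:] for row in A]
    profile, r = [], 0  # r = index of the current pivot row
    for j in range(len(A[0])):
        pivot = next((i for i in range(r, len(A)) if A[i][j] % p), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        inv = pow(A[r][j], -1, p)
        for i in range(len(A)):
            if i != r and A[i][j] % p:
                c = A[i][j] * inv % p
                A[i] = [(x - c * y) % p for x, y in zip(A[i], A[r])]
        profile.append(j)
        r += 1
    return profile

# Column 1 is a multiple of column 0, so the profile is [0, 2].
assert column_rank_profile([[1, 2, 3], [2, 4, 6], [0, 0, 1]], 7) == [0, 2]
```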

The M4RIE library for dense linear algebra over small fields with even characteristic

I finally uploaded a pre-print of the M4RIE paper to the arXiv:

Abstract: In this work, we present the M4RIE library which implements efficient algorithms for linear algebra with dense matrices over \mathbb{F}_{2^e} for 2 \leq e \leq 10. As the name of the library indicates, it makes heavy use of the M4RI library both directly (i.e., by calling it) and indirectly (i.e., by using its concepts). We provide an open-source GPLv2+ C library for efficient linear algebra over \mathbb{F}_{2^e} for e small. In this library we implemented an idea due to Bradshaw and Boothby which reduces matrix multiplication over \mathbb{F}_{p^k} to a series of matrix multiplications over \mathbb{F}_p. Furthermore, we propose a caching technique – Newton-John tables – to avoid finite field multiplications which is inspired by Kronrod’s method (“M4RM”) for matrix multiplication over \mathbb{F}_2. Using these two techniques we provide asymptotically fast triangular solving with matrices (TRSM) and PLE-based Gaussian elimination. As a result, we are able to significantly improve upon the state of the art in dense linear algebra over \mathbb{F}_{2^e} with 2 \leq e \leq 10.
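As a toy instance of the Bradshaw-Boothby reduction for e = 2, here is a Python/numpy sketch: a matrix over \mathbb{F}_4 = \mathbb{F}_2[x]/(x^2+x+1) is stored as two \mathbb{F}_2 matrices, A = A_0 + x \cdot A_1, and one \mathbb{F}_4 product becomes a few \mathbb{F}_2 products. The library itself of course works on bit-packed M4RI matrices, and Karatsuba-style formulas can save some of the products.

```python
import numpy as np

def mul_f2(A, B):
    return (A @ B) % 2  # matrix product over F_2

def mul_f4(A0, A1, B0, B1):
    """Multiply A = A0 + x*A1 and B = B0 + x*B1 over F_4 = F_2[x]/(x^2+x+1).

    Naive version with four F_2 products; Karatsuba-style formulas
    can get by with three.
    """
    P00, P01 = mul_f2(A0, B0), mul_f2(A0, B1)
    P10, P11 = mul_f2(A1, B0), mul_f2(A1, B1)
    # x^2 = x + 1, so the x^2 coefficient P11 folds into both components.
    return (P00 + P11) % 2, (P01 + P10 + P11) % 2

I = np.eye(2, dtype=int)
Z = np.zeros((2, 2), dtype=int)
C0, C1 = mul_f4(Z, I, Z, I)  # (x*I) * (x*I) should equal (1 + x)*I
assert (C0 == I).all() and (C1 == I).all()
```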