Round-optimal Verifiable Oblivious Pseudorandom Functions from Ideal Lattices

PKC’21 is nearly upon us, which – in this day and age – means a new YouTube playlist of talks. Eamonn and Fernando wrote a nice paper on the success probability of solving unique SVP via BKZ, which Fernando is describing here:

Alex is presenting our work – with Amit and Nigel – on round-optimal Verifiable Oblivious PseudoRandom Functions (VOPRF) from ideal lattices here:

Since Alex is doing an amazing job at walking you through our paper, I won’t attempt this here. Rather, let me point out a – in my book – cute trick in one of our appendices that may have applications elsewhere.

Continue reading “Round-optimal Verifiable Oblivious Pseudorandom Functions from Ideal Lattices”

The Vacuity of the Open Source Security Testing Methodology Manual

Our paper – together with Rikke Jensen – on the Open Source Security Testing Methodology Manual has been accepted to the Security Standardisation Research Conference (SSR 2020). Here’s the abstract:

The Open Source Security Testing Methodology Manual (OSSTMM) provides a “scientific methodology for the accurate characterization of operational security”. It is extensively referenced in writings aimed at security testing professionals such as textbooks, standards and academic papers. In this work we offer a fundamental critique of OSSTMM and argue that it fails to deliver on its promise of actual security. Our contribution is threefold and builds on a textual critique of this methodology. First, OSSTMM’s central principle is that security can be understood as a quantity of which an entity has more or less. We show why this is wrong and how OSSTMM’s unified security score, the rav, is an empty abstraction. Second, OSSTMM disregards risk by replacing it with a trust metric which confuses multiple definitions of trust and, as a result, produces a meaningless score. Finally, OSSTMM has been hailed for its attention to human security. Yet it understands all human agency as a security threat that needs to be constantly monitored and controlled. Thus, we argue that OSSTMM is neither fit for purpose nor can it be salvaged, and it should be abandoned by security professionals.

This is most definitely the strangest paper I have ever written. First, the idea for writing this paper came out of teaching IY5610 Security Testing in the Information Security MSc at Royal Holloway. While my employer likes the tagline “research inspired teaching”, I guess this is a case of “teaching inspired research”.

Second, this paper, bringing together scholarship from many different disciplines, has a most eclectic list of references: security testing, cryptography, HCI, ethnography, military field manuals, supreme court decisions – we got it all.

Third, the paper is unusual, at least for information security, in how it proceeds:

While information security research routinely features critiques of security technologies in the form of “attack papers”, analogues of such works for policies, frameworks and conceptions are largely absent from its core venues. This work is a textual critique of OSSTMM based on a close reading of the methodology and pursues two purposes. First, immediately, to show that OSSTMM is inadequate as a security testing methodology, despite being referenced routinely in the security testing literature. Second, more mediated, to show that the ideas at the core of OSSTMM are wrong. As we show [later in the paper], these ideas are not OSSTMM’s privilege. It is for this reason that we chose the form of a textual critique over alternative approaches such as empirical studies of the effectiveness of OSSTMM in practice.

That said, the paper says things that I think are worth saying beyond OSSTMM. Both bogus quantification and questionable ideas about social aspects of information security are widespread in the field. Thus, while OSSTMM provides particularly striking examples of these mistakes, we think our points apply more broadly:

While OSSTMM expresses the methodological dogma that scientific knowledge equals quantification particularly crudely, this is not its privilege. Rather, this conviction is common across information security, as exemplified by CVSS, which claims to score security vulnerabilities by a single magnitude. Moreover, the somewhat bad reputation of security testing as a “tickbox exercise” speaks of the same limitation: counting rather than understanding. Echoing the critique of CVSS, we thus suggest, too, that security professionals “skip converting qualitative measurements to numbers”. The healthy debates in other disciplines provide material for a debate within information security to examine the correctness and utility of assigning numerical values to various pieces of data.

A mistake we criticise in OSSTMM is the failure to recognise that the moments of a social organisation are different from the moments of a computer network. This, too, is no privilege of OSSTMM, as can easily be verified by the prevalence of mantras along the lines of “humans/people/users are the weakest link”. This standpoint, which is as prevalent as it is wrong, offers the curious indictment that people fail to integrate into a piece of technology that does not work for them. In the context of security testing this standpoint has a home under the heading of “social engineering” and its most visible expression: routine but ineffective phishing simulations. It is worth noting, though, that even when the focus is exclusively on technology, not engaging with the social relations that this technology ought to serve may produce undesirable results, for example leading to designs of technological controls with draconian effects where less invasive means would have been adequate.

More broadly, the tendency of information security to rely on psychology, dominated by individualistic and behavioural perspectives and quantitative approaches to understanding social and human aspects of security, may represent an obstacle. Alternative methodological approaches from the social sciences, particularly from sociology and even anthropology, such as semi-structured interviews, participant-led focus groups and ethnography offer promising avenues to deeply understand the security practices and needs in an organisation.

Writing (Crypto) Papers and Version Control

Academics write. Academics in my field also tend to write in groups of two to five people. Back in the dark days — which I’m told are not over for many researchers — this used to involve mailing LaTeX sources around, forgetting to attach the right file, “I take the editing token for file.tex” e-mails, questions like “Where is the most recent version of the draft?” and so on. In some cases, I’m told authors actually sat down together and did grammar fixes in a meeting. In a word, it was horrible.

Judging from anecdotal evidence, it is not that bad anymore. Many people now use some sort of revision control to write their papers, with either Subversion or Git being the tool of choice. However, my impression is that we don’t use the tools available to us to the extent we should. So let me try to make my case for a better practice of collaborative writing for (crypto) academics.

Continue reading “Writing (Crypto) Papers and Version Control”

On the concrete hardness of Learning with Errors

Together with Rachel Player and Sam Scott (both also from the Information Security Group at Royal Holloway, University of London), we finally managed to put out our survey on solving the Learning with Errors problem. Here’s the abstract:

The Learning with Errors (LWE) problem has become a central building block of modern cryptographic constructions. This work collects and presents hardness results for concrete instances of LWE. In particular, we discuss algorithms proposed in the literature and give the expected resources required to run them. We consider both generic instances of LWE as well as small secret variants. Since for several methods of solving LWE we require a lattice reduction step, we also review lattice reduction algorithms and use a refined model for estimating their running times. We also give concrete estimates for various families of LWE instances, provide a Sage module for computing these estimates and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.
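If you want to play with concrete instances yourself, the following is a minimal sketch of what an LWE instance looks like, using the convention that the error has standard deviation σ = αq/√(2π). The parameters below are toy values picked purely for illustration, not recommendations from the survey, and the rounded Gaussian is only a stand-in for a proper discrete Gaussian sampler.

```python
# Toy LWE instance generator: m samples (a_i, <a_i, s> + e_i mod q) with a
# uniform secret s and rounded-Gaussian errors. Parameters are illustrative.
import math
import random

def lwe_samples(n=64, q=4093, alpha=0.005, m=128, seed=0):
    rng = random.Random(seed)
    sigma = alpha * q / math.sqrt(2 * math.pi)   # error standard deviation
    s = [rng.randrange(q) for _ in range(n)]     # uniform secret in Z_q^n
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        e = round(rng.gauss(0, sigma))           # rounded Gaussian error
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples

s, samples = lwe_samples()
print(len(samples), "samples in dimension", len(samples[0][0]))
```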

Continue reading “On the concrete hardness of Learning with Errors”

Lazy Modulus Switching for the BKW Algorithm on LWE

Our paper (with Jean-Charles Faugère, Robert Fitzpatrick and Ludovic Perret) on solving small secret LWE faster just hit ePrint (and was accepted for presentation at PKC 2014).

Abstract. Some recent constructions based on LWE do not sample the secret uniformly at random but rather from some distribution which produces small entries. The most prominent of these is the binary-LWE problem where the secret vector is sampled from {0, 1}* or {-1, 0, 1}*. We present a variant of the BKW algorithm for binary-LWE and other small secret variants and show that this variant reduces the complexity for solving binary-LWE. We also give estimates for the cost of solving binary-LWE instances in this setting and demonstrate the advantage of this BKW variant over standard BKW and lattice reduction techniques applied to the SIS problem. Our variant can be seen as a combination of the BKW algorithm with a lazy variant of modulus switching which might be of independent interest.

The code used to produce experimental data is available on bitbucket, source code to compute our complexity estimations is also available. Slides for a presentation discussing this work are also available on bitbucket.
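For intuition, here is a minimal sketch of plain (non-lazy) modulus switching applied to a single LWE sample: every entry is scaled from Z_q down to Z_p by rounding p/q times the value. When the secret is small, the rounding errors contribute little additional noise, which is what makes the technique attractive for binary-LWE; the lazy variant in the paper delays and batches this rounding. The parameters below are toy values.

```python
# Plain modulus switching for one LWE sample (a, b) over Z_q, mapped to Z_p.
# Toy parameters; only meant to illustrate the rounding step itself.
def switch_modulus(a, b, q, p):
    scale = lambda x: round(p * x / q) % p
    return [scale(ai) for ai in a], scale(b)

a, b = [123, 4000, 17, 2048], 399
print(switch_modulus(a, b, q=4093, p=257))
```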

Enrico Bertolazzi’s linear algebra code over GF(2) available

Enrico made his LU factorisation code over GF(2) available online under the GPL (if the link doesn’t work, search for his name on ResearchGate). It is an implementation of the algorithm described by Anna and him in Fast matrix decomposition in F2, for which they give timings in that paper (discussed a bit here). I had to fix some includes to make it compile on my box, but nothing major. I can also confirm the impressive performance of their software (I ran testRankComputation).
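For readers who have never looked at linear algebra over GF(2) up close, here is a minimal, completely unoptimised sketch of Gaussian elimination with rows packed into Python integers, so that adding one row to another is a single XOR. It only illustrates the basic idea and is in no way comparable to the blocked, cache-friendly routines in Enrico’s code or in M4RI.

```python
# Naive Gaussian elimination over GF(2); rows are packed into integers and
# row addition is XOR. Returns the rank. Purely illustrative, not optimised.
def rank_gf2(rows, ncols):
    rows = list(rows)
    rank = 0
    for col in range(ncols):
        bit = 1 << (ncols - 1 - col)
        pivot = next((i for i in range(rank, len(rows)) if rows[i] & bit), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] & bit:
                rows[i] ^= rows[rank]      # eliminate the bit in column col
        rank += 1
    return rank

# third row is the XOR of the first two, so the rank is 3
print(rank_gf2([0b1010, 0b0110, 0b1100, 0b0001], 4))
```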

Continue reading “Enrico Bertolazzi’s linear algebra code over GF(2) available”

An All-In-One Approach to Differential Cryptanalysis for Small Block Ciphers

A paper that I wrote with Gregor Leander is finally done, out and accepted for presentation at SAC.

We present a framework that unifies several standard differential techniques. This unified view allows us to consider many, potentially all, output differences for a given input difference and to combine the information derived from them in an optimal way. We then propose a new attack that implicitly mounts several standard, truncated, impossible, improbable and possible future variants of differential attacks in parallel and hence allows us to significantly improve upon known differential attacks using the same input difference. To demonstrate the viability of our techniques, we apply them to KATAN-32. In particular, our attack allows us to break 115 rounds of KATAN-32, which is 37 rounds more than previous work. For this, our attack exploits the non-uniformity of the difference distribution after 91 rounds which is 20 rounds more than the previously best known differential characteristic. Since our results still cover less than 1/2 of the cipher, they further strengthen our confidence in KATAN-32’s resistance against differential attacks.
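To give a flavour of the object such attacks start from, here is a minimal sketch computing a difference distribution table (DDT); the row for a fixed input difference is exactly the distribution over output differences that the all-in-one view works with. The 4-bit S-box used below is the PRESENT S-box, chosen purely as a convenient toy example (KATAN-32 itself is built from AND and XOR operations rather than S-boxes).

```python
# Difference distribution table of a toy 4-bit S-box (here: the PRESENT S-box).
# Entry table[dx][dy] counts the inputs x with S(x) ^ S(x ^ dx) == dy.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for dx in range(n):
        for x in range(n):
            table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return table

# The row for input difference 0x1 is the distribution over output differences.
print(ddt(SBOX)[1])
```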


Rank-profile revealing Gaussian elimination and the CUP matrix decomposition

by Claude-Pierre Jeannerod, Clément Pernet and Arne Storjohann is now available on the arXiv. I like this paper a lot and we also referenced it in both the M4RI elimination paper and the M4RIE paper, so three cheers that it’s now available.

Abstract: Transforming a matrix over a field to echelon form, or decomposing the matrix as a product of structured matrices that reveal the rank profile, is a fundamental building block of computational exact linear algebra. This paper surveys the well known variations of such decompositions and transformations that have been proposed in the literature. We present an algorithm to compute the CUP decomposition of a matrix, adapted from the LSP algorithm of Ibarra, Moran and Hui (1982), and show reductions from the other most common Gaussian elimination based matrix transformations and decompositions to the CUP decomposition. We discuss the advantages of the CUP algorithm over other existing algorithms by studying time and space complexities: the asymptotic time complexity is rank sensitive, and comparing the constants of the leading terms, the algorithms for computing matrix invariants based on the CUP decomposition are always at least as good except in one case. We also show that the CUP algorithm, as well as the computation of other invariants such as transformation to reduced column echelon form using the CUP algorithm, all work in place, allowing for example to compute the inverse of a matrix on the same storage as the input matrix.
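As a naive point of reference for what “rank profile revealing” means, the sketch below does plain cubic Gaussian elimination over a prime field GF(p) and records the pivot columns, i.e. the column rank profile. The CUP algorithm recovers this kind of information with a rank-sensitive, matrix-multiplication-based complexity and in place; this baseline has none of those properties.

```python
# Plain Gaussian elimination over GF(p) that reports the column rank profile
# (the indices of the pivot columns). Cubic and not in place; baseline only.
def column_rank_profile(M, p):
    M = [row[:] for row in M]                      # work on a copy
    nrows, ncols = len(M), len(M[0]) if M else 0
    profile, r = [], 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] % p), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        inv = pow(M[r][c], -1, p)                  # modular inverse (Python >= 3.8)
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        profile.append(c)
        r += 1
    return profile

print(column_rank_profile([[1, 2, 3], [2, 4, 6], [0, 0, 5]], 7))  # -> [0, 2]
```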

http://arxiv.org/abs/1112.5717

Efficient dense Gaussian elimination over the field with two elements

Finally, we finished our paper about Gaussian elimination in the M4RI library.

Abstract: In this work we describe an efficient implementation of a hierarchy of algorithms for Gaussian elimination upon dense matrices over the field with two elements (GF(2)). We discuss both well-known and new algorithms as well as our implementations in the M4RI library, which has been adopted into Sage. The focus of our discussion is a block iterative algorithm for PLE decomposition which is inspired by the M4RI algorithm. The implementation presented in this work provides considerable performance gains in practice when compared to the previously fastest implementation. We provide performance figures on x86_64 CPUs to demonstrate the alacrity of our approach.
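Since the library has been adopted into Sage, where it backs dense GF(2) matrices, the easiest way to exercise the implementation is from a Sage session. A minimal example follows; sizes and timings are illustrative only and depend on your machine and Sage/M4RI version.

```python
# Run inside Sage: dense GF(2) matrices are backed by M4RI, so echelonising a
# largish random matrix exercises the PLE-based routines discussed above.
import time

A = random_matrix(GF(2), 10000, 10000)   # roughly 12 MB of dense GF(2) data
t0 = time.time()
E = A.echelon_form()                     # reduced row echelon form via M4RI
print("rank:", E.rank(), "computed in", round(time.time() - t0, 2), "s")
```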

The sources of this document are available on bitbucket. But I also compiled a PDF.

Update: arXiv link.