Multicore BKZ in FPLLL

There have been a few works recently that give FPLLL a hard time when considering parallelism on multicore systems. That is, they compare FPLLL’s single-core implementation against their multicore implementations, which is fair enough. However, support for parallel enumeration has existed for a while in the form of fplll-extenum. Motivated by these works, we merged that library into FPLLL itself a year ago. However, we didn’t document the multicore performance that this gives us. So here we go.

I ran

# sweep the number of enumeration threads t, scaling the number of
# parallel jobs down so that roughly 28 cores stay busy throughout
for t in 1 2 4 8; do
  ./compare.py -j $(expr 28 / $t) -t $t -s 512 -u 80 ./fplll/strategies/default.json
done

where compare.py is from the strategizer and default.json is from a PR against FPLLL. Note that these strategies were not optimised for multicore performance. I ran this on a system with two Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz, i.e. 28 cores in total. The resulting speed-ups are shown below:

[Figure: multicore-bkz-in-fplll-default.png – speed-up over a single core for 1, 2, 4 and 8 threads]

As you can see, the speed-up is okay for two cores but diminishes as we throw more cores at the problem. I assume that this is partly due to the block sizes being relatively small (for larger block sizes G6K – which scales rather well on multiple cores – will be faster). I also suspect that this is partly an artefact of the strategies not being optimised for multiple cores, i.e. of not getting the trade-off between enumeration (multicore) and preprocessing (partly single-core due to LLL calls) right. If someone wants to step up and compute some strategies for multiple cores, that’d be neat.
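For the record, the underlying computation being timed is plain BKZ reduction. Here is a minimal fpylll sketch – the dimension and block size are made up for illustration, default.json is assumed to sit in the working directory, and the number of enumeration threads is what compare.py’s -t flag controls above:

# minimal sketch, not the benchmark itself: BKZ-reduce a random q-ary basis
# using a strategies file such as default.json above
from fpylll import IntegerMatrix, BKZ

A = IntegerMatrix.random(100, "qary", k=50, bits=30)  # toy lattice basis
params = BKZ.Param(block_size=40,
                   strategies="default.json",  # preprocessing strategies
                   flags=BKZ.AUTO_ABORT)
BKZ.reduction(A, params)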

Virtual FPLLL Days 6 aka Bounded Distance Development

The sixth FPLLL Days will be held on 19 and 20 November 2020. For obvious reasons, they will be held online. (Who knows, we might end up liking the format enough to do more of these online in a post-COVID world, too.)

As with previous incarnations of FPLLL Days, everyone who wishes to contribute to open-source lattice-reduction software is welcome to attend. In particular, you do not have to work on FPLLL or any of its sibling projects like FPyLLL or G6K. Work on whatever advances the state of the art or seems useful to you personally. That said, we do have the ambition to suggest projects from the universe of FPLLL, and I personally hope that some people will dig in with me to do the sort of plumbing that keeps projects like these running. To give you an idea of what people worked on in the past, you can find the list of projects from previous FPLLL Days on the wiki and a report on the PROMETHEUS blog.

The format is yet to be determined. We are going to coordinate using Zulip. I think it would also be useful to have a brief conference call, roll-call style, on the first day to break the ice and to make it easier for people to reach out to each other for help during the event. This is why we’re asking people to indicate their timezone on the wiki. If you have ideas for making this event a success, please let us know.

PS: For the record: Joe threatened a riot if I didn’t call it “Bounded Distance Development”, so credit goes to him for the name.

The Vacuity of the Open Source Security Testing Methodology Manual

Our paper on the Open Source Security Testing Methodology Manual – written together with Rikke Jensen – has been accepted to the Security Standardisation Research Conference (SSR 2020). Here’s the abstract:

The Open Source Security Testing Methodology Manual (OSSTMM) provides a “scientific methodology for the accurate characterization of operational security”. It is extensively referenced in writings aimed at security testing professionals such as textbooks, standards and academic papers. In this work we offer a fundamental critique of OSSTMM and argue that it fails to deliver on its promise of actual security. Our contribution is threefold and builds on a textual critique of this methodology. First, OSSTMM’s central principle is that security can be understood as a quantity of which an entity has more or less. We show why this is wrong and how OSSTMM’s unified security score, the rav, is an empty abstraction. Second, OSSTMM disregards risk by replacing it with a trust metric which confuses multiple definitions of trust and, as a result, produces a meaningless score. Finally, OSSTMM has been hailed for its attention to human security. Yet it understands all human agency as a security threat that needs to be constantly monitored and controlled. Thus, we argue that OSSTMM is neither fit for purpose nor can it be salvaged, and it should be abandoned by security professionals.

This is most definitely the strangest paper I have ever written. First, the idea for writing this paper came out of teaching IY5610 Security Testing in the Information Security MSc at Royal Holloway. Whereas my employer likes the tagline “research inspired teaching”, I guess this is a case of “teaching inspired research”.

Second, this paper, bringing together scholarship from many different disciplines, has a most eclectic list of references: security testing, cryptography, HCI, ethnography, military field manuals, supreme court decisions – we got it all.

Third, the paper is unusual, at least for information security, in how it proceeds:

While information security research routinely features critiques of security technologies in the form of “attack papers”, analogues of such works for policies, frameworks and conceptions are largely absent from its core venues. This work is a textual critique of OSSTMM based on a close reading of the methodology and pursues two purposes. First, immediately, to show that OSSTMM is inadequate as a security testing methodology, despite being referenced routinely in the security testing literature. Second, more mediated, to show that the ideas at the core of OSSTMM are wrong. As we show [later in the paper], these ideas are not OSSTMM’s privilege. It is for this reason that we chose the form of a textual critique over alternative approaches such as empirical studies to the effectiveness of OSSTMM in practice.

That said, the paper says things that I think are worth saying beyond OSSTMM. Both bogus quantification and questionable ideas about social aspects of information security are widespread in the field. Thus, while OSSTMM provides particularly striking examples of these mistakes, we think our points apply more broadly:

While OSSTMM expresses the methodological dogma that scientific knowledge equals quantification particularly crudely, this is not its privilege. Rather, this conviction is common across information security, as exemplified, for example, in CVSS, which claims to score security vulnerabilities by a single magnitude. Moreover, the somewhat bad reputation of security testing as a “tickbox exercise” speaks of the same limitation: counting rather than understanding. Echoing the critique of CVSS, we thus suggest, too, that security professionals “skip converting qualitative measurements to numbers”. The healthy debates in other disciplines provide material for a debate within information security to examine the correctness and utility of assigning numerical values to various pieces of data.

A mistake we criticise in OSSTMM is the failure to recognise that the moments of a social organisation are different from the moments of a computer network. This, too, is no privilege of OSSTMM as can be easily verified by the prevalence of mantras along the lines of “humans/people/users are the weakest link”. This standpoint, which is as prevalent as it is wrong, offers the curious indictment that people fail to integrate into a piece of technology that does not work for them. In the context of security testing this standpoint has a home under the heading of “social engineering” and its most visible expression: routine but ineffective phishing simulations. It is worth noting, though, that even when the focus is exclusively on technology, not engaging with the social relations that this technology ought to serve may produce undesirable results, for example leading to designs of technological controls with draconian effects where less invasive means would have been adequate.

More broadly, the tendency of information security to rely on psychology, dominated by individualistic and behavioural perspectives and quantitative approaches to understanding social and human aspects of security, may represent an obstacle. Alternative methodological approaches from the social sciences, particularly from sociology and even anthropology, such as semi-structured interviews, participant-led focus groups and ethnography offer promising avenues to deeply understand the security practices and needs in an organisation.

Postdoc at Royal Holloway on Lattice-based Cryptography

I’m looking for a postdoc to work with me and others – in the ISG and at Imperial College – on lattice-based cryptography and, ideally, its connections to coding theory.

The ISG is a nice place to work; it’s a friendly environment with strong research going on in several areas. We have people working across the field of information security, including several people working on cryptography. For example, Carlos Cid, Anamaria Costache, Lydia Garms, Jianwei Li, Sean Murphy, Rachel Player, Eamonn Postlethwaite, Joe Rowell, Fernando Virdia and I have all looked at or are looking at lattice-based cryptography.

A postdoc here is a 100% research position, i.e. you wouldn’t have teaching duties. That said, if you’d like to gain some teaching experience, we can arrange for that as well.

If you have e.g. a two-body problem and would like to discuss flexibility about being in the office (assuming we’ll all be back in the office at some post-COVID-19 point), feel free to get in touch.


Mesh Messaging in Large-scale Protests: Breaking Bridgefy

Together with Jorge Blasco, Rikke Bjerg Jensen and Lenka Marekova, we have studied the security of the Bridgefy mesh messaging application. This work was motivated by (social) media reports that this application was or is used by participants in large-scale protests in anticipation of or in response to government-mandated Internet shutdowns (or simply because the network infrastructure cannot handle as many devices at the same time as are present during such large protests). The first reports were about Hong Kong; later reports covered India, Iran, the US, Zimbabwe, Belarus and Thailand (typically referencing Hong Kong as an inspiration). In such a situation, mesh networking seems promising: a network is formed between participants’ phones, creating an ad-hoc local network over which messages are routed.
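To give an idea of the basic mechanism – a generic sketch, not Bridgefy’s actual protocol – here is flooding, the simplest form of mesh routing, where every phone rebroadcasts each message it has not seen before:

# toy flooding relay: a message hops across the crowd without any
# Internet infrastructure, as long as the phones form a connected graph
from collections import deque

def flood(adjacency, source):
    """Return the set of phones a message broadcast by `source` reaches."""
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency[node]:  # phones within Bluetooth range
            if neighbour not in seen:      # rebroadcast only fresh messages
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# three phones in a line: A <-> B <-> C; A's message reaches C via B
print(flood({"A": ["B"], "B": ["A", "C"], "C": ["B"]}, "A"))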

Now, Bridgefy wasn’t designed with this use-case in mind. Rather, its designers had large sports events or natural disasters in mind. Leaving aside the discussion of whether those use-cases, too, warrant a secure-by-default design, we had reason to suspect that the security offered by Bridgefy might not match the expectations of those who might rely on it.

Indeed, we found a series of vulnerabilities in Bridgefy. Our results show that Bridgefy currently permits its users to be tracked, offers no authenticity and no effective confidentiality protections, and lacks resilience against adversarially crafted messages. We verify these vulnerabilities by demonstrating a series of practical attacks on Bridgefy. Thus, if protesters rely on Bridgefy, an adversary can produce social graphs about them, read their messages, impersonate anyone to anyone and shut down the entire network with a single maliciously crafted message. For a good overview, see Dan Goodin’s article on our work at Ars Technica.

We disclosed these vulnerabilities to the Bridgefy developers in April 2020 and agreed on a public disclosure date of 20 August 2020. Starting from 1 June 2020, the Bridgefy team began warning their users that they should not expect confidentiality guarantees from the current version of the application.

Let me stress, however, that as of 24 August Bridgefy has not been patched to fix these vulnerabilities, i.e. they are present in the currently deployed version. The developers are currently implementing and testing a switch to the Signal protocol to provide cryptographic assurances in their SDK. This switch, if done correctly, would rule out many of the attacks described in our work. They hope to have this fix deployed soon.


Conda, Jupyter and Emacs

Jupyter is great. Yet, I find myself missing all the little tweaks I made to Emacs whenever I have Jupyter open in a browser. The obvious solution is to have Jupyter in Emacs. One solution is EIN, the Emacs IPython Notebook. However, I had a mixed experience with it: it would often hang and eat up memory (I never bothered to try to debug this behaviour). A neat alternative, for me, is emacs-jupyter. Here’s my setup.


What does “secure” mean in Information Security?

This text – written by Rikke Jensen and me – first appeared in the ISG Newsletter 2019/2020 under the title “What is Information Security?”. I’ve added a few links to this version.

The most fundamental task in information security is to establish what we mean by (information) security.

A possible answer to this question is given in countless LinkedIn posts, thought-leader blog entries and industry white papers: Confidentiality, Integrity, Availability. Since the vacuity of the “CIA Triad” is covered in the first lecture of the Security Management module of our MSc, we will assume our readers are familiar with it and will avoid this non-starter. Let us consider the matter more closely.

One subfield of information security that takes great care in tending to its definitions is cryptography. For example, Katz and Lindell write: “A key intellectual contribution of modern cryptography has been the recognition that formal definitions of security are an essential first step in the design of any cryptographic primitive or protocol”. Indeed, finding the correct security definition for a cryptographic primitive or protocol is a critical part of cryptographic work. That these definitions can be non-intuitive yet correct is made acutely apparent when asking students in class to come up with ideas of what it could mean for a block cipher to be secure. They never arrive at PRP security but propose security notions that are, well, broken.
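For concreteness – this is my notation, not from the newsletter piece – the definition the students are groping towards is PRP security: a block cipher $E$ is a secure pseudorandom permutation if no efficient adversary $\mathcal{A}$ can distinguish $E$ under a uniformly random key $K$ from a uniformly random permutation $\pi$ on the same domain, i.e. if the advantage

\[
  \mathrm{Adv}^{\mathrm{prp}}_{E}(\mathcal{A}) \;=\; \Bigl|\,\Pr\bigl[\mathcal{A}^{E_K(\cdot)} = 1\bigr] \;-\; \Pr\bigl[\mathcal{A}^{\pi(\cdot)} = 1\bigr]\,\Bigr|
\]

is negligible. Nothing in this definition appeals to intuitions about what “looks random”; it is purely a distinguishing game, which is part of why students rarely propose it unprompted.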

Fine, we can grant cryptography that it knows how to define what a secure block cipher is. That is, we can know what is meant by it being secure, but does that imply that we are? Cryptographic security notions – and everything that depends on them – do not exist in a vacuum; they have reasons to be. While the immediate objects of cryptography are not social relations, it presumes and models them. This fact is readily acknowledged in the introductions of cryptographic papers where authors illustrate the utility of their proposed constructions by reference to some social situation where several parties have conflicting ends but a need or desire to interact. Yet, this part of the definitional work has not received the same rigour from the cryptographic community as complexity-theoretic and mathematical questions. For example, Goldreich writes: “The foundations of cryptography are the paradigms, approaches, and techniques used to conceptualize, define, and provide solutions to natural ‘security concerns’ ”. Following Blanchette, we may ask in return: “How does one identify such ‘natural security concerns’? On these questions, the literature remains silent”.


Cryptographic Security Proofs as Dynamic Malware Analysis

This text first appeared in the ISG Newsletter 2019/2020. I’ve added a bunch of links to this version.

RSA encryption with insecure padding (PKCS #1 v1.5) is a gift that keeps on giving variants of Bleichenbacher’s chosen ciphertext attack. As the readers of this newsletter will know, RSA-OAEP (PKCS #1 v2) is recommended for RSA encryption. How do we know, though, that switching to RSA-OAEP will give us an encryption scheme that resists chosen ciphertext attacks? Cryptography has two answers to this. Without any additional assumptions the answer is that we don’t know (yet). In the Random Oracle Model (ROM), though, we can give an affirmative answer, i.e. RSA-OAEP was proven secure. Indeed, security proofs in the ROM (and its cousin the Ideal Cipher Model) underpin many cryptographic constructions that are widely deployed, such as generic transforms to achieve security against active attacks and block cipher modes of operation. This article is meant to give some intuition about how such ROM proofs go by means of an analogy to dynamic malware analysis.
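To preview the analogy – a toy sketch of mine, not from the newsletter piece – in a ROM proof the reduction itself plays the hash function: it samples each answer uniformly at random the first time a value is queried, replays it on repeat queries, and gets to observe every query the adversary makes, much like a sandbox hooks and logs the API calls of a binary under dynamic analysis:

import os

class RandomOracle:
    """Toy lazily-sampled random oracle."""
    def __init__(self, out_len=32):
        self.out_len = out_len
        self.table = {}  # query -> answer, sampled on first use
        self.log = []    # full query transcript, visible to the reduction

    def query(self, x: bytes) -> bytes:
        self.log.append(x)
        if x not in self.table:                       # fresh query:
            self.table[x] = os.urandom(self.out_len)  # sample a random answer
        return self.table[x]

ro = RandomOracle()
assert ro.query(b"attack at dawn") == ro.query(b"attack at dawn")  # consistent
print(ro.log)  # the "analyst" has seen every query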


Postdoc at Royal Holloway on Lattice-based Cryptography

Update (25/09/2020): new deadline: 30 October.

We are looking for a postdoc to join us to work on lattice-based cryptography. This postdoc is funded by the EU H2020 PROMETHEUS project for building privacy-preserving systems from advanced lattice primitives. At Royal Holloway, the project is looked after by Rachel Player and me. Feel free to e-mail me with any queries you might have.

The ISG is a nice place to work; it’s a very friendly environment with strong research going on in several areas. We have people working across the field of information security, including several people working on cryptography. A postdoc here is a 100% research position, i.e. you wouldn’t have teaching duties. That said, if you’d like to gain some teaching experience, we can arrange for that as well.

Also, if you have e.g. a two-body problem and would like to discuss flexibility about being in the office (assuming we’ll all be back in the office at some post-COVID-19 point), feel free to get in touch.
