Collective Information Security in Large-Scale Urban Protests: the Case of Hong Kong

Our work – with Jorge Blasco, Rikke Bjerg Jensen and Lenka Mareková – on the use of digital communication technologies in large-scale protests in Hong Kong was accepted at USENIX Security ’21. A pre-print is available on arXiv. Here’s the abstract:

The Anti-Extradition Law Amendment Bill protests in Hong Kong present a rich context for exploring information security practices among protesters due to their large-scale urban setting and highly digitalised nature. We conducted in-depth, semi-structured interviews with 11 participants of these protests. Research findings reveal how protesters favoured Telegram and relied on its security for internal communication and organisation of on-the-ground collective action; were organised in small private groups and large public groups to enable collective action; adopted tactics and technologies that enable pseudonymity; and developed a variety of strategies to detect compromises and to achieve forms of forward secrecy and post-compromise security when group members were (presumed) arrested. We further show how group administrators had assumed the roles of leaders in these ‘leaderless’ protests and were critical to collective protest efforts.

Our work can be seen in the tradition of “Can Johnny Build a Protocol? Co-ordinating developer and user intentions for privacy-enhanced secure messaging protocols”, which documented the divergence between what higher-risk users – such as those in conflict with the authorities of a nation state – need and want and what secure messaging developers design for. This divergence is noteworthy because “human-rights activists” are a common point of reference in discussions around secure messaging.

However, our focus is not activists but participants in large-scale protests, i.e. our focus is more closely tied to specific needs in moments of heightened conflict, confrontation and mass mobilisation. In particular, we interviewed people who were involved in some shape or form in the Anti-ELAB protests in Hong Kong in 2019/2020. Several of our participants described themselves as “frontliners”, which roughly means they were present in areas where direct confrontations with law enforcement took place.

As the title suggests, our data speaks to how security needs and practices in this population are collective in nature: how decisions about security are made, what security features are deemed important and how people learn to understand security technologies. As an example, take post-compromise security and forward secrecy:

Continue reading “Collective Information Security in Large-Scale Urban Protests: the Case of Hong Kong”

Round-optimal Verifiable Oblivious Pseudorandom Functions from Ideal Lattices

PKC’21 is nearly upon us, which – in this day and age – means a new YouTube playlist of talks. Eamonn and Fernando wrote a nice paper on the success probability of solving unique SVP via BKZ, which Fernando describes here:

Alex is presenting our work – with Amit and Nigel – on round-optimal Verifiable Oblivious PseudoRandom Functions (VOPRFs) from ideal lattices here:

Since Alex is doing an amazing job at walking you through our paper, I won’t attempt that here. Rather, let me point out a – in my book – cute trick in one of our appendices that may have applications elsewhere.

Continue reading “Round-optimal Verifiable Oblivious Pseudorandom Functions from Ideal Lattices”

Reader/Senior Lecturer/Associate Professor in the ISG

The ISG is recruiting a senior lecturer/reader (≡ associate professor in the US system). This is a full-time, permanent research and teaching position.

Look, I know this is post-Brexit England but let me give you a personal pitch for why you should apply:

  • It’s a big group. We have ~20 permanent members of staff working across the field of information security: cryptography, systems and social aspects. Check out our seminar programme and our publications to get a sense of what is going on in the group.
  • It’s a group with a good mix of areas and lots of interaction. UK universities don’t work like German ones, where professors have their little empires that don’t interact all that much. Rather, the hierarchies within a department are pretty flat (everybody is line managed by the Head of Department), which facilitates more interaction; at least within the ISG that’s true. For example, I’m currently working on a project with someone from the systems and software security lab and one of our social scientists. I doubt this sort of collaboration would have come about if we didn’t attend the same meetings, teach the same modules, go to lunch and to the pub together, etc. Interdisciplinarity imposed from above is annoying; when it emerges spontaneously it can be great.
  • It’s a nice group. People are genuinely friendly and we help each other out. It will be easy to find someone to proofread your grant applications or to share previously successful ones, etc. I don’t know any official numbers but the level of unionisation seems to be relatively high, which I also take as an indication that people don’t adopt an “everyone for themselves” approach.
  • We got funding for our Centre for Doctoral Training for the next few years (then we have to reapply). This means 10 PhD positions per year. Also, our CDT attracts strong students. My research career really took off after getting a chance to work with our amazing students.
  • The ISG is its own department (in a school with Physics, EE, Mathematics and Computer Science). All of our teaching is on information security with a focus on our Information Security MSc (which is huge). So you’ll get to teach information security.
  • The ISG has strong industry links. Thus, if that’s your cup of tea, it will be easy to get introductions etc. A side effect of these strong links is that consulting opportunities tend to pop up. Consulting is not only permitted by the employer but encouraged (they take a cut if you do it through them).
  • The ISG is a large group but Royal Holloway is a relatively small university. That means getting things done by speaking to the person in charge is often possible, i.e. it’s not some massive bureaucracy and exceptions can be negotiated.
  • It’s within one standard deviation from London. This means UCL and Surrey, and thus the researchers there, aren’t too far away. Also, you get to live in London (or near Egham if that’s your thing, no judgement).

We’d appreciate any help in spreading the word. Happy to answer any questions I can answer.

The ad says “senior lecturer” but, speaking for myself, I’d recommend applying even if you’re at the lecturer/assistant professor/Juniorprofessor stage of your career. Also, I’d encourage people from all areas of information security to apply.

Continue reading “Reader/Senior Lecturer/Associate Professor in the ISG”

On BDD with Predicate: Breaking the “Lattice Barrier” for the Hidden Number Problem

Nadia and I put our pre-print and our source code online for solving bounded distance decoding augmented with a predicate f(\cdot) that evaluates to true on the target and false (almost) everywhere else. Here’s the abstract:

Lattice-based algorithms in cryptanalysis often search for a target vector satisfying integer linear constraints as a shortest or closest vector in some lattice. In this work, we observe that these formulations may discard non-linear information from the underlying application that can be used to distinguish the target vector even when it is far from being uniquely close or short.

We formalize lattice problems augmented with a predicate distinguishing a target vector and give algorithms for solving instances of these problems. We apply our techniques to lattice-based approaches for solving the Hidden Number Problem, a popular technique for recovering secret DSA or ECDSA keys in side-channel attacks, and demonstrate that our algorithms succeed in recovering the signing key for instances that were previously believed to be unsolvable using lattice approaches. We carried out extensive experiments using our estimation and solving framework, which we also make available with this work.
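To give a flavour of the idea, here is a toy sketch (mine, not the paper’s algorithms, which do this properly at scale): in a small dimension we can simply brute-force a box of lattice points around the target and keep the candidate on which the predicate holds, even when it is not uniquely close. The basis, target and predicate below are made up for illustration.

from itertools import product

def bdd_with_predicate(basis, target, predicate, bound=3):
    """Return the lattice point closest to `target` among all points
    sum_i c_i * b_i with |c_i| <= bound that satisfy `predicate`."""
    best = None
    for coeffs in product(range(-bound, bound + 1), repeat=len(basis)):
        # lattice point v = sum_i c_i * b_i
        v = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(len(target))]
        dist = sum((vj - tj) ** 2 for vj, tj in zip(v, target)) ** 0.5
        if predicate(v) and (best is None or dist < best[1]):
            best = (v, dist)
    return best

# Made-up example: we know the target satisfies v[0] % 7 == 3, a
# non-linear constraint that a plain CVP formulation would discard.
basis = [[9, 0, 0], [1, 8, 0], [2, 3, 11]]
target = [12, 10, 12]
print(bdd_with_predicate(basis, target, lambda v: v[0] % 7 == 3))

The closest lattice point overall fails the predicate here, so a plain CVP solver would return the wrong vector; checking the predicate recovers the intended target even though it is not uniquely close.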

Continue reading “On BDD with Predicate: Breaking the “Lattice Barrier” for the Hidden Number Problem”

The 31st HP/HPE (Virtual) Colloquium on Information Security

This year, my colleague Rikke Jensen and I took over coordinating “HP/HPE Day”, our department’s annual flagship event. It will take place as a virtual event this year, which allows us to invite a bit more broadly than we usually do. Registration is free but mandatory – tickets will be allocated on a first come, first served basis.

Continue reading “The 31st HP/HPE (Virtual) Colloquium on Information Security”

Multicore BKZ in FPLLL

There have been a few works recently that give FPLLL a hard time when it comes to parallelism on multicore systems. That is, they compare FPLLL’s single-core implementation against their multicore implementations, which is fair enough. However, support for parallel enumeration has existed for a while in the form of fplll-extenum, and, motivated by these works, we merged that library into FPLLL itself a year ago. What we never did, though, was document the multicore performance this gives us. So here we go.

I ran

for t in 1 2 4 8; do
  ./compare.py -j $(expr 28 / $t) -t $t -s 512 -u 80 ./fplll/strategies/default.json
done

where compare.py is from the strategizer and default.json is from a PR against FPLLL. Note that these strategies were not optimised for multicore performance. I ran this on a system with two Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz, i.e. 28 cores in total. The resulting speed-ups are shown below.

[Figure: speed-ups for t = 1, 2, 4, 8 threads (multicore-bkz-in-fplll-default.png)]

As you can see, the speed-up is okay for two cores but diminishes as we throw more cores at the problem. I assume that this is partly due to the block sizes being relatively small (for larger block sizes G6K – which scales rather well on multiple cores – will be faster). I also suspect that it is partly an artefact of not optimising for multiple cores, i.e. of not picking the trade-off between enumeration (multicore) and preprocessing (partly single-core due to LLL calls) right. If someone wants to step up and compute some strategies for multiple cores, that’d be neat.
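For completeness, here is roughly what running BKZ with multithreaded enumeration looks like from Python. Treat this as a sketch: it assumes a recent fpylll built against an FPLLL with the parallel enumeration merged, where FPLLL.set_threads() is the knob controlling the enumeration thread pool; the dimensions and block size are illustrative.

from fpylll import IntegerMatrix, BKZ, FPLLL
from fpylll.algorithms.bkz2 import BKZReduction

FPLLL.set_threads(4)  # assumption: recent fpylll exposes the enumeration thread pool this way

A = IntegerMatrix.random(120, "qary", k=60, bits=30)  # random q-ary lattice basis
params = BKZ.Param(block_size=60,
                   strategies=BKZ.DEFAULT_STRATEGY,  # also not optimised for multicore
                   flags=BKZ.AUTO_ABORT)
BKZReduction(A)(params)  # reduces A in place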

Virtual FPLLL Days 6 aka Bounded Distance Development

The sixth FPLLL Days will be held on 19 and 20 November 2020. For obvious reasons they will be held online. (Who knows, we might end up liking the format enough to do more of these online in a post-COVID world, too).

As with previous incarnations of FPLLL Days, everyone who wishes to contribute to open-source lattice-reduction software is welcome to attend. In particular, you do not have to work on FPLLL or any of its sibling projects like FPyLLL or G6K. Work on whatever advances the state of the art or seems useful to you personally. That said, we do have the ambition to suggest projects from the universe of FPLLL and I personally hope that some people will dig in with me to do the sort of plumbing that keeps projects like these running. To give you an idea of what people worked on in the past, you can find the list of projects from previous FPLLL Days on the wiki and a report on the PROMETHEUS blog.

The format is yet to be determined. We are going to coordinate using Zulip. I think it would also be useful to have a brief conference call, roll-call style, on the first day to break the ice and to make it easier for people to reach out to each other for help during the event. This is why we’re asking people to indicate their timezone on the wiki. If you have ideas for making this event a success, please let us know.

PS: For the record: Joe threatened a riot if I didn’t call it “Bounded Distance Development”, so credit goes to him for the name.

The Vacuity of the Open Source Security Testing Methodology Manual

Our paper – together with Rikke Jensen – on the Open Source Security Testing Methodology Manual has been accepted to the Security Standardisation Research Conference (SSR 2020). Here’s the abstract:

The Open Source Security Testing Methodology Manual (OSSTMM) provides a “scientific methodology for the accurate characterization of operational security”. It is extensively referenced in writings aimed at security testing professionals such as textbooks, standards and academic papers. In this work we offer a fundamental critique of OSSTMM and argue that it fails to deliver on its promise of actual security. Our contribution is threefold and builds on a textual critique of this methodology. First, OSSTMM’s central principle is that security can be understood as a quantity of which an entity has more or less. We show why this is wrong and how OSSTMM’s unified security score, the rav, is an empty abstraction. Second, OSSTMM disregards risk by replacing it with a trust metric which confuses multiple definitions of trust and, as a result, produces a meaningless score. Finally, OSSTMM has been hailed for its attention to human security. Yet it understands all human agency as a security threat that needs to be constantly monitored and controlled. Thus, we argue that OSSTMM is neither fit for purpose nor can it be salvaged, and it should be abandoned by security professionals.

This is most definitely the strangest paper I have ever written. First, the idea for it came out of teaching IY5610 Security Testing in the Information Security MSc at Royal Holloway. My employer likes the tagline “research inspired teaching”; I guess this is a case of “teaching inspired research”.

Second, this paper, bringing together scholarship from many different disciplines, has a most eclectic list of references: security testing, cryptography, HCI, ethnography, military field manuals, supreme court decisions – we got it all.

Third, the paper is unusual, at least for information security, in how it proceeds:

While information security research routinely features critiques of security technologies in the form of “attack papers”, analogues of such works for policies, frameworks and conceptions are largely absent from its core venues. This work is a textual critique of OSSTMM based on a close reading of the methodology and pursues two purposes. First, immediately, to show that OSSTMM is inadequate as a security testing methodology, despite being referenced routinely in the security testing literature. Second, more mediated, to show that the ideas at the core of OSSTMM are wrong. As we show [later in the paper], these ideas are not OSSTMM’s privilege. It is for this reason that we chose the form of a textual critique over alternative approaches such as empirical studies to the effectiveness of OSSTMM in practice.

That said, the paper says things that I think are worth saying beyond OSSTMM. Both bogus quantification and questionable ideas about social aspects of information security are widespread in the field. Thus, while OSSTMM provides particularly striking examples of these mistakes, we think our points apply more broadly:

While OSSTMM expresses the methodological dogma that scientific knowledge equals quantification particularly crudely, this is not its privilege. Rather, this conviction is common across information security, as exemplified, for example, in CVSS, which claims to score security vulnerabilities by a single magnitude. Moreover, the somewhat bad reputation of security testing as a “tickbox exercise” speaks of the same limitation: counting rather than understanding. Echoing the critique of CVSS, we thus suggest, too, that security professionals “skip converting qualitative measurements to numbers”. The healthy debates in other disciplines provide material for a debate within information security to examine the correctness and utility of assigning numerical values to various pieces of data.

A mistake we criticise in OSSTMM is the failure to recognise that the moments of a social organisation are different from the moments of a computer network. This, too, is no privilege of OSSTMM as can be easily verified by the prevalence of mantras along the lines of “humans/people/users are the weakest link”. This standpoint, which is as prevalent as it is wrong, offers the curious indictment that people fail to integrate into a piece of technology that does not work for them. In the context of security testing this standpoint has a home under the heading of “social engineering” and its most visible expression: routine but ineffective phishing simulations. It is worth noting, though, that even when the focus is exclusively on technology, not engaging with the social relations that this technology ought to serve may produce undesirable results, for example leading to designs of technological controls with draconian effects where less invasive means would have been adequate.

More broadly, the tendency of information security to rely on psychology, dominated by individualistic and behavioural perspectives and quantitative approaches to understanding social and human aspects of security, may represent an obstacle. Alternative methodological approaches from the social sciences, particularly from sociology and even anthropology, such as semi-structured interviews, participant-led focus groups and ethnography offer promising avenues to deeply understand the security practices and needs in an organisation.

Postdoc at Royal Holloway on Lattice-based Cryptography

I’m looking for a postdoc to work with me and others – in the ISG and at Imperial College – on lattice-based cryptography and, ideally, its connections to coding theory.

The ISG is a nice place to work; it’s a friendly environment with strong research going on in several areas. We have people working across the field of information security, including several working on cryptography. For example, Carlos Cid, Anamaria Costache, Lydia Garms, Jianwei Li, Sean Murphy, Rachel Player, Eamonn Postlethwaite, Joe Rowell, Fernando Virdia and I have all looked at or are looking at lattice-based cryptography.

A postdoc here is a 100% research position, i.e. you wouldn’t have teaching duties. That said, if you’d like to gain some teaching experience, we can arrange for that as well.

If you have, for example, a two-body problem and would like to discuss flexibility about being in the office (assuming we’ll all be back in the office at some post-COVID-19 point), feel free to get in touch.

Continue reading “Postdoc at Royal Holloway on Lattice-based Cryptography”

Mesh Messaging in Large-scale Protests: Breaking Bridgefy

Together with Jorge Blasco, Rikke Bjerg Jensen and Lenka Mareková, we have studied the security of the Bridgefy mesh messaging application. This work was motivated by (social) media reports that this application was or is used by participants in large-scale protests, in anticipation of or in response to government-mandated Internet shutdowns (or simply because the network infrastructure cannot handle as many devices at the same time as are present during such large protests). The first reports were about Hong Kong; later reports covered India, Iran, the US, Zimbabwe, Belarus and Thailand (typically referencing Hong Kong as an inspiration). In such a situation, mesh networking seems promising: participants’ phones form an ad-hoc local network over which messages are routed.
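To make the mesh idea concrete, here is a minimal toy model of flooding-based store-and-forward relaying – the general principle behind such mesh messengers, not Bridgefy’s actual protocol; all names and details are made up for illustration.

import uuid

class Node:
    """A phone in the mesh: relays every message it has not seen before."""
    def __init__(self, name):
        self.name = name
        self.neighbours = []  # nodes currently in radio range
        self.seen = set()     # message ids already processed (stops the flood)
        self.inbox = []

    def receive(self, msg_id, recipient, payload):
        if msg_id in self.seen:
            return
        self.seen.add(msg_id)
        if recipient == self.name:
            self.inbox.append(payload)
        for peer in self.neighbours:  # relay regardless of recipient
            peer.receive(msg_id, recipient, payload)

    def send(self, recipient, payload):
        self.receive(str(uuid.uuid4()), recipient, payload)

# A chain a -- b -- c: a and c are out of range of each other, but b relays.
a, b, c = Node("a"), Node("b"), Node("c")
a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]
a.send("c", "meet at the usual place")
print(c.inbox)  # ['meet at the usual place']

Note that every relaying node handles the payload and the recipient identifier, which is exactly why end-to-end confidentiality, authenticity and metadata protection matter so much in this setting.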

Now, Bridgefy wasn’t designed with this use case in mind. Rather, its designers had large sports events or natural disasters in mind. Leaving aside the question of whether those use cases, too, warrant a secure-by-default design, we had reason to suspect that the security offered by Bridgefy might not match the expectations of those who might rely on it.

Indeed, we found a series of vulnerabilities in Bridgefy. Our results show that Bridgefy currently permits its users to be tracked, offers no authenticity and no effective confidentiality protections, and lacks resilience against adversarially crafted messages. We verify these vulnerabilities by demonstrating a series of practical attacks on Bridgefy. Thus, if protesters rely on Bridgefy, an adversary can produce social graphs about them, read their messages, impersonate anyone to anyone and shut down the entire network with a single maliciously crafted message. For a good overview, see Dan Goodin’s article on our work at Ars Technica.

We disclosed these vulnerabilities to the Bridgefy developers in April 2020 and agreed on a public disclosure date of 20 August 2020. Starting from 1 June 2020, the Bridgefy team began warning their users that they should not expect confidentiality guarantees from the current version of the application.

Let me stress, however, that as of 24 August, Bridgefy has not been patched to fix these vulnerabilities; they are thus present in the currently deployed version. The developers are currently implementing and testing a switch to the Signal protocol to provide cryptographic assurances in their SDK. This switch, if done correctly, would rule out many of the attacks described in our work. They hope to have this fix deployed soon.

Continue reading “Mesh Messaging in Large-scale Protests: Breaking Bridgefy”