Postdoc at Royal Holloway on Lattice-based Cryptography

I’m looking for a postdoc to work with me and others – in the ISG and at Imperial College – on lattice-based cryptography and, ideally, its connections to coding theory.

The ISG is a nice place to work; it’s a friendly environment with strong research going on in several areas. We have people working across the field of information security, including several working on cryptography. For example, Carlos Cid, Anamaria Costache, Lydia Garms, Jianwei Li, Sean Murphy, Rachel Player, Eamonn Postlethwaite, Joe Rowell, Fernando Virdia and I have all worked on or are working on lattice-based cryptography.

A postdoc here is a 100% research position, i.e. you wouldn’t have teaching duties. That said, if you’d like to gain some teaching experience, we can arrange for that as well.

If you have, e.g., a two-body problem and would like to discuss flexibility about being in the office (assuming we’ll all be back in the office at some post-COVID-19 point), feel free to get in touch.


Mesh Messaging in Large-scale Protests: Breaking Bridgefy

Together with Jorge Blasco, Rikke Bjerg Jensen and Lenka Marekova, we have studied the security of the Bridgefy mesh messaging application. This work was motivated by (social) media reports that this application was or is used by participants in large-scale protests in anticipation of or in response to government-mandated Internet shutdowns (or simply because the network infrastructure cannot handle the number of devices present during such large protests). The first reports were about Hong Kong; later reports covered India, Iran, the US, Zimbabwe, Belarus and Thailand (typically referencing Hong Kong as an inspiration). In such a situation, mesh networking seems promising: an ad-hoc local network is spanned between participants’ phones to route messages.
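As a rough illustration of the idea – not Bridgefy’s actual routing protocol, just the simplest mesh strategy – messages can be flooded through whatever phones happen to be in Bluetooth range, with a hop limit:

```python
from collections import deque

def flood(adjacency, source, ttl=5):
    """Return the set of nodes a flooded message reaches.

    adjacency: dict mapping node -> set of neighbouring nodes
    (phones within radio range). Each node forwards a message it
    has not seen before to all neighbours, decrementing a hop count.
    """
    reached = {source}
    queue = deque([(source, ttl)])
    while queue:
        node, hops = queue.popleft()
        if hops == 0:
            continue
        for neighbour in adjacency.get(node, set()):
            if neighbour not in reached:
                reached.add(neighbour)
                queue.append((neighbour, hops - 1))
    return reached

# A small chain of phones A-B-C-D, with E out of range of everyone.
network = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}, "E": set()}
print(flood(network, "A"))  # {'A', 'B', 'C', 'D'}
```

Note that every intermediate phone sees every message it relays, which is exactly why the cryptography matters.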

Now, Bridgefy wasn’t designed with this use case in mind. Rather, its designers had large sports events or natural disasters in mind. Leaving aside whether those use cases, too, warrant a secure-by-default design, we had reason to suspect that the security offered by Bridgefy might not match the expectations of those who might rely on it.

Indeed, we found a series of vulnerabilities in Bridgefy. Our results show that Bridgefy currently permits its users to be tracked, offers no authenticity or effective confidentiality protections, and lacks resilience against adversarially crafted messages. We verify these vulnerabilities by demonstrating a series of practical attacks on Bridgefy. Thus, if protesters rely on Bridgefy, an adversary can produce social graphs about them, read their messages, impersonate anyone to anyone and shut down the entire network with a single maliciously crafted message. For a good overview, see Dan Goodin’s article on our work at Ars Technica.

We disclosed these vulnerabilities to the Bridgefy developers in April 2020 and agreed on a public disclosure date of 20 August 2020. Starting from 1 June 2020, the Bridgefy team began warning their users that they should not expect confidentiality guarantees from the current version of the application.

Let me stress, however, that, as of 24 August, Bridgefy has not been patched to fix these vulnerabilities and thus that these vulnerabilities are present in the currently deployed version. The developers are currently implementing/testing a switch to the Signal protocol to provide cryptographic assurances in their SDK. This switch, if done correctly, would rule out many of the attacks described in our work. They hope to have this fix deployed soon.


Conda, Jupyter and Emacs

Jupyter is great. Yet, I find myself missing all the little tweaks I made to Emacs whenever I have Jupyter open in a browser. The obvious solution is to have Jupyter in Emacs. One solution is EIN, the Emacs IPython Notebook. However, I had a mixed experience with it: it would often hang and eat up memory (I never bothered to try to debug this behaviour). A neat alternative, for me, is emacs-jupyter. Here’s my setup.


What does “secure” mean in Information Security?

This text – written by Rikke Jensen and me – first appeared in the ISG Newsletter 2019/2020 under the title “What is Information Security?”. I’ve added a few links to this version.

The most fundamental task in information security is to establish what we mean by (information) security.

A possible answer to this question is given in countless LinkedIn posts, thought-leader blog entries and industry white papers: Confidentiality, Integrity, Availability. Since the vacuity of the “CIA Triad” is covered in the first lecture of the Security Management module of our MSc, we will assume our readers are familiar with it and will avoid this non-starter. Let us consider the matter more closely.

One subfield of information security that takes great care in tending to its definitions is cryptography. For example, Katz and Lindell write: “A key intellectual contribution of modern cryptography has been the recognition that formal definitions of security are an essential first step in the design of any cryptographic primitive or protocol”. Indeed, finding the correct security definition for a cryptographic primitive or protocol is a critical part of cryptographic work. That these definitions can be non-intuitive yet correct is made acutely apparent when asking students in class to come up with ideas of what it could mean for a block cipher to be secure. They never arrive at PRP security but propose security notions that are, well, broken.
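To make the PRP notion concrete, here is a toy sketch (illustrative only; the “cipher” is deliberately weak and lives on a 4-bit domain): a distinguisher plays the PRP game against XOR-with-the-key – a permutation for every key, and exactly the kind of construction a student might propose – and wins essentially always.

```python
import random

DOMAIN = list(range(16))  # 4-bit toy blocks

def toy_cipher(key, x):
    """A toy 'block cipher': XOR with the key. A permutation for each
    key, but certainly not a pseudorandom one."""
    return x ^ key

def prp_game(distinguisher, trials=2000):
    """Estimate a distinguisher's success rate in the PRP game: a coin
    flip decides whether the oracle is toy_cipher under a random key or
    a uniformly random permutation; the distinguisher guesses which."""
    wins = 0
    for _ in range(trials):
        real = random.random() < 0.5
        if real:
            key = random.choice(DOMAIN)
            oracle = lambda x: toy_cipher(key, x)
        else:
            perm = DOMAIN[:]
            random.shuffle(perm)
            oracle = lambda x: perm[x]
        if distinguisher(oracle) == real:
            wins += 1
    return wins / trials

def xor_distinguisher(oracle):
    # For toy_cipher, E(0) ^ E(1) == 1 for every key; for a random
    # permutation this relation holds only rarely.
    return (oracle(0) ^ oracle(1)) == 1

print(prp_game(xor_distinguisher))  # well above 1/2: a large advantage
```

The PRP definition rules out every such structural relation at once, which is precisely what ad-hoc “it hides the key” style notions fail to do.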

Fine, we can grant cryptography that it knows how to define what a secure block cipher is. That is, we can know what is meant by it being secure, but does that imply that we are? Cryptographic security notions – and everything that depends on them – do not exist in a vacuum, they have reasons to be. While the immediate objects of cryptography are not social relations, it presumes and models them. This fact is readily acknowledged in the introductions of cryptographic papers where authors illustrate the utility of their proposed constructions by reference to some social situation where several parties have conflicting ends but a need or desire to interact. Yet, this part of the definitional work has not received the same rigour from the cryptographic community as complexity-theoretic and mathematical questions. For example, Goldreich writes: “The foundations of cryptography are the paradigms, approaches, and techniques used to conceptualize, define, and provide solutions to natural ‘security concerns’ ”. Following Blanchette we may ask back: “How does one identify such ‘natural security concerns’? On these questions, the literature remains silent”.


Cryptographic Security Proofs as Dynamic Malware Analysis

This text first appeared in the ISG Newsletter 2019/2020. I’ve added a bunch of links to this version.

RSA encryption with insecure padding (PKCS #1 v1.5) is a gift that keeps on giving variants of Bleichenbacher’s chosen ciphertext attack. As the readers of this newsletter will know, RSA-OAEP (PKCS #1 v2) is recommended for RSA encryption. How do we know, though, that switching to RSA-OAEP will give us an encryption scheme that resists chosen ciphertext attacks? Cryptography has two answers to this. Without any additional assumptions the answer is that we don’t know (yet). In the Random Oracle Model (ROM), though, we can give an affirmative answer, i.e. RSA-OAEP was proven secure. Indeed, security proofs in the ROM (and its cousin the Ideal Cipher Model) underpin many cryptographic constructions that are widely deployed, such as generic transforms to achieve security against active attacks and block cipher modes of operation. This article is meant to give some intuition about how such ROM proofs go by means of an analogy to dynamic malware analysis.
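The key modelling step in such proofs is that the hash function is replaced by a random oracle that the reduction itself answers, observing (and possibly programming) every query the adversary makes – much like an analyst instrumenting a binary’s system calls. A minimal Python sketch (my own illustration, not from any particular proof) of a lazily sampled, query-logging random oracle:

```python
import os

class RandomOracle:
    """A lazily sampled random oracle: a fresh uniform output for each
    new query, consistent answers on repeats, and a log of every query
    made -- mirroring how a ROM reduction 'instruments' the adversary's
    hash calls."""

    def __init__(self, out_len=16):
        self.out_len = out_len
        self.table = {}    # query -> sampled output
        self.queries = []  # full trace, visible to the reduction

    def __call__(self, msg: bytes) -> bytes:
        self.queries.append(msg)
        if msg not in self.table:
            self.table[msg] = os.urandom(self.out_len)
        return self.table[msg]

H = RandomOracle()
a = H(b"hello")
assert H(b"hello") == a  # consistent on repeated queries
H(b"world")
print(len(H.queries))  # 3: every hash call was observed
```

The point of the analogy is that the reduction, like the malware analyst, learns what the adversary is up to purely by watching these oracle calls.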


Postdoc at Royal Holloway on Lattice-based Cryptography

We are looking for a postdoc to join us to work on lattice-based cryptography. This postdoc is funded by the EU H2020 PROMETHEUS project for building privacy-preserving systems from advanced lattice primitives. At Royal Holloway, the project is looked after by Rachel Player and me. Feel free to e-mail me with any queries you might have.

The ISG is a nice place to work; it’s a very friendly environment with strong research going on in several areas. We have people working across the field of information security, including several working on cryptography. A postdoc here is a 100% research position, i.e. you wouldn’t have teaching duties. That said, if you’d like to gain some teaching experience, we can arrange for that as well.

Also, if you have, e.g., a two-body problem and would like to discuss flexibility about being in the office (assuming we’ll all be back in the office at some post-COVID-19 point), feel free to get in touch.


Faster Enumeration-based Lattice Reduction

Our paper “Faster Enumeration-based Lattice Reduction: Root Hermite Factor k^{1/(2k)} in Time k^{k/8 + o(k)}” – together with Shi Bai, Pierre-Alain Fouque, Paul Kirchner, Damien Stehlé and Weiqiang Wen – is now available on ePrint (the work has been accepted to CRYPTO 2020). Here’s the abstract:

We give a lattice reduction algorithm that achieves root Hermite factor k^{1/(2k)} in time k^{k/8 + o(k)} and polynomial memory. This improves on the previously best known enumeration-based algorithms which achieve the same quality, but in time k^{k/(2e) + o(k)}. A cost of k^{k/8 + o(k)} was previously mentioned as potentially achievable (Hanrot-Stehlé’10) or as a heuristic lower bound (Nguyen’10) for enumeration algorithms. We prove the complexity and quality of our algorithm under a heuristic assumption and provide empirical evidence from simulation and implementation experiments attesting to its performance for practical and cryptographic parameter sizes. Our work also suggests potential avenues for achieving costs below k^{k/8 + o(k)} for the same root Hermite factor, based on the geometry of SDBKZ-reduced bases.
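Ignoring the o(k) terms, the improvement in the exponent is easy to quantify with a back-of-the-envelope script (leading-order estimates only, not a cost model):

```python
import math

# Leading-order cost exponents, in bits, for enumeration-based
# reduction achieving root Hermite factor k^(1/(2k)); o(k) terms
# in the exponents are ignored.

def log2_cost_new(k):
    """log2 of k^(k/8): this work."""
    return (k / 8) * math.log2(k)

def log2_cost_old(k):
    """log2 of k^(k/(2e)): previously best known."""
    return (k / (2 * math.e)) * math.log2(k)

for k in (100, 250, 500):
    print(k, round(log2_cost_old(k)), round(log2_cost_new(k)))
```

Since 1/8 < 1/(2e) ≈ 0.184, the gap in the exponent grows linearly in k, so the savings are substantial at cryptographic block sizes.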


Open Letters v Surveillance

A letter, signed by 177 scientists and researchers working in the UK in the fields of information security and privacy, reads:

“Echoing the letter signed by 300 international leading researchers, we note that it is vital that, when we come out of the current crisis, we have not created a tool that enables data collection on the population, or on targeted sections of society, for surveillance. Thus, solutions which allow reconstructing invasive information about individuals must be fully justified.1 Such invasive information can include the ‘social graph’ of who someone has physically met over a period of time. With access to the social graph, a bad actor (state, private sector, or hacker) could spy on citizens’ real-world activities.”

Our letter2 stands in a tradition of similar letters and resolutions. For example, here is the “Copenhagen Resolution” of the International Association for Cryptologic Research (IACR), i.e. the professional body of cryptographers, from May 2014:

“The membership of the IACR repudiates mass surveillance and the undermining of cryptographic solutions and standards. Population-wide surveillance threatens democracy and human dignity. We call for expediting research and deployment of effective techniques to protect personal privacy against governmental and corporate overreach.”

Both of these documents, in line with similar documents, treat privacy and surveillance rather abstractly. On the one hand, this has merit: many of us, including myself, are experts on the technology and can speak to that with authority. Also, getting people working on privacy to agree that privacy is important is straightforward; getting them to agree on why is a much more difficult proposition. On the other hand, when we are making political interventions such as passing resolutions or writing open letters, I think we should also ask ourselves this question. I suspect we won’t all agree, but the added clarity might still be helpful.

Similar considerations have been voiced in several works before, e.g.

“The counter-surveillance movement is timely and deserves widespread support. However, as this article will argue and illustrate, raising the specter of an Orwellian system of mass surveillance, shifting the discussion to the technical domain, and couching that shift in economic terms undermine a political reading that would attend to the racial, gendered, classed, and colonial aspects of the surveillance programs. Our question is as follows: how can this specific discursive framing of counter-surveillance be re-politicized and broadened to enable a wider societal debate informed by the experiences of those subjected to targeted surveillance and associated state violence?” – Seda Gürses, Arun Kundnani & Joris Van Hoboken. Crypto and empire: the contradictions of counter-surveillance advocacy in Media, Culture & Society, 38(4), 576–590. 2016

and

“History teaches that extensive governmental surveillance becomes political in character. As civil-rights attorney Frank Donner and the Church Commission reports thoroughly document, domestic surveillance under U.S. FBI director J. Edgar Hoover served as a mechanism to protect the status quo and neutralize change movements. Very little of the FBI’s surveillance-related efforts were directed at law-enforcement: as the activities surveilled were rarely illegal, unwelcome behavior would result in sabotage, threats, blackmail, and inappropriate prosecutions, instead. For example, leveraging audio surveillance tapes, the FBI attempted to get Dr. Martin Luther King, Jr., to kill himself. U.S. universities were thoroughly infiltrated with informants: selected students, faculty, staff, and administrators would report to an extensive network of FBI handlers on anything political going on on campus. The surveillance of dissent became an institutional pillar for maintaining political order. The U.S. COINTELPRO program would run for more than 15 years, permanently reshaping the U.S. political landscape.” – Phillip Rogaway, The moral character of cryptographic work in Cryptology ePrint Archive. 2016

The pertinent question then is what surveillance is for. For an answer we have to look no further than GCHQ’s page about the Investigatory Powers Act 2016, which “predominantly governs” its mission:

“Before an interception warrant can be issued, the Secretary of State must believe that a warrant is necessary on certain, limited grounds, and that the interception is proportionate to what it seeks to achieve.

These grounds are that interception is necessary:

  • In the interests of national security; or
  • In the interests of the economic well-being of the UK; or
  • In support of the prevention or detection of serious crime” – GCHQ. Investigatory Powers Act. 2019

To unpack what this means in detail, we can recall how and when the means of surveillance have been deployed in recent history, by GCHQ and other departments of the British State. A programme that might cover all three aspects was GCHQ’s Tempora programme which tapped into fibre-optic cables for secret access to the world’s communications. Similarly, hacking the SIM manufacturer Gemalto would also cover all three. Spying on the German government is probably one for the economic well-being of the UK. Similarly, the London Metropolitan Police working with construction firms to blacklist trade unionists would fall into that category. UK intelligence services spying on Privacy International, police officers infiltrating Greenpeace and other activist groups, and databases of activists are plausibly done in the name of national security. The stop and search policy tackles, amongst other things, the serious crime of walking while black.

When we write that “solutions which allow reconstructing invasive information about individuals must be fully justified”, it is worth paying close attention to the justifications offered and asking: justified to whom and by what standard?

Appendix: The letter in full

We, the undersigned, are scientists and researchers working in the UK in the fields of information security and privacy. We are concerned about plans by NHSX to deploy a contact tracing application. We urge that the health benefits of a digital solution be analysed in depth by specialists from all relevant academic disciplines, and sufficiently proven to be of value to justify the dangers involved.

A contact tracing application is a mobile phone application which records, using Bluetooth, the contacts between individuals, in order to detect a possible risk of infection. Such applications, by design, come with risks for privacy and medical confidentiality which can be mitigated more or less well, but not completely, depending on the approach taken in their design. We believe that any such application will only be used in the necessary numbers if it gives reason to be trusted by those being asked to install it.

It has been reported that NHSX is discussing an approach which records centrally the de-anonymised ID of someone who is infected and also the IDs of all those with whom the infected person has been in contact. This facility would enable (via mission creep) a form of surveillance. Echoing the letter signed by 300 international leading researchers, we note that it is vital that, when we come out of the current crisis, we have not created a tool that enables data collection on the population, or on targeted sections of society, for surveillance. Thus, solutions which allow reconstructing invasive information about individuals must be fully justified. Such invasive information can include the “social graph” of who someone has physically met over a period of time. With access to the social graph, a bad actor (state, private sector, or hacker) could spy on citizens’ real-world activities. We are particularly unnerved by a declaration that such a social graph is indeed aimed for by NHSX.

We understand that the current proposed design is intended to meet the requirements set out by the public health teams, but we have seen conflicting advice from different groups about how much data the public health teams need. We hold that the usual data protection principles should apply: collect the minimum data necessary to achieve the objective of the application. We hold it is vital that if you are to build the necessary trust in the application the level of data being collected is justified publicly by the public health teams demonstrating why this is truly necessary rather than simply the easiest way, or a “nice to have”, given the dangers involved and invasive nature of the technology.

We welcome the NHSX commitment to transparency, and in particular Matthew Gould’s commitment made to the Science & Technology committee on 28 April that the data protection impact assessment (DPIA) for the contact tracing application will be published. We are calling on NHSX to publish the DPIA immediately, rather than just before deployment, to enable (a) public debate about its implications and (b) public scrutiny of the security and privacy safeguards put in place.

We are also asking NHSX to, at a minimum, publicly commit that there will not be a database or databases, regardless of what controls are put in place, that would allow de-anonymization of users of its system, other than those self reporting as infected, to enable the data to be used for building, for example, social graphs.

Finally, we are asking NHSX how it plans to phase out the application after the pandemic has passed to prevent mission creep.

Footnotes:

1. It is worth noting that this sentence does not, in fact, echo the international letter. Where the UK letter asks to justify such invasions, the international letter outright rejects them “without further discussion”. I think the international letter is better on this point.

2. I should note that I was involved in coordinating and drafting that letter and that I signed the “letter signed by 300 international leading researchers”.

The Approximate GCD Problem

Steven Galbraith once told me that he expects mathematicians to teach RSA long after the world has migrated to post-quantum algorithms, because it is so easy to explain. Arguably, LWE is easier to explain than RSA, but the Approximate Greatest Common Divisors problem (AGCD) is even easier than that and requires only scalars. Thus, it is a nice post-quantum alternative for an undergraduate mathematics module. Someone should perhaps write an undergraduate mathematics textbook introducing cryptography using Approximate Common Divisors.
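To show just how little machinery is needed, here is a minimal sketch of how an AGCD instance can be sampled (toy parameter sizes of my choosing, far below anything cryptographic):

```python
import random

def agcd_instance(p_bits=100, q_bits=60, r_bits=20, samples=5, seed=None):
    """Sample a toy Approximate GCD instance: x_i = q_i * p + r_i for a
    secret odd p, larger random q_i and small noise r_i. Recovering p
    from the x_i alone is the (computational) AGCD problem."""
    rng = random.Random(seed)
    p = rng.getrandbits(p_bits) | 1  # the secret, forced odd
    xs = []
    for _ in range(samples):
        q = rng.getrandbits(q_bits)
        r = rng.getrandbits(r_bits)
        xs.append(q * p + r)
    return p, xs

p, xs = agcd_instance(seed=0)
# Each sample is close to a multiple of p: the 'approximate' in AGCD.
assert all(x % p < 2**20 for x in xs)
```

With r_i = 0 this is just the textbook GCD problem, solved by Euclid; the whole difficulty, and the cryptography, lives in the small noise terms.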
