UK Crypto Day (June 2023 Edition)

Together with Nick Spooner and Sarah Meiklejohn, I’m co-organising the next UK Crypto Day.1

Date: 23 June
Venue: King’s College London
Registration: Here

We’ve got some nice speakers/talks lined up:

Jonathan Bootle: The Sumcheck Protocol, Applications, and Formal Verification

The sumcheck protocol plays a central role in many constructions of efficient zero-knowledge arguments. In this talk, I will describe the sumcheck protocol, explain why it is so useful, and discuss recent work on a machine-checkable security proof.
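To make the protocol concrete, here is a toy interactive sumcheck over a small prime field, with prover and verifier run in one process. The polynomial, field and degree bound are my illustrative choices, not taken from the talk:

```python
import random
from itertools import product

P = 2**61 - 1   # prime modulus (toy choice)
NV = 3          # number of variables
DEG = 2         # degree bound per variable for the prover's messages

def f(x):
    # example polynomial over F_P: f(x0, x1, x2) = 2*x0*x1 + x2 + x0
    x0, x1, x2 = x
    return (2 * x0 * x1 + x2 + x0) % P

def lagrange_eval(ys, r):
    """Evaluate at r the polynomial interpolating the points (i, ys[i])."""
    total, k = 0, len(ys)
    for i in range(k):
        num, den = 1, 1
        for j in range(k):
            if i != j:
                num = num * (r - j) % P
                den = den * (i - j) % P
        total = (total + ys[i] * num * pow(den, -1, P)) % P
    return total

def prover_message(rs, i):
    """Round i: evaluations at 0..DEG of g_i(X), the sum of f over Boolean
    settings of the remaining variables, with earlier variables fixed to rs."""
    evals = []
    for x in range(DEG + 1):
        s = 0
        for tail in product([0, 1], repeat=NV - i - 1):
            s = (s + f(rs + [x] + list(tail))) % P
        evals.append(s)
    return evals

def sumcheck():
    # the claimed sum of f over the Boolean hypercube {0,1}^NV
    claim = sum(f(list(b)) for b in product([0, 1], repeat=NV)) % P
    rs = []
    for i in range(NV):
        ys = prover_message(rs, i)
        assert (ys[0] + ys[1]) % P == claim   # verifier's consistency check
        r = random.randrange(P)               # verifier's random challenge
        claim = lagrange_eval(ys, r)          # new claim: g_i(r)
        rs.append(r)
    assert f(rs) == claim                     # one evaluation of f at the end
    return True
```

Soundness comes from the Schwartz–Zippel lemma: a cheating prover must commit to a low-degree univariate polynomial in each round, and a random challenge catches a false claim with probability at most DEG/P per round.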

Bio. Jonathan Bootle is a researcher in the Foundational Cryptography Group at IBM Research – Zurich. His research focuses on constructing efficient zero-knowledge proofs, especially those based on lattice assumptions or error-correcting codes.

Bernardo Magri: YOSO – You Only Speak Once

Imagine a setting where whenever a party in a protocol sends a message, its IP address becomes known, and it gets immediately killed by the adversary in a DoS attack. This implies that in any given protocol a party can only send a single message at a random point in time. Can we do secure multiparty computation in this setting? In this talk we introduce the YOSO MPC model that is based around the notion of roles, which are randomly assigned stateless parties that can send a single message for the entire duration of the protocol. We will show how one can leverage the infrastructure of public blockchains to securely YOSO-compute any function with private inputs.

Bio. Bernardo Magri is a Senior Lecturer in the Computer Science Department at the University of Manchester. His research interests are on the theoretical and practical aspects of cryptography and distributed ledgers.

Sunoo Park: Email Attribution and Election Audits

My talk will focus on two recent works. The first concerns preventing the exploitation of stolen email data. Email is used widely for personal, industry, and government communication; as such, it is a valuable target for attack. Such attacks are compounded by email’s strong attributability: today, any attacker who gains access to your email can easily prove to others that the stolen messages are authentic. We define and construct non-attributable email using a new cryptographic signature primitive.

The second paper concerns a new model of post-election audits, loosely inspired by multi-prover interactive proofs. Post-election audits perform statistical hypothesis testing to confirm election outcomes. However, existing approaches are costly and laborious for close elections—often the most important cases to audit. We instead propose automated consistency checks, augmented by manual checks of only a small number of ballots. Our protocols scan each ballot twice, shuffling the ballots between scans: a “two-scan” approach inspired by two-prover proof systems.

Bio. Sunoo Park is a Postdoctoral Fellow at Columbia University and Visiting Fellow at Columbia Law School. Her research interests range across cryptography, security, and technology law. She received her Ph.D. in computer science at MIT, her J.D. at Harvard Law School, and her B.A. in computer science at the University of Cambridge.

Michele Ciampi: On the round-complexity of secure multi-party computation

In multi-party computation (MPC), multiple entities, each having some inputs, want to jointly compute a function of these inputs with the guarantee that nothing aside from the output of the function will be leaked. In this talk, we are going to investigate how many messages the parties of an MPC protocol need to exchange to securely realise any functionality with simulation-based security in the case where there is no setup and the majority of the parties can be corrupted. We will then consider a relaxation of the standard simulation-based paradigm, and discuss whether this leads to more efficient MPC protocols which still realise non-trivial functionalities with meaningful security.

Bio. Michele Ciampi is a Chancellor’s Fellow at the School of Informatics at the University of Edinburgh. His work focuses on theoretical aspects of cryptography, including multi-party computation protocols, zero-knowledge proofs, and blockchain.

François Dupressoir: Revisiting machine-checked AKE security — Reports from a possibly active trench

Machine-checked cryptographic proofs, as supported by tools such as EasyCrypt, CryptHOL or SSProve, aim at increasing trust in cryptographic algorithms by producing machine-checkable evidence that their security follows from relatively (sometimes) standard hardness assumptions. With only a few exceptions, their application has unfortunately been limited to small, typically non-interactive, constructions. A significant exception is a Eurocrypt 2015 paper applying EasyCrypt to a family of Authenticated Key Exchange protocols, whose massive proof has unfortunately been lost to time (and some obnoxious IT practices). This talk will report on an ongoing (or perhaps, hopefully, not) attempt at understanding better the interplay between EasyCrypt, interactive protocols, and a few competing pen-and-paper definition and proof methodologies. By doing so, I hope to provoke discussions around the goal and value of security proofs and their machine-checked variants, and about what “traditional” cryptographers might expect or want from proof tools.

Bio. François Dupressoir is a Senior Lecturer at the University of Bristol, where he heads the Cryptography Research Group. His research revolves around bringing formal methods and formal reasoning techniques to cryptographic security of algorithms, protocols and their implementations.

Yixin Shen: Finding Many Collisions via Reusable Quantum Walks — Application to Lattice Sieving

Given a random function f with domain [2^n] and codomain [2^m], with m ≥ n, a collision of f is a pair of distinct inputs with the same image. Collision finding is a ubiquitous problem in cryptanalysis, and it has been well-studied using both classical and quantum algorithms. Indeed, the quantum query complexity of the problem is well known to be Θ(2^(m/3)), and matching algorithms are known for any value of m. The situation becomes different when one is looking for multiple collision pairs. Here, for 2^k collisions, a query lower bound of Θ(2^((2k+m)/3)) was shown by Liu and Zhandry (EUROCRYPT 2019). A matching algorithm is known, but only for relatively small values of m, when many collisions exist.

In this paper, we improve the algorithms for this problem and, in particular, extend the range of admissible parameters where the lower bound is met. Our new method relies on a chained quantum walk algorithm, which might be of independent interest. It allows one to extract multiple solutions of an MNRS-style quantum walk, without having to recompute it entirely: after finding and outputting a solution, the current state is reused as the initial state of another walk. As an application, we improve the quantum sieving algorithms for the Shortest Vector Problem (SVP), with a complexity of 2^{0.2563d + o(d)} instead of the previous 2^{0.2570d + o(d)}.
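For intuition, the classical baseline for multi-collision finding is a birthday-style search that stores every image seen so far. The sketch below is a toy, with SHA-256 truncated to m bits standing in for the random function and all names mine; it finds k collision pairs with roughly √(k · 2^m) classical queries, the baseline that the quantum walk algorithms improve upon:

```python
import hashlib

def trunc_hash(x: int, m_bits: int) -> int:
    """Toy stand-in for a random function with an m-bit codomain."""
    digest = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % (1 << m_bits)

def find_collisions(m_bits: int, k: int):
    """Classical birthday search: remember every image seen so far and
    report a collision whenever an image repeats; stop after k pairs."""
    seen, collisions, x = {}, [], 0
    while len(collisions) < k:
        y = trunc_hash(x, m_bits)
        if y in seen:
            collisions.append((seen[y], x))
        else:
            seen[y] = x
        x += 1
    return collisions

pairs = find_collisions(m_bits=12, k=4)
```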

Bio. Yixin Shen is a research fellow at Royal Holloway, University of London. Her work focuses on quantum algorithms and their application in lattice-based cryptanalysis. She completed her PhD at Université Paris Cité in 2021. After that, she worked as a postdoctoral researcher at Royal Holloway. In 2022, she obtained a five-year EPSRC Quantum Technology Career Development Fellowship.



Formerly known as the London-ish Crypto Day, but that produced a name clash with Liz’s London Crypto Day.

SandboxAQ Internships

You may or may not be aware that at SandboxAQ we have an internship residency programme. Residencies are typically remote but can be on-site; they can take place year-round and last between three and twelve months, full-time or part-time. To take part, you’d need to be a PhD student or postdoc somewhere.

In the interest of advertising our programme, here are two example ideas I’d be interested in.

Add SIS and (overstretched-)NTRU to the Lattice Estimator

The name “lattice estimator” at present is more aspirational than factual. In particular, we cover algorithms for solving LWE but not algorithms for solving SIS or (overstretched) NTRU. Well, we implicitly cover SIS – solving SIS implies solving LWE, and we cost that: the “dual attack” – but we don’t have a nice interface to ask “how hard would this SIS instance be?”. Adding this would be a nice contribution to the community, given how widely the estimator is used.
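To illustrate what such an interface would be estimating: an SIS instance is a random matrix A over Z_q, and a solution is a short nonzero integer vector x with A·x ≡ 0 (mod q). The brute-force toy below just demonstrates the definition; the parameters are illustrative, and an estimator’s job is to predict the cost of real lattice-reduction attacks at cryptographic sizes, not this exponential search:

```python
import random
from itertools import product

def toy_sis(A, q):
    """Brute-force search for a nonzero x in {-1, 0, 1}^m with A.x = 0 mod q.
    Exponential in m -- for definition-illustration only."""
    n, m = len(A), len(A[0])
    for x in product([-1, 0, 1], repeat=m):
        if any(x) and all(
            sum(A[i][j] * x[j] for j in range(m)) % q == 0 for i in range(n)
        ):
            return list(x)
    return None

random.seed(1)
q, n, m = 17, 2, 8            # tiny toy parameters
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
x = toy_sis(A, q)
```

With these sizes there are about 3^8 / 17^2 ≈ 23 expected ternary solutions, so the search succeeds; at real parameters the interesting question is the cost of lattice reduction, which is what the estimator would report.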

OPRFs from Lattices

Our first work on building OPRFs from lattices costs about 2MB of bandwidth if you ignore the zero-knowledge proofs and something like 128GB (yes, GB) if you count them. Since then, proving lattice statements has become a lot cheaper, so a natural project is to reconsider our construction: use newer/smaller proofs, tune the parameters, prove it in a nicer game-based model or in UC. To give you a taste of what is possible: This work building a non-interactive key-exchange (NIKE) has to solve essentially the same problem (noise drowning + ZK proofs) and achieves smaller parameters.
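For readers unfamiliar with OPRFs: the functionality lets a server holding a key k evaluate a PRF on a client’s input x without learning x, and without the client learning k. A classic pre-quantum instantiation uses exponent blinding in a group; the toy sketch below shows the blind/evaluate/unblind flow. It is emphatically not the lattice construction from the paper, the hash-to-group map is a placeholder, and all parameters are illustrative:

```python
import hashlib
import math
import secrets

p = 2**127 - 1                              # a Mersenne prime; group is Z_p^*

def hash_to_group(x: bytes) -> int:
    # toy map; a real scheme needs a proper hash-to-group construction
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % p or 1

def blind(x: bytes):
    """Client: pick r invertible mod p-1 and send H(x)^r."""
    while True:
        r = secrets.randbelow(p - 1)
        if r > 1 and math.gcd(r, p - 1) == 1:
            return r, pow(hash_to_group(x), r, p)

def evaluate(k: int, blinded: int) -> int:
    """Server: raise to the key k; it never sees x."""
    return pow(blinded, k, p)

def unblind(r: int, evaluated: int) -> int:
    """Client: strip r to obtain H(x)^k, the PRF output."""
    return pow(evaluated, pow(r, -1, p - 1), p)

k = secrets.randbelow(p - 1) | 1            # server's PRF key
x = b"client input"
r, blinded = blind(x)
output = unblind(r, evaluate(k, blinded))
```

The lattice setting replaces the clean exponent arithmetic with noisy operations, which is why noise drowning and zero-knowledge proofs of well-formedness enter the picture and drive the bandwidth costs quoted above.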

If you are interested, or have some other ideas, ping me and apply for a PQC resident position.

The k-R-ISIS (of Knowledge) Assumption

Our paper – together with Valerio Cini, Russell W. F. Lai, Giulio Malavolta and Sri Aravinda Krishnan Thyagarajan – titled Lattice-Based SNARKs: Publicly Verifiable, Preprocessing, and Recursively Composable will be presented at CRYPTO’22. A pre-print is available and here’s the abstract:

A succinct non-interactive argument of knowledge (SNARK) allows a prover to produce a short proof that certifies the veracity of a certain NP-statement. In the last decade, a large body of work has studied candidate constructions that are secure against quantum attackers. Unfortunately, no known candidate matches the efficiency and desirable features of (pre-quantum) constructions based on bilinear pairings.

In this work, we make progress on this question. We propose the first lattice-based SNARK that simultaneously satisfies many desirable properties: It (i) is tentatively post-quantum secure, (ii) is publicly-verifiable, (iii) has a logarithmic-time verifier and (iv) has a purely algebraic structure making it amenable to efficient recursive composition. Our construction stems from a general technical toolkit that we develop to translate pairing-based schemes to lattice-based ones. At the heart of our SNARK is a new lattice-based vector commitment (VC) scheme supporting openings to constant-degree multivariate polynomial maps, which is a candidate solution for the open problem of constructing VC schemes with openings to beyond linear functions. However, the security of our constructions is based on a new family of lattice-based computational assumptions which naturally generalises the standard Short Integer Solution (SIS) assumption.

In this post, I want to give you a sense of our new family of assumptions, the k-M-ISIS family of assumptions, and its variants. Meanwhile, Russell has written a post focusing on building the SNARK and Aravind has written about the nice things that we can do with our lattice-based SNARKs.

Continue reading “The k-R-ISIS (of Knowledge) Assumption”

10 PhD Positions at Royal Holloway’s Centre for Doctoral Training in Cyber Security for the Everyday

At Royal Holloway we are again taking applications for ten fully-funded PhD positions in Information Security. See the CDT website and the ISG website for what kind of research we do. Also, check out our past and current CDT students and our research seminar schedule to get an idea of how broad and diverse the areas of information security are in which the ISG works.

More narrowly, to give you some idea of cryptographic research (and thus supervision capacity) in the Cryptography Group at Royal Holloway: currently, we are nine permanent members of staff: Simon Blackburn (Maths), Saqib A. Kakvi, Keith Martin, Sean Murphy, Siaw-Lynn Ng, Rachel Player, Liz Quaglia and me. In addition, there are three postdocs working on cryptography and roughly 14 PhD students. Focus areas of cryptographic research currently are: lattice-based cryptography and applications, post-quantum cryptography, symmetric cryptography, statistics, access control, information-theoretic security and protocols.

To give you a better sense of what is possible, here are some example projects. These are in no way prescriptive and serve to give some ideas:

  1. I am, as always, interested in exploring lattice-based and post-quantum cryptography; algorithms for solving the underlying hard problems, efficient implementations, lifting pre-quantum constructions to the post-quantum era.
  2. Together with my colleague Rikke Jensen, we want to explore security needs and practices in large-scale protests using ethnographic methods. We’ve done an interview-based (i.e. not ethnography-based) pilot with protesters in Hong Kong and think grounding cryptographic security notions in the needs, erm, on the ground, will prove rather fruitful.
  3. My colleague Rachel Player is looking at privacy-preserving outsourced computation, with a focus on (fully) homomorphic encryption.
  4. My (new) colleague Guido Schmitz uses formal methods to study cryptographic protocols.

Note that most of these positions are reserved for UK residents, which, however, does not mean nationality (see the CDT website for details); we can award three of our scholarships without any such constraint, i.e. to international applicants. The studentship includes tuition fees and maintenance (£21,285 for each academic year).

To apply, go here. Feel free to get in touch if you have questions about whether this is right for you. Official announcement follows.

Continue reading “10 PhD Positions at Royal Holloway’s Centre for Doctoral Training in Cyber Security for the Everyday”

Collective Information Security in Large-Scale Urban Protests: the Case of Hong Kong

Our work – with Jorge Blasco, Rikke Bjerg Jensen and Lenka Mareková – on the use of digital communication technologies in large-scale protests in Hong Kong was accepted at USENIX ’21. A pre-print is available on arXiv. Here’s the abstract:

The Anti-Extradition Law Amendment Bill protests in Hong Kong present a rich context for exploring information security practices among protesters due to their large-scale urban setting and highly digitalised nature. We conducted in-depth, semi-structured interviews with 11 participants of these protests. Research findings reveal how protesters favoured Telegram and relied on its security for internal communication and organisation of on-the-ground collective action; were organised in small private groups and large public groups to enable collective action; adopted tactics and technologies that enable pseudonymity; and developed a variety of strategies to detect compromises and to achieve forms of forward secrecy and post-compromise security when group members were (presumed) arrested. We further show how group administrators had assumed the roles of leaders in these ‘leaderless’ protests and were critical to collective protest efforts.

Our work can be seen in the tradition of “Can Johnny Build a Protocol? Co-ordinating developer and user intentions for privacy-enhanced secure messaging protocols”, which documented the divergence between what higher-risk users – such as those in conflict with the authorities of a nation state – need and want and what secure messaging developers design for. This divergence is noteworthy because “human-rights activists” are a common point of reference in discussions around secure messaging.

However, our focus is not activists but participants in large-scale protests, i.e. our focus is more closely tied to specific needs in moments of heightened conflict, confrontation and mass mobilisation. In particular, we interviewed people who were in some shape or form involved in the Anti-ELAB protests in Hong Kong in 2019/2020. Several of our participants described themselves as “frontliners” which roughly means they were present in areas where direct confrontations with law enforcement took place.

As the title suggests our data speaks to how security needs and practices in this population are collective in nature: how decisions about security are made, what security features are deemed important, how people learn to understand security technologies. As an example take post-compromise security and forward secrecy:

Continue reading “Collective Information Security in Large-Scale Urban Protests: the Case of Hong Kong”

Round-optimal Verifiable Oblivious Pseudorandom Functions from Ideal Lattices

PKC’21 is nearly upon us which – in this day and age – means a new YouTube playlist of talks. Eamonn and Fernando wrote a nice paper on the success probability of solving unique SVP via BKZ which Fernando is describing here:

Alex is presenting our – with Amit and Nigel – work on round-optimal Verifiable Oblivious PseudoRandom Functions (VOPRF) from ideal lattices here:

Since Alex is doing an amazing job at walking you through our paper I won’t attempt this here. Rather, let me point out a – in my book – cute trick in one of our appendices that may have applications elsewhere.

Continue reading “Round-optimal Verifiable Oblivious Pseudorandom Functions from Ideal Lattices”

Mesh Messaging in Large-scale Protests: Breaking Bridgefy

Together with Jorge Blasco, Rikke Bjerg Jensen and Lenka Marekova we have studied the security of the Bridgefy mesh messaging application. This work was motivated by (social) media reports that this application was or is used by participants in large-scale protests in anticipation of or in response to government-mandated Internet shutdowns (or simply because the network infrastructure cannot handle as many devices at the same time as there are during such large protests). The first reports were about Hong Kong; later reports covered India, Iran, the US, Zimbabwe, Belarus and Thailand (typically referencing Hong Kong as an inspiration). In such a situation, mesh networking seems promising: a network is spanned between participants’ phones to create an ad-hoc local network to route messages.

Now, Bridgefy wasn’t designed with this use-case in mind. Rather, its designers had large sports events or natural disasters in mind. Leaving aside the discussion here if those use-cases too warrant a secure-by-default design, we had reason to suspect that the security offered by Bridgefy might not match the expectation of those who might rely on it.

Indeed, we found a series of vulnerabilities in Bridgefy. Our results show that Bridgefy currently permits its users to be tracked, offers no authenticity, no effective confidentiality protections and lacks resilience against adversarially crafted messages. We verify these vulnerabilities by demonstrating a series of practical attacks on Bridgefy. Thus, if protesters rely on Bridgefy, an adversary can produce social graphs about them, read their messages, impersonate anyone to anyone and shut down the entire network with a single maliciously crafted message. For a good overview, see Dan Goodin’s article on our work at Ars Technica.

We disclosed these vulnerabilities to the Bridgefy developers in April 2020 and agreed on a public disclosure date of 20 August 2020. Starting from 1 June 2020, the Bridgefy team began warning their users that they should not expect confidentiality guarantees from the current version of the application.

Let me stress, however, that, as of 24 August, Bridgefy has not been patched to fix these vulnerabilities and thus that these vulnerabilities are present in the currently deployed version. The developers are currently implementing/testing a switch to the Signal protocol to provide cryptographic assurances in their SDK. This switch, if done correctly, would rule out many of the attacks described in our work. They hope to have this fix deployed soon.

Continue reading “Mesh Messaging in Large-scale Protests: Breaking Bridgefy”

What does “secure” mean in Information Security?

This text – written by Rikke Jensen and me – first appeared in the ISG Newsletter 2019/2020 under the title “What is Information Security?”. I’ve added a few links to this version.

The most fundamental task in information security is to establish what we mean by (information) security.

A possible answer to this question is given in countless LinkedIn posts, thought-leader blog entries and industry white papers: Confidentiality, Integrity, Availability. Since the vacuity of the “CIA Triad” is covered in the first lecture of the Security Management module of our MSc, we will assume our readers are familiar with it and will avoid this non-starter. Let us consider the matter more closely.

One subfield of information security that takes great care in tending to its definitions is cryptography. For example, Katz and Lindell write: “A key intellectual contribution of modern cryptography has been the recognition that formal definitions of security are an essential first step in the design of any cryptographic primitive or protocol”. Indeed, finding the correct security definition for a cryptographic primitive or protocol is a critical part of cryptographic work. That these definitions can be non-intuitive yet correct is made acutely apparent when asking students in class to come up with ideas of what it could mean for a block cipher to be secure. They never arrive at PRP security but propose security notions that are, well, broken.
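The PRP notion the students never arrive at can be phrased as a game: the adversary gets oracle access either to the block cipher under a random key or to a truly random permutation, and must say which. A minimal sketch, with toy 12-bit blocks and a deliberately weak XOR “cipher” as the example (all names and parameters are mine, chosen for illustration):

```python
import random
import secrets

N_BITS = 12                                   # toy block size

def prp_game(distinguisher, cipher) -> bool:
    """One run of the PRP distinguishing game; returns True iff the
    distinguisher guesses the hidden bit b correctly."""
    b = secrets.randbits(1)
    if b == 0:
        key = secrets.randbits(N_BITS)
        oracle = lambda x: cipher(key, x)     # real cipher, random key
    else:
        perm = list(range(2**N_BITS))
        random.shuffle(perm)                  # truly random permutation
        oracle = lambda x: perm[x]
    return distinguisher(oracle) == b

def xor_cipher(key, x):
    return x ^ key                            # a permutation, but a terrible PRP

def xor_distinguisher(oracle):
    # under XOR-with-key, oracle(0) ^ oracle(1) == 1 always; under a random
    # permutation this happens with probability about 2^-N_BITS
    return 0 if oracle(0) ^ oracle(1) == 1 else 1

wins = sum(prp_game(xor_distinguisher, xor_cipher) for _ in range(200))
```

A secure block cipher keeps every efficient distinguisher’s success probability negligibly close to 1/2; the XOR distinguisher here wins almost every run, which is exactly what the students’ intuitive-but-broken proposals tend to miss.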

Fine, we can grant cryptography that it knows how to define what a secure block cipher is. That is, we can know what is meant by it being secure, but does that imply that we are? Cryptographic security notions – and everything that depends on them – do not exist in a vacuum, they have reasons to be. While the immediate objects of cryptography are not social relations, it presumes and models them. This fact is readily acknowledged in the introductions of cryptographic papers where authors illustrate the utility of their proposed constructions by reference to some social situation where several parties have conflicting ends but a need or desire to interact. Yet, this part of the definitional work has not received the same rigour from the cryptographic community as complexity-theoretic and mathematical questions. For example, Goldreich writes: “The foundations of cryptography are the paradigms, approaches, and techniques used to conceptualize, define, and provide solutions to natural ‘security concerns’ ”. Following Blanchette we may ask back: “How does one identify such ‘natural security concerns’? On these questions, the literature remains silent”.

Continue reading “What does “secure” mean in Information Security?”

Cryptographic Security Proofs as Dynamic Malware Analysis

This text first appeared in the ISG Newsletter 2019/2020. I’ve added a bunch of links to this version.

RSA encryption with insecure padding (PKCS #1 v1.5) is a gift that keeps on giving variants of Bleichenbacher’s chosen ciphertext attack. As the readers of this newsletter will know, RSA-OAEP (PKCS #1 v2) is recommended for RSA encryption. How do we know, though, that switching to RSA-OAEP will give us an encryption scheme that resists chosen ciphertext attacks? Cryptography has two answers to this. Without any additional assumptions the answer is that we don’t know (yet). In the Random Oracle Model (ROM), though, we can give an affirmative answer, i.e. RSA-OAEP was proven secure. Indeed, security proofs in the ROM (and its cousin the Ideal Cipher Model) underpin many cryptographic constructions that are widely deployed, such as generic transforms to achieve security against active attacks and block cipher modes of operation. This article is meant to give some intuition about how such ROM proofs go by means of an analogy to dynamic malware analysis.
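The core mechanism of such proofs fits in a few lines: the reduction implements the random oracle itself by lazy sampling and, like a malware sandbox, observes every query the adversary makes. A minimal sketch (names are illustrative):

```python
import secrets

class RandomOracle:
    """Lazily sampled random oracle. The reduction controls it, so it both
    answers consistently and records the adversary's queries -- the
    'dynamic analysis' vantage point the analogy refers to."""

    def __init__(self, out_bytes: int = 32):
        self.table = {}
        self.queries = []
        self.out_bytes = out_bytes

    def query(self, x: bytes) -> bytes:
        self.queries.append(x)
        if x not in self.table:
            # the output is fresh and uniform, fixed only at first query
            self.table[x] = secrets.token_bytes(self.out_bytes)
        return self.table[x]

ro = RandomOracle()
once = ro.query(b"message")
twice = ro.query(b"message")
```

Because the reduction chooses these answers itself, it can also program them – say, embedding an RSA challenge in a response – without the adversary noticing, which is the lever that ROM proofs like the one for RSA-OAEP pull.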

Continue reading “Cryptographic Security Proofs as Dynamic Malware Analysis”

Open Letters v Surveillance

A letter, signed by 177 scientists and researchers working in the UK in the fields of information security and privacy, reads:

“Echoing the letter signed by 300 international leading researchers, we note that it is vital that, when we come out of the current crisis, we have not created a tool that enables data collection on the population, or on targeted sections of society, for surveillance. Thus, solutions which allow reconstructing invasive information about individuals must be fully justified.1 Such invasive information can include the ‘social graph’ of who someone has physically met over a period of time. With access to the social graph, a bad actor (state, private sector, or hacker) could spy on citizens’ real-world activities.”

Our letter2 stands in a tradition of similar letters and resolutions. For example, here is the “Copenhagen Resolution” of the International Association for Cryptologic Research (IACR), i.e. the professional body of cryptographers, from May 2014:

“The membership of the IACR repudiates mass surveillance and the undermining of cryptographic solutions and standards. Population-wide surveillance threatens democracy and human dignity. We call for expediting research and deployment of effective techniques to protect personal privacy against governmental and corporate overreach.”

Both of these documents, in line with similar documents, treat privacy and surveillance rather abstractly. On the one hand, this has merit: many of us, including myself, are experts on the technology and can speak to that with authority. Also, getting people working on privacy to agree that privacy is important is straightforward; getting them to agree on why is a much more difficult proposition. On the other hand, when we are making political interventions such as passing resolutions or writing open letters, I think we should also ask ourselves this question. I suspect we won’t all agree, but the added clarity might still be helpful.

Similar considerations have been voiced in several works before, e.g.

“The counter-surveillance movement is timely and deserves widespread support. However, as this article will argue and illustrate, raising the specter of an Orwellian system of mass surveillance, shifting the discussion to the technical domain, and couching that shift in economic terms undermine a political reading that would attend to the racial, gendered, classed, and colonial aspects of the surveillance programs. Our question is as follows: how can this specific discursive framing of counter-surveillance be re-politicized and broadened to enable a wider societal debate informed by the experiences of those subjected to targeted surveillance and associated state violence?” – Seda Gürses, Arun Kundnani & Joris Van Hoboken. Crypto and empire: the contradictions of counter-surveillance advocacy in Media, Culture & Society, 38(4), 576–590. 2016


“History teaches that extensive governmental surveillance becomes political in character. As civil-rights attorney Frank Donner and the Church Commission reports thoroughly document, domestic surveillance under U.S. FBI director J. Edgar Hoover served as a mechanism to protect the status quo and neutralize change movements. Very little of the FBI’s surveillance-related efforts were directed at law-enforcement: as the activities surveilled were rarely illegal, unwelcome behavior would result in sabotage, threats, blackmail, and inappropriate prosecutions, instead. For example, leveraging audio surveillance tapes, the FBI attempted to get Dr. Martin Luther King, Jr., to kill himself. U.S. universities were thoroughly infiltrated with informants: selected students, faculty, staff, and administrators would report to an extensive network of FBI handlers on anything political going on on campus. The surveillance of dissent became an institutional pillar for maintaining political order. The U.S. COINTELPRO program would run for more than 15 years, permanently reshaping the U.S. political landscape.” – Phillip Rogaway, The moral character of cryptographic work in Cryptology ePrint Archive. 2016

The pertinent question then is what surveillance is for. For an answer we have to look no further than GCHQ’s page about the Investigatory Powers Act 2016, which “predominantly governs” its mission:

“Before an interception warrant can be issued, the Secretary of State must believe that a warrant is necessary on certain, limited grounds, and that the interception is proportionate to what it seeks to achieve.

These grounds are that interception is necessary:

  • In the interests of national security; or
  • In the interests of the economic well-being of the UK; or
  • In support of the prevention or detection of serious crime” – GCHQ. Investigatory Powers Act. 2019

To unpack what this means in detail, we can recall how and when the means of surveillance have been deployed in recent history, by GCHQ and other departments of the British State. A programme that might cover all three aspects was GCHQ’s Tempora programme which tapped into fibre-optic cables for secret access to the world’s communications. Similarly, hacking the SIM manufacturer Gemalto would also cover all three. Spying on the German government is probably one for the economic well-being of the UK. Similarly, the London Metropolitan Police working with construction firms to blacklist trade unionists would fall into that category. UK intelligence services spying on Privacy International, police officers infiltrating Greenpeace and other activist groups, and databases of activists are plausibly done in the name of national security. The stop and search policy tackles, amongst other things, the serious crime of walking while black.

When we write that “solutions which allow reconstructing invasive information about individuals must be fully justified” it is worth paying close attention to the justifications offered and to ask: justified to whom and by what standard.

Appendix: The letter in full

We, the undersigned, are scientists and researchers working in the UK in the fields of information security and privacy. We are concerned about plans by NHSX to deploy a contact tracing application. We urge that the health benefits of a digital solution be analysed in depth by specialists from all relevant academic disciplines, and sufficiently proven to be of value to justify the dangers involved.

A contact tracing application is a mobile phone application which records, using Bluetooth, the contacts between individuals, in order to detect a possible risk of infection. Such applications, by design, come with risks for privacy and medical confidentiality which can be mitigated more or less well, but not completely, depending on the approach taken in their design. We believe that any such application will only be used in the necessary numbers if it gives reason to be trusted by those being asked to install it.

It has been reported that NHSX is discussing an approach which records centrally the de-anonymised ID of someone who is infected and also the IDs of all those with whom the infected person has been in contact. This facility would enable (via mission creep) a form of surveillance. Echoing the letter signed by 300 international leading researchers, we note that it is vital that, when we come out of the current crisis, we have not created a tool that enables data collection on the population, or on targeted sections of society, for surveillance. Thus, solutions which allow reconstructing invasive information about individuals must be fully justified. Such invasive information can include the “social graph” of who someone has physically met over a period of time. With access to the social graph, a bad actor (state, private sector, or hacker) could spy on citizens’ real-world activities. We are particularly unnerved by a declaration that such a social graph is indeed aimed for by NHSX.

We understand that the current proposed design is intended to meet the requirements set out by the public health teams, but we have seen conflicting advice from different groups about how much data the public health teams need. We hold that the usual data protection principles should apply: collect the minimum data necessary to achieve the objective of the application. We hold it is vital that if you are to build the necessary trust in the application the level of data being collected is justified publicly by the public health teams demonstrating why this is truly necessary rather than simply the easiest way, or a “nice to have”, given the dangers involved and invasive nature of the technology.

We welcome the NHSX commitment to transparency, and in particular Matthew Gould’s commitment made to the Science & Technology committee on 28 April that the data protection impact assessment (DPIA) for the contact tracing application will be published. We are calling on NHSX to publish the DPIA immediately, rather than just before deployment, to enable (a) public debate about its implications and (b) public scrutiny of the security and privacy safeguards put in place.

We are also asking NHSX to, at a minimum, publicly commit that there will not be a database or databases, regardless of what controls are put in place, that would allow de-anonymization of users of its system, other than those self reporting as infected, to enable the data to be used for building, for example, social graphs.

Finally, we are asking NHSX how it plans to phase out the application after the pandemic has passed to prevent mission creep.



It is worth noting that this sentence does not, in fact, echo the international letter. Where the UK letter asks to justify such invasions, the international letter outright rejects them “without further discussion”. I think the international letter is better on this point.


I should note that I was involved in coordinating and drafting that letter and that I signed the “letter signed by 300 international leading researchers”.