Our work – with Jorge Blasco, Rikke Bjerg Jensen and Lenka Mareková – on the use of digital communication technologies in large-scale protests in Hong Kong was accepted at USENIX Security ’21. A pre-print is available on arXiv. Here’s the abstract:
The Anti-Extradition Law Amendment Bill protests in Hong Kong present a rich context for exploring information security practices among protesters due to their large-scale urban setting and highly digitalised nature. We conducted in-depth, semi-structured interviews with 11 participants of these protests. Research findings reveal how protesters favoured Telegram and relied on its security for internal communication and organisation of on-the-ground collective action; were organised in small private groups and large public groups to enable collective action; adopted tactics and technologies that enable pseudonymity; and developed a variety of strategies to detect compromises and to achieve forms of forward secrecy and post-compromise security when group members were (presumed) arrested. We further show how group administrators had assumed the roles of leaders in these ‘leaderless’ protests and were critical to collective protest efforts.
Our work can be seen in the tradition of “Can Johnny Build a Protocol? Co-ordinating developer and user intentions for privacy-enhanced secure messaging protocols”, which documented the divergence between what higher-risk users – such as those in conflict with the authorities of a nation state – need and want, and what secure messaging developers design for. This divergence is noteworthy because “human-rights activists” are a common point of reference in discussions around secure messaging.
However, our focus is not activists but participants in large-scale protests, i.e. our focus is more closely tied to specific needs in moments of heightened conflict, confrontation and mass mobilisation. In particular, we interviewed people who were in some shape or form involved in the Anti-ELAB protests in Hong Kong in 2019/2020. Several of our participants described themselves as “frontliners” which roughly means they were present in areas where direct confrontations with law enforcement took place.
As the title suggests, our data speaks to how security needs and practices in this population are collective in nature: how decisions about security are made, which security features are deemed important, and how people learn to understand security technologies. As an example, take post-compromise security and forward secrecy:
In the literature, the notion of forward secrecy (FS) is understood as the protection of past messages in the event of a later compromise of an involved party, and the notion of post-compromise security (PCS) as the protection of future messages some time after a (usually full state) compromise. Both of these security notions assume a persistent, global adversary of some form. Post-compromise security protects against an adversary after a compromise, provided the adversary is passive at some point afterwards (allowing the parties to heal). Forward secrecy protects against an adversary that either passively observed the communication (weak FS) or even actively attacked it before the compromise.
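The textbook mechanism behind forward secrecy can be sketched with a one-way key ratchet: each message key is derived from a chain key, and the chain key is then advanced through a one-way function and the old value discarded. This is a generic illustration of the idea – the function names and the placeholder “initial shared secret” are mine, not taken from the paper or any particular messenger:

```python
import hashlib


def ratchet(chain_key: bytes) -> bytes:
    """Advance the chain key one step; the caller discards the old key."""
    return hashlib.sha256(b"ratchet" + chain_key).digest()


def message_key(chain_key: bytes) -> bytes:
    """Derive a per-message key from the current chain key."""
    return hashlib.sha256(b"msg" + chain_key).digest()


# Start from some shared secret (placeholder value, for illustration only).
chain = hashlib.sha256(b"initial shared secret").digest()

keys = []
for _ in range(3):
    keys.append(message_key(chain))
    chain = ratchet(chain)  # old chain key is overwritten, i.e. deleted

# Forward secrecy: an adversary who captures the *current* `chain` cannot
# recompute earlier message keys, because SHA-256 is one-way.
```

Note what this does and does not protect: deleting old keys makes past *ciphertexts* unreadable to an adversary who later captures the key state, but it says nothing about plaintext messages still sitting on a device – which is exactly the gap between the cryptographic notion and the notion our participants cared about, discussed next.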
The compromise that most concerned the participants in our study was an arrest. Here, they were concerned with both forward secrecy (remote message deletion) and post-compromise security (excluding an arrestee from a group). However, their notions differed from those in the literature. First, a cryptographic scheme achieving forward secrecy would not achieve the notion of forward secrecy desired by the participants in our study, as messages remained stored on the recipient’s device. That is, our participants assumed and aimed to protect against a compromise that reveals not only key material but also the entire chat history (stored on the phone). Second, a security goal of the participants in our study was to protect themselves during the compromise, not just afterwards. As indicated in our research findings, there is a variety of behaviours attempting to detect and control compromise as it happens, including location monitoring, timed messages, revocation of administrator capabilities and message deletion for others, all done on behalf of the compromised person by the remaining group members. Critically, their notion of post-compromise security was at a group level (removing the compromised party) rather than for the compromised party.
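Operationally, this group-level notion of post-compromise security can be thought of as: the remaining members remove the (presumed) compromised member and re-establish a fresh shared secret that the captured state cannot derive. The toy model below is purely illustrative – the class, names and key handling are my own sketch of the concept, not how Telegram or the protesters’ groups actually work:

```python
import os


class Group:
    """Toy model of group-level post-compromise security: on a presumed
    compromise (e.g. an arrest), the remaining members remove the member
    and rotate the shared group key, so the captured state no longer
    grants access to future messages."""

    def __init__(self, members):
        self.members = set(members)
        self.key = os.urandom(32)  # current shared group key
        self.keys_of = {m: self.key for m in self.members}

    def remove_and_rekey(self, compromised: str) -> None:
        """Exclude the compromised member, then distribute a fresh key
        to everyone who remains (the removed member never receives it)."""
        self.members.discard(compromised)
        self.keys_of.pop(compromised, None)
        self.key = os.urandom(32)  # fresh key, unknown to the removed member
        for m in self.members:
            self.keys_of[m] = self.key


# Hypothetical usage: carol is arrested, so alice and bob rekey without her.
g = Group({"alice", "bob", "carol"})
old_key = g.key
g.remove_and_rekey("carol")
```

The contrast with the literature is visible even in this sketch: security “heals” for the group, while the removed party gains nothing – which matches how our participants described exclusion of arrestees rather than recovery by them.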
While not the focus of our work, I think it is worth contrasting this collective notion of security with the usable security literature, which discusses the intersection of social relations and technology predominantly from a psychological and thus individual perspective, often without treating wider social contexts and influences.
Our findings speak to an understanding of information security that rests on collective practices, where security for the group is negotiated between group members and where individual security notions are shaped by those of the group. They show how Anti-ELAB protesters practised security to fulfil their own security needs as well as those of the group. Where these were in conflict, our findings suggest that protesters accepted the security approaches collectively decided for the group. Group membership was conditioned on realising specific security goals related to the Anti-ELAB context – anonymity in large public groups and confidentiality and authentication in small close-knit groups. Practices such as collective decision making to provide ‘security in numbers’ and tactical ‘buy in’ from group members substantiate the notion that, for the participants in our study, information security is a collective endeavour.
The idea of collectivity in information security is not novel; yet research on group-level information security is sparse – and is largely limited to work on employee groups and socialising contexts. Moreover, usable security scholarship generally considers security at an individual level, as do user studies on messaging applications. While these studies collectively highlight a series of usability shortcomings of messaging applications, they consider neither the social environment within which these applications are used nor the collective security practices that dominated our study. They generally treat such shortcomings as technological problems and/or incomplete mental models among individual users, rather than also considering how users’ wider social context and collective, negotiated practices shape their use of these technologies and how (in)secure they feel in doing so.
Our findings demonstrate that the particularities of this adversarial context, the Anti-ELAB protests, shaped participants’ collective security needs and responses. Participants explained how social relations and trust were established at the protest sites rather than online and how this shaped their security practices, such as onboarding of new group members. In contrast to most usable security assumptions, our data shows that protesters go to great lengths to fulfil their security needs, conditioned on their adversarial setting and their group membership, but that such needs are not fulfilled by the technologies they rely on.
A point worth highlighting is that Telegram emerged as a favourite because it appeared to provide solutions that map to the social organisation on the ground, and workarounds for mitigating the (perceived) biggest threat: arrest while out “in the field”. Speaking of Telegram, our work highlights that it really ought to be studied more by information security researchers and cryptographers. According to our data, a lot of the protest activities in Hong Kong were coordinated using Telegram, and prior and concurrent works tell similar stories about other places.
PS: Initially, our plan was to study Bridgefy – which, according to media reports, saw heavy usage in the Anti-ELAB protests – together with the security needs and practices of protesters. However, our data suggested that Bridgefy actually saw only very limited adoption in these protests and – taking a hint from our reviewers – we split the project into two.