What does “secure” mean in Information Security?

This text – written by Rikke Jensen and me – first appeared in the ISG Newsletter 2019/2020 under the title “What is Information Security?”. I’ve added a few links to this version.

The most fundamental task in information security is to establish what we mean by (information) security.

A possible answer to this question is given in countless LinkedIn posts, thought-leader blog entries and industry white papers: Confidentiality, Integrity, Availability. Since the vacuity of the “CIA Triad” is covered in the first lecture of the Security Management module of our MSc, we will assume our readers are familiar with it and will avoid this non-starter. Let us consider the matter more closely.

One subfield of information security that takes great care in tending to its definitions is cryptography. For example, Katz and Lindell write: “A key intellectual contribution of modern cryptography has been the recognition that formal definitions of security are an essential first step in the design of any cryptographic primitive or protocol”. Indeed, finding the correct security definition for a cryptographic primitive or protocol is a critical part of cryptographic work. That these definitions can be non-intuitive yet correct is made acutely apparent when asking students in class to come up with ideas of what it could mean for a block cipher to be secure. They never arrive at PRP security but propose security notions that are, well, broken.
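As a sketch of that standard formulation – roughly in the style of Katz and Lindell – PRP security asks that no efficient adversary can tell the block cipher, keyed with a uniformly random key, apart from a permutation drawn uniformly at random. A block cipher $E \colon \{0,1\}^{k} \times \{0,1\}^{n} \to \{0,1\}^{n}$ is a secure PRP if

\[ \mathbf{Adv}^{\mathrm{prp}}_{E}(\mathcal{A}) \;=\; \Big|\, \Pr_{K \leftarrow \{0,1\}^{k}}\!\big[\mathcal{A}^{E_{K}(\cdot)} = 1\big] \;-\; \Pr_{\pi \leftarrow \mathrm{Perm}(n)}\!\big[\mathcal{A}^{\pi(\cdot)} = 1\big] \,\Big| \]

is negligible for every probabilistic polynomial-time adversary $\mathcal{A}$, where $\mathrm{Perm}(n)$ denotes the set of all permutations of $\{0,1\}^{n}$. Note that nothing in this definition speaks of recovering keys or plaintexts – which is precisely why students rarely guess it.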

Fine, we can grant cryptography that it knows how to define what a secure block cipher is. That is, we can know what is meant by a block cipher being secure, but does that imply that we are secure? Cryptographic security notions – and everything that depends on them – do not exist in a vacuum; they exist for a reason. While the immediate objects of cryptography are not social relations, it presumes and models them. This fact is readily acknowledged in the introductions of cryptographic papers, where authors illustrate the utility of their proposed constructions by reference to some social situation in which several parties have conflicting ends but a need or desire to interact. Yet this part of the definitional work has not received the same rigour from the cryptographic community as complexity-theoretic and mathematical questions. For example, Goldreich writes: “The foundations of cryptography are the paradigms, approaches, and techniques used to conceptualize, define, and provide solutions to natural ‘security concerns’ ”. Following Blanchette, we may ask in return: “How does one identify such ‘natural security concerns’? On these questions, the literature remains silent”.

The broader social sciences offer a wealth of approaches to answering questions about social situations, relations, (collective) needs, imaginations and desires. Yet they are often relegated to a service role in information security, e.g. to perform usability testing of existing security technologies or as a token to blame the failings of such technologies on those who rely on them (see the “social engineering” literature). In contrast, we argue for a rather different intersection of social and computer science: one where social science establishes what technology is and ought to be. The service relation is all but inverted. If anything, computer science is asked to provide solutions to problems and challenges that social science identifies. To establish what security means within social settings – to identify and understand “natural security concerns” – one approach stands out in promising deep and detailed insights: ethnography.

More specifically, as highlighted by Herbert, ethnography is uniquely placed to “unearth what the group (under study) takes for granted”. A key challenge in engaging those who depend on security technology is that they are not trained information security professionals. They do not know and, indeed, should not need to know that confidentiality requires integrity, that existing onboarding practices can be phrased in the language of information security, which security notions cannot be achieved simultaneously, and what guarantees, say, cryptography can give if asked. It is therefore of critical import to know what is actually taken for granted – or, put otherwise, expected – in social interactions, social and technical protocols and, indeed, cryptography, rather than what has been proven in some appendix.

Some often-used social science methods, while much more practical and less time-consuming than ethnography, are therefore less suitable research approaches in this context. For example, questionnaires and surveys, of both the qualitative and quantitative kind, are fairly futile means of enquiry here. While interviews provide some opportunity for deeper engagement, ethnography allows us to learn that which people do not know themselves. Through close observation and analysis of everyday activities and relations, ethnography reveals “the knowledge and meaning structures that provide the blueprint for social action” (Herbert) within the group under study. The exploratory nature of ethnographic enquiry, rooted in fieldwork with the group it aims to understand, is thus a key enabler in unlocking an understanding of individual and collective security needs and practices (i.e. “natural security concerns”). The inherently reflexive and embedded nature of ethnography enables such insights.

Researchers in the ISG are pursuing this approach, bringing cryptography and ethnography into conversation. We are currently engaged in a research project concerning the role security technologies, especially cryptography, can play for participants in large-scale, urban protests. How do we conceptualise confidentiality in chat groups of 50k participants, where at least some participants must be assumed to be infiltrators? Do notions of post-compromise security – a common design goal in cryptographic messaging – matter? Does Blanchette’s critique of non-repudiation as a cryptographic design goal have teeth here? What are the implicit security protocols followed by participants in these protests? Should we reorient the role of trusted third parties in cryptographic protocols from Goldreich’s “pivotal question” – “the extent to which [an] (imaginary) trusted party can be ‘emulated’ by the mutually distrustful parties” – to one where the parties are insecure but their infrastructure is not? Armed with this knowledge, we can then investigate whether the technologies the participants of such protests and resistance movements use provide the quality which we call “security”.
