This panel discusses IT security as distributed and fragmented efforts to co- and re-configure control and authority together with uncertainties and threats. We explore IT in/security as a matter of careful tinkering and tense negotiations, with both temporal and moral dynamics.
Warnings on computer screens, the mushrooming of security centres, and intensifying discourses on hacker attacks and leaks all play a crucial part in re-configuring social and moral orderings. Nonetheless, IT security is commonly framed as a matter of purely technological solutions, as a problem of mismatches between design and individuals' competencies, and of seemingly indisputable standards.
These perspectives fail to account for the ways in which IT in/security is encountered as a matter of caring, negotiating and tinkering. How is IT security enacted as both a technical solution and a collaborative, provisional and contested practice in organizations and beyond? How does IT security re-configure control and authority together with uncertainties and threats? How, where and when are concerns for in/security in IT propagated (or stalled), e.g. by the quest for innovation?
Bringing together a broad range of approaches to the study of IT security practices, the panel probes the lens of care and its usefulness for study, critique and intervention. The panel invites contributions on the following or related topics:
new challenges for IT security in the wake of 'big' data,
the dynamics of shifts in in/security, their rhythms of 'leak and fix,'
the various areas for which IT security needs to be enacted and upon which it thrives,
the practices of locating and translating "weaknesses," "risks" or "breaches,"
the different imaginaries and thought styles of IT security,
the moral economy of IT security and its appeals to public good and evil, etc.
This panel is closed to new paper proposals.
Tinkering with humans? Social engineering and the construction of the "deficient user" in cyber security
Social engineering in cyber security refers to ways in which hackers exploit human vulnerabilities to penetrate technical systems. Our talk focuses on how hackers and social engineering (SE) experts attribute (ir)responsibilities to users as they imagine possible solutions to the supposed "people problem."
Social engineering (SE) in cyber security refers to the various, and often highly creative, ways in which hackers penetrate and overcome security systems by targeting human and technical vulnerabilities simultaneously. More than two-thirds of all hacking attacks use SE, which leaves cyber security professionals in both companies and government organizations struggling to develop effective counter-measures. One of the main reasons is that SE exploits basic human characteristics such as curiosity, greed, excitement, and ignorance as a gateway into the technical layer of a targeted information system.
In this presentation, we explore how hackers, security professionals, and institutional stakeholders construct a "deficient" employee or, more generally, "deficient" users as opposed to IT specialists. More specifically, we focus on how hackers and SE experts attribute (ir)responsibilities to users as they imagine possible solutions to the supposed "people problem." We trace the ways in which SE and the experts in this community construct deficits. Instead of looking at schemes that target individual users, for instance in order to obtain a victim's credit card information, our analysis deals with the interplay between users in organizations, IT departments, and the larger SE expert discourse. What we observe in these institutional contexts is a shift in the way individual deficiency is constructed vis-à-vis collective security. While companies have largely benefitted from the digital revolution in ICT, they have individualized the risks that came with it, drawing on rarely challenged psychological and ethical assumptions underpinning most SE expertise today.
Security and the DevOps imaginary
This paper considers the relationship between information security and a recent paradigm shift in IT delivery methods. "DevOps" enacts a re-imagining of the IT delivery process, drawing on images of adaptive systems and distributed authority, with implications for how security practices intervene.
This paper considers the relationship between information security and a recent paradigm shift in IT delivery methods. Rising over the last decade, in concert with a range of infrastructure automation technologies, "DevOps" methods are widely considered best practice for delivering change to information systems. DevOps is a re-imagining of the problems of IT delivery, drawing on a heritage in lean manufacturing, agile software development and systems thinking, with a focus on empowered self-organised teams and processes delivering rapid and continuous value.
Information security practices are challenged by DevOps methods, as the empowerment of teams involves the displacement of the kinds of representations, gateways and reviews that constituted traditional security "rituals of verification" (Power 1997). Where information was previously reported upwards in the hierarchy for authorisation, DevOps instantiates new forms of reflexivity, immanent to the delivery process.
The problem for security in these contexts seems to be how to reconcile the distributed trust of self-organising teams with the centralised accountability of the organisation's corporate personhood, which is ultimately at stake when incidents happen.
The problem for STS is partly in devising ways to approach, analyse or respond to "already reflexive" or "self-analysing" phenomena (Riles 2001). If we find concepts like "care" already at work within the DevOps imaginary, are these useful devices for our own reflection?
This paper presents reflections based mainly on professional experience as a consultant in the information technology sector, reflections that are forming the basis for a new ethnographic study of contemporary cyber security practices.
Whom do we fear? Between pop-cultural myth and hacking collectives - negotiations on the meaning of hack
The pop-cultural construction of the hacker's image is influential and shapes media discourses around hacking. In my paper I combine ANT with other perspectives and case studies to ask how the term 'hacker' is negotiated, and how this affects the perception of IT security and moral economy.
Questions about IT - its security, risks, and moral economy - are often intertwined with various notions about hackers. These notions are murky, to say the least. Problems emerge even during the simplest negotiations of the term 'hacking', since the definitions coined by the human actors - members of hacker collectives - are opposed to the typical myth of the hacker widespread in popular culture. The practice of defining hacking in academic discourse is hardly consistent either. The pop-cultural construction of the hacker's image is so influential that it shapes (and is in turn amplified by) mainstream media discourses around hacking. This leads to various situations in which the label 'hacker' is slapped onto practices completely outside the realm of what hacker collectives believe to be the proper description of their actions. Constant negotiations between these actors influence what is, and what is not, recognized as an act of hacking.
In my paper I discuss these discrepancies at length. I draw on relevant anthropological, sociological and netnographic studies (works of Gabriella Coleman, Tim Jordan and others) along with the Actor-Network Theory perspective, which helps me analyze over two dozen audiovisual works and a few important case studies. On this basis I outline the main ways in which the figure of the hacker is constructed, explain how this affects the perception of IT security, and describe the intricacies of negotiating the labels of 'good' and 'evil' between various actors.
Opening (and closing) doors for security: negotiations of trust
We present three case studies to analyze the negotiations of trust involved in three security scenarios: political, offline-online, and digital. We show how different actors generate the conditions for opening and closing doors, and how trust is a largely contextual condition.
Keeping something safe involves actions of separation such as building walls and borders, but also actions of authorization, such as the construction and the opening of doors. Doors are old security solutions that in current times appear in different shapes, such as memberships to groups, visa applications, international borders, and passwords. The materiality of doors is as multiple as the values and goods to be secured. However, despite their diversity, doors share similarities. For instance, they open only under certain conditions, to certain actors, and when fulfilling given requirements. This talk focuses on this set of characteristics that we refer to as "negotiations of trust".
In this talk we look at negotiations of trust in three case studies from different political and technological contexts. First, we look at archival material from the German State Security Service (STASI) to analyze how subjects were categorized as trustworthy or not based on their daily routines, clothes, etc. Second, we look at digital facial recognition technologies and, in particular, how the face, as a presumably unique, unmistakable, and trustworthy online password, is used to open or close real-life and online doors. Finally, we discuss current IT security solutions that regulate our daily internet use and, especially, how these technologies evaluate and suggest what counts as trustworthy online content for the user. Through the analysis of these cases we aim to identify key actors that influence and control security-related negotiations of trust in three security contexts and kinds of doors: political, offline-online, and digital.
Practicing a science of security
A science of cyber security is a contested academic discipline. Many claim security is not, or cannot be, a science. We dispute this view with context and counterarguments from STS research. We focus on the security research community's perception of itself and highlight avenues for STS to engage.
Background: Most people writing about a science of security conclude that security work is not a science, or at best rather hopefully conclude that it is not a science yet but could be.
Method: Literature survey of the discussion of science of security going on within the security research community.
Results: We identify five common reasons people present as to why security is not a science: (1) experiments are untenable; (2) reproducibility is impossible; (3) there are no laws of nature in security; (4) there is no single ontology of terms to discuss security; and (5) security is merely engineering.
Conclusions: Security as practiced is a science. Complaints against this view rely on outdated concepts in the philosophy of science, derived from broadly logical-empiricist approaches. A view of science based on integrative pluralism and the mosaic unity of science can readily accept security as a science, and provides tools for solving methodological and epistemic challenges in security.
Impact for STS: This talk introduces the views of practicing security researchers, highlights areas where STS can productively engage, and identifies cultural barriers to the adoption of the perception of security research as a science. This includes the secretive nature of security research, and how undisclosed research impacts social practices and interdisciplinary work.
Troubling the ordering in cybersecurity research
With increasing funding for interdisciplinary projects in cybersecurity, computer scientists are recruiting collaborators from various fields. Some collaborations perform interdisciplinarity particularly convincingly, based on compatible epistemic cultures. We highlight potentials for STS.
When computer scientists acknowledge the limits of their technocentric approach and aim to design technologies closer to human needs or to engender more security-conscious behaviour, they tend to reach out to psychologists. Psychologists' concepts of evidence (e.g. mathematical proof) and their experimental methods resonate well with those of computer science, and techno-psychological accounts of cybersecurity (e.g. HCI) have become acknowledged in industry and academia for analysing individuals' actions and mental models of the technological system.
With the current call for more interdisciplinarity in cybersecurity, STS scholars enter this field of study, troubling the concepts, methods and ontologies of techno-psychological interdisciplinarity. For example, STS views security as volatile, flexible and a matter of sociomaterial collaboration, i.e. collective, distributed and contested, which often leads to a questioning of pre-given definitions of security and of fixed relations between humans and technologies. Under these conditions, what does interdisciplinarity between computer science and STS look like? How is the topic of cybersecurity specifically suitable for psychological inquiries? Is it because it focuses on individuals' behavior?
The paper takes its point of departure in our previous and current interdisciplinary collaborations with computer scientists on cybersecurity. We discuss the challenges STS encounters when attempting to re-sort computer-science-based cybersecurity while still collaborating symmetrically with computer scientists. We ask about the specificity of cybersecurity that makes STS engagement challenging and inquire into the possibilities of interdisciplinary collaboration. What would a socio-materially centred cybersecurity research look like?
Panel discussion: researching IT security
The panel offers an in-depth discussion of methods and methodologies for studying IT security in STS. We are interested in how the panelists approach IT security from various sides, oscillating between inside and outside perspectives and across disciplinary boundaries.
IT security issues often reach across various disciplines but remain a challenging topic for interdisciplinary teams, even in STS. One reason for this is that current IT security research builds on socio-material orders that are hard to reconcile with a more situated view of both practices and infrastructures.
We ask how STS can approach the topic of IT security and how doing so in turn provokes STS and allows us to reflect critically on STS methods and theories. Based on the panelists' talks throughout the session, we want to explore the sites where IT security is contested, negotiated, tinkered with, or cared for, and use these to draft a multiplicity of approaches to IT security in STS. How are STS methods and concepts particularly suited to researching the complex phenomenon of IT security? What lessons can be learned from existing research in STS? We further ask how engineers and decision-makers could be recruited as "co-laborators" for an STS inquiry into IT security, and how we can formulate such an endeavor. What could a common ground for co-laboration be, and what different concerns are part of it?
Closing the session, we would like to discuss how to extend our exchange and find an appropriate format for publication.
Panelists are (tbc):
Dr. Katharina Kinder-Kurlanda (GESIS Leibniz Institute for the Social Sciences)
Andreas Poller (Fraunhofer SIT Darmstadt)
Matt Spencer (Centre for Interdisciplinary Methodologies | University of Warwick)
Prof. Dr. Estrid Sørensen (Ruhr University Bochum)