Software as a method acts as an epistemic instrument for knowing subjects and enacts rationalities for governing. As software enters new domains, what forms of sorting does it bring, and how does it act on subjects? We seek to examine software as method through ethnographic and historical case studies.
Software as a method for computation, counting, or classification acts not only as an epistemic instrument for knowing subjects, but also enacts rationalities for governing them—whether it is the citizen, the technocrat, the data worker, or the university teacher. Software proposes collectivities of human-machine relationships that configure subjects, objects, and forms of expertise—who counts, what is counted, and what counts as counting.
In this panel, we build on the notion that methods enact subjects and subjectivities, and we ask: As software methods enter new institutions and domains (public and private) through computerization, digitization, and datafication, what forms of sorting do they bring, and how do they act on subjects? How do they generate collectivities of belonging? What specific methods of computation, calculation, and classification participate in assembling new subjectivities through software? As a method, how does software translate across the human/machine divide, anticipating and remediating subjectivities?
We welcome papers that examine software as method through ethnographic or historical case studies, focusing on the relations between subjects and objects, forms of anticipation and memory practices, past and future making, as well as ways of knowing and acting on subjectivities. We are particularly interested in analyses that approach the effects of method at the level of vocations or fields, e.g., how occupational subjectivities are reshaped in the name of software.
This panel is closed to new paper proposals.
CTRL+ALT+DEL: software-sorted exclusion of asylum seekers in European population statistics and emergent subjectivities
This paper traces the asylum seeker category in population and migration statistics in Europe. We argue that how the "boundaries" of inclusion and exclusion are set is not merely a matter of software practices or decisions, but carries important political implications for the groups these boundaries are set to count.
To be able to govern, administrative bodies need to make objects of government legible (Scott 1998). Migrant persons do not fall neatly into the categories used by administrative agencies, a difficulty illustrated by the tendency to exclude asylum seekers from population statistics even when, legally speaking, they should be included. Based on ethnographic fieldwork in Norway and Finland, as well as at Eurostat and UNECE, we study how practices of population registration and statistics compilation on foreign-born persons can be beset by differential, and at times contradictory, outlooks. We show that these outlooks are often presented in the form of seemingly apolitical software infrastructures, or as decisions made in response to software. Drawing on STS, and specifically the "double social life of methods," we trace how software emerges both as a device for administrative book-keeping and as a means of enacting "migrant" categories. We argue that, even if all statistical production necessarily involves inclusions and exclusions, how the "boundaries" are set, determining who is included and who is excluded, has immediate impacts on the lives of those implicated by these decisions (Bowker and Star 1999); as such, these boundaries are onto-political (Law 2009; Mol 1999). In view of this, we show that methods enact their subjects and subjectivities, in particular by detailing how the methods used to identify and measure refugee statistics in Europe end up enacting refugees as a particular collective group, one that takes on a wider definition of "refugee" than is used in official categories.
Learning to program realities and identities: diffractions of rendering technical in undergraduate computer science education
This paper explores how student subjectivities and "reality" are simultaneously rendered technical, rendered natural, and rendered in terms of binary gender categories, as undergraduate computer science students learn to program and make software.
"I hope this kind of logic is natural to you now," a professor commented to students as he was reviewing a particular programming method. This paper explores the shaping of computer science student subjectivities as these students learn to program and make software. Based on ethnographic fieldwork conducted in an undergraduate computer science program in Singapore during the 2013-2014 academic year, I consider how students learn practices of "rendering technical" and "rendering natural": students learn how to represent and translate "reality" into models, algorithms, and code. At the same time, the rules of writing programs are presented as an inherent part of how computers work, and their "logic" as grounded in the natural evolution of human thought and practice. I also explore how these renderings diffract. While students are told that they should learn to think critically, critical judgment as it is figured in computer science relies on constructing reality such that truth, accuracy, and efficiency, among other forms of measurement, can be reliably known, assessed, and judged. As students learn to judge code, they also learn to judge themselves and one another according to technical criteria. Moreover, both student subjectivities and computer science knowledges and practices are rendered in terms of binary gender categories. Gender binaries are, in turn, (re)produced and naturalized in and by teaching examples, computer science concepts, and other technical actors. I thus show how students' lives and selves are "torqued" as they learn computer science and learn to render reality technical.
Telling stories about (re)search: research practices reconfigured by digital search technologies
This paper presents insights from ethnographic data collected during a user study investigating what happens when exploratory search software and (digital) humanities scholars meet. Conclusions focus on how search software reconfigures research practices, outcomes and perceptions of expertise.
This paper presents insights from a user study investigating the relationship between search software and the digital search practices of humanities scholars. The paper therefore focuses on what happens when scholars and digital search technologies meet. The main question of the paper is how computational tools (in this case, a linked open data cultural heritage browser for exploratory search) reconfigure practices of humanities research, the ensuing research insights, and notions of what constitutes expertise within this field.
The analysis draws on ethnographic data gathered during the user-centred development process of the exploratory search browser, DIVE+. DIVE+ affords serendipitous exploration of cultural heritage collections and allows users to visualize their search journeys. Over the course of one year, 100 (digital) humanities scholars tested the software in terms of its support for narrative formation about historical events that are perceived as disruptive, such as natural disasters. Furthermore, these scholars shared ideas about the role of search technologies in their research processes, such as devising research questions and discovering narratives about the past.
This latter concern is especially pertinent as interactions with the search browser change how users experience their (re)search practice. Shifting from faceted to exploratory search opens up new research avenues, but also raises uncertainties about how to interpret found audio-visual material. Thus, engagements with the software not only change users' subjectivity, but also alter how digitized material is understood. These are valuable insights, especially in view of disciplinary debates about what it means to be(come) a Digital Humanities scholar.
Facial recognition technology as software of categorization
This paper focuses on the use of normative face templates in the development of facial recognition technologies. Based on a historical case study, it analyzes the expectations underlying the introduction of such templates and discusses software-based identification and categorization practices.
In recent years, the use of facial recognition technologies has gained popularity in public and private security sectors. These technologies work by capturing an individual's face in order to authenticate identity or to identify a person in surveillance scenarios. Their functioning involves several standards that introduce into the software specific normative values about what a face is supposed to look like. In other words, different facial templates and standards are used in facial recognition technologies to "teach" algorithms what a "normal" face is. This normativity turns recognition into categorization. This talk investigates the factors influencing the establishment of facial standards and discusses their consequences.
To this end, I present a case study from the 1990s known as FERET (Facial Recognition Technology). FERET was a competition organized by the U.S. Department of Defense to evaluate the state of the art in facial recognition technologies and to foster research in the field. By looking at the research teams and algorithms participating in FERET, this research explores how the different facial templates used in algorithms affect processes of human recognition and categorization. Who can be recognized or authenticated depends on the kind of information introduced during the algorithmic learning phase. As this case study shows, the normative dimension in the development of facial recognition technologies is of high political relevance, as it calls their assumed objectivity and neutrality in public and private contexts into question. Finally, this historical case seeks to provide insight into the functioning of ongoing software-based identification practices.