- Convenors:
  - Luke Stark (Western University)
  - Melissa Adler (University of Western Ontario)
  - David Nemer (University of Virginia)
- Format:
- Traditional Open Panel
Short Abstract:
Numerical techniques and technologies used to inscribe and circulate differences—racial, gendered, class, caste-based, and others—are everywhere in disciplinary and control societies. This panel interrogates the history of these quantitative mechanisms of power, and the ways they have been resisted.
Long Abstract:
Numerous thinkers (including Fanon, Foucault, Deleuze, Crenshaw, Hartman, Freire, and others) have theorized the ways in which the political arithmetic of the capitalist state and its agents has prompted the development of numerical techniques and technologies to inscribe and circulate differences—racial, gendered, class, caste-based, and others—about populations intended for exploitation and dispossession. In this panel, we aim to further interrogate the history of these quantitative, statistical, and inferential mechanisms of power.
From the Enlightenment calculation of early American enslavers to the contemporary deep learning transformations of artificial intelligence (AI) practitioners, and even the human infrastructures that transmit disinformation through messaging apps, numbers have never been neutral nor free from narrative. This panel will focus on the continuities in thinking and practice around the use of numbers as a tool of oppression across both space and time, and the often-nuanced strategies of resistance and cooption that progressive social movements have deployed in response.
Some of the questions we hope to see addressed by the panel include the following:
• How have historical discourses and ideologies of inequality, oppression, and supremacy prompted, shaped, and refined quantitative or technical metrics and mechanisms of scientific or social differentiation?
• How have the developers of these metrics and techniques drawn on transnational or global circuits of oppression?
• What are examples of such technical metrics and mechanisms being developed in an overtly oppressive context and then laundered into more general scientific practice? How have professional fields such as medicine, statistics, and psychology grappled with or ignored these historical genealogies?
• How have people and communities targeted by such numerical mechanisms resisted, responded, refused, and coopted these technologies? What have been the results of such encounters, and what lessons can contemporary groups dedicated to opposing unfreedom take from these historical examples?
Accepted papers:
Session 1
Preeti Raghunath (University of Sheffield, UK)
Short abstract:
This paper historicises practices of datafication set in place by company-states and colonial states, in a bid to trace the emergence of the global data economy as we know it today.
Long abstract:
Datafication in its present form can be traced back to histories of colonial capitalism (Cieslik and Margócsy, 2022). The emergence of early European corporations predated or coincided with the emergence of the Westphalian nation-state, bringing modern practices of business operations and governance not only to Europe but to Europe’s new colonies as well. This paper looks at precedents to contemporary digital datafication by drawing on its analogue form, associated with an earlier revolutionary communication and technology infrastructure: the transnational railways. It unearths the logics and practices of analogue datafication set forth by the East India Company, a British company-state, and subsequently by the British colonial state engaged in building railway systems in its colonies. In doing so, the paper proposes the framework of Imperial Datafication, specifically drawing on academic literature on British colonial rule in India and archival material on the data practices enacted in the process in the Indian subcontinent. It investigates the amalgam of corporate power, expansionist states, and trans/-national elites, helping us locate and extend our understandings of datafication and development, historically and today. This research primarily draws on institutional archives located in the UK, including the British Library. I read parliamentary papers, colonial and India Office records, legal documents and acts of governance, letters and correspondence between officials and authorities, and related documentation around the colonial railways in India.
Konrad Kopel (University of Warsaw)
Long abstract:
The paper is organized around two questions: how were measures in the former Poland correlated with different power and economic projects, and how did measures shift between being tools of oppression and tools of resistance? Measures differed not only in their scale and points of reference but above all in being parts of different political projects. These projects were oriented toward different values and aims. They were connected with different notions of the world and of environmental management, and thus shaped locality in various ways.
The aim of the paper is to examine three measures as political world-making tools: pług (plow), wół (ox), and łan (lan). Each of them was distributed throughout the former Poland in a different period. However, it is possible to find all of them even in the 18th century, in villages with different juridical organization. They were used to organize land usage, but only łan was a geometrical, quantitative measure. What is more, they emerged through different power and economic regimes connected with various conceptualizations of land and work. Pług and wół were linked with tribal organization (before the 10th century), while łan emerged as a tool of feudal power. They were used not only to subjugate and exploit peasants but also to regulate local power among them. Furthermore, peasants used measures as tools of resistance. The paper shows the specificity of these measures as elements of structures of power and world-making projects. Additionally, the paper traces how these measures shifted between different groups and what political and world-making elements they carried.
Barbara Hahn (Texas Tech University)
Long abstract:
Historians generally view modern corporate enterprise as wholly distinct from the family- and correspondent-based commerce that preceded it—this project connects the dots between the two economic structures. Using digital network analysis, this project traces merchant networks around the world, arguing that the Liverpool cotton merchants built nineteenth-century empire from resource quantification. The Liverpool Cotton Brokers Association tracked global cotton production in ever more standardized ways from 1841 on (the surprisingly late date of the Association’s formation). Its datasets classified the world according to the value it offered British industrialists and provided assessments of the utility and purposes of the world, its peoples, and its rulers, in terms of cotton. This project maps the brokers’ assessments onto state imperial activity and resources. It hypothesizes that merchant networks became the basis for the organization of the world economy in which the Global South supplied raw materials to value-adding manufacturers and merchants in the imperial North. In this way, old-fashioned family business structures organized modern enterprise through marriage, credit, and correspondent relationships too long obscured by the rise of the corporate legal form.
The larger research project, intended for an eventual monograph, examines the brokers’ eighteenth-century domestic worlds and international trade relationships as the basis for Victorian empire, and for the organization of global capitalism into its current forms in the nineteenth and twentieth centuries. This presentation focuses on the quantitative efforts of the Liverpool Cotton Brokers Association and their effects on Victorian imperialism.
Asif Akhtar (London School of Economics and Political Science)
Long abstract:
This paper considers the modalities of disciplinary societies, governmentality, and control societies—broadly outlined in the work of Michel Foucault and Gilles Deleuze—in their mutated postcolonial forms following refraction in the colonial encounter. Taking the case of colonial India and postcolonial Pakistan, the paper genealogically charts the quantification techniques informing these modalities of power from the late eighteenth century to the early twenty-first century. The East India Company deployed techniques of quantification to inform its administration through the enumeration and categorization of crimes and offences, corresponding to the disciplinary power of penalization, which came to be tabulated to expand colonial rule over native populations. During the late nineteenth century, the British Raj deployed different numbering surveys of the Indian population in the technology known as the census, which corresponds to the statistical techniques of governmentality that Foucault argues enfolded disciplinary power. Since the independence of Pakistan in 1947, the postcolonial state has repeatedly failed to adequately deploy the census as an effective technology of government. In the twenty-first century, computational technologies have brought about a new era of datafication of the population—roughly corresponding to Deleuze’s control societies—in Pakistan’s National Database and Registration Authority and draconian cybercrime laws. In terms of this genealogy, the paper argues that the neat partitioning of these modalities of discipline, governmentality, and control does not hold up in the colonial trajectory, given the failures of quantification. The postcolonial state presents a mutated amalgamation of these modalities, requiring enumerative technologies to be scrutinized in their present forms.
Sebastian Fernandez-Mulligan (Yale University)
Long abstract:
Over the twentieth century, scientists, engineers, and philosophers struggled to quantify freedom. These metrics, merging moral and mathematical arguments, were used to justify and define the stakes of international intervention in the Cold War. This talk constructs a genealogy of freedom calculations from 1948 to 1990, beginning with calculations done by statistical physicists in the 1940s and ending with the legacy of the widely used metric devised by Freedom House in the late 1970s. By performing a close reading of the technical formulae for freedom, I connect the changing mathematical assumptions of these calculations to the broader political concerns of Western technocrats. I show that geopolitical fears sparked by decolonization drove a marked revision in the freedom calculations. As Western technocrats became increasingly concerned with the economic and cultural status of newly sovereign nations, calculations shifted from describing individual freedom to describing the institutions of a free society. This change in mathematical form belied a new political assumption—that free citizens were the product rather than the producers of a free society. I conclude by drawing a line between these historical calculations and the quantification of freedom today, showing how midcentury political fears have been laundered into scientific metrics used in museums, public education, and politics.
Amira Moeding (University of Cambridge)
Long abstract:
The paper provides a genealogy of the present by analysing how proponents of 'Big Data' from within Big Tech, such as Alex Pentland (2009; 2012; 2014), Eric Schmidt (2013; 2021), Peter Norvig (2013), and others imagined politics and the state as grounded in 'intelligent' and other forms of data-driven technologies from 1999 to 2016. I focus on their conceptions of the normal citizen subject, the assumed needs of this subject, as well as notions of objectivity and neutrality.
These conceptions of normalcy often stand in tension with the promise that future technologies will provide highly individualised welfare and care solutions based on data collected about an individual (Pentland 2012; Schmidt 2013). Analysing the idea of the normal citizen and the needs of subjects in this literature shows, I argue, for whom future technologies and state services are imagined, and who remains excluded and cannot fully be 'datafied.' The paper proceeds in three steps. First, I present some of the solutions proposed by the authors for political and welfare problems. Second, I point to the role of data and quantification in those solutions and the experimental designs they propose to test them. I then turn to the categorisations involved and how the authors narrate their practices of data production; therein, I show how they rely on historical, often problematic categorisations and practices of quantification. Third, the presentation shows how these narratives produce an 'other' while assuming that they provide solutions that are both perfectly tailored to the individual and 'neutral.'
Kim Fernandes (University of Pennsylvania)
Short abstract:
This presentation attends to the ways that histories of quantifying disability in urban India have influenced the direction of enumeration and identification technologies, generating promises for a more numerically inclusive future while continuing to quantify disability as embodied difference.
Long abstract:
In this presentation, I think alongside histories of quantifying disability in urban India to ask: how do numbers shape the imagination of disabled futures? Drawing on virtual and in-person ethnographic work conducted with disabled people seeking to be identified and enumerated as such through the process of disability certification, the presentation attends to the role that algorithms play in quantifying and re-establishing ideas of normalcy. Focusing in particular on the promises embedded in the rollout of the unique disability ID (UDID) since 2016, I attend to the kinds of imaginaries of the future that it makes possible and the ones that it erases for disabled communities. I also do so through a historical lens on how disability has been constructed in India, and how binaries of the normal and the abnormal have been constructed alongside each other. In attending to the way this kind of identification and/as technology imagines the future for disabled people, I show that it creates an ordinal citizenship, or a belonging by degree, of sorts. Through the presentation’s attempt to conceptualize the UDID’s promise of an inclusive future for disabled people, I demonstrate that the identification infrastructures embedded in the certification process often paradoxically work to exclude disabled people from the more expansive, inclusive futures that technologies of enumeration promise. This presentation interrogates how disability comes to be conceptualized as difference through state quantification across India, and pays close attention to how disabled people have in turn shaped these processes of quantifying difference.
Haley Lepp (Stanford University)
Long abstract:
Peer review is a technique for quantifying the difference between what is and is not publishable in the sciences. One increasingly quantified metric is English. Over 90% of indexed journals in the natural sciences, for example, are published in English, and writing that is not assessed as appropriately English will be rejected. As such, generative AI has been celebrated as a boon for inclusion, allowing scientists to instantly "fix" their writing prior to peer review. At the same time, by performing an automated, imperceptibly quantified version of English, these tools reinforce the global hegemony of English in the sciences by masking the language diversity of academic community members and limiting the exposure of readers to writing in different languages and language registers.
This study explores the genealogy of English metrics in the sciences and the use of generative AI by global scholars to resist those metrics. First, we analyze historical peer reviews and rubrics from major scientific publications to explore how reviewers police, negotiate, and uphold English as a publishing requirement. We compare peer review evaluations that critique English use before and after the introduction of ChatGPT in late 2022, exploring the implications of widespread use on adjudication practices. Next, we interview scholars to probe motivations and perceptions of the use and users of automated writing tools. The goal of this project is to use this moment of change in historical publishing norms as a lens to de-legitimate, un-hide, and re-question the global quantification infrastructures which determine who participates in global academia.