- Convenor: Sally Wyatt (Maastricht University)
- Format: Traditional Open Panel
Short Abstract
This panel welcomes examples of hostile technologies and/or theorisations of hostility. Technologies of war are intended to be hostile to enemy combatants. The panel is concerned with less obvious examples, and with how, when, where and why technologies become hostile, to whom, and to what.
Description
Technologies of war and incarceration maim and kill, harming people, animals, the natural environment and our sense of what it is to be human. Artificial intelligence (AI) is used to profile and identify individuals so that they can be deported or denied welfare benefits, a use that is unconstitutional in many liberal democracies. Assembly lines and other technologies of mass production have long been deployed to maximise returns to owners and shareholders, and deskill workers.
These extreme examples are increasingly familiar from contemporary news reports. There are many other less dramatic examples of how technologies may not be deliberately designed to cause harm, but may become hostile. For example, poorly maintained road infrastructures might result in an increase in accidents, harming people and animals.
In this panel, examples of ‘hostile technologies’ are welcomed, particularly when the focus is on when, where and how technologies become hostile to people as individuals or groups; to flora, fauna and the environment; and to ideals such as trust and democracy. Abstracts addressing how examples of hostile technologies generate new ways to conceptualise technologies are also welcome. Is it productive to think about ‘hostility by design’, as a counter to the normally positive discourses about transparency, democracy or privacy by design?
This panel builds on a workshop held in Maastricht in June 2025, which may result in an edited volume (currently under review). But the workshop was just a beginning, and contributions to the panel will take the conversation further.
Accepted papers
Session 1
Paper long abstract
Digital platforms are built to extract, store, and make legible the behavioral data of their users (Zuboff 2019). This extractive capacity is not neutral, but in commercial contexts its effects are diffuse, distributed across populations as the ordinary cost of connectivity. This paper asks what happens when infrastructures designed for commercial extraction operate across divergent political conditions, and draws on ethnographic research with communities and their diasporas to examine how platform logics produce different outcomes depending on context.
Across a range of settings, the same platform architectures that enable targeted advertising can enable other forms of targeted intervention. Content moderation systems shape what documentation remains visible and what disappears. Metadata from ordinary digital activity can be repurposed by institutional actors beyond its original intent. Communication restrictions turn digital infrastructure into a tool of selective access rather than universal connectivity. In each case, the platform does not need to be redesigned. Its existing logic is sufficient. I argue that the effects of a technology cannot be assessed through its design alone. They must be assessed through the political conditions under which it operates (Larkin 2013). Platforms are already extractive in their orientation, but different political contexts transform the consequences of that extraction in ways that demand closer ethnographic attention. Recognizing this continuum matters because it challenges accounts of technology that rely on a distinction between intended and unintended effects, a distinction that becomes difficult to sustain when extraction is the design.
Paper short abstract
Blind people face barriers to thriving in the workplace due to inaccessibility. Drawing on an ethnographic project collaborating with blind and sighted people working at a Danish municipal organization, I explore ocular-centrism as a form of hostility with societal and embodied costs.
Paper long abstract
Research in Science and Technology Studies (STS) and Computer-Supported Cooperative Work (CSCW) reveals that disabled people face systemic barriers in the workplace. Disabled workers remain at a higher risk of unemployment due to inaccessible offices, inaccessible technology, and lack of support. Addressing workplace exclusion and inaccessibility requires a deeper understanding of how mundane workplace infrastructures and practices impact disabled people's everyday lives. In this context, I contribute to conceptualizations of hostility in design and STS through the analysis of a year-long collaborative project with blind and sighted employees at an organization dedicated to supporting blind and low-vision people in obtaining employment. Drawing on semi-structured interviews with seven participants (blind and sighted employees), two focus groups with blind employees, and video ethnographic materials accompanied by image descriptions analyzed with blind participants, I examine how ocular-centrism (privileging vision in knowledge production, design and social interactions) manifests as a form of infrastructural and interactional hostility. The study reveals a paradox: while the organization seeks to foreground blind knowledge and advocacy, the legacy of ocular-centrism in technological tools, designs, and social norms persists within its practices, technologies, and communication modes. In response, blind employees develop forms of subversion and repair to trouble ocular-centrism. These include relations of interdependence with sighted and blind colleagues, braille hacks, humor, complaints, and mutual aid. Importantly, while blind employees reclaim their right to an accessible workplace, these acts of resistance result in physical and emotional exhaustion and a lack of trust in the organization and in societal promises of social inclusion.
Paper short abstract
This paper analyses a university building as a sociotechnical system where spatial arrangements produce subtle forms of hostility. Through the case of the NIG building in Vienna, it examines how accessibility, spatial knowledge, and layout shape everyday dynamics of inclusion and exclusion.
Paper long abstract
This work addresses the university building as a constellation of socio-technological artefacts in which the negotiation over space accessibility produces everyday hostility in the form of disorientation and exclusion. I argue that this building, seen as a sociotechnical system, promotes exclusionary politics at some of the points where the interests of different human and non-human actors intersect.
The paper focuses on minor instances of hostile spatial arrangements at the "New Institute Building" (NIG) of the University of Vienna. Examples of such instances include spaces behind closed doors whose function is known only to a limited number of users. Such a close focus allows us to denormalise everyday spatial practices that are usually considered politically neutral.
Methodologically, the study combines an architectural analysis of the public/private gradient of the spaces with an analysis of accessibility for different groups of users. Accessibility here is seen not only as a physical feature, but also as the readability of the building layout and ease of orientation for different user groups. I argue that the varying levels of transparency between departments in communicating about the functions of different spaces play a role in the dynamics of inclusion and exclusion. Thus, users’ access to knowledge is incorporated in accessibility diagrams as an enabling or limiting factor.
Paper short abstract
This paper conceptualizes hostility as a consequence of architectural choices that organize cooperation. Drawing on Swiss cases, it shows how integration-oriented systems become hostile by denying institutional plurality, and explores distributed, non-integrative counter-hostile architectures.
Paper long abstract
Discussions of digital hostility focus on malicious intentions or exclusionary practices, locating hostility in the technology’s design or usage. This paper proposes a different perspective by conceptualizing digital hostility as a property of architectural choices that organize cooperation between institutional actors.
Drawing on cases of digital integration in Switzerland—farm data platforms, an e-ID infrastructure, and e-Health initiatives—we show how hostility emerges when digital architectures restrict institutional plurality and reconfigure boundaries of authority in the name of efficiency, sovereignty, or the wellbeing of citizens. We argue that integration-oriented architectures function as devices of forced pacification: they make cooperation conditional upon alignment, thereby marginalizing dissenting positions, chosen politico-juridical principles (e.g., subsidiarity), or pre-existing local arrangements.
By contrast, we explore distributed, non-integrative architectures that enable partial, situated, and reversible forms of cooperation, without imposing as a prerequisite the resolution or neutralization of institutional or jurisdictional divergences. Rather than eliminating antagonism, these approaches sustain cooperation by recognizing that some divergences are constitutive and need not be pacified in order for cooperation to occur. We conceptualize them as counter-hostile architectures: they do not abolish divergence, but they resist hostility in the form of architected denial of plurality.
This reframing contributes to STS debates on infrastructures and platforms as devices of governance, by showing that integration is not a neutral condition of cooperation. It is an architectural choice that can redistribute authority and responsibility among institutional actors.
Counter-hostile architectures that do not impose integration can constitute a political and infrastructural alternative to digital hostility.
Paper short abstract
Scholarship has shown how profit-driven algorithms amplify anger and division, as these emotions drive engagement and affect on social media platforms. The paper explores ways to counter this hostility by design through different kinds of content moderation, from regulation to digital literacy.
Paper long abstract
The design of systems that structure information and communication infrastructure can be considered a major challenge of the 21st century. Platform designs—interfaces, recommendation and ranking mechanisms, policies, content moderation mechanisms, and links to paid partnerships—are not neutral but manifest platform politics and economies. A growing body of literature from science and technology studies, feminist and free speech advocates, activists and practitioners helps to map the field of industrial content moderation systems, composed of algorithmic moderation systems (AMS) and human moderators. We are only beginning to grasp these moderation systems' impact on conflict dynamics, social cohesion and democratic practices, as well as on human moderators' mental health. Noble's groundbreaking analysis in "Algorithms of Oppression" demonstrates bias in technology and states that "Racism and sexism are part of the architecture of the language of technology" (Noble 2018). More authors are focusing on the reproduction and deepening of racist, classist, and sexist power relations through platform design and algorithmic moderation systems, expressed in concepts such as "automating inequality" (Eubanks 2017), "platformed racism" (Matamoros-Fernández 2017) and "algorithmic misogynoir" (Marshall 2021, citing Moya Bailey). At the same time, studies on hostile working conditions for data workers and human moderators are growing.
Bringing into dialogue feminist science and technology studies, social design and content moderation studies, the paper explores ways to counter this hostility by design through different approaches to content moderation. I distinguish between internal (platform policies), external (national and supranational policy frameworks) and cultural/educational approaches.
Paper short abstract
What is hostile about state identification systems when they are designed to only inhabit binary gender as the default mode of life? This talk focuses on the Danish CPR-number as a hostile binary design that is encoded into the digitalised welfare state and systematically excludes trans lives.
Paper long abstract
Defaults are powerful. They are designed to seem apodictic and objective, and often hide in plain sight as seemingly 'natural'. Binary gender classification within nation state systems is no different. In this way, what is hostile about these state identification systems when they are designed to only inhabit the idea of the gender binary as the singular, default mode of life? How are trans bodies coded under the design of such state technologies? How do these identification systems, in accordance with colonial imaginaries of the gender binary, represent a state precariousness that excludes trans people by default from their design and becomes increasingly embedded with algorithmic modes of verification and surveillance? In posing these questions, this presentation turns to the case of the personal identification number, the Danish CPR-number, as an example of such hostile binary design that is encoded into the foundation of the digitalised Danish welfare state with unexpected, yet systematic ramifications for trans lives that do not conform to this binary notion of liveability. This analysis thus works to unveil the stealthy, dangerous hostility underlying the cisnormative design of technologies and its exclusionary interactions with colonial legacies of state infrastructures as they implicate trans lives situated within the nexus of sex/gender systems, nation state policies, and algorithmic infrastructures.
Paper short abstract
We examine the Dutch Participation Act in Balance, analysing how “trust” and “balance” are institutionalized in welfare reform. We argue that these concepts may re-legitimise infrastructures of monitoring and conditionality, framing reform as infrastructural repair rather than transformation.
Paper long abstract
Until 2026, Dutch social assistance was regulated by the 2015 Participation Act, which consolidated earlier welfare provisions while intensifying welfare conditionality. We conceptualize the Participation Act as a socio-technical assemblage in which policy logics, municipal governance, and digital infrastructures translate assumptions about deservingness into practices of monitoring and sanctioning. Thus, we consider the Participation Act a hostile technology. In 2026, the Participation Act was revised into what is called the Participation Act in Balance, which aims to embed practices based on trust and to rebalance the relationship between citizens and the state. This presentation examines how the concepts of trust and balance are constructed and institutionalized, and what this reveals about the reconfiguration of welfare governance. First, we examine how trust is configured through administrative rules, discretionary space, and information obligations. We discuss how trust is embedded within a broader logic of responsibilization, where beneficiaries are responsible for demonstrating compliance. Second, we analyse the political meaning of "balance" as a discursive technology and aim to understand how this concept reveals or conceals power asymmetries. Third, we theorize welfare reform as infrastructural repair. We examine how reforms incorporate critique of punitive welfare governance while stabilizing the underlying socio-technical infrastructure of welfare conditionality, and under what conditions reform functions primarily as the repair of institutional legitimacy rather than as institutional transformation. With this threefold focus, we gain an understanding of how welfare reforms can re-legitimise disciplinary infrastructures.
Paper short abstract
Hostility operates through digital governance that distributes burden, delay, surveillance, and contestability loss onto marginalized groups within settler colonial inequality. This paper uses the Canadian aporetic condition as counter-policy to map jurisdiction and design countermeasures.
Paper long abstract
Hostility operates through routine governance in digital portals, screening systems, identity verification tools, and compliance workflows. These systems distribute burden through documentation demands, procedural delay, opaque decisions, and intensified surveillance. Marginalized groups experience these burdens with particular force because inequality shapes exposure to administrative control and settler colonial governance shapes jurisdiction, civic standing, and institutional authority through durable asymmetries.
This paper develops an analytic vocabulary for hostile technologies by treating interface rules, evidentiary thresholds, audit practices, and maintenance routines as public code: design commitments that structure access and allocate administrative labour. Hostility is defined as a patterned relation among burden (unpaid applicant labour and verification loops), exposure (screening and risk classification), delay (tempo control and waiting), voice (practical contestability and appeal intelligibility), and responsibility (decision ownership across policy layers, vendors, and model governance).
State theory supplies a strategic-relational account of how institutional priorities are realized through administrative form. The paper advances the Canadian aporetic condition as a counter-policy tool that maps jurisdiction, legitimacy claims, civic standing, time controls, and contestability onto system requirements and user pathways. Methods draw on publicly available Canadian policy, oversight, and procurement materials, paired with structured interface walkthroughs that trace decision points, evidence demands, and appeal routes.
The paper outputs a set of implementable counter-policy clauses for democratic repair: jurisdictional traceability, evidentiary fairness, contestability pathways that function in practice, temporal non-coercion through response and retention norms, and responsibility visibility across institutional and vendor chains. The contribution provides a register usable by auditors, designers, and policy staff.
Paper short abstract
Facing an infrastructure of digital borders, Iranian users rely on fragile and unsafe techniques to bypass blockages and access the Internet and Web services. I offer examples from an ethnography of their challenges and solutions to discuss how and to whom technology turns hostile.
Paper long abstract
Iranian Internet users are between a rock and a hard place. The state project to build a "National Information Network" (NIN) is promoted as bringing a fast, safe and equitable Internet. In reality, it provides the infrastructure for censorship and occasional shutdowns to facilitate bloody crackdowns, as happened in January 2026. Moreover, because of international sanctions, users are losing access to a growing number of globally popular Web services. As a result, they are experiencing a tiered Internet which limits some users while supporting others in their work. To navigate it, users resort to assemblages of devices, circumvention software, services and trusted communities to create linkages which bypass checkpoints and reinstate free navigation. Although fragile and in need of constant maintenance, these assemblages are relied on by almost every Iranian user for access. Drawing on a review of official documents regarding the NIN, interviews with people whose jobs rely on the Internet, and participant observation with non-technical users, I characterize the Internet in Iran as hostile. I demonstrate that digital borders installed by censorship and geoblocking technologies force users to choose between privacy and the efficiency of their assemblages. Moreover, the case suggests hostility as a tiered condition caused by reconfigurations and co-option of networking devices and protocols in ways not anticipated in their initial design. I argue that attention to the mundane and fragile practices of users is crucial, as it uniquely reveals the emergence of new temporalities, regimes of connection/disconnection, and divisions of local/global spaces that put users in the current tiered condition.
Paper short abstract
Zero Trust redefines cybersecurity through continuous verification and default suspicion. Rather than protecting a fixed perimeter, it governs access through pervasive monitoring, reshaping trust, inclusion, and exclusion as unstable and continuously recalculated conditions.
Paper long abstract
For a long time, information security was organised through a spatial logic of inclusion and exclusion. Perimeter-based architectures drew a boundary between a secure “inside” and a threatening “outside,” granting trust by default to internal users and devices. In this framework, trust was not an affective or moral quality, but an architectural assumption tied to position within the system. As digital infrastructures became more complex, however, this model was increasingly challenged. The rise of Zero Trust (ZT) marks a decisive shift: no user, device, or connection is ever trusted in advance, and every interaction becomes a site of continuous verification.
This paper analyses ZT as a paradigmatic case of hostility by design. Drawing on critical approaches in Science and Technology Studies and a historical analysis of cybersecurity rationalities, we argue that ZT does not simply enhance security; it redefines inclusion and exclusion through pervasive, granular, and continuous monitoring. Access to network resources becomes conditional on the ongoing evaluation of identities, behaviours, devices, and contextual signals, while suspicion is institutionalised as a default design principle.
By shifting trust from a stable architectural attribute to a volatile and continuously recalculated variable, ZT turns surveillance into an ordinary condition of digital participation. It thus reveals a broader post-trust rationality in which security is pursued not by stabilising trusted relations, but by continuously managing their fragility. The paper concludes that ZT invites us to rethink hostile technologies as infrastructures that operationalise suspicion and reshape the conditions of access, legitimacy, and belonging.
Paper short abstract
From the power required to train and use models to the energy needs of entire data centres, AI’s technological assemblages consume electricity on an elusive scale. This paper unpacks how AI’s costs are narratively reconciled by presenting nuclear power as a ‘green techno-fix’.
Paper long abstract
Artificial intelligence (AI) entails vast energy demands and creates significant carbon costs. From the power required to train and use models to the energy needs of data centres, AI’s technological assemblages consume electricity on an elusive scale. This paper unpacks how tech corporations narratively reconcile AI’s costs by presenting nuclear power as a ‘green techno-fix’. While the classification of nuclear energy as ‘green’ and ‘sustainable’ remains contested, the coupling of AI and nuclear imaginaries reveals a mutual legitimation, pitching technological solutionism against the environmental crisis. In analysing corporate communication and nuclear energy investments (by Alphabet/Google, Amazon, Apple, Meta, and Microsoft; from September 2024 onwards), the paper shows how large technology corporations sustain visions of AI progress and promises amid ecological contradictions. It examines how these corporations narratively intertwine imaginaries of AI with those of nuclear energy, investigating the discursive construction and legitimation of AI and its entanglement with the political economy of energy infrastructures. The analysis identifies a circular narrative structure through which AI and nuclear energy are co-positioned. First, corporations deploy expansive imaginaries of AI (ranging from breakthroughs in medicine and climate modelling to economic revitalisation) to morally sanction AI’s escalating energy consumption. Second, nuclear energy is rebranded as a ‘carbon-free solution’, with its legitimacy enhanced through association with AI innovation and societal progress. Such narratives sidestep persistent societal, environmental, and political contestations surrounding nuclear energy. They moreover sideline the question of whether AI can fulfil the societal promises invoked.
Paper short abstract
This paper investigates how technocratic, colonial and capitalist paradigms in environmental science have manifested in hostile technologies for conceptualizing and visualizing climate change, technologies that paralyze publics instead of mobilizing them to shape common futures collectively and carefully.
Paper long abstract
Scientific visualizations of climate change have received ample criticism from visual theorists and critical scholars: the perspective in climate graphics is total, yet grounded nowhere. They represent apocalyptic futures that immobilize publics and foreclose possibilities to shape common futures in any other way than a scientific and technocratic one. Meanwhile, historians of environmental science have demonstrated its institutional and ideological ties to a capitalist, hegemonic and technocratic status quo. The discipline has therefore been oriented towards cleaning up the messes of late industrial modernity and has little incentive to facilitate stable, caring relations between people and land. By comparing the history of environmental science with critical inquiries into climate visualizations, this paper investigates how technocratic, capitalist paradigms became manifest in hostile technologies for conceptualizing climate change, technologies that paralyze instead of mobilize. Can we understand the widespread burnout from climate activism, whereby decision-making defaults to hegemonic institutions with strong ties to fossil capital and fascist regimes, through the lens of technological hostility? And ultimately, how can this understanding steer scientific practice and technological design towards future-making in the image of care, humility, diversity and resilience?
Paper short abstract
Drawing on post-ANT literature, and by following online discourses and inscribed infrastructure, this paper aims to uncover the enacted realities of cars in the city. Hostility to and from the car drives city development, but why does everyone hate, or love, the mundane parking spot?
Paper long abstract
“Uppsala is no longer a city for everyone!!!!” begins a comment on Upsala Nya Tidning's (UNT) social media page. It can be found underneath an opinion piece which argued that it should “cost (a lot)” to park on the city streets. UNT is the main newspaper in the Uppsala region, and readers often engage in the comments, perhaps especially on the question of cars and their parking space: whether there should be more or fewer cars and parking spots, more or fewer roads, and so on. The questions in the debate come down to hostile technologies, but who (or what) is hostile to whom?
With a basis in post-ANT, this article aims to uncover and explore the various cars, parking spots, roads and drivers that can be found throughout the city and online. Crucially, the ontonorms prescribing different ways of being with a car present a difficult task for local politics, a task which might not be solvable unless the underlying ontologies that politicians are engaging with are recognised. For if the ontological status of the car makes it part of what it is to live a good life, then raising the price of street parking is not just an administrative decision, it is societal ostracism. In the words of the same comment with which we began: “Shame on you for making the city so inaccessible, but your time will come when you will feel what it’s like to feel outside of society!!!!”