- Convenors:
- Stefan Böschen (Human Technology Center, RWTH Aachen University)
- Gabriele Gramelsberger
- Format:
- Traditional Open Panel
- Location:
- HG-08A00
- Sessions:
- Friday 19 July, 2024
Time zone: Europe/Amsterdam
Short Abstract:
This panel addresses the dynamics of the digitalization of research practices and its consequences for cultures of research. It explores tendencies toward unification and the creation of variety in processes of digitalization. To do so, a combined philosophical-sociological STS perspective is regarded as decisive.
Long Abstract:
Processes of digitalization are typically analyzed as processes of socio-technical unification through digital technologies. In this way, to name but a few, model formations, power relations, and infrastructural formations have been examined, and the technical and methodological as well as cultural and social forces at work in the process have been identified and modelled. At the same time, there are indications that processes of digitalization can sometimes turn out to be more fragmented than expected. This is particularly evident in the context of the digitalization of research practices in different research cultures. The initial assumption of this panel is that this disunity is due to the experimental appropriation and formation of digital technologies in different research cultures. Moreover, a key challenge for STS is that there is as yet no interdisciplinary repertoire of science studies capable of describing such processes with sufficient depth. This is because a philosophical perspective is needed to specifically characterize digitality, and at the same time a sociological perspective is needed on the socio-technical formation processes.
Against this background, the panel sets itself the task of looking at different forms of experimental testing and appropriation of digital technologies in the realm of different forms of research and research-based policy advice and discussing the following questions:
a) What forms of digitalization can be observed in different knowledge cultures?
b) What influence do overarching digital infrastructures of research have on the forms of digitalization in different knowledge cultures?
c) What conceptual-methodological repertoire is necessary to make such processes of digitalization of research practices of epistemic cultures more investigable?
d) How does the diversity of machine learning (ML) architectures increase the digital diversity in research cultures?
Accepted papers:
Session 1: Friday 19 July 2024

Paper short abstract:
One of the ways computers entered scientific cultures is as ideal number-crunching machines which, given the same input, always deliver the same output. While rarely realized in practice, I will argue that this ideal may have contributed to the emergence of the "reproducibility crisis".
Paper long abstract:
Since the turn of the millennium there has been increasing alarm regarding a "reproducibility crisis" in science, leading to calls for overarching requirements to be imposed on scientific practice, one of the most prominent being that of "computational reproducibility," characterized as "obtaining consistent results using the same input data, computational steps, methods, code, and conditions of analysis" (2019 report on Reproducibility and Replicability in Science of the U.S. National Academies of Sciences, Engineering, and Medicine). Yet the variety of "data," "code," or "conditions of analysis" in the practices of different cultures of research makes the operationalisation of such normative statements appear a hopeless challenge, casting doubt on whether such reproducibility was ever achieved in the past. How, then, did computational reproducibility attain and maintain its prominent position? I will argue that its rise went hand in hand with the emergence of the reproducibility crisis, and that both were linked to the expectations of reproducibility raised by the introduction of computer-assisted methods in a growing number of research cultures around the end of the 20th century. The starting point for my presentation will be a paper published in 1992 by geophysicists John Claerbout and Martin Karrenbach, which today is often quoted as marking the beginning of the "reproducibility crisis," but in which the authors in fact welcomed "word processing and software command scripts" as providing the chance to improve reproducibility in computationally aided science.
Paper short abstract:
Software is much more than just code, yet its role in understanding the digitalization of research practices is undervalued. The dynamics of the circulation of software in computational chemistry unveil how software packages were and are distributed, maintained, and licensed under sometimes conflicting norms.
Paper long abstract:
Because it emerged in an era of entrepreneurial science incentives, and because of its proximity to the pharmaceutical industry, computational chemistry has always had to negotiate academic as well as business norms. The community of computational chemists emerged within a plural hardware ecosystem and blossomed between two epochs: no longer in the age of supercomputers, when practitioners were remote from and dependent on the calculation time allocated by computing centers, but not yet in the age of ubiquitous desktop computing. In this specific computing context, the circulation of software proved pivotal for practitioners.
It is epistemically important to describe how software packages were and are distributed, maintained, and licensed under sometimes conflicting academic and business norms. Computational chemistry is thus a field whose peculiarities regarding software highlight computing dimensions that have so far been understudied. In media such as newsletters, mailing lists, and journals' op-eds, computational chemists debate passionately about issues of transparency, openness, dissemination... in short, software issues and the tensions they entail. For example, some scientists advocate open-source software as a necessary condition for sound science, while others defend proprietary packages as a warrant of reliable scientific software. Many lie in between, trying to reconcile academic norms with a sustainable business model. These heated debates help unveil tensions about software in science that would otherwise remain invisible.
Paper short abstract:
This talk focuses on the adoption of artificial neural networks in the search for protein structures in the late 1980s. Drawing on journal articles and their authors' methodological reflections, I will argue that ML was especially akin to the research culture of chemistry.
Paper long abstract:
Computer simulations in chemistry began as early as the 1950s but came into more regular use from the 1970s onwards. To name just a few developments: in 1970, the computer program GAUSSIAN, developed by the group of John A. Pople at Carnegie Mellon University, was released as the first commercially available program for computational chemistry; from the late 1980s onwards, artificial neural networks were used to help predict at least secondary protein structures; and in the 1990s, Density Functional Theory (DFT) changed the way quantum mechanical computations were approximated in computational chemistry.
In my talk, I will focus on the second of these developments, i.e., the use of artificial neural networks and machine learning for the determination of protein structures. I will argue that machine learning fit the research culture of chemistry especially well -- perhaps better than the mathematical formalism of quantum mechanics did. It was learning by pattern recognition and the avoidance of direct appeals to theory that made machine learning particularly akin to chemical thinking. I will argue this point historically by examining journal articles concerned with the use of artificial neural networks in the search for protein structures. A special focus will lie on the authors' reflections on the usefulness of neural networks, their comparisons with other computational tools, and their relationship to theory.
Paper short abstract:
This contribution addresses the role of experimentation in digitalisation by focusing on different practices and their impact on knowledge. It explores the dynamics of experimental research and practices by examining three different contexts: synthetic psychology, HRI, and the reconfiguration of work.
Paper long abstract:
This contribution considers the dynamics of digitalisation by focusing on experimentation and by adopting an ethnographic approach. It addresses experimentation as a privileged means for two important actors in digitalisation: computer science and robotics. It aims to show that experimentation involves various practices that have an impact on the production of knowledge in the fields of science and technology.
Since the advent of the experimental sciences pioneered by Claude Bernard, scientific experimentation has become more or less widespread across a range of practices outside the field of science. From literary experimentation (as in the work of Émile Zola) to the social experiments of the 19th century (as in the case of Charles Fourier), a dynamic is at work that instructs different fields of knowledge in a somewhat iterative and recursive manner.
This contribution will consider this dynamic by exploring experimental practices associated with the fields of computer science and robotics. It will address 1/ artificial intelligence as an experimental modality for modelling intelligence (synthetic psychology); 2/ experimentation in human-robot interaction (HRI); and 3/ experimentation with new modalities of cooperation with intelligent machines in an industrial production setting, where socio-material configurations are tested by integrating them directly into professional practices.
Based on case studies, this contribution will describe the role of experimentation in digitalisation. It will also show that, in the fields of computer science and robotics, experimentation is not very far from Zola's experimental novel: a kind of general enquiry into nature and humankind.