- Convenors:
  - Evan Hepler-Smith (Harvard University)
  - William Deringer (Massachusetts Institute of Technology)
- Stream: Tracks
- Location: 128
- Sessions: Saturday 3 September, 2016
- Time zone: Europe/Madrid
Short Abstract:
The practices involved in producing technical knowledge are now frequently carried out by means of discipline-specific software. On a variety of scales, from entire disciplines to specific research groups, such specialized software programs have reconfigured technical vision and practice.
Long Abstract:
Over the last few decades, in a wide variety of technical fields, the practices and judgments involved in knowledge production have come to be carried out by new means: discipline-specific software. Such specialized software programs have both embodied and reshaped particular modes of vision and practice. This panel will examine five specialized programs that have reconfigured technical vision and practice on a variety of scales, from entire disciplines to specific research groups.
William Deringer will investigate how VisiCalc, a program for spreadsheet-based modeling, allowed certain financial professionals, especially investment bankers, to imagine the space of possible financial action in radically new ways. Stephanie Dick will discuss MACSYMA, an early symbolic and algebraic mathematical system designed to preserve paper and pencil symbolisms and the intuitions they supposedly afforded in automating algebraic work. Evan Hepler-Smith will address ChemDraw, a molecular drawing program that at once supported chemists' development of idiosyncratic visual rhetorics and reinforced a particular standardized form of representation that stripped away such variation. Nadine Levin will discuss XCMS, a cloud-based program for metabolomics at Scripps Research Institute that integrated different forms of statistics and enabled scientists to see biology in new ways. Rebecca Woods will address FlockBook, a software system for managing livestock pedigrees that reconfigured the kinds of vision implicated in making claims about animal breeds.
Accepted papers:
Session 1: Saturday 3 September, 2016
Paper short abstract:
The pioneering spreadsheet software VisiCalc was released for the Apple II in 1979. This paper examines one way VisiCalc afforded new kinds of economic vision: how it enabled bankers to envision leveraged buyouts. In doing so, this study reflects on a new “mode of uncertainty” in modern finance.
Paper long abstract:
Wall Street lore holds that "junk bond" king (and felon) Michael Milken once blamed the 1980s boom in hostile corporate takeovers on the inventors of VisiCalc, the pioneering spreadsheet software released for the Apple II in 1979. VisiCalc holds a celebrated place in computing history, cited as the decisive program that made personal microcomputers into commercial tools and, some claim, spurred the personal computing revolution. But what new kinds of economic thinking, acting, and especially seeing did VisiCalc afford? Taking Milken's mythic comment as a prompt, this talk will explore one aspect of VisiCalc's new visual affordances: the way spreadsheet modeling enabled financiers to envision previously imponderable financial transactions, notably the leveraged buyouts so exemplary of finance in the "go-go" '80s. Projecting the consequences of hypothetical corporate mergers and acquisitions was an intricate, time-consuming task. By making it possible to model an array of scenarios simultaneously, VisiCalc radically restructured bankers' imaginative horizons. It became possible to imagine almost any corporation as a potential takeover target. In attending to these transformations in financial vision, this paper will extend current scholarship in the historical and social studies of finance. First, it will turn the focus onto investment banking, a domain of financial action largely overlooked within a scholarly literature that sees trading as the archetypal activity of financial capitalism. Second, it will elaborate a different "mode of uncertainty" in modern finance, one which relied on calculation to manage future unknowns, but where the quantification of risk was not the central problematic.
Paper short abstract:
The MACSYMA system was developed at MIT beginning in the 1960s. It was meant to be a "mathematical laboratory" that would enable new forms of problem solving and experimentation. I explore the vision of mathematical labor embodied in the system and the novel practices that emerged among its users.
Paper long abstract:
This talk explores new forms of mathematical thinking and doing that developed among users of MACSYMA, "Project MAC's SYmbolic MAnipulator," created at MIT in the late 1960s. The system was envisioned as a "mathematical laboratory" in which users could experiment with formal mathematical systems. At its heart, it was a toolkit of automated mathematical processes, like factorization, integration, and logical deduction. Processes like these are central to the exploration and solution of many mathematical problems, but can be incredibly tedious to do by hand. MACSYMA offered very efficient automated methods for executing them, allowing users to explore, understand, and solve problems in ways that were previously impossible. MACSYMA's developers hoped this would "free the mathematician" for what they believed were more "fundamental" parts of mathematical labor - like formulating conjectures and interpreting results. By the 1970s, MACSYMA was one of the most popular nodes on ARPANET, a precursor to the internet, with thousands of users across the country. But the system turned out to be very hard to use. MACSYMA's developers penned draft after draft of user manuals, tutorials, and primers to help users work with the system. A close reading of these materials reveals that the developers also had to show users how to think differently about problem solving in order to recognize where the system might be useful. This paper explores the vision of mathematical labor that motivated MACSYMA's development and the reality of instituting new approaches to problem-solving throughout its user community.
Paper short abstract:
This paper will address two contrasting aspects of ChemDraw, a molecular drawing program widely used by chemists. Through “connection tables” (a digital file format) and “styles” (parameters of visual rhetoric), ChemDraw has supported the Janus-faced visual epistemology of modern chemistry.
Paper long abstract:
This paper will address ChemDraw, a computer program for drawing the molecular diagrams that chemists refer to as "structural formulas" or "structures." For the last two and a half decades, ChemDraw has been the predominant software system that chemists have used to create images of molecular structures for presentations and publication. This paper thus advances the session's objective of investigating software that has reconfigured vision and practice within particular technical domains.
Two contrasting aspects of ChemDraw have supported and articulated the Janus-faced visual epistemology of contemporary chemistry. First, ChemDraw molecular structures are stored as data in "connection tables," digital file formats originally developed by DuPont to standardize chemical data and deskill chemical data processing. Second, ChemDraw molecular structures are rendered as images according to user-defined "styles": parameters that afford fine-grained control over the size, shape, color, labeling, and other aspects of the appearance of these structural formulas. To many chemists, ChemDraw styles are both a personal signature and an expression of sophisticated, contestable scientific arguments.
I will argue that these two features of ChemDraw - connection tables and styles - respectively fit Bruno Latour's account of scientific inscriptions as "immutable mobiles" and David Kaiser's contrasting account of the "dispersion" of different ways of drawing and interpreting a genre of scientific diagram within different communities of practice. "Drawing theories apart" by means of the visual rhetoric of ChemDraw styles entails "drawing things together" at the level of information infrastructures built around connection tables.
Paper short abstract:
This paper considers the history of data analysis algorithms in the metabolomics software XCMS Online, a cloud-based platform developed in 2012 for the analysis of mass spectrometry data. These algorithms form the backbone of 21st century big data analytics, but have a history dating back to the 1970s.
Paper long abstract:
Over the last decade, the size of post-genomic datasets has grown exponentially, presenting challenges for turning data into biological knowledge. Metabolomics, the "omics" study of metabolism, typifies these challenges because of the complexity of metabolism, which—unlike genes—changes in relation to diet, environment, and disease. To cope with these challenges, researchers have developed various pieces of in-house software, which aid in data standardization, analysis, and organization.
Drawing on 18 months of ethnographic fieldwork with metabolomics researchers, this paper discusses XCMS Online, a cloud-based software platform used for mass-spectrometry data analysis. XCMS Online was developed in 2012 at the Scripps Research Institute, building on an open-source R project that began in 2006. This paper considers the history of the multivariate statistical algorithms encapsulated within XCMS Online, which enable researchers to parse the complexity of metabolic data. I show how multivariate statistics (like Principal Components Analysis)—which now form the backbone of many of the algorithms used in "machine learning" and "big data analytics"—trace their origins in metabolomics to the hybrid field of "chemometrics" in the 1970s.
The paper argues that multivariate statistics enable metabolomics researchers to envision metabolism as a complex problem space. It also argues that more recently, researchers have reconsidered the value of "simple" univariate statistics in attempts to make sense of metabolic complexity. Overall, this paper contributes to STS by examining the material practices underlying so-called "big data" as well as the social and historical forces that have shaped the technical practices of data-intensive science.