
P084


Machine listening: dissonance and transformation 
Convenors:
Juana Catalina Becerra Sandoval (IBM Research)
Edward B. Kang (New York University)
Format:
Traditional Open Panel
Location:
NU-5A47
Sessions:
Wednesday 17 July
Time zone: Europe/Amsterdam

Short Abstract:

Machine listening AI systems are increasingly being used across medical, financial, and security infrastructures. This panel explores the epistemic question of what it means to listen, and more specifically, how listening is transformed through the essentialist logics of artificial intelligence.

Long Abstract:

Listening through and with machines has a centuries-long history in technologies such as the stethoscope, the sound spectrograph, and the telephone. The more recent development of artificial intelligence (AI) technologies, however, which extract, collect, quantify, and parametrize sounds on an unprecedented scale to manage information, make predictions, and generate artificial media, has positioned the intersection of AI and sound as “the ‘next frontier’ of AI/ML” (Kang 2023). Referred to as machine listening systems, these technologies are embedded in medical, financial, security, surveillance, and workplace infrastructures, with crucial implications for how society is and will be organized. In this way, machine listening systems add new valence to the epistemic question of what it means to listen, and more specifically, how listening – as a constructive epistemological process of projection rather than reception – is transformed in and through the essentialist logics of artificial intelligence and machine learning (ML). Indeed, machine listening stands to reconfigure ideas about the body, identity, voice, and space, and to complicate the relationship between ‘listening’ and ‘objectivity,’ especially in contexts such as law and science. To fill a gap in critical AI scholarship, which has largely focused on computer vision, this panel invites Science & Technology Studies (STS) scholars interested in the relationship between AI and sound. Relevant topics include voice biometrics, acoustic gunshot detection, speech emotion recognition, accent matching, and other forms of forensic and medical sound analysis, but also extend to machine listening systems that collect audio data for use in AI models that transform and produce music and speech.
We are especially keen to receive submissions that engage with questions of epistemology and politics as articulated through feminist, critical race, crip, decolonial, and other frameworks grounded in material analyses of power.

Accepted papers:

Session 1: Wednesday 17 July 2024