
Accepted Contribution:

Annotated by anthropologists: a collaborative ethnographic approach to datasets, annotation and a ML classifier for social media analysis  
Ann-Marie Wohlfarth (University of Tübingen)


Short abstract:

This paper reflects on the epistemological and methodological challenges of conceptualizing ‘ground truths’ and the sociomaterial practices of producing and annotating ethnographically informed datasets for social media analysis in an interdisciplinary setting.

Long abstract:

Beauty standards and image-based social media intricately shape the aesthetic regimes of platforms and, therefore, how users perceive and represent themselves online. The interdisciplinary project "Curating the Feed" investigates how the socio-technical entanglements of digital practices, user interfaces, and algorithmic systems co-curate image feeds on social media platforms such as Instagram and TikTok. Our ongoing collaboration between cultural anthropology and computer science involves developing a machine learning system that classifies posts in image feeds to determine whether they are perceived as perpetuating idealized notions of beauty. To indirectly approximate the ‘blackboxed’ recommender systems of the home feed, we aim to evaluate the one-sidedness of image feeds and experiment with algorithmic intervention as well as ethnographic methods. This paper reflects on the epistemological and methodological challenges of conceptualizing ‘ground truths’ and the sociomaterial practices of producing and annotating datasets for social media analysis in an interdisciplinary setting. Building on the concept of ‘ground truth tracings’ (Kang 2023), we address the collaborative effort to construct and translate a qualitative phenomenon into a quantitative formalization. Drawing on critical dataset studies, we furthermore explore how ethnographic methods can be utilized to support calls for greater transparency and reflexivity. By exploring the ML system’s ‘learnability’ to classify content with regard to idealized representations of bodies, this research aims to contribute to debates about algorithmic bias and transparency as well as the adaptability of ethnographic methods.

Combined Format Open Panel P045
Developing co-laborative methods for digital transformations
  Session 2, Tuesday 16 July, 2024