
Accepted Paper:

Black(box) Mirror: Amplifying reflexive development practices for trustworthy Artificial Intelligence  
Tessa Oomen (Erasmus University Rotterdam - ESHCC), João Fernando Ferreira Gonçalves (Erasmus University Rotterdam), Selma Toktas (Erasmus School of History, Culture and Communication)


Short abstract:

The pervasive implementation of artificial intelligence (AI) and its societal implications put new emphasis on evaluations of AI. This study responds by developing a framework for evaluative and reflexive development practices for trustworthy AI.

Long abstract:

The pervasive implementation of artificial intelligence (AI) and its potential impact on society have put new emphasis on evaluating AI systems to ensure transparency, explainability, and overall trustworthiness.

Evaluations of AI, such as audits, tend to occur after implementation, allowing for an analysis of the complete assemblage but forgoing prevention of incidents. Preventative approaches, by contrast, rely on the development process itself, applying, for example, security-by-design (SbD), privacy-by-design (PbD), or the ethics guidelines for trustworthy AI.

Unfortunately, both approaches require significant effort, resources, and time. This, combined with the lack of unified definitions for design frameworks, hampers the adoption of either.

To improve evaluations, we aim to develop a framework that supports reflexive development practices for trustworthy AI. Our approach combines exploratory interviews, literature research, and framework development and testing.

Initial interviews show that the required effort and resources, together with the absence of incentives or legal requirements, are significant barriers to evaluating AI development. Moreover, evaluations tend to focus on specific metrics and requirements or are considered merely academic exercises.

Our previous research suggests that AI developers perform practices, knowingly and unknowingly, that support the materialization of trustworthiness in both technical and non-technical understandings. Together, these findings and our earlier insights highlight the need to stay close to the current experiences and practices of AI developers to ensure the most effective adoption of evaluative development practices.

The framework developed in this paper will enable AI developers to communicate their reflections on their work, increasing the transparency of AI practices beyond the AI systems themselves.

Traditional Open Panel P360
Sociotechnical dimensions of explainable and transparent artificial intelligence
Session 2: Wednesday 17 July, 2024