Accepted Paper:
Paper short abstract:
Secondary use of trained AI models is currently one of the most severe regulatory gaps with regard to AI. We propose Purpose Limitation for Models, a concept that limits the use of a trained AI model to the purpose for which it was originally trained and for which its training data were collected.
Paper long abstract:
Imagine medical researchers build an AI model that detects depression from speech data. Say the model is trained on data from volunteer psychiatric patients. This could be a beneficial project that improves medical diagnosis, which is why many consent to the use of their data. But what if the trained model falls into the hands of the insurance industry, or of a company that builds AI systems to evaluate job interviews? In these cases, the model would facilitate implicit discrimination against an already vulnerable group.
There are currently no effective legal limitations on reusing trained models for other purposes (this includes the forthcoming AI Act). Secondary use of trained models thus poses an immense societal risk and remains a blind spot in ethical and legal debate.
In our interdisciplinary collaboration between critical AI ethics and legal studies, we develop the concept of Purpose Limitation for Models to impose suitable limits by empowering society to govern the purposes for which trained models (and training datasets) may be used and reused. Purpose limitation is originally a concept from data protection regulation, but it does not automatically apply to trained AI models, as model data generally do not constitute personal data. We argue that possession of trained models is at the core of a growing asymmetry of informational power between AI companies and society. Limiting this power asymmetry must be the goal of regulation, and Purpose Limitation for Models would be a key step in that direction.
What is limiting artificial intelligence? STS perspectives on AI boundaries.
Session 2, Tuesday 16 July 2024