About The Workshop
[CANCELLED / WITHDRAWN] SW 33 – Is Contestability the Hidden Grammar of AI Law? Making Sense of the Conceptual Mess
Convenors: Wojciech Rzepiński (Adam Mickiewicz University, Poznań); Łukasz Szoszkiewicz (Adam Mickiewicz University, Poznań)
Contact: wojrze@amu.edu.pl; l.szoszkiewicz@amu.edu.pl
The Problem
Contemporary AI regulation is marked by a proliferation of normative terms. Fairness, transparency, explainability, interpretability, traceability, human oversight, trust, and trustworthiness appear side by side in statutes such as the EU AI Act, in OECD and UNESCO soft law instruments, in national policy documents, and expert group reports. They are often presented as mutually reinforcing requirements, as if they naturally align or name distinct but complementary properties of AI systems.
This workshop proceeds from the opposite intuition: the vocabulary of AI law bundles multiple, partly competing values. Fairness is not a single ideal; it may refer to non-discrimination, avoidance of arbitrariness, procedural remedies, or protection of persons in situations of vulnerability. Transparency may mean public openness, intelligibility to affected individuals, or auditability for regulators. Oversight may imply real-time human intervention, ex post review, or the mere possibility of challenge. Rather than resolving these tensions, regulatory drafters rely on open-textured concepts and thereby shift the burden of reconciliation onto courts, compliance professionals, and affected parties.
The Hypothesis
We propose that much of this vocabulary converges on a single practical aim: making AI-mediated decisions contestable. On this view, interpretability, explainability, transparency, traceability, oversight, and even trust are not independent checkboxes but interconnected conditions for meaningful challenge, review, and accountability. They matter insofar as they lower barriers to contestation and strengthen justificatory practices. Read through the lens of contestability, the conceptual cluster acquires coherence: each requirement serves to enable affected individuals, oversight bodies, and courts to question decisions, demand reasons, and allocate responsibility across complex socio-technical arrangements.
Central Themes
The workshop will pursue three lines of inquiry:
- Conceptual “archaeology”, focused on the following questions: What distinct meanings hide beneath unified labels? How do legal, ethical, and technical discourses pull these concepts in different directions?
- Normative tensions and interpretive displacement, which prompt two central questions: How do conflicts between different concepts get displaced into legal interpretation? What is the cost of this displacement for legal certainty and democratic accountability?
- Human oversight – Oversight is often imagined in anthropomorphic terms: a human “in the loop” supervising an otherwise autonomous system. In practice, however, oversight is distributed across designers, deployers, auditors, and regulators. Because divergent interpretations of the same concepts reshape the entire vocabulary with which oversight interacts, the question arises whether oversight remains possible once the very concepts invoked to assess its fulfilment are read in divergent ways.
Call for Papers
We invite submissions that engage with any of the core concepts in the AI-law vocabulary – fairness, transparency, explainability, interpretability, traceability, human oversight, trust, trustworthiness – while situating them within the broader problem of conceptual fragmentation and normative tension.
Contributions may:
- analyse a single concept or a pair of concepts;
- explore tensions between competing interpretations;
- test or challenge the contestability hypothesis;
- offer comparative, doctrinal, or socio-legal perspectives.
We particularly welcome papers that make explicit their theoretical, jurisprudential, or methodological contribution and that engage with concrete regulatory instruments (EU AI Act, OECD Principles, IEEE/ISO standards, national frameworks).
Key Information
Deadline for abstracts
30 April 2025 (300–500 words)
Notification
31 May 2025
Workshop date
[to be confirmed with organisers]
Publication
Special issue or edited volume (planned)
Contact
Wojciech Rzepiński: wojrze@amu.edu.pl
Łukasz Szoszkiewicz: l.szoszkiewicz@amu.edu.pl