About The Workshop
SW 19 Should the Law Save Us from Ourselves? The Role of Law in the Face of New and Old Vulnerabilities Elicited by AI Technology
Convenors: Dr. Célia Matias, Assistant Professor, Faculty of Law, University of Macau
Dr. Sara Migliorini, Assistant Professor, Faculty of Law, University of Macau
Contact: celiamatias@um.edu.mo; saramigliorini@um.edu.mo
Overview:




In the churn of today’s information overload, these headlines can easily be dismissed as curious anecdotes. For policymakers, however, they may appear as warning signs that a rapidly evolving technology, still poorly understood by the general public, is generating new forms of vulnerability. When faced with potentially dangerous consumer goods, some regulators adopt a precautionary response, which is often to regulate.
A similar reaction arises when individuals turn to large language models (LLMs) for medical or legal guidance. Should the law prohibit this? After all, human advisors in these domains are typically subject to strict qualification and certification requirements. Yet people have long consulted online symptom checkers and medical information websites, sometimes with serious repercussions for their mental health. Should that practice also be constrained?
The acceleration of technological change, especially in AI, compels us to revisit a fundamental question: to what extent should the law protect adult individuals from the consequences of their own choices? Put differently: should the law save us from ourselves?
This workshop addresses the tension between individual autonomy and state paternalism in the digital age and aims to shed light on whether the law should (or should not) respond when emerging technologies tempt us into choices from which we may need protection, even from ourselves.
We welcome submissions that examine whether, and under what conditions, legal intervention is warranted when adult individuals use powerful technologies in ways that primarily put themselves at risk, rather than directly harming others, society at large, or the environment. In an era of complex, opaque, and interconnected systems, how can we meaningfully distinguish among these types of harms, and who bears responsibility?
Core Questions & Discussion Points
- Mapping the Harm Landscape
- How can we distinguish between harms inflicted on others (e.g., User A deploying a biased AI system that causes financial loss to User B), harms to society or the environment (e.g., disinformation eroding public trust, AI-driven resource extraction), and risks primarily to the self (e.g., User A relying on an unregulated AI “therapist” that worsens mental health, or consuming hyper-personalized, addictive content leading to severe self-neglect)?
- Where do more diffuse harms such as data exploitation, pervasive surveillance, or subtle psychological manipulation fit within this taxonomy?
- Paternalism
- Should the law intervene to prevent competent adults from engaging in technologically mediated forms of self-harm? If so, on what grounds (e.g., preventing severe or irreversible harm, safeguarding future autonomy), and how far should such intervention go?
- Do features of contemporary technologies, such as unprecedented scale, speed, personalization, and manipulative design, change the traditional arguments against paternalism?
- At what point does “informed consent” become unattainable because of technical complexity, opacity, or deception (as in the case of deepfakes and synthetic media)?
- Vulnerability and Capacity
- How should legal and regulatory approaches vary according to users’ capacities? What additional safeguards are appropriate for children, whose decision-making abilities are still developing?
- For older adults experiencing cognitive decline, or for individuals with disabilities, where is the line between necessary support and undue paternalism?
- How should we assess capacity and vulnerability in interactions with sophisticated AI interfaces that are designed to appear human-like, empathetic, or trustworthy?
- Liability and Profit
- When self-harm occurs in or through digital environments, where should responsibility and liability be located?
- Should the legal emphasis shift towards regulating providers of potentially harmful technologies, particularly those whose business models depend on maximizing engagement (e.g., platforms that promote addictive design features, developers who market AI systems without adequate safety warnings or protections against foreseeable misuse)?
- Is the principle of caveat emptor ("let the buyer beware") still defensible in an era of pervasive algorithmic nudging and behavioral targeting?
- Redefining ‘Reasonable’ and ‘Informed’
- What does it mean to be an “informed person” when deepfakes obscure the line between reality and fabrication, algorithms construct personalized echo chambers, and complex systems operate in ways that are not intelligible even to experts?
- How should the legal standard of the “reasonable person” evolve to reflect the cognitive burdens, information asymmetries, and widespread misinformation characteristic of the digital environment?
- Can individuals plausibly be expected to anticipate the full range of risks associated with rapidly evolving AI tools?
Call for Participation
We invite scholars and practitioners from law, philosophy, computer science, psychology, sociology, ethics, and related fields to contribute to this interdisciplinary conversation. We welcome abstracts that engage with one or more of the core questions or closely related themes.
Accepted papers will be presented and discussed in a dedicated workshop as part of the IVR 2026 World Congress in Istanbul.