Event Report - It's About People Conference 2025: Perspectives and Limits of Algorithmic Constitutionalism

On 17 March 2025, the Lendület/Momentum Research Group on Algorithmic Constitutionalism hosted an online panel at the It's About People Conference 2025, organized by the Alma Mater Europaea University of Maribor.

Participants of the discussion:

  • Boldizsár Szentgáli-Tóth, Senior Research Fellow, HUN-REN Centre for Social Sciences, Institute for Legal Studies

  • Veronica Mocanu, Lecturer, State University of Moldova

  • Sarah de Heer, PhD Candidate, Lund University

  • Kitti Mezei, Senior Research Fellow, HUN-REN Centre for Social Sciences, Institute for Legal Studies

Moderator: Nóra Chronowski, Professor, HUN-REN Centre for Social Sciences, Institute for Legal Studies

On 17 March 2025, scholars specializing in AI governance gathered for a panel discussion examining constitutional law's adaptation to the transformative role of artificial intelligence (AI) in society. Chaired by Nóra Chronowski, professor at the Institute for Legal Studies, the panel explored whether current constitutional frameworks can effectively regulate AI while safeguarding human rights and democratic values. The discussion revolved around the normative and judicial regulation of AI, its implications for fundamental rights, and the potential for interdisciplinary solutions to bridge the gap between technology and constitutional principles.

In his introductory remarks, Boldizsár Szentgáli-Tóth, principal investigator of the Algorithmic Constitutionalism Momentum Research Group, emphasized the need to rethink traditional legal frameworks to address AI's distinct characteristics. Drawing on ideas from scholars such as Giovanni De Gregorio, he outlined the concept of algorithmic constitutionalism as a response to the growing "human-like" interactions between AI systems and individuals. These systems, he noted, challenge conventional legal categories designed for human relationships. To address these challenges, his research group focuses on protecting fundamental rights—such as the right to a fair trial, freedom of expression, and the right to a healthy environment—while proposing judicial interpretation methods to uphold constitutional safeguards. Szentgáli-Tóth also highlighted the role of soft law (e.g., ethical guidelines) and normative law (e.g., the EU AI Act) in regulating AI. While the AI Act provides a robust framework for high-risk AI systems, judicial case law remains underdeveloped in this area. He urged courts to develop novel interpretative methods to apply constitutional principles to AI-related cases.

Kitti Mezei, a legal scholar specializing in criminal law and AI, delved into the EU AI Act's implications for law enforcement. She discussed how the Act categorizes AI systems into risk tiers, prohibiting those with "unacceptable risks" (e.g., predictive policing and real-time biometric surveillance in public spaces). High-risk applications, meanwhile, require transparency, human oversight, and quality data management. Mezei highlighted the extraterritorial reach of the AI Act, which applies to global actors offering AI systems in Europe. However, she pointed to unresolved challenges, particularly around the "black box" nature of many AI systems, which hinders transparency. She noted the Act's requirement to label manipulated or AI-generated content, such as deepfakes, as a progressive step toward ethical AI governance.

Sarah de Heer, a doctoral candidate from Lund University, explored how the EU AI Act aims to protect fundamental rights while promoting trustworthy AI. She discussed the Act’s emphasis on a "risk-based approach," organizing AI into four tiers: unacceptable, high, limited, and minimal risk. While the Act prohibits AI systems that undermine fundamental rights, de Heer noted that it does not provide for individual rights or remedies, relying instead on consumer protection frameworks. She called for more robust mechanisms, including judicial safeguards, to ensure effective enforcement.

Veronica Mocanu, representing the State University of Moldova, examined the tension between digital democracy and digital repression. She defined digital democracy as leveraging technology to enhance civic participation and governance transparency but warned that unchecked AI use could facilitate repression through surveillance and misinformation. Highlighting the role of ethical AI standards, she argued that balancing technological innovation with democratic values requires a global commitment to data protection and human rights.

Across the presentations, a recurring theme was the need for interdisciplinary approaches to address AI's complex societal implications. Speakers stressed the importance of maintaining a human-centered framework in AI regulation, drawing parallels between AI and environmental law’s precautionary principle. This principle advocates for decision-making based on the best available scientific evidence to mitigate risks while fostering innovation.

The panel also underscored the critical role of judicial interpretation in bridging the gap between legislative inertia and technological advancement. As courts increasingly encounter AI-related cases, they will need to develop robust tests to reconcile statutory requirements with constitutional principles. Finally, the speakers highlighted the ethical dimension of algorithmic constitutionalism, emphasizing the importance of fostering cross-sector collaboration among developers, policymakers, and legal scholars to ensure AI serves human dignity and societal well-being. This panel discussion marks an important step in advancing the study and practice of algorithmic constitutionalism, offering a roadmap for navigating the challenges and opportunities posed by AI.

This report was prepared with the support of the Algorithmic Constitutionalism Research Group (LP2024-20/2024), funded by the Hungarian Academy of Sciences.