What is it
AI governance often uses shared terms like transparency and fairness, but their meaning can vary in practice. This session compares how courts in the US, UK, Australia and China apply these ideas when using AI.
When
18/03/2026 12:00pm - 1:00pm
Where
Melbourne Connect, Manhari Room, Level 7

Free
Register here

Same Words, Different Worlds: The Illusion of Shared Judicial AI Principles


Global AI governance frameworks increasingly deploy shared terminology—“transparency”, “bias” and “fairness”—creating an illusion of alignment across jurisdictions. Through a comparative analysis of AI deployment in common law courts (United States, United Kingdom, Australia) and Chinese courts, we expose how seemingly universal concepts mask fundamentally divergent interpretations and implementations.

Our analysis reveals stark contrasts across core governance principles. Common law courts conceive of reliability as case-specific accuracy, verified through adversarial contestation and post-hoc sanctions for errors such as AI hallucinations, while Chinese courts prioritize systemic reliability achieved via centralized data curation and ex ante governmental oversight. Bias threatens individual rights and procedural equality in adversarial systems but endangers social harmony and state legitimacy in China. Transparency enables party contestation in common law jurisdictions; in Chinese Smart Courts, it renders processes legible to supervisory authorities rather than to litigants. Procedural fairness protects party autonomy in common law courts but advances governmental effectiveness and stability objectives in Chinese courts.

Beyond doctrine, we identify three extra-legal forces driving these divergences: competing efficiency logics prioritizing different judicial functions; the inherently political nature of AI technologies presupposing particular institutional arrangements; and archival-epistemic structures determining who holds authority over legal knowledge. Common law judges perceive AI as threatening their custodianship of precedent, while Chinese judges, operating as bureaucratic state agents, embrace AI for standardization.

We conclude that global AI governance cannot rely on superficially harmonized principles. Instead, we propose a frame-reflective translational approach whereby jurisdictions explicitly acknowledge discrepant frames and engage in reciprocal translation, preserving distinct legal, political and social commitments while building mutual understanding.

This event is hosted by the Centre for Artificial Intelligence and Digital Ethics (CAIDE).

Presenters

Prof Ernest Lim

Professor Ernest Lim is the inaugural Chan Sek Keong Chair in Private Law at NUS Law. He also serves as Vice Dean (Faculty Development) and Co-Director of the Centre for Technology, Robotics, AI & the Law (TRAIL). He received his DPhil and BCL from the University of Oxford, LLM from Harvard Law School and LLB from NUS. He applies his research expertise in comparative corporate law and governance, as well as other areas of private law, to understand and critique AI. His AI-related publications include The Cambridge Handbook of Private Law and Artificial Intelligence (Cambridge University Press, 2024) (co-edited with Phillip Morgan), “Unpacking the State-Private Nexus in China’s AI Development Path” (2026) 18 Law, Innovation and Technology (forthcoming) (with PY Cheng), “A Legal Framework for Artificial Intelligence Fairness Reporting” (2022) 81 Cambridge Law Journal 610 (with JQ Yap), “B2B Artificial Intelligence Transactions: A Framework for Assessing Commercial Liability” [2022] Singapore Journal of Legal Studies 46, and “Technology vs Ideology: How Far will Artificial Intelligence and Distributed Ledger Technology Transform Corporate Governance and Business?” (2021) 18 Berkeley Business Law Journal 1 (with HY Chiu).

Dr Ilya Akdemir

Ilya Akdemir obtained his JSD and LLM from UC Berkeley in 2023 and 2017 respectively, and his LLB from the University of Kent in 2014. Dr Akdemir’s research and teaching interests lie at the intersection of law and technology, with a particular focus on how today’s data-driven methods, such as machine learning, natural language processing and data science, can be used or misused in the legal domain. He is also interested in developing critically and epistemologically informed legal responses to today’s emerging technologies, including artificial intelligence and its legal and societal implications.