Artificial Intelligence and the Law: New Challenges and Possibilities for Fundamental Human Rights and Security
The Jack and Mae Nathanson Centre on Transnational Human Rights, Crime and Security will hold a conference titled Artificial Intelligence and the Law: New Challenges and Possibilities for Fundamental Human Rights and Security on March 13 in Room 1014, Helliwell Centre, Osgoode Hall Law School (Keele Campus), and online.
Artificial Intelligence (AI) is dramatically reshaping how people live, work and interact, how societies function and how legal systems adapt to these changes. The integration of machine learning technologies into various decision-making processes carries profound implications for sentencing, taxation, workplace dynamics, surveillance and policing, privacy and financial markets. The rising automation of human activities prompts significant legal inquiries spanning constitutional, contractual and tort issues.
Large language models such as ChatGPT are AI technologies with a range of legal, ethical and societal implications. These models, trained on massive volumes of text data, can generate text resembling human language, enabling tasks like answering questions, writing essays and even crafting poetry. They implicate freedom of expression, the right to information and the democratic process at large. They have the potential to generate misleading, harmful or hateful content, regardless of their programmers’ and owners’ intentions. They could become tools for propaganda or disinformation campaigns. They raise intellectual property questions, particularly when their output is based on pre-existing intellectual or artistic works, and they could lead to mass job automation.
On March 13, we will meet to discuss all these issues with a stellar group of researchers and speakers.
Noon to 12:15 p.m. – Light lunch served
Introduction to the Conference
12:15 p.m. to 2:15 p.m.
Artificial Intelligence and the Law – Barnali Choudhury, Chair
Allan Hutchinson: Reflections on Singularity: AI and Law’s Multiplicity.
Jon Penney: How Safe Are AI Safety Standards?
Carys Craig: The AI-Copyright Trap.
Valerio De Stefano: Artificial Intelligence and Work.
Aida Abraha: Examining AI Governance in the Workplace Context: A Comparative Analysis of Workplace Technology Regulations in Canada, the United States, and the European Union.
François Tanguay-Renaud: Contrasting Police Powers of Detention and Arrest in Canada and the United States: Is There a Place for Predictive AI? And Some Thoughts about Racial Profiling and Its Regulation.
2:15 p.m. to 2:30 p.m.
2:30 p.m. to 4:30 p.m.
Artificial Intelligence and Human Rights Protection – Dean Trevor Farrow, Chair
Sean Rehaag: Rights-Enhancing Tech: Using AI to Open the Black Box of Human Refugee Adjudication.
Jake Okechukwu Effoduh: How Artificial Intelligence is Bastardizing Paradigms of Human Rights in the Third World.
James Sheptycki: AI and the Police Intelligence Division-of-Labour: A Canadian Perspective.
Alexandra Scott: Autonomous Weapons Systems and International Humanitarian Law.
Anthony Sangiuliano: Approaches to Prohibiting Algorithmic Discrimination under the Canadian Human Rights Act.
Luna Xiaolu Li: Regulating Algorithmic Discrimination with Affirmative Action: A Comparative and Interdisciplinary Study.
Aneurin Thomas: Regulating Police Facial Recognition Technology: Issues and Options.
4:30 p.m. to 4:45 p.m.
4:45 p.m. to 6:15 p.m.
Artificial Intelligence, Due Process and Legal Ethics – Roundtable – Valerio De Stefano, Chair
Dean Trevor Farrow, Osgoode Hall Law School
Glenn Stuart, Law Society of Ontario
Amy Salyzyn, University of Ottawa
Patricia McMahon, Osgoode Hall Law School
Richard Haigh and Stephen Fulford, Osgoode Hall Law School
Giuseppina (Pina) D’Agostino, Osgoode Hall Law School
The event will be held in person and livestreamed online. For any further information, feel free to contact Professor Valerio De Stefano at email@example.com, copying firstname.lastname@example.org.