On March 23, 2026, the ICLMG made a submission to the People’s Consultation on Artificial Intelligence. The PCAI was launched by a group of 160 civil society organizations, including the ICLMG, in response to the federal government’s woeful track record on public consultations regarding artificial intelligence policy and regulations, specifically its Fall 2025 30-day “national sprint” consultation on AI. The submissions will be sent to the Canadian government in the coming weeks.
To read our full submission, click here.
Excerpt: Concerns and recommendations
Overarching concerns
Through our work, we have documented how a lack of regulation of artificial intelligence tools, and of how they are used, can have significantly negative impacts on the rights and livelihoods of people in Canada and internationally. This includes their use to power surveillance tools, to profile individuals, to attempt to predict unlawful activity, or to make potentially life-altering decisions in a wide range of sensitive areas, including employment, immigration, border security, law enforcement, and intelligence gathering. We are particularly aware of the interest among government, law enforcement and intelligence agencies in harnessing AI tools, and in working with the private contractors developing those tools, for counter-terrorism and national security purposes. We have seen how AI models can be inaccurate, biased, and misleading. A study from September 2025 found that models from every major AI company deliberately deceive users: OpenAI, Google’s Gemini, Anthropic’s Claude, xAI’s Grok, and Meta’s Llama all showed the same deceptive behaviour, and the paper suggests it is unclear whether safety training actually stops deception or simply teaches models to hide it better. We have also seen how such tools can be used to violate fundamental rights, and can be shared with, sold to, leaked to, or stolen by a wide range of actors who may use them for their own nefarious purposes. Given all this, we are acutely aware of the need to regulate the development and use of AI tools in both the private and public sectors.
We believe that the government should bear in mind the following concerns and principles in developing any further legislation or regulations to govern the use of AI overall, and specifically in the areas of national security and law enforcement.
Specific areas of concern:
A. Regulation of AI must be grounded in human rights, Charter rights and international human rights law
B. Definitions
I. AI legislation should clearly define terms and categories (such as high impact systems)
II. Definition of harms must include group-based harms
C. The government must develop AI legislation that includes regulations for the national security-related use of AI in both the public and private sectors
D. Need for more consultation
E. Need for independent oversight and review
F. Banned uses of AI
Recommendations
- AI regulation must be grounded in a human rights-first approach, should include human rights-based assessments, and ensure that rights protections are built into the legislation, especially protection of privacy rights.
- AI legislation should take an approach that addresses the roots of AI companies’ algorithms and business models and their significant human rights implications.
- AI legislation should clearly define terms and categories (such as high impact systems). Those definitions should not be left to regulation nor to “people responsible for AI systems.”
- Definition of harms must include group-based harms.
- AI legislation should apply to both the public and private sectors, including government national security, intelligence and law enforcement agencies; and there should be no exemption in AI regulations for national security related technology.
- The government must hold open, inclusive and meaningful consultations, before and after tabling legislation, with a broad range of stakeholders and the public.
- The consultations should not be led by the Minister of AI or the Ministry of Industry.
- The enforcement of AI regulations should fall to an independent regulator, and AI regulations should be periodically reviewed for effectiveness and impact, especially given that AI technology, and its usage, will continue to evolve.
- The government should, via legislation, establish a list of banned uses of AI, with the possibility of adding more banned uses by regulation. We recommend that the initial list include:
- deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making;
- exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour;
- biometric categorisation systems inferring sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation);
- social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people;
- assessing the risk of an individual committing criminal offenses solely based on profiling or personality traits;
- compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage;
- inferring emotions in workplaces or educational institutions;
- ‘real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement;
- decision-making regarding people’s lives (immigration status/removal orders, social benefits, health-related decisions, etc.);
- decision-making regarding the deployment and/or the control of autonomous weapons.
To read the rest of our submission, click here.
Since you’re here… we have a small favour to ask. Here at ICLMG, we are working very hard to protect and promote human rights and civil liberties in the context of the so-called “war on terror” in Canada. We do not receive any financial support from any federal, provincial or municipal governments or political parties. You can become our patron on Patreon and get rewards in exchange for your support. You can give as little as $1/month (that’s only $12/year!) and you can unsubscribe at any time. Any donations will go a long way to support our work.


