
Canadian Authorities Unveil Voluntary Code of Conduct for Advanced Generative AI Systems

In a major step towards responsible AI development and management, Innovation, Science and Economic Development Canada has announced a voluntary Code of Conduct for developers and managers of advanced generative AI systems like ChatGPT, DALL·E 2, and Midjourney.

Amidst growing concerns regarding the wide-ranging potential of such AI systems, the Code aims to address and reduce associated risks, particularly those related to health and safety, biases, deception capabilities, and societal impacts.

Generative AI has the potential to revolutionize fields like customer service, corporate knowledge management, and content creation, but its misuse can undermine democratic and criminal justice institutions.

Reacting to the announcement, Microsoft Canada President Chris Barry stated, “At Microsoft, we believe strong laws are needed regulating AI that protect people’s privacy and uphold civil liberties while allowing for positive uses of the technology to continue. The Government of Canada’s announcement today of a voluntary code of conduct for advanced AI systems is a valuable step towards ensuring AI is developed and deployed responsibly by Canadian companies.”

Barry also expressed hope that this move would bolster the G7 Hiroshima AI Process, aiming for a globally interoperable framework. “Codes of conduct are useful tools as governments also work to put in place legislative and regulatory frameworks for AI, including Canada’s proposed Artificial Intelligence and Data Act in Bill C-27.”

“We are committed to sharing our own learnings, innovations, and best practices and look forward to contributing to the process towards a robust Canadian legal and regulatory framework on AI,” he added.

The Code emphasizes six crucial outcomes:

  1. Accountability: Ensuring firms understand their responsibilities.
  2. Safety: Carrying out risk assessments and setting mitigations before deployment.
  3. Fairness and Equity: Addressing potential impacts during development and deployment.
  4. Transparency: Providing adequate information to consumers and experts.
  5. Human Oversight and Monitoring: Keeping tabs on system usage post-deployment and updating as required.
  6. Validity and Robustness: Ensuring systems are secure, operate as intended, and produce outputs that can be understood.

Additionally, the table accompanying the Code specifies measures to be adopted by both developers and managers, with a sharper focus on systems available for public use.

These measures range from maintaining databases of reported incidents and conducting third-party audits to ensuring AI systems are clearly identified when they could be mistaken for humans.

Signatories of the Code pledge not just to adhere to its principles but also to foster a strong, responsible AI ecosystem in Canada. They commit to ongoing collaboration with other players in the industry, researchers, and governments. This collaborative effort aims to drive sustainable growth in Canada, prioritizing human rights, accessibility, environmental sustainability, and leveraging AI to address pressing global challenges.

The move has been welcomed by industry insiders, who see it as a timely initiative in the rapidly evolving landscape of AI. The voluntary nature of the Code, while being a precursor to binding regulations, emphasizes collective responsibility and sets the tone for responsible AI development in Canada and beyond.
