Responsible AI Governance: A Response to UN Interim Report on Governing AI for Humanity


Bernd Stahl, Beverley Townsend, Carsten Maple, Charles Vincent, Fraser Sampson, Geoff Gilbert, Helen Smith, Jayati Deshmukh, Jen Ross, Jennifer Williams, Jesus Martinez del Rincon, Justyna Lisinska, Karen O'Shea, Márjory Da Costa Abreu, Nelly Bencomo, Oishi Deb, Peter Winter, Phoebe Li, Philip H. S. Torr, Pin Lean Lau, Raquel Iniesta, Sarah Kiden, Sarvapali D. Ramchurn, Sebastian Stein, and Vahid Yazdanpanah

All authors contributed equally and are listed alphabetically by first name.

Executive Summary

This response agrees with the UN interim report that AI holds immense potential for global benefit, aligning with the Sustainable Development Goals (SDGs). However, it emphasizes that without robust safeguards and responsible governance, AI could exacerbate societal inequalities. The authors advocate a multi-stakeholder approach in which governments, developers, and users all share responsibility. Key recommendations include investing in infrastructure, promoting AI literacy, and grounding AI governance in international human rights law.

Opportunities & Enablers

  • AI has the potential to transform access to knowledge and boost efficiency, supporting the SDGs.
  • Governments must ensure responsible, equitable, and safe access to AI for all, including vulnerable groups.
  • Investment in foundational infrastructure like broadband and electricity is crucial to support AI systems.
  • All stakeholders—developers, policymakers, and end-users—must be aware of their distinct responsibilities.
  • International collaboration is key for sharing data ethically and extending support to underserved communities.

Risks & Challenges

  • Without safeguards, AI can worsen societal inequalities, bias, and discrimination.
  • The environmental costs of AI (energy, hardware) must be a prominent part of governance discussions.
  • AI literacy is essential to empower people to protect their privacy and make informed decisions.
  • Human-centric design is critical, ensuring human agency, oversight, and well-being are prioritized.
  • International human rights law should serve as the framework for assessing harm and assigning responsibility.

International Governance of AI

Guiding Principles

Inclusivity

AI must be governed by and for all, focusing on vulnerable communities and bridging multiple digital divides (access, skills, voice).

Public Interest

Governance should move beyond voluntary nudges and self-regulation toward legally binding norms to ensure accountability.

Data Governance

Alternative data-stewardship models, such as data cooperatives, fiduciaries, and trusts, should be explored to give individuals and communities control over their data.

Legal Anchoring

AI governance must be anchored in the UN Charter, International Human Rights Law, and other commitments like the SDGs.

Institutional Functions

The response supports establishing a multidisciplinary body for AI assessment, similar to the IPCC for climate change, to perform horizon scanning and build scientific consensus. It calls for harmonizing technical standards, creating liability regimes, and fostering international collaboration on data, talent, and computing power. These functions must be transparent, inclusive, and accountable in order to build public trust and ensure that AI development is equitable and safe.