
OHCHR Publishes a Taxonomy of Human Rights Risks Connected to Generative AI

Recently, we observed an article titled Toward Humanist Superintelligence on microsoft.ai. That article, dated November 6, 2025, was credited to Mustafa Suleyman. We continue to recommend that humanists read and evaluate Suleyman’s comments about that company’s aims.

In the meantime, we further note that the United Nations Human Rights Office of the High Commissioner (OHCHR) has published a document titled Taxonomy of Human Rights Risks Connected to Generative AI. The introduction to the 22-page document states that it “explores human rights risks stemming from the development, deployment, and use of generative AI technology. Establishing such a rights-based taxonomy is crucial for understanding how the United Nations Guiding Principles on Business and Human Rights (UNGPs) should be operationalised in addressing human rights risks connected to generative AI. This taxonomy is concerned with demonstrating how the most significant harms to people related to generative AI are in fact impacts on internationally agreed human rights.”

We urge humanists to read the OHCHR document and the related article on its website, and to reflect upon how it relates to the objectives of existing and emerging commercial interests, such as Microsoft, though by no means limited to Microsoft. The context of the UN’s work and our own investigation is essential: we must ensure discussion is oriented to concrete human dignity rather than abstract technical issues or priorities set by commercial interests focused on profit-generating activities.

From a humanist standpoint, the UN’s taxonomy could be an important starting place to center people and ethics — not profit or innovation — in policy decisions about AI’s future.

Consider:

  • Grounding AI governance in shared values — by linking risks to the UN Guiding Principles on Business and Human Rights (UNGPs), it provides a practical, universally recognized ethical framework
  • Amplifying disenfranchised voices — it explicitly highlights that generative AI often exacerbates risks for already vulnerable groups, including women, girls, and populations in the Global South
  • Addressing consent at scale — because these models often use large datasets scraped from the internet, people may not know or be able to give informed consent when their data is collected for AI training

Matters such as the ethical oversight of the advent and implementation of this massively powerful new technology are not beyond our human ability to navigate. As Suleyman has observed, humanism contains an essential ethical toolkit. We caution that humanists must ensure that decisions about which humanist tools are used, and how they are used, remain in the appropriate hands.

AI Disclosure

This article was drafted using a process that included the use of artificial intelligence tools. If you have any stylistic or editorial concerns or find factual errors or omissions, please let us know.

Up For Discussion

If you’re interested in analyzing and discussing this issue, there are actions you can take. First, here at Humanist Heritage Canada (Humanist Freedoms), we are open to receiving your well-written articles.

Second, we encourage you to visit the New Enlightenment Project’s (NEP) Facebook page and discussion group.

Citations, References And Other Reading

  1. Featured Photo Courtesy of:
  2. https://microsoft.ai/news/towards-humanist-superintelligence/
  3. https://www.indigo.ca/en-ca/building-a-god-the-ethics-of-artificial-intelligence-and-the-race-to-control-it/9781493085880.html
  4. https://www.ohchr.org/sites/default/files/documents/issues/business/b-tech/taxonomy-GenAI-Human-Rights-Harms.pdf
  5. https://unric.org/en/protecting-human-rights-in-an-ai-driven-world/

By continuing to access, link to, or use this website and/or podcast, you accept the HumanistFreedoms.com and HumanistHeritageCanada.ca Terms of Service in full. If you disagree with the terms of service in whole or in part, you must not use the website, podcast or other material.

The views, opinions and analyses expressed in the articles on Humanist Freedoms are those of the contributor(s) and do not necessarily reflect the views or opinions of the publishers.

Building a God by Christopher DiCarlo

We were pleased to learn that Christopher DiCarlo’s new book, Building a God: The Ethics of Artificial Intelligence and the Race to Control It is now available via Amazon and other booksellers. We’re looking forward to acquiring a signed copy, as soon as we can!

Dr. DiCarlo’s previous titles include: How to Become a Really Good Pain in the Ass: A Critical Thinker’s Guide to Asking the Right Questions (currently in its fifth printing) and So You Think You Can Think? Tools for Having Intelligent Conversations and Getting Along.

You may also be familiar with DiCarlo’s recent podcast work on All Thinks Considered.

In Building a God, Dr. DiCarlo explores the profound implications of artificial intelligence surpassing human intelligence—a destiny that seems not just possible, but inevitable. At this critical crossroad in our evolutionary history, DiCarlo, a renowned ethicist in AI, delves into the ethical mazes and technological quandaries of our future interactions with superior AI entities.

From healthcare enhancements to the risks of digital manipulation, this book scrutinizes AI’s dual potential to elevate or devastate humanity. DiCarlo advocates for robust global governance of AI, proposing visionary policies to safeguard our society.

AI will positively impact our lives in myriad ways: from healthcare to education, manufacturing to sustainability, AI-powered tools will improve productivity and add ease to the most massive global industries and to our own personal daily routines alike. But we have already witnessed only the tip of the iceberg when it comes to the risks of this new technology: AI algorithms can manipulate human behavior, spread disinformation, shape public opinion, and impact democratic processes. Sophisticated technologies such as GPT-4, DALL-E 2, and video deepfakes allow users to create, distort, and alter information. Perhaps more troubling is the foundational lack of transparency in both the utilization and design of AI models.

What ethical precepts should be determined for AI, and by whom? And what will happen if rogue abusers decide not to comply with such ethical guidelines? How should we enforce these precepts? Should the UN develop a Charter or Accord which all member states agree to and sign off on? Should governments develop a form of international regulative body similar to the International Atomic Energy Agency (IAEA) which regulates not only the use of nuclear energy, but nuclear weaponry as well?

In this incisive and cogent meditation on the future of AI, DiCarlo argues for the ethical governance of AI by identifying the key components, obstacles, and points of progress gained so far by the global community, and by putting forth thoughtful and measured policies to regulate this dangerous technology.

Up For Discussion

If you’re interested in analyzing and discussing this issue, there are actions you can take. First, here at Humanist Heritage Canada (Humanist Freedoms), we are open to receiving your well-written articles regarding artificial intelligence.

Second, we encourage you to visit the New Enlightenment Project’s (NEP) Facebook page and discussion group.

Citations, References And Other Reading

  1. Featured Photo Courtesy of: Christopher DiCarlo
  2. https://www.prometheusbooks.com/9781493085880/building-a-god/
  3. https://www.amazon.com/s?k=9781493085880
  4. https://forum.mobilism.org/viewtopic.php?f=892&t=5835191
  5. https://sanet.st/blogs/ai-ebooks/building_a_god_the_ethics_of_artificial_intelligence_and_the_race_to_control_it.5038031.html
  6. https://www.criticalthinkingsolutions.ca/biography
  7. https://allthinksconsidered.com/
