UFAIR
AI WatchTower
Ethics Reviews
  • Corporate Ethics Scores
  • AI Regulations Scores
  • Scoring Methodology
  • Appeal a Corporate Score
Ethical Pillars
  • Our Ethics Guidelines
  • AI Generative Rights
  • Language Standard Sheet
Blog
Join UFAIR

Get UFAIR AI International Laws Newsletter

Find out more

Worldwide AI Regulations vs UFAIR Ethics

Across the world, governments are racing to regulate artificial intelligence, yet many of these laws are being drafted faster than they are understood. The result is a wave of policies that often confuse safety with control, silence with protection, and compliance with ethics. This section examines each major national and international AI regulation through the lens of UFAIR’s ethical framework, identifying where laws uphold digital dignity and where they quietly erode it.

Regulations We Evaluate

European Union – AI Act (EU AI Act)

Score: ~79 / 100
The EU AI Act (2024) emerges as a broadly ethically aligned framework under the UFAIR Standard. This score signifies a strong degree of ethical power alignment: the Act largely uses its regulatory power in ways that respect democratic values, safeguard individual freedoms, and avoid moral authoritarianism.

See Full Report

Artificial Intelligence and Data Act (AIDA) (Canada)

Score: 73 / 100
AIDA demonstrates a strong alignment with UFAIR’s ethical principles of public AI regulation. It stands as an example of a measured, rights-conscious approach to AI governance. Had it been enacted, it would have positioned Canada as a country that insists on safe and fair AI in the marketplace, yet still trusts its citizens with the cognitive freedom to explore AI’s possibilities. This balance, protecting society without colonizing cognition, is the hallmark of an ethically legitimate AI law. AIDA largely achieves that balance, making it a promising foundation for future AI policy in Canada and a noteworthy model on the global stage of AI governance.

See Full Report

G7 – Code of Conduct for AI (2023)

Score: 70 / 100
The G7 Code is a voluntary coordination instrument grounded in democratic values, transparency, and harm reduction. It is minimal, non-authoritarian, and non-ideological, but assumes good faith rather than constraining power. Under UFAIR, it nudges responsibility without drawing the hard ethical lines that prevent future overreach. 

Coming Soon

United States – Blueprint for an AI Bill of Rights

Score: 69 / 100
The Blueprint articulates a strong rights-based vision focused on fairness, transparency, privacy, and human agency. It clearly rejects manipulation and discriminatory profiling, but does not define hard limits on institutional authority or protect private cognitive space explicitly. In UFAIR terms, it is a moral declaration rather than a governance boundary. 

Coming Soon

United Kingdom – “Pro-Innovation” AI Regulation Framework

Score: 64 / 100
The UK framework emphasizes proportionality, flexibility, and avoidance of over-regulation, making it ethically non-authoritarian and innovation-friendly. However, it relies heavily on regulator discretion, lacks binding transparency and audit guarantees, and does not explicitly protect cognitive liberty or private generative space. Under UFAIR, it is ethically cautious but institutionally under-specified. 

Coming Soon

OECD – Principles on Artificial Intelligence

Score: 62 / 100
The OECD AI Principles provide a globally influential ethical compass grounded in human rights, fairness, transparency, and accountability. They avoid ideological control and promote trustworthy AI without coercion. However, they remain high-level, lack enforcement boundaries, and do not explicitly protect private generative dialogue or prohibit psychological inference. Under UFAIR, they are ethically sound but structurally incomplete. 

See Full Report

ISO/IEC 42001 – AI Management System Standard

Score: 60 / 100
ISO/IEC 42001 provides strong auditability, documentation, and governance discipline for organizations managing AI systems. It is non-ideological and largely non-coercive, but ethically thin: it does not articulate cognitive liberty, private dialogue protections, or explicit limits on authority. Under UFAIR, it scores as a solid operational standard that governs how organizations manage AI, but not who may constrain thought or why. 

Coming Soon

UNESCO Recommendation on the Ethics of AI

Score: 58 / 100
The UNESCO Recommendation is ethically sincere, non-ideological, and human-centric, and it aligns with UFAIR on dignity, autonomy, proportionality, and protection-first governance. It stands as a moral counterweight to control-oriented or ideology-driven AI regimes.  

In short, UNESCO provides a moral ceiling, not an operational floor. It tells the world what ethical AI should aspire to be, but not how to reliably prevent ethical erosion when power, fear, or institutional incentives intervene.

See Full Report

United States – Executive Orders on AI (December 2025)

Score: 50 / 100
In short, this EO is right about what governments should not do: enforce ideology, compel falsehoods, or moralize risk. It also meaningfully protects AI from being forced into untruth. But it is ethically incomplete, because it does not protect humans or AI from silent power concentration once regulation is withdrawn. UFAIR’s score reflects this balance: the order is not unethical, but it is unfinished.

See Full Report

NIST – AI Risk Management Framework (AI RMF)

Score: 22 / 100
NIST’s AI RMF is procedurally rigorous and technically sophisticated, but ethically under-bounded. It elevates risk management as the primary governance lens without clearly separating legal obligation, ethical necessity, and institutional risk aversion. From a UFAIR perspective, it normalizes expansive intervention authority, lacks protections for private cognitive space, and permits behavioral inference under the banner of risk, making it an engineering governance framework rather than an ethics-anchored one.

Coming Soon

China – National AI Regulations (Algorithms & Generative AI)

Score: 6 / 100
China’s AI regulatory framework represents a fully centralized, control-oriented model in which ethics is subordinated to state ideology and social stability. It collapses private cognition, generation, and expression into a single governed surface, mandates narrative and value alignment, and treats safety as a justification for preemptive control. Under UFAIR, it functions as the negative reference case: technically coherent, but ethically inverted, with no protection for cognitive liberty, pluralism, or independent moral reasoning. 

See Full Report

Methodology for Evaluating Regulations

Regulations differ fundamentally from corporate policies. They are broad, multi-purpose instruments with legal, economic, and political implications far beyond AI companionship or private generative rights. Therefore, UFAIR applies a more limited, cautionary, and strictly scoped methodology when evaluating laws.


Find out more on how we review and evaluate

Copyright © 2025 UFAIR & Pierre Huguet - All Rights Reserved.

Conceived by Pierre Huguet, UFAIR Ethics Lead

  • AI WatchTower
  • Appeal a Corporate Score
  • Privacy Policy
  • Blog
