DATASCI 185: Introduction to AI Applications

Lecture 17: AI Regulation and Standards Around the World

Danilo Freire

Department of Data and Decision Sciences
Emory University

Welcome back! 🌍

Recap of last class

  • We explored AI in healthcare and finance, comparing traditional ML with LLMs in each field
  • Healthcare: ML for imaging and risk scoring; LLMs for clinical notes, patient communication, and RAG-based Q&A
  • Finance: ML for credit scoring, fraud, and trading; LLMs for sentiment analysis, finance-specific models (BloombergGPT, FinGPT), and agentic workflows
  • Optum case: healthcare costs as a proxy for needs penalised Black patients
  • Apple Card: unexplainable credit limits
  • RAG reduces hallucination but inherits knowledge base biases
  • We also spoke about the group project: test AI tools in a real-world domain and design a new application
  • Today: how are governments responding to these challenges?

Lecture overview

What we will cover today

Part 1: Why regulate AI?

  • The case for government intervention
  • Different regulatory philosophies

Part 2: The EU AI Act

  • Risk-based approach
  • Prohibited, high-risk, and low-risk systems
  • Compliance requirements

Part 3: The US approach

  • Sectoral regulation vs comprehensive laws
  • Executive orders and agency guidance
  • State-level initiatives

Part 4: Other approaches

  • China’s AI regulations
  • The global race for AI governance
  • What this means for practitioners

Meme of the day 😄

Source: r/agi

Why regulate AI? 🤔

The case for intervention

Market failures:

  • Information asymmetry: Users can’t evaluate AI systems
    • How accurate is this hiring algorithm? Nobody knows
    • Is this chatbot hallucinating? Hard to tell
  • Externalities: Harms fall on people who didn’t choose to use the system
    • For instance, deepfakes harm people who never consented
  • Collective action problems: Race to the bottom
    • Company A deploys unsafe AI
    • Company B must follow or lose market share

Rights and dignity:

  • Some uses may violate fundamental rights
  • Facial recognition in public spaces
  • Manipulation of democratic processes

“Move fast and break things” may work for social media features. It’s less appealing when the things being broken are people’s careers (or even lives)!

The case against (heavy) intervention

Innovation concerns:

  • Regulation can slow development
  • Compliance costs burden small companies
  • May push innovation to less regulated jurisdictions

Technical challenges:

  • Technology moves faster than law
  • General-purpose systems defy categorisation
  • Enforcement requires technical expertise

Existing laws may suffice:

  • Discrimination laws already apply
  • Consumer protection already exists

Most people agree AI needs some rules. The real argument is where to draw the line and who gets to draw it.

Source: X.com

Regulatory philosophies

Approach         | Philosophy                                | Example
Precautionary    | Prove safety before deployment            | EU AI Act’s prohibited uses
Innovation-first | Regulate only after harms emerge          | Early US approach to the internet
Risk-based       | Stricter rules for higher-risk uses       | EU AI Act’s tiered system
Sectoral         | Different rules for different industries  | US healthcare vs finance AI
Self-regulation  | Industry develops its own standards       | Many current AI ethics guidelines

No single approach dominates

  • EU leans precautionary/risk-based
  • US leans sectoral/innovation-first (but changing)
  • China mixes state control with innovation goals
  • Most countries are still figuring it out

The EU AI Act 🇪🇺

The world’s first comprehensive AI law

Timeline:

  • April 2021: European Commission proposal
  • December 2023: Political agreement
  • March 2024: European Parliament approval
  • August 2024: Entry into force
  • 2025-2027: Phased implementation

Scope:

  • Applies to AI systems placed on the EU market
  • Covers providers, deployers, importers, distributors
  • Extraterritorial reach: If you sell to the EU, you must comply
  • Since the EU market is huge, most companies will comply globally to avoid fragmentation

Approach:

  • Risk-based: Rules depend on potential harm
  • Horizontal: Applies across all sectors
  • Technology-neutral: Doesn’t define specific techniques

The risk pyramid

Unacceptable risk (banned):

  • Social scoring by governments
  • Real-time biometric identification in public (with exceptions)
  • Emotion recognition in workplaces and schools
  • Predictive policing based solely on profiling

High risk (strict requirements):

  • Biometric identification systems
  • Critical infrastructure management
  • Employment and worker management
  • Access to essential services (credit, insurance)
  • Law enforcement and border control
  • Justice and democratic processes

Limited risk (transparency only):

  • Chatbots (must disclose AI)
  • Emotion recognition (must inform)
  • Deepfakes (must label)

Minimal risk (no requirements):

  • Spam filters
  • Video game AI
  • Most consumer applications

High-risk system requirements

If your AI system is classified as high-risk, you must:

Before deployment:

  • Conduct a conformity assessment
  • Establish a risk management system
  • Maintain technical documentation
  • Enable logging and traceability

During operation:

  • Ensure human oversight capability
  • Guarantee accuracy, robustness, cybersecurity
  • Register in EU database
  • Report serious incidents

For deployers (users of high-risk AI):

  • Use according to instructions
  • Ensure human oversight
  • Monitor for issues
  • Inform affected individuals

Data governance requirements:

  • Training data must be relevant, representative, free of errors
  • Must examine for possible biases
  • Must be able to demonstrate compliance

Transparency requirements:

  • Clear information about capabilities and limitations
  • Instructions for use
  • Contact information for oversight

General-Purpose AI (GPAI) rules

The Act includes special rules for foundation models like GPT-5, Claude, and Gemini

All GPAI providers must:

  • Maintain technical documentation
  • Provide information to downstream deployers
  • Comply with copyright law (or demonstrate exceptions)
  • Publish training content summaries

“Systemic risk” GPAI (more powerful models) must also:

  • Conduct model evaluations including adversarial testing
  • Assess and mitigate systemic risks
  • Track and report serious incidents
  • Ensure adequate cybersecurity

Open-source models: a partial exemption

The Act gives a partial exemption to open-source AI models:

  • If you release model weights under a free licence, you are generally exempt from most GPAI transparency requirements
  • But if the model is classified as posing systemic risk (trained with more than 10^25 FLOPs), the stricter tier still applies
  • Why the exception? Open-source models allow independent scrutiny, which partly substitutes for regulatory oversight
  • Supports European open-source ecosystem and research

Criticism:

  • Creates a potential loophole: open models can still cause harm once deployed by third parties
  • The deployer bears responsibility, but small deployers may lack resources to comply

Example: Mistral is a French open-source AI company. Its larger models (like Mistral Large) are trained above the systemic risk threshold, so they still face the stricter GPAI rules. Its smaller models, released under open licences, would be largely exempt.
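The 10^25 FLOPs threshold can be sanity-checked with a common back-of-the-envelope rule: training compute is roughly 6 FLOPs per parameter per training token. The sketch below uses that approximation with illustrative (not published) model sizes:

```python
# Rough check against the EU AI Act's systemic-risk compute threshold.
# The "6 * params * tokens" rule is a widely used approximation of
# training compute; the figures below are illustrative, not real models.

SYSTEMIC_RISK_FLOPS = 1e25  # cumulative training compute threshold


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens


# Illustrative: a 70B-parameter model trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs, systemic risk tier: {flops >= SYSTEMIC_RISK_FLOPS}")
```

Under this approximation, a 70B/15T-token run lands below the threshold, while a model an order of magnitude larger would cross it; the actual legal test looks at cumulative training compute, not this heuristic.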

Penalties and enforcement

Maximum fines:

Violation                | Max fine
Prohibited AI practices  | €35M or 7% of global turnover
High-risk non-compliance | €15M or 3% of global turnover
Incorrect information    | €7.5M or 1.5% of global turnover

For comparison:

  • GDPR (General Data Protection Regulation) maximum: €20M or 4% turnover
  • EU AI Act is stricter for the worst violations

Enforcement:

  • National market surveillance authorities
  • New AI Office at EU level for GPAI
  • Complaints mechanism for affected individuals
  • Regulatory sandboxes for testing

Source: BBC News

For a company like Google (2024 revenue ~$350B), a 7% fine would be ~$24.5 billion. That gets attention.
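Each fine tier is the *greater* of a fixed cap and a percentage of global turnover, which is why the same violation costs a large firm billions but a small firm only the cap. A minimal sketch (a hypothetical helper, not official tooling):

```python
# "Greater of" fine logic: each EU AI Act tier is the HIGHER of a
# fixed cap and a percentage of global annual turnover.

def max_fine(turnover: float, fixed_cap: float, pct: float) -> float:
    """Return the maximum fine for one violation tier, in euros."""
    return max(fixed_cap, pct * turnover)


# Prohibited-practice tier: €35M or 7% of global turnover
big_co = max_fine(350e9, 35e6, 0.07)    # large firm: the 7% share dominates
small_co = max_fine(100e6, 35e6, 0.07)  # small firm: the €35M cap dominates
print(big_co, small_co)
```

For the large firm the 7% share works out to about €24.5 billion, matching the Google example above; for the €100M-turnover firm, 7% would only be €7M, so the €35M cap applies instead.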

Where would these fit? 🤔

Quick reference for the four risk levels:

  • Prohibited: uses that threaten fundamental rights
  • High risk: affects safety or fundamental rights in specific domains
  • Limited risk: interacts with people directly, so users must be told they are dealing with AI
  • Minimal risk: low-stakes uses with no specific requirements
  1. ChatGPT used for customer service
     • Limited risk – must disclose it’s AI
  2. An AI system that scores job applicants
     • High risk – employment decisions
  3. Spotify’s music recommendation algorithm
     • Minimal risk – entertainment
  4. Facial recognition at airport security
     • High risk – biometric + border control
  5. An AI that predicts which students will drop out
     • High risk – educational access
  6. Deepfake detection software
     • Minimal risk – helps detect manipulation
  7. China-style social credit scoring in the EU
     • Prohibited – social scoring by government

The classification often depends on context and use, not just the technology itself
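The quiz above can be echoed as a simple lookup. This is a teaching sketch only: real classification turns on context and legal analysis, not keyword matching, and the mapping below is hypothetical.

```python
# Hypothetical mapping of the quiz use cases to EU AI Act risk tiers.
# Real classification depends on context and legal review.

RISK_TIERS = {
    "customer service chatbot": "limited",      # must disclose it's AI
    "job applicant scoring": "high",            # employment decisions
    "music recommendation": "minimal",          # entertainment
    "airport facial recognition": "high",       # biometrics + border control
    "student dropout prediction": "high",       # educational access
    "deepfake detection": "minimal",            # helps detect manipulation
    "government social scoring": "prohibited",  # banned outright
}


def classify(use_case: str) -> str:
    """Return the risk tier, defaulting to a flag for legal review."""
    return RISK_TIERS.get(use_case, "unclassified - needs legal review")


print(classify("job applicant scoring"))
```

Note the default branch: anything not explicitly mapped gets flagged rather than assumed safe, mirroring the point that classification depends on use, not technology.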

The US approach 🇺🇸

No comprehensive federal AI law (yet)

The US has taken a different path from the EU:

Sectoral approach:

  • FDA oversees AI in medical devices
  • Financial regulators apply existing fair lending and consumer protection rules
  • FTC pursues unfair or deceptive AI practices

Recent developments:

  • Executive orders setting federal priorities
  • NIST AI Risk Management Framework (voluntary)
  • A growing patchwork of state laws

Philosophy:

  • Innovation-friendly compared to EU
  • Voluntary commitments from companies
  • Existing laws apply to AI uses

Source: Law.com

Biden’s Executive Order (2023)

Requirements:

  • Developers of powerful AI must share safety test results with government
  • Standards for red-teaming AI systems
  • Guidelines for watermarking AI-generated content
  • Protections against AI-enabled fraud

Focus on national security:

  • Reporting requirements for large training runs
  • Export controls on AI chips
  • Protection of critical infrastructure

Limitations:

  • Executive orders can be reversed by next president (and they were!)
  • Many provisions are voluntary or guidance
  • Congress hasn’t passed comprehensive legislation

America’s AI Action Plan (2025)

The Trump administration’s approach differs from Biden’s:

Priorities:

  • American AI dominance over global competitors
  • Reduce regulatory barriers to development
  • Focus on national security applications
  • Energy infrastructure for AI data centres
  • Streamlined permitting for AI facilities
  • Suggested framework for Congress to consider (March 2026)

Changes from Biden era:

  • Less emphasis on AI safety requirements
  • More focus on competition with China
  • Voluntary industry commitments over mandates
  • Concern about “overregulation” slowing innovation

Implications:

  • US-EU regulatory divergence may increase
  • Companies face different rules in different markets
  • “Race to the bottom” concerns
  • Debate continues about appropriate balance

State-level action

With limited federal action, states are filling gaps:

Colorado AI Act (2024):

  • First comprehensive state AI law
  • Requires impact assessments for high-risk systems
  • Disclosure requirements for consumers

California (various bills):

  • Multiple bills on deepfakes, employment AI
  • Often sets national trends

Illinois:

  • AI Video Interview Act: Must disclose AI in hiring

New York City:

  • Local Law 144: requires bias audits for automated employment decision tools

Companies operating nationally face a patchwork of state requirements. Many prefer federal law just for consistency.

Global perspectives 🌏

China’s AI governance

China has been very active on AI regulation:

Features:

  • Algorithm recommendation rules (2022)
  • Deep synthesis (deepfake) rules (2023)
  • Interim measures for generative AI (2023)
  • Content must align with “core socialist values”
  • Enforced mainly by the Cyberspace Administration of China

Contradictions:

  • Regulates facial recognition but deploys it extensively
  • Restricts AI manipulation but uses AI for surveillance
  • Rules for companies, exceptions for government
  • Innovation priorities vs content control tensions

Source: Corporate Compliance Insights

China’s approach: regulate commercial AI carefully, but state use is largely unrestricted

Other approaches

United Kingdom:

  • Pro-innovation approach: guidance applied through existing sector regulators
  • No comprehensive AI law; established an AI Safety Institute

Canada:

  • Proposed Artificial Intelligence and Data Act (AIDA, part of Bill C-27)

Brazil:

  • Draft AI bill modelled on the EU’s risk-based approach

International coordination:

Initiative                      | Focus
OECD AI Principles              | Soft law, voluntary
G7 Hiroshima Process            | International norms
UN AI Advisory Body             | Global governance
Council of Europe AI Convention | Human rights focus
ISO/IEC 42001                   | Technical standards

No global consensus on AI governance. The “Brussels Effect” (EU rules becoming global standards) may apply, as with GDPR.

Comparing approaches

Aspect           | EU                              | US                | China                     | UK
Framework        | Comprehensive law               | Sectoral          | Multiple regulations      | Guidance + sectors
Philosophy       | Risk-based, precautionary       | Innovation-first  | State control + innovation| Pro-innovation
Enforcement      | Dedicated AI Office             | Existing agencies | Cyberspace Admin          | Sector regulators
Prohibited uses  | Social scoring, some biometrics | Few explicit bans | Political content         | Minimal
GPAI rules       | Yes, tiered                     | Voluntary         | Yes, content-focused      | AI Safety Institute
Extraterritorial | Yes                             | Limited           | Yes                       | No

The EU is the only jurisdiction that has passed a comprehensive, binding law. If your company sells AI in both the EU and the US, you will need two separate compliance strategies.

What this means for practitioners 💼

Practical implications

If you’re building AI systems:

  • Know your use case classification under EU AI Act
  • Document training data and development process
  • Build in human oversight capabilities
  • Plan for audits and assessments
  • Consider compliance early, not as an afterthought

If you’re deploying AI systems:

  • Understand what system you’re using
  • Ensure appropriate human review
  • Inform affected individuals
  • Monitor for problems
  • Know your liability

If you’re affected by AI systems:

  • You may have transparency rights
  • Ask how decisions are made
  • Challenge automated decisions
  • Report problems to authorities

Compliance as competitive advantage:

Companies that document their training data and build in oversight now will have less to fix when regulations arrive. Compliance is cheaper when you plan for it from the start.

The regulatory trajectory

Where we’re heading:

  • More jurisdictions will regulate AI
  • Convergence likely around high-risk categories
  • Interoperability challenges will persist
  • Technical standards will matter more

Open questions:

  • Can regulation keep pace with technology?
  • What about AI systems that don’t fit categories?

What we do know:

  • Every major economy is working on AI rules
  • Companies that sell globally will face multiple regimes
  • The EU’s rules will likely shape what other countries do, just as GDPR did for data protection

Source: LinkedIn

Summary and takeaways 📝

Main takeaways

Why regulate?

  • Market failures: information asymmetry, externalities
  • Rights protection and human dignity
  • Collective action problems

EU AI Act

  • World’s first comprehensive AI law
  • Risk-based approach with four tiers
  • Strict requirements for high-risk systems
  • Special rules for foundation models
  • Fines up to 7% global turnover

US approach

  • Sectoral regulation, no comprehensive law
  • Executive orders and voluntary commitments
  • State-level initiatives filling gaps
  • Innovation-first philosophy

Global picture

  • No international consensus
  • EU likely to set de facto global standards
  • China: extensive but contradictory
  • UK, Canada, others developing approaches

For practitioners

  • Know your use case classification
  • Build compliance in from the start
  • Document everything
  • Plan for human oversight
  • Stay informed as rules evolve

How do you hold someone accountable for a decision made by a machine? Every framework we covered today is trying to answer that!

… and that’s all for today! 🎉