DATASCI 185: Introduction to AI Applications

Lecture 17: AI Regulation and Standards Around the World

Danilo Freire

Department of Data and Decision Sciences
Emory University

Welcome back! 🌍

Recap of last class

  • We explored AI applications in healthcare and finance
  • Healthcare: ML for imaging and risk scoring; LLMs for clinical notes, patient communication, and RAG-based Q&A
  • Finance: ML for credit scoring and fraud detection; LLMs for sentiment analysis and agentic workflows
  • Optum case: healthcare costs as a proxy for needs penalised Black patients
  • Apple Card: unexplainable credit limits eroded public trust
  • RAG reduces hallucination but inherits knowledge base biases
  • Today: how are governments responding to these challenges?

Lecture overview

What we will cover today

Part 1: Why regulate AI?

  • The case for government intervention
  • Different regulatory philosophies

Part 2: The EU AI Act

  • Risk-based approach
  • Prohibited, high-risk, and low-risk systems
  • Compliance requirements

Part 3: The US approach

  • Sectoral regulation vs comprehensive laws
  • Executive orders and agency guidance
  • State-level initiatives

Part 4: Other approaches

  • China’s AI regulations
  • The global race for AI governance
  • What this means for practitioners

Meme of the day 😄

Source: r/agi

Why regulate AI? 🤔

The case for intervention

Market failures:

  • Information asymmetry: Users can’t evaluate AI systems
    • How accurate is this hiring algorithm? Nobody knows
    • Is this chatbot hallucinating? Hard to tell
  • Externalities: Harms fall on people who didn’t choose to use the system
    • For instance, deepfakes harm people who never consented
  • Collective action problems: Race to the bottom
    • Company A deploys unsafe AI
    • Company B must follow or lose market share

Rights and dignity:

  • Some uses may violate fundamental rights
  • Facial recognition in public spaces
  • Manipulation of democratic processes

“Move fast and break things” may work for social media features. It’s less appealing when the things being broken are people’s lives.

The case against (heavy) intervention

Innovation concerns:

  • Regulation can slow development
  • Compliance costs burden small companies
  • May push innovation to less regulated jurisdictions

Technical challenges:

  • Technology moves faster than law
  • General-purpose systems defy categorisation
  • Enforcement requires technical expertise

Existing laws may suffice:

  • Discrimination laws already apply
  • Consumer protection already exists

The debate isn’t whether to regulate, but how much, what, and when.

Source: X.com

Regulatory philosophies

  Approach           Philosophy                                 Example
  ----------------   ----------------------------------------  ---------------------------------
  Precautionary      Prove safety before deployment            EU AI Act’s prohibited uses
  Innovation-first   Regulate only after harms emerge          Early US approach to the internet
  Risk-based         Stricter rules for higher-risk uses       EU AI Act’s tiered system
  Sectoral           Different rules for different industries  US healthcare vs finance AI
  Self-regulation    Industry develops its own standards       Many current AI ethics guidelines

No single approach dominates

  • EU leans precautionary/risk-based
  • US leans sectoral/innovation-first (but changing)
  • China mixes state control with innovation goals
  • Most countries are still figuring it out

The EU AI Act 🇪🇺

The world’s first comprehensive AI law

Timeline:

  • April 2021: European Commission proposal
  • December 2023: Political agreement
  • March 2024: European Parliament approval
  • August 2024: Entry into force
  • 2025-2027: Phased implementation

Scope:

  • Applies to AI systems placed on the EU market
  • Covers providers, deployers, importers, distributors
  • Extraterritorial reach: If you sell to the EU, you must comply
  • Affects companies worldwide

Approach:

  • Risk-based: Rules depend on potential harm
  • Horizontal: Applies across all sectors
  • Technology-neutral: Doesn’t define specific techniques

The risk pyramid

Unacceptable risk (banned):

  • Social scoring by governments
  • Real-time biometric identification in public (with exceptions)
  • Emotion recognition in workplaces and schools
  • Predictive policing based solely on profiling

High risk (strict requirements):

  • Biometric identification systems
  • Critical infrastructure management
  • Employment and worker management
  • Access to essential services (credit, insurance)
  • Law enforcement and border control
  • Justice and democratic processes

Limited risk (transparency only):

  • Chatbots (must disclose AI)
  • Emotion recognition (must inform)
  • Deep fakes (must label)

Minimal risk (no requirements):

  • Spam filters
  • Video game AI
  • Most consumer applications

High-risk system requirements

If your AI system is classified as high-risk, you must:

Before deployment:

  • Conduct a conformity assessment
  • Establish a risk management system
  • Maintain technical documentation
  • Enable logging and traceability

During operation:

  • Ensure human oversight capability
  • Guarantee accuracy, robustness, cybersecurity
  • Register in EU database
  • Report serious incidents

For deployers (users of high-risk AI):

  • Use according to instructions
  • Ensure human oversight
  • Monitor for issues
  • Inform affected individuals

Data governance requirements:

  • Training data must be relevant, representative, free of errors
  • Must examine for possible biases
  • Must be able to demonstrate compliance

Transparency requirements:

  • Clear information about capabilities and limitations
  • Instructions for use
  • Contact information for oversight

General-Purpose AI (GPAI) rules

The Act includes special rules for foundation models like GPT-4 and Gemini.

All GPAI providers must:

  • Maintain technical documentation
  • Provide information to downstream deployers
  • Comply with copyright law (or demonstrate exceptions)
  • Publish training content summaries

“Systemic risk” GPAI (more powerful models) must also:

  • Conduct model evaluations including adversarial testing
  • Assess and mitigate systemic risks
  • Track and report serious incidents
  • Ensure adequate cybersecurity

Penalties and enforcement

Maximum fines:

  Violation                  Max fine
  ------------------------   -----------------------------
  Prohibited AI practices    €35M or 7% global turnover
  High-risk non-compliance   €15M or 3% global turnover
  Incorrect information      €7.5M or 1.5% global turnover

For comparison:

  • GDPR (General Data Protection Regulation) maximum: €20M or 4% turnover
  • EU AI Act is stricter for the worst violations

Enforcement:

  • National market surveillance authorities
  • New AI Office at EU level for GPAI
  • Complaints mechanism for affected individuals
  • Regulatory sandboxes for testing

Source: BBC News

For a company like Google (2024 revenue ~$350B), a 7% fine would be ~$24.5 billion. That gets attention.
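The fine ceilings above follow a “fixed amount or percentage of global turnover, whichever is higher” structure. A minimal sketch of that arithmetic (figures from the table above; the tier names and function are illustrative, not from the Act’s text):

```python
# Illustrative sketch of the EU AI Act fine ceilings: the maximum fine is
# the HIGHER of a fixed amount and a share of global annual turnover.
# Tier names and amounts mirror the lecture table; this is not legal advice.

FINE_TIERS = {
    "prohibited_practice":      (35_000_000, 0.07),   # €35M or 7%
    "high_risk_noncompliance":  (15_000_000, 0.03),   # €15M or 3%
    "incorrect_information":    (7_500_000, 0.015),   # €7.5M or 1.5%
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine in euros for a given violation tier."""
    fixed_cap, pct = FINE_TIERS[violation]
    return max(fixed_cap, pct * global_turnover_eur)

# Large firm: the percentage dominates (7% of €350B is about €24.5B)
print(max_fine("prohibited_practice", 350e9))

# Small firm: the fixed amount dominates (€7.5M > 1.5% of €100M)
print(max_fine("incorrect_information", 100e6))
```

For small companies the fixed amount binds; for large platforms the percentage does, which is why the turnover-based cap is what “gets attention”.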

Where would these fit? 🤔

Let’s classify these AI applications under the EU AI Act:

  1. ChatGPT used for customer service
  2. An AI system that scores job applicants
  3. Spotify’s music recommendation algorithm
  4. Facial recognition at airport security
  5. An AI that predicts which students will drop out
  6. Deepfake detection software
  7. China-style social credit scoring in the EU

Answers:

  1. Limited risk – must disclose it’s AI
  2. High risk – employment decisions
  3. Minimal risk – entertainment
  4. High risk – biometric + border control
  5. High risk – educational access
  6. Minimal risk – helps detect manipulation
  7. Prohibited – social scoring by government

The classification often depends on context and use, not just the technology itself.
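As a toy illustration of the exercise above, the tiers can be sketched as a keyword lookup. This is purely illustrative: the real Act classifies by context of use (Annex III), not by technology, and the keyword sets below are my own simplification.

```python
# Toy rule-of-thumb classifier for the EU AI Act's four risk tiers.
# Keyword sets are a rough simplification of the lecture's risk pyramid;
# real classification depends on context of use, not keywords.

PROHIBITED = {"social scoring"}
HIGH_RISK = {"employment", "hiring", "biometric", "border control",
             "education", "credit", "law enforcement",
             "critical infrastructure"}
LIMITED = {"chatbot", "deepfake generation", "emotion recognition"}

def risk_tier(use_case: str) -> str:
    """Map a use-case description to a rough EU AI Act risk tier."""
    text = use_case.lower()
    if any(k in text for k in PROHIBITED):
        return "prohibited"
    if any(k in text for k in HIGH_RISK):
        return "high risk"
    if any(k in text for k in LIMITED):
        return "limited risk"
    return "minimal risk"

print(risk_tier("AI that scores job applicants (employment)"))   # high risk
print(risk_tier("customer service chatbot"))                     # limited risk
print(risk_tier("music recommendation algorithm"))               # minimal risk
```

Note how fragile this is: “facial recognition” only matches if described as biometric, and emotion recognition flips from limited to prohibited in workplaces and schools. That fragility is exactly why the Act classifies by deployment context rather than by technique.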

The US approach 🇺🇸

No comprehensive federal AI law (yet)

The US has taken a different path from the EU:

Sectoral approach:

  • Healthcare AI: FDA regulates medical devices
  • Financial AI: SEC, CFPB oversee lending/trading
  • Employment AI: EEOC applies discrimination law
  • No single AI authority

Recent developments:

  • October 2023: Biden’s Executive Order on AI
  • July 2025: America’s AI Action Plan (Trump administration)
  • State-level initiatives (Colorado, California, Illinois)

Philosophy:

  • Innovation-friendly compared to EU
  • Voluntary commitments from companies
  • Existing laws apply to AI uses

Source: Law.com

Biden’s Executive Order (2023)

Key requirements:

  • Developers of powerful AI must share safety test results with government
  • Standards for red-teaming AI systems
  • Guidelines for watermarking AI-generated content
  • Protections against AI-enabled fraud

Focus on national security:

  • Reporting requirements for large training runs
  • Export controls on AI chips
  • Protection of critical infrastructure

Limitations:

  • Executive orders can be reversed by next president (and they were!)
  • Many provisions are voluntary or guidance
  • Congress hasn’t passed comprehensive legislation

America’s AI Action Plan (2025)

The Trump administration’s approach differs from Biden’s:

Key priorities:

  • American AI dominance over global competitors
  • Reduce regulatory barriers to development
  • Focus on national security applications
  • Energy infrastructure for AI data centres
  • Streamlined permitting for AI facilities

Changes from Biden era:

  • Less emphasis on AI safety requirements
  • More focus on competition with China
  • Voluntary industry commitments over mandates
  • Concern about “overregulation” slowing innovation

Implications:

  • US-EU regulatory divergence may increase
  • Companies face different rules in different markets
  • “Race to the bottom” concerns
  • Debate continues about appropriate balance

State-level action

With limited federal action, states are filling gaps:

Colorado AI Act (2024):

  • First comprehensive state AI law
  • Requires impact assessments for high-risk systems
  • Disclosure requirements for consumers

California (various bills):

  • Multiple bills on deepfakes, employment AI
  • Often sets national trends

Illinois:

  • AI Video Interview Act: Must disclose AI in hiring

New York City:

  • First US jurisdiction to require bias audits for hiring AI

Companies operating nationally face a patchwork of state requirements. Many prefer federal law just for consistency.

Global perspectives 🌏

China’s AI governance

China has been surprisingly active on AI regulation:

Key features:

  • Content control: AI must uphold “socialist values”
  • Registration requirements for public-facing AI
  • Restrictions on what AI can generate
  • Security assessments for new services

Contradictions:

  • Regulates facial recognition but deploys it extensively
  • Restricts AI manipulation but uses AI for surveillance
  • Rules for companies, exceptions for government
  • Innovation priorities vs content control tensions

Source: Corporate Compliance Insights

China’s approach: Regulate commercial AI carefully, but state use is largely unrestricted. Different values, different rules.

Other approaches

United Kingdom:

  • “Pro-innovation” approach
  • Sector regulators apply existing frameworks
  • No comprehensive AI law (yet)
  • AI Safety Institute for frontier models

Canada:

  • AIDA (Artificial Intelligence and Data Act) proposed
  • High-risk system requirements
  • Stalled in parliament

Brazil:

  • Comprehensive AI bill in development
  • Influenced by EU approach
  • Rights-based framework

International coordination:

  Initiative                       Focus
  ------------------------------   -------------------
  OECD AI Principles               Soft law, voluntary
  G7 Hiroshima Process             International norms
  UN AI Advisory Body              Global governance
  Council of Europe AI Convention  Human rights focus
  ISO/IEC standards                Technical standards

No global consensus on AI governance. The “Brussels Effect” (EU rules becoming global standards) may apply, as with GDPR.

Comparing approaches

  Aspect             EU                               US                 China                       UK
  ----------------   ------------------------------   ----------------   -------------------------   -------------------
  Framework          Comprehensive law                Sectoral           Multiple regulations        Guidance + sectors
  Philosophy         Risk-based, precautionary        Innovation-first   State control + innovation  Pro-innovation
  Enforcement        Dedicated AI Office              Existing agencies  Cyberspace Administration   Sector regulators
  Prohibited uses    Social scoring, some biometrics  Few explicit bans  Political content           Minimal
  GPAI rules         Yes, tiered                      Voluntary          Yes, content-focused        AI Safety Institute
  Extraterritorial   Yes                              Limited            Yes                         No

The EU is the only major jurisdiction with a comprehensive, binding law. Others rely on existing frameworks, guidance, or sector-specific rules.

What this means for practitioners 💼

Practical implications

If you’re building AI systems:

  • Know your use case classification under EU AI Act
  • Document training data and development process
  • Build in human oversight capabilities
  • Plan for audits and assessments
  • Consider compliance early, not as an afterthought

If you’re deploying AI systems:

  • Understand what system you’re using
  • Ensure appropriate human review
  • Inform affected individuals
  • Monitor for problems
  • Know your liability

If you’re affected by AI systems:

  • You may have transparency rights
  • Ask how decisions are made
  • Challenge automated decisions
  • Report problems to authorities

Compliance as competitive advantage:

Companies that build ethical AI now will be better positioned when regulations arrive. “Responsible AI” isn’t just ethics; it’s risk management.

The regulatory trajectory

Where we’re heading:

  • More jurisdictions will regulate AI
  • Convergence likely around high-risk categories
  • Interoperability challenges will persist
  • Technical standards will matter more

Open questions:

  • Can regulation keep pace with technology?
  • What about AI systems that don’t fit categories?

Certainties:

  • AI governance is here to stay
  • Companies need compliance strategies
  • International coordination will increase
  • Affected communities will demand accountability

Source: LinkedIn

Summary and takeaways 📝

Main takeaways

Why regulate?

  • Market failures: information asymmetry, externalities
  • Rights protection and human dignity
  • Collective action problems

EU AI Act

  • World’s first comprehensive AI law
  • Risk-based approach with four tiers
  • Strict requirements for high-risk systems
  • Special rules for foundation models
  • Fines up to 7% global turnover

US approach

  • Sectoral regulation, no comprehensive law
  • Executive orders and voluntary commitments
  • State-level initiatives filling gaps
  • Innovation-first philosophy

Global picture

  • No international consensus
  • EU likely to set de facto global standards
  • China: extensive but contradictory
  • UK, Canada, others developing approaches

For practitioners

  • Know your use case classification
  • Build compliance in from the start
  • Document everything
  • Plan for human oversight
  • Stay informed as rules evolve

AI regulation is a moving target. What we discussed today will evolve. The principles (transparency, accountability, human oversight) are more stable than specific rules.

… and that’s all for today! 🎉