DATASCI 185: Introduction to AI Applications

Lecture 21: AI and Wellbeing: The Attention Economy, Mental Health, and the Environment

Danilo Freire

Department of Data and Decision Sciences
Emory University

Welcome back! 😊

Recap of last class

  • Last time, we looked at AI and the labour market
  • AI is different from earlier waves: it targets cognitive tasks, not just physical ones
  • Automation displaces tasks, not whole jobs (the Acemoglu & Restrepo framework)
  • AI raises worker productivity but may compress wages for some and widen inequality
  • The graduate job market is rough right now, and AI is part of the reason
  • Today: a different angle. What does AI do to people, not jobs?
  • Three themes: the attention economy, mental health, and the environment

Source: China Daily

Lecture overview

Today’s agenda

Part 1: The attention economy

  • “Your attention is the product”
  • How recommendation algorithms work
  • Filter bubbles: what the evidence says
  • What companies are actually doing

Part 2: AI and mental health

  • The treatment gap
  • Chatbots in therapy: what the evidence says
  • Risks and hard limits
  • Augmentation, not replacement

Part 3: AI and the environment

  • Energy and water costs
  • Footprint in context
  • AI as a climate tool

We will pause for discussion after each part

There are no right answers to the questions I put on screen. I’m curious what you think!

(Sad) meme of the day

The attention economy 📱

What is the attention economy?

When a platform is free, the revenue comes from advertising, so the scarce resource being bought and sold is your attention. "Your attention is the product."

Source: The Atlantic

Does the incentive structure produce good outcomes regardless of intent?

How recommendation algorithms work

The loop:

  1. You watch, like, or share something
  2. System creates an embedding of “you”: a vector of your behaviour (remember lecture 06?)
  3. Compares your vector to what similar users watched next
  4. Recommends something similar to what they watched
  5. You watch → loop repeats, embedding updates (see the code sketch below)
  • Engagement = watch time, clicks, shares
  • TikTok’s For You feed reportedly updates your profile within the first few seconds of a video (WSJ investigation, 2021)
  • YouTube autoplay: every video leads into another by design

Source: Data Science Dojo
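To make the loop concrete, here is a minimal Python sketch of embedding-based recommendation. It is illustrative only: the item embeddings are random stand-ins (real systems learn them from massive interaction data), and the update rule and similarity scoring are simplified versions of what production systems do.

```python
# Minimal, illustrative embedding-based recommender.
# Embeddings are random stand-ins; real systems learn them from data.
import numpy as np

rng = np.random.default_rng(42)

n_items, dim = 1_000, 32
item_emb = rng.normal(size=(n_items, dim))
item_emb /= np.linalg.norm(item_emb, axis=1, keepdims=True)  # unit vectors

def update_user(user_vec, item_id, lr=0.2):
    """Steps 2/5: nudge the user embedding toward what they just engaged with."""
    v = (1 - lr) * user_vec + lr * item_emb[item_id]
    return v / np.linalg.norm(v)

def recommend(user_vec, seen, k=3):
    """Steps 3-4: rank items by cosine similarity to the user embedding."""
    scores = item_emb @ user_vec      # cosine similarity (all unit vectors)
    scores[list(seen)] = -np.inf      # don't repeat what was already watched
    return np.argsort(scores)[::-1][:k]

user = rng.normal(size=dim)
user /= np.linalg.norm(user)

seen = []
for step in range(3):                 # the loop repeats with every watch
    top = recommend(user, seen)
    watched = int(top[0])             # assume the user watches the top pick
    seen.append(watched)
    user = update_user(user, watched)
    print(step, watched)
```

Run it and the recommendations drift toward whatever the user engaged with first: the feedback loop in a few lines.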

Every swipe teaches the algorithm something about you

These systems have no concept of “healthy” consumption. They optimise the objective they are given, nothing more.

Filter bubbles: real or exaggerated? 🫧

  • Eli Pariser coined “filter bubble” in 2011
  • His argument (which many people believe) is that algorithms seal you in a cocoon of confirming content
  • However, two large experiments say that the effect is probably overstated
  • Nyhan et al. (2023, Nature): reduced like-minded content on Facebook during the 2020 US election, polarisation barely changed
  • Guess et al. (2023, Science): algorithmic vs. chronological feeds → little difference in political attitudes
  • So filter bubbles may not drive polarisation the way we feared (phew! 😉)
  • But that’s one outcome. Effects on anxiety, body image, attention spans, and sleep are separate questions, with worse answers. Let’s talk about those now!

Good news on political polarisation. Less good news on mental health and sleep. Don’t confuse the two.

Engagement vs wellbeing

  • YouTube has gradually shifted away from watch time as a sole metric, adding satisfaction surveys and other signals
  • It took over a decade to move beyond pure engagement metrics
  • The “outrage drives clicks” problem (Brady et al., 2017, PNAS): content that makes you angry, anxious, or fearful gets shared more and watched longer
  • The algorithm learns this. Surfaces more of it
  • The Facebook Files (WSJ, 2021): internal research showed Instagram made body image worse for teenage girls
  • Facebook had the data, but kept the feature because fixing it would have cost engagement
  • The problem is hard to solve, as company revenue and user wellbeing point in opposite directions

Companies have a legal duty to shareholders, not to users. When engagement and wellbeing conflict, engagement wins.
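A toy illustration of how much the objective matters. Both rankings see the same two items; only the scoring rule changes. Every number here is invented, and "regret" is a hypothetical survey signal, loosely in the spirit of the satisfaction surveys mentioned above.

```python
# Same items, two objectives. All numbers are invented for illustration.
items = [
    {"title": "outrage clip", "watch_time": 9.0, "regret": 0.8},
    {"title": "how-to video", "watch_time": 5.0, "regret": 0.1},
]

by_engagement = sorted(items, key=lambda i: -i["watch_time"])
by_wellbeing = sorted(items, key=lambda i: -(i["watch_time"] - 10 * i["regret"]))

print([i["title"] for i in by_engagement])  # ['outrage clip', 'how-to video']
print([i["title"] for i in by_wellbeing])   # ['how-to video', 'outrage clip']
```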

Teens and screens: what the data says

How large is the effect?

Factor                 Association with wellbeing
Screen time            ~0.4% of variance
Wearing glasses        ~3x screen time
Sleep quality          ~44x screen time
Family relationships   Much larger still

Small effects can still matter at population scale. But they should not be presented as if they explain the whole story.

What companies are actually doing

Platform    Change                                  Scope
Instagram   “Recommended content” toggle            Opt-in
TikTok      60-minute daily limit for under-18s     Bypassable
YouTube     “Take a break” / “Bedtime” reminders    Opt-in
Meta        Teen Accounts with safer defaults       Under-18s
  • Most are opt-in or reach a minority of users. The ad model is unchanged
  • Also handy to point at when regulators come knocking
  • US Surgeon General (2023): issued an advisory on social media and youth mental health; called for warning labels in 2024
  • Amy Orben (Cambridge): transparency about why content is recommended changes behaviour more than usage caps
  • Knowing you’re being nudged turns out to matter

Source: The Verge

Genuine concern or good PR? Probably a bit of both. The test is whether these survive when regulators look away.

China’s experiment: opting out of the algorithm

  • Since March 2022, Chinese users can legally turn off recommendation algorithms on all platforms
  • The Internet Information Service Algorithmic Recommendation Management Provisions requires every app to offer a non-personalised feed
  • Douyin (Chinese TikTok), Baidu, Taobao, and WeChat all added one-tap opt-out buttons
  • Early research suggests users appreciated the transparency, though long-term effects on usage and wellbeing are still being studied
  • The EU Digital Services Act (Article 38) now requires large platforms to offer a non-profiling feed option in the EU too
  • China requires algorithmic transparency for users while maintaining state censorship of content itself. Both things are true at the same time

What the regulation requires:

  • Users must be told they are being profiled
  • A one-tap opt-out of personalised recommendations
  • Algorithms must not exploit addictive behaviours
  • Platforms must label AI-generated content
  • Special protections for minors

A real-world experiment in what happens when users can choose whether to be recommended to. The results are still coming in.

Discussion: designing for wellbeing 🔍

If you were designing a recommendation algorithm and your bonus depended on user wellbeing instead of watch time, what would you change?

  • What metric would you optimise for? How would you even measure “wellbeing”?
  • Would your platform still be profitable?
  • Would users actually prefer it, or would they migrate to a competitor that gives them the dopamine hits?

AI and mental healthcare 🧠

The treatment gap

  • WHO estimate: 75% of people with mental health conditions in low- and middle-income countries receive no treatment at all
  • Even in wealthy countries, long waiting lists and workforce shortages mean many people go untreated
  • This is the context for AI mental health tools
  • If the alternative is literally nothing, the calculation looks different than if you’re imagining replacing well-funded human care

“Better than nothing” is a low bar. For millions of people, it’s also the only bar that exists.

AI as a therapeutic tool

  • Woebot (2017): first major CBT-based chatbot, 2M+ users. Follows clinical CBT protocols, not a free-form LLM
  • Wysa: similar model, piloted by the UK NHS in 31 trusts
  • Replika: not therapy, but an “AI companion.” People form real emotional attachments (more on the risks of this shortly)
  • Best recent evidence: Habicht et al. (2025, JMIR): Generative AI used alongside group therapy for real patients
  • Improvements in clinical outcomes and patient engagement
  • But AI was supplementing human therapists, not replacing them

In the Habicht et al. study, AI augmented human therapists. That’s a very different claim from AI replacing them.

What the research shows

  • Opel and Breakspear (Science, 2026), two clinical neuroscientists, say AI “may reduce care inequities when deployed responsibly”
  • “May” and “responsibly” are doing a lot of work here 😅
  • What the evidence contains:
    • Small RCTs: short-term symptom relief for mild-to-moderate anxiety and depression
    • Very few studies run past 8–12 weeks. Long-term effects? Unknown
    • High dropout: many people stop using these apps quickly
    • Publication bias: positive results get published; negative ones mostly don’t
  • Real signals that AI can help some people with some conditions, short-term. But the evidence base is much thinner than the marketing suggests

Evidence by level:

Level              Status
Long-term RCTs     Almost none
Short-term RCTs    Mixed, small samples
Observational      Positive signals
User self-reports  Generally positive
Marketing claims   Very positive

The gap between marketing and clinical evidence is wide.

Short-term relief is real. Long-term safety and efficacy? We don’t know yet.

The risks

These are documented, not hypothetical:

  • Dependency. When Replika changed features in 2023, users reported grief comparable to losing a real relationship. What happens when an app you’re attached to gets discontinued?
  • Data privacy. Therapy touches the most private parts of someone’s life. Chatbot data is rarely protected like clinical notes. Where does it go?
  • Harmful responses. Moore et al. (2025): LLMs expressed stigmatising attitudes toward mental health conditions and gave harmful crisis responses
  • No crisis escalation. A chatbot cannot call an ambulance, contact a GP, or do anything in the physical world
  • Regulatory gap. Most mental health chatbots are classified as apps, not medical devices. No clinical regulation in the US, UK, or most of the EU

But also consider:

  • Therapy with a human isn’t fully private either: notes, supervision, insurance coding
  • “Harmful responses from AI” vs. zero access to care for 75% of the world
  • AI tools may give governments and employers an excuse to avoid funding real care

Being cautious about AI therapy doesn’t mean defending the status quo. The status quo is also harmful.

What LLMs cannot do

LLMs cannot:

  • Hold reliable long-term memory between sessions. Some chatbots (ChatGPT, Claude) now offer memory features, but these are limited and not designed for clinical continuity
  • Verify what you tell them. A therapist builds context over months
  • Provide legally enforceable confidentiality
  • Diagnose or prescribe medication
  • Read body language, tone, or facial expression

LLMs can:

  • Be available at 3am
  • Be patient. Not judge. Not get tired
  • Deliver structured info on anxiety, depression, coping strategies
  • Scale to millions at near-zero marginal cost

These are different capabilities, perhaps complementary, but not perfect substitutes

Questions to think about 🤔

It’s 3am, you can’t sleep, you feel anxious. Would you talk to a chatbot? What would it need to do for you to actually trust it?

If AI therapy lets a government say “we’ve addressed mental health” without funding real services, is that a net positive or negative?

Augmentation, not replacement

The most credible use cases right now:

  • First contact: AI handles initial triage, reduces stigma, connects people to professionals
  • Between-session support: coping strategies and mood tracking between weekly appointments (the Habicht et al. model)
  • Stepped care: AI for mild symptoms, human professionals for moderate-to-severe
  • Admin burden: AI writes session notes, handles scheduling, finds local services. Frees therapist time for actual care
  • OpenAI updated ChatGPT in 2025: added safety messaging, crisis hotline signposting, trauma-informed language. An acknowledgement that people already use the product this way
  • Who decides what “responsible” looks like?

The hybrid care model:

Mild symptoms
  → AI triage + self-help tools

Moderate symptoms
  → AI support +
    case management referral

Severe symptoms
  → Human professional care,
    AI for admin support only
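
Expressed as code, the hybrid model is just a routing rule. A hypothetical sketch (labels and pathways mirror the diagram above; this is not a clinical tool, and the severity assessment itself is the hard, human part):

```python
# Hypothetical routing rule mirroring the stepped-care diagram above.
# Not a clinical tool; assessing severity is the hard, human part.
def route_care(severity: str) -> str:
    pathways = {
        "mild": "AI triage + self-help tools",
        "moderate": "AI support + case management referral",
        "severe": "human professional care (AI for admin support only)",
    }
    if severity not in pathways:
        raise ValueError(f"unknown severity: {severity!r}")
    return pathways[severity]

print(route_care("moderate"))  # AI support + case management referral
```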

The technology is moving faster than the evidence, the regulation, and the training of professionals who need to understand it. That gap is the problem.

AI and the environment 🌍

The energy cost of training AI

  • Training a large model costs serious energy. Companies rarely disclose the numbers
  • Patterson et al. (2021): training GPT-3 produced ~552 tonnes of CO₂. That is equivalent to:
    • 368 return flights from London to New York
    • 120 cars driven for a full year
    • The annual carbon footprint of 61 Americans
  • GPT-4? Nobody knows. Estimates range from 5x to 50x GPT-3
  • But training is a one-off cost
  • Running billions of queries (inference) is ongoing, and at scale it probably dwarfs training
  • Microsoft’s carbon emissions rose 29% between 2020 and 2023, partly driven by AI
  • Google’s 2024 Environmental Report: emissions up 48% since 2019, largely from data centres

Training is a one-off. Inference (billions of queries, every day) is the number that grows with adoption.
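A back-of-envelope check on that 552-tonne figure, using the ~1,287 MWh of training energy Patterson et al. (2021) report and the grid carbon intensity their numbers imply. A sketch of the arithmetic, not an official methodology:

```python
# Back-of-envelope: tonnes CO2e = MWh × (kg CO2e per kWh), because the
# ×1,000 (MWh→kWh) and ÷1,000 (kg→tonnes) cancel out. Figures as reported
# in Patterson et al. (2021); real grid intensity varies by region and year.
training_energy_mwh = 1_287        # reported energy to train GPT-3
grid_intensity_kg_per_kwh = 0.429  # implied average grid intensity

tonnes_co2e = training_energy_mwh * grid_intensity_kg_per_kwh
print(f"~{tonnes_co2e:.0f} tonnes CO2e")  # ~552, the figure quoted above
```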

Water, hardware, and hidden costs

  • Energy gets the headlines. Water gets ignored
  • Li et al. (2023), “Making AI Less Thirsty”: training GPT-3 used ~700,000 litres of freshwater for cooling
  • A typical ChatGPT conversation: about 500ml, roughly a bottle of water
  • Data centres get built where land and electricity are cheap, often in water-stressed regions: Phoenix, Las Vegas, northern Chile
  • GPUs last 3–5 years. Manufacturing needs lithium, cobalt, and rare earth metals, each with its own environmental cost
  • None of this shows up in the headline CO₂ figures

Mining, manufacturing, cooling, disposal: the full supply chain is almost never counted in the numbers you read.

The footprint in context

Current estimates (IEA, 2024):

Sector            Share
Aviation          ~2.5% of global CO₂
All data centres  ~1–1.5% of electricity
AI specifically   ~0.5–1% of electricity
Global internet   ~3–4%

  • Right now, AI’s direct climate footprint is real but not huge compared to aviation or steel
  • The worry is growth rate: AI electricity use grew ~60% year-on-year between 2022 and 2024
  • IEA projections: data centre electricity demand could double to ~945 TWh by 2030, roughly equivalent to Japan’s total consumption
  • The Jevons paradox: more efficient models → more usage → efficiency gains get cancelled out
  • GPT-4 is far more efficient per query than GPT-3. But there are vastly more queries
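The Jevons arithmetic in miniature, with invented numbers: a 5x per-query efficiency gain is swamped by 10x growth in queries.

```python
# Jevons paradox, toy numbers: the model gets 5x more efficient per query,
# but usage grows 10x, so total energy still doubles.
old_energy_per_query, old_queries = 1.0, 1e9
new_energy_per_query = old_energy_per_query / 5   # 5x efficiency gain
new_queries = old_queries * 10                    # 10x more queries

ratio = (new_energy_per_query * new_queries) / (old_energy_per_query * old_queries)
print(ratio)  # 2.0 -> total energy doubles anyway
```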

Adding one Sweden or one Germany per year

Source: IEA (2024)

Small today, growing fast. In 10 years, this picture may look very different.

AI as a climate tool

The same technology that burns energy may also help cut emissions: better grid forecasting, smarter data-centre cooling (DeepMind reported cutting Google’s data-centre cooling energy by ~40% in 2016), sharper climate models, and materials discovery for batteries and solar.

Net impact? Unclear. The evidence simply isn’t there yet.

Discussion: the environmental trade-off 🔍

You run an AI startup. A journalist asks about your carbon footprint. What do you disclose, and what do you leave out? Why?

  • Do you report training costs, inference costs, or both?
  • Do you compare your footprint to other industries (aviation, streaming)? Is that honest or deflecting?
  • Would full transparency help or hurt your business?

Summary 📚

Main takeaways

  • Attention economy: recommendation systems optimise engagement, not wellbeing. Filter bubble fears are overstated; mental health effects are not

  • Mental health: the treatment gap is massive. AI shows short-term promise but limited long-term evidence. Augmentation over replacement

  • Environment: training costs are real; inference at scale matters more. AI might cut emissions elsewhere, but net impact is unclear

  • Across all three: incentive structures matter more than intentions. The financial model rarely rewards user wellbeing

  • No tidy answers: you will make decisions about these systems throughout your careers. Being honest about what we don’t know is more useful than pretending we do

…and that’s all for today! 🎉