Abstract

Internet use and apps can create a number of risks for people requiring cognitive accessibility. This module covers safety issues for these users, including cybercrime, mental health, wellbeing, privacy, and more.

This module:

This module is intended for:

This document is part of a set of related informative publications from the Cognitive and Learning Disabilities Accessibility Task Force (COGA TF), a joint task force of the Accessible Platform Architectures Working Group (APA WG) and the Accessibility Guidelines Working Group (AG WG) of the Web Accessibility Initiative.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

This is an early draft. The Task Force intends to add more research and improved discussion.

Feedback on any aspect of the document is accepted. For this publication, the Working Groups particularly seek feedback on the following questions:

To comment, file an issue in the W3C coga GitHub repository. If this is not feasible, send email to public-coga-comments@w3.org (comment archive). Comments are requested by 16 February 2026. In-progress updates to the document may be viewed in the publicly visible editors' draft.

This document was published by the Cognitive and Learning Disabilities Accessibility Task Force, the Accessible Platform Architectures Working Group, and the Accessibility Guidelines Working Group as an Editor's Draft.

Comments regarding this document are welcome. Please send them to public-coga-comments@w3.org (archives).

Publication as an Editor's Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by groups operating under the W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures (Cognitive and Learning Disabilities Accessibility Task Force), a public list of any patent disclosures (Accessible Platform Architectures Working Group), and a public list of any patent disclosures (Accessibility Guidelines Working Group) made in connection with the deliverables of each group; these pages also include instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 1 March 2019 W3C Process Document.

Introduction

Disabilities that may require cognitive accessibility support include:

Examples of specific disabilities include attention deficit hyperactivity disorder (ADHD), autism spectrum disorder, dyslexia and dyscalculia, mild cognitive impairment (MCI), Down syndrome, aphasia, and others.


People with disabilities that require cognitive accessibility support can benefit from using the internet. However, there are risks: all users may experience fraud, terrorism, extortion, harassment, and hacking. These activities are collectively referred to as cybercrime [RM-Hökby1].

Examples of cybercrime include:

People with cognitive and learning disabilities are at higher risk and may be unable to take the recommended safety precautions.

Not all activities that affect people's safety are illegal. Personal information can be stored and sold to third parties, or stolen. This information can be used to hurt the user, including through financial or emotional exploitation, such as exploiting the user for marketing or to increase click-throughs.

Fear of these dangers can prevent people from using apps and websites that may be valuable to them. For example, mental health apps are a potentially important part of health care. However, research shows that many people do not use mental health services via the web or apps due to concerns about whether mental health information is kept private and how their data is used [RM-Lipschitz1].

Other research shows that people may avoid apps whose complexity and cognitive load frustrate them. This deprives users of support and services. It also means fewer people with cognitive and learning disabilities and mental health conditions appear in the data surrounding these services, leaving many groups invisible or underrepresented in data-driven decisions.

In this issue paper, we lay out potential safety issues for people with cognitive and learning disabilities. Proposed solutions include integrating processes that support safety and user control to ensure mental health, wellbeing, supported consent and decision making, privacy and more. Such a solution could be part of its own certification process.

Safety Issues for People with Cognitive Disabilities

Cybercrime

Editor’s Note: This section may be rewritten to focus on issues on the development end that lead to risks and vulnerabilities.

Hackers and Identity Theft

People with cognitive and learning disabilities may not be able to easily use some of the common security measures used on the Web such as two-factor authentication and complex, unique passwords for each login [RM-Ophoff1].

Extra security precautions to increase password strength often make this group more vulnerable to "human error". This can encourage risky behavior such as keeping a list of passwords on a desk that can be viewed by anyone with physical access to the room. Also, when users ask for assistance to complete these security procedures, they can put themselves at high risk of being abused by those they trust to help.

Con Artists and Deception

Con artists use deception to gain trust. This enables them to negatively influence the behavior of vulnerable individuals. People with cognitive and learning disabilities who experience difficulty understanding social cues may fail to accurately identify a potentially harmful situation. Those who have difficulty understanding others may be more trusting and more easily believe false information. Also, people with disabilities affecting reasoning, attention, or memory may be similarly vulnerable, as they may struggle to validate the information presented to them.

Sexual Predators and Physical Safety

People with cognitive and learning disabilities may be more at risk of being a victim of a sexual crime. Some potential reasons for this include:

Data, Algorithms, and Privacy

Personalization

Personalization is important, especially as a way to avoid conflict when meeting varying user needs. However, there is a significant risk that if poorly implemented, user information and vulnerabilities can be exposed. This puts the most vulnerable users of this population at the greatest risk.

Algorithms and Privacy

Algorithms are computer programs that try to solve a specific problem or achieve a goal. (Goals are set by the programmers and the companies they work for.) Computer algorithms are increasingly responsible for automated decision-making. These decisions use information gathered by tracking our everyday behaviors online and via apps. This data is often sold to or used by companies, including organizations providing critical services such as employment, health care, and credit.

This creates multiple problems. Many users know that they are at risk of scams and data misuse, and are therefore afraid of using digital services that they may need. All users in a recent study had concerns about privacy and their data being inappropriately shared [RM-Rotondi1].

The misuse of algorithms has the potential to cause financial harm to people with cognitive and learning disabilities and mental health conditions. For example, a patent could be filed for an algorithm that helps financial institutions analyze a person's social network and use that data when deciding whether to grant a loan application [RM-O'Neil1].

People with cognitive and learning disabilities may be vulnerable to risks related to algorithms and big data. This includes errors and biases in the algorithm and problems with privacy [RM-Venkit1] [RM-Moura1] [RM-Monteith1].

In another use case, Internet of things (IoT) devices are increasingly involved in providing medical care, including support for mental illness and cognitive support. Massive amounts of data are collected from as many digital activities as possible. This erodes people's knowledge of what is public, as much of this data is collected in homes and other private settings. The data is then used to drive decisions such as loans, insurance coverage, and more. Privacy becomes even more important for individuals with mental health diagnoses and cognitive disabilities, especially where there is prejudice and stigma [RM-Monteith1].

For example, people who are victims of domestic violence may avoid putting their picture online because they do not want to be easily found, but automated programs for job applications may be biased against profiles without a picture.

Social media and Mental Health

This section focuses on research from the Mental Health subgroup. More research is needed on social media and cognitive and learning disabilities.

Prolonged time spent on social media platforms appears to contribute to increased risk for a variety of mental health symptoms and poor wellbeing. This may partly be driven by the detrimental effects of screen time generally on mental health. Research suggests social media usage can cause increased severity of anxiety and depressive symptoms [RM-Hardy1].

Recent studies have reported negative effects of social media use on mental health of young people particularly. Negative outcomes include social comparison pressure with others (sometimes called social media envy), rumination, depressive symptoms, anxiety, greater feeling of social isolation after being rejected by others on social media, and body image issues (especially in teens) [RM-Ang1][RM-Karim1].

For preteen girls, time spent on social networking sites showed stronger correlations with body image concern than overall Internet exposure did. This may lead to reduced self-esteem, dieting, and eating disorders. With overexposure to social media, preteens are highly likely to encounter material that they neither fully understand nor are equipped to evaluate sufficiently critically. Similarly, in another study with adults over 30, social media use seemed to correlate with increased anxiety [RM-Tiggemann1] [RM-Hardy1].

One study conducted with nurses argues that social networking site addiction stimulates stressors such as envy, social anxiety, and rumination. These stressors can lead to task distraction. A preference for online social interaction was also found to be linked to social anxiety and loneliness [RM-Majid1][RM-Caplan1].

Social networking sites have also been associated with a likelihood of diminished future well-being and reduced self-esteem. Also, people with signs of depression linked to social networking sites spend even more time on them, leading to a vicious circle of depression [RM-Shakya1].

On the other hand, social media does not seem to have a consistent effect on mental health, and many factors can come into play. The relationship between social media and mental well-being may be affected by age, and some studies have shown the opposite effect for other groups. For adults aged 18–29, social media use correlates with decreased anxiety. This is a counterpoint: sometimes social media reduces the incidence of anxiety, especially for younger people [RM-Hardy1].

Further, social media has become an important part of the lives of many people with mental health disabilities. Many of these individuals use social media to share their lived experiences with mental illness, to seek support from others, and to search for information about treatment recommendations, accessing mental health services, and coping with symptoms. Overall, although time spent on the Internet was found to be negatively associated with mental health, some activities, such as school work, were positively associated. Additionally, studies have shown that social media helped teens cope with isolation during COVID-19 lockdowns [RM-Naslund1][RM-Hökby1][RM-Cauberghe1].

Online social participation can alleviate the negative effects of pain on mental well-being. Social media may serve as a source of connection for people who are not able to participate in face-to-face interactions as much as they would like due to pain, disability, or age. Social media can support people with social anxiety and act as a safe forum to increase human interaction and make friendships (assuming websites support cognitive accessibility and social anxiety) [RM-Rotondi2].

It is also worth noting that many of the studies reviewed were scientifically weak. For example, studies often have small samples, effects that are not statistically significant, or show correlation rather than causation. A correlation between social media use and mental health issues may arise because people turn to social media to cope with existing mental health issues. More robust research is needed in this area in order to develop solutions that enable use with less risk [RM-Odgers1].

Longevity of posts

Further, people have reported (in response to this paper) that they are afraid of other long-term consequences of posting online. For example, if they post about their disability or mental health issues, or if their posts are considered inappropriate or imply health issues, they worry that the posts can be used against them or to harm them even after recovery has started.

Algorithms, AI (Artificial Intelligence) and Curated Content

Editor’s Note: This section focuses on research from the Mental Health subgroup. More research is needed on AI and curated content and cognitive and learning disability.

Algorithms and artificial intelligence can make all the above issues worse.

Artificial intelligence (AI) and cognitive computing can empower algorithms by learning, including learning an individual user's behaviors. For example, an algorithm may have a goal to increase the time the user spends on a website or application. AI can then continuously adapt the algorithm to be more effective for each individual user. In our example, the AI learns what makes each user more likely to click on a link. Unfortunately, this is often anger, fear, and other negative emotions. This may have two main effects on the mental health of the user:

Curated content

Algorithms often adapt and tailor content to the individual. Often the algorithm changes the content so that the user stays on the site longer and is more likely to click on links and advertisements. The effect is curated content. Curated content also creates “echo chambers”: environments where users disproportionately see content they already agree with. The user is unlikely to be exposed to different points of view.
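The engagement-driven curation loop described above can be illustrated with a toy model. This is a deliberately simplified sketch (not any real platform's algorithm): items are ranked by how often the user has already clicked items on the same topic, so the feed narrows toward past engagement.

```python
# Toy sketch of an engagement-maximizing recommender (illustrative only).
from collections import Counter

def recommend(click_history, catalog, k=3):
    """Rank catalog items by how often the user clicked items on the same topic."""
    topic_counts = Counter(item["topic"] for item in click_history)
    # Stable sort: topics the user never clicked keep their original order.
    ranked = sorted(catalog, key=lambda item: topic_counts[item["topic"]], reverse=True)
    return ranked[:k]

catalog = [
    {"id": 1, "topic": "diet"},
    {"id": 2, "topic": "sports"},
    {"id": 3, "topic": "diet"},
    {"id": 4, "topic": "news"},
    {"id": 5, "topic": "diet"},
]

# After only two clicks on diet content, the top of the feed is all diet items.
clicks = [catalog[0], catalog[2]]
feed = recommend(clicks, catalog)
print([item["topic"] for item in feed])  # ['diet', 'diet', 'diet']
```

Each further click on the recommended items skews `topic_counts` even more; that feedback loop is the mechanism behind an echo chamber.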

Users may not be aware of the goals, settings, and biases of the algorithms, making it harder for them to realize how isolated their feed has become. Issues that can get reinforced include unhealthy eating practices, eating disorders, and antisocial behavior. Anxieties are also often reinforced by exposure to many voices expressing similar views, such as conspiracy theorists. While this also occurs offline, on the web it happens faster and to a more extreme extent [RM-Tiggemann1].

When groups are exposed to large amounts of content that disproportionately represents one point of view, and that, for the sake of clicks, encourages negative reactions to people outside the group, it becomes more difficult for people to live together and respect differences.

Spending and Financial Commitments

Conditions that impact emotional regulation may also affect how people act online, and make their reactions more extreme [RM-Yu1].

For example, mental health conditions may impact spending. Further, frustration from difficult interfaces can cause stress and impair decision-making. When under stress, people can become more impulsive: research on decision-making shows stress favors fast, intuitive decision-making over slower, more logical or analytical processes [RC-M1].

For example, the internet may enable a person with bipolar disorder to make substantial financial commitments whilst in a manic phase, often with much greater ease than in the physical world.

Further, some learning and cognitive disabilities impact spending habits and money management. For example, a person with dyscalculia may not notice a steep price difference between similar options. In the real world, they may use cash when shopping to limit the effect.

Gatekeeping and Restricting Digital Access

Risks to users seem to increase with factors such as lower levels of offline sociability; loneliness, anxiety, and depression; poorer insight and judgment; reduced ability to detect deception online; and reduced experience and life opportunities. Perceived high online risk may lead to gatekeeping restrictions and controlled digital access. Solutions include supportive communities and supported decision-making (see the issue paper on supported decision-making).

However, solution providers must be careful to reinforce the user's rights. Restriction may affect online self-determination, participation, and development by people with intellectual disabilities and others. There is also a significant risk of solutions being misused to control and oppress marginalized or vulnerable people. Providers should be very cautious about gatekeeping in locations where human rights are not well established and enforced for everyone. This paper only advocates providing tools of autonomy that cannot easily be used as tools of repression, marginalisation or isolation. Solutions must be careful not to unnecessarily restrict access and autonomy for vulnerable groups without true consent [RM-Chadwick1].

Internet or Gaming Addiction and Overuse

Overuse of some internet features, such as gaming, can sometimes reach the level of addiction. Internet addiction disorder (IAD) and Internet gaming disorder (IGD) describe conditions in which people develop a problematic, compulsive use of the internet (mainly gaming) that causes significant impairment in different aspects of their life over a prolonged period of time. This includes neglecting other important areas of life (work, school, family) or lying to hide the extent of internet use. Note that while not everyone agrees that this is a clinical diagnosis, the effect is well established [RC-IAD1] [RM-Kim1].

There is an association between Internet or gaming addiction and serious psychiatric symptoms such as somatization, sensitivity, depression, anxiety, aggression, phobias, and psychosis. This was after controlling for age, sex, education level, marital status, and type of universities [RM-Hökby1].

In other research, 56% (255/457) of people with schizophrenia or schizoaffective disorder using digital technology reported negative feelings "often" or "very often". These included feelings of being unable to stop (27%, 123/457), frustration (25%, 114/457), paranoia (24%, 110/457), worry (20%, 91/457), sadness (20%, 91/457), anger (19%, 87/457), mania (16%, 73/457), and envy (16%, 73/457) [RM-Gay1].

Poor Representation in Data Sets

More and more decision-making processes are based on data gathered from phone and app use, such as smart city infrastructure and traffic allocations. This makes many groups under-represented, or even entirely excluded, in data-driven decisions. For example, decisions on parking spaces may be based on data from a parking app. People with disabilities may find the app confusing or unusable, or may pay with cash. The data collected by the parking app then does not include their patterns or needs, and they are left out of the data: when they go to events, their parking needs are not monitored. This can also occur for other public facilities as decision making becomes more data driven. Vertical and horizontal analysis of data needs to be performed to see which groups are missing from the data set, including the different sub-groups of learning, cognitive, physical, and mental health related disabilities.
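One hedged way to sketch such a representation analysis (thresholds, group names, and numbers below are illustrative assumptions, not from the paper): compare each group's share of the collected dataset with its share of the population, and flag groups that fall well below.

```python
# Illustrative sketch: flag groups under-represented in a dataset
# relative to their (assumed) population share.

def underrepresented(dataset_counts, population_shares, ratio_threshold=0.5):
    """Return groups whose share of the data is well below their population share."""
    total = sum(dataset_counts.values())
    flagged = []
    for group, pop_share in population_shares.items():
        data_share = dataset_counts.get(group, 0) / total
        if data_share < ratio_threshold * pop_share:
            flagged.append(group)
    return flagged

# Hypothetical parking-app example: records collected per group vs.
# assumed population shares of those groups.
counts = {"no_disability": 950, "cognitive_disability": 10, "mobility_disability": 40}
shares = {"no_disability": 0.85, "cognitive_disability": 0.08, "mobility_disability": 0.07}
print(underrepresented(counts, shares))  # ['cognitive_disability']
```

A real analysis would of course need reliable population baselines for each sub-group, which is itself difficult when those groups are missing from the data.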

Additionally, as discussed above, many people do not use apps for mental health services and cognitive support due to concerns about how their data is used and whether mental health information will be kept private. Other reasons include the complexity and frustration experienced when these apps are not designed with cognitive, learning, and emotional disabilities in mind [RM-Lipschitz1].

User Story and User Needs

This list of user needs is not complete. We are also seeking feedback on the format for presenting user needs.

Related Persona: Gopal, Kwame, Tal

Proposed Solutions

Safety should be a priority when making content accessible for people with cognitive and learning disabilities. All user information must be kept safe, to the fullest extent possible. Any clues that the user has cognitive disabilities, such as a request for a simplified version, should be protected information.

Personalization systems should be designed so that any information implying vulnerabilities stays on the user's device and is kept secure.
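As a sketch of this principle (all names and fields here are hypothetical, not from any specification): sensitive profile fields stay on the device, and only generic rendering hints are included in what leaves it.

```python
# Illustrative sketch only: keep vulnerability-implying personalization
# data on-device; send the server only generic rendering hints.

SENSITIVE_KEYS = {"diagnosis", "support_needs", "assistive_tech"}

def build_request(url, local_profile):
    """Strip vulnerability-implying fields before anything leaves the device."""
    safe_hints = {k: v for k, v in local_profile.items() if k not in SENSITIVE_KEYS}
    return {"url": url, "hints": safe_hints}

# Stored locally on the device, never uploaded as-is.
local_profile = {
    "diagnosis": "dyscalculia",   # sensitive: stays on device
    "text_size": "large",         # generic rendering hint
    "simplified_layout": True,    # generic rendering hint
}

request = build_request("https://example.com/page", local_profile)
print(request["hints"])  # {'text_size': 'large', 'simplified_layout': True}
```

The server sees only that some user wants large text and a simplified layout; the device applies the rest of the personalization itself.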

Technical designs that may support privacy include:

Giving the user control

Use of data

Users should know when data is collected and how that data will be used. This requires testing that different groups of users requiring cognitive accessibility understand the privacy and data terms, including what data may be available or stored, how the data will be used and who will have access to it. It should be simple for users to see their data and make corrections when they discover errors [RM-O'Neil1].

Settings and sources

Users should have more control over their personal data and the kind of content they are shown on social media [RM-Bernard1].

Users should be able to know the source of their content. Having a bias ranking on different sources may also be helpful. This will allow users to notice the selection bias. This is especially important where a user is unlikely to be exposed to different points of view.

Also, let the user know the settings and the potential biases of the algorithm. This can help the user understand how the content they see may be affected. Give examples to help the user understand this effect.

Ease of personalisation

Protecting one's data is often a site-by-site issue. Users should be able to set data protections themselves. This requires testing that different groups of users requiring cognitive accessibility understand the risks of different settings, and can change the settings to what they want.

Users should be able to easily cancel data storage in minimal steps. This should be among the first options presented about data; users should not have to click or scroll to find it.

Example
We would like to use information about you for a few different purposes. Choose from the options below.

Option 1: Do not store any data about me! (The site might work less well.)

Option 2: Only use data that is used by the site to function well.

Option 3: I want to choose different usages.

Option 4: I am OK with all uses of my data in your standard data usage. (See our data usages.)

Supported decision making

Supported decision making (SDM) refers to functions that help people with disabilities make decisions as independently as possible whilst staying safe. This includes choosing helpers to help them make choices. See links including:

The helpers can be friends, family members, or professionals. Supported decision making helps people avoid a formal guardian. SDM needs to adapt to the changing needs of the individual. Please see the solutions suggested in the supported decision making issue paper (see supported decision making issue paper - temporary link).

Social Media and Algorithm Driven Content

For social media, provide a graded range of solutions including:

A Process for Safety and Wellbeing

This paper recommends establishing a process that promotes wellbeing of the users. Such a process could include:

Hackers

Security should be strong and easily used by those with cognitive disabilities, such as a biometrics option. For a full discussion see the issue paper on security.

Sexual Predators and Con Artists

Naming Applications and Services

Apps and services should be careful about including disabilities (cognitive, emotional, and learning impairments) in their names. Consider naming after the support provided rather than disability labels. For example, an app may be called “shopping support,” with a description noting that it may be useful for people with dyscalculia. With this name, people do not have to know they have dyscalculia to know they need support with numbers. In addition, it is less obvious to others that they have dyscalculia.

Research has suggested that apps with titles related to anxiety disorders and symptoms have lower adoption and fewer reviews than others.

Anxiety apps with titles related to mindfulness activities have more installs, more reviews, and higher ratings from users. Since titles related to mindfulness activities (e.g., breathing and meditation) signal a method to help users reduce their anxiety, users may perceive these apps to be more useful and applicable [RM-Huang1] [RM-Bakker1].

Representation in Data

When using data for design and decision making, ensure that all groups and subgroups are proportionately represented in the dataset. Research and longitudinal studies should be performed to verify this.

Given the realities of app accuracy, efficacy, privacy, and security, a variety of technology platforms could be used for collecting data, rather than just apps and smartphones. This avoids biases against communities who struggle to use these devices or apps, such as people aging with memory impairments, and reduces the effect of the digital divide [RM-Bauer1].

Appendix 1: Terms

Social media refers broadly to web and mobile platforms that allow individuals to connect with others within a virtual network (such as Facebook, Twitter, Instagram, Snapchat, or LinkedIn), where they can share, co-create, or exchange various forms of digital content, including information, messages, photos, or videos (Ahmed et al. 2019)

Algorithm refers broadly to the logic often used to run computer programs.

Appendix 2: Acknowledgments and sources

Thanks to the Crimes against Children Investigations, Israel National Cyber Unit, for the review. Interviewed by Lisa Seeman-Horvitz.

References

Note that this section needs to be cleaned up and made consistent with other issue papers.

Most citations are the row number in the Mental Health Literature Review (Responses), unless they include the letters CG, in which case they are the row from the COGA general research database.

Other sources

  1. Social Media and Mental Health: Benefits, Risks, and Opportunities for Research and Practice
  2. Going Online to Stay Connected: Online Social Participation Buffers the Relationship Between Pain and Depression
  3. From envy to social anxiety and rumination: How social media site addiction triggers task distraction amongst nurses
  4. Association of Facebook Use With Compromised Well-Being: A Longitudinal Study
  5. The moderating role of age in the relationship between social media use and mental well-being: An analysis of the 2016 General Social Survey
  6. Can social media cause PTSD?
  7. Social Media Use and Its Connection to Mental Health: A Systematic Review
  8. NetTweens: The Internet and Body Image Concerns in Preteenage Girls