This text is an early draft. We intend to improve it before publication.

This document is part of a set of papers that describe accessibility issues for users with various cognitive or learning disabilities and mental health issues. See the cognitive or learning disabilities issue papers for other modules.

This document is part of a set of related informative publications from the Cognitive and Learning Disabilities Accessibility Task Force (COGA TF), a joint task force of the Accessible Platform Architectures Working Group (APA WG) and the Accessibility Guidelines Working Group (AG WG) of the Web Accessibility Initiative.

Feedback on any aspect of the document is welcome. The Working Groups particularly seek feedback on the following questions:

To comment, file an issue in the W3C coga GitHub repository. You can also send an email to public-coga-comments@w3.org (comment archive). Comments are requested by 16 July 2024. In-progress updates to the document may be viewed in the publicly visible editors' draft.

Introduction

The Internet is not always a safe place. As in life off the Internet, everyone is at risk of encountering crime online. These activities, usually referred to as cybercrime, include fraud, terrorism, extortion, harassment, and hacking, and are perpetrated by several types of criminals:

There are also activities that can affect people's safety that are not illegal. Personal information can be stored and sold to third parties, or stolen. This information can be used to harm the user, including through financial or emotional exploitation. This can include exploiting the user for marketing or to increase click-throughs.

Mental health apps are potentially an important part of health care. However, research shows that many people do not use mental health services via the web or apps due to concerns about whether mental health information is kept private and how their data is used (31).

Other research shows people avoiding apps whose complexity and cognitive load frustrate them. This deprives users of support and services that would otherwise be available. It also means fewer people with emotional, cognitive, and learning disabilities are represented in the data surrounding these services, which makes many groups invisible to data-driven decisions. Note that smart cities and traffic allocations are often based on this data.

Use of the Internet is an important part of belonging to society and has many benefits for people with disabilities. However, there are significant risks depending on the activity and how it affects the individual. (23)

The solutions proposed include processes that support safety, including for mental health, wellbeing, supported consent and decision making, privacy and more. Such a solution could be part of its own certification process.

Safety Issues for People with Cognitive Disabilities

Cybercrime

Hackers

People with cognitive disabilities may not be able to easily use some of the common security measures used on the Web such as two-factor authentication and safe and unique passwords.

Extra security precautions to increase password strength often make this group more vulnerable to "human error". This can encourage risky behavior such as keeping a list of passwords on a desk which can be used by anyone who has physical access to the room. Also, when users ask for assistance to complete these security procedures, they can put themselves at high risk of being abused by those they trust to help.

Con-artists

These cyber-criminals use deception to gain trust, which enables them to negatively influence the behavior of vulnerable individuals. People with cognitive impairment who experience difficulty understanding social cues will likely fail to accurately identify a risky or potentially harmful situation. Those who have difficulty understanding that others can act contrary to their own hypothetical actions in a given situation (i.e., mind-blindness) are more trusting and may easily believe false information. Also, people with impaired reasoning, attention, or memory may be similarly vulnerable to these situations, as they are less cognitively equipped to validate presented information.

Sexual predators

People with cognitive disabilities may be more at risk of being a victim of a sexual crime. This is more likely if:

  • they tend to be unaware of someone using a fake identity or misleading information;
  • they are dependent on caregivers and family who they are afraid of disappointing, which makes them susceptible to blackmail;
  • they tend to believe false information and find it harder to validate facts;
  • they are less likely to identify unreasonable requests.

Data, Algorithms and Privacy

Personalization and privacy problems

Personalization is important, especially as a way to avoid conflict when meeting varying user needs, among many other reasons. However, if personalization is poorly implemented, there is a significant risk that user information and vulnerabilities can be exposed. This puts the most vulnerable users of this population at the greatest risk.

Algorithms and Privacy

Algorithms are computer programs that try to solve a specific problem or achieve a goal. (Goals are set by the programmers and company they work for). These days there is a growing amount of automated decision-making by computer algorithms. These decisions use information from tracking our everyday behaviors online and via apps. This data is often sold to or used by companies, including organizations providing critical services such as employment, health care, and credit.

This creates multiple problems. Many users know that they are at risk of scams and of data misuse (in employment and elsewhere) and are therefore afraid of using digital services that they may need. One study noted that all users had concerns about privacy and their data being inappropriately shared. (40)

The misuse of algorithms has the potential to cause financial harm to people with cognitive and/or mental health conditions. For example, a patent was filed that included an algorithm that could help financial institutions analyze a person's social network and use that data when deciding whether to grant the person's loan application. (35)

People with cognitive, learning and emotional challenges may be most vulnerable to the risks related to algorithms and big data. This includes errors and biases in the algorithm and problems with privacy. (29)

In another use case, Internet of Things (IoT) devices are increasingly involved in providing medical care. This can include support for mental illness and cognitive support. Massive amounts of data are collected from as many digital activities and devices as possible. This erodes people's sense of what is public, as much of this data is generated in homes and other private settings. When this data is then used to drive decisions such as loans and insurance, people with cognitive and learning disabilities and mental illness may be especially at risk of harm from algorithmic biases. Privacy becomes more important for individuals with mental health disorders and cognitive disabilities, especially where there is prejudice and stigma. (56)

This is becoming worse as technology improves. For example, people who are victims of domestic violence may avoid putting their picture online because they do not want to be easily found. Automated programs for job applications may be biased against profiles without a picture.

Social media

Heavy and prolonged time spent on social media platforms appears to contribute to increased risk for a variety of mental health symptoms and poor wellbeing, especially among young people. This may partly be driven by the detrimental effects of screen time on mental health, including increased severity of anxiety and depressive symptoms, which have been documented.

Recent studies have reported negative effects of social media use on the mental health of young people, including social comparison pressure and greater feelings of social isolation after being rejected by others on social media. Social media envy has been shown to affect levels of anxiety and depression in individuals, and to affect body image, anxiety, and depression especially in teens. (33, 34)

Negative comparisons with others on social media contributed to risk of rumination and subsequent increases in depression symptoms in young adults.

For preteenage girls, time spent on social networking sites produced stronger correlations with body image concern than did overall Internet exposure. This may lead to reduced self-esteem, dieting, and eating disorders. With overexposure to social media, preteens are highly likely to be exposed to material that they neither fully understand nor evaluate sufficiently critically. (8)

The relationship between social media and mental well-being may be affected by age. In one study, for adults over 30, social media use increased anxiety, while for adults aged 18–29, social media use decreased anxiety. This is a counterpoint, where sometimes social media reduces the incidence of anxiety, especially for younger people. (5)

One study contends that social networking site addiction stimulates various stressors among nurses, such as envy, social anxiety, and rumination, that augment its negative effects on task distraction. (3) A preference for online social interaction was also found to be linked to social anxiety and loneliness. (11)

One other systematic review has found that social media envy can affect the level of anxiety and depression in individuals. (7)

One study suggests that using a social networking site was associated with a likelihood of diminished future well-being and reduced self-esteem. Another study found that excessive use of a social networking site could induce higher levels of envy, which led to feelings of depression. Also, people with signs of depression linked to social networking sites spend even more time on those sites, leading to a vicious circle of depression. (4)

On the other hand, social media has become an important part of the lives of many individuals living with mental disorders. Many of these individuals use social media to share their lived experiences with mental illness, to seek support from others, and to search for information about treatment recommendations, accessing mental health services, and coping with symptoms. (1) Overall, although time spent on the Internet was found to be negatively associated with mental health, some activities, such as school work, were positively associated. (23) Studies have also shown that social media helped teens cope with isolation during COVID lockdowns. (73)

Social media can also support people with mental health conditions who are in pain and cannot go out much, and can protect from social anxiety if web sites support cognitive impairments and social anxiety. Online social participation can alleviate the negative effects of pain on mental well-being. (2)

In addition, many of the studies are scientifically weak. For example, studies often had small samples, the effect shown was not statistically significant, and/or the studies showed correlation, not causality. (74) For example, the correlation between social media use and mental health issues may arise because users turn to social media for help with mental health issues, such as loneliness, resulting in the correlation.

Solutions should enable use of these services without the risks.

Algorithms, AI (Artificial Intelligence) and Curated Content

Algorithms and artificial intelligence can make all the above issues worse.

Artificial intelligence (AI) and cognitive computing can empower algorithms by learning. This includes learning the individual user's behaviors. For example, an algorithm may have a goal to increase the time the user spends on a web site or application. AI could then continuously adapt the algorithm to be more effective for each individual user. In our example, it will learn what makes each user more likely to click on a link. Unfortunately, this is often anger, fear, and other negative emotions. This may have two main effects on the mental health of the user.

Curated content

Algorithms often adapt and tailor content to the individual. Often the algorithm changes the content so that the user stays on the site longer and is more likely to click on links and advertisements. The result is curated content. Curated content also creates "echo chambers": an environment where you see content that you disproportionately agree with. The user is unlikely to be exposed to different points of view.

Users often do not know the source of the news or the goals, settings, and biases of the algorithms, making it harder to realize their isolation. Issues that can get reinforced include unhealthy eating practices, such as eating disorders, antisocial behavior, and others. Anxieties are also often reinforced by exposure to many voices expressing similar views, such as conspiracy theorists. Of course, this often happens in real life as well, such as when living with like-minded people and in communities. However, on the web this can happen faster and to a more extreme extent. (8)

Societal sanity

Of course, this also affects society as a whole, not just the individual. When groups are exposed to large amounts of content that disproportionately represents one point of view and, for the sake of clicks, encourages negative reactions to people outside the group, it becomes more difficult for people to live together and respect differences.

Spending and emotional regulation

Cognitive, learning, and mental health conditions often impact spending. (61) Further, frustration from difficult interfaces can cause stress and impair decision-making. When under stress, people can become more impulsive. Research on decision making under stress shows a shift toward fast, intuitive decision making over slow, logical or analytical processes. Impaired emotional regulation may also affect how people act online and make their reactions more extreme. (63)

Further, people have reported (in response to this paper) that they are afraid of other long-term consequences of posting online. For example, if they post about mental health issues, or if their posts are considered inappropriate or imply health issues, that content can be used against them or to harm them after recovery has started.

Gatekeeping and Other Issues

Risks to users seem to increase with factors such as: lower levels of sociability offline; loneliness, anxiety, and depression; poorer insight, judgment, discrimination, and ability to detect deception online; and reduced experience and life opportunities.

Perceived high online risk may lead to gatekeeping restrictions and controlled digital access; such restriction may affect online self-determination, participation, and development by people with intellectual disabilities and others (Chadwick, D.D. (2019); row 30 of the COGA general research database). There is also a significant risk of solutions being misused to control and oppress marginalized or vulnerable people. Providers should be very cautious about providing gatekeeping in locations where human rights for everyone are not well established and enforced. This paper only advocates providing tools of autonomy that cannot easily be used as tools of repression.

Internet or gaming addiction and overuse

Overuse of some internet features, such as gaming, can sometimes reach levels of addiction. Compulsive-impulsive spectrum disorder is often described as a condition in which people develop a problematic, compulsive use of the internet (mainly gaming) that causes significant impairment in different aspects of life over a prolonged period of time, such as neglecting other important areas of life (work, school, family) or lying to hide the extent of internet use. Note that not everyone agrees that this is a clinical diagnosis; however, the effect is well established.

There was an association between a diagnosis of Internet addiction and serious psychiatric symptoms such as somatization, sensitivity, depression, anxiety, aggression, phobias, and psychosis. This held after controlling for age, sex, education level, marital status, and type of university. (23)

In other research on people with schizophrenia or schizoaffective disorder using digital technology, negative feelings were reported "often" or "very often" 56% (255/457) of the time. This included feelings of being unable to stop (27%, 123/457), frustration (25%, 114/457), paranoia (24%, 110/457), worry (20%, 91/457), sadness (20%, 91/457), anger (19%, 87/457), mania (16%, 73/457), or envy (16%, 73/457). (Note a possible bias: some who are afraid of technology may not have taken this web-based survey.) (14)

Becoming invisible in data

As discussed above, many people do not use apps for mental health services and cognitive support due to concerns about how their data is used and whether mental health information will be kept private. (31) Other reasons include the complexity and frustration experienced when these apps are not designed with cognitive, learning and emotional disabilities in mind.

However, more and more decision making processes, such as smart cities and traffic allocations, are based on data gathered from phone use and apps. This makes many groups invisible to data-driven decisions. For example, decisions on parking spaces may be based on data from a parking app. People with disabilities may find the app confusing or unusable. They may pay with cash and be left out of the data. When they go to events, their parking needs are not captured. This can also occur for other public facilities as decision making becomes more data driven.

Proposed Solutions

Safety should be a priority when making content accessible for people with cognitive disabilities. Extra care should be taken at the same time to keep them safe.

All user information must be kept safe, to the fullest extent possible. Any clues that the user has cognitive disabilities, such as a request for a simplified version, should be protected information.

Personalization systems should be designed so that any information implying vulnerabilities stays on the user's device and is secure. Using functional requirements can also be a safer alternative to describing user needs in systems such as metadata.
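As a minimal sketch of this idea (the profile keys and requirement names here are hypothetical, not from any specification), a sensitive profile can stay on the device while only generic functional requirements are shared:

```python
# Hypothetical sketch: sensitive details stay on the user's device;
# only generic functional requirements are shared with the site.

SENSITIVE_PROFILE = {
    # Stored only locally, never transmitted.
    "diagnosis": "dementia",
    "requested_simplified_version": True,
}

def functional_requirements(profile: dict) -> dict:
    """Translate a sensitive local profile into generic, non-identifying
    functional requirements that can more safely be sent to a server."""
    requirements = {}
    if profile.get("requested_simplified_version"):
        requirements["reduced-complexity"] = True
        requirements["extra-help"] = True
    return requirements

# Only the generic flags leave the device; the diagnosis never does.
payload = functional_requirements(SENSITIVE_PROFILE)
print(payload)  # {'reduced-complexity': True, 'extra-help': True}
```

The design choice is that the server learns only what the interface should do, not why the user needs it.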

Giving the user control

Use of data

What data is collected, and how that data is going to be used, should be clearly communicated and transparent, particularly for any sensitive information. It should be simple for users to see their data and make corrections when they discover errors. (35)

Users would like more control over their personal data and the kind of content they are shown on social media. (58)

Revealing sources of content may also be helpful. This will allow at least some users to notice the selection bias.

Where the user is unlikely to be exposed to different points of view, users should at least know the source of the news or the goals, as well as the settings and biases of the algorithms.

Ease of personalization

Protecting one's data is often a site-by-site issue. Therefore, it is important that everyone can set it themselves and give informed consent.

Canceling any data storage must take a minimum number of steps. This option should appear first in any list of data options; the user should not have to click or scroll to find it.

Example

We would like to use information about you for a few different purposes. Choose from the options below.

Option 1: Do not store any data about me! (The site might work less well.)

Option 2: Only use data that the site needs to function well.

Option 3: I want to choose different usages.

Option 4: I am OK with all uses of my data in your standard data usage. (See our data usages)
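The consent choices above could be modeled roughly as follows (a hypothetical sketch, not a prescribed API): the most protective option is listed first so it takes the fewest steps to reach, and it is also the fallback when the user makes no explicit choice.

```python
# Hypothetical sketch of the consent options above. "Store nothing"
# is deliberately first so it takes the minimum number of steps.

CONSENT_OPTIONS = [
    ("none", "Do not store any data about me! (The site might work less well.)"),
    ("functional", "Only use data that the site needs to function well."),
    ("custom", "I want to choose different usages."),
    ("all", "I am OK with all uses of my data in your standard data usage."),
]

def default_choice() -> str:
    """If the user makes no explicit choice, fall back to the most
    protective option rather than the most permissive one."""
    return CONSENT_OPTIONS[0][0]

print(default_choice())  # none
```

Defaulting to the most protective option matters because users who are confused by the dialog should not end up consenting to everything.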

Supported decision making

Supported decision making (SDM) refers to functions that help people with disabilities make decisions as independently as possible whilst staying safe. This includes choosing helpers to help them make choices. (1, 2, 3, 4, 5) The helpers can be friends, family members, or professionals. (1) Supported decision making helps people avoid a formal guardian. (2, 3) SDM needs to adapt to the changing needs of the individual. (3) Please see the solutions suggested in the supported decision making issue paper (See supported decision making issue paper - temporary link)

Social Media and Algorithm Driven Content

For social media, provide a graded range of solutions, including:

Supported decision making that can help people check themselves. Please see the solutions suggested in the supported decision making issue paper (See supported decision making issue paper - temporary link)

A process of safety

This paper recommends establishing a process that promotes the wellbeing of users. Such a process could include:

Hackers

Security should be strong and easily used by those with cognitive disabilities, such as a biometrics option. For a full discussion see the issue paper on security.

Sexual predators and con-artists

  • A site with a chat option should prevent any exchange of personal information.
  • Users should be regularly warned to avoid scams.
  • Getting help and or reporting something worrying should be extremely easy to do. Users should know they will never be penalized for reporting something.
  • Users should find it easy to report to the cybercrime units in their jurisdiction.
  • Provide easy-to-use videos and tips that explain cyber criminals, how to stay safe, and how to report anything that seems odd.
  • Server-side solutions, such as analytics, can be employed to find cyber-criminals.
  • Advertisements and paid articles should be vetted for reliability. They should be clearly marked as external content in an easy to understand way.
  • Users should be made aware when they are leaving your site or going to a less trustworthy site, including when following links from other users.
  • Sites offering sexual content or intended for chats of sexual nature should state that clearly.

Mainstreaming: application names should not mention disabilities

Apps and services should avoid mentioning disabilities, including cognitive, emotional, and learning impairments, in their names.

Research has suggested that apps with titles related to anxiety disorders and symptoms have lower adoption and fewer reviews than others.

Anxiety apps with titles related to mindfulness activities have more installs, more reviews, and higher ratings from users. Since app titles related to mindfulness activities (e.g., breathing and meditation) signal a method to help users reduce their anxiety, users may perceive them to be more useful and applicable. (5) (6)

Invisibility in data

When data is used for decision making, ongoing research and longitudinal studies should check that all groups and subgroups of people are proportionately represented in the data, for example, people aging with memory problems.

Given the realities of app accuracy, efficacy, privacy, and security, a variety of different technology platforms could be used for collecting data rather than just apps and smartphones. This will avoid biases against communities who struggle to use these devices or apps and reduce the effect of the digital divide. (19)

Appendix 1: Terms

Social media refers broadly to web and mobile platforms that allow individuals to connect with others within a virtual network (such as Facebook, Twitter, Instagram, Snapchat, or LinkedIn), where they can share, co-create, or exchange various forms of digital content, including information, messages, photos, or videos (Ahmed et al. 2019)

Algorithms refers broadly to the logic used to run computer programs.

Appendix 2: Acknowledgments and sources

Thanks to the Crimes against Children Investigations, Israel National Cyber Unit, for the review. Interviewed by Lisa Seeman-Horvitz.

References

Note that this section needs to be cleaned up and made consistent with other issue papers.

Most citations are the row number in the Mental Health Literature Review (Responses), unless they include the letters CG, in which case they are the row number in the COGA general research database.

Other sources

  1. Social Media and Mental Health: Benefits, Risks, and Opportunities for Research and Practice
  2. Going Online to Stay Connected: Online Social Participation Buffers the Relationship Between Pain and Depression
  3. From envy to social anxiety and rumination: How social media site addiction triggers task distraction amongst nurses
  4. Association of Facebook Use With Compromised Well-Being: A Longitudinal Study
  5. The moderating role of age in the relationship between social media use and mental well-being: An analysis of the 2016 General Social Survey
  6. Can social media cause PTSD?
  7. Social Media Use and Its Connection to Mental Health: A Systematic Review
  8. NetTweens: The Internet and Body Image Concerns in Preteenage Girls