In general, accessibility is about real people with different types of disabilities. A digital product can be used by a blind or a deaf person, by someone physically impaired, or by someone with cognitive problems. It can also be used by an older person who is not so quick at operating websites, or by a user with a partial disability, like color blindness or myopia. Or by someone with a combination of all of these.

While many of us think of disabilities as permanent conditions, we should not forget that many of us are regularly affected by temporary ones. Imagine holding a baby in your arms, not being able to see clearly because your glasses broke, or having a broken arm.

In the end, we all have conditions that make us less able to do some things at one point or another. Like many of us, I personally have problems with my vision. I also have colleagues with hearing problems, mobility problems, or other health issues. One of my coworkers didn't think of his situation as a physical disability - it was a temporary condition that made it impossible to do things the way he had done them almost his entire life. He had an accident and was disabled for a few months, yet he managed to work even without the use of his hands. Necessity is the mother of invention.

It is correct to say that accessibility is primarily focused on people with disabilities. On the other hand, the improvements that are essential for those with disabilities are also beneficial to all other users: older people with abilities changing due to aging; people with “situational limitations” such as bright sunlight (better contrast!), a noisy environment (subtitles!), or limited fluency in the language (easy-to-understand software and texts); and even people using a slow Internet connection. None of these are typical users with a disability, but all of them are affected by a website's lack of accessibility considerations.

Today, one billion people globally - around 1 in 7 - live with some form of disability, and many of them struggle to use digital products effectively. For example, over 970 million people need glasses, and almost 500 million have hearing problems. All of them, and many more, cannot use digital products the way people without disabilities can. And still, despite that massive group of possible users (and potential customers), there are not many digital products - websites, mobile apps - that they can use without a struggle. These people want to surf, buy online, listen to something, or get the news. But very often, using the software almost hurts. It is like trying to ride a broken bicycle.

It is a massive group of potential users. Yet accessibility is still not common. Why is that?

The paradox of inaccessible accessibility

Nowadays, adding a wheelchair ramp to a building to provide access for disabled people doesn't surprise anybody. Such ramps also help other people, such as parents with strollers, delivery personnel with trolleys, and more. So why are we still surprised when it comes to ensuring access to web content? Many websites and online applications are hard for people with disabilities to use, and it is not without reason. In general, making software is neither easy nor cheap, and making accessible software or hardware is even more complicated.

I sincerely appreciate the hard work put into the documentation, but I find the official guides quite frustrating and difficult to follow. Of course, they are packed with information; they were designed to be as comprehensive as possible. But the result is hundreds of pages of documentation, which can be overwhelming for a front-end developer who doesn't happen to be an accessibility expert. It takes time to go through the documentation and follow all the rules, especially when there are official regulators that will audit the accessibility of the system. It takes time, so it costs. When a company does not have a large budget, or its IT team is small and overloaded, accessibility is not the first item on the product's roadmap.

To illustrate the problem with the documentation: there are currently two stable versions of WCAG:

  • WCAG 2.0 was published in December 2008 and has become widely adopted as the standard for web accessibility by many businesses and governments worldwide. It defines 12 Guidelines under the four POUR principles. Under each Guideline, there are more specific Success Criteria divided into three Conformance Levels: A, AA, and AAA. WCAG 2.0 defines 61 Success Criteria.
  • WCAG 2.1 was published in June 2018 to better address accessibility for people with cognitive and learning disabilities, people with low vision, and people with disabilities using mobile devices. WCAG 2.1 is fully backward compatible with WCAG 2.0. WCAG 2.1 defines 13 Guidelines and 78 Success Criteria.
But there are also new, additional documents - marked as works in progress:
  • WCAG 2.2 was published in August 2020 as a working draft. "Following these guidelines will make content more accessible to a wider range of people with disabilities, including accommodations for blindness and low vision, deafness and hearing loss, limited movement, speech disabilities, photosensitivity, and combinations of these, and some accommodation for learning disabilities and cognitive limitations; but will not address every user need for people with these disabilities. These guidelines address accessibility of web content on desktops, laptops, tablets, and mobile devices." Content that conforms to WCAG 2.2 also conforms to WCAG 2.0 and WCAG 2.1.
  • WCAG 3.0 was published in January 2021 as the first public working draft. Because this is the newest version, it will be interesting to observe the progress of the work. It is also possible to contribute.

Almost ten years passed between the first version and the second one. This long gap raised questions about whether WCAG 2.0 was still relevant as technology changed ever faster. When WCAG 2.1 became obligatory, a lot of institutions needed to comply. Now, with two new drafts in progress, it is even less clear what to follow to be compliant with the rules (and then to pass official audits).

The other thing is that the Web Content Accessibility Guidelines (WCAG) define four principles:

  1. Perceivable: information can be presented in different ways; for example, in braille, different text sizes, text-to-speech, or symbols, etc.
  2. Operable: functionality can be used in different modalities; for example, keyboard, mouse, sip-and-puff, speech input, touch, etc.
  3. Understandable: information and functionality are understandable; for example, consistent navigation, simple language, etc.
  4. Robust: content can be interpreted reliably by various browsers, media players, and assistive technologies.

Preparing a whole system to comply with all the rules under all four principles is not easy. It is possible, of course, but again: it takes time and good people on a project to fulfill all of it. So it costs.
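That said, many of the individual Success Criteria behind these principles are concrete and mechanically checkable. For example, Success Criterion 1.4.3 (Contrast Minimum, level AA) requires a contrast ratio of at least 4.5:1 for normal-size text. As a small illustration (a sketch in Python, not any official tooling), the contrast ratio as defined in the WCAG 2.x specification can be computed like this:

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color, per the WCAG 2.x definition."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, ranging from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background gives the maximum ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# Light grey (#AAAAAA) on white fails the 4.5:1 AA threshold for normal text.
print(contrast_ratio((170, 170, 170), (255, 255, 255)) >= 4.5)  # False
```

A check like this can run automatically in a build pipeline, which is exactly the kind of small, cheap step that makes the larger compliance effort less daunting.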

Do not get me wrong. I am thrilled that these rules exist and that passionate people work on them. I just want to highlight the lack of simpler one- or few-page guides that would also be official and accepted, so that smaller companies with lower budgets could comply and pass 'low-profile' official audits. If you think about it, every improvement helps people use the Web. It would be great if companies that build software could improve accessibility with an official green light and acceptance - with a few stages of, let's say, certification. Then small companies could also respond faster to the needs of people with disabilities. This is, of course, only my opinion, formed after observing the situation in both public and private companies.

There is also another side to the accessibility problem: the money that disabled users have (or do not have). According to the WHO: "In many low-income and middle-income countries, only 5-15% of people who require assistive devices and technologies have access to them." Most of the time, accessible technologies are simply more expensive. I encourage you to check the latest summary of accessible apps on iOS and compare their prices to other apps in the store. Of course, you can find a few cheap or free apps, but in general, accessible apps are far more expensive than others. When comparing hardware prices, professional gaming equipment is more affordable than a device that helps a blind person surf the Internet. And so on.

There are, of course, reasons for that. "The common sentiment regarding assistive technologies was to develop a range of offerings that produce sustainable profit-making businesses through affordable end-user solutions for relevant unmet needs." But developing accessible software or hardware requires resources, so the price of the final product is higher. In other words, an accessible product is less affordable (and so less accessible).

To wrap up: accessibility is quite a complicated topic. There is extensive documentation, and passionate people also write easier-to-follow instructions. Yet the cost of providing fully accessible software is high - sometimes too high for some companies to fit into their budgets. That is the answer to the question of why accessibility is still an issue, and why it almost has to be imposed, even though logic suggests that companies want more and more customers and should take care of it on their own.

But let's get back to the main topic.

So, you can't move...

A physical impairment can result from sickness or an accident that can happen to anyone. It can be a permanent disability, like a broken spine, or a temporary one, like a broken arm. A person can also have movement problems due to old age or illnesses like Parkinson's disease or ALS. All these people would like to use the Internet, but it is hard or even impossible for them to use a keyboard and mouse. How do they do everyday things, then? How do they work? Fortunately, there are solutions.

I. Technology to the rescue: Voice

The first thing that comes to mind when picturing somebody who cannot move is voice: such a person could still give voice commands. That intuition is correct, and it is not out of reach. IoT makes it possible to control various devices with voice commands (and not only voice, of course). When all household appliances are connected via the Internet, the next step is a control panel, and the form of that "steering wheel" should be adapted to the end-user's needs. It can be touch (for everyone, including blind users), voice (again, for everyone, but especially for physically impaired or blind users), or screens (for everyone, but also for deaf users), and so on. With a few different ways to control the devices in an IoT network, disabled users can benefit too.

When thinking about IoT in the context of accessibility, voice assistants are especially interesting. The technology is still maturing, but there has been a noticeable jump in its capabilities - and in its recognition. Based on the PwC report, "only 10% of surveyed respondents were not familiar with voice-enabled products and devices. Of the 90% who were, the majority have used a voice assistant (72%)."

Voice assistants can change the way we use devices. Yet there is still massive work ahead for developers, designers, and... customers. "Despite growing capabilities, basic tasks remain the norm. For now, the bulk of consumers have yet to graduate to more advanced activities like shopping or controlling other smart devices in the home. Consumers see voice assistants as the smarter, faster, and easier way to perform everyday activities. Yet, for more serious situations involving money (shopping, refund on an airline ticket, etc.), consumers prefer what they already know and trust—at least for now."

The answer to the question of why voice assistants are not broadly used can be pretty straightforward. Using only your voice, it is relatively easy to pick a song or set an alarm. But when it comes to checking bank accounts or doing work tasks, it is almost impossible. For now, a user can ask a voice assistant for help preparing a to-do list - but actually doing those tasks? In most cases, not yet. There are two factors keeping this out of reach. The first is the technology itself. It still needs improvement, which is not surprising given the vast number of languages people speak. Add the fact that nobody speaks perfectly, and that not all of us are perfectly healthy, and the number of possible ways of saying something more than doubles.

As an experiment, I asked the voice assistant on my Google Home to translate a simple sentence from English to Italian. I tried to do it using:

  1. my native language (Polish) -> it didn't succeed. That didn't surprise me, as I am aware that the language is quite tricky and not supported.
  2. my second language (English) -> it failed on the first and second attempts, probably because of my accent. The third time, when I put effort into the command, it worked. Yay!
  3. the third and last language I tried was the one I am currently learning (Dutch). The attempt failed for two reasons: the assistant does not support the language well, and I lack proficiency in it. I just wanted to try :)

To summarize: if you are a native English speaker, voice assistants are probably fine for you, or at least easier to use. If you are a non-native speaker - good luck making a money transfer via voice assistant.

Voice recognition technology has a big task ahead: reliably recognizing the things people say. And that is only the first problem. The second is proper UX for voice assistants (best known as VUI, Voice User Interface). Voice assistants are quite new, and preparing an app that can be used with voice alone demands new design approaches. For now, a well-designed VUI is more of a "cherry on the cake". Let's hope that changes soon, as more and more people use voice-controlled devices for more than leisure. This is, as we expect, our future. What is incredible is that it can be an accessible future.

II. Technology to the rescue: Gestures

When a person cannot move (or their movements are shaky or trembling), there are other ways to do things: gesture recognition. Manufacturers keep enhancing smartphones, yet fully controlling a phone without touch is still not easy. People who cannot use their fingers can work around the problem with a rubber-tipped pencil (held in the mouth or even in the toes). But not only that: there is also software that lets a regular smartphone be controlled with head movements.

I have checked one app that provides this solution: Sesame. With it, you can control the device using head movements. It is not as efficient as a mouse and/or keyboard, but it is quite helpful.

How does it work?

  1. Setup
    1. Download the software and watch the tutorial (I strongly recommend it!).
    2. Adjust the camera. If your face is not visible and outlined by the software, it will not work correctly.
    3. Adjust settings.
    4. You can change colors or turn on dark mode (a nice feature for people with visual impairments). You can turn on voice commands, add multiple-monitor support, and change the control mode, the sensitivity of the software, the dwell time, or the pointer time (I also recommend trying it together with voice commands).
  2. Usage
    1. Turn on Sesame to start controlling your computer with head movements alone.
    2. When Sesame is active, you will see a pointer. It moves according to your head movements. It can feel awkward at first, but it is easy to get used to. I had to change the settings to make the pointer move more slowly, but that is a personal preference.
    3. You can move the pointer wherever you want to perform an action. If you leave it in one place for a while, you will see a loader icon showing when your pause will be interpreted as an intention to click that area. When that happens, a small menu appears, where you choose whether you meant a left click, a right click (as with a mouse), or no action at all. You can disable this menu in the settings by choosing the option that you always mean a left click. From my perspective, the menu could be slightly more extensive.

That's it. After a few minutes of testing, it is possible to use your computer without a mouse. But you still need a keyboard. I didn't manage to open a completely new website by typing a URL or searching in Google; that still requires keyboard software that can be operated without hands (like Actigaze, mentioned below). It is also possible to use an on-screen keyboard (built into Windows), but in my tests it was not easy to use together with Sesame. Another tip: ensure good light in the room!

You can download the Sesame software from its website. There are three options: Sesame for mobile (Android, and on iOS for communicators only) and Sesame for Windows.

III. Technology to the rescue: Gaze

Eye-tracking technology has been well known to researchers and scientists for many years. It is used in user tests (especially in marketing) and in scientific research. For gamers, this kind of hardware is geared mainly towards improving the gaming experience.

But what is it exactly? An eye-tracker is typically an infrared camera that observes your eyes and calculates which point on the computer screen you are looking at. The technology is often associated with specialized equipment and a high price tag. Fortunately, there are also cheaper solutions on the mainstream consumer market. For example, the Tobii Eye Tracker 4C costs as little as €169 (which is competitive compared to other devices for people with movement disabilities).

The idea of using an eye-tracker for the physically impaired is simple: by combining an eye-tracker with a special browser, people can be online using only their eye gaze - without a mouse or keyboard. It is worth mentioning that using an eye-tracker can be pretty tiring after a few hours. Yet sometimes it is better to be tired and still able to do some work!

An elegant example of such software is Actigaze, a unique browser that, combined with an eye-tracker, enables disabled people to go online. The software is the result of research at the University of Auckland: it emerged a few years ago from experiments and evolved into a product.

From the user's perspective, the Actigaze browser is simple. Once an eye-tracker is installed and calibrated, the browser lets you open websites, scroll through content, and use buttons and hyperlinks. All of this is possible thanks to a user interface designed specifically for eye gaze - a different approach from simple point-and-click.

Eye gaze tracking has to address a few challenges. First of all, precision and accuracy: eye-trackers detect the area of central foveal vision, which on the screen is about as large as your thumbnail. Imagine two short hyperlinked words separated only by a space - eye-tracking cannot easily distinguish which one you want to choose. So controlling the mouse cursor directly via eye gaze would not be accurate enough.
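To get a feel for why this matters, a quick back-of-the-envelope calculation helps. Assuming roughly 2 degrees of central foveal vision and a typical viewing distance of about 60 cm (both figures are my illustrative assumptions, not measurements from the Actigaze research), simple trigonometry gives the size of the on-screen spot:

```python
import math

def foveal_span_mm(viewing_distance_mm: float, foveal_angle_deg: float = 2.0) -> float:
    """On-screen diameter covered by central foveal vision (assumed ~2 degrees)."""
    half_angle = math.radians(foveal_angle_deg / 2.0)
    return 2.0 * viewing_distance_mm * math.tan(half_angle)

# At an assumed 60 cm viewing distance, the foveal spot is about 21 mm wide:
print(round(foveal_span_mm(600.0), 1))  # ~20.9
```

Roughly 2 centimeters: about the width of a thumbnail, and easily wider than a short hyperlink, which is why raw gaze position cannot simply replace a mouse cursor.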

Furthermore, even if gaze pointing is relatively straightforward, activating ("clicking") is not. Activating a clickable element after you look at it for a certain time ("dwell") leads to inadvertent clicks - the "Midas touch" problem. This remains an essential concern for almost all gaze-only click alternatives.
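The naive dwell scheme can be sketched in a few lines (a hypothetical illustration, not the actual Actigaze or Sesame code; `dwell_s` and `radius_px` are made-up parameters). The sketch also makes the "Midas touch" problem obvious: the detector cannot tell looking-to-read from looking-to-click.

```python
import math

class DwellClicker:
    """Naive dwell-based activation: a click fires when the gaze stays
    within radius_px of where it settled for at least dwell_s seconds."""

    def __init__(self, dwell_s: float = 0.8, radius_px: float = 30.0):
        self.dwell_s = dwell_s
        self.radius_px = radius_px
        self.anchor = None  # (x, y) where the gaze settled
        self.since = 0.0    # timestamp of settling

    def update(self, x: float, y: float, now: float) -> bool:
        """Feed one gaze sample; return True when a dwell click fires."""
        if self.anchor is None or math.dist(self.anchor, (x, y)) > self.radius_px:
            self.anchor, self.since = (x, y), now  # gaze moved: restart the timer
            return False
        if now - self.since >= self.dwell_s:
            self.anchor = None  # fire once, then reset
            return True
        return False
```

Any element the user merely reads for longer than `dwell_s` gets "clicked", which is exactly the inadvertent activation described above. Actigaze sidesteps this by moving the confirmation step to dedicated buttons in the margin, as described next.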

The Actigaze browser solves the problems mentioned above. A website's interactive elements are labeled with colors. To click a link, all you need to do is note which color the link is labeled with, then look at the button of the same color in the margin of the web browser. So if a link is red, you just look at the red button in the margin. As a result, people who cannot move their hands can navigate the Internet with comparable ease: clicking links with the eyes can become as fast and accurate as with a mouse.

How does it work?

  1. Setup
    1. Calibrate your eye-tracker.
    2. Open the Actigaze browser. In the menu, you will find a bunch of settings to adjust. For people with vision problems, there is an automatic zoom level; once set, it works on every site. For people with color blindness or other color-related issues, there is a feature to adjust the button colors so they work best; the default setting is now compliant with the WCAG 2.1 rules. The creators also prepared a detailed list of options to adjust dwell time, scroll speed, accuracy, and more. While using Actigaze, it is good to try a few options to find the most convenient one.
  2. Usage
    1. To start: type the website's address. To do that, focus on the input field; it will be highlighted in the color of one of the sidebar buttons. Move your gaze to the button of that color, and the keyboard will open so you can type the address. To type, focus your gaze on a letter; you will see a loader icon showing how long you need to focus on the area for it to be clicked. You can always adjust this reaction speed in the settings (I had to, as I kept accidentally clicking the wrong letters). There is also an option to set one website as a start page.
    2. To browse: let the browser automatically scroll the page for you while you look at the top or bottom of the page (you can also turn this off permanently in the settings). If you want to open multimedia or follow links on the site, focus on the area of interest. All clickable areas will be highlighted in the colors of the sidebar buttons; then focus your gaze on the button that marks the area you want to explore. This way, you click it, just as with a mouse.

A combination of an eye-tracker and a dedicated browser can provide access to the Internet without a mouse or keyboard.

It is possible to:

  • Browse the Internet and move quickly from site to site, with easy control of pop-ups, ads, etc., that intrude while scrolling;
  • Read long online articles without the hassle of manual scrolling (built-in automatic scroll, a nice feature when you cannot easily move your hands).

The software can be used by people with temporary movement problems, permanent physical disabilities, or even by people with ALS. Just note: it can be tiring after some time, and typing long texts takes a while, so you need to be patient.

The Actigaze software is free. The creators are looking for testers, as they are open to further development and improvement of their product. You can download the software and start testing right now. I enrolled as a tester myself, and it didn't take me long to learn how to read the articles I wanted to read efficiently. I also met another tester, and there is a good story here: while using Actigaze, she found one crucial issue. The colors used on the buttons labeling URLs were too bright and pastel, so they were not accessible to people with visual impairments. After she reported it, the issue was fixed within a few days. It is an unusual case when software that helps people with particular disabilities still includes inaccessible features. It is good that the creators are open to feedback. Again, I encourage you to test the software - and report what you find.

To sum up: combine them all!

As bold as it sounds, that is the correct summary. The best approach for people with disabilities is to try a few solutions and combine them into a set that suits their particular needs.

While looking for materials for this article, beyond my own and my colleagues' experiences, I stumbled upon a great post by Joshua Comeau. The author described how he struggled with technology when physical problems occurred. He still wanted to work, and he finally established a setup that allowed him to do his job, though more slowly and more exhaustingly than before he got sick. It is worth reading!

So, the future is bright?

Technology is not the point; inclusion is. Technology is only a tool to achieve that goal. In theory, the Internet is for everyone. In practice, unfortunately, this is not true.

Assistive technologies are essential to include people, no matter their abilities, and to allow them to make the most of the Internet's many resources. Accessibility should be a goal of every person engaged in hardware and software development. It should not be an afterthought: products should be born accessible.

Fortunately, there is a wave of change. New programs, conferences, and foundations emerge. Awareness of the problem is much higher than in previous years, and the movement to ensure accessibility is more influential than ever before. Global Accessibility Awareness Day attracts more and more interest. Companies (hopefully) are starting to notice how vast this group is. It is comforting to finally see businesses beginning to understand the power of disabled consumers! Due to COVID-19, the inaccessibility of both online and offline services has become a more prominent topic. Remote work and meetings are likely to become the new norm, at least in technology companies. It is therefore more important than ever that everyone - truly everyone - can access these technologies.

One thing is deciding to make a product accessible. The other is knowing how to make it accessible - how to make the user experience itself accessible. To illustrate this problem (and to encourage a new approach), I describe it as accessibility not being built into designers' thinking. What I call AUX (Accessible User Experience) should be primary among designers' tasks. This kind of approach also saves work for developers. But AUX is a topic for another article :)

Further reading


Title: How to use the web when you can't move?
Author: Małecka, Katarzyna
Year: 2021