User testing observations with disabled mobile users

Below are a handful of observations from user testing on mobile websites and applications I’ve seen recently. All users had some form of disability, including limited mobility, sight impairments, cognitive impairments, dyslexia or hearing loss. Testing was carried out on Android or iOS, with blind users accessing content via the TalkBack or VoiceOver screen readers respectively. For obvious reasons I can’t share with you any details about the products.

Disclaimer: this is purely based on my observations and should not be taken as fact. I’ve added commentary and interpretation, but again, this is my own opinion. What I’m really interested in is hearing what you think, especially if you are a disabled user.

Apps versus browsing

Many users say they prefer native apps over browsing the web.

[Apps are] quite focused, I find it easier than the website – A blind iPhone user

This makes sense. Apps are task-based, have less clutter and tend not to cross-sell information. Standard UI components for iOS and Android also come with accessibility baked in for screen reader users, i.e. they have the correct trait and hint assigned, so elements can be identified and explained correctly to the user by their software. All the developer needs to do is assign a label (the alternative text) describing the component. As Alastair Campbell pointed out when we discussed this, activating a button within an app takes you directly to the next step in whatever you are doing. On a website, however, activating a link will most likely load a new page, which you then have to navigate in order to get back to the point where you left off.
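For contrast, here is a minimal, hypothetical web-side sketch (the control and its labels are my own examples, not from any tested product) of how much of that baked-in accessibility a custom web control has to recreate by hand:

```typescript
// Hypothetical sketch: a custom "button" built from a <div> has to have
// its role, name and keyboard behaviour supplied by hand, whereas a
// standard native component (or a plain HTML <button>) gets the trait,
// hint and keyboard handling for free and only needs a label.
const addToBasket = document.createElement("div");
addToBasket.textContent = "Add";

// Role: tells the screen reader what kind of component this is
// (the equivalent of a native button's built-in trait).
addToBasket.setAttribute("role", "button");

// Name: the label the screen reader announces – the one thing a
// developer must still provide on a standard native component too.
addToBasket.setAttribute("aria-label", "Add to basket");

// Keyboard access: native buttons are focusable and activate on
// Enter/Space; a div does neither unless it is wired up by hand.
addToBasket.tabIndex = 0;
addToBasket.addEventListener("keydown", (event) => {
  if (event.key === "Enter" || event.key === " ") {
    event.preventDefault();
    addToBasket.click();
  }
});

document.body.appendChild(addToBasket);
```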

Landmarks

Many screen reader users tend to navigate using headings rather than landmarks, and users who were aware of landmarks didn’t always bother to use them. The following user didn’t have landmarks set up as an option within the Web Rotor on their iPhone.

I find they’re not particularly used that well – A blind JAWS and iPhone user

I’d have to agree – an issue that affects both desktop and mobile. It’s hard to say why users aren’t making the most of landmarks, but recent research from WebAIM on how screen reader users access the web did suggest usage is on the increase. There is fairly good support for landmarks in screen readers and browsers alike (on both desktop and mobile), so my guess is that we are not implementing them in as useful a way as we could. It’s also likely that not all users know they exist.

It’s important to consider content order and the placement of headings in relation to landmarks, as what is announced by the screen reader can vary. For example:

  • JAWS for Windows announces “Navigation region start” and “Navigation region end”
  • NVDA 2012.3 for Windows announces “Navigation region start” but nothing at the end
  • iOS VoiceOver announces “Landmark start” and “Landmark end”. It fails to identify which landmark it is (in this case ‘navigation’), instead announcing the next item in the content order along with “Landmark start”
  • Android TalkBack announces “Navigation” but nothing at the end

This means that placing appropriate content, such as a heading, straight after the start of the landmark in the code order, or using aria-label, is worth considering in order to make landmarks more helpful. I’ve written a little bit more about this in usable landmarks across desktop and mobile.
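As a rough sketch of those two options (the label text is a made-up example of mine):

```typescript
// Hypothetical sketch of two ways to give a navigation landmark a usable
// name (the "Main menu" label is made up for illustration).
const nav = document.querySelector("nav");

if (nav) {
  // Option 1: aria-label gives the landmark an accessible name, so
  // screen readers that name landmarks can announce something like
  // "Main menu navigation" rather than a bare "navigation region".
  nav.setAttribute("aria-label", "Main menu");

  // Option 2: place a heading first inside the landmark in the code
  // order, so a screen reader that only says "Landmark start" (as iOS
  // VoiceOver does here) announces the heading as the next item and
  // gives the region some context.
  if (!nav.querySelector("h2")) {
    const heading = document.createElement("h2");
    heading.textContent = "Main menu";
    nav.insertBefore(heading, nav.firstChild);
  }
}
```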

Portrait versus landscape

Quite a few disabled users, not just screen reader users, don’t use landscape, and some even lock their screens so they can’t switch into landscape.

I don’t use landscape, the speaker gets covered so I can’t hear, so far as I know other blind people do as well – A blind iPhone user

Aside from covering up the speaker, possible reasons for screen reader users not using landscape might be that they have no need to rearrange screen real estate in order to better read text, watch video or play games. Having said that, this might not be the case for a sighted screen reader user, such as someone with a cognitive impairment, who relies on the speech to help them understand what’s happening on screen. A blind user might also want to minimise the changes in layout and content that sometimes occur when moving from portrait to landscape. It can be frustrating to have content appear or disappear and potentially change the structure of a page.

3 thoughts on “User testing observations with disabled mobile users”

  1. Those reasons for using apps instead of mobile sites are all excellent, but there is one thing that was not mentioned: there are a very large number of totally useless mobile sites. For some reason, they seem to be more interested in showing off their ability to detect your mobile browser than in giving you the information you came to the site for. Information that is on the desktop version of many sites is totally absent from the oversimplified mobile ones. Putting an app in an app store requires some effort, so I guess they think a little before creating an app.

    I wish my attempt to use landmarks for navigation was being rewarded. They sure aren’t useful on most sites, but that may improve.

  2. Hi Sarah – you are right, of course; this is the big elephant in the room. There were a couple of reasons why I didn’t mention the state of accessibility of mobile websites, though.

    Firstly, the testing I’ve seen has been on mobile sites and native apps that were built from the ground up with accessibility in mind (I should have made that clear in the post). While not perfect, they were all fairly accessible, so my interest was in user preference and the reasons behind it.

    Secondly, I didn’t want a debate on the accessibility of mobile websites but rather one around the usability of sites/apps for people with disabilities. In my mind at least, user testing with disabled users should not be an exercise in unearthing accessibility issues – these should already have been tested for and addressed – but an exercise in testing how usable the site/app is for disabled users.

  3. On the app versus site topic, it’s also interesting to ask whether sites can learn from apps.

    Selecting a button/link that moves the focus to the new area of interest seemed to work in both contexts – can we make more of that on sites? For within-page interactions keyboard focus is key, and perhaps we could do similar things across pages, so that even if a new page loads it focuses on the area of interest, assuming that’s known from the last page (see the sketch after this comment).

    It will be interesting to see how far we can push the page-model before it causes confusion.
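A rough sketch of what that cross-page focus handover might look like, assuming the target region is passed along as a URL fragment (a convention made up here for illustration, not something from the testing):

```typescript
// Hypothetical sketch: if the previous page links here with a fragment
// (e.g. /checkout#delivery-options), move focus to that region once the
// new page has loaded, mimicking the way an app lands the user directly
// on the next step.
document.addEventListener("DOMContentLoaded", () => {
  const targetId = window.location.hash.slice(1);
  if (!targetId) return;

  const target = document.getElementById(targetId);
  if (!target) return;

  // Most containers are not natively focusable; tabindex="-1" makes the
  // region programmatically focusable without adding it to the tab order.
  if (!target.hasAttribute("tabindex")) {
    target.setAttribute("tabindex", "-1");
  }
  target.focus();
});
```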
