In 2017, a small group of colleagues and I collaborated on a series of accessibility workshops, delivered as pre-conference sessions at three national conferences: AHEAD, EDUCAUSE, and Accessing Higher Ground. If you were a participant in any of these workshops, you’re about to receive a follow-up survey. This blog post documents my quest for an online tool for conducting that survey. My #1 criterion for choosing a tool is whether it generates accessible output. My #2 criterion is whether the tool is accessible to survey authors with disabilities, though I didn’t specifically evaluate that for this blog post.
To keep things simple, I tested only one question type: Multiple choice with radio buttons.
The first question on my survey is this: “Where did you attend our accessibility workshop?” There are three possible answers: Accessing Higher Ground, AHEAD, and EDUCAUSE. Users are required to select one of the answers.
For this to be fully accessible to screen reader users, the following information should be communicated via their screen reader:
- Each answer
- The question
- That the field is required
- The current state of each radio button (“checked” or “not checked”)
- The number of options, and the user’s position within those options (e.g., “2 of 3”)
If I were to hand-code the survey from scratch using standard HTML markup, my code would look something like this:
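A reconstruction of that markup, based on the five requirements discussed below (the name, id, and value attributes are illustrative):

```html
<fieldset>
  <legend>Where did you attend our accessibility workshop?</legend>
  <input type="radio" name="workshop" id="workshop_1" value="1" required>
  <label for="workshop_1">Accessing Higher Ground</label>
  <input type="radio" name="workshop" id="workshop_2" value="2" required>
  <label for="workshop_2">AHEAD</label>
  <input type="radio" name="workshop" id="workshop_3" value="3" required>
  <label for="workshop_3">EDUCAUSE</label>
</fieldset>
```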
Here again are the five requirements for full accessibility, with a brief explanation of how each is attained using the above markup.
1. Each answer
Each answer (e.g., “Accessing Higher Ground”) is marked up with a <label> element whose for attribute matches the id of the corresponding radio button’s <input> element. This explicitly associates the label with its matching radio button. Screen readers announce the matching label when a radio button has focus, and mouse and touch screen users can click or tap anywhere on the label to select that button (more convenient, since the label is a larger target than the button alone).
2. The question
Web developers often make the answers accessible but overlook the question. And users should never answer “Yes” if they don’t know what they’re agreeing to! The standard method for making the question accessible is to wrap the question in a <legend> element, then wrap that plus the group of radio buttons inside a <fieldset>. With this markup, screen readers announce both legend and label when a radio button receives focus, though they differ in their implementation: some screen readers announce both the legend and the label for each button as the user navigates between the buttons; others announce the legend only once (when the first button in the group receives focus), assume that’s enough, and on subsequent buttons announce only that button’s label.
3. That the field is required
The required attribute was introduced in HTML5. The proper technique for using it with radio buttons is described in the HTML 5.2 spec, Example 22.
To paraphrase: the attribute only needs to appear on one of the radio buttons in the group, but authors are encouraged to add it to all radio buttons in the group to avoid confusion.
4. The current state
If the radio button is correctly coded as a radio button, all screen readers automatically announce whether the current radio button is “checked” or “not checked”.
5. Position within the total
If the group of radio buttons is coded correctly, all screen readers will announce something like “2 of 3”. One exception is JAWS in Internet Explorer 11, but this is probably an IE issue, as JAWS does announce this information in Firefox (tested using JAWS 2018).
How Screen Readers Render Standard HTML
Putting all the pieces together, screen readers typically announce the following information when the first radio button in a group receives focus:
- What this is, i.e., “Radio button”
- The label for the button
- The question (e.g., legend)
- The current state (“checked” or “not checked”)
- Position within the total (e.g., “1 of 3”)
Screen readers vary on the sequence of these items. Also, as noted above, screen readers vary on whether they continue to announce the legend for each button as the user navigates through their choices.
I created a simple survey with one required question using the following tools:
- Survey Monkey
- Google Forms
- Survey Gizmo
Then I tested the output with keyboard alone, NVDA 2017.4 in Firefox 57, JAWS 2018 in Firefox 57 and Internet Explorer 11, VoiceOver in Safari on MacOS Sierra, and VoiceOver in Safari on iOS 11 (using an iPhone X).
The following sections show the code generated by each of the survey tools, edited to just show the relevant markup for accessibility. All tools add a lot of extra <div> and <span> elements plus class attributes to help with styling, but these have little or no impact on accessibility and have been removed here for readability. Also, each of the tools auto-generates name and id attributes – I’ve edited all those so they match my original example.
I really like that Survey Monkey accepts questions and answers as plain text, so authors don’t have to do a lot of clicking in order to create a coded question. Just enter the question and all answers as plain text into one textarea field, and Survey Monkey intelligently breaks that into the appropriate parts and adds the HTML markup. I haven’t evaluated the accessibility of the overall authoring interface for Survey Monkey or any of the other tools, but this feature alone greatly enhances ease of use. Here’s the resulting code:
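A simplified sketch of that markup, reconstructed from the observations below (ids and values are illustrative):

```html
<h4>Where did you attend our accessibility workshop? *</h4>
<fieldset aria-required="true">
  <input type="radio" name="workshop" id="workshop_1" value="1">
  <label for="workshop_1">Accessing Higher Ground</label>
  <input type="radio" name="workshop" id="workshop_2" value="2">
  <label for="workshop_2">AHEAD</label>
  <input type="radio" name="workshop" id="workshop_3" value="3">
  <label for="workshop_3">EDUCAUSE</label>
</fieldset>
```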
With this markup, Survey Monkey is using a heading rather than a legend to identify the question. If screen reader users are jumping through the form using heading navigation (e.g., with the “h” key in JAWS or NVDA), they will land on the question and can then drill deeper using the tab key. However, if they’re using forms mode (navigating through the form using the tab key), they won’t hear the question at all; they would have to exit forms mode and go searching for it.
Also, the main heading of the survey is <h1>, but the heading for each item in the survey is <h4>, skipping <h2> and <h3>. Survey Monkey has various themes available that can be applied to the survey, but these don’t have any effect on HTML structure.
Note also that Survey Monkey is using an alternative method for identifying that the field is required (not the HTML5 required attribute). First, they’ve added a visible asterisk to the question, which is commonly understood to mean “required”. Second, they’re adding aria-required to the fieldset element. According to the ARIA 1.1 spec, this isn’t proper placement for aria-required, and indeed it fails HTML validation. That said, some screen readers do support it: For example, VoiceOver on iOS announces “required” for each radio button, but VoiceOver on MacOS does not. JAWS in Firefox announces “required” when the first radio button receives focus; but NVDA in Firefox doesn’t announce “required” at all.
Another issue with relying on ARIA to identify required fields is that ARIA only affects communication by assistive technologies; it doesn’t affect the form itself, and with Survey Monkey there’s no actual client-side validation for ensuring required fields are completed. Survey Monkey allows the form to be submitted with empty required fields, then returns a new page with an error message. The HTML <title> and main heading <h1> of the new page are the same as on the original page, so there’s no obvious indication to screen reader users that there was a problem. The error message appears in a red font but otherwise isn’t featured prominently (the font size is small and it’s tagged as an <h5>). Note that Survey Gizmo has the same issue, only worse: its error message isn’t even a heading, so it is difficult for screen reader users to discover. (See below for more on Survey Gizmo.)
Here’s the simplified output from Google Forms:
Where did you attend our accessibility workshop? *
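Reconstructed from the observations below, the simplified markup looks roughly like this (all ids, and the exact placement of the answer text, are illustrative):

```html
<div role="listitem">
  <div role="heading" aria-level="2" id="workshop_label"
       aria-describedby="workshop_desc">
    Where did you attend our accessibility workshop?
    <span aria-label="Required question">*</span>
  </div>
  <div id="workshop_desc"></div> <!-- empty; presumably for help text -->
  <div role="radiogroup" aria-required="true" aria-labelledby="workshop_label"
       aria-describedby="workshop_err">
    <content>
      <div role="radio" aria-checked="false" aria-posinset="1" aria-setsize="3"
           aria-describedby="workshop_1_desc">Accessing Higher Ground</div>
      <div role="radio" aria-checked="false" aria-posinset="2" aria-setsize="3"
           aria-describedby="workshop_2_desc">AHEAD</div>
      <div role="radio" aria-checked="false" aria-posinset="3" aria-setsize="3"
           aria-describedby="workshop_3_desc">EDUCAUSE</div>
    </content>
  </div>
  <div role="alert" id="workshop_err"></div> <!-- empty; presumably for error messages -->
</div>
```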
There’s a lot of ARIA markup in this form, which demands closer inspection.
- The <div> that contains the entire question has role="listitem" because the entire survey is a list (further up the tree, there’s a parent <div> with role="list"), and each question is a list item. This may or may not be a good idea. From my perspective, a list stops being a list when the list items reach a certain size, and a survey could easily cross that line.
- The question is an <h2> (thanks to the combination of role="heading" and aria-level="2"). Therefore, screen reader users can easily jump between questions using their heading navigation keys.
- The question has an aria-describedby attribute, the value of which is the id of an empty <div> that’s positioned immediately beneath the question. Presumably this <div> would be populated with supplemental help text for some questions, but not for this one.
- The question has a unique id, which is referenced later (see the next item).
- The group of radio buttons is wrapped in a <div> with role="radiogroup" and an aria-labelledby attribute that references the id of the element containing the question (see the previous item). This would seem to explicitly associate the question with the group of answers, as a legend would otherwise do. If it works, it might be better than a legend because it also serves as a heading, so screen reader users can enjoy the benefits of both.
- The radio group has aria-required="true". This is similar to Survey Monkey’s approach (marking the entire group of buttons as required, rather than each button individually). However, unlike Survey Monkey, Google has placed this attribute on a valid element (Survey Monkey added it to a fieldset, which is not valid, whereas Google is adding it to an element with role="radiogroup", which is valid). Note that the question also includes an asterisk, which is wrapped in a <span> element with aria-label="Required question", so screen readers will read this whenever the question is announced.
- The question, radio group, and each answer all include aria-describedby attributes that point to one or two ids that are either associated with empty <div> elements or are missing altogether. Given the values of the targeted ids (they all contain either “desc” or “err”), I assume these are either supplemental descriptions (e.g., help text) or error messages, and will be added dynamically if needed. I did find one matching empty <div> (with “err” in the name), and it includes role="alert", so if it’s populated at some point with an error message then screen readers will announce it immediately, plus it will be associated with the specific element so it can be rediscovered in an appropriate context even after the initial alert is announced.
- Each radio button is a <div> with role="radio", and several ARIA attributes that communicate the state and properties of the radio button, such as aria-checked="false", aria-posinset="1" (which defines the element’s position in the set of radio buttons), and aria-setsize="3" (which defines the total number of radio buttons in the set).
I confess that I’m impressed by this markup. It’s painstakingly thorough, and it would very nearly validate, were it not for the aria-describedby attributes targeting non-existent elements. Oh, and there’s also that <content> element, which doesn’t exist in HTML. It might be a custom element, although given the context I’m not sure why it would be needed, as opposed to simply wrapping all the radio buttons in a <div>. I won’t go so far as to criticize it, though, since I don’t understand its function.
The question is: How do screen readers render this?
The answer: Not bad! JAWS, NVDA, and VoiceOver on iOS all announce the question when the first radio button receives focus, along with all the expected info about the individual button. This is essentially the same as if the question were wrapped in a legend, but it’s also a heading so users can quickly jump from question to question using their heading navigation techniques.
Each of these screen readers also informs the user that an answer is required, although they do so differently. JAWS and VoiceOver (iOS) both announce “required” with each radio button, whereas NVDA announces “required radio grouping” when the first button receives focus.
The only screen reader I tested that doesn’t play nicely with all this is VoiceOver on MacOS. If users navigate through the page in a manner that lands on the question first (e.g., navigating by headings, or reading the entire page element by element), they will hear “Required question” since that’s the aria-label associated with the asterisk that’s appended to the question text. However, the question is never read in association with any of the radio buttons, so if a user navigates directly to the buttons (e.g., using the tab key or using the Form Controls list in the Rotor), they will not hear the question, nor will they hear that the question is required.
Survey Gizmo boasts of being the most accessible survey tool on the market. Here’s their output:
Where did you attend our accessibility workshop? * This question is required
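A simplified sketch of their markup, reconstructed from the discussion below (ids are illustrative):

```html
<div>Where did you attend our accessibility workshop? *
  <span class="sg-access-helper">This question is required</span>
</div>
<input type="radio" name="workshop" id="workshop_1" value="1"
       aria-label="Accessing Higher Ground">
<label for="workshop_1" aria-hidden="true">Accessing Higher Ground</label>
<input type="radio" name="workshop" id="workshop_2" value="2"
       aria-label="AHEAD">
<label for="workshop_2" aria-hidden="true">AHEAD</label>
<input type="radio" name="workshop" id="workshop_3" value="3"
       aria-label="EDUCAUSE">
<label for="workshop_3" aria-hidden="true">EDUCAUSE</label>
```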
I don’t see anything in this markup that’s worthy of heavy accessibility praise. Each label element is explicitly associated with the relevant input, but that’s true of all the other tools as well. And the code in this case is a little bizarre. The id of the <input> matches the for attribute of the <label>, which links the two elements; that alone would be enough to make the answers accessible. However, that same label is hidden from screen readers with aria-hidden="true", and its inner text is replicated in an aria-label attribute on the <input>. I have no idea why they did this, unless maybe it’s intended as a fallback for older browsers or screen readers that don’t support ARIA.
As it turns out, though, this markup causes major problems for VoiceOver users on iOS. If I try to navigate through Form Controls using the Rotor, none of the radio buttons are recognized as form controls. The only form control in my survey is the Next button, which is announced as the “right pointing triangle” button. If instead I try to navigate sequentially through all items on the page using the swipe-right gesture, each radio button is identified as “medium wide circle”, and the label as plain text. There is no indication that either of these items is a radio button or any other clickable item. If I double tap on either item, doing so selects that item but also automatically submits the form. VoiceOver on iOS is the only screen reader among those I tested that exhibited these problems, but they are nevertheless significant.
All other screen readers can handle the idiosyncratic radio button and label markup. However, Survey Gizmo is the only survey tool among those tested that includes no semantic markup whatsoever for the question. With all other survey tools, the question is either marked up as a heading or a legend, both of which have advantages. However, in this case the question is neither – it’s just a <div>, and is not explicitly associated in any way with the set of radio buttons.
Also, the only indication in the code that this is a required field is the asterisk, which is accompanied by screen-reader-only text (wrapped in a span with class="sg-access-helper", which positions it off screen, out of sight of sighted users). Survey Gizmo, like Survey Monkey, is not using the HTML5 required attribute; it relies entirely on server-side validation after the form is submitted with errors, and when the user is returned to the same page, it isn’t clear what just happened and the error message is hard to find.
Survey Gizmo’s one unique accessibility feature is a “Skip to first question on page” same-page link at the top of the page, but screen readers can already do that easily enough (as can sighted keyboard users), so the feature serves little purpose.
One known problem with Qualtrics is that, by their own admission, not every question type is accessible, which limits authors’ ability to use the full functionality of the tool. Fortunately for me, they claim that multiple choice questions are accessible, so I included Qualtrics in this test. Here’s their output:
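A simplified sketch of their markup, reconstructed from the discussion below (ids are illustrative, and I’ve assumed aria-hidden as the mechanism for hiding the empty labels):

```html
<fieldset>
  <legend>Where did you attend our accessibility workshop?</legend>
  <input type="radio" name="workshop" id="workshop_1" value="1"
         aria-labelledby="workshop_1_label">
  <label for="workshop_1" aria-hidden="true"></label> <!-- empty and hidden -->
  <label for="workshop_1" id="workshop_1_label">Accessing Higher Ground</label>
  <input type="radio" name="workshop" id="workshop_2" value="2"
         aria-labelledby="workshop_2_label">
  <label for="workshop_2" aria-hidden="true"></label>
  <label for="workshop_2" id="workshop_2_label">AHEAD</label>
  <input type="radio" name="workshop" id="workshop_3" value="3"
         aria-labelledby="workshop_3_label">
  <label for="workshop_3" aria-hidden="true"></label>
  <label for="workshop_3" id="workshop_3_label">EDUCAUSE</label>
</fieldset>
```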
This is the only tool of those I tested that uses <legend>, which I’m pleased with. However, I’m puzzled by the presence of two labels for each input. They seem to be taking a play out of Survey Gizmo’s playbook: explicitly associating a <label> element with a form field, presumably for the benefit of screen reader users, then hiding that same label from screen readers. Their approach to replacing the hidden label is also similar to Survey Gizmo’s: Survey Gizmo adds an aria-label attribute to the radio button, whereas Qualtrics adds an aria-labelledby attribute, which points to a second label element that is also associated with the same input field. So the radio button has two labels explicitly associated with it: one empty and hidden from screen readers, and another containing content and visible to everyone, including screen readers. Screen readers would seem to be able to handle this; when faced with two labels, surely they won’t choose the one that’s empty and hidden. But will they look past it and use the second label, or will the empty, hidden first label trigger an error that prevents the second label from working properly? Even if all screen readers can handle it today, it feels like a hack, which makes me nervous about depending on it long-term.
This does work reasonably well with all the screen readers I tested, with the possible exception of VoiceOver on MacOS. I tested VoiceOver in Safari, and Safari itself has another problem: it seems to be impossible, even with VoiceOver not running, to tab to the radio buttons. I tested this in MacOS Sierra 10.12.6 and MacOS High Sierra 10.13.2. In both cases, pressing Tab jumps directly to the Next button, bypassing the radio buttons. I didn’t observe this problem in any other browser (and yes, “Press Tab to highlight each item on a webpage” is checked in Safari’s Advanced settings).
With VoiceOver running, its built-in visible focus indicator doesn’t follow along when navigating between the radio buttons: it remains fixed on some outer container, even as individual radio buttons are verbally announced. This isn’t a huge problem, although it could be for users with low vision who depend on both modalities. Otherwise it just reinforces that something ain’t quite right.
Also, Qualtrics has chosen to abandon the standard visual appearance of radio buttons, hiding the buttons altogether and showing each label in a shaded box. From my perspective, standard, recognizable radio buttons and checkboxes communicate important information about the question (e.g., “Check only one” vs. “Check all that apply”). Therefore, whatever Qualtrics has gained in style (if anything) is offset by loss of information.
Of all the tools I tested, I’m most impressed with Google Forms and will be using that for my follow-up survey to workshop participants. Of course, all opinions are my own, and do not necessarily reflect the opinions of my employer or funding sources. Also, this is all based on a single question type, so it’s by no means a comprehensive evaluation. Also, I welcome opposing viewpoints! If you have them, or if you have insights into any of the code I’ve deemed funky, please share in the Comments.