My Post-CSUN Comparison of Web Accessibility Checkers

Two weeks have now passed since the 2016 CSUN Conference, and I'm still inspired by many of the bright ideas that were generated from sessions, conversations, tweets, etc. and considering how to apply them.

I gave two sessions at CSUN, What's New With Able Player? and Web Accessibility 101 with Accessible University 3.0. In the second of these sessions, I modeled how to use our Accessible University (AU) demo site in an interactive training session on web accessibility. The AU site consists of three core pages: a "before" page with at least 18 accessibility problems, an "after" page with those problems fixed, and an intermediary page that describes the problems and solutions.

One of the sessions I intended to attend, but was locked out of due to a capacity crowd, was Luis Garcia's Automated Testing Tool Showdown. Fortunately, Luis shared his slide deck. After looking over his findings, I found myself wondering how the various accessibility checkers would fare with a page like AU's "before" page, with its at least 18 known accessibility problems. I decided to find out.

Continue reading

YouTube Captions Revisited: Various APIs and Services

I did some work over the weekend to improve Able Player's support for YouTube videos. The changes will be available in the next major release of Able Player, which I'll be unveiling in my session at CSUN.

The biggest challenge with playing YouTube videos in a third-party player is getting access to captions. I described the issues in a previous blog post, Handling Captions via the YouTube Player API. The biggest problem with the YouTube IFrame API, which is used to embed a YouTube player in a web page, is that the API exposes captions and subtitles only after the onApiChange event is fired, which doesn't happen until the video starts playing. This makes it very difficult to construct the player, as we don't know whether to include a CC button, or whether clicking on that button should display a pop-up menu for selecting available languages.

The workaround I used in Able Player was to autostart the video and play it for just long enough to trigger the onApiChange event, then reset the video back to the start and collect the caption data that had been exposed during the brief moment of playback. This is a clumsy hack, and I've been looking for a better way.
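In condensed form, the hack looks roughly like the sketch below. This is not Able Player's actual code: the element id and video id are placeholders, and the `captions` module name passed to `getOption` is the commonly used (but only loosely documented) one.

```javascript
// Map YouTube's caption track objects to simple menu entries.
function buildCaptionMenu(tracklist) {
  return (tracklist || []).map(function (track) {
    return { label: track.displayName, lang: track.languageCode };
  });
}

// Browser-only wiring: requires the YouTube IFrame API to be loaded.
if (typeof YT !== 'undefined') {
  var player = new YT.Player('youtube-player', {   // placeholder element id
    videoId: 'xxxxxxxxxxx',                        // placeholder video id
    events: {
      onReady: function () {
        // Autostart (muted) just long enough to trigger onApiChange
        player.mute();
        player.playVideo();
      },
      onApiChange: function () {
        // The captions module has loaded; collect the exposed track list
        var tracks = player.getOption('captions', 'tracklist');
        var menu = buildCaptionMenu(tracks);
        // Reset the video and restore audio
        player.pauseVideo();
        player.seekTo(0, true);
        player.unMute();
        // ... use `menu` to decide whether to render a CC button
      }
    }
  });
}
```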

Continue reading

Happy Holidays, Stevie Wonder, & Logic Pro X Accessibility

You may have seen the new holiday commercial from Apple featuring Stevie Wonder and Andra Day singing "Someday at Christmas", a beautiful and poignant holiday song for the times, with lines like:
Someday at Christmas
men won't be boys 
playing with bombs
like kids play with toys
Someday at Christmas
we'll see a land 
with no hungry children,
no empty hands
Stevie Wonder using Logic Pro X
Stevie is using Logic Pro X on a MacBook Pro to do the mix (some online media sources have misreported the software as GarageBand; it's not). The commercial opens with VoiceOver announcing: "Track 5 Vocals. Track 3 Piano." At that point Stevie seems to perform a two-finger command, then starts playing the piano. Presumably he's recording his piano onto Track 3. Kudos to Apple for presenting accessibility so casually here. In a 90-second ad, only three seconds feature VoiceOver, and they never specifically mention accessibility. Stevie Wonder just happens to be doing the recording and mixing. It's a passing reference, no big deal. And it shouldn't be. It's just the way things are.

Continue reading

reCAPTCHA Accessibility reVISITED

It’s December 2015, one year since Google introduced its No CAPTCHA reCAPTCHA in a Google Online Security Blog post. As Google explained then, "On websites using this new API, a significant number of users will be able to securely and easily verify they’re human without actually having to solve a CAPTCHA. Instead, with just a single click, they’ll confirm they are not a robot."

The morning after Google’s announcement, Derek Featherstone was first to post an assessment of The accessibility of Google’s No CAPTCHA, and his initial response was one of "surprise, and maybe even a reserved delight."

However, the response throughout the accessibility community was not all positive, as reported in Adrian Roselli’s blog post ReCAPTCHA Reboot, as well as in the comments on the WebAXE blog and posts to the WebAIM list.

In a nutshell, here’s how Google reCAPTCHA works:

First, Google harnesses all sorts of information about the user and analyzes it to determine whether it believes the user is human. If it can make that determination with confidence, it provides a CAPTCHA that consists of a simple checkbox labeled "I’m not a robot."

screen shot of simple CAPTCHA with a single checkbox

Next, if Google is not confident of the user’s humanness, it provides a more challenging CAPTCHA, such as the one shown here:

screen shot of CAPTCHA showing a grid of nine photos, with the prompt: Select all images with sandwiches
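For context, the client-side markup for this widget, as Google documents it, is just a script include plus a placeholder div; the risk analysis and any fallback challenge happen on Google's side. The site key and form action below are dummy values.

```html
<!-- Load the reCAPTCHA API; the widget renders into the div below -->
<script src="https://www.google.com/recaptcha/api.js" async defer></script>
<form action="/submit" method="post">
  <!-- data-sitekey is a placeholder; each site registers its own key -->
  <div class="g-recaptcha" data-sitekey="YOUR_SITE_KEY"></div>
  <input type="submit" value="Submit">
</form>
```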

From my perspective, there’s no reason why the single checkbox should be inaccessible. In fact, it’s coded well for accessibility, and from my tests it seems to work well with screen readers, speech input, and keyboard-only access.

However, things get more problematic if the secondary CAPTCHA is needed. But even here, I think Google has made significant improvements to this interface.

What is your experience? Please help me to collect data by filling out my ReCAPTCHA Test Form. I’m hoping to capture lots of data from the crowd and analyze the trends. I’ll share the results in a few weeks.

My Experience

Here’s an analysis of how reCAPTCHA is coded, complemented with my experience using JAWS 17 in IE11. I also tested with NVDA 2015.4 in Firefox 42 and VoiceOver in Safari on both Mac OS X (El Capitan) and iOS 9, and got very similar results.

Continue reading

Which presidential candidates, senators, and members of Congress are not captioning their videos?

Every day people watch hundreds of millions of hours of video on YouTube, with 300 hours of new video uploaded every minute (source: YouTube Statistics). Very few of these videos are captioned, which means huge volumes of information are being shared by our society while people who are deaf or hard of hearing are being excluded. An estimated four million adults in the United States age 18 and over report having a hearing-related disability. If a person is running for the highest elected office in the United States, I expect them to be knowledgeable about the need for closed captions and/or to care that so many people in the United States are being excluded from important information.

So, which of the 2016 presidential candidates are captioning their videos? Today I used YouTube Caption Auditor (YTCA) to find out. YTCA is a tool that I developed and recently released as an open source project on GitHub. It uses the YouTube Data API to collect data on any YouTube channel and generate a report that includes the following information for each channel:

  • Number of videos
  • Total duration of all videos
  • Number of videos with captions (does not include YouTube's machine-generated captions)
  • Percent of videos that are captioned
  • Mean number of views per video (to get a sense of how popular the videos are)
  • Total duration of uncaptioned videos
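
Once the per-video data is in hand, the report fields above reduce to simple arithmetic over the video list. A minimal sketch (the function name and record fields are illustrative, not YTCA's actual schema):

```javascript
// Summarize a channel's videos into the report fields listed above.
// Each video record: { duration: seconds, captioned: bool, views: number }
function summarizeChannel(videos) {
  var total = videos.length;
  var captioned = videos.filter(function (v) { return v.captioned; });
  var sum = function (list, f) {
    return list.reduce(function (acc, v) { return acc + f(v); }, 0);
  };
  return {
    videoCount: total,
    totalDuration: sum(videos, function (v) { return v.duration; }),
    captionedCount: captioned.length,
    percentCaptioned: total ? (captioned.length / total) * 100 : 0,
    meanViews: total ? sum(videos, function (v) { return v.views; }) / total : 0,
    uncaptionedDuration: sum(videos, function (v) {
      return v.captioned ? 0 : v.duration;
    })
  };
}
```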

I included all candidates who, according to The New York Times, are officially running for president as of Labor Day 2015, and who have easily discoverable YouTube channels (I couldn't find current channels for Lincoln Chafee or Jim Gilmore). I found two channels for Donald Trump: The Donald Trump channel only has four videos but seems to be his official 2016 presidential channel, whereas Trump is the far more active channel with 172 videos, but covers all things Trump with very little campaign content. I included the first of these channels in the analysis, but ran a separate analysis on the second channel just in case it revealed anything noteworthy (spoiler alert: none of Trump's videos on either channel are captioned).

Continue reading