reCAPTCHA Accessibility Revisited

It’s December 2015, one year since Google introduced its No CAPTCHA reCAPTCHA in a Google Online Security Blog post. As Google explained then, "On websites using this new API, a significant number of users will be able to securely and easily verify they’re human without actually having to solve a CAPTCHA. Instead, with just a single click, they’ll confirm they are not a robot."

The morning after Google’s announcement, Derek Featherstone was the first to post an assessment, The accessibility of Google’s No CAPTCHA, and his initial response was one of "surprise, and maybe even a reserved delight."

However, the response throughout the accessibility community was not all positive, as reported in Adrian Roselli’s blog post ReCAPTCHA Reboot, as well as in the comments on the WebAXE blog and posts to the WebAIM list.

In a nutshell, here’s how Google reCAPTCHA works:

First, Google harnesses all sorts of information about the user and analyzes it to determine whether it feels the user is human. If it can make that determination with confidence, it provides a CAPTCHA that consists of a simple checkbox with the label "I’m not a robot."

screen shot of simple CAPTCHA with a single checkbox

Next, if Google is not confident of the user’s humanness, it provides a more challenging CAPTCHA, such as the one shown here:

screen shot of CAPTCHA showing a grid of nine photos, with the prompt: Select all images with sandwiches

From my perspective, there’s no reason why the single checkbox should be inaccessible. In fact, it’s coded well for accessibility, and in my tests it works well with screen readers, speech input, and keyboard-only navigation.
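For context, the general pattern behind an accessible custom checkbox looks something like the sketch below. This is the standard ARIA checkbox technique, not Google's actual source; the function name and wiring are my own illustration.

```javascript
// Sketch of the general ARIA checkbox pattern (illustrative, not
// Google's actual code): role and aria-checked expose the widget's
// state to screen readers, tabindex puts it in the keyboard tab
// order, and the keydown handler makes space toggle it.
function makeAccessibleCheckbox(el) {
  el.setAttribute('role', 'checkbox');
  el.setAttribute('aria-checked', 'false');
  el.setAttribute('tabindex', '0');

  const toggle = () => {
    const checked = el.getAttribute('aria-checked') === 'true';
    el.setAttribute('aria-checked', String(!checked));
  };

  el.addEventListener('click', toggle);
  el.addEventListener('keydown', (e) => {
    if (e.key === ' ' || e.key === 'Spacebar') {
      e.preventDefault(); // space toggles; the page shouldn't scroll
      toggle();
    }
  });
}
```

With all three pieces in place, a screen reader announces the element as a checkbox and reports its checked state, which is consistent with how the widget behaved in my tests.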

However, things get more problematic if the secondary CAPTCHA is needed. But even here, I think Google has made significant improvements to this interface.

What is your experience? Please help me to collect data by filling out my ReCAPTCHA Test Form. I’m hoping to capture lots of data from the crowd and analyze the trends. I’ll share the results in a few weeks.

My Experience

Here’s an analysis of how reCAPTCHA is coded, complemented with my experience using JAWS 17 in IE11. I also tested with NVDA 2015.4 in Firefox 42 and VoiceOver in Safari on both Mac OS X (El Capitan) and iOS 9, and got very similar results.

Continue reading

#a11y rocks!

My song Man with Small F (The Inaccessible PDF Song), originally released on my Flow Theory Flavors album, is now featured on a compilation album called A11y Rocks!

#A11y Rocks! logo

Man with Small F features a screen reader on vocals, trying but failing to make sense of a PDF document that wasn't created properly for accessibility. There's more of a back-story, plus lyrics, on the Man With Small F Liner Notes page.

Screen readers are tools used by blind computer users, enabling them to access computer-based information and applications via synthesized speech rather than a screen. The most popular screen reader is a product called JAWS (Job Access with Speech), which costs consumers over $1000, plus hundreds more for each major upgrade, released about once per year. For decades blind computer users (the most disproportionately underemployed of all minority groups) have had to pony up or be left behind.

But now there's a new kid on the block, NVDA (NonVisual Desktop Access). It's an open source screen reader that was developed by a couple of blind guys in Australia, and is available for free. All proceeds from the sale of A11y Rocks! go to accessibility-related causes, including NV Access, to support continued development of NVDA. The album is only £3 (under $5 at today's exchange rate) and you get a dozen cool tunes in addition to my own, plus you'll be supporting a worthy and much-needed project.

Check out the A11y Rocks! website for additional info. And thanks for supporting accessibility! Rock on!

Which presidential candidates, senators, and members of Congress are not captioning their videos?

Every day people watch hundreds of millions of hours of video on YouTube, with 300 hours of new video uploaded every minute (source: YouTube Statistics). Very few of these videos are captioned, which means huge volumes of information are being shared by our society while people who are deaf or hard of hearing are excluded. An estimated four million adults in the United States age 18 and over report having a hearing-related disability. If a person is running for the highest elected office in the United States, I expect them to be knowledgeable about the need for closed captions, and to care that so many people in the United States are being excluded from important information.

So, which of the 2016 presidential candidates are captioning their videos? Today I used YouTube Caption Auditor (YTCA) to find out. YTCA is a tool that I developed and recently released as an open source project on GitHub. It uses the YouTube Data API to collect data on any YouTube channel and generate a report that includes the following information for each channel:

  • Number of videos
  • Total duration of all videos
  • Number of videos with captions (does not include YouTube's machine-generated captions)
  • Percent of videos that are captioned
  • Mean number of views per video (to get a sense of how popular the videos are)
  • Total duration of uncaptioned videos
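The numbers in such a report boil down to simple aggregation once the video metadata has been fetched. Here's a minimal sketch of that summary step; the field names (`durationSeconds`, `hasCaptions`, `viewCount`) are illustrative placeholders rather than YTCA's actual schema, and fetching the data from the YouTube Data API is assumed to happen elsewhere.

```javascript
// Sketch of a YTCA-style per-channel summary. Assumes each video
// object has hypothetical fields durationSeconds, hasCaptions (true
// only for manually authored captions; auto-captions are excluded
// upstream), and viewCount. Field names are illustrative.
function summarizeChannel(videos) {
  const count = videos.length;
  const totalDuration = videos.reduce((sum, v) => sum + v.durationSeconds, 0);
  const captioned = videos.filter((v) => v.hasCaptions);
  const uncaptionedDuration = videos
    .filter((v) => !v.hasCaptions)
    .reduce((sum, v) => sum + v.durationSeconds, 0);
  const totalViews = videos.reduce((sum, v) => sum + v.viewCount, 0);

  return {
    videoCount: count,
    totalDuration,                      // seconds, across all videos
    captionedCount: captioned.length,
    percentCaptioned: count ? (captioned.length / count) * 100 : 0,
    meanViews: count ? totalViews / count : 0,
    uncaptionedDuration,                // seconds of inaccessible video
  };
}
```

A channel with no captioned videos, like the ones called out below, would simply report `percentCaptioned: 0` with its entire duration counted as uncaptioned.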

I included all candidates who, according to The New York Times, were officially running for president as of Labor Day 2015 and who have easily discoverable YouTube channels (I couldn't find current channels for Lincoln Chafee or Jim Gilmore). I found two channels for Donald Trump: the Donald Trump channel has only four videos but seems to be his official 2016 presidential channel, whereas Trump is far more active with 172 videos but covers all things Trump with very little campaign content. I included the first of these channels in the analysis, but ran a separate analysis on the second channel just in case it revealed anything noteworthy (spoiler alert: none of Trump's videos on either channel are captioned).

Continue reading

Relax, Accessibility is Easy!

I received two ads this week from accessibility-related vendors, one in the mail and one via Facebook on my phone. The one in the mail was from Nuance and was an ad for Dragon Professional, the speech recognition product that enables people to dictate documents and use their computers hands-free. The ad features a very comfortable man in a dress shirt, reclining in a comfortable chair in an all-white office with his hands locked casually behind his head.

Nuance Dragon Professional ad, featuring relaxed man #1

The second ad was from 3Play Media, a company that provides video captioning solutions. Their ad, like the Nuance ad, features a very comfortable man in a dress shirt, reclining in a comfortable chair on the beach with his hands locked casually behind his head, gazing out across a peaceful water/mountain landscape as the sun sets.

3PlayMedia ad, featuring relaxed man #2 with text overlay: RELAX. We're taking care of it.

At first I thought the man pictured in these ads was the same stock photo pasted onto two different backdrops. However, on closer inspection I see that the indoor Nuance guy is wearing a light blue dress shirt and a wristwatch, whereas the outdoor 3Play guy is wearing a white dress shirt and no wristwatch. Also the outdoor 3Play guy is extending his right pinky finger ever-so-slightly, not true of the indoor Nuance guy.

In any case, both guys are extremely relaxed. And for both, the reason they're so relaxed is that they're using accessibility-related products.

As someone who works in the accessibility field, creating, deploying, or using solutions just like the ones in these ads, I'm imagining myself now in that same pose. I am that guy, a very comfortable man in a T-shirt, reclining in a comfortable chair with my hands locked casually behind my head, soaking in the warm vibrant hues of the setting sun. On my ad, the overlay says:

RELAX. Everything is accessible.

Handling Captions via the YouTube Player API

I rolled out a new version of Able Player (2.2.1) over the weekend, and it now supports YouTube captions and subtitles. It was already possible to use Able Player to play YouTube videos, but until now we had relied entirely on YouTube to handle captions on its own, which isn't necessarily intuitive or convenient for users. If a user has turned captions on via the YouTube website, they get captions in embedded YouTube players for any video that has captions (automated captions don't count). Conversely, if they've turned captions off on YouTube, they won't see captions in an embedded player, even if the video has captions available. This preference is also browser-specific, so it only tracks the user's current caption setting within the current browser.

Ideally, users would have the flexibility to toggle YouTube's captions on and off and change the language of subtitles from any player, not just on the YouTube website. Based on my reading of the YouTube API Reference I hadn't thought it was possible to control captions from an external player, but it turns out the YouTube API has undocumented secrets, which I thought I'd document here to save other developers some headaches.

First, here are some relevant links:

Next, here's what you won't find in the YouTube API documentation...
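As a rough preview of the territory: the IFrame player exposes an undocumented captions module through its `loadModule`, `setOption`, `getOption`, and `unloadModule` methods. The sketch below shows how external caption controls can be wired to it; since Google doesn't document these calls, treat the method names and option keys as observed behavior that could change without notice.

```javascript
// Sketch: controlling YouTube captions from an external player via
// undocumented IFrame API calls. 'player' is an instance of YT.Player.
// None of this appears in Google's API reference, so it may break in
// future player versions.
function showCaptions(player, languageCode) {
  // Load the captions module, then select a track by language.
  player.loadModule('captions');
  player.setOption('captions', 'track', { languageCode: languageCode });
}

function hideCaptions(player) {
  // Unloading the module removes the caption display entirely.
  player.unloadModule('captions');
}

function getCaptionLanguages(player) {
  // The tracklist option returns the available caption tracks,
  // e.g. [{ languageCode: 'en', ... }, { languageCode: 'es', ... }].
  const tracks = player.getOption('captions', 'tracklist') || [];
  return tracks.map((t) => t.languageCode);
}
```

In a player UI, `getCaptionLanguages` can populate a subtitle language menu, with `showCaptions` and `hideCaptions` bound to the captions toggle button.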

Continue reading