Is it Possible to Automate Accessibility Testing?

Accessibility for Visual Disability: this blog discusses the challenges of automating the accessibility testing of web applications made accessible for people with vision-related issues, such as blindness and low vision.

Starting with a brief overview of accessibility, I will discuss the challenges we faced, and then end with our solution.

As per W3C, Web accessibility means that people with disabilities can use the Web. More specifically, Web accessibility means that people with disabilities can perceive, understand, navigate, and interact with the Web, and that they can contribute to the Web.

W3C started the Web Accessibility Initiative (WAI) to lead the Web to its full potential to be accessible, enabling people with disabilities to participate equally on the Web. Web accessibility testing validates that a website is accessible to people with various levels of disability.

Businesses are making their websites accessible to avoid legal issues, expand their business (an approximately $1 trillion market), and remove inequality among people with various levels of ability. Tesco invested £35,000 to make their website accessible and generated £1.5 million in a year from online sales to disabled people in Europe.

Broadly, disabilities can be grouped as Sensory (vision, hearing), Physical (limited hand movement, paralysis, etc.), and Cognitive (dyslexia, slow processing of information, etc.).

For people with vision problems such as low vision or blindness, there are assistive screen reading tools such as JAWS and NVDA. These tools read the web content aloud; the end user listens and, with the help of the keyboard, can interact with the website. The Tab key, arrow keys, Enter key, Shift, Ctrl, Alt, and the space bar are the most used keys for navigation.

Testing a website for vision accessibility is a two-step process. Step 1: use free tools where you provide the URL of your website, and the tool generates a report showing how accessible the website is; then take appropriate action. Step 2: manual test engineers imitate blind users, listen to the web content for correctness, and test the navigation and functionality using the keyboard.

Listening to the content and then repetitively verifying what you see on the screen is a monotonous and boring task for manual test engineers. The resulting disorientation leaves room to miss vision accessibility issues during regression testing. Keeping this in mind, for the past few days my colleague and I have been trying to automate accessibility testing.

The foremost challenge was that we could not find anything on the web showing that anybody had tried to automate screen readers. The next biggest challenge was how to verify whether the screen readers are reading the content correctly. Another technical challenge is that screen reader tools do not accept the keyboard shortcut inputs sent by various paid/open-source tools such as QTP, SilkTest, TestComplete, Selenium, AutoIt, Robot API, etc. These keyboard shortcuts help disabled people navigate and use the functionality of the web page.

The solution: we created an object repository which contains all the objects, their IDs, the specific attribute which JAWS reads, and the expected content. We were confident that if the right content is set in the right property of an object, JAWS is going to read it correctly. We also found during our R&D that JAWS reads the ARIA labels first. So in cases where development is at an initial stage, I would recommend ensuring that the development team enters content in the ARIA labels associated with each object. This content will be read by JAWS when the user moves focus onto the object.
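The object repository described above can be sketched as a simple mapping from element ID to the attribute JAWS reads and the content we expect it to announce. This is a minimal illustration in Python; the element IDs, attribute names, and expected texts are hypothetical, not taken from our actual test website.

```python
# Minimal sketch of an object repository for screen-reader verification.
# Each entry maps an element ID to the attribute JAWS reads and the
# content we expect it to read. All IDs and texts here are hypothetical.
object_repository = {
    "search-btn": {"attribute": "aria-label", "expected": "Search the site"},
    "user-name":  {"attribute": "aria-label", "expected": "Enter your user name"},
    "logo-img":   {"attribute": "alt",        "expected": "Company logo"},
}

def verify_object(element_id, actual_value):
    """Compare the attribute value found on the page against the
    expected content stored in the repository."""
    entry = object_repository[element_id]
    return actual_value == entry["expected"]
```

A test then becomes a simple comparison: fetch the attribute value from the page and call `verify_object` for each ID in the repository.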

What is ARIA? (http://en.wikipedia.org/wiki/WAI-ARIA) WAI-ARIA describes how to add semantics and other metadata to HTML content in order to make user interface controls and dynamic content more accessible.
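Since an `aria-label` is ordinary HTML metadata, the labels on a page can be collected with a standard parser. A hedged sketch using only the Python standard library (the markup below is invented for the example):

```python
from html.parser import HTMLParser

class AriaLabelCollector(HTMLParser):
    """Collect the aria-label of every element that declares one."""
    def __init__(self):
        super().__init__()
        self.labels = {}  # element id -> aria-label text

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "aria-label" in attrs:
            # Fall back to the tag name when the element has no id.
            self.labels[attrs.get("id", tag)] = attrs["aria-label"]

# Hypothetical markup for the example:
html = '<button id="close-btn" aria-label="Close the dialog">X</button>'
collector = AriaLabelCollector()
collector.feed(html)
print(collector.labels)  # {'close-btn': 'Close the dialog'}
```

In a real run, the page source would come from the browser rather than a hard-coded string, and the collected labels would populate the object repository.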

In our case, the test website was already developed, so instead of asking the development team to add ARIA labels for all objects, we collected all the objects and their content in the specific attributes which JAWS reads. This way we created our object repository and automated the screen reader checks. For navigation, we currently use the Tab key, arrow keys, Enter key, and space bar. With these keys we are able to check all objects, the content for JAWS, and the functionality.
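The Tab-key walk over the page can be simulated without a browser: visit the focusable elements in tab order and, for each, check the attribute value JAWS would announce against the repository. A minimal sketch (the tab order, element data, and expected texts are invented for the example):

```python
# Simulate pressing Tab through focusable elements in order, checking
# the content a screen reader would announce for each. The elements
# and expected texts below are hypothetical examples.
page_elements = [
    {"id": "nav-home",  "aria-label": "Go to home page"},
    {"id": "search",    "aria-label": "Search the site"},
    {"id": "login-btn", "aria-label": "Log in"},
]

expected = {
    "nav-home":  "Go to home page",
    "search":    "Search the site",
    "login-btn": "Log in",
}

def walk_tab_order(elements, expected_texts):
    """Return the IDs of elements whose announced text does not match."""
    mismatches = []
    for element in elements:          # each iteration = one Tab press
        announced = element.get("aria-label", "")
        if announced != expected_texts.get(element["id"]):
            mismatches.append(element["id"])
    return mismatches

print(walk_tab_order(page_elements, expected))  # [] when everything matches
```

An empty result means every focus stop announced the expected content; any IDs returned are candidates for a vision accessibility defect.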

I look forward to hearing from you if you have suggestions or queries.
