
Accessibility Testing

dcmouyard.github.io/accessibility-testing

Dan Mouyard

Senior Interface Engineer

dmouyard@forumone.com
@dcmouyard

Test Results

Types of Testing

When to Test

Test Results

Testing should gather the data necessary to
understand, prioritize, and fix problems.

List specific issues, with information that is
accurate, descriptive, and actionable.

Not all accessibility issues are equal.

Prioritize Accessibility Issues

Severity of Impact on Users

 
 

 

Give priority to fixes that have a high positive impact on users.
 

 

High: Users will be unable to perform important tasks or understand important content.

 

Medium: Users will be able to perform and understand, but with difficulty.

 

Low: Users will be inconvenienced, but still able to accomplish all tasks.

Effort Required to Fix

 

 

Knock out easier fixes before more difficult ones.

 

Gain quick improvements with minimal effort.

 

Get more accessible in less time.

Location of Problems

 

 

Accessibility issues that don’t impact anyone aren’t issues.

 

Prioritize issues found on high traffic pages.

Volume of Repeated Issues

Multiple issues can be caused by the same code.

Secondary Benefits

Some accessibility fixes improve usability for all users.
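The prioritization factors above (severity, effort, traffic, repetition) can be combined into a single ranking. This is a minimal sketch; the weights, formula, and issue names are illustrative assumptions, not part of the talk.

```python
from dataclasses import dataclass

# Hypothetical severity weights -- illustrative, not from the presentation.
SEVERITY_WEIGHT = {"high": 3, "medium": 2, "low": 1}

@dataclass
class Issue:
    name: str
    severity: str      # "high", "medium", or "low"
    effort: int        # estimated hours to fix
    page_views: int    # traffic on the affected pages
    occurrences: int   # how often the same code flaw repeats

def priority(issue: Issue) -> float:
    """Higher score = fix sooner: big user impact, wide reach, low effort."""
    impact = SEVERITY_WEIGHT[issue.severity] * issue.page_views * issue.occurrences
    return impact / issue.effort

# Hypothetical example issues.
issues = [
    Issue("missing form labels", "high", effort=2, page_views=50_000, occurrences=12),
    Issue("low-contrast footer text", "low", effort=1, page_views=50_000, occurrences=1),
    Issue("inaccessible carousel", "high", effort=40, page_views=500, occurrences=1),
]

for issue in sorted(issues, key=priority, reverse=True):
    print(f"{priority(issue):>12.1f}  {issue.name}")
```

Note how the ranking rewards quick wins: the widespread, easily fixed label issue outranks the severe but costly carousel.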

Types of Testing

Automated Testing

Manual Testing

Use Case Scenarios

Usability Studies

Automated Testing

 
 

 

A large number of accessibility errors
can be found automatically.

 

You can reliably catch approximately
25% of accessibility best practices.
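One example of a check that automation can catch reliably: images without alternative text. This is a minimal stdlib sketch of a single rule; real tools run many such rules at once.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flag <img> tags that have no alt attribute at all."""

    def __init__(self):
        super().__init__()
        self.errors = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.errors.append(f"<img src={attrs.get('src')!r}> has no alt attribute")

# Hypothetical markup: one image with alt text, one without.
checker = MissingAltChecker()
checker.feed('<p><img src="logo.png" alt="Acme logo"><img src="chart.png"></p>')
print(checker.errors)
```

A tool can only verify the attribute exists; whether the alt text is *meaningful* still requires the manual verification described next.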

Automated Tools

 

 

Quality of automated testing tools can vary significantly.

 

mothereffingtoolconfuser.com

 

 

With manual verification, you can catch another
35% of accessibility best practices.

Manual Testing

 
 

 

Can be time-consuming, depending on the
number and complexity of the features being tested.

 

It’s possible to find everything, but highly dependent on
the skill of the reviewer and the time allotted.

Use Case Scenarios

 
 

 

Use Case: A list of steps defining interactions
between a role and the system.

 

Can find a close representation of the
system’s real level of accessibility.

 

Accurate in assessing how severe the problems are.
 

 

Caution: Different types, brands, and versions of
assistive technology can behave differently.

 

Often misses the specific code flaws
that cause the system to fail.

Usability Studies

 

 

Use real people with disabilities.

 

Find out exactly how easy or hard a system is to use.

When to Test

Most accessibility testing occurs too late.

Fixes are more expensive.

More legal risk.

Incorporate accessibility testing into your workflow.

Waterfall

 
 

 

Incorporate accessibility testing into
each phase of the project.

Agile

 
 

 

Make accessibility an acceptance criterion
for each story.

 

Developers should run automated tests on their code
and check keyboard functionality.
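Part of the keyboard check can be automated statically. A common anti-pattern is a positive tabindex value, which overrides the natural tab order. This sketch flags it; actually tabbing through the page by hand is still required.

```python
from html.parser import HTMLParser

class TabindexChecker(HTMLParser):
    """Warn on tabindex values greater than zero (disrupts natural tab order)."""

    def __init__(self):
        super().__init__()
        self.warnings = []

    def handle_starttag(self, tag, attrs):
        value = dict(attrs).get("tabindex")
        if value is not None and value.isdigit() and int(value) > 0:
            self.warnings.append(f"<{tag} tabindex={value}> disrupts natural tab order")

# Hypothetical markup: tabindex="0" is fine, tabindex="5" is the anti-pattern.
checker = TabindexChecker()
checker.feed('<a href="/" tabindex="5">Home</a><button tabindex="0">OK</button>')
print(checker.warnings)
```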

 

Include manual testing in the QA process.
 

Content

 
 

 

A significant number of accessibility errors
are caused by content creators.

 

Train content creators in basic accessibility best practices.
 

 

Test accessibility before publishing the content.
 

 

Publishing workflow should include automated testing,
using only pass/fail tests on just the content.
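A pass/fail test on content might look like this sketch: reject the content when headings skip levels (e.g. an h2 followed by an h4). The rule choice and regex are illustrative assumptions; a real workflow would run a suite of such gates.

```python
import re

def heading_levels(html: str) -> list[int]:
    """Extract heading levels (1-6) in document order from an HTML fragment."""
    return [int(level) for level in re.findall(r"<h([1-6])[\s>]", html)]

def check_heading_order(html: str) -> bool:
    """Pass only if no heading skips a level relative to the one before it."""
    levels = heading_levels(html)
    return all(b - a <= 1 for a, b in zip(levels, levels[1:]))

assert check_heading_order("<h1>Title</h1><h2>Part</h2><h3>Detail</h3>")
assert not check_heading_order("<h2>Part</h2><h4>Detail</h4>")  # skips h3
```

A binary result like this is what makes the test suitable for blocking publication automatically, with no expert judgment required.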

 

If possible, have an accessibility expert review
the content after automated testing.

Iterative Approach

 
 

 

Goal: Become compliant faster and cheaper.
 

 

The first round of testing should only include
automated testing for pass/fail items.

“You should never pay a human to find errors that can be found through automated testing.” — Karl Groves, Accessibility Guru

 

After initial problems are fixed, test manually to
uncover issues automated testing couldn’t find.

 

Save use case scenarios and usability studies for the end.
 

Thank You!

Any Questions?