Testing for Accessibility: Techniques, Perspectives, Headaches and The Future
Derek Featherstone
bartricks.org
Principles
Context: testing with a purpose
web site? web application? web-based data entry?
who? how often? skill level?
Principles
- How do we know when we've hit the mark?
- What is the mark, anyway?
Principles
- Accessibility testing is different from usability testing
Principles
- Accessibility (and testing for it) operates at both the macro and micro levels
Ideally
Ideally, we are able to do continuous testing throughout the development lifecycle.
Reality
In reality, it will take a long time before people realize that accessibility isn't a plugin, or something you can just add at the end of a project.
Ideally
- include user testing and expert review
- pan-disability testing with multiple users
- task analysis with tracking of success rates/times/etc. (see the sketch after this list)
- continuous throughout the dev cycle
- tape, transcribe, analyze
- evaluate each page against specific criteria
- if appropriate, track before and after results
- unlimited budget
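As a rough illustration of the "task analysis with tracking of success rates/times" item above, here is a minimal Python sketch; the task names, fields, and numbers are made-up examples, not data from any real test.

```python
# Minimal sketch of tracking task success rates and completion times.
from dataclasses import dataclass
from statistics import median
from collections import defaultdict

@dataclass
class Attempt:
    task: str          # e.g. "find contact phone number"
    tester: str        # participant identifier
    completed: bool    # did the tester finish the task?
    seconds: float     # time taken

def summarise(attempts):
    """Group attempts by task and report success rate and median time."""
    by_task = defaultdict(list)
    for a in attempts:
        by_task[a.task].append(a)
    for task, runs in by_task.items():
        rate = sum(r.completed for r in runs) / len(runs)
        print(f"{task}: {rate:.0%} success, "
              f"median {median(r.seconds for r in runs):.0f}s over {len(runs)} attempts")

if __name__ == "__main__":
    summarise([
        Attempt("find contact phone number", "tester-1", True, 95),
        Attempt("find contact phone number", "tester-2", False, 240),
        Attempt("submit enquiry form", "tester-1", True, 180),
    ])
```

Running the same summary before and after repairs gives the "track before and after results" comparison.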
Reality
- developers left to themselves to sort it out
- one user, if you are lucky
- "check this over, will you?"
- low to no budget
Compromise
- developers learn techniques (development and testing)
- do as much as you can as early as you can
- some budget, so that we get to eat too
- team testing approach
Cost and Utility
There are a number of reasons why more thorough testing costs more. We are looking for maximum impact at a reasonable cost.
Team Testing Approach
- everyone gets involved from the beginning
- developer peer "teams"
- establishing local "expertise"
- include automated testing
- no formal taping or transcription
- 3 people for formal testing:
- Facilitator
- Tester
- Analyst
Developer Tools
- Lynx (see the command sketch below)
- Web Developer Toolbar
- Accessibility Toolbar (NILS, Australia)
- Opera
- IE right-click add-ons, bookmarklets, etc.
See Testing Tools at WATS.ca
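Lynx (listed above) is useful precisely because it strips a page down to linear text. A minimal sketch of scripting that check, assuming lynx is installed and on the PATH; the URL is a placeholder.

```python
# Dump Lynx's text-only rendering of a page as a quick linearisation check.
import subprocess

def lynx_dump(url: str) -> str:
    """Return the plain-text rendering Lynx produces for a page."""
    result = subprocess.run(
        ["lynx", "-dump", url],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(lynx_dump("http://example.com/"))
```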
Automated Testing Tools
- A-prompt
- Hera
- Watchfire: Bobby -> WebXact
- HiSoftware: CynthiaSays, AccVerify
- Sample output
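To make concrete what these tools automate, here is a minimal sketch of one classic check: flagging img elements with no alt attribute. It is an illustration only, not a stand-in for any of the tools listed above.

```python
# Flag <img> elements that have no alt attribute, using the standard library.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.problems.append(f"img missing alt: {attributes.get('src', '?')}")

if __name__ == "__main__":
    sample = '<p><img src="logo.gif"><img src="chart.png" alt="Sales by quarter"></p>'
    checker = MissingAltChecker()
    checker.feed(sample)
    for problem in checker.problems:
        print(problem)
```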
Formal Testing Roles
- Facilitator: records, guides, monitors task completion and intervenes where necessary
- Tester: the person who completes the tasks and/or the "survey"
- Analyst: this person is critical; they monitor the technical side of things, examine the code behind the scenes, and make repair notes while the testing is happening; effectively a concurrent expert review
Process
- Creating a testing snapshot (CYB); see the sketch after this list
- Testing strategies: random, targeted, all
- Tracking expert review: custom tool, ScrapBook, BaseCamp
- Timing: WARNING! extended testing can become mind-numbing
- Reporting: summary reports versus "weighty tomes"
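A minimal sketch of the "testing snapshot" and "random" selection ideas above, assuming you already have a flat list of page URLs; the directory layout and sample size are arbitrary choices.

```python
# Take a dated local snapshot of pages, then pick which ones to review.
import random
import urllib.request
from datetime import date
from pathlib import Path

def snapshot(urls, out_dir="snapshot"):
    """Save a local copy of each page so the review targets a fixed version."""
    folder = Path(out_dir) / date.today().isoformat()
    folder.mkdir(parents=True, exist_ok=True)
    for i, url in enumerate(urls):
        with urllib.request.urlopen(url) as response:
            (folder / f"page-{i:03d}.html").write_bytes(response.read())
    return folder

def choose_pages(urls, strategy="random", sample_size=5):
    """Pick pages to review: every page, or a random sample.
    (A "targeted" list, chosen by hand, would simply be passed in as urls.)"""
    if strategy == "all":
        return list(urls)
    return random.sample(list(urls), min(sample_size, len(urls)))
```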
Summary Report
- Executive Summary: what we did, how we did it
- Overview: "rating scale" Yes, No, Provisional
- Responsibility?
- Severity rating
- Estimated time to repair
- Anecdotal notes
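One way to keep the summary report lightweight is to track findings as simple records. A minimal sketch of the fields listed above; the field names, severity scale, and rating values are assumptions, not a prescribed format.

```python
# One finding per record, sorted by severity for the overview section.
from dataclasses import dataclass

@dataclass
class Finding:
    checkpoint: str              # what was reviewed
    rating: str                  # "Yes", "No" or "Provisional"
    severity: int                # e.g. 1 (cosmetic) to 4 (blocker)
    responsible: str             # who owns the repair
    hours_to_repair: float       # estimated effort
    anecdote: str = ""           # supporting note from testing

def overview(findings):
    for f in sorted(findings, key=lambda f: f.severity, reverse=True):
        print(f"[{f.rating}] {f.checkpoint} "
              f"(severity {f.severity}, ~{f.hours_to_repair}h, {f.responsible})")
```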
Perspectives and Headaches
- "I want a badge"
- Testing a process vs Testing static content
- Reporting by checkpoint (especially on retrofits)
- Generalizability
- Over testing
- Adaptive Technology
Research and The Future
- Is heuristic analysis good enough? Is it better, or is it just different?
- How many users is enough for testing?
- Adaptive technologies continue to evolve; so must our development techniques
- Adaptive technologies are behind in terms of what standardistas are doing, but ahead in terms of what the masses are doing.
- If we are doing things right, we should be able to tell users of older screen reader software to turn JS off for a more consistent experience. (There may be some problems with this approach, but we may be able to do it; more on that later.)
For Further Reading/Thought:
boxofchocolates.ca/atmedia2005