Agile transformed how we build software. But the QA role often gets stuck between two worlds—traditional testing practices that don't fit sprint cycles, and "everyone tests" ideals that leave testers wondering where they fit.
I've spent 22 years working in and leading agile teams. Here's how I think about the QA role in iterative development.
Testing Starts Before Development
In waterfall, testers wait for code. In agile, testers start working before any code exists.
During Story Refinement
QA brings the "what could go wrong" perspective. While everyone else focuses on what the feature should do, testers ask:
- What happens when the user does something unexpected?
- How does this interact with existing functionality?
- What are the edge cases in the acceptance criteria?
- Is this testable as written?
Stories that get refined with testing input are clearer, more complete, and less likely to bounce back with "what about...?" questions during development.
Acceptance Criteria as Tests
Good acceptance criteria are tests waiting to be automated. QA helps write criteria that are:
- Unambiguous: One interpretation, not three
- Testable: Clear pass/fail conditions
- Complete: Cover the important scenarios
- Independent: Don't create hidden dependencies
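A criterion written this way translates almost mechanically into an automated check. As a minimal sketch, take a hypothetical criterion like "a password reset link expires 24 hours after it's issued" (names and the 24-hour rule are illustrative, not from any specific product):

```python
from datetime import datetime, timedelta

# Hypothetical criterion: "A reset link expires 24 hours after issue."
# Unambiguous (one cutoff), testable (clear pass/fail), independent (no shared state).
RESET_LINK_TTL = timedelta(hours=24)

def link_is_valid(issued_at: datetime, now: datetime) -> bool:
    """Return True while the reset link is still inside its 24-hour window."""
    return now - issued_at < RESET_LINK_TTL

def test_link_valid_just_inside_window():
    issued = datetime(2024, 1, 1, 12, 0)
    assert link_is_valid(issued, issued + timedelta(hours=23, minutes=59))

def test_link_expired_at_cutoff():
    issued = datetime(2024, 1, 1, 12, 0)
    assert not link_is_valid(issued, issued + timedelta(hours=24))
```

If writing the test exposes a question ("valid for exactly 24 hours, or less than?"), that's the ambiguity QA should surface during refinement, not after release.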
During Development: Continuous Feedback
The biggest shift in agile testing: feedback happens continuously, not at the end.
Pair Testing
Developer finishes a feature. Instead of throwing it over the wall, they call over a tester for immediate feedback. The tester explores while context is fresh. Bugs get fixed in minutes, not days.
Automation in Parallel
As developers build features, testers build automation. By the time the story is "done," automated tests exist. The test suite grows with the product, not after it.
Early Integration Testing
Don't wait for everything to be done to test integration. Test components together as soon as they exist. Find interface problems early when they're cheap to fix.
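In practice this can be as small as wiring two freshly built components together in one test. A minimal sketch, with hypothetical component names, checking that an input normalizer and a user store agree on the canonical form of an email address:

```python
def normalize_email(raw: str) -> str:
    """Component A: canonicalize user input."""
    return raw.strip().lower()

class UserStore:
    """Component B: in-memory store keyed by email."""
    def __init__(self):
        self._users = {}

    def add(self, email: str) -> None:
        self._users[email] = True

    def exists(self, email: str) -> bool:
        return email in self._users

def test_normalized_email_round_trips_through_store():
    # Interface check: does B's lookup agree with A's canonical form?
    store = UserStore()
    store.add(normalize_email("  Alice@Example.com "))
    assert store.exists("alice@example.com")
```

Neither component needs to be "done" for this test to exist; it pins down the contract between them the day both compile.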
The Three Hats of Agile QA
Effective agile testers wear multiple hats:
1. Quality Advocate
Speak up for quality in planning, standups, and retrospectives. Push back when shortcuts threaten quality. Make quality visible through metrics and reporting.
2. Risk Manager
Not everything can be tested equally. Help the team decide: what's critical? What's risky? Where should we invest testing effort?
3. Enabler
Make testing easy for everyone. Build frameworks developers can use. Create documentation that helps anyone contribute. Remove barriers to quality.
Common Anti-Patterns
Mini-Waterfall
Development happens in week one, testing in week two. This isn't agile—it's compressed waterfall. Integrate testing throughout the sprint.
The Testing Bottleneck
Stories pile up waiting for "QA sign-off." If this happens regularly, something is wrong. Either developers aren't testing enough, testers are overloaded, or the process creates unnecessary handoffs.
Automation Debt
"We'll automate it later." Later never comes. Stories aren't done until automation exists. Build this into your definition of done.
Scope Creep Testing
QA finds bugs. Product owner says "while you're in there, can you also test...?" Protect scope. New testing is new work that needs planning.
Metrics That Matter
Track metrics that reflect agile quality goals:
- Escaped defects: Bugs found after release. Lower is better.
- Cycle time: Time from story start to production. Testing shouldn't inflate this.
- Test coverage growth: Is automation keeping pace with development?
- Bug discovery timing: Where in the process are bugs found? Earlier is cheaper.
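Two of these fall straight out of ordinary bug records. A minimal sketch, assuming each bug is tagged with the phase where it was found (the record format and phase names here are hypothetical):

```python
from collections import Counter

# Hypothetical bug records: (id, phase_found), where phase is one of
# "refinement", "development", "ci", "production".
bugs = [
    ("BUG-1", "refinement"),
    ("BUG-2", "development"),
    ("BUG-3", "ci"),
    ("BUG-4", "production"),
    ("BUG-5", "development"),
]

def escaped_defect_rate(records) -> float:
    """Fraction of bugs found after release ("production"). Lower is better."""
    total = len(records)
    escaped = sum(1 for _, phase in records if phase == "production")
    return escaped / total if total else 0.0

def discovery_timing(records) -> Counter:
    """Count bugs by the phase where they were found. Earlier is cheaper."""
    return Counter(phase for _, phase in records)

print(escaped_defect_rate(bugs))  # 0.2 (1 of 5 escaped)
print(discovery_timing(bugs))
```

The point isn't the arithmetic; it's that tagging every bug with a discovery phase makes "are we finding problems earlier?" answerable with data instead of anecdotes.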
The Goal
In a well-functioning agile team, quality is built in—not tested in. QA shifts from finding bugs to preventing them, from gatekeeping to enabling, from testing at the end to quality throughout.
The best agile QA engineers I know spend more time in planning meetings than in test execution. They shape quality before code exists. That's the real transformation.
Related Reading
- QA is a Mindset, Not a Role - changing team ownership models for better quality outcomes.
- AI Token Costs in CI/CD - governance tactics for AI-powered testing pipelines.
- Multi-Agent QA Framework - scaling QA workflows with specialized AI agents.
Originally shared on LinkedIn.