In Part 3 of our “User Stories 101” series, we talked about how user stories are great placeholders for functionality, but they aren’t all of the functional requirements we need. To go deeper on actually conveying need, we should pair each story with a series of acceptance tests that confirm the story is satisfied.
Or, put another way: how do we know when a user story is done?
A few ground rules:
- It’s better to write a short story and a longer list of acceptance tests than a long, detailed story.
- These tests should be written before coding.
- These tests should not be written by the actual developers.
- These tests should comprise the QA script for the customer team after coding.
Writing Acceptance Tests
Acceptance tests are detailed lists of requirements that accompany a story. They should be actionable and task oriented, centered around the following thoughts:
- What does the team need to know about the story?
- What can go wrong during the story?
- What am I assuming about how the story will be implemented?
It’s a good idea to use a consistent tone and syntax here, too. Our team follows the “Verify that…” approach: each acceptance test begins with that phrase, and what follows is essentially a simple to-do list. This lets both the developer and the QA team confirm that the requirement is met.
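As a sketch of how “Verify that…” checks translate into something executable: assuming a hypothetical “user can log in” story, each checklist line can become one small automated test. The `login()` stub and its behavior here are illustrative, not from any real codebase.

```python
# Hypothetical acceptance tests for a "user can log in" story.
# Each test name and comment restates one "Verify that..." line.

def login(username, password):
    # Stub standing in for the real application code, which would be
    # written *after* these tests per the ground rules above.
    if not username or not password:
        return {"ok": False, "error": "missing credentials"}
    if password == "correct-horse":
        return {"ok": True, "user": username}
    return {"ok": False, "error": "invalid credentials"}

def test_verify_valid_credentials_log_in():
    # Verify that a user with valid credentials is logged in.
    assert login("ada", "correct-horse")["ok"]

def test_verify_blank_fields_are_rejected():
    # Verify that a blank username or password is rejected.
    assert not login("", "correct-horse")["ok"]

def test_verify_wrong_password_returns_error():
    # Verify that a wrong password returns an error, not a crash.
    result = login("ada", "wrong")
    assert result["ok"] is False and "error" in result
```

Because each test mirrors one checklist line, the same list doubles as the QA script: a tester can read the test names aloud and walk through them by hand.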
Another way of explaining detailed functionality is via a behavior model. This layout gives another consistently phrased approach to how we handle individual requirements. The syntax is:
> Given that I am << on a page >> and I have << performed an action >>, I should << see this reaction >>.
This model is also great for QA and bug submissions. Simply append “Instead, I expect << desired functionality >>.” to that line and you have a detailed error report ready to hand to a developer.
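The behavior-model sentence is structured enough to generate mechanically. As a small illustration, a hypothetical helper can fill the four slots and produce the full bug-report line; the function name and fields are assumptions for this sketch, not part of any tracking tool.

```python
# Hypothetical sketch: render the behavior model as a bug report.
# page / action / reaction fill the << ... >> slots from the template;
# expected supplies the "Instead, I expect..." addition.

def bug_report(page, action, reaction, expected):
    """Build the behavior-model sentence plus the 'Instead, I expect...'
    line that turns it into a detailed error report."""
    return (
        f"Given that I am {page} and I have {action}, "
        f"I should {reaction}. Instead, I expect {expected}."
    )

print(bug_report(
    "on the checkout page",
    "clicked 'Place Order' with an empty cart",
    "see the order confirmation screen",
    "to see a 'your cart is empty' message",
))
```

Filling the slots one at a time forces the reporter to name the page, the action, and both behaviors, which is exactly what a developer needs to reproduce the bug.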
I often write these tests on the back of the user story notecard. Again, this limits the amount of detail I can add to a particular card, which nudges me to break complex stories down into multiple cards or entities.
So now we’ve got a big pile of well-written, thought-out user stories, and they even have acceptance criteria. Hooray, acceptance testing!
Next up, in the final part of our “User Stories 101” series, we’ll answer the question: how do we know where to start?
Editor’s Note: This series was originally drafted by Jon Arnold, but was not published until after he left Centresource for his next adventure as Product Manager at Taonii. You can find Jon on Twitter @jonarnold.