Bug reports and what to do with them
At a minimum, a good bug report contains the answer to three questions:
- What did you do?
- What did you expect to happen?
- What actually happened?
This doesn't just cover software; it's a reasonable place to start any fault diagnosis:
- What did you do? I tried to open the door.
- What did you expect to happen? The door to open.
- What actually happened? Nothing!
(You will sometimes receive reports with more detail than this, but well-written, detailed reports will be the exception.)
(You probably have an intuition already about what the problem is. Learn to distrust that intuition, or at least treat it with scepticism.)
The first step in diagnosing an issue is to make sure you understand correctly what the issue is. People often misreport issues, for various reasons:
- They don't understand the technology ("The TV remote is broken" -> the remote needs new batteries, is not being pointed at the TV, or is actually the remote for the hi-fi).
- They're trying to be helpful ("The TV wasn't responding to the remote, so I took out all the cables and wrapped the ends in tin foil to help them conduct, and I've put them back in, and now the TV is on fire and the remote still doesn't work").
- They're worried about looking stupid ("The TV remote in the demo room isn't working, fix it before the clients turn up in 5 minutes and stop bothering me with stupid questions").
You are going to need to ask questions. You will need to ask very basic questions that could be interpreted as insulting ("Of course it's plugged in, do you think I'm stupid?"), and you will probably need to ask the same question more than once as people helpfully answer a different question.
Looking at our door example, try to establish why they are trying to open the door. Is it a door they go through often, or is this the first time? At this stage, you're still looking for context. Again, try not to think of solutions at this point, or even causes. The first step is to establish what actually happened, ideally well enough that you can reliably trigger the issue locally.
I say ideally, but it's very close to essential to be able to replicate the problem at will. If you're dealing with software, this is a good time to write a new test case. Write enough code to trigger the issue, and then start taking code away from your test until you get the smallest reliable trigger. This exercise has two aims. First, writing the test case should help find the rough area of interest in your source. Second, having a reliable test case means you can be confident that you have fixed the issue! Without a test, you can't be sure that your 'fix' has worked, or even fixed the right problem. With a test, you can apply the fix, run the test, confirm the issue doesn't reoccur, and then remove the fix, rerun the test and confirm the issue comes back. (Also, including the test in your automated test suite (you do have an automated test suite, yes?) makes it harder for someone else to reintroduce the issue later).
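As a minimal sketch of that process, here is what a cut-down regression test might look like. The function and the bug are both hypothetical (a name normaliser that mishandled trailing whitespace, chosen purely for illustration); the point is the shape: the smallest input that reliably triggered the failure, captured as a test that passes with the fix in place and fails if the fix is reverted.

```python
def normalise_name(name):
    # Hypothetical function under test. The (invented) original bug:
    # splitting on a literal space character kept empty fields, so a
    # trailing space in the input produced "Ada " instead of "Ada".
    # The fix: split() with no argument, which drops empty fields.
    return " ".join(part.capitalize() for part in name.split())

def test_trailing_space():
    # The smallest reliable trigger, found by cutting the reported
    # input down until removing anything more made the failure vanish.
    assert normalise_name("ada ") == "Ada"

def test_normal_input_still_works():
    # Guard against the fix breaking the ordinary case.
    assert normalise_name("ada lovelace") == "Ada Lovelace"

if __name__ == "__main__":
    test_trailing_space()
    test_normal_input_still_works()
    print("all tests pass")
```

Rerunning these tests with the fix temporarily reverted should make the first one fail again, which is exactly the apply-fix/remove-fix check described above.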
Once you understand the problem, have isolated the issue, built a test (or series of tests, don't hold back here), and written and confirmed your fix, this is a good time to look through your codebase for similar patterns (or exact duplicates) that you can fix at the same time. (It's great to have users report bugs, but it's far better to not have bugs for them to report.)
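One way to run that sweep is a small script that searches the source tree for the pattern you just fixed. This is a sketch, not a definitive tool: the pattern here (splitting on a literal space character, continuing the hypothetical example above) and the `find_suspects` helper are both invented for illustration; in practice the regex would be whatever risky construct your fix removed.

```python
import pathlib
import re

# Hypothetical sweep: the fix replaced splitting on a literal space
# with a bare split(), so look for other call sites still using the
# old, risky construct.
PATTERN = re.compile(r'\.split\(" "\)')

def find_suspects(root="."):
    """Return (path, line number, line) for each matching line."""
    hits = []
    for path in sorted(pathlib.Path(root).rglob("*.py")):
        text = path.read_text(errors="replace")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if PATTERN.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for path, lineno, line in find_suspects():
        print(f"{path}:{lineno}: {line}")
```

Each hit is a candidate for the same fix, and ideally for its own test case in the suite.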