As we talked about at the beginning of the course, there are many different ways to evaluate software.
One that you might be most familiar with is empirical methods, where, with some level of formality, you have actual people trying out your software.
It’s also possible to have formal methods, where you’re building a model of how people behave in a particular situation, and that enables you to predict how different user interfaces will work.
Or, if you can’t build a closed-form formal model, you can also try out your interface in simulation and run automated tests that can detect usability bugs and identify effective designs.
This works especially well for low-level stuff; it’s harder to do for higher-level stuff.
And what we’re going to talk about today is critique-based approaches, where people are giving you feedback directly, based on their expertise or a set of heuristics.
As any of you who have ever taken an art or design class know, peer critique can be an incredibly effective form of feedback, and it can help you make your designs even better.
You can get peer critique really at any stage of your design process, but I’d like to highlight a couple that I think can be particularly valuable.
First, it’s really valuable to get peer critique before user testing, because that helps you not waste your users on problems that a critique would catch anyway.
You want to be able to focus the valuable resources of user testing on stuff that other people wouldn’t be able to pick up on.
The rich qualitative feedback that peer critique provides can also be really valuable before redesigning your application, because it can show you which parts of your app you probably want to keep, and which parts are more problematic and deserve redesign.
Third, sometimes you know there are problems, and you need data to be able to convince others on your team that those problems are worth fixing.