Designing with Data
I mentioned the Google/Bowman situation about half a year ago, and since then, some interesting things have been written and said about it. Scott Stevenson has revisited the issue, writing:
The idea, I assume, behind using large data sets to make decisions is to remove the human ego from the equation and let the machine be purely objective. The thing that’s so vexing about this approach is that it’s nearly impossible to argue with because it provides «empirical evidence,» and yet there’s a lingering «it just doesn’t feel right» scent to it. The subtext is that the machine is more perfect than its flawed human counterparts.
Putting trust in the aggregate is useful when you’re actually looking for statistical data, but could a computer have ever dreamed up an iPhone? I doubt it.
This vastly overstates how people actually use data. Statistical analysis is not the opposite of design, and it’s not a replacement for designers. Instead, it’s a tool that every designer should have in her arsenal.
A/B testing doesn’t replace designers. It’s a method designers use to test their ideas.
In a video on parc.com, Marissa Mayer, Google’s VP for Search Products & User Experience, talks about how and why Google uses A/B testing. Around minute 14, she explains the now infamous «shades of blue» test Google ran. The google.com and gmail.com teams used different shades of blue for links, but wanted to use the same color. In order to decide which color to use for both products, they ran tests with a number of colors and actually found statistically significant differences in click-through rates between the colors.
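As a rough sketch of the statistics behind such a test: a standard way to decide whether two variants really differ in click-through rate is a two-proportion z-test. The numbers and the function below are hypothetical illustrations of mine, not Google’s actual method or data.

```python
import math

def ab_ctr_z_test(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test for a difference in click-through rates.

    Returns the z statistic; values beyond roughly +/-1.96 indicate
    a difference unlikely to be random noise (at the ~5% level).
    """
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled click-through rate under the null hypothesis of no difference
    p = (clicks_a + clicks_b) / (views_a + views_b)
    # Standard error of the difference between the two proportions
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

# Hypothetical numbers: shade B draws more clicks than shade A
z = ab_ctr_z_test(clicks_a=1000, views_a=50000,
                  clicks_b=1150, views_b=50000)
significant = abs(z) > 1.96  # two-sided test at the ~5% level
```

With these made-up numbers the difference comes out significant; with smaller samples the same click-through gap could easily be noise, which is why such tests are run on large amounts of traffic.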
A/B testing didn’t dictate how google.com or gmail.com should look. Instead, the designers of the two teams decided to use A/B testing as a means of resolving a specific design question.
Marissa Mayer also points out that the results of these tests can be empowering to design teams. Companies have to make money. Since it’s not self-evident that good design helps a company increase its profits, design teams often lack authority. A/B testing allows designers to prove that their work is one of the most important factors influencing a company’s bottom line.
TechRadar has an article on Google’s design team, which also discusses the Bowman situation. Irene Au, Director of User Experience at Google, explains:
We often will put things out on an experimental framework and look at how they perform, and use that to drive our design decisions. Search is the interface that most of our users are familiar with and so what goes into the search interface is really what drives our visual language.
Search is such a fragile interface. It’s humbling to see how the slightest changes in design, just pixel-level changes or barely perceptible changes to colours, can have such a dramatic impact on usage and revenue. At Google, we’re in a unique position to measure the impact of design on the business, and we take advantage of that opportunity to make design decisions based on evidence.
We, the people who design software, must keep in mind that we are not designing for an art gallery. Simply creating beautiful things is just not good enough. We are designing products that will be used by humans who want to reach specific goals. We can’t just make stuff up and hope that it’s good enough. We need to do usability tests, we need to make sure our products are accessible, we need to make sure users can actually reach their goals. Statistical analysis is just one of the tools at our disposal.
The goal is to create something that is both beautiful and usable. That’s a challenge, and it’s not a challenge a computer can solve. Designers must not be afraid of evidence and statistics and science, because evidence and statistics and science are essential to reaching that goal.
If you require a short url to link to this article, please use http://ignco.de/205