
Think Like Google X, Test Like Netflix

Do you test anything in your daily work? I do. I test prototypes with end users to ensure the developed ideas actually work for those using them. There have been strong indicators lately that I need to take testing more seriously, and another piece of encouragement bubbled to the surface recently.

The most recent piece of encouragement is Jon Gertner’s article about Google X in Fast Company. I can’t say that I even knew this group existed prior to this morning, but I’m intrigued. I’m not talking about Google’s Research Division, which spends its time on digital bits and bytes. Google X is more concerned with atoms: things you can hold in your hand rather than websites or software. The big ideas that have come from this group are Google Glass, driverless cars, high-altitude Wi-Fi balloons, and glucose-monitoring contact lenses. Do these things apply directly to the user experience design activities I engage in every day? Not obviously, no.

But take a look inside Google X and you’ll find a group focused on testing called the Rapid Evaluation Team. Rich DeVaul, the head of this team, says, “Why put off failing until tomorrow or next week, if you can fail now?”

The idea of failure presented in Gertner’s article isn’t new to me. One of my mentors, Dennis Breen, got me thinking along those lines years ago with the idea that good UX is about failing faster, and I’m sure that, incredible as he is, Dennis wasn’t the first one to the table with that thought. The scientists at Google X (a group made up of former park rangers, sculptors, philosophers, and machinists) take an approach that is focused on failure.

So now, I go back to the original question: do you test anything in your daily work? How seriously do you take that testing? Put another way, are you testing for success or failure?

Often, the idea of failing is hard for people to swallow. Their tests are shallow and focus on success. They say, “I tested it and it worked!” Then they cheer on their team and move on to the next thing. I’ve been guilty of this and I know I’m not alone.

In the words of Bruce Tognazzini during his interaction design course at Nielsen Norman Group’s Usability Week, “a little bit of testing is 100 times better than no testing at all.” I agree with him, and it was another indicator that I need to test everything. You will catch a lot of things with a quick test. But I’m less concerned about the amount of testing done than I am about the aggressiveness of that testing.

From what I’ve read, nobody takes testing as seriously as Netflix.

Years ago, I read about Chaos Monkey, a tool that randomly disables Netflix’s production instances to make sure users experience zero impact when this fairly common failure occurs. I was blown away. They’re not testing their work; they’re attacking it. The result is one of the most redundant systems on the planet, which Netflix had to become to shake up the established and entrenched world of movie rental.
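The core trick is almost embarrassingly simple. As a rough sketch of the idea (my own illustration, not Netflix’s actual code), a chaos-monkey-style script just picks a random instance from the fleet and kills it, typically during business hours so the team is around to watch the fallout:

    import random
    from datetime import datetime

    def unleash_the_monkey(fleet, terminate, business_hours=range(9, 17)):
        """Pick one instance at random and kill it, but only during
        business hours, when engineers are around to respond."""
        if datetime.now().hour not in business_hours:
            return None
        victim = random.choice(fleet)
        terminate(victim)
        return victim

    # Stand-in for a real cloud API call that would terminate a VM;
    # here it only logs, so the sketch runs anywhere.
    fleet = ["web-01", "web-02", "api-01", "api-02"]
    unleash_the_monkey(fleet, terminate=lambda i: print(f"Terminating {i}"))

The point isn’t the dozen lines of code; it’s the commitment to run something like this against production, on purpose, all the time.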

This approach has made them so successful that they’ve followed it up with Latency Monkey, Conformity Monkey, Doctor Monkey, Janitor Monkey, Security Monkey, 10-18 Monkey (short for Localization-Internationalization, or l10n-i18n), and finally, Chaos Gorilla. They have created what they call a Simian Army.

This army sets Netflix up as their own worst enemy, which guarantees that Netflix at its worst is still better than the alternatives. By automating the Simian Army, they know the testing gets done. And because each monkey (and now a gorilla) attacks one specific thing, they know the testing is targeted.

Because testing is useless if you don’t have a target.

I need to get more aggressive and truly hunt failure to bring a better experience to end users. Thinking about what I can do to bring more aggression to my testing, I’ve come up with a few plans that I will use from now on:

Test things multiple times

Often, if something passes a hallway test and a user test, I call it complete and move on to the next piece. But it doesn’t cost much to carry past test material into future tests, since the hard work is already done. I might learn more about an early feature in later rounds of testing, and changes made later in a project can affect earlier pieces.

Build a repository of test questions

Go back through the last few tests and compile a series of standard test questions. If the projects I typically handle are similar, build some test scripts. These can be the base of every test I plan in the future. That will bring the burden of testing down and speed things up drastically.

Make a UX test plan

I don’t need to spend all day on this, but I should make sure I know some critical details about every upcoming test. I should be able to relate my test back to a business case, define the objectives of my test, and understand the tasks I’m testing, as well as the process they belong to. If I can’t do this, I have to question the value of what I’m working on.

Document UX test scope

I need to understand where in my process I’m going to do each type of testing, and when I should escalate a tricky piece to make sure I’ve gotten it right. For instance, say hallway testing went well for a few ideas and I’ve moved on to user testing. Two competing ideas are still getting great results, but I have to keep the project moving. Why not escalate those two ideas to an A/B test and get a clear winner so I can move on? A simple significance check, sketched below, is all it takes to call it.
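As a rough illustration (my own sketch, not a prescribed method), a two-proportion z-test is one simple way to decide whether an A/B test has produced a clear winner; the numbers below are made up:

    import math

    def ab_winner(successes_a, n_a, successes_b, n_b, z_threshold=1.96):
        """Two-proportion z-test. Returns 'A', 'B', or None when the
        difference isn't significant at roughly the 95% level."""
        p_a, p_b = successes_a / n_a, successes_b / n_b
        p_pool = (successes_a + successes_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        if abs(z) < z_threshold:
            return None  # no clear winner yet; keep testing
        return "A" if z > 0 else "B"

    # e.g., idea A: 48 of 200 users completed the task; idea B: 72 of 210
    print(ab_winner(48, 200, 72, 210))  # -> B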

Build a UX Checklist

Go back through the last few tests and compile a list of the data points typically required. These will be a starting point that can be tailored to confirm results for each project. I want to ensure I have not missed anything. This also helps to clarify activity on future projects, forcing me to ask how I can find out critical pieces of information.

How about you? What can you set up and commit to today to ensure your work brings the most value possible to your clients? Perhaps you can even delight their users.