Why Your Interview Performance is Impossible to Judge

When I was at Google, I referred a number of candidates and ran a little informal experiment: how well could people judge their own performance? After each candidate completed their interview, I’d ask them how they thought they did. Then I’d look up their actual performance. And guess what? There was no correlation. None. Zip. Zero. Zilch.
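(If you want to make "no correlation" concrete: the informal check amounts to computing a correlation coefficient between self-assessments and actual scores. Here's a minimal sketch in Python; the ratings below are entirely invented for illustration, not real candidate data.)

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: each candidate's self-rating (1-10) next to the
# interviewers' actual score (1-10) for the same candidate.
self_ratings  = [3, 8, 6, 9, 2, 7, 5, 4]
actual_scores = [7, 4, 6, 3, 8, 5, 4, 9]

print(f"r = {pearson_r(self_ratings, actual_scores):.2f}")  # near 0 => no relationship
```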

[Update: here's some more data on this.]

Why is it so hard to know how you did? The answer lies in how candidates try to judge themselves, which is typically in one of two ways:

Method #1: “I know I did well / poorly because my interviewer was friendly / unfriendly.”

One guy I knew, Alex*, told me he was sure he had bombed his interview because the interviewer seemed so unhappy with him. Later, he discovered that not only would he receive an offer, but he was the best candidate that interviewer had ever seen.

What Alex didn’t know is that this interviewer was not what we’d call a “smiley happy” person. And that’s the problem with interpreting performance from a stranger’s attitude. You’re comparing them to what you’re used to, not to how the interviewer usually acts.

Additionally, a good interviewer is friendly to every candidate, even one who is performing poorly.

Method #2: “I know I did well / poorly because of how slowly / quickly / correctly / incorrectly I solved a problem.”

Imagine a professor passes back a test and you see in big red ink the score “45.” Did you do well or poorly? You have no idea, of course, without knowing how the rest of the class did. The same goes for an interview. You can’t assess your performance on a question without knowing what’s “normal.”

Interviews are not evaluated on an absolute basis with respect to either speed or correctness. Struggling your way through a problem but eventually getting the right answer might mean you did extremely well, or it might mean you did poorly. It all depends on how other candidates did (which, of course, you don’t know).

Note: this only applies to “skill-based” questions, like programming and algorithms. We’ll discuss behavioral questions later.

So how can you evaluate your performance?

As we’ve discussed, you can’t interpret much from the interviewer’s reactions. However, if you could gauge how difficult a question is, you might be able to guess at how you did. One way to do this is to ask a few friends the same question; as in the sketch below, if they solve it in half the time you did, you can reasonably suspect that you did poorly.
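(As a rough illustration of that benchmarking idea, here's a small Python sketch. The helper name and all the timings are hypothetical, just to show the comparison.)

```python
def relative_standing(my_minutes, friends_minutes):
    """Fraction of friends who took at least as long as you did.
    Higher is better: 1.0 means you were the fastest of the group."""
    slower_or_equal = sum(1 for t in friends_minutes if t >= my_minutes)
    return slower_or_equal / len(friends_minutes)

# Hypothetical numbers: you took 40 minutes; three friends tried the same question.
print(relative_standing(40, [18, 22, 25]))  # 0.0 -> everyone was much faster; not a great sign
```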

What about behavioral questions?

It’s a bit easier to evaluate your performance on behavioral questions, for two reasons:

  • Interviewer Attitude: The interviewer’s attitude is a bit more meaningful here, but what you should look for is change. If your interviewer gets noticeably more engaged (or less engaged) as you respond, that can be an indicator of performance.
  • Response Quality: Unlike programming questions, behavioral questions don’t vary drastically in difficulty, so struggling to respond to several of them is an indicator of poor performance. You may also recognize when you’ve really bombed a question, but even then I’d be cautious about drawing conclusions.

Remember this when you’re walking out of your next interview, or when a friend tells you they did horribly. Be cautiously optimistic, because you won’t know until you know.