Developer Interviews Are Broken, and You Can't Fix It

The "Google-style" interviews is the one people love to hate. It's broken, good candidates fail, bad candidates just memorize the answers, yadda yadda yadda.

That's all true.

But this is also true: all processes are broken.

PRIOR EXPERIENCE

The Argument: "You should hire based on their prior experience. What have they accomplished? Have they added value to their prior companies? That's the best way to know if they're good."

Issues:

  • You aren't hiring on what they've done. You're hiring on how they discuss what they've done. (These interviews are highly coachable. Trust me. I've coached lots of people and turned candidates who would have flopped into successes.)
  • Candidates might fluff up what they've done to make it sound more impressive.
  • Candidates might not take sufficient credit for what they've done (attributing too much credit to their team or not really getting into why it was hard). Women and leaders both tend to have a particularly big problem with attributing their personal accomplishments to their teams, although the driving reasons are different.

Best Practices:

  • Drill in deeper. Don't accept answers at face value.
  • For individual contributor developers, focus primarily on technical challenges, rather than questions about leadership, conflicts, etc. There's such an art to crafting those answers that the quality of the response likely has little correlation with someone's true attributes.

REFERRALS

The Argument: "You should hire with referrals. Hire people who have been recommended by your employees. As you grow, make sure you really thoroughly check referrals."

Issues:

  • Hiring only people you've worked with won't scale well.
  • A mediocre candidate can identify a few people who will speak sufficiently well about them. It won't help you separate the so-so from the great, so you can't bet your hiring strategy on this.
  • If the candidate is still employed, you can't go poking around at their current employer. 

Best Practices:

  • Check references when you can. Why not? But take the responses with a grain of salt. A candidate could be great, but their reference never expresses glowing enthusiasm. Or a candidate could be mediocre, but their reference only wants to say nice things.
  • When a current employee refers someone, don't bet on that referral. Really validate just how closely the employee worked with the candidate.

GITHUB & WORK SAMPLES

The Argument: "Check out the candidate's Github or work samples. This way you see what they can really accomplish and how they work in the real world."

Issues:

  • Won't scale. Many people don't have major accomplishments outside of work, or they don't want to post them.
  • You may really struggle with female developers here. They're more likely to have substantial family responsibilities (and thus less time for coding outside of work), and they might not want to subject themselves to public scrutiny.
  • Work samples show you a candidate's current code quality, not how good they'd be with some training. Lots of people (especially those with less experience or who have never worked in a company with rigorous coding guidelines) might write sloppy code. That doesn't necessarily mean they're bad coders. A bit of training could fix their coding style.
  • It's very difficult to identify intelligence / problem-solving skills this way.

Best Practices:

  • You can look at their code, but take your interpretation of their code style with a grain of salt. Separate what is fundamental to them from what can be fixed.
  • Don't rely on this exclusively if you have large hiring needs. You'll eliminate a lot of candidates.
  • Be aware of what you're not looking at, like intelligence / problem-solving. 

HOMEWORK

The Argument: "You want to evaluate what they can really do, so give them a homework assignment or project to build on their own time. This scales well and is real-world"

Issues:

  • Lots of candidates won't do a long homework assignment, especially those with other responsibilities or other job options.
  • Candidates will ask for help from their friends, especially on trickier algorithm parts. Don't expect that this truly represents their own work. (This doesn't mean you can't use it. You just can't rely on the results completely.)
  • You'll have the same issues as above with seeing their current code style, not how they could code with help.

Best Practices:

  • Be fair to candidates. Balance the demand on their time with the utility you get from it. If you're giving a longer project, it should largely replace your interviews. Be aware that this means losing candidates who have other responsibilities.
  • A short 1-2 hour test is fine and can be used to screen out those with weak problem-solving and coding skills. However, because of cheating, you need to also evaluate these skills yourself during actual interviews.
  • Separate out coding issues that are fundamental from ones that can be fixed through training and guidelines.

KNOWLEDGE

The Argument: "You need your developers to know stuff. A developer who doesn't know their field can cause a lot of bugs and other damage. Clearly knowledge and expertise is valuable!"

Issues:

  • A good developer can typically learn a new programming language to sufficient depth for most jobs. If you're expecting a skill that can be easily learned, you're just reducing your candidate pool (or dropping your focus on other attributes).
  • The programmers who identify themselves by a specific programming language often have weaker problem-solving skills, so you might actually be selecting for a bad attribute. In fact, Google found that certified developers tended to perform worse on their problem-solving interviews. This was not a surprise within the company. Good developers are less likely to identify themselves as "a Java developer" or "a PHP developer." They're developers. They might work with a specific language right now, but they know they could quickly ramp up in another. (They might, however, say that they're a front-end developer or a back-end developer.)

Best Practices:

  • Knowledge should be hard to acquire. If you're going to require knowledge of a specific language or technology, it should be true expertise. Easily acquirable knowledge isn't useful to assess because you could just have someone learn it on the job. The exception is when lacking that basic knowledge is a red flag.

    For example, if you're just asking a developer to write some SQL SELECTs and JOINs, that's not useful (see the sketch after this list). That's knowledge that a competent developer could very quickly learn.

    However, if someone says that they've worked with SQL for the last few years and can't write a JOIN -- well, that's very weird. I'd want a good explanation for this.
  • Assess if you really need expertise in a particular language. Most medium and larger companies do not. Someone can learn the programming language on the job. You have enough people who are comfortable with the language to provide oversight for the newbies.
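
To make the SQL point above concrete, here is a minimal sketch in Python (using the standard library's sqlite3 module, with hypothetical table and column names) of the level of question being discussed: a single SELECT with a JOIN. It's exactly the kind of knowledge a competent developer can pick up almost immediately, so on its own it tells you very little.

    import sqlite3

    # An in-memory database with two hypothetical tables, just for illustration.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER);
        INSERT INTO departments VALUES (1, 'Engineering'), (2, 'Design');
        INSERT INTO employees VALUES (1, 'Ada', 1), (2, 'Grace', 1), (3, 'Mary', 2);
    """)

    # "List each employee with their department name" -- the basic JOIN in question.
    rows = conn.execute("""
        SELECT employees.name, departments.name
        FROM employees
        JOIN departments ON employees.dept_id = departments.id
    """).fetchall()

    for employee, department in rows:
        print(employee, department)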

PROBLEM-SOLVING/ALGORITHMS + WHITEBOARD CODING

The Argument: "You want your developers to be smart. Smart developers do good work."

Issues:

  • In practice, you're expecting knowledge of certain computer science data structures and algorithms that are rarely used on the job. Lots of people don't know these things, so you are going to get a bunch of false negatives.
  • Many candidates study so much for these interviews that they already know the problems they're going to be asked. In that case, you're not assessing problem-solving skills so much as their ability to repeat back an algorithm.
  • A lot of developers get really nervous during these interviews. They might fail in an actual interview, even though they could solve these problems on their own.
  • Whiteboard coding is unrealistic. No one codes on a whiteboard, so it throws people off and causes them to make mistakes that wouldn't necessarily happen on the job. Additionally, it's just painful and slow.

Best Practices:

  • Your questions should be challenging and unusual. Asking common problems is a very bad idea because then you're just seeing how much someone studied.
  • Your interviewers should be looking beyond the correctness of an algorithm question. The goal of these questions is to assess problem-solving skills, so you want to actually see how good candidates are at solving a hard problem. If you stop at an obvious solution, you're not going to identify what you want.
  • Whiteboard coding is slow and does cause a lot of mistakes. Be understanding there. Don't make a candidate write details in their code that aren't useful for evaluation. You can also provide a laptop instead if it makes a candidate more comfortable, although this has its own set of hurdles.
  • Use questions that don't have a single "a-ha" moment or hurdle. Even if that one hurdle is a useful indicator, it's still just a single data point. Pick questions with multiple hurdles so that you can help a candidate past one hurdle and still evaluate them on subsequent ones.
  • Come up with a list of data structures, algorithms, and concepts that you expect candidates to know and that are "fair game" for algorithm questions. Binary search trees and breadth-first search are fair game; the implementation of red-black trees is not. (A minimal sketch of one such building block follows this list.) Share this list with your interviewers; they shouldn't expect knowledge beyond it. Share it with your candidates too, so that those without a computer science degree (or those who graduated a while ago) have a fair chance.
  • Know when you do need expertise. Some skills are hard to acquire, even by someone who is smart.
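
To give a sense of what "fair game" means in practice, here is a minimal sketch in Python of one such building block: breadth-first search over an adjacency-list graph (the graph, names, and representation are purely illustrative). Questions can reasonably assume a candidate knows this; they shouldn't assume something like a from-scratch red-black tree.

    from collections import deque

    def bfs_shortest_path(graph, start, goal):
        """Return one shortest path from start to goal, or None if unreachable."""
        queue = deque([[start]])  # each queue entry is a path from start
        visited = {start}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == goal:
                return path
            for neighbor in graph.get(node, []):
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(path + [neighbor])
        return None

    # Adjacency-list graph; BFS finds a path with the fewest edges.
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(bfs_shortest_path(graph, "A", "D"))  # ['A', 'B', 'D']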

IF IT'S ALL BROKEN, WHAT ISN'T BROKEN?

See above. All interview approaches have issues. All of them.

A great number of the top companies have been built through algorithm-style interviews (so it's probably not terrible, at least for a medium-sized company with particular problems), but plenty of other companies have been built through other processes.

But it's all broken.

SO… WHAT DO YOU DO?

Accept that any interview approach you take will have flaws. You will have false negatives and false positives. It's all broken.

Now, with that said, you need to identify the least broken one for you. And then you need to implement it well.

This means:

  1. Identify what skills you need/want (and which are actually realistic to acquire). What is the weighting of each? What really matters? What can you give on?
  2. Determine how (or if) you can assess each of these. What sort of questions or approach will work? (You can pick multiple approaches.)
  3. Understand the issues with your chosen approach. Can you mitigate them, at least in part?
  4. Create an interview training program that aligns with this approach.
  5. Communicate the expectations to candidates. They should know what they're getting into.

You still won't have a perfect process -- you never will -- but you'll be better than you were before.