
How we interview for U.S. positions: A prep guide for engineers


Interviewing and hiring is not a perfect process in any industry, and it's especially difficult to find the right candidate for backend software engineering positions.

In the race to find the most qualified candidates, selecting the best one from a pool of applicants is not a solved problem. Conversations about hiring tend toward one of three topics:

  1. Espousing the virtues of one technique over another
  2. Focusing on the impossibility of predicting productivity from a bit of coding performance art
  3. Declaring, “This is how it’s done at my company,” because we’re pretty proud of how we do things

This article is about what we do when hiring for U.S.-based engineering roles at Signifyd: #3.

I suggest everyone incorporate a code review equivalent into their own process.

The theory behind our process

First, a short digression into theoryland: It's important to understand Signifyd's overall approach to interviewing. Some hiring philosophies are organized around one grand, unified theory of what a good interview looks like. You might recognize that as a hedgehog way of thinking: in Isaiah Berlin's famous framing, the hedgehog knows one big thing.

Conversely, the fox way of thinking (the fox knows many things) relies on a large toolbox of diverse approaches, even if the combination is theoretically unsound. Our one hedgehog-style idea for designing an interview scheme is to take a fox approach to the interviews themselves.

I will explain how we've found success with this approach, along with its downsides and future improvements. The final piece of our engineering interview process, the code review, is the step I enjoy the most, and it's also the step I've heard the most positive feedback about. In fact, I'd recommend a code review component for everyone's interviewing process.

Random forests, foxes, and amateurs

Meaningful interviewing is hard. Maybe impossible. Most people aren’t good at it, and many companies struggle with it. Folks can become consistent at executing some hiring processes, but despite their best efforts, they let poor candidates through to the hiring stage, or they exclude candidates who would have been a great fit for the job.

Google, for instance, has a very specific type of filter. Engineering candidates must be very good at “standard whiteboard algorithm problems” to make it through their gauntlet. Companies like Google can get away with a very strong and specific filter because their candidate pools are so large. Most other companies can’t do this.

Without a very strong and specific filter, what can we use to hire well?

The core of Signifyd’s business—the unique value we create which we then convert into value for merchants across the world—is predicting whether or not a given transaction is fraudulent. Sometimes merchants’ fraud traffic patterns are similar to each other, but we often onboard merchants with previously unseen types of fraud. The machine learning models we’ve chosen to tackle this problem are random forests.

We build these random forests by first building one weak decision tree. Then we build another, and another, and so on, and average their results. This leverages the wisdom of the crowd: many weak, mostly uncorrelated predictors combine into one strong one, and it works like a charm. We apply the same approach when interviewing Signifyd engineering candidates.
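To make the idea concrete, here is a toy sketch of bagging (a simplification, not Signifyd's production models): many one-split "trees," each fit to its own bootstrap resample of the data, voting together.

```python
import random

# Toy 1-D dataset: the true label is 1 exactly when x > 5.
random.seed(0)
points = [(x, 1 if x > 5 else 0) for x in range(11)]

def train_stump(sample):
    # A deliberately weak "tree": a single threshold split,
    # fit to whatever bootstrap resample it is handed.
    best_t, best_correct = 0, -1
    for t in range(11):
        correct = sum(1 for x, y in sample if (1 if x > t else 0) == y)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Bagging: each weak learner sees its own bootstrap resample.
stumps = [train_stump([random.choice(points) for _ in points])
          for _ in range(50)]

def forest_predict(x):
    # Average the weak votes; the majority wins.
    votes = sum(1 if x > t else 0 for t in stumps)
    return 1 if 2 * votes >= len(stumps) else 0

print(forest_predict(2), forest_predict(8))  # -> 0 1
```

No individual stump is trustworthy on its own, but the averaged vote reliably recovers the true boundary at x = 5.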

We know the following things are true: we're amateurs, interviews give a weak signal about future productivity, and we don't have a Grand Unified Theory of how to get the most out of any single interview. Instead, our Grand Unified Theory of how to get the most out of an ensemble of interviews is to diversify: gather as many uncorrelated signals as we can and mush those weak signals together.

A quintessential fox approach.
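The statistical case for mushing weak signals together can be checked with a few lines of simulation (hypothetical numbers, purely illustrative): treat each interview as a noisy reading of a candidate's true ability, and watch the error of the averaged estimate shrink as independent readings are added.

```python
import random
import statistics

random.seed(1)
TRUE_ABILITY = 7.0  # the quantity we wish we could observe directly

def interview_score(noise_sd=2.0):
    # One weak predictor: true ability plus independent interviewer noise.
    return random.gauss(TRUE_ABILITY, noise_sd)

def mean_abs_error(n_interviews, trials=2000):
    # How far off is the average of n independent interview scores?
    errors = []
    for _ in range(trials):
        estimate = statistics.fmean(
            interview_score() for _ in range(n_interviews))
        errors.append(abs(estimate - TRUE_ABILITY))
    return statistics.fmean(errors)

for n in (1, 4, 16):
    print(n, round(mean_abs_error(n), 2))
```

With independent noise, the error of the average shrinks roughly with the square root of the number of interviews: quadrupling the count about halves the error. Correlated signals (six variations of the same whiteboard question) would not give this benefit, which is why we diversify the categories.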

Our weak predictors

Here’s a quick look at what we do in practice and what weak signals of ability can look like:

  • A timeboxed (for real!) take-home, about 50 percent design and 50 percent algorithm
  • A “standard whiteboard algorithm” problem geared toward “can you code?” and introspection over difficulty or cleverness
  • A code review, where candidates annotate, eviscerate, and/or rewrite some provided problematic code
  • A technical deep dive, where we delve into interesting bits of the candidate’s technical background
  • A design problem, where the focus is on good code structure rather than algorithms
  • A common systems design problem, like “design Bitly”

Not every candidate gets every category from the above list. Occasionally one category is doubled up in the interview process, based on the candidate’s experience and background and the position they are interviewing for. But generally, our candidates get at least four of these six.

Pros and cons of each category:

  • Take-home. Pros: low candidate stress, no scheduling constraints, produces unrushed code. Cons: easy to cheat, and some candidates won't do a take-home.
  • Algorithm. Pros: well understood among candidates; decent insight into whether the candidate can actually write code. Cons: not well liked among experienced candidates.
  • Code review. Pros: close to actual work skills. Cons: not suitable for college hires; some candidates can be put off because it's different from what they expect.
  • Technical deep dive. Pros: helps verify that the candidate really knows the things they included on their resume. Cons: easiest to “socially hack.”
  • Design. Pros: insight into how experienced the candidate really is; proves they can do more than sling code. Cons: most likely to go off the rails and not provide good data; hardest type to explain and communicate well.
  • Systems design. Pros: finds the scope of architecture the candidate is comfortable with. Cons: not suitable for non-senior candidates.

Code reviews

Have I mentioned that the code review is my favorite part of the process? It seems to be well regarded by our candidates, too. Our original code review question contained a bit of real production code that was simply bad.

It was very hard to read, so we started the code review with a request: “Please read this undocumented function and then describe what it does in your own words.” Then we asked candidates to write good documentation, assuming the code was 100 percent functional. We asked them how they’d refactor the code to be maintainable, and finally, we asked them to rewrite the function.

Now we have a few types of code review questions. Some feature a difficult-to-follow, small function as above. Some feature a larger, poorly designed component; in that case, we ask the candidate to discuss the design and implementation with us as if we, the interviewers, were junior coworkers who had just submitted the code for review.
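To give a flavor of the small-function variant (a made-up example, not our actual interview code), a candidate might see something like the cryptic `f` below and be asked to describe it, document it, and then rewrite it:

```python
# The kind of hard-to-read function a candidate might be handed
# (hypothetical -- not Signifyd's real interview question).
def f(a, b):
    r = []
    i = 0
    while i < len(a):
        if a[i] in b and a[i] not in r:
            r += [a[i]]
        i += 1
    return r

# Step one, describing it in your own words: "returns the elements of
# `a` that also appear in `b`, de-duplicated, preserving the order of
# first appearance in `a`."

# A maintainable rewrite: a descriptive name, a docstring, and
# set lookups instead of repeated list scans.
def ordered_intersection(items, allowed):
    """Return items that appear in `allowed`, de-duplicated, in order."""
    allowed_set = set(allowed)
    seen = set()
    result = []
    for item in items:
        if item in allowed_set and item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(ordered_intersection([3, 1, 2, 3, 2], [2, 3]))  # -> [3, 2]
```

The rewrite behaves identically to the original; the point of the exercise is the conversation about naming, documentation, and the hidden quadratic cost of `a[i] not in r`.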

This type of question is an excellent test. It’s more in line with the practical skills we use in our day-to-day work when reading legacy code, reviewing peers’ code, writing empathetic comments (where we try to include encouraging emojis :D), and rewriting bits of code to optimize for readability and maintainability. This is far more common for our teams than needing to solve a tricky algorithm problem. It also gives a clearer picture of how the candidate views code reviews, rather than just asking, “Do you do code reviews?”

Current results and future plans

Theory is great, but you're probably asking yourself, “How well does this process work?” It's hard to measure: you can't know how well your declined candidates would have done in the role, and you won't know how good your hires are until at least a few months down the line.

What we do know is that we've been able to maintain rapid growth while assembling a talented team of engineers, the members of which appear happy to be here, as they tend to stick around.

At Signifyd, we're still working on our process for effective software engineering candidate interviews. We're working to add more variety in the interview questions we ask within each category, not just because it's good practice to routinely cycle out old questions, but because interviewers can get tunnel vision when they ask the same question too many times. We should strive to be well calibrated on the questions, but there's a fine line between well calibrated and stuck in a rut.

We’re still exploring best practices for interviews for our technical and engineering positions in Belfast, as the culture is different and candidates expect different things. Similarly, we are expanding our repertoire of weak predictors for data scientists. Overall we like our process and plan to iterate on it, not revamp it, in the foreseeable future.

All of this is to say, we’re hiring! Now that you know a little more about our process, check out our careers page for open positions.


Guy Srinivasan

Guy is a senior software engineer at Signifyd who builds and maintains the decisioning engine at the center of Signifyd's order processing pipeline. He is a probability buff who likes discussing esoteric decision theories. He is based in Seattle.