[🗣 Weekly Discussion] Are LeetCode questions really the best way to technically assess candidates?

I've lost count of the number of times I've heard someone in the industry mention that they don't actually use algorithms in their daily job functions.

Why, then, do most companies use them as a metric to assess potential employees? Aren't they better off testing skills that are more relevant to the position's actual responsibilities? I took some time to think about it, and I've shared my thoughts below. I'd love to hear what other Risers think about this issue, as I'm sure pretty much everyone has an opinion on it.

Here's the quick and dirty summary, according to me: no, LeetCode questions are not the best way to assess a candidate, but until we come up with a better approach, they provide the best (candidate quality) / (work involved to assess the candidate) ratio out there.

I'm gonna be honest, I like solving LeetCode questions. I like how they challenge me and help me learn and apply new concepts to actual problems. The joy of seeing myself improve over time is alone worth the effort for me. And it's kind of a bummer that, on average, those aren't the type of questions we'll face in real work. That said, there are definitely positions out there that require you to apply some of these concepts, but none warranting the sheer number of questions, concepts, and approaches we end up learning.

In an ideal world, I'd imagine interviewers taking the effort to create custom questions tied to their daily functions, ones the candidate is likely to encounter if and when they join the organization. But in reality, if that were to happen, the interviewers who meticulously crafted those questions would soon find them posted on every interview forum under the sun. Should they write new questions for every interview? That's not even close to being worth the effort, given the number of candidates some companies have to interview.

I think that's where LeetCode comes in. I feel it provides a very good estimate of a candidate's potential to learn complex concepts. So no matter what their daily functions entail, chances are they'll be able to tackle them. To me it feels similar to saying, "A student who has completed high school calculus should find it easy to work with trigonometry concepts, as well as to teach them to others."

So is LeetCode the best way to assess candidates? Not at all; it has little to do with the actual job responsibilities. But is it the best bet companies have to get good talent with relatively low effort? Absolutely.

Do you have any other ideas, or have you seen any interesting ways companies assess candidates? Would love to hear your thoughts! :studio_microphone: :studio_microphone: :cowboy_hat_face:


I heard this once from a former engineering manager at Box: when one component of your job is to fill seats across many different engineering functions, it is very, very difficult to create a bespoke process for each of them. He gave the example of an infrastructure engineer interviewing a UI engineer, or vice versa. Their domain knowledge hardly overlaps, so the common denominator becomes some shared background, most likely a computer science degree, and they converge on asking "toy problems" rooted in aspects of CS. For inexperienced software engineers lacking real-world experience, data structures and algorithms become that common denominator.

Having DSA knowledge is just one of many signals, and perhaps an overly weighted one. It does provide a somewhat reliable signal about whether someone has put in the effort to study and, in a sense, done their due diligence.

We should also remember that a lot of great software was built before the entire industry, and the Bay Area more specifically, moved toward DSA-focused interviews.

In real-world software, some things I find really important are software design ability and an understanding of the principles that make software extensible and maintainable (employing SOLID). One way I've seen this assessed: give the candidate a small project with really poorly written code and abstractions, plus a unit test suite, tell them "the only wrong answer is if you do nothing," and let them work their magic improving the code. This sort of exercise provides the following signals: ability to debug, see abstractions, spot inconsistencies, and find optimizations, and it shows what matters most to them. A lot of people get caught up in method/function names, and a lot never read the test suite to figure out what assertions are being made.
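To make that concrete, here's a minimal sketch of the kind of deliberately clunky (but working) seed code such an exercise might hand out, paired with a small test suite. The function, its smells, and the tests are all hypothetical, invented here just to illustrate the setup:

```python
import unittest


def calc(items, t):
    # Deliberately rough exercise seed: vague names, duplicated
    # loops, and a stringly-typed mode flag that a candidate might
    # refactor into separate, well-named functions.
    total = 0
    for i in items:
        if t == "sum":
            total = total + i
    if t == "avg":
        for i in items:
            total = total + i
        total = total / len(items)
    return total


class CalcTests(unittest.TestCase):
    # Reading the assertions reveals the intended behavior,
    # which (per the post) many candidates never do.
    def test_sum(self):
        self.assertEqual(calc([1, 2, 3], "sum"), 6)

    def test_avg(self):
        self.assertEqual(calc([2, 4], "avg"), 3)


if __name__ == "__main__":
    unittest.main()
```

The point isn't the code itself; it's watching whether the candidate reads the tests, names the smells, and improves things without breaking the suite.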

Personally, I'd like people to have a basic CS understanding and at least cursory knowledge of data structures and algorithms, but actually solving the problem shouldn't be a hard requirement. I'm speaking from the perspective of someone who didn't study CS but has learned a lot of it on their own, and I see its applications in many places:

  1. topological sort in build systems (dependency graphs), as sketched below this list
  2. dynamic programming used for optimizations (data compression)
  3. tree traversals/recursion (parsing) – I've done this a lot when traversing JSON/dictionaries/hash maps to reconstruct different models or nested models before and after transmission (in both imperative and functional languages)
  4. how skip lists are useful in designing a key-value store/cache like Redis
  5. how bloom filters are effective with caching at scale (see the sketch after the next paragraph)
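
As a concrete illustration of item 1, here's a minimal sketch of Kahn's algorithm ordering a toy build graph. The graph and names are made up for this example; a real build system tracks far more state:

```python
from collections import defaultdict, deque


def topo_sort(deps):
    """Kahn's algorithm: order targets so each one comes after
    its prerequisites. `deps` maps target -> list of prerequisites."""
    dependents = defaultdict(list)  # prerequisite -> targets that need it
    unmet = {target: 0 for target in deps}  # count of unmet prerequisites
    for target, prereqs in deps.items():
        for p in prereqs:
            unmet.setdefault(p, 0)
            dependents[p].append(target)
            unmet[target] += 1

    ready = deque(node for node, count in unmet.items() if count == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for t in dependents[node]:
            unmet[t] -= 1
            if unmet[t] == 0:
                ready.append(t)

    if len(order) != len(unmet):
        raise ValueError("dependency cycle detected")
    return order


# A toy build graph: app needs lib, lib needs utils.
print(topo_sort({"app": ["lib"], "lib": ["utils"], "utils": []}))
# -> ['utils', 'lib', 'app']
```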

These may not be used day-to-day, but a day will come when knowing what to reach for can save you from a mind-numbingly slow program or from reinventing the wheel, or, when you're presented with numerous options, help you find the one that best fits the problem space you're tackling.
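And for item 5, a toy bloom filter: constant-space membership tests with false positives but no false negatives, which lets a cache skip expensive lookups for keys that are definitely absent. Deriving the k bit positions by slicing one SHA-256 digest is a simplification for this sketch; real implementations typically use multiple independent hash functions:

```python
import hashlib


class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Slice k 4-byte chunks out of a single digest and map each
        # to a bit position (a simplification for this sketch).
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            chunk = digest[i * 4:(i + 1) * 4]
            yield int.from_bytes(chunk, "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # False -> definitely absent; True -> probably present.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))


bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))   # True
print(bf.might_contain("user:999"))  # almost certainly False
```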


Some great points @dwu !! The part about aspects of CS becoming the common denominator makes total sense. And I agree that knowing the right tool can save a lot of time and effort down the line, but I think tech interviews (for entry-level positions at least) just test in-depth knowledge of the most basic structures. So even though the candidate might not know these advanced abstractions yet, at least the interview can assess whether they'll be able to pick them up in the future!

The one thing that I didn't mention here is what it fails to test for: will they actually execute on the job, deliver things on time, be willing to raise concerns and find resolutions, show an ability to lead, etc.?

Something that is also super important to me, and I believe to many hiring managers, yet that many companies forget about, is how to gauge whether someone will actually perform. I would go so far as to say that the following outperform raw skill/talent, or the illusion of it that comes from using toy algo/DS problems as an analog for it: grit, perseverance, and curiosity.

But how, one might ask, can you measure these? There are certain signals I've come to notice, albeit anecdotally (in no specific order). Public repositories (non-forks) on people's GitHub accounts can reveal coding style, level of experience, attention to detail, willingness to document, and robustness (tests). They can also reveal what languages people are interested in and what sorts of projects they've taken an interest in beyond the work listed on a resumé.

Another signal, particularly for non-traditional programmers/SWEs, is simply delving into their knowledge of different systems beyond their immediate abstraction: have they contributed to open source, are they familiar with RFCs/specs (which prevents a lot of wheel reinvention), do they follow or admire particular luminaries in software (ones who craft it; not Elon Musk or Steve Jobs), and why? Do they know that Linus Torvalds is behind Linux and Git? I lay these out because such knowledge is rare, and I think it reveals (more often than not, IMHO) whether people take special care with the software they write and whether they've peeled back the covers. I've met a lot of people who have never looked at what's under the hood, whether it's something a colleague implemented or some piece of software they use a lot. Questioning the status quo can lead to measurable improvements in code quality, or to removing unnecessary, bespoke abstractions (less code always means less to maintain).

GitHub repos, where available, can also be a good way for small and medium-sized companies to discover talent outside the big-N firms. I don't think people just come across this stuff without genuine curiosity or without having labored through it. Even being able to explain different algorithms given a scenario, and to talk intelligently about them, can reveal a lot more than whether someone can code up a solution under duress…


100%. Most companies choose these quick tests for ease of administration; they're prioritizing their own time over netting the best talent.

As a former recruiter for a number of smaller tech companies, I've seen everything, from a 2-3 hour in-depth pair-programming interview (where candidates sat with an engineer and problem-solved an actual, day-to-day problem the engineers faced) to a written coding exam (testing their conceptual knowledge).

At the end of the day, if companies are smart, they're measuring the candidate's ability to reason and problem-solve quickly and communicatively, not the superfluous details of the actual questions they ask.


Love it! Couldn’t agree more.

Interesting read on how a company is using new ways to assess their candidates: https://heap.io/blog/engineering/finding-a-better-way-to-interview-engineers
