Decoding Interview Coder: My Weekend of Cheating (and What I Learned)

Anas Khan

Note: My views are my own and don't reflect the opinion of my employer.

I'm a final-year CS engineering student, and like many of you, I've spent significant time preparing for and attempting more than 30 online coding assessments on various platforms: HackerRank, HackerEarth, SHL, and others. Over time, I've discovered quite a few loopholes and exploits in these platforms and even reported some of them to the respective platforms.

Later, through a series of events that deserves its own blog post (coming soon), I ended up interning at HackerRank! The first thing I wanted to do here? You guessed it: work on fixing those loopholes. Knowing the various exploits and how to leverage them to game online assessments had been fun, but it also made me the ideal person to continuously test the platform and fix those loopholes. So far, I've analyzed and reported over a dozen vulnerabilities while at HackerRank. Shameless plug: I'm that intern!
Lately, I've been hearing buzz around a tool called "Interview Coder", which promises to make cheating "undetectable". I was even more intrigued by the story of the 21-year-old dropout who built it. I immediately knew I had to check it out, so I tested it over a weekend on multiple online tests. Here's what I found.

First Impressions of Interview Coder

The first thing you notice on Interview Coder's website is the big "F*ck Leetcode" message. I thought that was pretty cheeky, and frankly, I was glad someone finally called it out. I've been vocal about changing traditional online assessments myself, lobbying hard for better assessments that allow the use of AI and simulate real-world problems instead of making candidates grind LeetCode-style problems.

However, it's hard not to judge a tool explicitly designed to help people crack interviews while going to great lengths to stay hidden. Ethically, that doesn't fall into a "grey" area so much as a "pitch-black" one. Taking external help, whether AI or not, when the interview explicitly forbids it is, by definition, cheating. I'm just stating the obvious here, and I doubt the tool's author or its users would disagree. Still, I've tried my best to be unbiased and evaluate this tool from the perspective of a candidate who wants to use it to cheat (for whatever reason), to see how effective or helpful it actually is.

Taking it for a spin

The installation process was relatively straightforward, requiring screen-sharing permissions. The interface seemed simple, with very few actions a user could take. I’m a big fan of minimalist design, so kudos on that. Almost all actions have keyboard shortcuts, which is also really helpful, especially to avoid raising suspicion in an interview. Pretty neat!

But then I noticed there was no "Forgot Password" flow! That’s strange. Even the most basic CRUD app tutorials cover this. If you happen to lose your hastily created password for the $60/month subscription, you’ll have to reach out to the author and wait for their reply! Hmm, this already looks more like a college undergrad’s hack than an actual product.

The UI also doesn’t display which account you're currently logged into. While this is not a deal-breaker, it is another example of the overall lack of polish and attention to detail.

I also found the layout unfriendly. Most of the time it worked, but sometimes I had to scroll not just vertically but horizontally, even for relatively short code snippets. Python one-liners were especially painful, often truncated or requiring excessive scrolling. I accidentally discovered that zooming out on the overlay helps slightly, but you're still limited to viewing a certain number of lines.

As for the overlay, it's semi-transparent with no way to adjust its opacity, and it significantly blocks the view of the editor. I was forced to constantly toggle between the tool and the editor, or awkwardly position them side by side. This added significant cognitive load, making it harder to focus on the actual problem.

At this point, I was already quite disappointed by the tool. But then I realized there's no obvious way to exit the app cleanly. This, combined with the fact that the tool hijacks your keyboard shortcuts, makes it difficult to resume normal work after the test.

Frequent updates further complicate things. Since the app isn’t visible in the dock, there's no obvious way to quit and update it, leaving you feeling stuck unless you manually kill the process.
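For what it's worth, killing the process manually isn't hard once you know the app runs as a regular (if dock-hidden) process. A minimal sketch for macOS; "interview coder" is my guess at the process name, so confirm it with `ps aux` first:

```python
import subprocess

# Force-quit a dock-hidden Electron app by matching its command line.
# "interview coder" is an assumed process name; verify with `ps aux` first.
# check=False because pkill exits non-zero when nothing matches.
subprocess.run(["pkill", "-i", "-f", "interview coder"], check=False)
```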

Problem Solvability

The core of Interview Coder's functionality, the thing it's supposed to be good at, is solving coding problems. And here, surprisingly, it fails quite often. The likely reason is that it's just a GPT wrapper packaged in Electron, primarily using (per the source code I obtained) Anthropic's Claude 3.5 Sonnet, specifically claude-3-5-sonnet-20241022. This means that any problem Claude struggles with, Interview Coder struggles with too.

System prompt and model found in the source code of webapp.
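To make "GPT wrapper" concrete: the app essentially screenshots the problem, ships the image to the Claude API with a solve-this prompt, and renders the reply. Here's a minimal sketch of that loop using the official Anthropic Python SDK; the model string matches the one in the source, but the file name and prompt wording are my own placeholders, not the app's actual prompt:

```python
import base64
import anthropic  # official SDK: pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Read a screenshot of the problem statement (placeholder file name).
with open("problem.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

# Same model string found in Interview Coder's source.
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/png",
                        "data": image_b64}},
            {"type": "text",
             "text": "Solve this coding problem. Return working code "
                     "with a brief explanation."},  # placeholder prompt
        ],
    }],
)
print(response.content[0].text)
```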

And that's a lot of problems. Unseen questions, problems that deviate from the standard LeetCode ones, medium-to-hard problems involving trees or dynamic programming: these are all areas where Interview Coder consistently falls short. In my testing, the only questions it reliably solved were the classic problems found in the LeetCode 150, which you'd probably already know if you were preparing for a technical interview. This is hardly surprising, given that LLMs are trained on vast datasets of, you guessed it, popular solutions readily available online. As of now, the tool doesn't support non-coding questions such as API and system design. I've noticed a shift in hiring where OAs now include SQL and API questions, and the tool falls flat in those areas.

While it allows uploading multiple screenshots, the application froze for me every time I tried, forcing repeated restarts. The "debug" feature, which is supposed to help you fix your code, seems similarly broken. During my testing, it consistently caused the application to freeze, forcing me to start over and request a completely new solution, which also often didn't work.

But here's the real kicker: even when it does manage to generate a "solution," it's often presented in such a cryptic, convoluted way that it's difficult to understand, let alone explain to an interviewer. The variable names are generic, the code style is textbook-bland, and there's a heavy reliance on list comprehensions and other techniques that scream "LLM-generated." I've seen plenty of human-written code and LLM-generated code. Any interviewer (or plagiarism detection mechanism) would immediately flag this code as AI-generated and highly suspicious. No questions asked.
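To illustrate what I mean, here's a made-up example in the style I kept seeing (my own illustration, not actual tool output): generic names, textbook structure, and a dense comprehension where a simple loop would read better:

```python
# Hypothetical illustration of the "LLM house style", not actual tool output.
def solution(nums: list[int], target: int) -> list[int]:
    # Map each value to its last index, then scan for a complement.
    seen = {num: i for i, num in enumerate(nums)}
    return next(
        ([i, seen[target - num]]
         for i, num in enumerate(nums)
         if target - num in seen and seen[target - num] != i),
        [],
    )
```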

So, is it really worth the buck?

Now, let's talk about the pricing, and boy, is this where things go from bad to worse. Interview Coder charges a whopping $60 per month for 50 credits! I think $60 is exorbitant for what is essentially a glorified GPT wrapper, one that struggles with anything beyond the most basic, well-trodden LeetCode problems. You could get far more value (and significantly better performance) by directly using newer Claude or GPT models with your own API keys, at a cost of a few dollars at most. Heck, Gemini 2.5 Pro is free. Why not let users leverage that instead of forcing them into your expensive, server-dependent ecosystem?
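Some back-of-the-envelope math makes the markup obvious. The per-question token counts below are my own assumptions, and API prices may have changed since writing:

```python
# Back-of-the-envelope cost comparison (assumed figures, not measured data).
# Claude 3.5 Sonnet API pricing at the time of writing: $3 per million
# input tokens, $15 per million output tokens.
IN_PRICE, OUT_PRICE = 3 / 1_000_000, 15 / 1_000_000

# Assume a generous ~2,000 input tokens (screenshot + prompt) and
# ~1,500 output tokens (solution + explanation) per question.
per_question = 2_000 * IN_PRICE + 1_500 * OUT_PRICE
print(f"Direct API cost per question: ~${per_question:.3f}")  # ~$0.029

# Interview Coder: $60 for 50 credits, one credit per question.
print(f"Interview Coder per question:  ${60 / 50:.2f}")       # $1.20
```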

Considering the number of times the tool gives incorrect answers, the credit system starts to feel like an intentionally designed “dark pattern.” Each wrong answer still costs you credits. You can burn through 10-12 credits in a single test, especially with restarts or the unreliable "debug" feature.

Additionally, there's absolutely no consumer protection:

  1. No Free Trial: You can download and install the app, but you can't log in and solve any problems until you buy it. There's no way to test the tool's effectiveness before committing to the exorbitant monthly fee.

  2. No Refund Policy: If the tool doesn’t work on your machine, you can’t request a refund. There’s no clear information on which machines or OS versions are supported. In my testing, I encountered different results across various OS builds.

This doesn't seem like a fair pricing model, especially considering that the users of this tool are primarily job seekers, often students, who are desperate to pass these interviews. It's particularly galling when you consider why the tool blew up in the first place: the initial builds of Interview Coder were open-source and allowed users to bring their own API keys. That approach made perfect sense: it eliminated the unnecessary server-side component, reduced costs, and gave users more control over their data and privacy. But somewhere along the way, the focus clearly shifted from providing a useful tool to maximizing ARR, resulting in this overpriced, underperforming mess.

Security Issues

But here's where things get really interesting. The tool isn't so much built as it is "vibe-coded", mostly using Cursor AI. Don't get me wrong: I have nothing against vibe-coding. In fact, I do it a lot, but mostly for building prototypes and quick hacks. I take comfort in the fact that if the auto-generated code doesn't work, I'm fully capable of debugging it and making it work. I'll never blindly push auto-generated code to prod and hope it works, and I certainly won't charge people for hacks built that way until I've polished them into production-quality products.

This is where Interview Coder differs. The security and authorization are lacking, to put it mildly. To demonstrate just how bad it is, I’ve attached a demo below – yes, the supposed $2M ARR project completely falls apart due to basic authentication failures. I can bypass the entire payment system with a few lines of code. This exploit should get patched, but I'm curious to see how long it takes the author to fix it.

The Hack

This isn't some sophisticated exploit; it was right there in plain sight. The project's commit history is also interesting. The author attempted to make the repository private, but couldn't because of its 2k+ stars, so he resorted to rewriting the commit history. But he forgot about the other branches, a novice mistake. Traces of previous versions, including API keys, were still accessible.

Other branches are still visible on Interview Coder's GitHub repo.
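This is exactly why rewriting history on one branch doesn't make leaked secrets disappear: every other ref (and every fork and clone) still points at the old commits. Here's a quick way to check for yourself, sketched in Python around plain git commands; the repo URL is a placeholder, and "sk-ant-" is Anthropic's standard API key prefix:

```python
import subprocess

# Sketch: rewriting history on main doesn't rewrite other refs.
# Clone the repo (placeholder URL), then scan every surviving branch.
subprocess.run(
    ["git", "clone", "https://github.com/example/repo", "repo"], check=True
)

# List every remote branch that survived the history rewrite...
refs = subprocess.run(
    ["git", "-C", "repo", "branch", "-r", "--format=%(refname:short)"],
    capture_output=True, text=True, check=True,
).stdout.split()

# ...and grep each branch's tree for strings that look like API keys.
for ref in refs:
    hits = subprocess.run(
        ["git", "-C", "repo", "grep", "-n", "sk-ant-", ref],
        capture_output=True, text=True,
    ).stdout
    if hits:
        print(f"Possible leaked key on {ref}:\n{hits}")
```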

It's a clear demonstration of a lack of basic development skills and security best practices. Earlier, the server component (interview-coder-webapp) was public too; now it's not. With such major flaws, I find it hard to trust the tool with screenshots of my system, which may capture my interview, my face, and other sensitive information.

It was a really cool idea, and I might even have considered using the tool in some of my interviews (not because I think it's ethically right, but because I'm sometimes too bored to solve the problem myself). Sadly, it feels like a missed opportunity to build even a reliable cheat tool.

The Real Cost of Shortcuts

But the problem isn't Interview Coder; it's the mindset, the idea that there's a quick fix, a way to "game the system" without actually putting in the work. And that, my friends, is a dangerous illusion.

Let's be brutally honest: using a tool like Interview Coder, even if it did work perfectly (which it emphatically does not), is a short-sighted strategy. You might, just might, manage to squeeze through an interview. But what happens next? What happens when you're faced with real-world challenges, complex codebases, and the pressure of delivering actual results? You'll be exposed. You'll be overwhelmed. And you'll have cheated yourself out of the opportunity to actually learn and grow. Simply clearing an interview doesn't amount to much; I'm aware of many companies that have immediately fired candidates once they realized the new hires were not as good on the job as they appeared in the interview.

The tech interview process is flawed, yes. We all acknowledge that. But the answer isn't to find more sophisticated ways to cheat; it's to advocate for a better system. A system that values genuine problem-solving skills, collaboration, communication, and the ability to learn and adapt.

What could better interviews look like?

So, while I agree with Roy that LeetCode-style interviews need an upgrade, and I personally would enjoy a different interview setting, what could such an interview look like?

I personally appreciate companies that take the effort to dig into my existing coding chops even before the interview happens. They look at my side projects and GitHub activity and talk to me about those. They try to understand what I'm really passionate about. I don't mind take-home tests; in fact, I think take-homes are much better than live pair programming, since they remove the artificial time pressure and let me code in my natural zone.

A good interviewer would give me messy data and ambiguous requirements rather than abstract puzzles. I think the best way for companies to gauge my instant value-add to their team would be to just give me their repo and ask me to fix an actual open bug.

The most enjoyable interviews I've had were the ones where the interviewer acted more like a partner in solving a complex problem and brainstormed different ideas with me to arrive at a solution. If you can collaborate and communicate well with the interviewer, you can be reasonably sure you'd work well with the rest of the team too.

To Roy: A Plea for Responsibility

Roy, I won't sugarcoat it: Interview Coder is a bad product. It's poorly designed, insecure, and overpriced, and it promotes a fundamentally dishonest approach to the tech interview process. You're clearly a talented individual. You have the skills to build something genuinely useful, something that could actually improve the hiring landscape. Instead, you've created a tool that exacerbates the existing problems and preys on the anxieties of vulnerable students.

It's interesting to see the response to Interview Coder and similar tools from the community. New projects are emerging, aiming to detect and counteract these methods. It’s clear that this space is evolving rapidly. People want real solutions, not shortcuts that undermine the integrity of the interview process.

I urge you to reconsider your approach. Rethink Interview Coder. Learn from this experience and use your skills to create something that improves the system rather than degrades it.

You've been pushing the narrative that the only way to crack interviews and land jobs at top companies is either two years of LeetCode practice or using your tool. I loved the marketing and the growth, but let's not ignore the reality: your Columbia CS degree and years of LeetCode grinding played a huge role in your Big Tech offers. Selling the same dream to younger candidates is misleading and, frankly, harmful.

Final Thoughts

The tech interview process is a complex and often frustrating beast. But it's not insurmountable. And it's certainly not something to be "gamed" with shortcuts and cheating tools. The real key to success is to focus on building genuine skills, developing a deep understanding of fundamental concepts, and honing your ability to communicate and collaborate effectively. Invest your time and energy in becoming a better engineer. That's the only real shortcut to a successful career.

And as for me? I'll continue my work at HackerRank, and strive to create a fairer, more effective, and more meaningful assessment process (PS - y'all will see in some time). It's a long road, but it's a journey worth taking. Ultimately, we all benefit from a system that values genuine talent and integrity over the ability to memorize LeetCode solutions. Now, back to battling those exploits… and maybe finding a proper way to completely exit that darn Interview Coder app without restarting my machine.
