At Rubrik, we maintain an extremely high talent density. We build in-house online tests with mostly original questions to identify the right talent, and we invest a lot in this. Attracting and hiring the right talent is extremely important for us.

Sadly, with LLMs and the Telegram mafia, cheating and plagiarism are at an all-time high. Yes, you read that right: there are Telegram channels 'selling' solutions during online assessments.

We try to stay one step ahead. How? In my teams, we already know the answers from all the popular GPT clones. We know what mistakes they make and which tests fail with their answers. My colleagues are on the Telegram channels as well, posing as candidates, so we generally know, line by line, the exact solution being peddled. That's why you can see that many candidates got rejected by us despite scoring 100%.

Next time around, I am thinking of starting a Telegram channel myself and selling incorrect, 'almost working' answers to online assessments. It would be a good addition to my team's morale budget. So beware: you might be buying 'solutions' from the hiring manager. 😉

Jokes apart, my humble appeal to students: don't fall into the trap and give in to peer pressure. Rely on your own capabilities. #integrity

P.S. There will be some false positives/negatives. My heart aches for such cases. We make it a point to spend the needed time and effort validating submissions to keep these to a minimum.
Hiring should focus more on a candidate's ability to learn and adapt than on testing static knowledge. Technology is evolving so quickly that relying on a few original questions to evaluate talent risks missing the people who can drive innovation. I appreciate the effort to ensure integrity in the process, but designing tests to counter AI-generated solutions feels like fighting against progress. Why not make AI part of the process instead? For example, you could assess how candidates use AI to tackle real-world problems, develop creative solutions, or adapt to unexpected challenges. That would give a better picture of their potential to succeed in an AI-driven future. Rather than sticking to methods that may be outdated, it could be worth exploring new ways to evaluate talent that match the pace of today's advancements.
What we need is a better way to evaluate candidates' coding skills, not a way to check whether they plagiarized while solving DS/algo questions (LeetCode or your own original questions, it makes no difference). The industry will have to evolve beyond evaluating candidates on their ability to solve DS problems, because all that proves is that the candidate spent a couple of months on LeetCode, and that shouldn't classify them as the "right talent".
I am glad that someone is talking about it. I support your take on this issue. If not addressed properly, it could discourage motivated individuals from working hard.
As a student building my career with pure honesty, I strongly believe that such cheating practices are not just unethical but also a step backward for personal growth. Success earned through genuine effort is what truly counts, and shortcuts like these only lead to a hollow foundation for one's future. 🙂
Loved this. One wouldn't expect anything less from Rubrik.
I feel sorry for false negatives; some people do work really hard!🙂
I'd disagree with your take. While blindly copying solutions is unethical, engineers should not refrain from leveraging AI to arrive at a better solution. It's time interview modules and techniques were changed to suit current industry needs.
It seems you may be prioritizing ‘protecting’ your current process over evaluating whether the process itself might be flawed. I believe coding challenges—particularly take-home ones—are a poor way to assess an engineer’s effectiveness or ability to collaborate. Engineering involves much more than coding, such as problem-solving, understanding business objectives, and effective communication.
Awesome, Pulkit Kansal! If every company/platform starts doing this, plagiarism checks will become much stronger.
Pulkit Kansal It is a rat race, and with the given cutthroat competition, every hardworking person wants an edge in preparation or in the process they are going through. Some suggestions on how this could be mitigated:
1. A larger problem set, such as 200+ problems for OAs or PS rounds and 20+ for SD rounds.
2. Keeping problem statements as open-ended as possible.
3. Using extendable, non-LeetCode problems in PIs.
4. Decentralized hiring, where each team uses unique questions.