From the ExperiencedDevs subreddit: this developer wanted perspectives on candidates being lazy with their take-home interviews because of AI. What should they do about it?
📄 Auto-Generated Transcript
Transcript is auto-generated and may contain errors.
Hey folks, I've got a couple of different topics I want to go through today, so let's start with AI in interviews. There's a post on the ExperiencedDevs subreddit where someone was asking about the take-home interviews they run. Candidates are using AI on them, which I think is okay; that's part of how they're framing their interviews. But they have this problem where candidates are submitting their work without actually testing it. They just have the AI make it and then they send it.
So they're asking: how do we set better expectations? For me, the first thing that comes to mind is that if a candidate is AI-generating things and submitting them truly without testing them, that's probably a no-hire. To me that's an immediate signal. I feel like I shouldn't have to tell anyone submitting a take-home piece of work that they should run the thing and feel good about it first. If that isn't super obvious to someone, I feel like that's going to be an issue. Now, maybe if someone is brand new to programming, I don't know, maybe not.
But I still think that's an almost fundamental thing I would expect anyone to do. Now, what we don't know from this post, when this person says "they're doing it with AI and submitting it without even testing it," is what the candidates are building and how it's being tested. Is this person saying that, statistically, a lot of errors show up in the submissions? And when we say errors, what are we talking about? Something where you programmatically get a wrong answer? Some UX you navigate where edge cases are broken? We don't actually know, so I think there's a little bit of room for misinterpretation, or for misconstruing the story.
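On the "you should probably run this thing" point, even a throwaway script counts as validation before you submit. Here's a minimal sketch, using a made-up take-home task (a `most_frequent_word` helper) that is purely my own illustration, not anything from the post:

```python
# Hypothetical take-home deliverable: return the most frequent word in a string.
def most_frequent_word(text: str) -> str:
    counts: dict[str, int] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    # max() with a key function picks the word with the highest count.
    return max(counts, key=counts.get)

# The bare-minimum check before submitting: actually run the thing
# on a couple of inputs, including at least one trivial edge case.
assert most_frequent_word("the cat and the hat") == "the"
assert most_frequent_word("one") == "one"
print("smoke tests passed")
```

Nothing fancy; the point is that a candidate who ran even these two asserts would catch the most obvious breakage before hitting send.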
It very well might be, just to use an example, that more junior developers are applying for the role and the take-home they've been given is a little complex, or has vague requirements. They put it together using AI, and let's say that part is fine given the expectations set by the people hiring. Maybe they are testing it and trying it out, but there were other things that were supposed to be done with the project that weren't totally clear. So they do their best, and then the people hiring say, "No, you missed this scenario, it's all broken." We just don't really know. Questions like this are tricky because I'd want to go back to the person asking and get more details. But ultimately, their question was really about how to make it clear that they expect people to actually validate their AI-generated programs, and my simple answer is that you don't need to do much more than literally say so. That you would even have to seems kind of nuts to me. But I do speculate that maybe what they're asking candidates to build in the take-home isn't clear.
That's sort of my guess. But I didn't just want to talk about this specific example; I wanted to talk about AI in interviews in general, because I don't think I have a well-formed opinion on this yet. I certainly don't feel like I have solutions to some of the sticking points or challenges, so I figured if I get talking about it, maybe I'll start to formulate them a little better. If you're not new to the channel and you've seen other videos, you've probably heard me mention that for software engineering interviews, I'm not a fan of LeetCode-style questions.
In fact, LeetCode is just one particular form of this. I'm simply not a fan of any question you can memorize, because the value of that is so low and, in my opinion, not telling of much at all. Why would you waste everyone's time doing that? That's my personal opinion on it. And I'll say this without trying to be condescending or insulting to anyone, because I can understand why they would do it, but I see people making courses around things like the top questions you'll get asked as a React developer, or as a C developer, in an interview. If you're getting asked questions like "tell me about this language feature," isn't that something you could literally memorize, or look up on the spot? What is the value in a question that just proves someone has used a language before? There are arguably significantly better ways to gauge someone's level of experience than how much effort they put into memorizing things. I just don't see the value in it for either side, personally. But here's the tricky part: what is the solution? For me, first of all, I don't care as much about the language-specific stuff, because it's a programming language and you can learn more.
The caveat, and I've talked about this in videos before, is that in my experience, if you're at a smaller company like a startup, you don't have the resources to ramp someone up from scratch in a programming language. It depends on their level: if they're way more senior, maybe they can pick up a language fast. But sometimes every day counts, so you want someone who can come in and be programming in the language right away. I'm not saying that's right or wrong; I'm just saying that's the need you're hiring for. In those cases I would lean more on having candidates demonstrate the language. But how do you do that effectively? For me, if you can demonstrate that you're writing code or navigating code in that language, that's more helpful.
But again, how do you do that in practice? It's kind of tricky. So, when I'm thinking about AI in interviews, there are a million different ways we could do this, but if we keep the traditional big-tech interview, I feel like LeetCode questions start to fall apart even more, because what types of questions have the most examples on the internet? These dumb LeetCode questions. If you're allowed to use AI in an interview, and the entire time your business has been asking LeetCode questions and just looking for the answer at the end, which unfortunately has been my experience as both an interviewee and an interviewer in recruitment loops, then what are you expecting? I often see people focus on whether the candidate got the most optimized final answer, which I'd argue is almost the least important part. Obviously candidates are going to put the question into any LLM, and it's going to blast out code that people have debated as the most optimal answer anyone can think of. There you go: final answer, give me the next question, just pass everything through to ChatGPT or Gemini or Copilot. Done, done, done. So what value are you getting? If it's not obvious: I think that's not going to work. But if you change your interview approach, even if you still want to stick to LeetCode, I think you really have to go back to getting candidates to explain, which you should have been doing in the first place, right?
Like I said, when you're hyper-focused on the final answers to these questions, so what? You're asking questions that inherently have tricks to them most of the time, tricks you need to find to get the most optimal solution. So if a candidate doesn't get the most optimized final answer, wouldn't you just want them to explain their thought process? Isn't the point to see how people think through things? Or do you truly need someone who has found the trick, and in the age of AI you're still expecting them to manually go find it? To me it all seems kind of weird. So I would hope that if you're going to ask LeetCode-style questions, it's really just to get to a point where you have code and then you have someone explain it: why is this effective, what is it doing, walk me through it. I would think the entire point is understanding and working with the algorithm. Now, if you're trying to test that someone can create the algorithm, if that is the goal of the interview question, then I don't think you want AI for that, because the AI is going to create the algorithm. I'm not saying there's anything wrong with that; I'm saying you need to know what you're testing. Testing that someone can prompt an LLM to get an algorithm is one thing; testing that someone can do it themselves is a different thing. So as AI comes into the conversation about how it's used in interviews, I think people have to go back to understanding what they're trying to gauge, because I don't think most people in interview loops actually know, which sounds kind of terrifying.
I don't know, maybe I'm super jaded by all this, but I do not think that most people in interviews actually know what they're trying to gauge. A lot of the time I think that's because there isn't good training around it. The training is "do you know the script," not "do you know why you're doing this." So, on LeetCode-style questions: never liked them. But if your goal is to have someone explain or navigate code, then yeah, I wouldn't be opposed to using them for that. If someone said, "Look, we just need a scenario to walk through. I'm going to pick a LeetCode question out of this hat, you're going to pump it into an AI tool and get the code, and now that we have the code, walk me through how it works," I think that could still be a very useful interview question, even as someone who doesn't like LeetCode, because now you're just looking through an algorithm. Walk me through it: how does it work? Now, if your entire interview strategy is to prompt the LLM for the code and then prompt it again with "now explain this to the big-tech interview panel," that's kind of cheating. But if you, as a human, are trying to walk through the code and explain it to get your thought process across, I think that's super helpful. And when you're stuck, demonstrate that to the interviewers. If part of what you're interviewing for is whether someone knows how to use AI tools, then fine: you're stuck, so what do you do? Even before AI, if the task was "sort this collection," you couldn't just say "cool, I'll call the sort method" and be done. It's the same thing here. Don't just prompt the AI to explain the whole thing. Tell the interviewer what's going on and where you're stuck, then demonstrate that you can use the tool, give it the context to get the next part of the answer, and continue walking through and explaining. And like I said, if you're trying to see how someone creates an algorithm, then you don't want AI to one-shot it, so I'd give that as the constraint: we want to see how you think through this, so start building it without using AI to one-shot it. If you're open to it, and I think we need to move in this direction, you could still allow AI in the interview to assist with the algorithm generation. Although, as I'm talking through this, I don't know how you'd police it; someone could immediately one-shot the thing and say, "Oops, sorry I did that." Would you just say, "Okay, here's the next question, don't do it again"? The number of questions you could pull out of a hat is seemingly infinite, so maybe it doesn't matter. But yeah, I think we have to go back to what we're actually trying to interview for.
So that's coding, maybe. Personally, I'm kind of interested in the direction of project building. And bringing it back to this Reddit post, by the way, I'm not talking about soft skills at all. In my mind, if you need an LLM to talk about soft skills, then we're missing part of what's going on, so for all my soft-skill questions I probably wouldn't lean on AI. One thought that comes to mind: for something like "if you were put in a sticky situation, how would you navigate it," maybe conversing with an LLM to generate ideas could be interesting. But anyway, on to project-based stuff, still talking about coding, and I'm kind of rushing through this because CrossFit's coming up.
I do like project-based interviews, and I'm biased. Everyone has their own biases, and mine is that project-based work is what I would personally excel at, because I like to build stuff and I want to show you that I can build. That was true before AI, too. So if you're going to run a project-based take-home interview, my thought is that candidates should be allowed to use AI if your company allows AI for its developers, and I think most do. Let people operate the way they would when building things, at least in the same direction you build things, and then have them explain it. I don't think the interview finishes when they submit a project and you say "great." I would want the candidate to end up explaining what they've built: walk me through it, you built this thing. It's like any code: if you're going to use AI to generate it, you'd better be able to explain what the hell is going on before you think you're done. So anyway, those are some of my first thoughts. I'll need more conversations to firm that up in my mind, but that's what I'm thinking right now. At least in my experience there's more flexibility coming with AI for interviews, though I've not yet been in a position where it's come up to be used. Kind of interesting, but things are changing, and we'll see how it goes. Thanks for watching. I will see you in the next one. Take care.
Frequently Asked Questions
These Q&A summaries are AI-generated from the video transcript and may not reflect my exact wording. Watch the video for the full context.
- How should candidates approach using AI for take-home software engineering interviews?
- I think it's okay for candidates to use AI for take-home interviews, but they should definitely test and validate the AI-generated code before submitting it. Submitting untested AI-generated work is a red flag for me and likely a no hire, because I expect candidates to run their code and ensure it works properly.
- What is my opinion on using traditional LeetCode-style questions in software engineering interviews with AI?
- I'm not a fan of LeetCode-style questions because they often focus on memorization and finding the most optimized final answer, which AI can easily generate. Instead, I believe interviews should focus on candidates explaining their thought process and walking through code, which better demonstrates understanding and problem-solving skills.
- How can interviewers adapt their process to effectively evaluate candidates using AI tools?
- Interviewers should clarify what skills they want to assess and consider allowing candidates to use AI as a tool during interviews, especially for generating code. The key is to have candidates explain and navigate the code, demonstrate their understanding, and show how they use AI to assist their problem-solving rather than just producing final answers.