From the comments, a viewer wanted to know how to measure quality when the QA team is regularly rejecting changes and ultimately causing feature development to slow down. Here are my thoughts.
Auto-Generated Transcript
Transcript is auto-generated and may contain errors.
Hey folks, we're going to the comments today for another question. This one is from Carlos Ortiz 8789, and it's the second part of a two-part question. It says, "How do you measure quality when something is completed fast, but QA usually rejects it, so at the end it takes longer time?" Great question. I lived through this in startup land for many years before we had to make changes to our software development life cycle. And what I would say, and this is going to be an oversimplification, is that I do not believe in developing software where developers build the thing and then hand it to QA as the gate. I do not believe in it. I have lived through it for years and I have never seen it work effectively. I am not here to say that you cannot do it or that it cannot work.
I have just never seen it work effectively, and for all the things that I've tried or been part of, it has never been worth it; it seems like a flawed system to me. The reason for that, and I'm going to use the word agile here, and I don't mean for that to be the triggering word that gets everyone up in arms about Scrum or Kanban or Extreme Programming. When I use the word agile here, I mean literal agility when it comes to feedback loops. So think about any software development process where we have some type of gate, and let's remove QA for a second. Let's just talk about a bunch of programmers. Let's pretend there's no such thing as testers or QA as a whole role to go look at this stuff.
We're just talking purely about code. If you were to go build a really big feature, and now you need to get it reviewed, and you put it up to a group of reviewers and the pull request is very big, you're not going to expect that to go well. There are a lot of examples of this kind of stuff where people say, "Dude, there are hundreds or a thousand files touched here. You expect that I'm going to read all of this, understand it, and give you meaningful feedback?" No. But that's an example of bringing people into the conversation too late, or doing too much before there's feedback. It's not that touching a lot of files is inherently bad or wrong.
It makes it very difficult for people to give you feedback because, number one, there are a lot of things to actually go look at, but number two, depending on what the change is, it's very likely that you've done a lot of stuff to get there. That's why a lot of files are touched, and now you're looking for feedback. And what if the feedback is, "Hey, all of this is wrong. You just went down the wrong direction"? Now you've invested all this time to get there. It's simply too late. So, how do we make that better? Well, you can break things down into smaller deliverables. You could do a draft review sooner. Maybe you're touching something and, to make the change, you truly do have to rename something or change things across an API surface, and you do need to touch hundreds of files.
But you could bring some people in early and say, "Hey, look, this is the approach I think I need to take. I'm going to give you a draft of this so you can see what it's going to look like." Then when I put the real thing together, yeah, it's going to be a lot to step through, but you know that there are two or three patterns that I had to use throughout the whole thing. And you agreed that they were good; we all looked at them and we said, "Those two or three patterns make sense. Let's do that." You bring people in early. That's how you can try to mitigate this terrible experience of "I did all this work, gave it to someone, and they go, 'Oh shit.'" So, I'm trying
to give you a different example that's not even QA, so we don't have a tester or QA kind of bias in the conversation, because I think it's the exact same type of thing. You go build something as a developer, you create the functionality, and you go, "Oh, I think I'm done. I've done all of this work. I think I'm done. Someone try it." And they go, "What the hell are you talking about? I touched it for two seconds and it doesn't work how I expect." But did you even include them in the conversation anywhere along the line? And you might say, "I did, right in the beginning. I picked up the item from the backlog or off the task board, and I sat down and talked with them. We agreed, and then I spent two weeks to build it and then gave it to them."
It's just another example of where we have this opportunity for tighter feedback loops. So I am not a fan of any software development process where you have an extended period of time with no feedback loop with someone who is gatekeeping quality or some other part of the process. I think there is a lot of benefit in doing it early and often. I can talk about this from a couple of different team dynamics where I've set this kind of thing up. To give you one example: when I worked at a digital forensics company, I managed a small team that did mobile acquisition. We wrote the software that would take data off of phones, trying to do a byte-for-byte copy of your iPhone or your Android phone, and if we couldn't, we would get as much data off as we could.
And this is not about trying to recover certain things or organize the data; it's literally just getting the data off of the device so that other software can do more work on it. Every team at that company had developers and testers, and I had the only team that did not, aside from, I guess, the DevOps teams that were spinning up. But we were a product team. Now, in the beginning I did have a tester, and we had a very interesting working relationship in terms of how the software development life cycle looked. We just did things differently because it was significantly more effective for us.
Every other team was kind of caught in this pattern of: let the devs go dev out, and when it's done they're going to give it to the testers, and the testers are going to go, "This sucks," and give it back. There are different flavors of this, right? Yes, over time the teams got much, much better at bringing people in early and often and trying to have better conversations, but it always seemed to me inherently flawed: if you're going to have a gatekeeper, it isn't a collaborative effort the whole way through, and it's just going to be surprises at the end. So on this smaller team that I had, we had a single tester, and often they acted more like a consultant for us. We said, "We need our software tested, but we cannot have you manually clicking things to do regression testing." So we're going to go build this feature.
Okay. Step one is we're going to sit down and talk about this, because we have, I would say, two major aspects, and I'm oversimplifying it. When I'm looking for QA or testers, I'm mixing those two things together, and I realize they're not the same, but that's because of how I worked with people that did testing: they had a dual kind of role. They were the user proxy, so they acted as the user trying to use the software, but they were also trying to help make sure that it wasn't busted. So: is it usable? Can I get the functionality that I need to carry out the work that I'm trying to do? Is it intuitive? As a user, am I happy using this? And then the other part of that was: let's look for edge cases.
Let's make sure there isn't behavior that's regressing. And when it came to the edge cases, the behavior that could regress, or making sure that our happy-path behavior was checked, we needed that tested in code. So my team and I would work with our tester to talk about those things early, but it also meant that we needed to understand things from a user perspective. So, for the user proxy: how do you plan to use this? Say we're talking about supporting a new phone, and the way that this phone works, you're going to have to go through this workflow; you might have to put the phone into a certain mode. I'm just making this up on the spot. And so we need to show a workflow in the user interface.
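To make the "tested in code, not manual clicking" point concrete, here's a minimal sketch of what locking down happy-path and edge-case behavior in code might look like. Everything here is made up for illustration; the function name, modes, and rules are hypothetical, not the actual product's logic.

```python
# Hypothetical sketch: the kind of happy-path and edge-case behavior
# we'd pin down in automated tests instead of re-checking by hand.

def pick_acquisition_mode(os_name: str, is_unlocked: bool, storage_encrypted: bool) -> str:
    """Choose how to pull data off a device (names and rules are invented)."""
    if not is_unlocked:
        return "logical-limited"   # locked device: only partial data available
    if storage_encrypted:
        return "logical-full"      # unlocked but encrypted: full logical extraction
    return "physical"              # best case: byte-for-byte physical copy

# Happy path: unlocked, unencrypted device gets a full physical copy.
assert pick_acquisition_mode("android", True, False) == "physical"
# Edge cases a tester would otherwise have to remember to re-check every release.
assert pick_acquisition_mode("ios", False, True) == "logical-limited"
assert pick_acquisition_mode("ios", True, True) == "logical-full"
```

Once behavior like this is asserted in code, the tester's time is freed up for the user-proxy work that actually needs a human.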
I'm not just going to go off and make that workflow without consulting the person who's going to be gatekeeping it, gatekeeping in the sense of "does a user like to use this, and does this work?" I'm not going to just give it to them at the end. I'm bringing them in at the beginning and saying, look, physically we need someone to press these buttons on the phone; it's the only way this mechanism is going to work. Let's talk about that. Right? So if you needed to do that as a user and you're using our software, how can we make this obvious to you? Let's you and I talk through that. So we would do this kind of thing, and then we would talk about here's how I'm thinking of building it.
Cool. Then we'd start writing some code, and as soon as we had something that was kind of functional: come on over and look at my desk, or if you're remote, we could get on a call and go through it. Let me walk you through what I have so far, just so you can see it coming together and coming alive. Because then you might start having questions already, like, "Wait a second, what about this? Do we have to be worried about this other thing over here? How are you going to test that?" And then we can have that conversation: "Okay, good point. This part of the code over here is actually kind of brittle and we don't have good tests. That's a good reminder that when I'm putting this together, I probably need to beef that up a little bit and make sure we have better test coverage."
But the whole example that I'm walking through here is that we have this feedback early, often, and throughout the life cycle of development. Not just, "Hey, I'm done. Go look at it." Over time on this team, we stopped having testers. And it's not because I said, "Get rid of the testers. They suck." We had people changing teams, and when they moved, we just ended up not having a tester come back on. But does that mean we stopped caring about testing or about users? No. Because we had that user proxy role filled by our product manager, and we also had people working for us at the company who used to be examiners or investigators. We had access to them. We could say, we're building this out.
So we'd work with our product owner, who could be the user proxy, and then we could bring in these other people and say, "Do you want to try this out? We want your feedback on it." We had that whole role covered, sometimes by multiple people, and we would still do the same thing: bring them in early. Let's talk about this. It's a collaborative effort. I'm going to be the one punching the keys on the keyboard to write the code, but I need your input about building this. And when it came to the testing part, we just continued the same process: how are we going to make sure the brittle parts get covered? Are we missing test infrastructure for this kind of stuff? That became part of development. Now, that's not to say that we couldn't have benefited from a test strategist or something like that.
It's just that we were able to channel what we were already doing with that role, and we made it part of everything we were doing when we were building. But really, the user proxy part was the most important for us. So the short answer to the question, "How do I measure quality when something is completed fast but keeps getting rejected by QA?" is: I just don't develop software that way. I don't develop software in a way where the first time you look at it is when I hand it to you "done." You're going to be building it with me. I'm going to touch on this briefly: I had another team that I spun up, and we followed the same approach. We actually took the same type of development that another, larger team was doing, and they had a much higher ratio of testers to developers. When I spun up this smaller team, I basically nuked that ratio. I brought on one tester, and I think I had five developers, so the ratio was way different from the other teams. And I structured it and said this tester will act as a test strategist. That means that when the developers are building out these features, you will be working with them, kind of like I was doing with the first team, to make sure that all of this stuff they're building has tests; from the start they're thinking about this. How are they going to test it? Do we have gaps in our infrastructure or the codebase for testing it? Because we're going to make it better. It's going to be part of delivering this.
And they had to get looped in from the start to make sure they understood the features that were getting built. This worked really well on this team because the type of work being done was very similar and repeatable. It meant that there were parts where they could feel comfortable saying, "Cool, we already have the patterns for this. We've seen this type of thing a hundred times already. Oh, wait, this one's new. Let's spend a lot more time and effort talking about that." That's where we need to be creative. That's where there's going to be a lot more back and forth. But the rest of the development on that team came down to more of a repeatable process.
So, the point is that on both of those teams I had testers or QA at different points in time, as dedicated roles, but it was never a matter of "get it done, give it to them so they can send it back to you." A way to generalize that: say some company brings me onto a team that has programmers and testers, or developers and QA, separate roles where someone is going to be signing off on the quality. I would be looking to measure the amount of rework, the number of times things get sent back, and the amount of time lost to sending things back. That would be a key metric I'd try to measure and look at.
And we're going to eliminate that. When I say eliminate, it's never going to be zero, but we try to get it as close to zero as possible, because when that metric is high, it tells me we have a ridiculous amount of inefficiency and people are not collaborating along the way. There will always be escapes; we're not perfect when we develop things. You could have a dedicated tester or QA and a dedicated user proxy, and you could be working with them the entire time through, and when it's finally done, someone goes, "Oh, wait, we did miss that." That's life, right? It's going to happen. But I would rather that be the exception than something that happens every time we build anything. So, I think this is a really cool question, and I hope that makes sense.
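The rework metric described above can be sketched in a few lines of code. This is a minimal illustration assuming a made-up ticket event format (the ticket IDs, statuses, and field names are all hypothetical, not from any real tracker's API): count how often work bounces back from QA, and how much calendar time elapses between a rejection and the next hand-off to QA.

```python
from datetime import datetime

# Hypothetical event history for two tickets (format invented for illustration).
events = [
    {"ticket": "APP-101", "status": "in_qa",    "at": datetime(2024, 1, 1)},
    {"ticket": "APP-101", "status": "rejected", "at": datetime(2024, 1, 3)},
    {"ticket": "APP-101", "status": "in_qa",    "at": datetime(2024, 1, 8)},
    {"ticket": "APP-101", "status": "accepted", "at": datetime(2024, 1, 9)},
    {"ticket": "APP-102", "status": "in_qa",    "at": datetime(2024, 1, 2)},
    {"ticket": "APP-102", "status": "accepted", "at": datetime(2024, 1, 4)},
]

def rework_stats(events):
    """Return (send-back count, days lost to rework) across all tickets."""
    send_backs = 0
    days_lost = 0.0
    last_rejection = {}  # ticket -> time of its most recent rejection
    for e in sorted(events, key=lambda e: e["at"]):
        if e["status"] == "rejected":
            send_backs += 1
            last_rejection[e["ticket"]] = e["at"]
        elif e["status"] == "in_qa" and e["ticket"] in last_rejection:
            # Time between a rejection and the next QA hand-off counts as rework.
            days_lost += (e["at"] - last_rejection.pop(e["ticket"])).days
    return send_backs, days_lost

send_backs, days_lost = rework_stats(events)
print(send_backs, days_lost)  # APP-101 bounced once, costing 5 days of rework
```

The point of tracking something like this isn't the exact numbers; it's watching the trend toward zero as collaboration moves earlier in the cycle.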
That's how I think about building software. That's what I've seen work really well. I have personally never seen it work really well where a developer goes all the way through, says, "Okay, I'm done," even if it's signed off by other code reviewers, and then hands it over to QA to go validate. I've never seen that work well. That doesn't mean it can't, just not in my experience. So, thanks so much for the question. If you have questions, leave them below in the comments or over at codecommute.com. And I always like to mention that I have three other YouTube channels. There's Dev Leader, where you can learn about C# or programming with AI tools. There's Dev Leader Path to Tech, where you can check out resume reviews or submit your resume to be reviewed. And there's the Dev Leader podcast, where I interview other software engineers, and I have a live stream every Monday at 7 p.m.
Pacific. I'd love to see you there to talk about concepts and topics just like this. See you next time. Take care.
Frequently Asked Questions
These Q&A summaries are AI-generated from the video transcript and may not reflect my exact wording. Watch the video for the full context.
- How do you measure software development quality when QA frequently rejects completed work?
- I measure quality by looking at the amount of rework, the number of times things get sent back, and the amount of lost time due to sending work back. I aim to reduce these metrics as close to zero as possible because a high rate indicates inefficiency and lack of collaboration during development. While some issues will always escape, I want those to be exceptions rather than the norm.
- What development process do you recommend to avoid frequent QA rejections and delays?
- I do not believe in a process where developers build something fully and then hand it off to QA as a gatekeeper. Instead, I advocate for early and continuous collaboration with testers or user proxies throughout development. This includes breaking down work into smaller deliverables, doing draft reviews early, and involving testers or product owners from the start to create tighter feedback loops and avoid surprises at the end.
- How did your teams handle testing and quality assurance differently to improve software development outcomes?
- On one team, we had a tester who acted more like a consultant and user proxy, collaborating closely with developers from the beginning to discuss workflows and testing strategies. Over time, we integrated testing responsibilities into development and product management roles, maintaining early feedback and user involvement. On another team, I introduced a test strategist role who worked with developers from the start to ensure proper test coverage and infrastructure, which worked well for repeatable development tasks.