My company is building $(BIG_COMPLEX_SOFTWARE_PRODUCT). As part of that, we have a MANDATORY set of automated tests that MUST BE RUN before every checkin of code. That's even enforced mechanically: the tests return an encrypted token that must be included in your checkin command, FFS.
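For readers who haven't seen this kind of enforcement, here's a minimal sketch of how a token-gated checkin might work. This is purely illustrative: I'm assuming an HMAC-signed token over the changeset ID, and every name here (`issue_token`, `verify_checkin`, the shared secret) is hypothetical, not our actual system.

```python
# Hypothetical sketch: the test runner signs the changeset ID with a
# shared secret only after the full suite passes; the checkin hook
# refuses any checkin whose token doesn't verify.
import hashlib
import hmac
from typing import Optional

SECRET = b"test-runner-shared-secret"  # held by the test infrastructure


def issue_token(changeset_id: str, suite_passed: bool) -> Optional[str]:
    """Test runner side: emit a token only when the whole suite passed."""
    if not suite_passed:
        return None
    return hmac.new(SECRET, changeset_id.encode(), hashlib.sha256).hexdigest()


def verify_checkin(changeset_id: str, token: Optional[str]) -> bool:
    """Checkin hook side: reject any checkin without a matching token."""
    if token is None:
        return False
    expected = hmac.new(SECRET, changeset_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)
```

The point of a scheme like this is that you can't forge the token without the secret, so "just skip the tests" shouldn't even be possible — which makes the workaround culture described below all the more absurd.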
Turns out those tests haven't actually been passing for ... oh, a month? Longer? I can't tell how far back it goes, because the results web page loads SO SLOWLY.
I have asked for advice, and I was told by other developers, "Oh, just ignore the results. If it fails in Step 10, you're fine and should check in."
OH NO YOU DI'INT!
Seriously, folks. We have these tests for a reason. Do you think, perhaps, one of the reasons this development branch is always broken just might be that lots of people are checking in without running the complete test suite? Even if these tests wouldn't catch many of the problems that do get checked in, we now have devs who are completely OK with checking in code that doesn't pass testing. That is a dangerous attitude, and it really, really needs to be stopped RIGHT NOW.
I swear to Bog, some of these people need a Boot To The Head!