Customers Suck!


Nunavut Pants 01-31-2019 08:06 PM

Tests? We don't need no stinkin' tests!
My company is building $(BIG_COMPLEX_SOFTWARE_PRODUCT). As part of that, we have a MANDATORY set of automated tests that MUST BE RUN before every checkin of code. That is even enforced by the tests returning an encrypted token that must be included in your checkin command, FFS.

Turns out that those tests haven't been actually passing for ... oh, a month? Longer? I can't tell how far back it goes, because the results web page loads SO SLOWLY.

I have asked for advice, and I was told by other developers, "Oh, just ignore the results. If it fails in Step 10, you're fine and should check in."

Seriously, folks. We have these tests for a reason. Do you think, perhaps, one of the reasons that this development branch is always broken just might be because lots of people are checking in without running complete test suites? Even if this test isn't going to catch many of the problems that do get checked in, we now have devs that are completely OK with checking in code that doesn't pass testing. This is a dangerous attitude and really really needs to be stopped RIGHT NOW.

I swear to Bog, some of these people need a Boot To The Head!

csquared 01-31-2019 09:04 PM

I can so relate. Had to troubleshoot a production problem today (by the way, I am on sick leave). B2B web site that was shut down several months ago when the customer left. We have a new customer. Been testing in QA for a while now. Went live on the prod site today.

Nobody checked to see if the prod site was still functional.

TheSHAD0W 02-01-2019 12:55 AM

Have fun tracking down the changes that result in test failures. Always fun when you've got a bunch of changes in between.

Nunavut Pants 02-01-2019 09:59 PM

Fun fact that I didn't know before:

We have an email list called "(this specific test suite) tiger team". I do wish one of them had bothered to look at the test results some time in the last month or so; they might actually have started doing something about it at that point!!

So far all we have is a bunch of "well there are a lot of different things that could cause this to fail", "we should increase the timeout so the test might pass", "let's keep the system alive for 6 hours so someone can troubleshoot" (uh, most of these are run overnight so they're gone by the time we get back in!!), and so on. Well, it's possible that someone will actually track down the source of the error if they keep the system around...

Buzzard 02-02-2019 02:56 AM

Since nobody can be bothered to make sure their code passes, the problems can be ANY FREAKIN WHERE in the whole program. And, of course, now that the stupidity has come to light, the greatest idea anyone can have is to pull the equivalent of putting fingers in ears and "La La La, can't hear you".
Seriously, the code has GOT to have such glaring holes in it that... well, I'm sure the test suite is likely more stable and dependable than the friggin product right now. (Yes, I'm aware that the test suite is non-functional as it is)

I'm thinking that the product code is going to have to be gone through from start to finish, every last module, because of this crap.

Ironclad Alibi 02-03-2019 01:14 AM

Since the end users will end up being the beta testers who find all of the problems with the software, I know which company you work for.

I've narrowed it down to several dozen based on the programs and games I've used.

mjr 02-11-2019 06:53 PM

What testing suite are you using? Unit Tests should run fast. Even if there are hundreds or thousands of them, they shouldn't time out your system.

Personally, if you're checking results through a web page, you're probably doing it wrong. I know Visual Studio has built-in testing tools. And if you use NUnit, you can even create parameterized test cases with the [TestCase] attribute.

It also sounds like, if they're using a deployment script, the deployment script may need work. They should have a policy in place (I don't know why they don't) that if the tests don't pass, the code won't build (or something like that). And in that case, it should kick the failure back to the dev team or project manager to let them know.
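That kind of "no green suite, no checkin" gate can be sketched in a few lines of Python; the command being run here is a trivially-passing stand-in, not anyone's real build system:

```python
import subprocess
import sys

def checkin_gate(test_command):
    """Run the test suite; allow the checkin only on a clean exit."""
    result = subprocess.run(test_command)
    return result.returncode == 0

if __name__ == "__main__":
    # Stand-in for a real suite: a command that trivially passes.
    if not checkin_gate([sys.executable, "-c", "pass"]):
        sys.exit("Tests failed -- checkin blocked.")
    print("Tests passed -- checkin allowed.")
```

The whole point is that the gate is dumb and absolute: a nonzero exit code blocks the checkin, no "ignore failures after Step 10" folklore allowed.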

Unit tests should be small, fast, and have no conditional logic in them, except the "built-in" logic of the assert statements.
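For the avoidance of doubt, a unit test in that shape might look like this; Python's built-in unittest stands in for NUnit here, and parse_port is a made-up function under test:

```python
import unittest

def parse_port(value):
    """Made-up function under test: parse and range-check a TCP port."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

class ParsePortTests(unittest.TestCase):
    # Each test is small and fast, with no conditional logic of its
    # own -- just a call and an assertion.
    def test_parses_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_rejects_out_of_range_port(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Hundreds of tests like these run in seconds, no deployed device required, which is exactly why they can gate every checkin without anyone wanting to skip them.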

Nunavut Pants 02-12-2019 05:10 AM

We don't actually have unit tests. We have three different layers of test architecture that call themselves unit tests, but they are actually full-on functional tests. As in "deploy the software to an actual device, then throw commands and data at it and see if it does the right things".

Of course, the whole thing is a huge mish-mash of homebrew scripts in six or seven different languages, ranging from Perl to Ruby to bash to Python to Curl scripting (blanking on the name for that, maybe JCL?) to C-language tests that get compiled and run once per decade whether they need it or not!

To say that I am not overly enamored with the way we do stuff would be a bit of an understatement...

mjr 02-12-2019 10:46 AM

Wow. That sounds like haphazard "Integration Testing".

Is there anyone in your office you could convince that Unit Testing is a good idea? Start the ball rolling. I did it where I work, and they were rather receptive.

Nunavut Pants 02-12-2019 10:28 PM

There are quite a few people who agree that it is a fabulous idea, and that Somebody Else should put a whole lot of time and effort into developing it.

I tried to be that Somebody Else for a little while, but wound up unable to put together anything useful...


Powered by vBulletin® Version 3.8.9
Copyright ©2000 - 2020, vBulletin Solutions, Inc.