
Tests tell stories, listen to what they say

I’m not a big fan of integration tests. They’re often unreliable and “flaky” by their very nature: they rely on file systems, networks, and databases. These kinds of tests are hard to get right. At least, it’s hard to get them stable enough to be valuable, and I often wonder if the gains outweigh the costs. However, there is one very useful thing integration tests can do for you: they can bring architectural smells to light. Do you need to bring up an entire (virtualized) cluster just to test one small component of your system? That’s a smell. Just like a unit test that needs to set up dozens of fakes, it’s a sign that your system is too tightly coupled.

I was recently working on a dashboarding application that talks to a Lucene-based search engine. The QA team had been running their integration tests by spinning up an entire cluster, complete with dozens of other components. This was the only way for them to run the application and get data into it to work with. It takes around 20 minutes to get a local, dockerized cluster running, so needless to say, I went looking for ways to run just the parts I needed. In other words, I needed to know whether I actually needed the entire environment, or whether I could just spin up an instance of the search engine for testing purposes.

A couple of hours of work was all it took to create a much smaller docker network with just the search engine, the proxy server, and the web app. It turns out that this section of the system is architected pretty well, but you wouldn’t have known it from looking at the tests. Indeed, many other parts of the “distributed monolith” really do need the whole cluster in order to function properly. I suspect that because most of the system requires dozens of components to be online, it was simply assumed that this part of the system did too.

Tomorrow, I hope to finish automating those integration tests by whipping up a small docker-compose file and a data seed script. This test setup should document exactly what is needed for this application to run. As for the tests that require dozens of components? They’re a testament to why I’ve begun to lovingly call this system a “distributed monolith”.
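For the curious, the shape I have in mind is roughly this. Every service name, image, and port below is a placeholder, not our real configuration:

```yaml
version: "2"
services:
  search:
    # Stand-in for the Lucene-based search engine
    image: registry.example.com/search-engine:latest
    ports:
      - "9200:9200"
  proxy:
    image: registry.example.com/proxy:latest
    depends_on:
      - search
  webapp:
    image: registry.example.com/dashboard:latest
    ports:
      - "8080:8080"
    depends_on:
      - proxy
  seed:
    # One-shot job that loads test data into the search engine
    image: registry.example.com/seed-data:latest
    depends_on:
      - search
```

Four containers instead of dozens, and the file itself documents the application’s true dependencies.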

Until next time,

Semper Cogitet


Teaching an Old Dog New Tricks

I had a really amazing experience this week: an opportunity to pair with a gentleman who has been programming since I was in diapers. He had only ever test driven code once before, but he was really excited about learning. I think that in and of itself is amazing, but I learned a few things during the experience as well.

Since it was only his second time test driving, I navigated while he drove. He’d seen our team pair, and he asked if we were going to ping pong (one of us writes a failing test, then the other makes it pass), but I wanted him to build as much muscle memory as possible. A smile lit up his face and I can’t tell you how happy it made me.

Our task was to parse a version number and increment the patch number on it. He immediately created a new file for the test, but I stopped him.

“Hey, wait a sec. Why don’t we take a few minutes to write a test list first?”

“What do you mean?”

“Well, I like to take a minute to think about the inputs, outputs, and edge cases, then write down any tests we think we should write. The test list is our roadmap. It keeps us on task. I don’t like Test Driven Design. I like to do just enough design first, then test drive the development.”

“Okay man. I can dig that. That’s smart. I never understood how TDD was supposed to make better code when you weren’t taking the time to design anything. So, what you’re saying is that TDD doesn’t mean ‘don’t design at all’, but ‘design just in time’, right?”

“Yeah. I guess I am. Something like that anyway.”

I’ve said before (perhaps not on this blog, but relatively often) that TDD doesn’t excuse us from good design. What I hadn’t realized is the “just in time” part. Just like everything else in my development cycle, I’m doing just enough, just in time, to be most effective while staying as lightweight as possible. It’s interesting, and I’m surprised it never dawned on me before.
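To make that test-list idea concrete, the list for our version-bump task looked something like this (reconstructed from memory, not the actual list):

```
"1.2.3"  -> "1.2.4"    simple increment
"1.2.9"  -> "1.2.10"   patch rolls past a single digit
"1.2"    -> ?          missing patch segment
""       -> ?          empty input
"banana" -> ?          not a version at all
```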

Anyway, we created our test list, and he finally got to create that test class he was so eager to write. Even though he’d barely done this before, he instinctively created the test class first. I know I had to have been smiling as I watched him. “This guy’s a natural,” I thought to myself. He copied just enough code from a neighboring test class to get the test runner to pick it up, and then I had him delete any remnants of the other test. We paused for a moment and I asked him to pick out what he thought would be the simplest test to write. He picked one and began writing it, starting by creating an instance of our not-yet-existent class.
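In a Java-flavored sketch (the real class and method names were different; these are stand-ins), what he started typing looked roughly like this:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class VersionTest {
    @Test
    public void incrementsThePatchNumber() {
        // Version doesn't exist yet, so this won't even compile
        assertEquals("1.2.4", new Version("1.2.3").incrementPatch());
    }
}
```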

“Whoa! Hold up! You’ve got a failing test.”

“What do you mean? There is no test yet!”

“Yes there is, and it’s failing. That code won’t compile. A test that won’t compile is a failing test.”

He smiled. “Okay man. I’m picking up what you’re laying down. Write just enough to make it fail, right?”

“Exactly.”

There it is again. Do just enough, just in time….

So, he created the new class, ran the no-op test, and went back to writing the test. Soon, we also had a no-op function and a failing test that actually tested something meaningful.
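Sticking with the hypothetical sketch, the no-op stub at that point looked something like this. It compiles, so the test finally runs, and the assertion fails for a meaningful reason:

```java
public class Version {
    private final String version;

    public Version(String version) {
        this.version = version;
    }

    public String incrementPatch() {
        return null; // no-op: the test compiles now, but fails on the assertion
    }
}
```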

“Okay. Now let’s make this pass.”

“Sure. Now let’s see. We’re going to need to split this string and…”

“Whoa! Slow down a little. Is that really the simplest way to make this test pass?”

“What do you mean?”

“Well, why not just hard code the return value?”

He looked at me like I was totally nuts. Maybe I am….

“Huh? I don’t get it.”

“If we hard code the return value, the test will pass. Then, we’ll go back and add a new test that forces us to change the code. If we start splitting strings right now, we might accidentally write code that isn’t tested.”

“Hmm…. I don’t know man, but you’re the expert. All right, let’s just hard code it for now.”
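Continuing the sketch, the hard-coded version really is as dumb as it looks, and that’s the point. The next test is what makes the hard coding untenable:

```java
// Inside Version: just enough to go green
public String incrementPatch() {
    return "1.2.4"; // hard coded on purpose
}

// Back in VersionTest: the test that forces a real implementation
@Test
public void incrementsADifferentPatchNumber() {
    assertEquals("2.0.10", new Version("2.0.9").incrementPatch());
}
```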

He reminded me of just how counterintuitive TDD can be when you’re new at it. Heck, it can be counterintuitive when you’re experienced at it. I’ve been doing this for years and I still feel weird hard coding that return value, but I’ve also burned myself by writing more code than absolutely necessary to make a test pass. It was a nice reminder of just how much discipline it takes to test drive.

We test drove this little function for about an hour, writing just enough test code to fail, then just enough production code to pass. At the end of that hour, we had about a dozen tests, and I could see the concern growing on his face. I had a feeling that I knew what was coming.

“Man. These tests are great, but the implementation code sucks. It’s downright awful. Are you sure about this TDD stuff?”

“Yeah. That code does suck, doesn’t it? Let’s fix that. So far we’ve been skipping the refactor step. I wanted you to get a feel for the red/green cycle. I probably should have stopped us sooner, but you were having fun and building muscle memory.”

We spent 15 or 20 minutes refactoring the code until we had something not quite as nasty as before.
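To give a flavor of it (still the hypothetical sketch; the real code, and the real mess, were bigger), the refactored function ended up in the neighborhood of:

```java
public String incrementPatch() {
    String[] parts = version.split("\\.");           // "1.2.3" -> ["1", "2", "3"]
    int patch = Integer.parseInt(parts[2]) + 1;      // bump the patch segment
    return parts[0] + "." + parts[1] + "." + patch;  // reassemble as "1.2.4"
}
```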

“Dude. That was awesome! We just completely tore the guts out of that code and it still works! More importantly, I know it still works, because the tests are green. Not bad man, not bad at all. I can see why you young guns like this so much.”

This is something that strikes me often about test driven code. Test driven implementations tend to be ugly as sin. I mean downright bad, rotten, spaghetti code implementations, but we have the tests. We can refactor. Hell, I’ve deleted entire implementations and started over, knowing that when I was done, I’d have working code again, because I have the tests. The ugly implementation is only a problem if we skip the refactor step. Don’t skip it. Even when it looks like there’s nothing to refactor, take a moment to double check. It’s really important to build that habit. Next time I teach someone TDD, I’ll be sure to make the refactor step an explicit part of our cycle from the very beginning.

We continued on like this for the rest of the afternoon. By the end of the day, we had a really bulletproof implementation ready to merge in.

“So, Chris… This is really great and all. I mean, I had fun man, but it took us all afternoon to write that one function. Is this really worth it?”

“Well, you’re right. It took a long time to write that, but let me ask you something. All those edge cases we found… if we had whipped up the 30-minute version of this function without test driving, would we have thought of them?”

“No. No way.”

“When would we have thought of them?”

“Months from now when someone reported a bug.”

“Right. Months from now, when…”

“When we’ve forgotten all about this code, and not only would it be more expensive to fix, it would have already caused real damage out in production.”

“Exactly,” I said with a smile on my face.

An old dog learned a new trick this week, and this pup learned some valuable lessons as well. I’m looking forward to pairing with him again. I’ve a feeling we’ve got a lot to teach each other.
