110

I'm attempting to get into the habit of writing unit tests regularly with my code, but I've read that first it's important to write testable code. This question touches on SOLID principles of writing testable code, but I want to know if those design principles are beneficial (or at least not harmful) without planning on writing tests at all. To clarify - I understand the importance of writing tests; this is not a question on their usefulness.

To illustrate my confusion: in the piece that inspired this question, the writer gives an example of a function that checks the current time and returns some value depending on the time. The author points to this as bad code because it produces the data (the time) it uses internally, thus making it difficult to test. To me, though, it seems like overkill to pass in the time as an argument. At some point the value needs to be initialized, so why not closest to consumption? Plus, the purpose of the method in my mind is to return some value based on the current time; by making it a parameter, you imply that this purpose can/should be changed. These and other questions led me to wonder whether testable code is synonymous with "better" code.

Is writing testable code still good practice even in the absence of tests?


Is testable code actually more stable? has been suggested as a duplicate. However, that question is about the "stability" of the code, but I am asking more broadly about whether the code is superior for other reasons as well, such as readability, performance, coupling, and so forth.

WannabeCoder
    There's a special property of the function that requires you to pass in the time called idempotency. Such a function will produce the same result each time it is called with a given argument value, which not only makes it more testable, but more composable and easier to reason about. – Robert Harvey Jul 01 '15 at 15:22
  • 4
    Can you define "better code"? Do you mean "maintainable"? "Easier to use without IoC-container magic"? – k3b Jul 01 '15 at 16:11
  • 7
    I guess you've never had a test fail because it used the actual system time and then the time zone offset changed. – Andy Jul 01 '15 at 16:34
  • 6
    It is better than untestable code. – Tulains Córdova Jul 01 '15 at 20:17
  • 15
    @RobertHarvey I wouldn't call that idempotency, I would say it's referential transparency: if func(X) returns "Morning", then replacing all occurrences of func(X) with "Morning" will not change the program (i.e. calling func doesn't do anything other than return the value). Idempotency implies either that func(func(X)) == func(X) (which is not type-correct here), or that func(X); func(X); performs the same side-effects as func(X) (but there are no side-effects here) – Warbo Jul 01 '15 at 21:18
  • 1
    Doesn't really answer your question, but you should keep in mind that good test coverage makes code easier to refactor and refactoring often leads to better code. – pdr Jul 01 '15 at 21:39
  • 2
    I think the example in the piece you reference misses an important aspect. I would have both the no-argument GetTimeOfDay and the one-argument version: have GetTimeOfDay() forward to GetTimeOfDay(now()). The no-argument version is so trivial that it doesn't need a unit test, and the other version is pure and easily testable. Client code need only use the no-argument version. – Michael Anderson Jul 02 '15 at 01:24
  • 1
    The method is about naming different periods of the day: "night", "morning", "afternoon", "evening". Why on earth would you also want to couple that to a particular time? Now you have what should be a very general method that you've placed a weird restriction on, given it multiple concerns (choosing a time, naming that time) instead of one, and made it harder to test. – Ben Aaronson Jul 02 '15 at 02:58
  • 1
    My two cents on time: time is tricky, I have seen a number of procedures fail because they sampled the time too often and took incoherent decisions based on this always changing time. I personally advise getting the "now" instant once, at the start of the procedure, and always refer to this instant afterward in general. Timing routines (such as measuring elapsed since the start) can be done without exposing the internal instant. – Matthieu M. Jul 02 '15 at 08:46
  • 1
    One thing not yet mentioned (as far as I could see): There is a strong correlation between code being A) easy to test, B) easy to refactor, and to me most importantly, C) easy to debug. The common denominator is that such code must be capable of being run out of context. – DevSolar Jul 03 '15 at 13:27
  • @MichaelAnderson good idea, but how would you unit test clients that use parameterless version and make some decisions based on the returned value? You'll still have to change system time. – Sergey Kolodiy Jul 05 '15 at 19:16
  • @RobertHarvey Think of Haskell: this is a case for purity and referential transparency. – AJF Jul 05 '15 at 21:10
  • Untestable code is better, but you don't know if it works or not. –  Mar 28 '17 at 16:15
  • This question and therefore every answer to it are conjecture and, I'd argue, therefor not suitable for this Q&A format. – A. Murray Jul 31 '19 at 16:02

12 Answers

126

In regard to the common definition of unit tests, I'd say no. I've seen simple code made convoluted by the need to twist it to suit the testing framework (e.g. interfaces and IoC everywhere, making things difficult to follow through layers of interface calls, with data that should be obvious passed in by magic). Given the choice between code that is easy to understand and code that is easy to unit test, I go with the maintainable code every time.

This doesn't mean don't test, but rather fit the tools to suit you, not the other way round. There are other ways to test (but difficult-to-understand code is always bad code). For example, you can write unit tests that are less granular (e.g. Martin Fowler's attitude that a unit is generally a class, not a method), or you can hit your program with automated integration tests instead. That may not be as pretty as your testing framework lighting up with green ticks, but we're after tested code, not the gamification of the process, right?

You can make your code easy to maintain and still be good for unit tests by defining good interfaces between them and then writing tests that exercise the public interface of the component; or you could get a better test framework (one that replaces functions at runtime to mock them, rather than requiring the code to be compiled with mocks in place). A better unit test framework lets you replace the system GetCurrentTime() functionality with your own, at runtime, so you don't need to introduce artificial wrappers to this just to suit the test tool.
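As a concrete sketch of that last point, Python happens to allow exactly this kind of runtime replacement with its standard unittest.mock library (the function below is my hypothetical stand-in, not the article's code): the production function reads the clock directly, and only the test swaps the time source out.

```python
# Hypothetical sketch: production code reads the clock directly; the test
# replaces the time source at runtime, so no artificial wrapper is
# introduced just to suit the test tool.
import datetime as dt
from unittest import mock

def get_time_of_day():
    hour = dt.datetime.now().hour  # direct clock read, no injected dependency
    if hour < 6:
        return "Night"
    if hour < 12:
        return "Morning"
    if hour < 18:
        return "Afternoon"
    return "Evening"

_real = dt.datetime  # keep a handle on the real class before patching

# The test patches datetime.datetime on the module, only for its duration:
with mock.patch("datetime.datetime") as fake:
    fake.now.return_value = _real(2015, 7, 1, 3, 0)
    assert get_time_of_day() == "Night"
    fake.now.return_value = _real(2015, 7, 1, 15, 0)
    assert get_time_of_day() == "Afternoon"
```

Once the `with` block exits, the real clock is back; the production code never changed to accommodate the test.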

gbjbaanb
  • 3
    Comments are not for extended discussion; this conversation has been moved to chat. –  Jul 02 '15 at 18:44
  • 2
    I think it's worth noting that I know of at least one language that allows you to do what your last paragraph describes: Python with Mock. Because of the way module imports work, pretty much anything aside from keywords can be replaced with a mock, even standard API methods/classes/etc. So that's possible, but it might require that the language be architected in a way that supports that kind of flexibility. – jpmc26 Jul 04 '15 at 06:26
  • 7
    I think there's a difference between "testable code" and "code [twisted] to suit the testing framework". I'm not sure where I'm going with this comment, other than to say I agree that "twisted" code is bad, and "testable" code with good interfaces is good. – Bryan Oakley Jul 05 '15 at 22:28
  • 2
    I expressed some of my thoughts in the article's comments (since extended comments are not allowed here), check it out! To be clear: I am the author of the mentioned article :) – Sergey Kolodiy Jul 06 '15 at 09:16
    I gotta agree with @BryanOakley. "Testable code" suggests your concerns are separated: it's possible to test an aspect (module) without interference from other aspects. I would say this is different from "adjusting your project to support specific testing conventions". This is similar to design patterns: they shouldn't be forced. Code that properly utilizes design patterns would be considered strong code. The same applies to testing principles. If making your code "testable" results in twisting your project's code excessively, you're doing something wrong. – Dioxin Jun 06 '16 at 19:13
  • 1
    +1 for mock frameworks that allow mocking of methods of existing classes (like the DateTime.Now method from the "How to write testable code and why it matters" article). – Christoph Nov 08 '20 at 12:14
69

Is writing testable code still good practice even in the absence of tests?

First things first, an absence of tests is a way bigger issue than your code being testable or not. Not having unit tests means you're not done with your code/feature.

That out of the way, I wouldn't say that it's important to write testable code; it's important to write flexible code. Inflexible code is hard to test, so there's a lot of overlap there, both in the approach and in what people call it.

So to me, there is always a set of priorities in writing code:

  1. Make it work - if the code doesn't do what it needs to do, it is worthless.
  2. Make it maintainable - if the code isn't maintainable, it will quickly stop working.
  3. Make it flexible - if the code isn't flexible, it will stop working when business inevitably comes by and asks if the code can do XYZ.
  4. Make it fast - beyond a base acceptable level, performance is just gravy.

Unit tests help code maintainability, but only to a point. If you make the code less readable, or more fragile, in order to make the unit tests work, that becomes counterproductive. "Testable code" is generally flexible code, so that is good, but not as important as functionality or maintainability. For something like the current time, making that flexible is good, but it harms maintainability by making the code harder to use correctly and more complex. Since maintainability is more important, I will usually err towards the simpler approach even if it is less testable.
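One common middle ground, sketched here in Python (my example, not the answer's), is a parameter that defaults to the current time: the everyday call stays as simple as the inflexible version, while tests still get a seam when they need one.

```python
# My sketch (not from the answer): an optional parameter defaulting to the
# current time keeps ordinary callers simple but leaves a seam for tests.
from datetime import datetime

def get_time_of_day(time=None):
    if time is None:
        time = datetime.now()  # the common case stays a plain call
    if time.hour < 6:
        return "Night"
    if time.hour < 12:
        return "Morning"
    if time.hour < 18:
        return "Afternoon"
    return "Evening"

# Everyday use:  get_time_of_day()
# Test use: pin the time only when a test needs it.
assert get_time_of_day(datetime(2015, 7, 1, 7, 30)) == "Morning"
```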

Telastyn
  • 4
    I like the relationship you point out between testable and flexible; that makes the whole problem more understandable to me. Flexibility allows your code to adapt, but necessarily makes it a bit more abstract and less intuitive to understand; that's a worthwhile sacrifice for the benefits. – WannabeCoder Jul 01 '15 at 15:19
  • 3
    that said, often I see methods that should have been private being forced public or package level in order for the unit testing framework to be able to access them directly. Far from an ideal approach. – jwenting Jul 02 '15 at 05:37
  • 4
    @WannabeCoder Of course, it's only worth adding flexibility when it saves you time in the end. That's why we don't write every single method against an interface - most of the time it's just easier to rewrite the code rather than incorporating too much flexibility from the onset. YAGNI is still an extremely powerful principle - just make sure that whatever the thing "you're not going to need" is, adding it retroactively will not give you more work on average than implementing it ahead of time. It's the code not following YAGNI that has the most issues with flexibility in my experience. – Luaan Jul 02 '15 at 07:45
  • 1
    I think it's important to note that your four priorities do not include "make it have tests". Some seem to lose sight of why we test, and see unit tests as a goal rather than a tool. The code should be easy to maintain, period. In most cases, tests help that, so we add them. If for some odd reason you find they make things worse, don't use them. "Easy to test" and "easy to maintain or alter" have the same root cause, but we shouldn't conflate one with the other. The target is maintainability, not test coverage. They just happen to be related. – anaximander Jul 02 '15 at 10:20
  • @anaximander: To clarify: by "test" you mean "unit tests", right? Because the purpose of testing overall is quality, and making sure our code actually does what we promised it would. – ruakh Jul 03 '15 at 00:54
  • 4
    "Not having unit tests means you're not done with your code/feature" - Untrue. The "definition of done" is something that the team decides. It may or may not include some degree of test coverage. But nowhere is there a strict requirement that says a feature cannot be "done" if there are no tests for it. The team may choose to require tests, or they may not. – aroth Jul 03 '15 at 11:22
  • @aroth - It has been over a decade since I've worked on a team that would not fire you for willfully neglecting unit tests. – Telastyn Jul 03 '15 at 13:38
  • 4
    @Telastyn In over 10 years of development, I never had a team that mandated a unit testing framework, and only two that even had one (both had poor coverage). One place required a word document of how to test the feature you were writing. That's it. Perhaps I'm unlucky? I'm not anti-unit test (seriously, I mod the SQA.SE site, I'm very pro unit test!) but I haven't found them to be as widespread as your statement claims. – corsiKa Jul 03 '15 at 16:54
  • 2
    @Telastyn - But that's anecdotal. My own anecdotal experience over the past 10 years is that 3 out of 4 companies don't care (that much) about unit tests. Of course, the 3 that cared least were startups, and the one that did was a much larger tech company (although even then, there was no company-wide mandate for tests; it was something that my Scrum team decided to include in its 'definition of done', because we cared, regardless of the company's policy or lack thereof). – aroth Jul 04 '15 at 00:24
51

Yes, it is good practice. The reason is that testability is not for the sake of tests. It is for the sake of clarity and understandability that it brings with it.

Nobody cares about the tests themselves. It is a sad fact of life that we need large regression test suites because we're not brilliant enough to write perfect code without constantly checking our footing. If we could, the concept of tests would be unknown, and all of this wouldn't be an issue. I certainly wish I could. But experience has shown that almost all of us can't, therefore tests covering our code are a good thing even if they take away time from writing business code.

How does having tests improve our business code independently of the tests themselves? By forcing us to segment our functionality into units that are easily demonstrated to be correct. These units are also easier to get right than the ones we'd otherwise be tempted to write.

Your time example is a good point. As long as you only have a function returning the current time, you might think that there's no point in having it programmable. How hard can it be to get this right? But inevitably your program will use this function in other code, and you definitely want to test that code under different conditions, including at different times. Therefore it's a good idea to be able to manipulate the time that your function returns: not because you mistrust your one-line currentMillis() call, but because you need to verify the callers of that call under controlled circumstances. So you see, having testable code is useful even if, on its own, it doesn't seem to merit that much attention.
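To make that last point concrete, here is a small Python sketch (the names are hypothetical): the one-line clock read is trivial, but the caller is what gets tested, under a pinned time.

```python
# Hypothetical sketch: the trivial clock read needs no test, but its
# caller does, so the clock is injectable for controlled circumstances.
from datetime import datetime

def make_greeting(name, clock=datetime.now):
    """Builds a greeting for `name`; `clock` is the time source."""
    hour = clock().hour
    # Simplified buckets for illustration (night hours count as morning here).
    period = "morning" if hour < 12 else "afternoon" if hour < 18 else "evening"
    return f"Good {period}, {name}!"

# Production: make_greeting("Ada") uses the real clock.
# Test: pin the clock to any moment to verify the caller's behaviour.
assert make_greeting("Ada", clock=lambda: datetime(2015, 7, 1, 9, 0)) == "Good morning, Ada!"
assert make_greeting("Ada", clock=lambda: datetime(2015, 7, 1, 21, 0)) == "Good evening, Ada!"
```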

Kilian Foth
  • Another example is if you want to pull out some part of the code of one project to some other place (for whatever reason). The more the different parts of functionality are independent of each other the easier it is to extract exactly the functionality you need and nothing more. – valenterry Jul 01 '15 at 15:14
  • 10
    Nobody cares about the tests themselves -- I do. I find tests to be a better documentation of what the code does than any comments or readme files. – jcollum Jul 01 '15 at 17:59
  • I've been slowly reading about testing practices for a while now (as somehow who does no unit testing at all yet) and I have to say, the last part about verifying the call under controlled circumstances, and the more flexible code that comes with it, made all sorts of things click into place. Thank you. – plast1k Jul 02 '15 at 03:54
12

At some point the value needs to be initialized, and why not closest to consumption?

Because you may need to reuse that code with a different value than the one generated internally. The ability to pass in the value you are going to use as a parameter ensures that you can generate those values based on any time you like, not just "now" (where "now" means whenever you call the code).

Making code testable in effect means making code that can (from the start) be used in two different scenarios (production and test).

Basically, while you can argue there is no incentive to make the code testable in the absence of tests, there is great advantage in writing reusable code, and here the two amount to the same thing.

Plus, the purpose of the method in my mind is to return some value based on the current time, by making it a parameter you imply that this purpose can/should be changed.

You could also argue that the purpose of this method is to return some value based on a time value, and you need it to generate that based on "now". One of them is more flexible, and if you get used to choosing that variant, in time, your rate of code reuse will go up.
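A quick Python sketch of that reuse argument (names are hypothetical): the variant that takes time as a parameter can label any time value, not just "now".

```python
# Hypothetical sketch: the parameterized version is reusable for values
# other than "now", e.g. labelling historical log timestamps.
from datetime import datetime

def time_of_day(time):
    if time.hour < 6:
        return "Night"
    if time.hour < 12:
        return "Morning"
    if time.hour < 18:
        return "Afternoon"
    return "Evening"

# Reused far from "now": labelling old log entries.
log_times = [datetime(2015, 7, 1, 2, 30), datetime(2015, 7, 1, 14, 5)]
labels = [time_of_day(t) for t in log_times]
assert labels == ["Night", "Afternoon"]

# And the original purpose is still one call away:
current_label = time_of_day(datetime.now())
```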

utnapistim
10

It may seem silly to say it this way, but if you want to be able to test your code, then yes, writing testable code is better. You ask:

At some point the value needs to be initialized, and why not closest to consumption?

Precisely because, in the example you are referring to, it makes that code untestable. Unless you only run a subset of your tests at different times of the day. Or you reset the system clock. Or some other workaround. All of which are worse than simply making your code flexible.

In addition to being inflexible, that small method in question has two responsibilities: (1) getting the system time and then (2) returning some value based on it.

public static string GetTimeOfDay()
{
    DateTime time = DateTime.Now;
    if (time.Hour >= 0 && time.Hour < 6)
    {
        return "Night";
    }
    if (time.Hour >= 6 && time.Hour < 12)
    {
        return "Morning";
    }
    if (time.Hour >= 12 && time.Hour < 18)
    {
        return "Afternoon";
    }
    return "Evening";
}

It makes sense to break down the responsibilities further, so that the part out of your control (DateTime.Now) has the least impact on the rest of your code. Doing so will make the code above simpler, with the side effect of being systematically testable.
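One way that breakdown can look, restated here in Python as a rough sketch (echoing Michael Anderson's comment above): a pure classifier does the work, and a thin wrapper is the only code that touches the system clock.

```python
# Sketch of the split: a pure function carries the logic, and a trivial
# impure wrapper is the only place the system clock is read.
from datetime import datetime

def time_of_day_for(hour):
    """Pure: testable at any 'time' without touching the clock."""
    if hour < 6:
        return "Night"
    if hour < 12:
        return "Morning"
    if hour < 18:
        return "Afternoon"
    return "Evening"

def get_time_of_day():
    """Impure wrapper: so trivial it needs no unit test of its own."""
    return time_of_day_for(datetime.now().hour)

assert time_of_day_for(3) == "Night"
assert time_of_day_for(23) == "Evening"
```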

Eric King
  • 1
    So you'd have to test early in the morning to check that you get a result of "Night" when you want it. That's difficult. Now assume you want to check that date handling is correct on Feb 29th 2016... And some iOS programmers (and probably others) are plagued by a beginner's mistake that messes things up shortly before or after the start of the year; how do you test for that? And from experience, I would check date handling on Feb. 2nd, 2020. – gnasher729 Jul 01 '15 at 21:30
  • 1
    @gnasher729 Exactly my point. "Making this code testable" is a simple change that can solve a lot of (testing) problems. If you don't want to automate testing, then I guess the code is passable as-is. But it would be better once it's "testable". – Eric King Jul 01 '15 at 21:53
9

It certainly has a cost, but some developers are so accustomed to paying it that they've forgotten the cost is there. For your example, you now have two units instead of one, you've required the calling code to initialize and manage an additional dependency, and while GetTimeOfDay is more testable, you are right back in the same boat testing your new IDateTimeProvider. It's just that if you have good tests, the benefits usually outweigh the costs.

Also, to a certain degree, writing testable code encourages you to architect your code in a more maintainable way. The new dependency management code is annoying, so you'll want to group all your time-dependent functions together, if at all possible. That can help mitigate and fix bugs such as, when you load a page right on a time boundary, some elements being rendered using the time before the boundary and some using the time after. It can also speed up your program by avoiding repeated system calls to get the current time.
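A small Python sketch of that grouping idea (the names are hypothetical): sample the clock once at the start of the operation and pass the instant through, so no part of the output can straddle a time boundary.

```python
# Hypothetical sketch: one clock read per operation, passed through, so
# every part of the output agrees on what time it is.
from datetime import datetime

def render_page(now=None):
    if now is None:
        now = datetime.now()  # the single clock read for this render
    header = f"Generated on {now:%Y-%m-%d}"
    footer = f"Copyright {now.year}"
    # Both lines use the same instant, so a render started at 23:59:59
    # cannot show one date in the header and another in the footer.
    return header, footer

header, footer = render_page(datetime(2015, 12, 31, 23, 59, 59))
assert header == "Generated on 2015-12-31"
assert footer == "Copyright 2015"
```

Passing the instant in also makes the render testable for awkward dates (a leap day, a year boundary) without touching the system clock.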

Of course, those architectural improvements are highly dependent on someone noticing the opportunities and implementing them. One of the biggest dangers of focusing so closely on units is losing sight of the bigger picture.

Many unit test frameworks let you monkey patch a mock object in at runtime, which lets you get the benefits of testability without all the mess. I've even seen it done in C++. Look into that ability in situations where it looks like the testability cost isn't worth it.

Karl Bielefeldt