Is AddResilienceHandler worth testing?
Absolutely. But testing can be as simple as reading the code. Were you thinking of automated testing?
I don't think that testing behavior makes any sense because of the complexity. Testing behavior will become even harder if we add more resilience logic such as rate limiting and a circuit breaker.
Well, sure, if you do it wrong.
I could instead test that `AddResilienceHandler` was added and configured.
No, no. Instead, test that when `AddResilienceHandler` is added (and configured by your test) it does what it's supposed to do. Don't use the test to demand that it get used, only that it works when it is.
Now, if you have a requirement to retry, that's different. But that requirement has no business demanding that `AddResilienceHandler` is involved. Just test that retries happen, however they got configured.
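Here's a minimal sketch of that requirement-level test, assuming xUnit and a hypothetical `AddOrdersHttpClient()` extension that holds your real composition-root wiring (that extension, the `"orders"` client name, and the URL are all invented for illustration). Notice it never mentions `AddResilienceHandler`:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

public class RetryRequirementTests
{
    [Fact]
    public async Task Transient_failures_are_retried_until_success()
    {
        var stub = new FlakyHandler(failuresBeforeSuccess: 2);

        var services = new ServiceCollection();
        services.AddOrdersHttpClient(); // hypothetical: your real production wiring
        // Swap in the counting stub as the primary handler; whatever resilience
        // the real wiring configured stays wrapped around it.
        services.AddHttpClient("orders").ConfigurePrimaryHttpMessageHandler(() => stub);

        var client = services.BuildServiceProvider()
            .GetRequiredService<IHttpClientFactory>()
            .CreateClient("orders");

        var response = await client.GetAsync("https://example.test/orders");

        // The requirement: transient failures are retried until success, however
        // that was configured (assumes the real config allows at least two retries).
        Assert.True(response.IsSuccessStatusCode);
        Assert.Equal(3, stub.Attempts); // two failures, then the successful retry
    }
}

// Counting stub: fails with 500 until its budget is spent, then returns 200.
public sealed class FlakyHandler : HttpMessageHandler
{
    private readonly int _failuresBeforeSuccess;
    public int Attempts { get; private set; }

    public FlakyHandler(int failuresBeforeSuccess) =>
        _failuresBeforeSuccess = failuresBeforeSuccess;

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        Attempts++;
        var status = Attempts <= _failuresBeforeSuccess
            ? HttpStatusCode.InternalServerError
            : HttpStatusCode.OK;
        return Task.FromResult(new HttpResponseMessage(status));
    }
}
```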
When testing behavior, seek out the logic that is making decisions; that code needs thorough boundary checking. When testing configuration, just make sure things are plugged in correctly. Don't get these confused and try to do both at the same time. It's extra effort, and it locks down things that shouldn't be locked down.
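If some of your resilience logic is decision-making code you wrote yourself, say a predicate that decides which responses count as transient (a hypothetical helper, not from your snippet), that's the part that earns boundary tests:

```csharp
using System.Net;
using Xunit;

public static class TransienceRules
{
    // Decision logic: this is the code that deserves thorough boundary checking.
    public static bool IsTransient(HttpStatusCode status) => (int)status >= 500;
}

public class TransienceRulesTests
{
    [Theory]
    [InlineData(HttpStatusCode.InternalServerError, true)] // 500: first retryable code
    [InlineData((HttpStatusCode)499, false)]               // 499: just under the boundary
    [InlineData(HttpStatusCode.NotFound, false)]           // 404: a client error, never retry
    public void Decision_respects_the_500_boundary(HttpStatusCode status, bool expected)
        => Assert.Equal(expected, TransienceRules.IsTransient(status));
}
```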
TDD is one small part of testing. Just because something doesn't fit TDD doesn't mean it doesn't deserve a test. Heck, getting it to compile is passing a test. So is simply running it and trying it.
Don't make it an automated test without doing the work to keep it from becoming a brittle test that no one understands, the kind that forces people to drag around bad code because they're too afraid to delete the test.
You're right to be concerned about the circuit breaker code. Your tests should gracefully allow it to come and go without breaking too many of them. TDD tests can do their own configuring and so can ignore it. A correct integration test driven by a requirement won't care about it unless you're breaking the requirement.
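For instance, adding a circuit breaker to the pipeline is one more line of configuration (sketched here with `HttpCircuitBreakerStrategyOptions` left at its defaults); a requirement-driven test has no reason to notice it arriving or leaving:

```csharp
// Inside the AddResilienceHandler configure callback (fragment, for illustration):
pipeline.AddRetry(new HttpRetryStrategyOptions { MaxRetryAttempts = 3 });
pipeline.AddCircuitBreaker(new HttpCircuitBreakerStrategyOptions()); // can come and go
```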
Change happens. Write the test that people will know when to delete. Or don't write one at all.
How do I determine which code is worth testing?
All code is worth testing. But let's assume you mean automated testing. An automated test can be worthwhile if it shows:
- That the code under test can be trusted
- How to use the code under test
If you don't have these needs for this code, you may not need this test.
Configuration code, free of behavior logic, is often skipped when it comes to automated tests. This works best when the configuration code is simple, obvious, and boring. When it can be trusted just by reading it. Interesting code is what needs the most testing.
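For what it's worth, this is the shape I mean by configuration code; a registration like the following sketch (client name, pipeline name, and numbers all invented) is boring enough to be trusted by reading it:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Http.Resilience;
using Polly;

var builder = Host.CreateApplicationBuilder();

builder.Services.AddHttpClient("orders")
    .AddResilienceHandler("orders-pipeline", pipeline =>
    {
        // Pure configuration: no branches, no decisions, no boundaries to test.
        pipeline.AddRetry(new HttpRetryStrategyOptions
        {
            MaxRetryAttempts = 3,
            Delay = TimeSpan.FromMilliseconds(200),
            BackoffType = DelayBackoffType.Exponential
        });
    });
```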
I don't understand your answer.
Alright, let's try going through this the other way around.
So do you consider my code snippet to be configuration code or not?
It's clearly configuration code.
I consider it to be configuration code, which you said can be skipped,
Yes, it can be skipped, if you already trust it and people will understand how to use it as is. The more readable it is, the less it needs automated testing.
but your first few paragraphs state that I should test (presumably automated test) the behavior. – LostInComputer
Again, testing can be as simple as reading the code.
If you decide that it needs more testing than a code review, even then you shouldn't expect to test it the same way you would if it were behavior code. There's no logic here. No boundaries to test. No `if` branches to cover. But you could still write an automated test. If you do, separate the retry requirement from `AddResilienceHandler`. The retry requirement test shouldn't know whether `AddResilienceHandler` was used. And the test of `AddResilienceHandler` should do its own configuring.
What that gives you is a way to know whether the current configuration works (as far as the requirement is concerned) and a regression test that tells you if something broke the old way of configuring it. All this without locking you into any need to keep using `AddResilienceHandler` if you change your mind. If you drop it, you delete the one test that knew about it; you keep the requirement test that didn't.
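The second test, the one that's allowed to know about `AddResilienceHandler`, could look like this sketch (reusing the `FlakyHandler` stub from earlier; the `"probe"` names and retry counts are arbitrary):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Http.Resilience;
using Polly;
using Xunit;

public class AddResilienceHandlerTests
{
    [Fact]
    public async Task Retries_with_the_options_this_test_chose()
    {
        var stub = new FlakyHandler(failuresBeforeSuccess: 2);

        // The test does its own configuring; production wiring is not involved.
        var services = new ServiceCollection();
        services.AddHttpClient("probe")
            .ConfigurePrimaryHttpMessageHandler(() => stub)
            .AddResilienceHandler("probe-pipeline", pipeline =>
                pipeline.AddRetry(new HttpRetryStrategyOptions
                {
                    MaxRetryAttempts = 2,
                    Delay = TimeSpan.Zero // keep the test fast
                }));

        var client = services.BuildServiceProvider()
            .GetRequiredService<IHttpClientFactory>()
            .CreateClient("probe");

        var response = await client.GetAsync("https://example.test/");

        Assert.True(response.IsSuccessStatusCode);
        Assert.Equal(3, stub.Attempts); // the initial attempt plus the 2 retries it chose
    }
}
```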
I don't point that out to make you feel you must do these two tests. I point that out to show you what you could get out of creating automated tests for this. How flexible they could be. How much work that will be. And how different those kinds of tests should be from the more common functional core behavior tests.
Only once you understand all that will it be clear if this is worth it. If it's not, skip it.