
Unit tests are never perfect at capturing functionality, particularly in certain parts of an application (such as the GUI), so everyone needs some measure of black box testing. Does TDD have anything to say regarding black box testing? If it does not say much, could it be true that while writing unit tests is the developer's job, functional black box tests fall into a different domain, like that of a business analyst or a dedicated tester?

4 Answers


Does TDD have anything to say regarding black box testing?

Not really.

You must write the tests first. Therefore, there is no "white box" because the code does not yet exist.

The distinction between black box and white box means essentially nothing in the TDD context.

...could it be true that while writing unit tests is the developer's job, functional black box tests fall into a different domain like that of a business analyst or a dedicated tester?

Yes. Of course.

TDD does not mean "rigidly follow some brain-dead path."

If you want to do additional testing of the GUI, that would be a well-accepted idea.

S.Lott

TDD has evolved to include black box testing; BDD quite specifically uses it as a device (see ATDD).

However, I don't think this answer tells you in whose domain black box testing lies. Personally, I think it lies with everyone. The BA knows what the business wants, and Developers and Testers understand edge cases that a BA should never need to understand, often from completely different perspectives.

And this is why the Gherkin-based test runners have become popular, because they allow BAs, Devs and Testers to write the test cases in the same language as each other (though developers have to put some code behind the text).
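The principle behind those runners can be sketched in a few lines (a toy illustration with hypothetical step names; real runners such as Cucumber, behave or pytest-bdd are far more capable, but work on the same matching idea): the Gherkin text a BA writes is matched against step definitions the developers supply.

```python
import re

# A Gherkin scenario as a BA might write it (hypothetical example).
SCENARIO = """
Given a cart with 2 items
When the user removes 1 item
Then the cart contains 1 item
"""

# Step definitions the developers supply: regex pattern -> function.
steps = []

def step(pattern):
    """Register a step definition for a Gherkin phrase."""
    def register(fn):
        steps.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"a cart with (\d+) items?")
def given_cart(ctx, n):
    ctx["cart"] = int(n)

@step(r"the user removes (\d+) items?")
def when_remove(ctx, n):
    ctx["cart"] -= int(n)

@step(r"the cart contains (\d+) items?")
def then_contains(ctx, n):
    assert ctx["cart"] == int(n)

def run(scenario):
    """Match each scenario line to a step and execute it."""
    ctx = {}
    for line in scenario.strip().splitlines():
        text = line.split(" ", 1)[1]  # drop the Given/When/Then keyword
        for pattern, fn in steps:
            m = pattern.fullmatch(text)
            if m:
                fn(ctx, *m.groups())
                break
        else:
            raise AssertionError(f"no step matches: {text}")
    return ctx

run(SCENARIO)  # raises if any step fails to match or assert
```

The BA-facing text and the developer-facing code stay in sync because the runner refuses to execute a scenario line it cannot match to a step.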

pdr

IMHO TDD is primarily a whitebox technique, and I think it is more an implementation technique than a testing technique. When a developer creates code in very small "write test, write code, refactor" iterations, it is usually the same person who switches between code and test code: someone who knows exactly which parts of the "subject under test" are already implemented, which parts are missing, and which tests are still missing.

When you work that way, you will notice that you sometimes have to create a new function, maybe with some minimalistic implementation. But then you change, extend, refactor and test this function until it meets your idea of its feature set. So most of the time you work with existing code you know, on both sides (production code and test code) in parallel.
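That cycle can be sketched in a few lines of Python (a toy illustration; the `slugify` helper and its tests are hypothetical, not taken from this answer): the first test forces a minimal implementation into existence, and each further test forces it to change and grow.

```python
import string

# Step 1 (red): a failing test is written for a function that doesn't exist yet.
def test_slugify_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): the minimal implementation that makes it pass.
def slugify(text):
    return "-".join(text.lower().split())

# Step 3: the next test extends the feature set...
def test_slugify_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"

# ...which forces the implementation to change again (extend/refactor):
def slugify(text):  # redefined as the cycle progresses
    cleaned = text.translate(str.maketrans("", "", string.punctuation))
    return "-".join(cleaned.lower().split())

test_slugify_lowercases_and_joins_words()
test_slugify_strips_punctuation()
```

At every step the developer is looking at both sides at once: the existing tests say what is already covered, and the existing implementation says what is still missing.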

That means when a developer who applies TDD creates a test for a function, they take into account their knowledge of the already existing implementation. They know which missing feature or edge case is still untested, and which code paths are already covered by the existing tests. That will undoubtedly have a big influence on which tests they create, and even more on which they don't, since they know those are not necessary.

In fact, there is also a black box element in TDD: whenever you create a new public function to be tested, you first have to think about the function signature or the API from the viewpoint of a user of that function or API. And whenever you add a new test for functionality your "subject under test" does not support yet, you can do this without having decided how to implement that functionality in detail. But IMHO that black box part is very different from a real "black box test", where an external tester, who may not even be a developer, has no knowledge of the implementation of the program.
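That black box moment can be made concrete with a minimal sketch (hypothetical names, loosely echoing the CSV separator example in the comments below): the test fixes the signature and the caller's view of the behaviour before any implementation decision is made.

```python
# The test is written first, purely from the caller's point of view:
# a row string and a separator in, a list of fields out. At this point
# nothing is decided about how parse_row will work internally.
def test_parse_row_splits_on_custom_separator():
    assert parse_row("a;b;c", separator=";") == ["a", "b", "c"]

# Only now is an implementation written, with full freedom over the "how"
# (this trivial split is just one possible choice):
def parse_row(row, separator=","):
    return row.split(separator)

test_parse_row_splits_on_custom_separator()
```

The black box view lasts exactly as long as the function is unimplemented; from the next cycle on, the same developer knows the internals again.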

See also: Black box or white box testing - which do you do first?

Doc Brown
  • Well, TDD may be regarded as a whitebox technique, but only at the unit level. One can call it black or white, but essentially it is one type of testing. The white box tests which are, in many cases, more valuable are the ones which test functionality at a coarser level (the interplay between units). Very few components map business value use cases to the unit level, so white box testing at the system level is of significant value, as it allows us to choose the corner cases etc. I am not sure how TDD helps in such scenarios (assuming the tests written first are at the unit level) – user104319 Dec 26 '15 at 04:40
  • "Whenever the developer who applies TDD creates a test for a function, he has full knowledge about the already existing implementation" I'm not sure I understand this correctly, but doesn't TDD require that you write the test first before writing the function? – Mehdi Charife Sep 26 '23 at 19:16
  • 1
    @MehdiCharife: I was talking about writing a test before one changes a function (to add some feature or fix a bug). Sure, when you write a new function, the "current implementation" is empty. But when you work in small TDD cycles, you change existing function way more often than just adding new ones. – Doc Brown Sep 26 '23 at 20:20
  • I see. But in this case, shouldn't the new test be written solely based on the desired new functionality, which is presumably missing from the existing implementation? I mean, if you want your function to support handling some new kind of input, the test will only need to feed that input to the function and check if the output corresponds to a desired result. I don't understand how the existing implementation of the function would heavily impact the writing of such a test. – Mehdi Charife Sep 26 '23 at 20:25
  • 1
    @MehdiCharife: almost any function you can write which takes some integer, string or float value as input, which gives you a far larger range of input values than you can effectively write tests for. So how do you know that you have written "enough" tests for a certain function, or that you have to add extra test cases, because there might be a case your function currently does not support? The answer (often, not exclusively): you can look at the implementation and the current code and branch coverage. .... – Doc Brown Sep 26 '23 at 20:50
  • ... You, who wrote the function, know which extra test cases are not worth picking, because you can see how you implemented the function. For example, let's say you test your latest CSV parser function, which takes a column separator as a parameter. How many tests with different column separators will you need to make sure you have sufficient coverage? You don't decide this effectively by looking at the function in a black-box manner. – Doc Brown Sep 26 '23 at 20:53
  • In the case of a CSV parser, I don't think that a first implementation would give you much direction as to which new parameters to investigate, since it would likely depend on a separator comparator, which will probably be natively available in the language if you chose to represent the separator as a character. Unless you write your own comparator, you wouldn't be performing any sort of logic on the provided parameter. Because of this, the implementation would be incapable of guiding you towards potential edge cases. – Mehdi Charife Sep 26 '23 at 21:16
  • "You, who has written the function, knows which extra test cases are not worth to pick because you see how you did implement the function." I read somewhere that some programs (structured?) can be mathematically proven to produce a correct result for certain inputs, so in this case it does seem like knowledge about the existing implementation could greatly influence the kind of tests to be written. – Mehdi Charife Sep 26 '23 at 21:28
  • "Unless you write your own comparator, you wouldn't be performing any sort of logic on the provided parameter" - well, when you have written the source code of the function, you know for sure whether there is such logic or not. I could give you better examples, but I think the comment section isn't the right place for this. – Doc Brown Sep 27 '23 at 05:53
  • 1
    @MehdiCharife: I added an extra paragraph, maybe it helps to understand my point. – Doc Brown Sep 27 '23 at 10:58
  • @MehdiCharife yes, some programs can be proven correct, but the programs tend to be trivial and the amount of effort needed tends to far outstrip that of any reasonable testing approach. – Jacob Raihle Sep 27 '23 at 11:33

As in the field of testing generally, one kind of testing never really excludes another kind of testing.

As long as you follow the testing approach, you can test your software in multiple ways. That might mean not doing black box testing while you're doing TDD, because TDD would require violating the black box rule against internal inspection; however, a second team, or even the same team revisiting code they no longer fully remember, could do black box testing in a subsequent effort.

Edwin Buck