Talk:Test-driven development
This is the talk page for discussing improvements to the Test-driven development article. This is not a forum for general discussion of the article's subject.
The content of this article has been derived in whole or part from http://www.pathfindersolns.com. Permission has been received from the copyright holder to release this material under both the Creative Commons Attribution-ShareAlike 3.0 Unported license and the GNU Free Documentation License. You may use either or both licenses. Evidence of this has been confirmed and stored by VRT volunteers, under ticket number 2012112810010438.
Limitations section
[edit]The "Limitations" section says that TDD "is currently immature". I don't understand what is meant by that.
It also lists "GUI" as one of the limitations. As someone who works on complex Swing applications for a living and has even written a whole book chapter about TDDing Swing GUIs, I guess I simply disagree. It would be nice if the original author could clarify why he thinks it is a limitation.
Thanks!
Ilja
- I would like to hear your thoughts on testing GUIs. Are you talking about abstracting functionality away from the GUI into a business logic layer, and just using the GUI as a thin interface layer on top? If so, technically you still aren't testing the GUI. Testing GUIs and graphics in general just isn't very feasible right now. Computers get lost in not "seeing the forest for the trees". One can use tests to check pixel values on the screen, but what good is it if the test can't determine *what* the picture shows. capnmidnight 16:18, 30 April 2006 (UTC)
- I partially agree. Regardless of your level of abstraction, the Event Handlers are almost always in the inline code, scripting, or code behind. IMHO "GUI testing" is a test of whether or not the GUI reacts as expected. For example, if clicking a button instantiates an object in the DAL and it results in a NullReferenceException, it could be a GUI bug (i.e. Session cleared between Postbacks) or a DAL bug (i.e. no constructor, returned null, etc). Slackmaster K 18:55, 18 July 2008 (UTC) —Preceding unsigned comment added by Tsilb (talk • contribs)
- Are we talking about testing the widget set used by others to write their apps, or about testing app code written on top of the widget set? Testing the widgets in themselves is IMO doable using mocks for their clients, plus bitmap comparison for visual appearance. Testing their clients is similarly doable by mocking the widgets. I too used to think that UI unit testing is not possible. However, I'm not thinking that way anymore, since I got a better understanding of what unit testing actually is. Testing an application's overall appearance, including events generated by widgets and layout, is not unit testing, it's integration testing at its best - it tests several components across at least two layers (your UI code and the widget library you use) at once. Writing a test that actually exercises the entire call stack, from the click on the button down to the persistence layer, is even farther away from unit testing. When unit testing the button widget, you're only interested that a click generates a click event. When unit testing your UI handler code, you don't really care from where the click event comes, you just want to see that if it is received, it is correctly routed further downstream. The question, IMO, isn't whether TDD-ing for UI development is possible, but whether it makes sense. I mean, I don't use TDD on glue code, and most UI code should be glue code, IMO. — Preceding unsigned comment added by 212.146.95.34 (talk) 10:46, 14 June 2013 (UTC)
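For illustration, a minimal sketch of the handler-level unit test described above, assuming JUnit 5 and Mockito; SaveButtonHandler and OrderService are hypothetical names invented for the example:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.jupiter.api.Test;

// Hypothetical downstream service the UI handler routes events to.
interface OrderService {
    void save();
}

// Hypothetical UI handler: it neither draws nor owns a real widget,
// it only routes the click event further downstream.
class SaveButtonHandler {
    private final OrderService orders;
    SaveButtonHandler(OrderService orders) { this.orders = orders; }
    void onClick() { orders.save(); }
}

class SaveButtonHandlerTest {
    @Test
    void clickIsRoutedDownstream() {
        OrderService orders = mock(OrderService.class);
        new SaveButtonHandler(orders).onClick(); // simulate the click event
        verify(orders).save();                   // routed correctly, no widget needed
    }
}
```

No real widget, rendering, or persistence layer is involved, which is what keeps this a unit test rather than an integration test.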
In limitation #4, mention is made of fixing tests when refactoring invalidates them. I have run into this problem myself. I have looked for information on how to handle this situation and found nothing. This may be one of the things that make TDD immature. I would very much like to see details of how one can fix tests. --NathanHoltBeader (talk) 22:23, 25 August 2009 (UTC)
- The only advice I know is, "Carefully, very carefully". I have become aware how 'precious' TDD tests are, because unless you are willing to scrap the whole piece of code and start TDDing again, they can never be replaced. Not everyone you may work with will be aware of that as they maintain the module! Tools like NCover do not solve the problem - a regex may get exercised by the tests, but not necessarily in every boundary case that is important. I wonder, do we need something on the joys and limitations of NCover and its brethren here? --Nigelj (talk) 19:38, 26 August 2009 (UTC)
Some of the citations here are pretty shaky. There's a link to a developer's personal diary (Citation 21) and a link to a blog post from a single developer who is not widely known as a practices expert (Citation 22). This isn't to say there isn't some valid criticism out there, but neither of those are particularly impressive examples of the arguments against TDD. We either need some serious, wide-circulation citations, or better yet some studies of effects on productivity with negative outcomes. Nicklbailey (talk) 21:43, 11 April 2017 (UTC)
non-functional link
The link to "Test-driven Development using NUnit" in the References is non-functional in my browser (Firefox 1.0, Windows 2000). I suggest this link be eliminated, and along with it the Jason Gorman vanity page as non-encyclopedic. --Ghewgill 18:35, 13 Jan 2005 (UTC)
- Works fine on my Linux box with Firefox. He probably didn't have a pdf viewer installed. --Nigelj 20:34, 30 August 2006 (UTC)
The link in note 3 is not good. https://en.wikipedia.org/wiki/Test-driven_development#cite_note-Cworld92-3 — Preceding unsigned comment added by Apieum (talk • contribs) 11:28, 30 November 2019 (UTC)
- I have tagged it as dead. Walter Görlitz (talk) 14:38, 30 November 2019 (UTC)
Merge
A merge with Test Driven Development is in order. Actually that article frames the generic issues better, though it's a short stub, and this article seems (in the section Test Driven Development Cycle) to imply that there's a proper-named methodology here.
- Ummm, the Test Driven Development page is just a redirect to this page: they are the same article. --Nigelj 20:31, 30 August 2006 (UTC)
Where is "Test Driven Development" from?
Please, who is the author of the first article about "Test Driven Development"? Who defends "Test Driven Development"? Etc. I think it would be very helpful for everyone researching it. --200.144.39.146 17:16, 10 March 2006 (UTC)
- From the article: "the book Test-Driven Development by Example [Beck, K., Addison Wesley, 2003], which many consider to be the original source text on the concept in its modern form."--Nigelj 20:31, 30 August 2006 (UTC)
- It predates 2003. We used the phrase in Java Development with Ant in 2002, and took it from one of Kent's earlier XP books. I think I first saw it in 2000, associated with JUnit work by Kent and Erich Gamma. SteveLoughran 10:41, 17 April 2007 (UTC)
Merge with Tester Driven Development
I read Tester Driven Development and it seems appropriate to move it into a "criticisms" section of this page. --Chris Pickett 22:32, 30 November 2006 (UTC)
- No, I think that it is a play on similar words, but as a concept it is entirely different. I've never heard what that article describes called 'tester driven development', but I have heard it called 'feature-creep', 'buggy spec' etc. It's an example of one way a project manager can begin to lose control of the project, not anything to do with the development methodology the developers may or may not be using to produce the actual code. --Nigelj 22:04, 30 November 2006 (UTC)
- I realize that "Tester Driven Development" is not the same thing as TDD. But it seems to me like this anti-pattern might actually describe TDD done badly. "It refers to any software development project where the software testing phase is too long."---clearly that includes TDD, since you can't get "longer" than "since before the project begins"! :) Well, at the very least, there should be a disambiguation page or "not to be confused with" bit or something, so that people looking for Tester Driven Development when they mean TDD don't get the wrong impression. --Chris Pickett 22:32, 30 November 2006 (UTC)
- Tester-Driven Development is clearly a pun on TDD, but a different concept. I think Tester-Driven-Development could be pulled into an anti-patterns in testing article, which could include other critiques of TDD, and of common mistakes in testing (like not spending any time on it). SteveLoughran 10:42, 17 April 2007 (UTC)
Limits of computer science
- Automated testing may not be feasible at the limits of computer science (cryptography, robustness, artificial intelligence, computer security etc).
What's the problem with automated testing of cryptography? This sentence is weird and needs to be clarified. — ciphergoth 11:08, 19 April 2007 (UTC)
- Automated testing is pretty tricky today with testing that a web site looks OK on mobile phones, especially if it's a limited-run phone in a different country/network from where the dev team is; I'd worry more about that than AI limitations. Security is a hard one because one single defect can make a system insecure; testing tries to reassure through statistics (most configurations appear to work), but can never be used to guarantee correctness of a protocol or implementation. 'Robustness' is getting easier to test with tools like VMWare and Xen...you can create virtual networks with virtual hardware and simulate network failures. SteveLoughran 21:24, 19 April 2007 (UTC)
Benefits need some citation
Test-driven development is a great idea. But some of the claims in the Benefits section are unsupported. I think at the least they need to be cited, or the claims downgraded to opinions (with references to those who hold these opinions). For example, the claim that even though more code is written, a project can be completed more quickly. Has anyone documented an experiment with two identical teams, one using TDD and the other not?
Similarly, the claim that TDD programmers hardly ever need to use a debugger sounds ludicrous to me (how about debugging the tests?). As does the claim that it's more productive to delete code that fails a test and rewrite it than it is to debug and fix it. Here's the current text from the article:
- Programmers using pure TDD on new ("greenfield") projects report they only rarely feel the need to invoke a debugger. Used in conjunction with a Version control system, when tests fail unexpectedly, reverting the code to the last version that passed all tests is almost always more productive than debugging.
Mike Koss 21:57, 28 July 2007 (UTC)
- I agree, the claim that "reverting" is the best approach to a test failure is, to use a UK technical term, bollocks. To roll back all changes the moment a test fails implies you cannot add new features to a program, because it is inevitable that changes break tests - regression testing is one of their values. Once a test fails, you have a number of options:
- roll back the change, hit the person who made it with a rolled up copy of a WS-* specification
- run the tests with extra diagnostics turned on, and try and infer why the recent change(s) are breaking it.
- attach a debugger to the test case and see what happens when the test runs.
- Debugging a test run is a fantastic way of finding out why a test fails. The test setup puts you into the right state for the problem arising, and once you've found the problem, you can use the test outside the debugger to verify it has gone away. This is another reason why you should turn every bug report into a test case - it should be the first step to tracking down the problem. The only times you wouldn't use a debugger are when logging and diagnostics are sufficient to identify the problem, or when you are in a situation where you can't debug the test run (it's remote or embedded, or the debugger affects the outcome).
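As a sketch of that bug-report-to-test-case step (JUnit 5; PriceCalculator and the defect are invented for the example):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical code under test.
class PriceCalculator {
    int total(int unitPrice, int quantity) {
        return unitPrice * quantity;
    }
}

// Regression test written directly from an imagined bug report:
// "total is wrong when quantity is zero". It reproduces the problem,
// gives the debugger a ready-made entry point, and then guards
// against the bug coming back.
class PriceCalculatorRegressionTest {
    @Test
    void zeroQuantityYieldsZeroTotal() {
        assertEquals(0, new PriceCalculator().total(100, 0));
    }
}
```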
- I would argue strongly for removing that claimed benefit entirely. SteveLoughran 14:27, 18 October 2007 (UTC)
- In fact, modern revision control systems are adding features to make this process more convenient. For example, the git revision control system contains a git-bisect command which takes you on a binary search between the current broken revision and a known working revision, driven by the results of your unit tests to find the exact commit where your test failed. --IanOsgood 19:43, 28 September 2007 (UTC)
- Nice. Continuous Integration tools do a good job of catching problems early, provided they are running regularly enough, and developers check in regularly, instead of committing a week's worth of changes on a Friday. SteveLoughran 14:29, 18 October 2007 (UTC)
Low Quality external links
I'm pretty unimpressed with the quality of half the external links here...they primarily point to various blog entries in the .NET land. Those aren't the in-depth references that Wikipedia likes to link to. I've gone through and (a) cut out anything that wasn't very educational or required logins to read and (b) limited links to one per blog. Furthermore, I'm not convinced by the referenced articles that VS2005 is capable of test-first development, at least if you use the built-in framework, but I left the articles in.
Given that TDD originated in Smalltalk and Java, I'd expect to see some more there. Here are some options:
- Push all product listings to the separate List of unit testing frameworks
- Split stuff up by .NET, Java, Ruby, other platforms, and try and keep coverage of all of them.
- Create separate pages for testing in Java, testing in .net, where we can cover the controversy (The junit4 problem, testng vs junit, NUnit versus the MS Test tools...)
If there is value in all the remaining blog entries, this content could be used as the inspiration for a better wikipedia article. —Preceding unsigned comment added by SteveLoughran (talk • contribs) 20:46, 17 January 2008 (UTC)
- The links haven't improved much. Unless anyone else wants to, I'm going to take a sharp knife to the links. Those that make good articles to reference should become references. Those that don't get cut. I don't want to impose a 'no blog' rule because it is how some of the best TDD proponents get their message out. But we have to have high standards: it has to be something in the class of Martin Fowler's blog to get a mention. SteveLoughran (talk) 21:32, 25 May 2008 (UTC)
- I've just moved some of the links around and purged any that weren't current or particularly good. Another iteration trimming out the .NET articles is needed. SteveLoughran (talk) 22:12, 25 May 2008 (UTC)
- I cleaned out quite a lot of them. Let's start fresh and add back valuable stuff one at a time as needed (if in fact any are needed). - MrOllie (talk) 01:10, 13 June 2009 (UTC)
Code visibility
In this edit an anonymous user at 65.57.245.11 completely re-wrote the Code visibility section without comment or discussion. S/he introduced a discussion of black-box, white-box and glass-box testing. As far as I know, these concepts relate to testing, but are quite alien to test-driven development. In my experience, if you are unit-testing as you develop, you often have to test the details of code whose function is vital, but that will end up 'private' in the final deployment.
I have been re-reading some of 'TDD by example' by Kent Beck, but can't find any discussion of these issues. Have other editors got some reputable references that can help us get to the bottom of what should be in this section? —Preceding unsigned comment added by Nigelj (talk • contribs) 23:04, 25 January 2008 (UTC)
Certainly black-box and white-box are used in testing, but not usually in TDD, where the tests are written before the code. That said, there is always the issue of whether to test the internals of code or just the public API. Internals: more detailed checking of state. Externals: guarantee stability of the public API, and provide good code examples for others. SteveLoughran (talk) 22:10, 25 May 2008 (UTC)
- I think that the discussion of black-box, white-box and glass-box testing is out of place here as it has no relevance to TDD. In my experience, you have to write tests at potentially every level of the code, depending on your confidence level and therefore the size of the TDD 'steps' that you are taking at the time. If that means you have to end up making encapsulated members public just to test them, it can ruin the proper encapsulation of the actual design. That's why I originally wrote this section, as that's important, and there are tricks that help with it that are non-obvious at first. The section's fairly meaningless as it stands, but now that every entry in WP has to be referenced, I'm afraid I don't have time at the moment to track down references for all that was there before this edit. It's a shame that the section's currently full of such irrelevant and useless tosh, though. --Nigelj (talk) 17:21, 26 May 2008 (UTC)
- Having read it again, it reads like almost academic paper-style definitions of things, apparently not written by a practitioner of TDD; some projects clearly scope test cases to have access to internal state of the classes. As you say, it needs to be massively reworked. We can put the x-references in after. Overall, I'm worried about this article...it seems to have got a bit long and become a place for everyone with a test framework or blog on testing to put a link. If we trim the core text, we'd be making a start at improving it. SteveLoughran (talk) 09:51, 27 May 2008 (UTC)
Ja test-Driven Development consist of many prototypes Ja —Preceding unsigned comment added by 196.21.61.112 (talk) 07:45, 1 September 2008 (UTC)
I made this edit for the Code Visibility para, which has been reverted... incorrectly, IMHO. I'd be willing to discuss it if the person who reverted it wishes to, or can cite sources which support that section. I already linked to a mailing list post by Dave Astels, who wrote the other TDD book. Gishu Pillai (talk) 16:36, 6 December 2008 (UTC)
If you think something is worth testing and yet it should stay private, this is a code smell. You should consider moving this code to a new class where it is legitimately public, and creating a private instance of this class at the original place. — Preceding unsigned comment added by 93.192.8.123 (talk) 07:48, 30 May 2011 (UTC)
Agreed with previous comment. From practical application in industry, a unit test within a TDD project does not test the hidden data or functionality of a unit, only the public interface. By definition the private or hidden members are part of the implementation, not the API. Qbradq (talk) 15:51, 9 August 2013 (UTC)
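A short sketch of the refactoring suggested in the two comments above, with invented names: the logic worth testing moves to its own class, where it is legitimately public, and the original class keeps a private instance of it:

```java
import static org.junit.jupiter.api.Assertions.assertArrayEquals;
import org.junit.jupiter.api.Test;

// The parsing logic, formerly a private method of ReportImporter,
// now has a public, testable API of its own.
class LineParser {
    public String[] parse(String line) {
        return line.split(",");
    }
}

// The original class keeps a private instance; its own public
// interface is unchanged and nothing was made public just for testing.
class ReportImporter {
    private final LineParser parser = new LineParser();
    public int fieldCount(String line) {
        return parser.parse(line).length;
    }
}

// The unit test now targets only the public interface of LineParser.
class LineParserTest {
    @Test
    void splitsOnCommas() {
        assertArrayEquals(new String[] {"a", "b"},
                          new LineParser().parse("a,b"));
    }
}
```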
What Does This Mean
I can't figure out what this sentence is supposed to mean: "To achieve some advanced design concept (such as a design pattern), tests are written that will generate that design." What does it mean for tests to generate a design concept? —Preceding unsigned comment added by 203.45.67.213 (talk) 00:52, 5 October 2009 (UTC) oluwaseun —Preceding unsigned comment added by 81.199.145.162 (talk) 10:20, 9 October 2009 (UTC)
Likewise, this quotation in the Development style section:
In Test-Driven Development by Example Kent Beck also suggests the principle "Fake it till you make it".
What does that mean in the context of TDD? —Preceding unsigned comment added by 142.244.167.122 (talk) 00:18, 8 April 2011 (UTC)
Dvanatta (talk) 04:54, 9 December 2011 (UTC) To explain the concept: the key is that the client of the code is written before the code itself is written. Said in another way, before you write code, you write an example of how that code should work. Hence, when writing client code, if you find it really tedious to use the code you plan to write, you may discover changes to the interfaces and design that make the code more natural and easier to use. These are changes you learn about only after writing examples of client code. Thus, it is the examples of how you would use the code that can determine and even drive the exact design and interface of that code.
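A small illustration of that point, loosely in the spirit of Beck's money example (JUnit 5; the Money class here is hypothetical): the test is written first, as example client code, and whatever feels natural to write there becomes the interface.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Written first: the test is the first client of the code, and
// drafting this usage is what surfaces the desired interface
// (a factory method, an add() returning a new value, an accessor).
class MoneyTest {
    @Test
    void addingTwoAmounts() {
        Money five = Money.dollars(5);
        assertEquals(12, five.add(Money.dollars(7)).amount());
    }
}

// Written second: just enough implementation to make the test pass.
class Money {
    private final int amount;
    private Money(int amount) { this.amount = amount; }
    static Money dollars(int amount) { return new Money(amount); }
    Money add(Money other) { return new Money(amount + other.amount); }
    int amount() { return amount; }
}
```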
Flowchart
[edit]"Test fails" and "Test succeeds" appear to be backwards in the flow chart. Rewriting a test if it succeeds, and writing production code if it fails doesn't make sense. —Preceding unsigned comment added by 134.167.1.1 (talk) 20:41, 2 December 2009 (UTC)
- No, it is right. If a new test passes straight away, it isn't going to 'drive' any new code development, so you re-write the test until it needs some new code to be written to make it pass. (This is unusual; normally a new test fails straight away and you can proceed, but it can happen.) Then, you have a failing test and you write new, production code until it passes. (That's the bit where the test 'drives' the development.) Then refactor the code, keeping the pass. And then repeat. Red-Green-Refactor. --Nigelj (talk) 20:56, 2 December 2009 (UTC)
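For illustration, a compact red-green sketch of that cycle (JUnit 5; LifoStack is an invented example):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Red: written first, this test fails until push()/top() exist and work.
// If it passed straight away it would drive no development, and per the
// flowchart you would rewrite it until it fails for the right reason.
class LifoStackTest {
    @Test
    void pushedItemIsOnTop() {
        LifoStack stack = new LifoStack();
        stack.push("a");
        assertEquals("a", stack.top());
    }
}

// Green: just enough production code to make the test pass; the
// refactoring step then cleans this up while keeping the bar green.
class LifoStack {
    private final java.util.Deque<String> items = new java.util.ArrayDeque<>();
    void push(String item) { items.push(item); }
    String top() { return items.peek(); }
}
```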
- There is still something that I don't understand in the flowchart. Does the "Clean up code" square correspond to the refactoring step? If it does, I think that the first diamond, "Check if the test fails", has to be inverted (yes <-> no). Refactoring should not break a test. I think that this flowchart is quite confusing. Raoulinet (talk) 16:44, 11 December 2012 (UTC)
- "Clean up code" is the last step in a cycle, and it does refer to the refactoring step. While refactoring, all the tests will probably be run several times, and again afterwards, and it is true that none of these test runs are explicit in the diagram. I guess whoever drew it assumed that, having got that far, re-running the tests will obviously be part of the refactoring process. This is clear in the article text. When you loop back to the top, you are starting a new test cycle and so you start with a new test, that initially fails, to drive the development of some further new functionality, so of course the meaning of the steps have to be the same as they were last time through. --Nigelj (talk) 18:36, 11 December 2012 (UTC)
- It's not just refactoring, but that is a large part of it. Otherwise, I agree with Nigelj. --Walter Görlitz (talk) 19:53, 11 December 2012 (UTC)
- Thanks. Raoulinet (talk) 07:46, 12 December 2012 (UTC)
Criticisms
Many articles have a "Criticisms" section; why doesn't Test-Driven Development? —Preceding unsigned comment added by 8.7.228.252 (talk) 22:19, 10 June 2010 (UTC)
- See WP:STRUCTURE - it is usually best to "fold[] debates into the narrative, rather than distilling them into separate sections", so any relevant, well-sourced criticisms of TDD should be mentioned, as they arise, in the normal text rather than separated out into a "Criticisms" section. --Nigelj (talk) 08:53, 11 June 2010 (UTC)
Blurry flowchart
What's with the blurry flowchart? Why is it png instead of pure svg? 69.120.152.184 (talk) 02:15, 26 September 2011 (UTC)
Recent Additions to "Requirements", "Development Style", and "Fakes, Mocks, and Integration Tests"
Response to: "The material you added to the article was taken from several sources with only minor edits. The reference did also not meet Wikipedia's WP:V policy as one is required to create an account to access the whitepaper. I'm sorry, but I had to remove it all despite how promising it appeared to be. Walter Görlitz (talk) 21:20, 27 November 2012 (UTC)"
- I represent the copyright holder and have submitted a declaration of consent to donate the copyrighted material. Also, you are not required to create an account to access the whitepaper. The access page is merely an optional request for information. Clicking send will take you directly to the whitepaper PDF. I found the WP:V guidelines unclear based on your comment. Although this is not the case, I thought sources that require purchase or membership were acceptable: "Other people should in principle be able to check that material in a Wikipedia article has been published by a reliable source. This implies nothing about ease of access to sources: some online sources may require payment, while some print sources may only be available in university libraries." WP:V --Stephaniefontana (talk) 15:54, 28 November 2012 (UTC)
OTRS confirmation of license for pathfindersolns.com
[edit]Hi,
We received an OTRS id today for the content added in this edit and for content on http://pathfindersolns.com. The content is released as "Creative Commons Attribution-ShareAlike 3.0" (unported) and the GNU Free Documentation License (unversioned, with no invariant sections, front-cover texts, or back-cover texts), which is compatible with Wikipedia's license.
If you have any questions, you can leave a note at WP:OTRS/N or my talk page. Thanks, Legoktm (talk) 22:27, 28 November 2012 (UTC)
Pathfinder Solutions
In some recent edits, I became aware just how much of the present state of this article depends on a single commercial source - Pathfinder Solutions of Wrentham, Massachusetts. I was just about to make some changes, and decided to check what the cited source actually said. I was faced with a webform saying, "Pathfinder Solutions. Download Whitepaper. Before downloading, please tell us a little about yourself." Now I really am upset. It's like a Wikipedia page has been hijacked by a private company which is using it to gather a database of personal information about the editors and readers of this article. I suggest that, to avoid corporate sponsorship of any Wikipedia page, the rest of us work to find replacement references that are actually freely available. --Nigelj (talk) 21:45, 22 January 2013 (UTC)
- Seems to meet the WP:SPAMLINK guideline. Remove at will. If you want to leave the content and tag it, that would probably be best. --Walter Görlitz (talk) 21:50, 22 January 2013 (UTC)
Shortcomings
This item:
- The tests themselves become part of the maintenance overhead of a project. Badly written tests, for example ones that include hard-coded error strings or which are themselves prone to failure, are expensive to maintain. This is especially the case with Fragile Tests.[15] There is a risk that tests that regularly generate false failures will be ignored, so that when a real failure occurs, it may not be detected. It is possible to write tests for low and easy maintenance, for example by the reuse of error strings, and this should be a goal during the code refactoring phase described above.
Isn't an argument against test-driven development, but rather an argument against poor implementations of test-driven development. That testing causes some overhead is a given, but it's a necessary cost to get the benefits. Tanath (talk) 20:42, 7 February 2013 (UTC)
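To illustrate the "reuse of error strings" idea in the quoted item (JUnit 5; Validator is an invented name): the expected message lives in one constant, so rewording it cannot silently break the test.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class Validator {
    // Single source of truth for the message; tests reference this
    // constant instead of hard-coding the string.
    static final String EMPTY_NAME = "name must not be empty";

    void validate(String name) {
        if (name == null || name.isEmpty()) {
            throw new IllegalArgumentException(EMPTY_NAME);
        }
    }
}

class ValidatorTest {
    @Test
    void rejectsEmptyName() {
        IllegalArgumentException e = assertThrows(
            IllegalArgumentException.class,
            () -> new Validator().validate(""));
        // Fragile alternative: assertEquals("name must not be empty", ...)
        assertEquals(Validator.EMPTY_NAME, e.getMessage());
    }
}
```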
The whole section (Shortcomings) seems to be written from a flawed perspective. It assumes that unit tests are intended to find defects and detect regressions. It describes the shortcomings of approaching TDD from this flawed perspective, not of the methodology itself. I would like to re-write the section as listing this common misconception as a potential pitfall, including the complications already listed. Qbradq (talk) 14:58, 9 August 2013 (UTC)
- You're welcome to re-write the section and prove that it's a flawed perspective, which it may not be. I just tagged the section as it did not contain a sufficient number of references. Walter Görlitz (talk) 15:26, 9 August 2013 (UTC)
- The 1st, 3rd, and 4th points are flawed because they work under the assumption that TDD is limited to unit testing, which is not true.
Off-Site Knowledge
The sentence, "Use Kent Beck's four rules of simple design", which has two links, requires me to go off site in order to understand the recommendation. They should instead be listed in the article, if they're that important. KazDragon (talk) 10:02, 26 September 2013 (UTC)
TDD isn't about unit tests?
This edit is a bit disconcerting when it states "TDD is not about unit tests so the paragraph is worthless". The sentence that was removed does not state that at all, only that there is a reliance on unit tests, which is true. Perhaps we should discuss this removal of content. Walter Görlitz (talk) 22:41, 21 March 2014 (UTC)
- Thanks Walter for your contributions. From deleted text:
- Test-driven development reliance on unit tests ...
- Is that true? Definition of rely from wiktionary:
- To rest with confidence, as when fully satisfied of the veracity, integrity, or ability of persons, or of the certainty of facts or of evidence; to have confidence; to trust; to depend; — with on, formerly also with in.
- I think there should be a reference for such garbage statements. Haven't seen or heard of a development team yet that relies solely on unit tests. Daniel.Cardenas (talk) 14:45, 23 March 2014 (UTC)
- When using this particular method, TDD, unit tests are used. Walter Görlitz (talk) 15:35, 23 March 2014 (UTC)
- Yes, but not exclusively or sole reliance. Daniel.Cardenas (talk) 01:42, 24 March 2014 (UTC)
- I'm sorry. I do not see your point. The material is valid and your semantic argument is not. If you don't like the wording, change it but don't delete the otherwise valid content. Walter Görlitz (talk) 01:46, 24 March 2014 (UTC)
- TDD is primarily driven from unit tests rather than black-box tests. Do you deny that?
- TDD cannot be easily done from a black-box or GUI level because you first write the (unit) test, then code to make the (unit) test pass. Are you saying that this is not true? I know of no documented procedure for doing automated functional GUI tests or even integration tests test-first. I'd be happy to see some, though. And I certainly know of none for tests with database interaction or network configurations, which is what the paragraph states.
- Do you disagree that "TDD encourages developers to put the minimum amount of code into such modules and to maximize the logic that is in testable library code, using fakes and mocks to represent the outside world"? That's what you're deleting all over one, small, insignificant word that would be easy enough to change. Walter Görlitz (talk) 01:52, 24 March 2014 (UTC)
- You have made a lot of statements, without references. It doesn't matter what you or I think, if there aren't references, then it is crap. No references to primarily unit tests. How do you classify integration tests? Integration tests are not strictly black box testing. Here is a reference agreeing that that section is crap: http://www.agiledata.org/essays/tdd.html#Misconceptions . Quote:
- You only need to unit test. For all but the simplest systems this is completely false. The agile community is very clear about the need for a host of other testing techniques.
- Daniel.Cardenas (talk) 01:19, 25 March 2014 (UTC)
- No, I asked a lot of questions that you are refusing to answer. You are being intractable and refuse to discuss. I have no choice. I have warned you for your poor and uncooperative behaviour.
- I have not stated that you only need to unit test. Feel free to counter the claims then or tag the questionable material, but the objection of reliance on unit tests is valid. Walter Görlitz (talk) 01:56, 25 March 2014 (UTC)
- I have produced two references as well. Walter Görlitz (talk) 02:13, 25 March 2014 (UTC)
Pretty poor shape
This article appears to be in pretty poor shape at the moment.
- Someone had been busy converting sections into bullet point lists, in opposition to WP:PROSE guidelines.
- Most of the worst bits of the text still reference that awful Pathfinder PDF (see comments in two sections above). I think most of that stuff was copy-and-pasted.
- There is a vast amount of how-to style addressing of the reader: "Remove any duplication you can find," "Use Kent Beck's four rules of simple design to guide you," and so on. There are also lots of implied "you's" as the subjects of instructional sentences like, "Move code from where it was convenient..."
There must be a wealth of university-level books on the subject of TDD by now, that we can base better-sourced text on. I'm no longer involved in software development, so I'm not really in the swim as I once was. --Nigelj (talk) 11:55, 22 May 2014 (UTC)
Warnings on "Individual Best Practices" section
@Walter Görlitz: Hi! This is with reference to your recent reversion of my edit. I had removed some warnings at the top of the Individual Best Practices section because they didn't seem pertinent. I'd like to explain that.
The first warning concerned "How-To" content. The ensuing list is a list of best practices, not a step-by-step guide. It's functionally equivalent to saying:
Bread recipes often include
- Water
- Yeast
- Flour
- Salt
Furthermore, this in no way detracts from the content, and given that I personally found it useful, it doesn't seem like it should be flagged for removal or restructure.
The second warning is regarding prose. This is purely stylistic, and I don't think it's appropriate to turn what is essentially a list into prose with useless filler just because some convention somewhere says so. The end result would be harder to read and less useful overall. This doesn't seem to be in alignment with the point of a wonderful resource like Wikipedia.
Please consider reinstating my edit.
Thanks, Kael.shipman (talk) 16:59, 3 August 2016 (UTC)
- Removing such tags is supposed to be by consensus. We need more than one editor's opinion to remove these. Myself, I'd keep these tags, as the section doesn't follow WP norms, even if the info is correct. --A D Monroe III (talk) 17:08, 3 August 2016 (UTC)
- I too would remove those edits. Oh. I did. Walter Görlitz (talk) 03:43, 4 August 2016 (UTC)
Add a link to www.wedotdd.com containing interviews with companies practicing TDD and links to many high quality internet resources about TDD
The current version of the article explains the main ideas of test-driven development. However, it does not contain information about its acceptance and practical relevance for software development. As already mentioned, its links seem not to be updated. It also does not mention software companies using and teaching TDD.
Recently one of the TDD practitioners created a comprehensive web site containing a growing list of companies and teams that practice TDD, http://www.wedotdd.com/ . This site collects information worldwide. It is effectively the current who's-who list of companies focusing on TDD. For instance, it contains interviews with Robert Cecil Martin, J. B. Rainsberger and many other TDD practitioners well known in the Software craftsmanship community. Therefore I suggest adding a link to http://www.wedotdd.com/ to the External links section.
About me: I am not affiliated with the suggested page. As a software developer using TDD and also leading a Munich software craftsmanship meetup, I can see that the page has high relevance for the topic. It contains relevant, well-collected information about the usage of TDD in software development, and also links to many well-chosen resources recommended by TDD-related companies known and accepted in the community. I have seen that a link to this page was already added to the article, but it was promptly removed by editor Walter Görlitz.
Dimitry Polivaev (talk) 15:57, 26 November 2016 (UTC)
- The site feels promotional and does not appear to be a reliable source. Walter Görlitz (talk) 15:27, 4 December 2016 (UTC)
- Promotional for whom? This site conducts interviews with companies doing TDD to share their practices and does not promote any specific company. Do you know any more reliable site containing state-of-the-art information about companies practicing TDD? I don't. Dimitry Polivaev (talk) 23:19, 5 December 2016 (UTC)
- Promotional for the site itself. The site does conduct interviews with companies doing TDD. Feel free to seek a third opinion. Walter Görlitz (talk) 06:13, 7 December 2016 (UTC)
- Third opinion request submitted Dimitry Polivaev (talk) 20:42, 7 December 2016 (UTC)
Response to third opinion request:
I don't believe this link adds anything to the quality of the article, and therefore should be avoided. Generally, if the addition of a link improves the article it can be added. If it instead improves the reputation of the external website, or attempts to drive traffic to that site, it should not be added. See WP:REFSPAM for more information on external links. Bradv 21:47, 7 December 2016 (UTC)
Stack Overflow link
I have semi protected the article for a week. There was an edit war over this edit. I'd like to point out that, where there is disagreement, the onus is on those who want to include content to get consensus to do that. If editors want to include the link, I suggest they describe why here, on the talk page, rather than edit warring. Yaris678 (talk) 20:27, 26 December 2016 (UTC)
- This has been going on since December 10. The anon has added it six times since then. The anon's edits have mostly been unexplained or replies to previous removals. The only guideline-based comment was WP:SELFSOURCE.
- The arguments against: Bradv indicates it was WP:LINKSPAM. MrOllie indicates that it's a forum post and does not belong, apparently related to WP:ELNO. My opposition is related to MrOllie's. At first, it's not clear if this is the Kent Beck. If it were his personal blog or website, it would be different from a forum posting (again, WP:ELNO is related). The problem is that SELFSOURCE is for identifying reliable sources rather than external links. If there were a point about Beck's views on unit testing for coding, it might work, although his Extreme Programming series already covers this quite well. However, as an EL, it doesn't work. Walter Görlitz (talk) 07:14, 27 December 2016 (UTC)
Image has two labels back to front.
[edit]On the TDD cycle image after you (re)write a test the red return path is labelled as 'test succeeds" where it should be "test fails" 1.132.97.116 (talk) 18:39, 13 May 2017 (UTC) Simon
- No. It is correct. The image, File:TDD Global Lifecycle.png, starts with (Re)write the test. From there the state goes to, "Check if the test fails". The transition here is that if the test passes (it is labelled "succeeds") then there's a problem with the test and you need to "rewrite the test." If it fails, which is expected, you write just enough code until the test passes, and you leave that phase. In other words, if you have a test and it passes, but you have no code that is making that happen, there's a problem with the test itself or the assumption underlying the test. Walter Görlitz (talk) 20:06, 13 May 2017 (UTC)
Endless series of untestable claims not supported by citations
Reading through this, it reads like advertising, and a great many claims are either untestable or, where they might be testable, lack references or citations. An example is "Receiving the expected test results at each stage reinforces the developer's mental model of the code, boosts confidence and increases productivity", which is at its core advertising, an untestable claim, and lacks references or citations which can support it.
Being Wikipedia, the article qualifies for instant deletion. It certainly isn't encyclopedic, it really does look like snake oil advertising for an untestable concept -- and for books written to sell the nonsense to people who are incapable of discerning snake oil.
I've been writing commercial software since 1978, I was on DARPANet before we literally voted to open the network up to public and commercial access, and over the decades I've seen supposed better ways of doing things in software and hardware development come and go. TDD looks to me to be one of the least plausible concepts, ranking up there with "Agile" and other concepts that have never achieved proven benefits, merely hype and wishful thinking. SoftwareThing (talk) 21:54, 21 January 2020 (UTC)
- That amount of negativity may give you cancer. ... Using your experience as justification to nay say holds little weight with me. I'm more impressed with action than words or credentials or age ... What about TDD does not work? Have you tried TDD? Have you tried Agile? Don't knock till you try it. ... I do think that WP should not describe benefits without saying that they are arguable; cited or possible benefits; benefits as described by proponents. Stevebroshar (talk) 20:57, 15 April 2024 (UTC)
Oh, and religious dogma advocates
Doing Ye Ole Google around the network, I can find endless examples of people religiously advocating TDD, repeating all of the untestable claims made in the extant article, and that's another sign of snake oil. Selling a concept, and the books which try to make a case for the supposed benefits of same, with zero statistical or clinical real-world positive science-based results being offered, is classic snake oil. SoftwareThing (talk) 21:58, 21 January 2020 (UTC)
- Have you ever tried it? It works IMO. ... It's easy to nay say. But, your negativity does not bother me. I use TDD and my clients benefit. ... and good luck finding a scientific study on its value ... or similar for any software development technique. ... Honestly, it's surprising that there is no study of such things; no software development sociology. Stevebroshar (talk) 20:50, 15 April 2024 (UTC)
What to do in case of intentional limitations by spec?
If I have read the article correctly, then nothing is being said about the following problem and the best-practice to handle it:
If you know about a bug or a limitation in your software and the client wants you not to fix it (because it's just a corner case, too expensive to fix, you have no resources, it's not included in the current release, or whatever), then what do you do about this? There are three possibilities:
- Remove or don't implement a test for this special case as it is otherwise always flagged as a problem
- Leave the test failing, because the test itself is valid
- Change the test to expect the "wrong" outcome, so that the test always succeeds, because the "wrong" outcome is currently accepted
What is best practice for this problem? This should be added to the article (with references of course), because this is fundamental to someone new to this topic. --165.222.180.133 (talk) 16:57, 31 January 2020 (UTC)
- This is an important issue, and actually much broader than the narrow example here. It's worth thinking deeply about.
- In the simple situation, as you narrowly describe here, it can be handled by managing the definitions of 'expectation' and 'test'. Suppose that the system works with small numbers, and has an error when those numbers become too large. Users can enter a large number (let's suppose, a 16 bit maxint), but if they enter one which is too large there is then an error. Several things could happen:
- The system works (no error)
- A run-time 'invalid input' error is raised
- An obvious error value is returned (unexpected, but repeatable behaviour - maybe maxint itself)
- A random and unidentifiable number is returned. It's wrong, but you can't tell it's wrong (without parallel calculation of the correct value).
- A run-time error is thrown. It's not easily identifiable.
- Disaster: the system stops, the plane crashes, it executes a halt and catch fire instruction.
- We would like option #1. Maybe we can have this, if we re-implement a 32-bit system. But that's expensive, so we choose not to. Maybe (a rational behaviour) we go to option #2 instead (probably a cheap and minor code change).
- The others are usually bad engineering (#2 should be achievable within budget), but sometimes it really is acceptable to take the 'No-one will do that' approach (but that is itself now a constraint in the system design, and needs to be maintained over the whole service lifetime).
- What you do now is to change the specification. You still keep to the engineering goal of 'Write a system which meets the spec and passes all of its tests'. But now the specification changes. It says #2 is OK, rather than #1. We might still do non-functional testing to verify this. If #3 or #4 (even #5!) are acceptable, then that becomes part of the spec. The wording "behaviour in this case is undefined" often gets abused. But at least state what level is acceptable. If "Garbage In, Garbage Out" is OK (hey, they started it, giving your code garbage) then state that, but still state that #6 isn't acceptable, because it leaves the system in a non-functional state afterwards. Then test that you're meeting this.
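A hedged sketch of option #2 from the list above (JUnit 5; the Calculator and its limit are invented): once the spec says over-large input must raise an 'invalid input' error, that error becomes the tested expectation rather than a failure.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class Calculator {
    static final int MAX_INPUT = 32767; // the 16-bit limit from the example

    int doubleOf(int n) {
        if (n > MAX_INPUT) {
            // Option #2: a defined run-time 'invalid input' error.
            throw new IllegalArgumentException("invalid input");
        }
        return n * 2;
    }
}

class CalculatorSpecTest {
    @Test
    void smallInputWorks() {
        assertEquals(10, new Calculator().doubleOf(5)); // option #1 region
    }

    @Test
    void overlargeInputRaisesTheSpecifiedError() {
        // The revised spec says this error IS the correct behaviour,
        // so this test passes and keeps passing.
        assertThrows(IllegalArgumentException.class,
            () -> new Calculator().doubleOf(Calculator.MAX_INPUT + 1));
    }
}
```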
- A similar issue also comes up if you're developing new code: when are tests added, and when are the new features seen as 'required' (i.e. once a test pass is now needed for that feature)? The pure TDD cycle is well known, but it's very fine-grained and so is slow to work with. But when do you expect all tests to be working? And does that (a mistake!) mean that you stop running those tests at all for too long, between the releases of new feature sets (as the suspension of such tests then exposes you to the classic risks of untested code again)? Andy Dingley (talk) 18:20, 31 January 2020 (UTC)
- Wikipedia is not a how-to manual. So while it's a real-life issue testers will encounter, it's not required to discuss how to address problems to have an encyclopedic understanding of the topic. Walter Görlitz (talk) 20:23, 31 January 2020 (UTC)
- I agree that we don't need "how-tos" here. But this is a basic question about this topic, so I think it still belongs here if we can answer this in a short and concise way. Andy's answer is certainly too long for the article even if we can accept it as-is. --165.222.56.136 (talk) 12:55, 17 February 2020 (UTC)
- If reliable sources discuss it, then the article may discuss it as well. Walter Görlitz (talk) 19:07, 17 February 2020 (UTC)
- So if your code returns #4 (wrong result), you're saying that it becomes part of the spec. So the spec would say something like "for all input that our tool cannot handle, we return wrong results". That's difficult, because you don't specify how wrong the input has to be. The calculator is maybe a bit too simplistic. Let's say you have a machine that does color-detection with a camera of newly produced t-shirts (just a random example, not even coding related) and as output says "yes, it's green" or "no, it's not green", but it has a limitation of when a green dust flake is on the shirt, it also says "yes, green", although that's completely wrong. Maybe other inputs also fail. So you write a test to insert brown t-shirts with a green piece of dust on them into the color detector and you get consistently wrong answers. Your product manager says that this doesn't need fixing, because green dust is rare. What do you do with your test now? Remove the silly test? Or change your test to expect "yes, green" for your brown shirts if there's such dust on them, as this is now a specified "requirement"? --165.222.56.136 (talk) 12:55, 17 February 2020 (UTC)
- The spec then isn't that it must return wrong results, but that it can (and that's still a test pass).
- Imagine a machine for sorting out only red, green or blue T shirts. Put a brown one in and the spec might permit the machine to say any of "red", "green", "blue" or "can't-tell". But it would still require the machine to say "I have a T shirt" and it would not be allowed to crash or stop.
- We can probably make "can't-tell" a reliable result (case #2). But sometimes we can't, and even then it's not a requirement that we do. For T shirt sorting it probably is, because we know operators are going to give it brown shirts. But sometimes a module within a system has much more controlled inputs, and so we can be less forgiving. But now we have to change the spec from a "T shirt sorting machine" to a "Machine for sorting T shirts that are either red, green or blue only". Andy Dingley (talk) 13:58, 17 February 2020 (UTC)
- I just did a small expansion on scimitar antenna. Obscure things as radio aerials, but it turns out they're still cited as US contract case law, 50 years later. The government spec was impossible to build, so whose fault was it? The spec said they should both work, and be small and reliable. The first company made them the right size, but couldn't make them work in that size. They were kicked out of the contract and the government bought them from another company, who'd simply made them too big to start with. Really no-one had ever cared what the size was, it was just a vague number which got written down early on, then turned into dogma. So was this a fair cancellation of a contract? Andy Dingley (talk) 14:04, 17 February 2020 (UTC)
- The "undefined result" is something I could live with in my case. So to answer my original question, we should either remove the test (not testing the special cases, answer #1) or better, what you're saying, answer #3 (make the test working somehow), but not how I originally wrote (make the "wrong" result the expectation), but instead accept any result as correct. This would at least test for crashes etc. in case of special input. Not sure how this could be added to the main wiki article without blowing it up though. --165.222.180.135 (talk) 14:47, 18 February 2020 (UTC)
- Yes, it's a real-world thing. Yes, it's interesting. But it has nothing to do with TDD. It's more about what is expected behavior, and that, as noted, can be less than clear sometimes. TDD is about validating expected behavior; not how to know what is expected, or, at the edge, what to do about behavior that is clearly bad but that we're not going to fix. ... For what it's worth: I'd add a passing test that validates the bad behavior and note in the test code that it's validating bad behavior. If someone does eventually fix the bug, then the test would fail; a good failure ... as all test failures are. Stevebroshar (talk) 20:42, 15 April 2024 (UTC)
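For illustration, a sketch of that pinning-test idea (JUnit 5; the colour-detector names are invented from the example above): the test passes against the known-bad behaviour, and a later fix makes it fail on purpose.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Minimal hypothetical types so the sketch is self-contained.
class Shirt {
    final boolean hasGreenSpeck;
    private Shirt(boolean hasGreenSpeck) { this.hasGreenSpeck = hasGreenSpeck; }
    static Shirt brownWithGreenDust() { return new Shirt(true); }
}

class ColorDetector {
    // The known bug: any green pixel at all is reported as a green shirt.
    boolean isGreen(Shirt shirt) { return shirt.hasGreenSpeck; }
}

class ColorDetectorPinningTest {
    // Pins behaviour the client has decided not to fix: a brown shirt
    // with a green dust speck is (wrongly) reported as green. If someone
    // later fixes the bug, this test fails deliberately, prompting the
    // team to delete it and update the spec.
    @Test
    void greenDustOnBrownShirtIsStillReportedAsGreen() {
        assertTrue(new ColorDetector().isGreen(Shirt.brownWithGreenDust()));
    }
}
```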