Hi. It's an age-old story: I write code, then I write some tests to prove that what I built is right. That's pretty standard. Unfortunately, there are two problems with that. First, I'm going to write poor tests, because I already know how the code works, so I'm going to write tests based on how I know it works. And second, because I'm the developer who actually wrote the code, finding bugs makes me look bad; it's a hit to my ego. So I'm going to write nice tests that don't try to break my program too much. Neither of those things is good.

Instead, what we want to do is build in a certain level of quality by writing the tests first. Since no code has been written yet, no matter what the tests turn up, you're okay; this isn't an attack on your ego. And since we don't know how the code is going to work, only what it's supposed to do, we treat it as a black box: inputs and outputs. We set up our tests now so that, at the end, we can decide whether the code meets the behavior we were looking for and whether we built to that standard. So we decide at the beginning: here's what it's supposed to do. Then we write tests for that and build code until the tests pass, rather than the other way around. This flips the standard approach: we code until the tests pass. The tests are just sitting there, ready to run while you're developing. That's especially useful with multi-part test suites, where you can see progress in development as certain parts of the suite start to pass, even while later ones still fail. As you build, it gives you an idea of where you stand: these tests pass, these are the important tests and these are the less important ones, so most of this is done.

Now, there's an additional part to the process that's important. We start by adding tests and then run them to make sure they all fail. If you're adding tests for new functionality and some of the tests already pass, you have to look at what you're really building; that functionality already works the way it's supposed to, which might indicate a problem in the design. Once you have the tests and you've run them and they fail, it's time to develop code. We introduce new functionality into the system and then execute the tests again. All the tests in the suite should be run. Check that some of the tests now pass, new ones, meaning tests that didn't pass before. Then we iterate, refactoring as we go: we continue to add functionality that causes more tests to pass, moving toward completion of the project. Again, we're focused on what the code should do, not how the implementation is going to make it happen.

We're not throwing together tests at the last minute; testing isn't the final thing keeping the code from going out the door. We set up our tests upfront so that we know when we're done. This isn't just an idea or a theory. It has been put into practice, and we know quantitatively that developers who follow this approach write more tests, which is probably good, and tend to be more productive. Test-driven development is a key aspect of a lot of Agile methodologies. We don't want to be blinded by our own code when we go to test, especially in an environment with rapid iteration and rapid prototyping.
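To make that concrete, here's a minimal sketch of the cycle in Python. The module name pricing and the function apply_discount are hypothetical examples invented for this illustration, not anything from the lecture; the point is that the test describes inputs and outputs before any implementation exists.

    # test_pricing.py -- written BEFORE any implementation exists.
    # "apply_discount" is a hypothetical function invented for this sketch;
    # the tests pin down WHAT it should do (inputs and outputs, black box),
    # not HOW it will be implemented.
    import unittest

    from pricing import apply_discount  # fails at first: pricing.py doesn't exist yet


    class TestApplyDiscount(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertAlmostEqual(apply_discount(100.0, 0.10), 90.0)

        def test_zero_discount_returns_original_price(self):
            self.assertAlmostEqual(apply_discount(50.0, 0.0), 50.0)

        def test_negative_discount_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(50.0, -0.25)


    if __name__ == "__main__":
        unittest.main()

Running the suite at this point fails, which is exactly what we want to confirm. Only then do we create pricing.py and write just enough code to make the tests pass:

    # pricing.py -- added after the tests, and only enough to make them pass.
    def apply_discount(price: float, rate: float) -> float:
        if rate < 0:
            raise ValueError("discount rate cannot be negative")
        return price * (1.0 - rate)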
We want to be able to look at small sets of code, identify the behavior, set up tests, and then build the code to add the functionality. By doing this, we get a lot of incremental status updates, so it's good for managers, and realistically we end up with better quality code. When we know what we're looking for at the outset, the requirements problem has already been solved, and we can just code until it works. And we know when we're done: when we've reached 100% passing on our tests, we're done coding.
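For a sense of what that finish line looks like with the hypothetical suite sketched above, you just keep re-running it as you build. With Python's built-in unittest runner the verbose output looks roughly like this (trimmed), partway through development:

    $ python -m unittest -v test_pricing
    test_negative_discount_is_rejected ... FAIL
    test_ten_percent_off ... ok
    test_zero_discount_returns_original_price ... ok

Partial passes like that are the incremental status update; once every test in the suite reports ok, the functionality is done.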