[MUSIC] Now we've talked about the need for automated testing, and we've talked about unit testing. Is unit testing alone enough to make sure that we build high quality software, in that we understand the ramifications of our changes and ensure that we don't break anything? The answer is no, unit testing alone is not sufficient. And the reason is that even if we have this component built perfectly to spec, and that component built perfectly to spec, there can be problems in the specification itself, or issues we may be unaware of, like timing or resource consumption. Maybe when we load both of these components into memory, together they take up too much memory and we get problems. Maybe when they're both executing, things run too slowly. Or maybe running them in parallel causes problems. So we can't just test purely in isolation with unit testing. We also have to test components as assemblies, or as entire applications. When we pull components together and begin testing them in the context of each other, to understand how they interact, that's called Integration Testing. And that's a critical piece of testing as well.

Just like unit testing, this is something that we can automate. There are different ways of setting up a series of components and running a test that exercises all of them. Now often Integration Testing is more like having a user run through all of the test cases and interact with the application. It's not exactly like that, but it's closer, and it puts everything in context. So it's nearer to the finished product in the way it's going to execute, whereas unit testing exercises individual pieces, farther away from the final way that the component will be executed and used. So Integration Testing gives us some of the context around: will this component function correctly when it's placed with that component and they interact together?
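To make the distinction concrete, here is a minimal sketch of a unit test next to an integration test. The two components (`InventoryStore`, `OrderService`) are hypothetical, invented purely for illustration; the tests are written as plain assert-style functions, the form a runner like pytest would collect.

```python
class InventoryStore:
    """Tracks stock levels per SKU."""
    def __init__(self):
        self._stock = {}

    def add(self, sku, qty):
        self._stock[sku] = self._stock.get(sku, 0) + qty

    def remove(self, sku, qty):
        if self._stock.get(sku, 0) < qty:
            raise ValueError("insufficient stock")
        self._stock[sku] -= qty

    def level(self, sku):
        return self._stock.get(sku, 0)


class OrderService:
    """Places orders by reserving stock from an inventory store."""
    def __init__(self, store):
        self._store = store

    def place_order(self, sku, qty):
        self._store.remove(sku, qty)  # raises if out of stock
        return {"sku": sku, "qty": qty}


# Unit test: exercises InventoryStore alone, in isolation --
# it can only prove this one piece meets its spec.
def test_unit_remove_reduces_level():
    store = InventoryStore()
    store.add("widget", 5)
    store.remove("widget", 2)
    assert store.level("widget") == 3


# Integration test: exercises OrderService and InventoryStore together,
# checking behavior that only exists in their interaction.
def test_integration_order_depletes_stock():
    store = InventoryStore()
    store.add("widget", 1)
    OrderService(store).place_order("widget", 1)
    assert store.level("widget") == 0
```

Each component could pass its own unit tests perfectly and the integration test could still fail, for example if `OrderService` and `InventoryStore` disagreed about what a quantity means. That gap is exactly what integration testing exists to catch.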
And often what we see in Integration Testing is emergent behavior that we didn't expect. That's why, even though we've built really sophisticated unit test suites, we can't neglect testing other aspects of the application, or testing it in other ways. So Integration Testing is an important piece of building applications.

Now even if we do integration testing, that still isn't enough for our application. We still have to do other types of testing. Another type we typically do is User Acceptance Testing. And what this is, is: yes, we've built this application, and yes, all of our automated tests make us think it's behaving correctly and doing what we expect it to do. But at the end of the day, we still have to hand this application off to the people who are really going to use it, and get their feedback. Is it doing the thing they expect it to do? Is it helping them solve the problem they want to solve? That's what User Acceptance Testing is: giving our completed, well, completed's the wrong word, giving our application that we think is in a good state, over to our users to try it, run through the tests manually, and tell us whether it does things correctly.

Now, we do Unit Testing, and we automate that. We do Integration Testing. But unfortunately you can't automate User Acceptance Testing, because in many ways that's really about understanding human behavior: whether or not people are going to like a particular application or design, or whether assumptions we had about our users were incorrect. That's where we discover these things. With all these kinds of testing, we're getting feedback loops. With Unit Testing we get a really tight, fast feedback loop. With Integration Testing we typically get a longer feedback loop. The tests take a little bit longer to run, but they give us more indication that the application as a whole is correct.
And then the even longer feedback loop is User Acceptance Testing, but at that stage we're getting even stronger feedback that the application is ready to go. If we can get through User Acceptance Testing, that means real people are using it and, hopefully, solving real problems with it at that point, and they're telling us everything's good to go. But each of these has a different feedback loop. If our way of testing were to make changes and send them all straight to User Acceptance Testing, then by the time we discovered we had a problem, the feedback loop would be so long that it would be really expensive to fix that problem, or even to remember what the problem was about. And evolving our software and maintaining it over time, adding features, fixing bugs, would take a huge amount of time.

So we have all these different types of testing, and we have them in layers of testing. We do different types of testing because we want to make developers as productive as possible. We want to give them feedback as quickly as possible, and make the burden of knowing that their changes are correct as low as possible. But simultaneously, we have to balance that against knowing that things are really working as expected at all levels, from the pure execution of the application as a whole, to actually giving it out to users. Now there are still other types of testing that we'll talk about. But we can already see the importance of building up a robust test suite, a robust approach to testing, and trying to help our developers know that the changes they're making are correct, while limiting how difficult it is for them to find out and measure that those changes are correct. And we want to make developers happy. We want them to have fun building software, being creative, and focusing on the hard problems.
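One way that layering shows up in practice is in how an automated suite is ordered: run the tight, fast unit loop first, and pay for the slower integration loop only once the units pass. The sketch below does this with Python's standard `unittest` module; the `parse_price` function and the test names are invented here for illustration, and the "integration" tests stand in for what would really wire up databases, services, and so on.

```python
import unittest


def parse_price(text):
    """Toy function under test: '$3.50' -> 350 (cents)."""
    return int(round(float(text.lstrip("$")) * 100))


class FastUnitTests(unittest.TestCase):
    # Milliseconds to run: the loop a developer exercises on every change.
    def test_parse_price(self):
        self.assertEqual(parse_price("$3.50"), 350)


class SlowerIntegrationTests(unittest.TestCase):
    # Stands in for tests that exercise several components together; in a
    # real suite these would be much slower to set up and run.
    def test_totals_across_components(self):
        self.assertEqual(parse_price("$1.00") + parse_price("$2.00"), 300)


def run_layered():
    loader = unittest.TestLoader()
    runner = unittest.TextTestRunner(verbosity=0)
    # Tight loop first: if a unit is broken, report it immediately...
    fast = loader.loadTestsFromTestCase(FastUnitTests)
    if not runner.run(fast).wasSuccessful():
        return False
    # ...and only then spend time on the longer integration loop.
    slow = loader.loadTestsFromTestCase(SlowerIntegrationTests)
    return runner.run(slow).wasSuccessful()


if __name__ == "__main__":
    print("all layers passed" if run_layered() else "a layer failed")
```

The fail-fast ordering is the point: a developer who breaks a unit gets the bad news in the cheapest loop available, instead of waiting for the integration suite, or worse, for User Acceptance Testing, to surface it.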