Hi, this is the last lecture about testing. These are the topics we will cover in this lecture. First, we will talk about how to implement tests, then we will talk about how to perform different types of tests, for example unit testing, integration testing, and system testing. Finally, we will talk about how to evaluate tests.

When we implement tests, we want to automate the test procedures as much as possible, because running test cases can be very tedious and time-consuming: there may be many possible input values and also many system states that we have to consider when we perform testing. A test component is a program that automates one or several test procedures, or parts of them. That means it is another program that we can use to run all the test cases. There are many tools available to help us write test components that record the actions for a test case as the user performs them, and that parameterize the recorded script so it can accept a variety of input values. Spreadsheets and database applications can be used to store the required input data and also the results of each test. After we run all the tests, some of them may pass and some of them may fail, so we need a spreadsheet or a database application to keep track of the results. (A small sketch of such a parameterized, self-checking test component appears right after this part of the lecture.)

Now we talk about how to perform a test. To perform a test, we first set up the system according to the software configuration and the test configuration, then we feed the input into the system and get some results. We then evaluate the results and compare them against the expected results. If they are the same, there is no error and we are done. But if they are not the same, we know there must be some error and we have to debug the system. After debugging, we perform the same test again and evaluate the results again. Finally, we need a spreadsheet or a database application to keep track of the error rate data. That means some tests may pass and some may fail, and we use an Excel file or a database application to keep track of which tests passed and which failed.

In the last lecture, we talked about different types of tests that we can perform on a software system, for example black box testing, white box testing, and regression testing. A testing strategy specifies which testing techniques, for example white box, black box, or regression testing, are appropriate at which point in time. When we specify a testing strategy, we have to specify the following: exactly what steps need to be conducted to test a component, that is, exactly what we have to do in order to test a particular component within the software system; when the steps are planned and undertaken; and how much time, effort, and resources will be required to do the steps. But we said that test planning, that means coming up with the whole testing strategy, is difficult because of the time uncertainty in the debugging part: we don't know exactly how much time we will need for debugging. Usually, we perform testing after implementation and before the deadline, so we have limited time for testing. Given limited time, we have to balance flexibility and creativity with planning and management.
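To make the idea of an automated, self-checking test component concrete, here is a minimal sketch in Python. It is not from the lecture itself: the function under test (compute_discount) and the test data are hypothetical, and a real project would more likely use a framework such as pytest or JUnit. The sketch simply runs a table of parameterized test cases, compares actual results against expected results, and records the pass/fail outcome of each case in a CSV file, which plays the role of the spreadsheet mentioned above.

```python
import csv

# Hypothetical function under test: applies a percentage discount to a price.
def compute_discount(price, percent):
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Parameterized test data: (test id, input values, expected result).
TEST_CASES = [
    ("no_discount",   (100.0, 0),   100.0),
    ("half_price",    (80.0, 50),   40.0),
    ("full_discount", (20.0, 100),  0.0),
    ("rounding",      (19.99, 10),  17.99),
]

def run_tests(outfile="test_results.csv"):
    """Run every test case, compare actual vs. expected, and log the results."""
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["test_id", "inputs", "expected", "actual", "status"])
        for test_id, inputs, expected in TEST_CASES:
            actual = compute_discount(*inputs)
            status = "PASS" if actual == expected else "FAIL"
            writer.writerow([test_id, inputs, expected, actual, status])
            print(f"{test_id}: {status}")

if __name__ == "__main__":
    run_tests()
```

Adding a new test case is just one more row in the table, and the resulting CSV file can be opened in Excel or loaded into a database to keep track of which tests passed and which failed across test runs.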
That's why we say here that testing often happens when deadline pressure is at its highest, because we perform testing right before we release the product. So we should make progress measurable, and when we have problems, we should identify them as soon as possible.

When we develop a system, we do it outside in. What do we mean by outside in? It means that we first perform requirements capture, then do analysis and design, and finally implement the system. But when we perform testing, we do it inside out. We first start with unit testing, which means we focus on the source code. Then we integrate all the classes together and perform integration testing. Once we have the entire system, we perform system testing. Finally, we perform acceptance tests and demonstrate all the functionality in front of our client.

When we perform unit testing, we focus on the source code of the system, so we need to see the source code; most of the test cases will be white box test cases, together with some black box test cases. When we perform integration testing, we use a combination of white box and black box test cases. But when we perform system testing, we focus on the entire system without looking at the source code, so most of the test cases are black box test cases. Finally, when we perform acceptance testing, we are again talking about black box testing: we try to demonstrate something in front of the client, not the source code, so we use black box test cases. (A small sketch contrasting a white box test with a black box test appears below.)

Who should be doing the testing? Unit testing, integration testing, and system testing will be performed by the developers, but acceptance testing is done in front of the client, so it will be done by the users. That's why we say unit testing is done by the software engineer, integration testing is done by the software engineer or the testing team, and system testing is done by the testing team. Finally, for the acceptance tests, the users or the client will try out the system.
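To illustrate the white box versus black box distinction in code, here is a minimal sketch using Python's unittest module. The ShoppingCart class and both tests are hypothetical, not from the lecture: the first test is a white box unit test written with knowledge of the internal data representation, while the second only exercises the public interface and checks externally visible behaviour, in the spirit of a black box test. A real system or acceptance test would of course drive the whole assembled system (for example through its user interface or API) rather than a single class.

```python
import unittest

# Hypothetical class under test.
class ShoppingCart:
    def __init__(self):
        self._items = {}  # internal representation: name -> (unit price, quantity)

    def add_item(self, name, price, quantity=1):
        _, old_qty = self._items.get(name, (price, 0))
        self._items[name] = (price, old_qty + quantity)

    def total(self):
        return sum(price * qty for price, qty in self._items.values())


class WhiteBoxUnitTest(unittest.TestCase):
    """White box unit test: written by the developer with knowledge of the
    internal dictionary, so it can inspect the internal state directly."""

    def test_add_item_updates_internal_dict(self):
        cart = ShoppingCart()
        cart.add_item("apple", 2.0, quantity=3)
        # Looks inside the object: this test would break if the representation changed.
        self.assertEqual(cart._items["apple"], (2.0, 3))


class BlackBoxTest(unittest.TestCase):
    """Black box test: only uses the public interface and checks observable
    behaviour, without any knowledge of how the cart stores its items."""

    def test_total_for_a_typical_purchase(self):
        cart = ShoppingCart()
        cart.add_item("apple", 2.0, quantity=3)
        cart.add_item("bread", 1.5)
        self.assertAlmostEqual(cart.total(), 7.5)


if __name__ == "__main__":
    unittest.main()
```

Notice that the black box test would keep passing even if the internal dictionary were replaced by a list, while the white box test depends on the implementation details, which is exactly why unit tests are written by the software engineer who knows the code, and acceptance tests by users who only see the system's behaviour.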