Now let's take a look at this example and try to come up with the acceptance test for this particular problem statement. Again, this is the problem statement for the ASU course registration system. These are all the requirements that you can capture from the problem statement. For example, students can request a course catalog containing course offerings for the term, students decide which courses to take, and so on.

Once we have all the requirements from the problem statement, the next thing we have to do is restate all the requirements in a testable way. For example, instead of saying that students can request a course catalog containing course offerings for the term, we simply say that the system must be able to produce a course catalog containing course offerings for the term. When we perform testing, we focus on what the system can do. That's why we restate each requirement to focus on testing the system: instead of saying that students can do something on the system, we say that the system must be able to, for example, produce something or show something to the user.

Then consider the next statement, "students decide which courses to take." Remember that we are testing the system, not the student. Is this statement testable? No, because we cannot test whether a student can decide which courses to take. This one is deleted because it's not testable. We do the same thing for the remaining requirements; that is, we restate all of them in a testable way. Notice that you have to include all the requirements when you perform testing. There may be other requirements that you have gathered from the users or from the system requirements specification; remember to include them in the acceptance test as well. Finally, when you write the acceptance test, make sure that you use the magic word: demonstrate.
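As a rough illustration of what one of these restated, testable requirements might look like in code, here is a minimal sketch in Java. The class and method names (CatalogSystem, produceCatalog, demonstrateCatalogProduction) are hypothetical stand-ins, not part of the actual ASU system.

```java
import java.util.Arrays;
import java.util.List;

public class AcceptanceDemo {
    // Hypothetical stand-in for the real system under test.
    static class CatalogSystem {
        List<String> produceCatalog(String term) {
            // A real system would query the course database here.
            return Arrays.asList("CSE 110 (" + term + ")", "CSE 205 (" + term + ")");
        }
    }

    // Testable restatement: "the system must be able to produce a course
    // catalog containing course offerings for the term."
    static boolean demonstrateCatalogProduction(CatalogSystem sys, String term) {
        List<String> catalog = sys.produceCatalog(term);
        return catalog != null && !catalog.isEmpty();
    }

    public static void main(String[] args) {
        CatalogSystem sys = new CatalogSystem();
        if (demonstrateCatalogProduction(sys, "Fall term")) {
            System.out.println("PASS: system produced a course catalog");
        }
    }
}
```

The point of the sketch is the shape of the check: it exercises the system, not the student, so it can actually pass or fail in front of the client.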
For example, first we're going to demonstrate that the system can produce something. Then, for course registration, we're going to demonstrate that course offerings can be selected up to a maximum of four, demonstrate that alternative course offerings can be selected up to a maximum of two, and so on. We use the magic word "demonstrate" because we're going to demonstrate these things in front of the client. These are all the acceptance tests that we can come up with from the ASU problem statement. Again, the key is to demonstrate something in front of the client. Notice that these tests must be made operational by devising test cases for use by the client; that means the client is going to actually use the system to validate all the requirements.

Finally, we'll talk about how to evaluate tests. When we evaluate a test, the test engineer needs to evaluate the results of testing by doing the following: comparing the results with the goals outlined in the test plan, and preparing metrics to determine the current quality of the software. How do we know when to stop testing, that is, when our software is good enough to release to our users? We can consider test completeness and coverage. For example, if we pass 99 percent of the tests, we can say that our system is reliable and release the software to our users. We can also consider reliability based on the error rate: if the error rate is low, that means we don't have many mistakes in our system, so again we can say that our system is stable and release the software to our users. Within the project, we have to keep track of the error rate, for example using Excel or a database application.

Suppose the actual failure rate is higher than expected; what should we do? These are the things we can do when we have a high failure rate. We can perform additional tests to find more defects, of course.
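The stopping criteria above can be sketched as a simple release gate. This is a minimal sketch only: the 99 percent pass-rate threshold comes from the lecture, while the defect-density threshold, the per-KLOC metric, and all the names (passRate, errorRate, readyToRelease) are illustrative assumptions, not a standard.

```java
public class ReleaseCriteria {
    // Fraction of executed tests that passed.
    static double passRate(int passed, int total) {
        return (double) passed / total;
    }

    // Defects found per thousand lines of code (an assumed density metric).
    static double errorRate(int defects, int linesOfCode) {
        return 1000.0 * defects / linesOfCode;
    }

    // Stop testing and release when the pass rate is high enough and the
    // defect density is low enough (thresholds are illustrative).
    static boolean readyToRelease(double passRate, double errorRate) {
        return passRate >= 0.99 && errorRate < 1.0;
    }

    public static void main(String[] args) {
        double pr = passRate(990, 1000);  // 99% of tests pass
        double er = errorRate(8, 20000);  // 8 defects in 20 KLOC = 0.4/KLOC
        System.out.println("ready to release: " + readyToRelease(pr, er));
    }
}
```

In practice the counts feeding these two numbers would come from the error-rate log the lecture mentions, whether that is a spreadsheet or a database.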
Or we may realize that the test criteria were perhaps too strict. Or we may deliver the acceptable parts of the system now, continue revising and testing the remaining parts, and then deliver them later on, for example as DLC.

Notice that all tests are important, but if you really don't have enough time, these are the things you should prioritize. Testing a system's capabilities is more important than testing its components; that means, for example, black-box testing on the whole system is going to be more important than unit testing. Testing old capabilities is more important than testing new capabilities: if a particular function was working in an older version, then in the new version that capability should still be working. Testing typical situations is more important than testing boundary cases. These are the things you could do when you don't have enough time to perform testing.

Here is a little summary about testing. To perform testing effectively, you need good planning; that means you need to know exactly what you're trying to test. You need to use the right test in the right situation. In this course we have introduced three types of tests that you can perform on a software system: white-box testing, black-box testing, and regression testing. Perform testing early and frequently, because testing is done mainly after implementation, and you usually have time pressure after implementation; that's why it's a good idea to start testing early. Also, use a tool to help you run all the tests systematically and thoroughly. For example, if you're using Java, you can use JUnit to help you run all the unit tests. You should have clear criteria for stopping, because you cannot wait for a perfect system. Say your system passes 99 percent of the tests; then you may say that your system is stable and think about releasing the system to the end users.
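To make the regression idea concrete, here is a small sketch in plain Java so it runs without the JUnit dependency; with JUnit you would mark each check with @Test and use its Assertions methods instead. The unit under test (maxCourseSelections) is a hypothetical helper, using the four-primary / two-alternate limits from the lecture's example.

```java
public class RegressionSketch {
    // Hypothetical unit under test from the registration system:
    // up to 4 primary course offerings, up to 2 alternates.
    static int maxCourseSelections(boolean alternate) {
        return alternate ? 2 : 4;
    }

    // Old-capability check: this passed in the previous version and is
    // rerun on every build, so a change that breaks it is caught at once.
    static void testPrimaryLimitUnchanged() {
        if (maxCourseSelections(false) != 4) throw new AssertionError("primary limit changed");
    }

    // New-capability check is added alongside the old one, never replacing it.
    static void testAlternateLimit() {
        if (maxCourseSelections(true) != 2) throw new AssertionError("alternate limit wrong");
    }

    public static void main(String[] args) {
        testPrimaryLimitUnchanged();
        testAlternateLimit();
        System.out.println("all regression checks passed");
    }
}
```

Running the whole suite on every change, rather than only the tests for new code, is what makes this regression testing rather than one-off unit testing.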
That's why I say you have to decide beforehand how much testing will be enough. That's all I want to cover in this lecture.