So now let's start with unit testing. When we perform unit testing, we have a component to be tested, usually a class or a part of the program, and many inputs that we want to feed into that component. The test cases can target the interface, independent paths, boundary conditions, local data structures, error-handling paths, and so on. We emphasize white-box testing techniques here, because we are working with source code. But notice that when we are testing one class or one part of the system, the other parts of the system may not be ready yet. So how can we perform testing without the rest of the system? We need two things, called drivers and stubs, in order to complete the testing. A driver feeds the inputs we want to use into the function under test within the component. For example, if the component has a function that computes an absolute value, the driver feeds inputs into that function and then checks whether the output is correct. That is what we mean by a driver. Notice also that the class or component being tested may be calling some other classes to do something for it, but those other classes may not be ready yet. How do we perform testing when the other classes are not ready? That is why we need stubs: components called by the component under test. For example, suppose the component calls another class to get some student information. Instead of calling the actual class that returns the student information, we use a dummy class to complete the test.
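The driver-and-stub setup described above can be sketched as follows. This is a minimal illustration, not a prescribed implementation: the `Grader` component, the `StudentRepositoryStub`, and the student record it returns are all hypothetical names chosen for the example.

```python
class StudentRepositoryStub:
    """Stub: stands in for a real repository that is not ready yet.
    It always returns the same dummy student record."""
    def get_student(self, student_id):
        return {"id": student_id, "name": "Test Student", "score": -75}

class Grader:
    """Component under test: depends on a repository for student data."""
    def __init__(self, repository):
        self.repository = repository

    def absolute_score(self, student_id):
        # The function under test: fetch a score and return its absolute value.
        student = self.repository.get_student(student_id)
        return abs(student["score"])

def test_driver():
    """Driver: feeds test inputs into the component and checks the outputs."""
    grader = Grader(StudentRepositoryStub())
    assert grader.absolute_score(1) == 75
    print("absolute_score test passed")

test_driver()
```

The driver supplies the input and checks the result; the stub lets the test complete even though the real student class does not exist yet.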
This dummy class may always return the same student information to the component, so that we can complete the test even without the real student class. That is why we say a stub is simply a dummy class that returns whatever information the component needs in order to complete the test. Now, in object-oriented testing, what should we unit test? A unit is at least a class, so we have to test every class in the software system. But an object may have multiple states, and that makes testing difficult: as we discussed in the last lecture, we somehow have to put every class into every possible state before we perform testing. Then, how do we deal with inheritance and polymorphism? When a subclass overrides methods of an already tested superclass, what needs to be tested: only the overridden methods, or all the methods? And how do we deal with encapsulation? Encapsulation means information hiding; we hide what is inside an object, which is standard practice in object-oriented programming. To perform testing, we may need to provide a method for testing only, one that reports the object's state, so we can see what is actually inside a particular object. For example, when we implement a stack we usually have only two methods: push, which puts an element onto the stack, and pop, which takes one off. But in order to perform testing we may also need a method called peek, to see what is actually inside the stack. If you peek on an empty stack, you should see nothing.
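The peek idea can be sketched as a minimal stack with a testing-only accessor. This is an illustrative sketch; the class and method names are just one reasonable choice.

```python
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def peek(self):
        """Testing-only accessor: report the top element without
        modifying the stack, or None if the stack is empty."""
        return self._items[-1] if self._items else None

# Driver-style checks mirroring the scenarios in the text:
s = Stack()
assert s.peek() is None    # peek on an empty stack sees nothing
s.push("a")
assert s.peek() == "a"     # peek sees the element just inserted
s.pop()
assert s.peek() is None    # after pop, the element has been removed
```

Without `peek`, the test would have no way to observe the stack's internal state between operations.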
If you push something onto the stack and then peek, you should see the element you just inserted. If you pop something off and then peek, you should see that the element has been removed, and so on. So we need this extra peek method in order to see exactly what is inside the object.

The next topic is integration testing. Some of you may ask: if all the components are working correctly after unit testing, why perform more testing? Why perform integration testing? The reason is that interaction errors cannot be uncovered by unit testing. For example, interface misuse, interface misunderstanding, and timing errors cannot be discovered by unit testing, so we have to perform integration testing to make sure we do not have interaction errors. But we are not going to do integration testing big bang, that is, integrate all the classes together and then simply test the entire system at once; testing the whole system like that is the same as system testing, not integration testing. To perform integration testing, we do it incrementally: we integrate the classes into the system one by one and see what happens. One approach to integration testing is top-down, which means we test the top subsystem first and then go downward, testing the other subsystems. Usually the top subsystem consists of the user interface components, so using this approach, we test the user interface first.
We test the top subsystem with stubs, and then replace the stubs one by one with the actual subsystems, in depth-first or breadth-first order. As new subsystems are integrated, some subset of the previous tests is rerun; this is regression testing. The good thing about the top-down approach is that it tests the user interface components first, so if the user interface components are critical within the software system, top-down is a good choice because you test the critical components first. The bad thing is that no significant low-level processing can be tested until late in testing, and we need to write many stubs. Usually the lower-level subsystems are controllers, which control the behavior of the software system, and with top-down we test them late. Another approach is bottom-up: we test the bottom subsystems first, using drivers, and once we complete the test for a particular subsystem, we integrate it with the upper part of the system. The good thing about bottom-up is that we test the controllers first, and since the controllers are usually critical within a software system, it is a good idea to test them early. The bad thing is that we test the user interface components late in testing. So with top-down we test the user interface first, and with bottom-up we test the controllers first. Then there is a method called sandwich: we test the top part using stubs and the bottom part using drivers at the same time, so we can do them in parallel, and after we have tested the top and bottom parts of the system, we integrate them together.
This is what we call the sandwich approach. The good thing about sandwich is that we test the user interface and the controllers at the same time, in parallel; the bad thing is that we have to write many stubs and drivers. The sandwich approach is very efficient, because testing in parallel can shorten the total testing time. Notice that when we perform testing, critical subsystems should be tested as early as possible: if the user interface is critical, we test it first; if the controllers are critical, we test them first; and so on. Subsystems that address several software requirements, have a high level of control (that is, high cyclomatic complexity), are complex or error-prone, or have specific performance requirements (that is, must satisfy particular non-functional requirements) should also be tested as early as possible. And notice that regression testing is required for critical subsystems: because the subsystem is important, if we made a mistake before, we want to avoid making the same mistake again on the same subsystem.

In system testing, we test the system as a whole, to ensure that the system functions properly when integrated. These are some specific types of tests you may want to perform during system testing. Functional testing: verify all the functionalities specified in the system requirements specification. Performance testing: verify the design goals specified in your non-functional requirements. Pilot testing: select a group of end users to try out the system. Acceptance testing: let the client or the users use the system
and then verify the usability and validate all the functional and non-functional requirements against the system requirements specification. There is also installation testing, where you verify usability and validate the functional and non-functional requirements in real use. In this lecture we focus on performance testing, pilot testing, and acceptance testing.

In performance testing, we are concerned with non-functional requirements. Stress testing: check whether your system can handle many simultaneous requests at the same time. Volume testing: check whether your system can handle large amounts of data, high-complexity algorithms, or high disk fragmentation. Security testing: verify the access protection mechanisms. Timing testing: verify that the system can meet its timing constraints. Recovery testing: verify that the system can recover when forced to fail in various ways.

You may also perform pilot testing, where you invite some end users to try out the system. There are two types of pilot testing. In alpha testing, you invite some users to the developer's site and they test the system there, so the testing is performed in a controlled environment at the developer's site. In beta testing, you release the software to the end users, and they try it out on their own machines, so the test is performed at the client's site, on the client's machines. Beta testing is very common for games: the game is released to players to try out, and if they discover any bugs in the game, they can report them to the developers, who can then fix them.
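A timing-constraint check of the kind performance testing calls for can be sketched as below. This is a hedged illustration: the `handle_request` function and the 100 ms limit are hypothetical stand-ins for whatever the system's non-functional requirements actually specify.

```python
import time

def handle_request(payload):
    # Placeholder for the real request handler under test.
    return payload.upper()

def test_timing_constraint(limit_seconds=0.1):
    """Verify the handler meets a (hypothetical) 100 ms timing constraint."""
    start = time.perf_counter()
    result = handle_request("ping")
    elapsed = time.perf_counter() - start
    assert result == "PING"
    assert elapsed < limit_seconds, f"took {elapsed:.3f}s, limit {limit_seconds}s"

test_timing_constraint()
```

A real performance test would also repeat the call under load (stress) and with large inputs (volume), but the shape of the check is the same: measure, then compare against the stated requirement.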
In acceptance testing, we try to demonstrate to the client that a function or a constraint of the system is fully operational; that is, we demonstrate something in front of the client. Here are the things you can check in acceptance testing. Validation of functionality: does the system provide the required functionality? Validation of the interface: does the interface perform the desired functions and follow the required design standards? Information content: does the system store all the data correctly? Performance: does the system meet the specified performance criteria? And so on. To derive acceptance tests, you restate the written requirements in a concise, precise, and testable way by, first, grouping related requirements together and, second, removing any requirements that cannot be tested. You may also add further requirements gathered from users by looking at the use cases, the domain model, and the non-functional requirements in your system requirements specification. Then, for each requirement, you come up with an evaluation scenario that will demonstrate to the client that you have achieved the corresponding requirement. Notice that since most of these evaluation scenarios depend on the user interface, they cannot be completed or tested until the user interface is decided.
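Turning a written requirement into a concise, testable evaluation scenario can be sketched as follows. Everything here is hypothetical: the requirement wording, the course list, and the `search_courses` function are invented for illustration.

```python
# Requirement, restated testably: "Searching the catalogue by keyword
# returns exactly the courses whose titles contain that keyword."

COURSES = ["Software Engineering", "Software Testing", "Databases"]

def search_courses(keyword):
    # Placeholder for the real system functionality under acceptance test.
    return [c for c in COURSES if keyword.lower() in c.lower()]

def acceptance_scenario():
    """Evaluation scenario demonstrating the requirement to the client."""
    results = search_courses("software")
    assert results == ["Software Engineering", "Software Testing"]
    assert all("software" in c.lower() for c in results)
    print("acceptance scenario passed")

acceptance_scenario()
```

The point of the restatement is that the scenario has a single, observable pass/fail outcome the client can watch being demonstrated.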