As you have heard, there are many aspects of 3D analysis that are very computationally intensive. There are basically two ways you can optimize your problem for a fast solution: you can acquire more computing hardware to solve your problems faster, or you can make your code more efficient. The latter requires certain types of programming that Brian will now talk more about.

>> We can do more advanced stuff. We could do a heat equation in this case, where we say that each inner element of a matrix should be updated from its four neighbors: up, down, left, and right. We can express that in Python, as in Matlab, in six lines of code in this case. Very simple. If we want to do this in another programming language, say C++, which is popular, and want to use multiple CPU cores, it grows to 23 lines of code, and you can easily see that it becomes harder to understand what is happening in the code. If you want to run it efficiently, you get a few more lines of code, but more importantly, you change to a lot of pointer arithmetic. I wrote this example myself, and it actually took me two hours to make sure that it was correct. So I was not being productive: the Python code we can write in ten minutes; the C code here takes two hours. If you want to run efficiently on supercomputers, the code becomes even bigger, and you can hardly read it by now. If you want to run it on a graphics card, it becomes so big that the font on my screen cannot show the whole program in a readable form. It is said that a professional programmer can write ten lines of code a day, including debugging. In scientific scripting, we might manage twice as much. But spending several days writing an efficient version of a tiny little folding operation like this one is not an efficient use of our time as scientists. What we came from was this: a ten-minute example, maybe fifteen minutes for you if you haven't tried it before, but it was very simple.
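The six-line neighbor update that Brian describes can be sketched in NumPy along these lines. The grid size, boundary values, and iteration count below are illustrative choices of mine, not taken from the lecture:

```python
import numpy as np

# Illustrative grid: a hot top edge, cold everywhere else
# (these boundary conditions are my own example).
grid = np.zeros((100, 100))
grid[0, :] = 100.0

for _ in range(500):  # iterate toward the steady-state solution
    # Every inner element becomes the average of its four
    # neighbors: up, down, left, and right.
    grid[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                               grid[1:-1, :-2] + grid[1:-1, 2:])
```

The slice expressions describe the whole neighbor update at once, with no explicit loops over matrix indices; this is the compact form the multicore C++ version expands to 23 lines.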
And if we look at how it compares to the more fine-tuned and highly optimized versions: compared to ordinary Python it is obviously much faster, something like 50 times. Compared to the C and C++ solutions, you can see that the just-in-time compiled solution is actually just as fast. There is no point in us as scientists trying to write highly efficient code; the computer is just as good at doing that as we are ourselves. We might as well use this high level of specification, also known as declarative programming: we tell the computer what we want done, not how we want it done. That leaves more degrees of freedom for the compiler. We get more readable code, and the computer actually runs it faster. So it's a win-win situation.

>> This recommended practice of declarative programming that Brian talks about here may remind you of the practice we use in the workflows, where the workflow to a large extent relies on work delivered by others through the various modules used in the workflow. This is, in general, a very efficient way to work, also in programming, as you heard from Brian. In our course we have tried to do the same thing. In the honors track, where you have the opportunity to work with Jupyter Notebooks, you will see that this enables you to work in a high-level programming language while drawing on the power of some very efficient code packages that are installed in the server environment. You may also experience some limitations in certain cases. For example, the server environment that these notebooks run on does not give you access to GPU resources, which means that some of the heavier reconstruction exercises will not be possible to carry out. For those you will need to make, for example, your own Jupyter Notebook installation in an environment that supports GPU hardware, which can be accessed relatively easily through cloud services.
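The declarative point can be illustrated by computing the same four-neighbor average twice: once with explicit loops that spell out *how* to visit every element, and once as a single slice expression that states *what* the result should be. The array name and size here are my own illustrative choices, not from the lecture:

```python
import numpy as np

a = np.arange(36, dtype=float).reshape(6, 6)

# Imperative style: we dictate *how* every inner element is visited.
looped = a.copy()
for i in range(1, 5):
    for j in range(1, 5):
        looped[i, j] = 0.25 * (a[i - 1, j] + a[i + 1, j] +
                               a[i, j - 1] + a[i, j + 1])

# Declarative style: we state *what* the result is; the library
# (and its compiler or runtime) decides how to execute it.
sliced = a.copy()
sliced[1:-1, 1:-1] = 0.25 * (a[:-2, 1:-1] + a[2:, 1:-1] +
                             a[1:-1, :-2] + a[1:-1, 2:])

assert np.allclose(looped, sliced)
```

Both forms compute the same values; the declarative one is shorter, easier to read, and leaves the runtime free to parallelize or compile the operation, which is exactly the freedom Brian says a just-in-time compiler exploits.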
There you can buy resources as you need them, at whatever scale you need them, and this is relatively easily accessible today. So, with this, I think we have concluded our module on computational resources.