So, the aim of this part, basically almost the last part, because I want to also show you that these ideas are recently being explored experimentally, is to focus now on dendritic computation: to show you some of the ideas that came out from a better understanding of dendrites, due to the cable theory of neurons. I just want to highlight a few examples. One I already mentioned, and I will show you a more specific example: dendrites enable neurons to act as multiple functional subunits. Because the dendrite is electrically distributed, some regions may behave electrically, locally, very differently from other regions. You may think of the dendrite as computing first locally, here, here, and there, and then globally at the soma and axon. So that's one idea. I will also show you that dendrites can classify inputs, again because they are electrically distributed; this is the idea of dendrites as classifiers. I want to focus more on the green point, number three, about dendrites implementing directional selectivity: computing the direction of motion, of visual motion, for example. I will not discuss it, but there is very beautiful work showing that dendrites, in particular in the auditory system, can be used to improve sound localization. As you know, in the auditory system we need to detect, to locate, where the sound comes from: here, here, or there. For animals this is very important, because they have to escape from the lion in the correct direction, whether it comes from here or there. So sound localization is a very important computation of the auditory system, and there are some beautiful ideas about how dendrites in a particular region, with synapses coming from both ears, can compute the location of a sound. I'm not going to talk about this, just to mention that it happens. And recently, there are some beautiful papers showing that dendrites can sharpen the tuning of cortical neurons.
I mentioned before that cortical neurons, let's say in the visual system, are tuned to a particular orientation, or a particular direction of motion. One can show that dendrites can sharpen this tuning, can make the neuron more accurate in its sensitivity to this angle and not to that angle. So this is another computation that is suggested to be performed by dendrites. But let's go back for a second to what we already discussed, the McCulloch-Pitts neuron here, and let me show you an idea by Christof Koch and Tomaso Poggio from MIT: that the mere fact that you have a distributed system matters. So this is your soma, and this is your dendrite, and you have synapses: inhibitory, excitatory, inhibitory, excitatory, and so on. The mere fact that you have a structure enables you to perform a more extended logical operation than this one. And let me explain very briefly why. Because here, the location of the inhibition becomes important. Of course, in a point neuron there is no location; everything is at the point. But here, this inhibition is closer to the cell body than that inhibition. And intuitively, you may already understand that this proximal inhibition is more global: it affects all the excitation that comes from more distant regions. So if this excitation, or this one, or this one is active, the inhibition proximal to the soma can veto it. That inhibition, for example, is most effective on this excitation but not on that one, because for that excitatory input most of the current flows in this direction, and it doesn't care that there is inhibition behind it, so to speak. So this arrangement of inhibition relative to excitation, the strategic location of inhibition, makes a difference in terms of what kind of logical operation you may perform. For example, now you can write the following sentence, rather than that one: I will get an output of one.
I will get an output if e3 is active and not i1, nor i2, nor i3. All the inhibition on the path between the excitation and the soma is harmful; it may veto this excitation. Or e2 may be active, and not i2 nor i1; then I get an output. Or e1 may be active, and not i1; then I will get an output. So you can see that Koch and Poggio suggested that the dendritic tree endows the neuron with more sophisticated logical operations, compared to a point McCulloch-Pitts neuron. So that's the first idea to mention regarding the effect of a distributed cable system: the strategic location of inhibition versus excitation. Another idea, by Bartlett Mel, is the notion of functional subunits. Here he drew a neuron with regional inputs. So this is one region receiving synaptic inputs on the dendritic tree, and this is another region receiving synaptic inputs. He was wondering: can you think of the neuron as first performing, locally and independently, some kind of non-linear operation, a non-linear summation of these local synapses? So these n1 synapses locally perform some non-linear operation here, then here, then here, and only then you sum these local operations globally at the soma: what he calls the sigma-pi neuron. Locally, you perform some non-linear operation only among your neighboring synapses, and only then you sum up all these local operations. This is possible because the neuron is a distributed system, and he showed that indeed, because of certain properties of dendrites, the fact that you have clustered synapses, here a cluster, here a cluster, here a cluster, you may get local non-linearities which eventually sum up at the soma differently than if you did not have clustered synapses. So, the clustering of synapses into sub-regions, whereby within each region.
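Going back to the Koch-Poggio example, the veto logic can be written as a minimal Boolean sketch. The synapse ordering along the branch below is an illustrative assumption: an excitatory input contributes only if no inhibition sits on its path to the soma.

```python
def dendritic_output(e1, e2, e3, i1, i2, i3):
    """Boolean output of one dendritic branch with synapses ordered
    soma <- i1 <- e1 <- i2 <- e2 <- i3 <- e3 (most distal).
    Each excitatory input drives the cell only if no inhibitory
    synapse lies on its path to the soma (the 'on-the-path' veto)."""
    path_e3 = e3 and not (i1 or i2 or i3)  # e3 must pass i3, i2, i1
    path_e2 = e2 and not (i1 or i2)        # e2 must pass i2 and i1 only
    path_e1 = e1 and not i1                # e1 must pass i1 only
    return path_e3 or path_e2 or path_e1

# Distal inhibition i3 cannot veto the more proximal e1:
print(dendritic_output(e1=True, e2=False, e3=False,
                       i1=False, i2=False, i3=True))   # True
# Proximal inhibition i1 vetoes even the distal e3:
print(dendritic_output(e1=False, e2=False, e3=True,
                       i1=True, i2=False, i3=False))   # False
```

A point neuron that simply sums the same six inputs could not distinguish these two cases, since both have one active excitation and one active inhibition; only the spatial arrangement separates them.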
You have, in this case, a super-linear operation locally; you may then get an output that is different from what you would get without these local clusters. So the output looks different here and here. Bartlett Mel used this idea to show that a neuron can behave as a classifier. So here is the idea. You take an image, a face, and you input the image to different regions of the dendritic tree. For example, suppose this is my face. My face is projected systematically, topographically, so that nearby regions of the face, let's say my nose, are projected onto nearby regions of the dendrite, onto a cluster that performs a non-linear operation locally. Bartlett Mel showed that if this is the case, if you topographically match the image so that nearby regions of the image map onto nearby regions of the dendrite, then the output of this cell becomes very sensitive to a particular face, in this case my own face. Another input that is not a replica of my face will generate fewer spikes, because the mapping is no longer accurate. And so, eventually, the output of this dendrite says: okay, this is Idan Segev, or similar. And everything that is not similar enough will generate another output, which means that the cell becomes a classifier. Everything that is similar to me, due to these local non-linearities, will generate something different than everything else. So the distributed dendrite, with local synapses performing local non-linear operations, may behave as a classifier. That's Bartlett Mel's idea for dendrites. But I think really the most influential, original idea beyond McCulloch and Pitts is this one.
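Mel's subunit scheme just described can be sketched numerically. The squaring non-linearity and the binary inputs below are illustrative assumptions, not Mel's actual biophysical model; the point is only that each cluster is summed and passed through a local super-linear function before the soma sums the subunit outputs.

```python
def subunit(cluster):
    """Local super-linear operation on one synaptic cluster:
    sum the inputs, then square (any expansive non-linearity works)."""
    s = sum(cluster)
    return s * s

def sigma_pi_neuron(clusters):
    """Somatic output: a linear sum over the non-linear subunits."""
    return sum(subunit(c) for c in clusters)

# Four co-active synapses clustered on ONE subunit...
clustered = sigma_pi_neuron([[1, 1, 1, 1], [0, 0, 0, 0]])
# ...versus the same four synapses scattered over TWO subunits.
scattered = sigma_pi_neuron([[1, 1, 0, 0], [1, 1, 0, 0]])
print(clustered, scattered)   # 16 8
```

Same total synaptic input, different somatic output: the spatial arrangement of the active synapses, not just their number, shapes the response, which is what lets the cell respond strongly to a correctly mapped face and weakly to a scrambled one.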
By now a classical example, from Rall in 1964, showing that neurons can behave as direction-selective devices. And I think you now have all the ingredients to understand what I am saying. So here is the example again. You have a cell body and a distributed dendritic tree; this region is distal, this one more proximal to the soma. And you do the following very simple exercise. You have synapses, these red spots, excitatory synapses distributed over this dendritic tree, and they may operate in different temporal orders, at different times. In one case, the excitatory synapses start near the soma, so they act here, here, here, and here: proximal, less proximal, less proximal, distal, in time. In the other direction, you use exactly the same synapses, but you start first with the distal synapse, then more proximal, more proximal, more proximal; that's the only difference. So it's the same machinery, dendrites and excitatory synapses, but the temporal order sweeps the synapses either proximal to distal or distal to proximal. Now let's look at the EPSP at the soma. I record at the soma and look at the voltage profile emerging there, from the order A, B, C, D or from the order D, C, B, A. A, B, C, D at the soma looks like this: you immediately see an EPSP coming from the nearby synapse, and on the shoulder of this EPSP, which would otherwise have decayed, you have the second, more distant synapse summing up temporally, then the third, then the fourth. So you see a broad shoulder for this order. If you do the reverse order, D, C, B, A, you see a delay here, because the distal synapse was activated first, and its signal takes time to reach the soma. But on the way, it summates with a more proximal one, and a more proximal one, and a more proximal one; they summate one on top of the other, building up a large voltage.
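This summation experiment can be reproduced with a toy numerical sketch. All numbers here, the delays, attenuation factors, time constant, and activation interval, are illustrative assumptions chosen only to show the qualitative effect: sweeping the same four synapses distal-to-proximal lets their delayed EPSPs arrive at the soma together, while proximal-to-distal spreads them apart.

```python
import math

def alpha_epsp(t, amp, tau=2.0):
    """Alpha-function EPSP as seen at the soma (zero before onset)."""
    return amp * (t / tau) * math.exp(1.0 - t / tau) if t > 0 else 0.0

# Synapses A (proximal) .. D (distal): the dendritic delay to the soma
# grows with distance, and the peak amplitude attenuates with distance.
DELAYS = [0.5, 1.5, 2.5, 3.5]    # ms, for A..D
AMPS   = [1.0, 0.8, 0.65, 0.5]   # mV, for A..D
DT_ACT = 1.0                      # ms between successive activations

def peak_soma_voltage(order):
    """Peak of the linearly summed somatic EPSPs for one sweep order."""
    def v(t):
        return sum(alpha_epsp(t - step * DT_ACT - DELAYS[syn], AMPS[syn])
                   for step, syn in enumerate(order))
    return max(v(i * 0.01) for i in range(2001))   # scan 0..20 ms

peak_a_to_d = peak_soma_voltage([0, 1, 2, 3])  # proximal -> distal sweep
peak_d_to_a = peak_soma_voltage([3, 2, 1, 0])  # distal -> proximal sweep
print(peak_d_to_a > peak_a_to_d)               # True: preferred direction
# With a spike threshold set between the two peaks, only the
# distal-to-proximal sweep makes the soma fire.
```

In the distal-to-proximal sweep, the activation interval here exactly compensates the extra dendritic delay, so the four EPSPs coincide at the soma; in the opposite direction they arrive spread out in time, giving the early, broad, low profile.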
So you see that you have two voltage profiles from the same synapses, just because there is a delay, which we studied: the delay from the distal synapse to the soma is longer than from the proximal one. And because of this delay, depending on the velocity of sweeping the synapses, if the delay is appropriate you may be able to sum the EPSPs one on top of the other and build a large voltage: a large peak for this direction, and a smaller but broader peak for the other direction. Why is this so important? Because now assume that I have a spike threshold here, at the soma. You see that only one direction of synapse activation will produce an output spike; in the other direction you remain sub-threshold, you do not cross the spike threshold, so no spike. And this is what I want you to look at now. Suppose, for a second, that this neuron is a direction-selective neuron in the retina. We'll discuss the retina in a second, and I will show you that already in your retina there are cells, ganglion cells, that are direction selective. So let's build a direction-selective neuron based on what I just showed you from Rall's idea. Say these excitatory inputs impinging on the dendritic tree each come from a receptor sensitive to light. When a spot of light moves in this direction, it activates this receptor, which activates the dendrite synaptically, then this one, this one, this one, from distal to proximal, from D to A. And if the light moves from left to right, the first synapse to be activated is this one, then this one, then this one: this is the A to D direction. And you can see what happens: as you already saw, in this direction of light motion the response is early but broad.
In the other direction, you build up a voltage that may cross threshold and generate a spike. So I just built for you, using Rall's cable theory and ideas about delays, a direction-selective unit. I'm just using synapses, I'm using dendrites, and I'm using what I know about delays; I'm using all this biophysics to perform a computation. Of course, you need a very exact mapping from the outside world onto the dendrite, a very topographic, systematic mapping. But if you have this mapping, you have a direction-selective neuron. Of course, if all these synapses sat at a single point, you would lose this computation. This computation depends on the distribution of the synapses over the dendritic tree. So something about the structure and its synapses generates a possible function. Okay, so let's summarize this aspect as follows. Rall, this is again Wilfrid Rall, suggested, in a deep sense, that when you build a mathematical model you have to be very careful in deciding what level of resolution, what granularity of model, is needed in order to explain a phenomenon. You may look at the point neuron, but the point neuron cannot compute directional selectivity on its own, because of its total symmetry; it cannot compute it using the mechanism I just described, because that mechanism depends on structure. The question is, do you need all the complexity of neurons to perform this? And the answer is, of course, no. So Rall suggested: go to whatever level you feel is relevant for modeling, in order to understand the phenomenon you are trying to understand. You may climb all the way up, and we can do it today, to a model with all the very fine details of the cell. But maybe, in order to understand a particular computational phenomenon, this is enough, or this is enough, and this is not enough.
So, just metaphorically, I want to tell you that when you go to a very complex, detailed model, you may think about it like the case of Rodin. This is The Kiss by Rodin, in the Rodin Museum in Paris. It is a detailed model of a kiss: you have a woman and a man, you have a kiss, you have legs, you have hands, you have the details of a kiss. I would call it a simulation of a kiss. It's not a real kiss, because this is a model, a sculpture; it is a detailed simulation, which is beautiful, but maybe not needed. Because maybe, in order to capture the essence, you are satisfied with this level of description, and this is Brancusi's Kiss, also in Paris, in the Atelier Brancusi at the Pompidou Centre. You get a kiss, a minimal kiss. If you go too far toward reduction, you may lose the kissiness of the kiss, but this is a compact description of a kiss; maybe this is the minimal level of description needed to capture the kiss. This I would call a theory of the kiss, because a theory really is a high-level description, still trying to explain the same phenomenon, in this case a kiss, and in the earlier case, directional selectivity. For directional selectivity, as I just mentioned, you don't need all the details. You need the cable, you need the cylinder; that's enough. You need maybe this level, not this level, not that level, but this is not enough. So this, metaphorically, is the range between a detailed model, which is more like a simulation, and a theory, which is more like an abstract view of the phenomenon you are trying to understand.