0:01

Hi. In this part of the lecture, we're moving beyond Hodgkin-Huxley to think about simplified models. Can one build simple models that capture the behavior of true neurons, but are analytically tractable, so that one can do some analysis on them and understand how different ion channels contribute to their interesting dynamics? Or else, can one build a large-scale simulation out of models that are as simple as possible, that is, that take as little computational time as possible, and yet capture the relevant and interesting dynamics of real neurons?

So here are a few examples of firing patterns from real neurons driven by a noisy input. On the top, you see a cortical neuron early in development, and then here, thalamic neurons recorded under different depolarizations. Here in particular you can see a very characteristic bursting pattern, where a bunch of spikes are generated in a clump. At a different depolarization, those bursts almost disappear and you get single spikes, more like the case of the cortical neuron. And finally, here's a motor neuron. In this case, you see very regular firing: motor neurons tend to fire very regularly, and the noise leads only to small deviations in the regular timing of spikes.

Â 1:18

So we see that neurons can have a wide range of firing patterns, which come about partly because of the nature of their dynamics and partly because of the nature of their inputs. Let's look at some potential examples of firing patterns. Imagine that a neuron fired regularly like this, and that to a second input it also fires regularly, but with a different spiking interval. One might feel comfortable thinking about this neuron's behavior as expressing a rate code: the spike frequency signals the input.

What, though, if we now had this case? Here, the mean frequency is the same, but the firing times of spikes are shifted slightly. So we might imagine that these little changes in local frequency encode stimulus information, maybe like frequency-modulated, or FM, signals. In the next case here, the mean firing rate might still be important, but there's so much variability in timing that precise spike times might mean something distinct about the input. And what about this final case? Here you see that there are perhaps two distinct symbols in the code. This looks like the bursting that we saw in the thalamic neuron: are these single spikes signaling something different from these groups of spikes, these bursts?

So neurons are capable of firing with many different kinds of outputs, and if we're trying to come up with a reduced model, we'd like to aim for one that can represent these different behaviors. Try to keep this range of behaviors in mind as we go through different ideas about how to make reduced or simplified model neurons.

Â 2:53

Let's start with the simplest case. Let's just try to write down an equation for V that does something like what a neuron does. So we have a differential equation that looks like this, and our task is to find a good function f of V that makes the model do what we want. As we observed, the behavior of the neuron can be quite close to linear as long as it's not near spiking. So how bad would it be to assume that we simply have a linear neuron, that is, an equation such as we found for the passive membrane? Note that from now on I'll set the capacitance equal to 1, so we don't have to carry constants around.

I'm drawing this case above. Here, f of V is simply minus a times V minus V naught. So how do the dynamics of such a neuron look? Here's our equation for the voltage; for now, let's leave aside the input. f of V is minus a times V minus V naught. We have a fixed point where dV/dt equals zero, that is, at V equals V naught. Now, how do the dynamics look above and below that fixed point? If you have a voltage on this side of the fixed point, then dV/dt for that value of the voltage is positive, so the voltage increases, and that's true everywhere along this part of the line. On the other side of the fixed point, however, dV/dt is negative, and so anything out here moves back toward V naught. That's what makes V naught a stable fixed point.

But how do we get a neuron like this to fire a spike? We need to add in a couple of things. For one thing, we need to say that there's some threshold. As I move around in V, although I'm always being drawn back to this fixed point, if I happen, because of the addition of some input, to be pushed up to some threshold voltage, I'm going to call that the time of a spike and just jump the voltage up to a maximum. If we plot what that looks like in time, we have some voltage that's varying along; it hits the threshold value, V thresh, and we instantaneously set it equal to the maximum of the spike. The next thing we do is reset that voltage: we take it back to some V reset out here, and now the input will continue to push it around, but starting at that new reset value. And so you can see that this mimics pretty well what spike trains look like.

Let's have a look at that directly. This is just like the passive membrane: remember the equation that we wrote down earlier for the passive membrane. This captures that linear behavior, with the additional rule that when V reaches the threshold, a spike is fired and then V is reset. V naught is just the resting potential of the cell. So here's an example of how the integrate-and-fire model responds to a particular input. It might be hard to distinguish that from a real spike train.
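As a concrete sketch, here is a minimal integrate-and-fire simulation in Python. The membrane parameters and the input amplitude are illustrative choices of mine, not values from the lecture:

```python
import numpy as np

def simulate_lif(I, dt=0.1, a=0.1, V0=-65.0, V_thresh=-50.0,
                 V_reset=-70.0, V_spike=30.0):
    """Integrate dV/dt = -a*(V - V0) + I(t) by forward Euler.
    When V crosses V_thresh, record the spike time, paste on a
    spike peak (V_spike), and jump V back to V_reset.
    All parameter values are illustrative."""
    V = V0
    trace, spike_times = [], []
    for i, I_t in enumerate(I):
        V += dt * (-a * (V - V0) + I_t)
        if V >= V_thresh:
            spike_times.append(i * dt)
            trace.append(V_spike)  # the pasted-on spike
            V = V_reset            # reset rule
        else:
            trace.append(V)
    return np.array(trace), spike_times

# a constant suprathreshold input produces regular firing
trace, spikes = simulate_lif(np.full(5000, 2.0))
```

With a constant drive, every interspike interval after the first is identical, which is the rate-code picture from earlier; feeding in a noisy input instead would jitter the spike times.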

While the integrate-and-fire model has a lot of advantages and certainly captures some basic properties of neurons, one can come a lot closer to the true dynamics of neurons. In the integrate-and-fire model, we had to paste on the spike to make the model excitable. How can we make it intrinsically excitable?

Â 6:18

What we need to do is add some more structure to our f of V: we need to give f of V a range where the voltage can, in fact, increase. So what have we done here? We've added another fixed point. We still have our stable fixed point here; now, what's up with this new fixed point? Remember that on this side, the voltage heads toward the stable fixed point. What's going to happen as we cross this new fixed point? For voltages larger than this value, you can see that dV/dt is now positive, and the voltage starts heading out to larger and larger values. So with dynamics like this, if some input takes you above this value, the voltage is just going to keep increasing.

That means we still need a couple of extra pieces, as we needed for the integrate-and-fire neuron. We're going to add a maximal voltage, not a threshold: the threshold is now determined intrinsically, by the crossing of this unstable fixed point of f of V. But we need some maximal voltage beyond which the spike cannot continue to increase, and when we reach that voltage, we're going to reset again, back to some reset value.

One example of a form of f of V that works quite well is simply a quadratic function. Another choice of f of V, which has been shown to fit cortical neurons very well, is the exponential integrate-and-fire neuron. Here, we choose f of V so that it has an exponential piece: part of the subthreshold dynamics is linear, and there is an exponentially increasing part that mimics the rapid rise of the spike. And again, we have to add a maximum and reset.
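As a sketch, here are the two nonlinearities just described. The parameter values, and the particular constants in front of each term, are my own illustrative choices, not fits from the lecture:

```python
import numpy as np

def f_quadratic(V, a=0.02, V_rest=-65.0, V_thresh=-50.0):
    """Quadratic f(V): zero at the stable fixed point V_rest and at
    the unstable fixed point V_thresh, positive above V_thresh.
    Parameter values are illustrative."""
    return a * (V - V_rest) * (V - V_thresh)

def f_exponential(V, a=0.1, V_rest=-65.0, V_T=-50.0, delta=2.0):
    """Exponential integrate-and-fire f(V): a linear leak plus an
    exponential upswing; delta sets how sharply it turns on."""
    return -a * (V - V_rest) + a * delta * np.exp((V - V_T) / delta)
```

Between the fixed points both functions are negative, so the voltage falls back toward rest; above the effective threshold they are positive and the voltage runs away, which is why both models still need the pasted-on maximum and reset.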

The exponential model has an important parameter, delta, which governs how sharply increasing the nonlinearity is. Here's a strongly related example of a one-dimensional model that gets a lot of use: the theta neuron. In the theta neuron, the voltage is thought of as a phase, theta, and when the phase reaches pi, we call that a spike. What's neat about using a phase instead of a continuous variable like voltage is that as soon as you pass through pi, you wrap around to minus pi, and that gives you a built-in reset, so you don't need to add that extra rule to the dynamics. The dynamics are given by this equation here, which has been shown to be equivalent to the one-dimensional voltage model with a quadratic nonlinearity. This model also has a stable fixed point, V rest, and an unstable point, V thresh, which acts like a threshold. And because this model can fire regularly even without input (imagine that I is zero here: you can see that the dynamics still oscillate), the theta neuron is often used to model periodically firing neurons.
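A minimal sketch of the theta neuron, assuming the standard Ermentrout-Kopell form of the equation; the constant drive I here is an illustrative value that puts the model in its periodically firing regime:

```python
import numpy as np

def simulate_theta(I=0.05, dt=0.01, T=200.0):
    """Theta neuron: dtheta/dt = 1 - cos(theta) + (1 + cos(theta)) * I.
    The phase passing pi counts as a spike; wrapping around to -pi
    is the built-in reset, so no extra rule is needed."""
    theta = -np.pi / 2  # an arbitrary subthreshold starting phase
    spike_times = []
    for step in range(int(T / dt)):
        theta += dt * (1 - np.cos(theta) + (1 + np.cos(theta)) * I)
        if theta > np.pi:           # a spike: wrap the phase around
            spike_times.append(step * dt)
            theta -= 2 * np.pi
    return spike_times

spikes = simulate_theta()
```

For constant I > 0 there are no fixed points, so the phase rotates with a steady period (approximately pi over the square root of I), which is why this model is a natural fit for periodically firing neurons.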

Aesthetically, let's say, we're still a little pained by this construction of the maximum and the reset, or even the reset on the phase variable. Is there anything else we can do to improve this simple model? How might we prevent our spike from increasing to infinity, apart from putting some maximum on it? So, let's try the following. What does that do? Now there's another fixed point, here. We still have our stable fixed point; we have an unstable fixed point, which acts as our threshold; and now we have another fixed point. Is it stable or unstable? Let's just check. Here we're increasing; there we're decreasing; here we're increasing; and here we're moving back toward that fixed point, so this is a stable fixed point. Hopefully it's intuitive by now that you can tell whether a fixed point in this one-dimensional representation is stable or unstable just by looking at the slope of f of V at that point: whenever the slope is negative, it's a stable fixed point, and whenever the slope is positive, it's an unstable fixed point. So now we have this third fixed point.
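This slope rule is easy to check numerically. As an illustration (the cubic f of V and its parameter values here are my own, not the lecture's), here is a function with exactly the three fixed points just described, classified by the sign of f'(V):

```python
def f_cubic(V, k=0.001, V_rest=-65.0, V_thresh=-50.0, V_peak=30.0):
    """A cubic f(V) with fixed points at V_rest, V_thresh, V_peak;
    the leading minus sign makes the outer two stable.
    Parameter values are illustrative."""
    return -k * (V - V_rest) * (V - V_thresh) * (V - V_peak)

def is_stable(f, V_star, h=1e-4):
    """A fixed point V* is stable iff the slope f'(V*) is negative
    (estimated here by a central difference)."""
    slope = (f(V_star + h) - f(V_star - h)) / (2 * h)
    return slope < 0
```

Here `is_stable(f_cubic, -65.0)` and `is_stable(f_cubic, 30.0)` both come out True, while the middle fixed point, the threshold, is unstable.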

What are the dynamics? Once we get above our threshold, we increase, but instead of increasing without bound, we go to this new stable fixed point. So that's great. The problem, however, is that the voltage stays there: the system is called bistable. To allow the dynamics to come back from that stable fixed point, let's remember what happened in Hodgkin-Huxley, where two separate mechanisms helped to restore the voltage back to rest.

Â 10:53

One was the switching off of the drive toward the sodium equilibrium potential, as the sodium current inactivated. The other was that the potassium channel activated, pulling the voltage back toward the potassium equilibrium potential. Here we need to do something similar to pull the voltage back toward rest, and that is to include a second variable to take care of inactivation. That's done here by including this second variable, u. Here, u decays linearly, but it also has a coupling with V: this function of voltage means that when the voltage gets large, u is also driven to be large. Then we couple the inactivation variable into the voltage equation. One would want the function G(u) to be negative, so that a large u pulls V down again.

This leads us to consider models that have two dynamical variables. Now, instead of drawing f of V against V, we need a new kind of plot, called a phase plane diagram. The phase plane is just the plane defined by the dynamical variables V and u. Our understanding of how the model behaves is organized not just by identifying the fixed points, as we have been doing so far, but by looking at the entire line of points where one or the other variable has zero derivative. So we can define these nullclines. Here's the V nullcline, the line on which dV/dt equals zero: we set this equation equal to zero, which, if we solve it, gives us u as a function of V, and here it is. Similarly, there's a u nullcline, on which du/dt equals zero, and that defines this other curve. For most neural models, the nullclines have shapes something like I've drawn here. In this particular case, there's one true fixed point, where both dV/dt equals zero and du/dt equals zero, and that's here: this is the resting state.

Now we can think about what happens if we start out at some particular value of V and u. We're going to head out on a trajectory whose velocity has a component in the V direction, given by dV/dt evaluated at (V, u), and a component in the u direction, given by du/dt evaluated at (V, u). The nice thing about these nullclines is that they give us a sense of how trajectories in this two-dimensional plane will go. The green curve divides the plane into a region in which the voltage is increasing, down here on this side of the green curve, and a region in which it is decreasing, over here. Likewise, the red curve divides the plane into regions in which u is either increasing or decreasing.

So if we start near rest, with an input that takes us out into some larger voltage range, the nonlinearity in voltage says that we start to move quickly in V, and we undergo what will look like a spike. Once we've crossed the green line, remember, the direction of the voltage changes, so we come backward and wrap around in this direction; but we still need to increase in u in this half of the plane. Once we've crossed the red line, u starts to decrease and we come back this way. And so we have a spiking trajectory. If we now plot the voltage as a function of time, it starts small, rapidly increases, and then, depending on how quickly it moves along this part of the nullcline, gradually comes back again.
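The lecture doesn't commit to one specific two-variable model here, so as an illustration, here are the classic FitzHugh-Nagumo equations, which have exactly this phase-plane picture: a cubic V nullcline, a straight u nullcline, and a spiking loop around them. The parameter values are the standard textbook ones; the drive I is an illustrative choice:

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, dt=0.01, T=100.0):
    """FitzHugh-Nagumo, integrated by forward Euler:
        dV/dt = V - V**3/3 - u + I    (V-nullcline: u = V - V**3/3 + I)
        du/dt = eps*(V + a - b*u)     (u-nullcline: u = (V + a)/b)
    u is the slow recovery variable: a large V drives u up, and the
    -u term in dV/dt then pulls the voltage back down."""
    eps, a, b = 0.08, 0.7, 0.8  # standard textbook values
    V, u = -1.0, -0.5
    Vs = []
    for _ in range(int(T / dt)):
        dV = V - V**3 / 3 - u + I
        du = eps * (V + a - b * u)
        V += dt * dV
        u += dt * du
        Vs.append(V)
    return np.array(Vs)

Vs = fitzhugh_nagumo()
```

With this drive, the single fixed point is unstable and the trajectory settles onto a limit cycle, tracing out repeated spikes: exactly the loop through the phase plane described above.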

There's an enormous amount of richness and fun to be had by analyzing neural dynamics like these in the phase plane. This is a very simple example; there can be multiple fixed points, limit cycles, and all different kinds of bifurcations in the dynamics as the input changes. Since there's no way we can do justice to this in this course, I'm not going to go into any of it; it would actually be a great point from which to branch out into an entire second course on nonlinear dynamics and phase plane analysis. Luckily for you, there's a great book available by Eugene Izhikevich if you want to explore this direction; the reference is posted on the website. There are also a lot of resources online by scholars like Wulfram Gerstner and Bard Ermentrout, our white knight of the previous slide, and you can generally peruse Scholarpedia. What I will do, however, is introduce you to one final model that's inspired by all this richness: the so-called simple model. Izhikevich and others have noted that if you zoom in here, on this part of the phase plane, you can pick off the important dynamics that generate a lot of the nice behavior of real neurons.

Â 16:11

What this does is the following: as V gets larger, it drives u to be large, and the coupling here then decreases the voltage as u gets large. That fills the basic role of inactivation. This reduced model is certainly not complete: as in the cases we've just left, we've thrown away the higher-order dynamics in voltage that allow it to restore itself from a spike, so we have to go back to putting in a maximum and a reset. These are parameters of the model. The u variable also needs a reset. So one is left with one, two, three, four parameters, and these four parameters determine the decay rate of u.

They determine the sensitivity of u to changes in V, and they also determine the resets of V and u. So here's a range of very different kinds of firing patterns, from very different kinds of neurons, generated by different choices of these four parameters.
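In Izhikevich's published description these four parameters are called a, b, c, and d. Here is a sketch of the simple model in its standard published form; the regular-spiking and intrinsically-bursting parameter sets are the published ones, while the input amplitude and time step are my own illustrative choices:

```python
import numpy as np

def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, dt=0.1, T=400.0):
    """Izhikevich's simple model:
        dv/dt = 0.04*v**2 + 5*v + 140 - u + I
        du/dt = a*(b*v - u)
    with the reset rule: if v >= 30 mV, then v <- c and u <- u + d.
    a sets u's decay rate, b its sensitivity to v, and c, d give
    the post-spike resets of v and u."""
    v, u = c, b * c
    spike_times = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

# published parameter sets: regular spiking vs. intrinsically bursting
rs = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0)
ib = izhikevich(a=0.02, b=0.2, c=-55.0, d=4.0)
```

Changing only c and d converts a regularly spiking neuron into an intrinsically bursting one, which is the whole point of this figure: the zoo of firing patterns comes from one equation and four numbers.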

These are all model fits to different kinds of real neurons, made using just those four parameters. You can see that you can get simple regular-spiking dynamics, like those of the integrate-and-fire neuron. You can also get neurons that burst intrinsically, with these very rapid sequences of spikes: you see bursts punctuated by long inactive periods. You see fast spiking and low-threshold spiking. You see spike-frequency adaptation, where the firing of the neuron starts off rapid and gets slower and slower. Here's a cortical neuron that fires a burst of spikes and then stops firing. And here you see something nice that you actually can't get from an integrate-and-fire-like neuron: subthreshold resonance, that is, the propensity of the neuron to oscillate in response to an input. This can only be achieved with a two-variable system, because the two variables can play off against each other; it can't be captured by a regular integrate-and-fire model.

That was a painfully brief and partial overview of the world of simplified models. As you can imagine, this is something of a mathematician's playground, so there's lots to be found out there if you'd like to do more reading. We're going to continue in the next part of today's lecture by going in the other direction: looking at the gory reality of neurons and trying to understand how one can model complicated dendritic arbors.
