0:31

At the very beginning of our journey, we learned about neurons, synapses, and brain regions. This was in week one, when we did our neuroscience review. Adrian then told you about a class of descriptive models known as neural encoding models, and you learned about the spike-triggered average as well as covariance analysis. You also learned about the Poisson model of spiking, which describes how neurons fire in a stochastic manner. In the following weeks we covered neural decoding methods, which allowed you to discriminate between stimuli based on neural activity, as well as decode stimuli from populations of neurons. And we learned about information theory and how it's related to neural coding.

In the previous week, we shifted gears and got into mechanistic models, and in particular we looked at single-neuron models. We covered concepts such as the RC circuit model of a membrane, as well as the famous Hodgkin-Huxley model of how the action potential is generated in neurons. And we ended with simplified neuron models, such as the integrate-and-fire model of a neuron. This leads us to the question of how we can model neurons that are connected to each other in networks.

Â 1:49

So how do neurons connect to form networks? You know the answer: they use synapses. In particular, we are going to focus on chemical synapses, because they're the most common type of synapse found in the brain. What do these chemical synapses do? Well, as you know, when a spike arrives from the first neuron, which we're going to call the pre-synaptic neuron (the other being the post-synaptic neuron), the spike causes a chemical to be released into the space known as the synaptic cleft. These chemicals in turn bind with receptors on the post-synaptic membrane, and that in turn causes either an increase or a decrease in the membrane potential of the post-synaptic neuron. How does that happen?

Let's first review what happens in the case of an excitatory synapse. In the case of an excitatory synapse, when you have an input spike you get neurotransmitter release; in this case it would be glutamate, which binds to receptors in the post-synaptic membrane, and that in turn causes ion channels to open. So you could have ion channels that open, which in turn cause positive ions such as sodium to come inside the cell. That in turn causes a depolarization, which basically means you have an increase in the local membrane potential of the neuron, and that excites the cell.

On the other hand, in the case of an inhibitory synapse, you have the input spike releasing neurotransmitter into the synaptic cleft; this could be a neurotransmitter such as GABA, acting on GABA-A or GABA-B receptors. This binds to receptors, again, in the post-synaptic membrane, and that in turn causes some ion channels to open. This could result in either chloride coming into the cell, or positive ions such as potassium leaving the cell. That in turn causes a hyperpolarization, or a decrease in the local membrane potential, given by these negative signs over here. And so that's the effect of an inhibitory synapse.

Now, what we want to do is computationally model the effects of a synapse on the membrane potential V of a neuron. So here's a cartoon of what we want to do. Here's a synapse, and we would like to model the effects of input spikes, as they're transmitted by the synapse, on the membrane potential V of a neuron. So how do we do that? I'll let you think about that for a couple of seconds.

Let's start by looking at the RC circuit model of the membrane, which you heard about in last week's lecture.

As you recall, we were modeling the membrane in terms of a resistance and a capacitance. So here's the membrane voltage; as you recall, there is a net negative charge on the inside compared to the outside of the membrane. And we were also allowing for some current I_e to be injected into this ball, which is approximating a neuron. Now here's the circuit diagram for the same situation, and you have both the membrane capacitance and the membrane resistance shown here, along with the equilibrium potential of the neuron, denoted by E_L.

Now how do we model such a circuit? Well, if you go back to your physics class in high school, you will recall that the charge held by a capacitor is given by Q equals the capacitance (in this case the membrane capacitance) times the voltage across the capacitor, so Q = C_m V. Now if we take the derivative of this equation with respect to time, dQ/dt is nothing but the current coming into the cell, and that is given by C_m dV/dt. This equation, C_m dV/dt = i, can be written in a particular form by using the fact that the total current is the input current I_e divided by the area A (the input current per unit area), together with the current due to the leakage of ions: c_m dV/dt = -(V - E_L)/r_m + I_e/A. The leak is the current due to the ion pumps which, if you recall, maintain the equilibrium potential E_L. The equilibrium potential E_L, if you recall, was something around minus 70 millivolts, also called the resting potential of the neuron.

You can now multiply both sides by the resistance r_m; this is also called the specific membrane resistance, and this little c_m is the specific membrane capacitance. Then the equation that you'll get looks something like this: tau_m dV/dt = -(V - E_L) + I_e R_m. So now what we have is the product r_m times c_m, and that is called the membrane time constant, tau_m. That in turn is also equal to the total membrane resistance times the total membrane capacitance, big R_m times big C_m; they're related to each other by the surface area of the cell. And so this equation here is describing how the membrane behaves as a function of time as you inject some input current into the cell.

Now what is this equation really telling us about the membrane? Well, here is time, and here is the voltage as a function of time. Suppose you start out at some particular value, let's say at equilibrium, so that is given by E_L.

Â 7:59

Then if you inject some current into this neuron, this equation tells you that the voltage is going to rise to some particular level and stabilize there, as long as you keep injecting the same current. That value is the steady-state value, V_ss. And V_ss, the steady-state voltage, is going to be equal to whatever value you get when you set dV/dt equal to 0. So let's set dV/dt equal to 0: what you get is -(V - E_L) + I_e R_m = 0, and if you solve for V, you get V_ss = E_L + I_e R_m as the voltage that the cell converges to. That's the steady-state voltage of the cell.

Now if you turn off the input current, setting it equal to 0, then what are you going to get? Well, you're going to get an exponential decay back to the equilibrium potential E_L of the cell. The membrane time constant tau_m plays an important role in determining how quickly the cell reacts to changes in the input. So, for example, if tau_m is very large, then the cell will take a long time to converge to the steady-state value, and similarly, when you turn off the input, it will take a long time to converge back to the equilibrium potential. On the other hand, if you have a small time constant for the membrane, then the cell will react quickly to inputs: it'll converge quickly to the steady-state value, and when you turn the input off, it'll quickly converge back to the equilibrium potential.
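To make this concrete, here is a minimal sketch (not from the lecture; all parameter values are illustrative assumptions) that integrates tau_m dV/dt = -(V - E_L) + I_e R_m with the Euler method, showing the rise to V_ss while current is injected and the decay back to E_L after it's turned off:

```python
# Euler integration of the passive membrane equation
#   tau_m dV/dt = -(V - E_L) + I_e * R_m
# All parameter values below are assumed for illustration.

E_L = -70.0      # resting (equilibrium) potential, mV
tau_m = 10.0     # membrane time constant, ms
R_m = 10.0       # membrane resistance, megaohms
I_e = 1.0        # injected current, nA  (so I_e * R_m = 10 mV)
dt = 0.1         # integration time step, ms

V = E_L
trace = []
for step in range(2000):                 # 200 ms total
    I = I_e if step < 1000 else 0.0      # current on for the first 100 ms
    dV = (-(V - E_L) + I * R_m) / tau_m  # the membrane equation
    V += dV * dt
    trace.append(V)

V_ss = E_L + I_e * R_m                   # predicted steady state: -60 mV
print(round(trace[999], 2))              # after 10 tau_m: close to V_ss
print(round(trace[-1], 2))               # after input off: back near E_L
```

After 100 ms (ten time constants) the voltage has essentially reached V_ss = -60 mV, and 100 ms after the current is switched off it has decayed back to E_L; making tau_m larger slows both transitions, exactly as described above.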

It might be fun to make an analogy here. When you wake up in the morning, you might find yourself a bit sluggish and slow to react to new inputs, and that's when we could say you have a large time constant. But after you've had your first few cups of morning coffee, you might find yourself alert and fast, and one could then say that you have changed your time constant to a tiny value. Okay, so perhaps that analogy was a bit corny.

Well, in any case, how do we model the effects of a synapse on the membrane potential V, now that we know how to model the membrane potential using the RC circuit model?

Â 10:39

So what do synapses do? We know that synapses release neurotransmitters, which in turn cause ion channels to open or close, and that in turn changes the membrane potential of the post-synaptic cell. So what we really need to do is to be able to model the opening and closing of ion channels on the membrane. Given that we have a model of the membrane potential, how do we model the opening and closing of ion channels? Well, here's a hint: remember the Hodgkin-Huxley model? In the Hodgkin-Huxley model you had to model the opening and closing of potassium and sodium channels, and you did that by adding additional conductances. So can you do something similar for synapses, which in effect also open and close certain channels? The answer, as you might have guessed, is yes: we can model the effects of a synapse on the membrane potential by using a synaptic conductance, and that is given by g_s. The other component of the synapse model, besides the conductance g_s, is the reversal potential, or equilibrium potential, of the synapse, E_s.

And so here is the equation again. We have tau_m dV/dt equals, first, the leak term, as in the previous slides, but then here is the new term, which is the input coming in from the synapse: the difference between the current voltage and the equilibrium potential of the synapse, multiplied by the conductance, which is going to change as a function of the inputs being received by the synapse. And finally, of course, we have the input current term, which is optional; if we have input current, we can model that by adding this additional term. So the important point here is that for the synapse model we have these two components, the g_s as well as the E_s. For an excitatory synapse, you can imagine E_s is going to be a value higher than the equilibrium potential of the cell, which is going to excite the cell. On the other hand, for an inhibitory synapse, E_s is going to be a value lower than the equilibrium potential, and that in turn is going to decrease the membrane potential.
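To make the synapse term concrete, here is a minimal sketch of one Euler step of a membrane equation with a conductance-based synaptic term, of the form tau_m dV/dt = -(V - E_L) - r_m g_s (V - E_s) + I_e R_m. The function name `membrane_step` and every parameter value are assumptions for illustration, not the lecture's numbers:

```python
# One Euler step of the membrane equation with a synaptic term:
#   tau_m dV/dt = -(V - E_L) - r_m * g_s * (V - E_s) + I_e * R_m
# g_s is the synaptic conductance, E_s the synaptic reversal potential.
# All names and values here are illustrative assumptions.

def membrane_step(V, g_s, E_s, dt=0.1, E_L=-70.0, tau_m=10.0,
                  r_m=1.0, I_e=0.0, R_m=10.0):
    """Advance the membrane potential V (mV) by one Euler step of dt ms."""
    dV = (-(V - E_L) - r_m * g_s * (V - E_s) + I_e * R_m) / tau_m
    return V + dV * dt

# Excitatory synapse: E_s above rest, so the term pulls V up (depolarizes).
V_exc = membrane_step(V=-70.0, g_s=0.5, E_s=0.0)
# Inhibitory synapse: E_s below rest, so the term pulls V down (hyperpolarizes).
V_inh = membrane_step(V=-70.0, g_s=0.5, E_s=-80.0)
print(V_exc > -70.0, V_inh < -70.0)   # True True
```

Notice that the sign of the synaptic current depends entirely on whether E_s sits above or below the cell's resting potential, which is exactly the excitatory/inhibitory distinction just described.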

So how does the synaptic conductance g_s change as a function of the inputs received by the synapse? You could have these spikes coming in, and that in turn is going to change the synaptic conductance. So how do we model the effects of input spikes on the synaptic conductance? Here's the equation for the synaptic conductance: it's a product of three different factors which together capture the function of the synapse. The first factor, g_max, is the maximum conductance associated with that particular synapse, and that, for example, is associated with the number of channels that one might find on the post-synaptic neuron: the more channels, the larger the value of g_max. The second factor, P_release, is the probability of release of neurotransmitter given that you have an input spike. So once you have an input spike, what is the probability that neurotransmitters are going to be released into the synaptic cleft? And the last factor, P_s, is the probability of post-synaptic channels opening: what is the probability that these channels on the post-synaptic side are going to be open, given that neurotransmitters are being released? That in turn also corresponds to the fraction of channels that are open at any point in time.

Â 16:27

Now maybe you thought that the differential equation for P_s looked a bit intimidating, or maybe you thought it was a bit confusing, but here's what P_s really looks like as a function of time, given a spike. On the y axis we are plotting P_s, which has been normalized to have a maximum value of 1, and on the x axis we have time measured in milliseconds. What we are showing is biological data from three different kinds of synapses: the AMPA synapse, the GABA-A synapse, and the NMDA synapse. What you'll notice is that for the AMPA synapse, the way that P_s behaves can be modeled quite well by using an exponential function, which we're calling K(t). On the other hand, for the GABA-A and NMDA synapses, the way that P_s behaves is fit better by something called the alpha function, which has a peak that occurs after the spike. So there's some amount of delay before the peak occurs, and that's captured by the alpha function, as shown down here. This is the equation for the alpha function, and it has a parameter tau_peak, which allows you to fit the particular data by shifting the peak relative to the time at which the input spike occurred. So the spike occurred at time zero, and the peak might occur slightly later, as determined by tau_peak.
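Here is a small sketch of the two kernel shapes just described: an exponential K(t) for AMPA-like synapses, and the alpha function, written here in the common normalized form K(t) = (t / tau_peak) exp(1 - t / tau_peak), whose peak value of 1 occurs at t = tau_peak. The time-constant values are assumptions for illustration:

```python
import math

def K_exp(t, tau=5.0):
    """Exponential kernel: maximal at the spike time t = 0, then decays."""
    return math.exp(-t / tau) if t >= 0 else 0.0

def K_alpha(t, tau_peak=2.0):
    """Alpha-function kernel: zero at t = 0, peaks at t = tau_peak."""
    return (t / tau_peak) * math.exp(1.0 - t / tau_peak) if t >= 0 else 0.0

print(K_exp(0.0))               # 1.0: the exponential peaks at the spike itself
print(round(K_alpha(2.0), 6))   # 1.0: the alpha function peaks tau_peak later
print(K_alpha(0.0))             # 0.0: no instantaneous rise after the spike
```

The key qualitative difference is visible immediately: the exponential jumps to its maximum at the spike time, while the alpha function rises gradually, with a delayed peak, matching the GABA-A and NMDA data better.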

Â 18:25

We can characterize the input spike train in terms of what is known as the neural response function, rho of t. That's given by the summation over all the times at which a spike occurred: rho(t) = sum over i of delta(t - t_i). This is basically the delta function: every time you have a spike, you put in a delta function, which is essentially an infinitely narrow pulse at the location of that spike. Now why would you really want to do that? Well, it turns out that when we do an integral for the filtering, it's quite convenient to have the spike train written as one of these summations of delta functions. Basically this is a technical detail, so don't get too worried about it right now.

So suppose that we have a spike train, and we would like to model the effect of all the spikes on this particular neuron. How do we do that? Well, let's first select what kind of synapse this particular synapse is. Suppose it's something like an AMPA synapse, as we discussed in the previous slide. The AMPA synapse behaves as if it is an exponential function, so we have something that looks like this: this is K(t), as a function of time. And so this can be used as a filter to model the effect of an input spike on the post-synaptic neuron.

So now we have a filter, and here is the filtering equation that models how the synaptic conductance changes on the post-synaptic neuron side. Basically what we are saying is that g_s(t), which is the synaptic conductance at time t, is essentially nothing but the maximum conductance times the summation of all of these exponential functions added together; and if you like integrals, the summation here is the linear filtering equation. And here is your favorite function, rho(t), the neural response function, where you have these delta functions summed up at the locations where you have spikes.

Now, if you're still confused about this, there's actually a very easy way to interpret this summation, or this integral. Here is the spike train, and here's what the synaptic conductance g_s(t) is going to look like. Every time you have a spike, you put in one of your K functions, your synaptic filter. And then when you have another input spike, such as this one, you simply add a copy of the synaptic filter, and you do so for each input spike. And so you're going to get a synaptic conductance that looks something like this. So that's what g_s(t) looks like for this particular input spike train. That wasn't really too hard, was it?
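That "one kernel copy per spike" picture can be sketched in a few lines. This is a rough illustration, with an exponential kernel and hypothetical spike times chosen for the example:

```python
import math

def g_syn(t, spike_times, g_max=1.0, tau=5.0):
    """Synaptic conductance at time t: g_max times the sum of one
    exponential kernel copy exp(-(t - t_i)/tau) per past spike t_i."""
    return g_max * sum(math.exp(-(t - t_i) / tau)
                       for t_i in spike_times if t_i <= t)

spikes = [10.0, 12.0, 30.0]            # hypothetical spike times, ms
print(g_syn(9.0, spikes))              # no spikes have arrived yet
print(round(g_syn(12.0, spikes), 3))   # kernels from the spikes at 10 and 12 add up
```

Before the first spike the conductance is zero; at 12 ms the copies from the spikes at 10 ms and 12 ms overlap and add, which is exactly the summation-of-filters picture described above.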

The moral of the story here, of course, is: don't be too intimidated by these types of complex-looking equations. So are you ready now to put everything that you have learned so far together to create a network model?

Let's do it. Here is a simple example: let's take two neurons, neuron 1 and neuron 2, and connect them together with excitatory synapses. Neuron 1 connects to neuron 2 with this excitatory synapse, and neuron 2 connects to neuron 1 with this excitatory synapse. Now each of these neurons is governed by our favorite equation; here is the equation for how the membrane potential changes as a function of time, and here's the time constant for the membrane. We're going to model these two neurons as integrate-and-fire neurons. This is something you heard about in Adrian's lecture in a previous week. The integrate-and-fire neuron essentially models the membrane potential, and when a particular threshold is reached (so here's the threshold), the neuron fires a spike. The neuron spikes here, and is then reset back to a particular value; in this case, minus 80 millivolts. And the synapses are going to be modeled as alpha synapses, so we're going to use an alpha function which, as we saw before, peaks just slightly after zero and then decays back down to zero. And so we're going to first look at what happens with excitatory synapses, so neuron 1 excites neuron 2 and neuron 2 excites neuron 1.

Here is what the behavior of the network looks like for these two neurons when they're exciting each other. You can see that neuron 1 fires first in this case, and then neuron 2 fires after it, and so on. So they basically alternate firing, from one to the other.

Now what will happen if we change the synapses from excitatory to inhibitory? Here's something surprising that happens. We can make the synapses inhibitory by changing the equilibrium potential, also called the reversal potential of the synapse, to minus 80 millivolts; that's less than the resting potential of the neuron, which is minus 70 millivolts. You can see that when you change the synapses to be inhibitory, we get synchrony, which means the two neurons start firing at the same times; they synchronize with each other. That's a really interesting property that people have been studying in certain brain regions as well. So here's an example where a simple model of just two neurons, either exciting each other or inhibiting each other, gives rise to some interesting behaviors that might be of relevance to people trying to model particular circuits in the brain.
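The two-neuron network described above can be sketched as follows. This is a toy version, not the lecture's actual simulation: all parameter values are assumed, and whether this minimal sketch reproduces the alternation versus synchrony effect depends on those parameter choices. It does show the basic machinery: two leaky integrate-and-fire neurons, each receiving an alpha-function conductance driven by the other's spikes, with the synapse made excitatory or inhibitory purely by the choice of E_s:

```python
import math

E_L, V_thresh, V_reset = -70.0, -54.0, -80.0   # rest, threshold, reset (mV)
tau_m, dt, T = 10.0, 0.1, 300.0                # time constant, step, duration (ms)
g_max, tau_peak, drive = 0.05, 2.0, 20.0       # coupling, kernel peak, I_e*R_m (assumed)

def alpha(s):
    """Alpha-function synaptic kernel, peaking tau_peak ms after a spike."""
    return (s / tau_peak) * math.exp(1.0 - s / tau_peak) if s >= 0 else 0.0

def simulate(E_s):
    """Run the two coupled integrate-and-fire neurons with reversal potential E_s."""
    V = [-70.0, -65.0]            # slightly different initial conditions
    spikes = [[], []]             # recorded spike times per neuron
    t = 0.0
    while t < T:
        for i in (0, 1):
            j = 1 - i
            # synaptic conductance driven by the partner's recent spikes
            g = g_max * sum(alpha(t - ts) for ts in spikes[j][-10:])
            dV = (-(V[i] - E_L) - g * (V[i] - E_s) + drive) / tau_m
            V[i] += dV * dt
            if V[i] >= V_thresh:  # threshold crossed: record spike, reset
                spikes[i].append(t)
                V[i] = V_reset
        t += dt
    return spikes

exc = simulate(E_s=0.0)      # excitatory coupling: E_s above rest
inh = simulate(E_s=-80.0)    # inhibitory coupling: E_s below rest
print(len(exc[0]), len(exc[1]), len(inh[0]), len(inh[1]))
```

The only difference between the two runs is E_s, which is exactly the switch described in the lecture: comparing the recorded spike times in `exc` and `inh` is how one would look for the alternating versus synchronous firing patterns.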

Okay, great. That wraps up this particular lecture segment. In the next lecture, we'll look at how we can go from spiking networks to networks based on firing rates. And this, as we'll see, makes it much easier to simulate large networks of neurons. So, until then, goodbye and ciao.
