[MUSIC] Welcome to week two of computational neuroscience. I'm Adrienne Fairhall. Last week's introduction gave you a high-level overview of some of the concepts we'll be covering in the course. Today we're going to start on the subject of the first half of the course, which is neural coding. As you heard last week, brains do many, many different kinds of things, but one of the best studied is the representation and transformation of sensory information. So the first part of this course will be an introduction to common concepts that are used in thinking about and quantifying the way that information is represented in the brain, and how we can extract that information by monitoring brain activity. Our ability to respond to the world and think about it purely inside our heads requires a transformation of information from one form to another. Sensory signals, or maybe written language, are converted into physical processes inside our brains that contain some version of that information. A natural way to talk about this is in terms of a code, and our task is to discover the form in which information is represented in the activity inside our heads. Today we'll be talking about how to go about cracking this code.

So first we'll discuss some techniques for recording from the brain. We'll then talk about tools for discovering how the brain represents information, and models that express our understanding of this representation. Next week we'll go on to thinking about some methods for inferring what the brain is doing based on recordings of its activity. The following week we'll talk about information theory, which is a method to quantify neural representations. And finally, in the fifth week, we'll talk about the biophysical basis of how the brain processes inputs and performs complex computations.

So the methods that we use to collect brain activity determine the ways that we can analyze it, and also the kinds of information that that activity can contain. So let's start by reviewing a few techniques that are currently in use to peek inside brains. We'll go from large scale to small scale, starting with some methods that allow us to record activity from people while those people are performing tasks. You've probably heard of functional magnetic resonance imaging, as this is also used as a diagnostic tool in hospitals. This technique allows us to record from a person's brain while they're performing some task, as long as that task does not require moving around. The person is placed inside a scanner with their head in the center of a large magnet. The scanner measures spatial perturbations in the magnetic field, which are caused by changes in blood oxygenation. As different parts of the brain become active, blood flows to those areas to support the underlying neural activity. This provides a measure of activity over regions on the scale of about a cubic millimeter. Obviously this must represent the average activity of millions of neurons. Here's an example of some images that one can collect from fMRI, showing in color small regions of the brain that respond differentially to some experimental condition, in this case the viewing of some images. While fMRI is a wonderful method for discovering the approximate regions of neural activity, as we've said, the responses that are recorded are averaged over many neurons, and they're also slow. The blood flow response to changes in neural activity happens over timescales of seconds.
Another method that still relies on averaging over the responses of many neurons, but has a much faster response time, is EEG, or electroencephalography. This method is faster because it captures the changes in the electrical fields of the underlying neural circuits directly. Here, one of our former grad students, Kai Miller, is wearing a cap covered with electrodes that are making contact with his scalp. The downside of EEG is that it tends to be a very noisy signal, since there are many contributions to what is recorded. Still, methods like fMRI and EEG are very exciting because, although they're limited, they're non-invasive, and so they can be used on healthy, awake human subjects.

Ideally, of course, we'd like to have access to the activity of single neurons. In cases when we have direct access to neural tissue, one can use devices such as the one I'm showing here: a multi-electrode array. This one was developed by physicist Alan Litke. Most of the device that you see consists of electronics and amplifiers for amplifying the tiny voltage signals extracted from individual neurons. At the center, down here, is the array itself, blown up in the picture here. Each electrode is about 10 microns across, roughly the size of a single neuron, and this array has 512 such electrodes spaced 60 microns apart, so one can record from many neurons simultaneously, as is being done here with a slice of the hippocampus.

The multi-electrode array shown before is great when it's possible to lay the tissue directly on the array surface, for example in experiments using brain slices. But generally one would like to penetrate into the brain to see what neurons are doing while the organism is carrying out normal behavior. This rather scary-looking electrode is actually only about two millimeters long. The tip of each prong is an active electrode. In newer versions of such electrodes, being developed by groups at MIT and elsewhere, the electrodes can be moved individually into the tissue, allowing one to find active cells. Other electrodes also have multiple contact points along them, so that one can record at different depths simultaneously.

Another beautiful technique for recording from many individual cells is calcium imaging. Here, cells contain a calcium indicator that changes its fluorescent properties when calcium binds to it. Thus, the fluorescent light intensity is an indicator of the amount of calcium inside the cell. Since calcium enters the cell during action potentials, this signal acts as a record of the firing activity of the neuron. This technique is being used to record, as here, from many neurons at once, possibly even thousands. It's also possible to use fiber optics to open windows like this onto neural activity deep within the brain.
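As a quick aside for those who like to see things in code: here's a minimal Python sketch, with entirely made-up numbers, of how a raw fluorescence trace from calcium imaging is commonly converted into a normalized ΔF/F signal. The percentile-based baseline is just one simple convention, and the fake transients are only there to illustrate why peaks in ΔF/F track the underlying spikes.

```python
import numpy as np

def delta_f_over_f(f, baseline_percentile=10):
    """(F - F0) / F0, with the baseline F0 estimated as a low
    percentile of the trace -- a simple, common convention."""
    f0 = np.percentile(f, baseline_percentile)
    return (f - f0) / f0

# Fake trace: noisy baseline fluorescence plus decaying calcium
# transients following two action potentials at t = 3 s and t = 6.5 s.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.03)                  # ~33 Hz imaging frames
f = 100 + rng.normal(0, 1, t.size)          # baseline around F = 100
for spike_time in (3.0, 6.5):
    f += 40 * np.exp(-(t - spike_time) / 0.5) * (t >= spike_time)

dff = delta_f_over_f(f)   # peaks in dF/F mark the underlying spikes
```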
The electrical techniques mentioned so far look at the changes in the electric field outside the cell caused by signals that the cells generate internally. It's also very interesting to look inside the cell to learn how these signals are being generated. To do this, we can use patch electrodes, which the experimenter clamps onto the cell membrane and can use to make a direct electrical contact with the inside of the cell.

So now we've seen some examples of the kind of data we have to work with. Let's move on to talk about the neural code itself. Let's start our discussion with some example experimental data, which we'll take from a very important part of the brain, the retina. Your eyes are an outlying but very vital part of your brain. The retina is a sheet of cells at the back of your eyeball that takes light that's focused through the lens and converts those light signals into electrical signals. So by looking at these signals we can get our first look at the language of the brain.

Here's a rough cartoon of the experiment. A retina is dissected from an eye and placed on top of one of those multi-electrode arrays that we already saw, and it's kept in some fluid that helps to keep it alive and active. A movie is projected down onto the retina, and the neurons of the retina respond to the movie. So let's zoom in a little on the retina itself, which is an excuse, really, to show you a beautiful image drawn by Ramon y Cajal, a master Spanish anatomist who was active at the turn of the last century. This is an example of very early connectomics. While in real life the cells in the retina are very densely packed, here they're drawn schematically to give a sense of the circuit wiring. So this image shows you the cells that capture light, here, the photoreceptors, both rods, here, and cones, here, and the successive layers of cells that accumulate and process the information from the photoreceptors, until they finally reach the output cells, here, the retinal ganglion cells, whose axons join the optic tract in heading out of the eye and into the rest of the brain.

Here's a typical way that we look at responses of a single neuron. This is called a raster plot. Every tiny red dot here is an action potential, or a spike. When we play the movie once, time goes this way, the neuron fires at some particular times, marked by those red dots. When we repeat the movie, we can plot the responses in that second repetition, and in some cases they're almost the same. We can repeat it again and again. The responses for different repetitions are staggered upward by a little, so that each horizontal strip represents one repetition of the movie.

So now we're looking at the neural activity of a group of about 20 retinal ganglion cells while the movie was played repeatedly, in fact many times, as you can see from the number of repetitions in one of these individual raster plots. What you can see here is that each cell responds every time at certain specific points in the movie. Sometimes the responses are strong, maybe here, and sometimes they're weak, here. There are some repeats of the movie where the cell doesn't fire at all. What I hope you can see from this is something extremely beautiful and exciting: each neuron is encoding some feature or features of the movie. And each neuron is responsible for a different set of features, sometimes overlapping. For example, Cell R and Cell P have a lot of features in common, but sometimes they're very different. So our questions are: how do we use these responses to determine what in the stimulus is making each cell fire? Looking at this picture, one also wonders how we should think about this concerted activity. Does every neuron signal its own message, or is the population responding as a whole in some complex code? We won't fully answer that question, but we will be discussing models that describe both single-neuron and population-level responses.
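As another aside for those following along in code: here's a minimal Python sketch, with entirely made-up spike times, of how a raster plot like the ones we've just seen can be drawn. The firing moments, the jitter, and the number of repeats are all invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_repeats = 20                       # hypothetical: 20 repeats of the movie

# Fake data: on every repeat the cell tends to fire a small burst near the
# same moments in the movie (2 s, 5 s, 8 s), with trial-to-trial jitter.
trials = [np.sort(np.concatenate([t0 + rng.normal(0, 0.05, size=3)
                                  for t0 in (2.0, 5.0, 8.0)]))
          for _ in range(n_repeats)]

plt.eventplot(trials, colors="red", linelengths=0.8)  # one row per repeat
plt.xlabel("time in movie (s)")
plt.ylabel("repeat of the movie")
plt.xlim(0, 10)
plt.show()
```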
So the questions we'll be addressing over the next few weeks involve how we read out this code. There are two ways of looking at it that are, of course, quite related. We can consider the encoding problem: how does a stimulus cause a pattern of responses? This leads us in the direction of building quasi-mechanistic models of our neural system that allow us to predict its response. We can also remain agnostic about the system and simply ask: what do the responses that we observe tell us about what the stimulus was? This is the approach one might see at work in, say, a neural prosthetic used to drive a robot arm, whose job would be to read some measure of neural data and activate the arm to move in that person's intended direction. So our goal will be to build models of this type.

Because neural systems are noisy, or may contain information about aspects that we don't control, we think about the model as inherently probabilistic. We would like to know the probability of a response, given a stimulus. This is a conditional probability distribution. Conversely, in decoding, we'd like to know the probability of a stimulus having been shown, given the response that we recorded. What we'll have to define, and ultimately discover, is: what is the appropriate measure of response? What is the right way of thinking about the stimulus? And what's the relationship between them that's embodied by our coding model?

So here's an example of neural representation of information. We'd like to find some stimulus parameter along which the neural response varies. So we have a stimulus parameter, and in everything that follows I'm going to call my stimulus parameter S. The meaning of S is going to change from slide to slide. And then we have the neural response. The simplest way we can think about this, and the approach that we will be taking today, is that the neural response is some kind of average firing rate, or probability of generating a spike.

So here are a couple of classic examples of what we call tuning curves. Here's data from a neuron in primary visual cortex, V1. Such neurons respond to oriented bars of light, like this one, passing through a certain location in the visual field. As the orientation of the bar is changed, say from almost horizontal to almost vertical, this particular neuron at first does not respond at all, but then starts to fire at a higher and higher rate, and then fires less again as the orientation moves away from that preferred one. If you count up the spikes in a time bin of, say, 100 milliseconds, and plot that number of spikes as a function of the orientation, you'll get a curve like this, which looks like a Gaussian. Here's another example, this time from the motor cortex. Now, as the animal in these experiments, a monkey, makes an arm movement at a certain angle relative to its body, the firing rate of this neuron is large for some angles and small for others. And this time the plot of firing rate versus movement direction is more like a cosine curve.
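To make those two functional forms concrete, here's a minimal Python sketch. The Gaussian and cosine shapes are the ones just described, but every parameter (preferred orientation, peak rate, tuning width, and the 100 ms counting window) is made up for illustration, and the Poisson spike counts are just one simple stand-in for the noisy conditional distribution P(response | stimulus) we talked about above.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical Gaussian tuning, like the V1 orientation example: the rate
# peaks at a preferred orientation and falls off on either side of it.
def gaussian_tuning(s, r_max=50.0, s_pref=0.0, sigma=20.0):
    return r_max * np.exp(-0.5 * ((s - s_pref) / sigma) ** 2)

# Hypothetical cosine tuning, like the motor cortex example: the rate
# varies sinusoidally with movement direction around a mean rate.
def cosine_tuning(s, r_mean=30.0, r_amp=25.0, s_pref=90.0):
    return r_mean + r_amp * np.cos(np.deg2rad(s - s_pref))

orientations = np.linspace(-90, 90, 181)   # degrees
directions = np.linspace(0, 360, 361)      # degrees

# Responses are noisy: each presentation yields a random spike count,
# here a Poisson draw in a 100 ms counting window around the mean rate.
rng = np.random.default_rng(2)
counts = rng.poisson(gaussian_tuning(orientations) * 0.1)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(orientations, gaussian_tuning(orientations))
ax1.plot(orientations, counts / 0.1, ".", markersize=3)  # noisy rate estimates
ax1.set(xlabel="orientation (deg)", ylabel="firing rate (Hz)")
ax2.plot(directions, cosine_tuning(directions))
ax2.set(xlabel="movement direction (deg)")
plt.tight_layout()
plt.show()
```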
It turns out that neurons in primary visual cortex are sensitive to many input features, and often the sensitivity to those features is distributed in an orderly way across the cortical sheet. So what you're seeing here is an image of a piece of cortex, a piece of visual cortex. The color indicates the value of a particular feature to which the neuron or neurons in that location respond most strongly. This picture of response as a function of location in cortex is called a functional map. This is an example of some functional maps in cat and bush baby, showing the classic so-called pinwheel structures. In this case the neurons' preferred angle, encoded by color, changes systematically in space. The lower two panels show a map of preferred spatial frequency, again in cat and bush baby, that is, the width of lines in a grating to which these neurons have a strong response. Again, this shows an orderly pattern across cortex.

fMRI studies show that there are localized regions in the temporal lobe that are responsive to semantic categories, for example faces or houses. So the notion of a stimulus that drives a neuron maximally can get quite complex, and the idea of being able to draw a tuning curve as a function of some meaningful stimulus parameter becomes quite difficult, as we can see here in these famous experiments. This group recorded from single neurons in the parahippocampal area in humans. This is normally not something one can do, but here the experimenters worked with patients who were undergoing surgery for epilepsy and had agreed to participate in experiments. What you're seeing here is the response of a particular neuron to different images, where the image number, on the x-axis, is not arranged in any particular order. Clearly, there are a few images that drive this neuron particularly well. If we now look at the images that correspond to those large responses, it was found that they have a common property: they all turn out to be images of Brad Pitt and Jennifer Aniston. Interestingly, this neuron did not fire to images of either of them separately; see, for example, this response to Jennifer Aniston by herself. Well, it's true that because these experiments were done a long time ago, it's likely that not too many people have this neuron type anymore. Maybe some of you who, unlike me, refuse to read magazine covers in the checkout line at the supermarket, or who are a little bit younger, may never have had a neuron of this type.

So here's another intriguing example: again, some particularly large responses to some subset of those images. Now when we look at which images those were, we see that they all have a common property, in this case Pamela Anderson. But interestingly, they're not just photographs of Pamela Anderson, here and here, but also drawings of Pamela Anderson. And although you probably can't see it on your screen, there's also a case of a picture of Pamela Anderson's name written in text. In later experiments, this group also showed that such neurons respond to audio clips of that person's name. So we can think about neurons like this as embodying a concept.

So what's emerging from all this is a picture of brain regions having increasing complexity of stimulus representations, starting from more geometric and becoming more semantic. Here, down in the retina and in the thalamus, the lateral geniculate nucleus, we see very simple forms of receptive fields. As we go to V1, we see oriented edges. As we move up to V4, we see conjunctions of edges that form contours. And higher up in the brain, we see semantic categories such as houses, and maybe specific houses, such as The White House. It's tempting to think about this as a progression of features being agglomerated into more and more specific and complex features. This also leads to increasing invariance: higher-order areas are less sensitive to details, such as color or location in the visual field. So this idea of hierarchical features being assembled in a feed-forward way is the basis for many powerful and important models, including ones that are enjoying a lot of success in machine vision.
What makes the real computation very interesting is that these regions are massively interconnected. For example, while the thalamus is generally thought of as a relay station that takes in information as represented in sensory receptors, like the retina, and distributes it to various other regions of the brain, in fact it receives a massive amount of feedback from all of these areas. And this suggests that these representations also feed back to control what information is coming through in the first place. So this is how semantics, the meaning or the value or the context of an image, can end up influencing its initial representation in V1 or V4. So here there's a role for learning and for expectation: what you think you're looking at can shape what you actually see. Here's maybe a nice example of such top-down effects. You might find it hard, at first, to figure out what this is a picture of. But once you've seen it, whenever you see this picture again, which will happen regularly if you go into the field of visual neuroscience, you'll always see it instantly. And while the role of these top-down effects is very interesting, and quite an open area that we'd love to talk about more in this course, I'm primarily going to stick with these lower-level, geometric representations. Later in the course, Rajesh will tell you more about how networks can learn to maintain memories, which presumably form the basis of these semantic expectations. In the next section, we're going to talk about how we go about constructing these response models. Let's take a break and be back for the next section.