After introducing the concept of visual queries and showing you that some designs may be better than others at supporting certain visual queries, you may ask yourself: what is it that makes some visual representations easier to use and better at accomplishing certain tasks? I'm not going to fully answer this question here, because that's going to be the main focus of next week. But here I just want to touch on a very important concept: the idea that the way the human visual system is organized makes some visual features much easier to detect, to observe, than others. So, what are these features? Before I say that, I want to show you an example. Say that you have a very long list of numbers, like the one in front of you, and I ask you to count, or even just detect, all the times the number three appears in this list. Think about how you would try to solve this problem. I guess what you would do is read from left to right, top to bottom, or something similar: a way of scanning the whole set of numbers and marking every time there is a three. That's a very inefficient way of doing it, and the reason it's inefficient is that the information is represented in this way. But if I represent the same thing in a way where I highlight all the threes with color, all of a sudden this task can be executed much faster. Why is that? It may seem a trivial question, but it's not. Why is it that counting the number of threes with the first visual representation is much harder than doing the same thing with the second one? Well, the trick is exactly in what I was trying to say before: our visual processing system can tune in to certain features and not to others.
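The two representations from this example can be reproduced in a few lines of code. The sketch below is not part of the lecture; it is a minimal illustration that assumes a terminal supporting ANSI color codes. It builds a random list of digits, counts the threes the "serial" way, and then renders the same digits with every three highlighted in red, so color carries the signal instead of scanning.

```python
import random

random.seed(0)  # fixed seed so the digit list is reproducible
digits = [str(random.randint(0, 9)) for _ in range(200)]

# Serial version: count the 3s by visiting every digit, the way a
# reader scans the plain list left to right, top to bottom.
count = sum(1 for d in digits if d == "3")

# Highlighted version: wrap each 3 in ANSI escape codes so it renders
# in red, letting color pop out the targets at a glance.
RED, RESET = "\033[91m", "\033[0m"
highlighted = " ".join(RED + d + RESET if d == "3" else d for d in digits)

print(f"number of 3s: {count}")
print(highlighted)
```

Running this prints the count followed by the digit grid with the threes in red; the visual difference between the plain and highlighted output is exactly the effect described above.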
In particular, in the early stages of information processing that go from our eyes and optic nerve to our brain, we have detectors that are tuned to pick up certain types of information, and they cannot detect every type of information equally effectively. There are things that this part of our brain, these neurons, can do very well and others that they can't do very well, and as you can imagine, knowing which ones work best and what kinds of receptors we have is very useful for deciding what kinds of designs work best. So, in particular, after information goes through our eyes and through our retina into our optic nerve, it reaches two main regions of our brain, labeled V1 and, following it, V2. As I said already, in V1 there are receptors that are specifically tuned to detect certain types of visual information, and there are many types of receptors: some are tuned to detect certain types of form, some certain colors, some detect orientation, some detect motion, some detect depth, and so on. These neurons have been studied for a very long time, and, as I said, they are tuned to detect very specific kinds of information. After it is detected by V1, this information is channeled through V2, another area of our brain that processes information very early in the processing pipeline and is able to identify even more complex types of patterns. So, what happens is that when we are looking at the world, or at a picture in front of us, these neurons fire more and more strongly when the pattern they are tuned to detect is present. When these patterns are present in your field of view, the neurons fire harder and send a stronger signal. Another important characteristic of the way this processing is organized is that it's parallel. What do I mean by parallel? It means that these neurons are able to analyze information in a parallel fashion.
So, think about it: an image entering our eye is not processed linearly, by looking at every single location of the view in front of us one at a time. All the locations are processed at the same time by the neurons in V1 and V2. Why am I saying that? Because the fact that this processing is parallel makes it extremely efficient. The visual features detected by these early stages of vision have historically been called preattentive features. Now, for a number of reasons, vision researchers later realized, after doing a lot of research in this space, that the word preattentive is not completely accurate. Why is it not accurate? Well, the idea was that we could detect these features independently of the mechanisms of attention, and later on we discovered that attention actually does play a major role. But that's not something I want to dig into too deeply here. One thing to keep in mind, though, is that these are features we can tune in on: if they are present in front of us, we can tune our attention to detecting these specific features. That's a very important characteristic. I don't want to dig much deeper into preattentive features here, because we are going to talk about them again in the next module, where we will dig much deeper into what preattentive features are, how preattentive processing works, and how the effectiveness of these features changes according to contextual factors. The only piece of information you have to remember from this module is that we do have these receptors, they are very efficient, and there are some visual characteristics we can detect very efficiently, whereas there are other visual characteristics that our brain is not capable of processing as efficiently as preattentive features.
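The efficiency argument above can be captured in a toy model. The numbers below are assumptions for illustration only, not measurements from the lecture: in serial search, response time grows with the number of items to inspect, while in parallel ("preattentive") search the whole field is evaluated at once, so response time stays roughly flat regardless of how many items are present.

```python
def serial_search_time(n_items, per_item_ms=50):
    # Serial model: each location is inspected one after another,
    # so time grows linearly with the number of items.
    # (per_item_ms is an illustrative constant, not empirical data.)
    return n_items * per_item_ms

def parallel_search_time(n_items, base_ms=200):
    # Parallel model: all locations are evaluated simultaneously,
    # so set size barely matters and time is roughly constant.
    return base_ms

for n in (10, 50, 200):
    print(f"{n:>3} items: serial {serial_search_time(n)} ms, "
          f"parallel {parallel_search_time(n)} ms")
```

The serial time climbs from 500 ms to 10,000 ms as the display grows, while the parallel time stays at 200 ms; this flat "pop-out" curve versus a rising serial curve is the classic signature used to argue that a feature is processed preattentively.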