In this lecture, we're going to continue talking about the building blocks of user interaction, focusing on the output part of the equation: the information that the system presents to the user as the user uses the system. In the broadest terms, you can think of the output as the full UI, as well as all the content that the application has. But for our purposes, we're really going to be focusing on the informational side: what exactly is the information that the system is presenting to the user, and how. You can think of the output as having both structure and content. Structure of the output has to do with the format in which the information is presented, while the content has to do with what actual information is being presented to the user. Clearly, the content of the output is going to vary from one application to another based on what the application is intended to do. But the structure, the format in which the information is presented, can be common across multiple applications. When we think about the output structure, there are a number of dimensions along which it can vary. One of the key aspects of the structure is the modality: whether the information is presented visually, for example, on a screen, as in most screen-based applications, or in some kind of an ambient way such as, for example, a colored light in a room; whether it's presented as audio; or whether it's presented through some kind of haptic feedback. In some of the newest applications that are being created, we've also started to see olfactory feedback as a modality for output. But for most current applications, the output is going to be visual, audio, or haptic. Output also has a format. We can present users with numbers. We can present them with graphs. We can present them with a list of things, like the list of results on the Google web page. We can send them a push notification, as well as many other kinds of feedback that an application is able to produce. 
Finally, another aspect of the output structure is the location, or where the output takes place. This can happen inside the application itself, it can happen on some kind of a wearable device, it can happen in the environment in a lot of embedded computing, or it can happen on something like the lock screen of a mobile phone. The modality, format, and location all determine how exactly a piece of information is going to be presented and what kind of user experience will result from that presentation. The considerations that the designer needs to think about when they're designing output have to do with what exactly the user needs to know in order to perform the task the system is intended to support, when and how often the user's interaction with the information will take place, and in what state the user is going to be when the information is presented. Again, is the user going to be walking? Are they going to be driving? Or are they going to be sitting still with a lot of both attention and cognitive capacity to engage with the information? Then, there are things like the user's current knowledge base. For instance, for things like graphs, you often need a certain level of both numeracy and graph literacy for graphs to be effective output methods. So, being able to understand the current knowledge base of the targeted population is really important. Let's take a look at some of these issues in relation to Heartsteps, the walking intervention that we were discussing last time. One of the things that Heartsteps does is to provide activity suggestions to users a few times each day. The information type that is presented through the activity suggestions is text that contains a suggestion of how the user can be active right now, in their current context. This text is sent to users as push notifications, on average every two hours. The timeliness of receiving this suggestion is actually important. 
So, it's important for the user to see the suggestion relatively close to the time that the suggestion is sent. The suggestion is sent when the user is, and has recently been, sedentary. In order to ensure the timeliness of receiving the information, the push notification is used as the output modality, which means that the suggestion shows up on the lock screen of the phone. A chime sounds when it arrives, and the phone vibrates, providing multiple ways for the user to notice that they have received this particular piece of information. On the other hand, the graphs that Heartsteps provides are presented as a screen inside the interactive application that the user has on their phone. The information presented in this case is the step count. When the user looks at this information is completely user-defined. The user decides when they want to go into the application to take a look at the information that's presented there, although we assume that a user will be doing this several times a day. The user's state is likely going to be variable. Sometimes they might look at the data while they're walking, to see how much they have walked so far, while other times they might want to review the data while they're being sedentary. In terms of the format, we provide both a number for the total step count for that day, which the user can see at a glance if they just want a quick reference, and graphs that allow them to monitor or investigate how they have been walking that day. At the same time, since we're using a Jawbone Move activity tracker, there are some lights on the tracker itself that provide users with an ambient awareness of how much they have been walking. Even from these two kinds of output, you can already tell that there are conceptually two types of output methods: push methods and pull methods. Pull content is how traditional information was presented to users in interactive applications. 
This means that the information is present in the system, and it's made available to the users, but the user has to actively decide to go and access the information. The information can be, in this case, highly interactive, and it leaves the user in complete control of when and how much of the information they are going to be getting. Given that the user is deciding when they're going to access the information, this also means that pull content can be highly complex, because the user can decide to access that information at times when they have the cognitive capacity and the time to engage with complex information effectively. Push content, on the other hand, is delivered to the users based on a set of rules. It's literally pushed to the users, so that the user receives that information at a time that the system decides is a good time for the user to receive that content. Push content often uses sensing and user modeling to determine the right time to deliver the information. Given that the user is often not going to be in a state that we can completely predict, push content often needs to be kept simple, so that the user can easily grasp the essence of what is being communicated to them and can then decide whether to engage with the content further or let it go. On the other hand, push content can often be perceived as high burden, because it often involves repeatedly notifying the user of events that are happening, which, if done too much, can create annoyance and frustration. How exactly the system decides to deliver information to the user is determined by the notion of state. Really, the way to think about the state is that it is the set of values of system inputs that are currently in memory, as well as the set of rules that determine what kind of output the system will produce. 
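As a minimal illustration of this definition, state as stored input values plus rules that turn them into output, here is a hypothetical sketch. None of these names or thresholds come from an actual system; they're only meant to make the idea concrete.

```python
# State: the current values of the system's inputs, kept in memory.
# (Illustrative names only; not taken from any real application.)
state = {
    "minutes_sedentary": 45,       # value produced by an activity sensor
    "notifications_muted": False,  # value set directly by the user
}

def output_for(state):
    """Rule set: translate the stored input values into an output decision."""
    if state["notifications_muted"]:
        return None  # produce no output at all
    if state["minutes_sedentary"] >= 30:
        return "push: time for a short walk"
    return None

print(output_for(state))  # push: time for a short walk
```

The point of the sketch is only that the same stored values, run through the same rules, always yield the same output, which is what lets the user build a model of the system's behavior.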
So, this is really the guts of an interactive system: it translates the input, the data that the system has taken in, into the output that the user is going to see. As an example of state, let's once again think about the activity suggestions in Heartsteps. In order to deliver activity suggestions, Heartsteps needs to track different pieces of information to determine whether to send a suggestion at all, and if so, what kind of suggestion to send. I mentioned that suggestions can be delivered up to five times a day in Heartsteps. These times are actually user-specified. At the start of the study, when they're on-boarding into the system, the user selects the times during the day when they want to receive the suggestions. The Heartsteps system keeps these times in memory and uses them as the times when, for each user, it tries to send a suggestion. Which suggestion is chosen is based on the user's current context. There are a number of pieces of information that Heartsteps needs to keep in memory in order to pull a message from the right category to send to a user: the user's current location, the weather outside, the time of day, and whether it's a weekday or a weekend. All of these variables are kept as part of the Heartsteps state in order to determine what suggestion to send. Suggestions are also not provided if the user is currently active, so if they're walking or running, or if the user is driving. Before sending a suggestion, the Heartsteps system has to look at the user's current activity, and it keeps that current activity as part of the state in order to determine whether to send a suggestion. Similarly, suggestions are not provided if the user has been active in the last 10 minutes, so the history of activity is also part of the state. 
Finally, suggestions are not provided if the user has turned off suggestions for a period of time, and that period covers the current moment. This is something that the user is able to do through the application, in this particular case through the Snooze button in the suggestion that you see on the screen, to tell the system that they don't want to receive suggestions for the next, let's say, three, four, or eight hours, or the rest of the day. If the system is in this state, no further suggestions are sent to the user. So, all of these pieces of information are kept as part of the state in order for Heartsteps to be able to appropriately send a suggestion, or to decide not to send one if the current moment is not a good time. All of this is part of the state. Another key concept is the notion of a mode. A mode is really an element of the state, and usually a user-controllable element of the state, that consistently determines how output is presented to the users. As I just mentioned, in the case of Heartsteps, one of the modes that Heartsteps can be put into is one in which all suggestions are snoozed. This can be done either through the settings in the application or by putting the system into this mode when the user receives a suggestion. But there are also system-wide modes. Some of the common ones on contemporary mobile phones are things like whether the ringer is on or off. If the ringer is off, all notifications that would typically chime or ring are provided to users as vibration only. If the system is in airplane mode, it's not able to receive any notifications from the Internet at all. If it's in do-not-disturb mode, all notifications are silenced. If a system is in night shift mode, for example, the colors on the whole screen are shifted toward the yellow to make it easier for the person to fall asleep. 
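Putting the pieces of state from this example together, the kind of rule set described here could be sketched roughly as follows. All of the names and data structures are hypothetical; the lecture doesn't show Heartsteps' actual implementation, so treat this as an illustration of the concept, not the real code.

```python
from datetime import datetime, timedelta

def should_send_suggestion(state, now):
    """Decide, from the stored state, whether this is a good moment to push
    an activity suggestion. Each check mirrors one condition from the lecture."""
    snoozed_until = state["snoozed_until"]
    if snoozed_until is not None and now < snoozed_until:
        return False  # the user has put suggestions into the snoozed mode
    if state["current_activity"] in ("walking", "running", "driving"):
        return False  # the user is currently active or driving
    if now - state["last_active_at"] < timedelta(minutes=10):
        return False  # the user has been active in the last 10 minutes
    return True

def notification_style(state):
    """A mode like the ringer consistently shapes how the output is presented."""
    return "chime and vibrate" if state["ringer_on"] else "vibrate only"

now = datetime(2024, 5, 6, 14, 0)
state = {
    "snoozed_until": None,
    "current_activity": "still",
    "last_active_at": now - timedelta(hours=1),
    "ringer_on": False,
}
if should_send_suggestion(state, now):
    print(notification_style(state))  # vibrate only
```

Notice how the snooze check and the ringer check play different roles: the first is a rule deciding whether output happens at all, while the second is a mode that consistently changes how any output is presented.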
So, all of these things are states that the system maintains, which tell both the system as a whole and individual applications how to behave in a consistent way. It is important for the user to understand what mode the system is currently in, and feedback about this is really crucial. In the case of system-wide states like the ones we have been discussing, icons are often included either in the task bar or in other interface elements, like the control center in the case of the iPhone that you see on the screen here. They tell the user what mode the system is currently in. So, to summarize, UX design involves designing inputs, outputs, and the rules that translate inputs into desired outputs. This is fundamentally what the designer does when they're developing a new system. Design of both inputs and outputs must consider when, where, and how the user will interact with information, what information exactly the user needs to provide to the system, and what the system needs to provide in turn back to the user. Then, the visibility of the state can be really fundamental to creating a good user experience, so that the user understands exactly how this translation is happening and why the system is behaving the way it's behaving. Design of these three elements is really the fundamental task of designers, and prototyping is the way that designers get these elements of the user interaction right. Starting with the next lecture, we will look in more detail at how the process of prototyping works. Thanks for watching, and see you next time.