Welcome back. In this last lecture of the core course, we'll be looking at some of the changes in the technological landscape that are creating new opportunities for interaction designers, and also putting new pressure on them to change their craft to accommodate the variety of new technologies emerging on the market. This is an exciting time to be an interaction designer.

The technological landscape remained relatively stable for a long time. Desktop computers were around for several decades, and graphical user interfaces really didn't change fundamentally all that much even when mobile phones became popular. So we had a long period of time when interaction designers could create mostly screen-based applications using a relatively standard set of tools, a lot of which we've been discussing in this course, and be able to manage basically any project that was brought to them. That is no longer the case. A lot of changes in technology have shown up in the last couple of years, and more are still emerging. They are creating both many new technologies that interaction designers need to work with and opportunities for much more interesting interactions than we have been able to have in the past.

Probably the smallest change in all of this is the fact that we now have screens everywhere: screens on our wrists, screens in our cars, screens on our fridges, screens embedded in our walls. Screens have really become pervasive in our lives, and interaction designers are having to figure out what changes when they start designing for all of these screens. In terms of what this means for interaction design, traditional prototyping tools, like lo-fi wireframes, can still be used to design the user interactions, so there's really no huge change in the actual tools that designers use. But testing of those prototypes has to be done in a somewhat different way, because a lot of these screens are now embedded in the physical objects we interact with in our daily lives, or worn on our bodies, and are used in different contexts, when the user is in different states: when they're in motion, when they're in crowded environments, and so on. Being able to test in relatively realistic settings, when a person is moving or under a heavy cognitive load, for example, becomes really paramount.

It makes no sense to test, for example, the interface for a car navigation system in optimal conditions where the user can pay full attention to the screen. That's not how we drive. Most of the user's attention is on the road, on keeping themselves and their passengers safe; only a fraction of their attention should be needed to operate the user interface of a navigation system. If a system like that is tested, it needs to be tested under heavy cognitive load, so that the interactions can be optimized for the environments in which a user is actually going to be using the system. So testing in realistic settings becomes really key for a lot of these new screens that have emerged in our lives.

Another change is that the environment itself has become instrumented. We are now surrounded by objects that have some form of computation embedded in them. Our thermostats have become smart machines. Our washing machines have little computers inside of them.
Our door locks have become smart locks that can sense that we are approaching and unlock the door automatically. Our lights can be turned on and off, or can change their color, based on directions they get from our phones or other computing devices. Our environment has become basically permeated with computing systems, and interaction designers need to be able to design interactions for all of these systems.

What this means is that prototyping tools for screen-based systems become of really limited value. Wireframes don't do you any good when you're trying to figure out how a smart lock is supposed to function. The lo-fi prototyping methods that designers now need to use have to account for physicality: both the physical nature of the objects with which users are interacting and the physical environment in which those interactions take place. So things like storyboards and Wizard of Oz prototyping become lo-fi prototyping methods that are a lot more useful for designing for the Internet of Things and these instrumented environments. There are also a number of other techniques, like role-playing and something called bodystorming, where individuals use their motion and the shapes they make with their bodies to enact what interaction with a computing system might look like. Those methods become really paramount for doing lo-fi prototyping in this domain. Finally, it's often necessary to create functional, if simple, physical prototypes that individuals can interact with to get a realistic experience of what working with an object might be like. So platforms like Arduino and Raspberry Pi become part of the designer's toolbox for creating prototypes that can test design concepts and give individuals something on which they can provide feedback.

Another change that has happened recently is the proliferation of new interaction modalities: the use of tapping, for example, to provide input to computing systems; haptic feedback; the use of motion with things like Microsoft Kinect, where the movement of the individual's body is used to control some of the interactions. All of these are becoming a lot more prevalent in how individuals interact with computing systems, and interaction designers are now trying to develop new interaction modalities and techniques that work effectively in this changed landscape. Here as well, there is a strong need to prototype physical artifacts with which individuals can interact. To test haptic feedback, and to learn what kinds of haptic feedback individuals are going to be able to understand and use effectively, you really need a physical object that can provide that feedback; there is no way around it. So prototyping tools for creating smart physical objects, like microcontroller systems, become a central prototyping tool that designers have to learn how to use; a minimal sketch of such a prototype follows below. They also need to be able to prototype movement, so role-playing and simply using cameras to record individuals trying different movements become other ways that designers can do lo-fi prototyping in this setting. Then finally, since these interactions often happen in different contexts, with the user in different environments, being able to test in those environments becomes really important as well.
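To make this concrete, here is one possible minimal sketch of such a functional prototype, written in Python for a Raspberry Pi using the gpiozero library. The wiring, the pin numbers, and the buzz pattern are all assumptions made up for this example; the point is only to show how little code it can take to put a testable haptic behavior in front of users.

```python
# Minimal haptic-feedback prototype for a Raspberry Pi (a sketch, not a spec).
# Assumed wiring: a push button on GPIO 2 and a small vibration motor,
# driven through a transistor, on GPIO 18.
from signal import pause
from time import sleep

from gpiozero import Button, PWMOutputDevice

button = Button(2)            # the user action we want to acknowledge
motor = PWMOutputDevice(18)   # PWM lets us vary the strength of the buzz

def buzz(strength: float, duration: float) -> None:
    """Run the motor at a given strength (0.0-1.0) for `duration` seconds."""
    motor.value = strength
    sleep(duration)
    motor.value = 0

def confirm_press() -> None:
    # Two short, gentle pulses: one of several candidate feedback patterns
    # a designer might put in front of users to see which reads most clearly.
    buzz(0.6, 0.1)
    sleep(0.1)
    buzz(0.6, 0.1)

button.when_pressed = confirm_press
pause()  # keep the script alive so the event handler can fire
```

A prototype like this can be taped inside a foam model of the eventual product, which is often enough to get meaningful user feedback on the feedback patterns themselves.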
Another change that is really picking up steam recently is that a lot of our systems are now based on artificial intelligence. Probably the prime examples of this are smart assistant systems like Amazon Echo and Google Assistant, which are intended to help individuals solve their problems just by asking the system to do something, with the system adaptively responding to those requests. In this case, Wizard of Oz prototyping is absolutely fundamental. It really becomes the way that these new systems are initially designed and tested with users. Building technology that can actually do the artificial intelligence involves a tremendous amount of implementation work, so designers have to rely on Wizard of Oz prototyping to test their ideas before that kind of implementation begins; a minimal sketch of such a setup appears at the end of this section. It also requires that designers have a deeper understanding of the underlying technology and of what is technologically possible, in order to come up with prototypes that are realistic and feasible from a technological perspective. That also means they need to work much more closely with computer scientists than was the case with traditional screen-based interactions. Although it's not optimal, it is not unknown for a design firm to pick up a project and spec out a full screen-based application, like a mobile phone application or a web application, without ever talking to developers first, and to just hand over the final mock-ups as the spec for what needs to be developed. That cannot be done with artificial intelligence-based systems. Very close, ongoing collaboration between designers and computer scientists is crucial.

Finally, we're now at a point where virtual and augmented reality devices are becoming good enough and cheap enough that they're definitely entering the consumer market, and they are going to be more and more popular with every passing year. That means that UX designers really need to understand how to develop effective augmented and virtual reality experiences. For low-fidelity prototyping, that means they can sometimes just use the physical environment: the physical environment itself becomes a prototype for what a virtual environment is going to be like once it's implemented in a virtual reality system. But it also means that UX designers need to acquire a lot of new skills with prototyping platforms that they likely did not use before. Game engines, things like Unity and Unreal, become a really fundamental way that VR experiences are prototyped, and there is also a variety of dedicated VR prototyping tools, like Sketchbox, that interaction designers will need to learn. What makes this tricky, at least at this point in time, is that both the tools for doing prototyping and design in VR and AR environments and the best practices for what interactions are effective in those environments are evolving really quickly. So designers who want to work in this field have to be comfortable with change. They have to be adventurous, willing to pick up new tools, and willing to change their practice and the experiences they're creating with every passing year. It adds some challenge on the part of the designer to work in this environment, but at the same time, given the pace of development, it's one of the most exciting areas where UX designers can work at this point.
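Returning to Wizard of Oz prototyping for AI-based systems, the sketch below is a hypothetical illustration in Python of how simple such a setup can be: the participant types requests into one terminal, and a hidden human "wizard" answers from another, so the participant experiences what feels like an intelligent system long before any AI exists. The script name, port number, and prompts are all invented for this example.

```python
# Minimal Wizard of Oz setup for testing a "smart assistant" concept.
# Run "python woz.py wizard" in one terminal and
# "python woz.py participant <host>" in another.
import socket
import sys

PORT = 5005  # arbitrary port chosen for this sketch

def wizard() -> None:
    """The human operator: sees each request and types the system's reply."""
    with socket.create_server(("", PORT)) as server:
        conn, _ = server.accept()
        with conn, conn.makefile("rw", encoding="utf-8") as stream:
            for request in stream:
                print(f"Participant asked: {request.strip()}")
                reply = input("Your reply as the 'assistant': ")
                stream.write(reply + "\n")
                stream.flush()

def participant(host: str) -> None:
    """The test participant: talks to what appears to be a smart assistant."""
    with socket.create_connection((host, PORT)) as conn:
        with conn.makefile("rw", encoding="utf-8") as stream:
            while True:
                request = input("Ask the assistant: ")
                stream.write(request + "\n")
                stream.flush()
                print("Assistant:", stream.readline().strip())

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "participant"
    if role == "wizard":
        wizard()
    else:
        participant(sys.argv[2] if len(sys.argv) > 2 else "localhost")
```

In a real study, the wizard sits out of sight, and the participant-facing side might speak the replies aloud with a text-to-speech library; but even this bare-bones version lets a designer test the phrasing and behavior of an assistant before a single line of AI code is written.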
Just to summarize, we are currently seeing really huge changes in the technological landscape, all happening at the same time: from instrumented environments, through virtual reality, to a proliferation of screens in basically any device that human beings interact with. It means that interaction designers have far more diverse types of objects for which they can design than was the case even five to ten years ago. This means that interaction designers need to learn new tools and new prototyping methods in order to work effectively in these environments. On the other hand, the actual process of design does not really change with these technological changes. The process of figuring out what the design problem is, doing formative research, trying to understand target users, generating ideas for solutions, and then prototyping those ideas, all of that remains the same, even in this completely changed environment. It is just the specific tools, and the specific interactions that designers are going to be creating, that are evolving and changing. The actual process, and basically all of the things that we learned in this course and in the last course, UX 505, are very much still the fundamental pieces and skills of interaction design. So in that respect, the process is rock solid and allows effective design even in these new environments that are just coming onto the market. Thank you for watching and see you next time.