Welcome back. In this lecture we are going to shift gears a little bit, from talking about explicit design techniques and the design problems that UX designers have to solve, to thinking about the process of design a little more broadly and considering some of the larger issues that designers, in my opinion at least, need to be thinking about as they work on their technologies. There is no doubt that technology has profoundly shaped our world. Some of these changes have very clearly been for the better: it has influenced how we communicate and our ability to stay in touch, and with technologies such as massive online courses like this one, there has been a great increase in access to information and knowledge for areas and populations that did not have that access before. There's very little doubt that those changes can be profound and profoundly positive. Other changes have been less than positive. We increasingly see a very wide prevalence of surveillance, people's privacy has been compromised in many ways, and we certainly have a lot of technology that is capable of destroying many, many human lives at the press of a button. So the question really is, given that technology shapes our world, and that it does so in really profound ways, can designers do anything to make it more likely that the impact is positive? What is the role of designers in trying to ensure, at least to the extent that they're able, that the technologies they're designing are going to have a positive impact on the world?

The answer to this question has to do, at least in part, with how you think about the relationship between technology and social change. There are at least three main positions on this that have been debated widely in the literature over the last 20 to 30 years. The first of these is the position called social determinism, which basically states that technological consequences are predominantly the result of dynamics in the social, cultural, and political context, and that the design of the technology itself really doesn't matter all that much. How a technology ends up being used is predominantly determined by the social milieu in which it lands. One example of this is the use of BlackBerries in the late '90s and early 2000s, when BlackBerries were at the peak of their popularity. The main parties that used BlackBerries at the time were corporations and governments, and those were the kinds of organizations that these systems were originally designed for. But beyond those, it turned out that some of the most prevalent use of BlackBerries was in organized crime and drug dealing; drug dealers and criminals adopted this technology widely. Someone who takes the social determinism perspective on technological effects would argue that the reason for this is that in this context the need for privacy was paramount, and that any system that provided privacy and allowed for private communication would have been adopted. In this case it was the BlackBerry, but if the BlackBerry had not been there, any other kind of system that provided private communication, even smoke signals, would have been just fine. So most of the weight on how this technology was used came from the social, political, and economic configurations in which the technology ended up being adopted, not from the technology itself. Another position, on the opposite end of the spectrum, is what is traditionally called technological determinism.
This position, in rough terms, states that the design of technology dictates, at least to some extent, its social, political, psychological, and economic consequences. So the design of the technology really matters, and the design of the technology can greatly influence what consequences that technology has. There are two flavors of this position. The soft determinism position basically states that the reason the design of technology matters is that the design embodies a range of implicit assumptions that designers bring to the table, and that insofar as those assumptions are common, they end up creating technologies that change the technological and cultural landscape. So if we take, for example, the assumption that people highly value security and that security is going to be valued over many other things in people's lives, that leads to the development of security cameras for homes and surveillance cameras in public places, and we end up with a situation where surveillance becomes extremely widespread and privacy decreases. The reason for this is that a lot of assumptions about the importance of security, and the relative unimportance of privacy compared to security, were brought into the design process. Hard determinism states that, at least for some technologies, technological affordances necessitate certain kinds of social and political arrangements, and thus that technology can be inherently political. Probably the most extreme case of this is the example of nuclear weapons, which are so destructive, so utterly dangerous to the planet as a whole, that the moment those weapons were created they necessitated political and social arrangements that are much more authoritarian, with certain safety checks in place to make sure that these weapons are never inadvertently activated. That basically requires much stricter, much more hierarchical social and political arrangements than were possible before these weapons came into being. So, at least for certain classes of technology, the technology itself changes the world profoundly, because the existence of the technology dictates that certain other things in the world absolutely have to change; the technology itself carries a lot more agency in this setting than other positions allow.

The middle ground between these is what is typically referred to as the interactionist position. This position states that the affordances of technology make certain uses more or less likely, but that this interacts with the social configuration, the social milieu, which then takes that technology and adapts it to its needs and priorities. So it's really this interaction: what the technology makes possible and how the technology is designed will, at least to some extent, make certain uses easier and certain uses harder, but that then interacts with the needs of the people, their priorities, their social configurations, and so on, to produce the final results. So if we take a position like this, what can we say about the example of BlackBerry use in organized crime? Could designers have done anything to make the situation any different? If we take the notion of affordances, what BlackBerries provided was an affordance for private, encrypted communication.
So the interactionists would say: you have that on the one hand, which makes the use of this system highly appealing to any party that requires private, encrypted communication. On the other hand, you have various social groups that might or might not require private, encrypted communication. Ordinary people at the time typically did not, so adoption of BlackBerries among them remained relatively low, unless they needed to send a lot of email, in which case the keyboard was really attractive. But organized crime was trying to stay under the radar of law enforcement, so they flocked to the technology in much larger numbers. If you actually try to think about what you could change in that situation, there might have been some things about how BlackBerry systems worked that could have been changed and that would have potentially altered their adoption in organized crime. One is that if the system had always required an organization-hosted BlackBerry Enterprise Server, rather than allowing phone providers to host BlackBerry servers, those servers would have been relatively difficult to administer, and the level of expertise needed to use this kind of system might have put it out of reach for at least lower-level criminals and drug dealers, because they wouldn't have had the infrastructure needed to actually run the system. Licensing requirements that required some kind of authentication would have been another thing that could have been done to change the affordances of the technology and therefore its downstream adoption by different social groups. Interactionists would state that it's really this combination of what the technology makes easy and what the technology makes difficult, based on its design, together with the needs of different social groups, that interact to create the final effects.

There are things to consider if you take this kind of interactionist position. Certainly the designer takes on some responsibility for shaping the technology in such a way as to try to minimize downstream harm. If you're going to do that, then there are certain sets of things that, as a designer, you would need to be considering. One is issues like what kinds of relationships your system is facilitating and what forms it is hindering, and whether the design excludes certain kinds of populations. Here's a very simple example that is much more prosaic than the examples we've been discussing in this lecture so far: if you have, for example, a physical activity system like a Fitbit that predominantly uses complex graphs, that by default will exclude the population that does not have the graphical and numerical literacy that is needed to interpret those representations. So the exclusion ends up happening just from the kinds of design decisions that a designer made in relation to how the feedback is being provided. Another thing that can play into this is the notion of value and cost, and how value and cost are distributed. Who benefits from the system, and who is bearing most of the cost? Another simple example around this has to do with what are typically called patient-reported outcomes systems. These are systems that are used in healthcare settings to collect information about patients' health when they're outside of the clinic.
Traditionally, a lot of these systems are designed in such a way that they're really basically long questionnaires that patients are asked to complete, and then those answers are transmitted to the healthcare system, where clinicians can review them and use them in clinical encounters. With systems like this, most of the cost of using patient-reported outcomes falls on the patient, because they're the ones who have to put in all the work of completing these instruments, but very rarely do they get any immediate feedback. Those answers tend to just disappear: once they click the submit button, all that information goes away, and the patient is left with basically no immediate value for all the work they have done in using the system. All the benefit accrues to the clinical system, because they're the ones who are now getting clean, structured information that they can use as part of clinical care. So the configuration of costs and benefits creates a power structure between healthcare systems and patients that the designers of these systems probably did not intentionally set out to create, but that ended up being a side effect of the lack of feedback that the systems provided to patients.

If you're going to try to think about these downstream effects in a systematic way, there are in fact tools that can help with this kind of reflective process of designing. One of them is the notion of cultural probes, a set of tools developed by Bill Gaver and his students in the UK, which has to do with introducing certain kinds of technologies into a social milieu just to see how people respond. These kinds of probes are often designed to strongly emphasize certain kinds of effects, maybe the erosion of privacy being an effect that the probe really amplifies, to see how people respond and to try to better understand their reactions. Another kind of tool is a set of scenario-based design extensions called envisioning scenarios, developed by Batya Friedman at the University of Washington, which takes the notion of scenario-based design that was presented in UX505 and extends it to try to encapsulate the possible future side effects of a technology if that technology were to be widely distributed in a society. The envisioning scenarios explicitly ask the designer to think about what the world would look like if the technology they're designing were to become widespread: if it were wildly successful and it were everywhere, what kind of world would result from that? Are there things we can deduce from that kind of scenario, from trying to think through what really broad penetration of a particular technology would do to the world, and then use those envisioned effects as a way to check our current design decisions, to make sure that at least the negative effects are minimized and the desired effects are maximized? So we explicitly think about the future, and about the penetration of the technology, in order to change the current design decisions that a designer has to make. Another tool is value analysis for different stakeholders; this is the example I was just mentioning about patient-reported outcomes, really trying to be thoughtful about where the value lies for the different stakeholders who might be interacting with the system, and who is bearing most of the cost of those systems.
Then finally, there are checklists for trying to make explicit what assumptions go into the design process, so that designers can be more reflective about what presuppositions and beliefs they're bringing to the design problem, whether those beliefs are things that need to be changed, and whether the design problem itself needs to be reframed in order to avoid negative effects down the line. So, again, this is heavy stuff. We are basically saying that even people who are working on relatively simple systems bear, at least to some extent, responsibility for what kinds of effects those systems are going to have on the world. But given how much technology changes the world, and given how much it has permeated everything we do, I don't think we as designers can completely bypass the notion that in a sense we do have that responsibility. So, just to summarize: technology can affect the world far beyond what designers initially envision and initially intend. Not all effects can be predicted in advance, there's no question about that, but some of them can, and if designers are thoughtful and explicitly try to envision and predict the future effects of their technologies, they can potentially design their systems in a way that minimizes future harm. And if the designers themselves are not going to be working on this, who will? Thank you for watching and I'll see you next time.