Today we're going to talk about the connections between technology and equity, and, to be perfectly honest, the ways in which technologies can reflect and reinforce inequity and injustice. When we think about technology in terms of equity and justice, it's important to remember one of the great myths of technology: that it is benign, objective, and merely technical. We assume that the technology itself is beneficial, and we also tend to assume that it will be widely available, that everyone will have access to it, that it will trickle down or trickle out. Developers, and often economists thinking at the aggregate level, will talk about a technology's diffusion. That's why it's important to think about socio-technical systems: the concept pushes us to grapple with the complexity of the relationships between technology and society, which operate more as a network, an enrollment process, than as natural diffusion, like dye spreading through a glass of water. And even dye in a glass of water spreads through the movement of individual molecules.

I want to talk today about a few different ways of thinking about the relationship between technology and equity. The most common is distributional equity, which concerns access to and affordability of technology. We often think, for example, of the high cost of prescription drugs as a failure of distributional equity. Another example is the difficulty many people have had getting access to COVID testing, and, in low- and middle-income countries, to COVID vaccines. Those are distributional equity problems. And while developers, and perhaps our greatest hopes, often focus on a technology's diffusion, or trickle-down innovation as I like to call it, in practice people with limited resources are often unable to access the most sophisticated technologies, for example robotic prosthetics. They are then limited in their lives because they don't have access to the most cutting-edge technologies, and of course the same is true of other kinds of medical technologies.

In addition to distributional inequity, we also want to think about design inequity. Design inequity means that the inequity is baked into the socio-technical system itself, whether intentionally or through neglect. For intentional inequities, we might think of a highway that bulldozes through historically disadvantaged communities of color, or of the spirometer, which is based on racist ideas about inferior lung capacity among Black people and therefore includes race-correction software, supposedly to account for the inferior lung capacity of certain individuals. That correction has real impacts, for example on whether people who experience an occupational injury and develop lung problems as a result are compensated. These inequities are intentional in origin, even though the spirometer has existed for so long that it is no longer even recognized as a technology into which intentionally racist biases were originally baked. A sketch of how such a race-correction factor can operate in software follows below.
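To make that concrete, here is a minimal sketch of how a race-correction factor can sit inside spirometry software. The factor values, threshold, and function names here are hypothetical, invented for illustration; actual devices historically scaled predicted lung function down by roughly 10 to 15 percent for Black patients.

```python
# Illustrative sketch of a "race correction" factor baked into spirometry
# software. All numbers and names are hypothetical, for illustration only.

RACE_CORRECTION = {"white": 1.00, "black": 0.85}  # hypothetical factors

def predicted_capacity(base_prediction_liters: float, race: str) -> float:
    """Scale the population-based prediction by a race 'correction' factor."""
    return base_prediction_liters * RACE_CORRECTION[race]

def is_impaired(measured_liters: float, base_prediction_liters: float,
                race: str, threshold: float = 0.80) -> bool:
    """Flag impairment when measured capacity falls below a fraction of the
    race-adjusted prediction."""
    return measured_liters < threshold * predicted_capacity(
        base_prediction_liters, race)

# The same measurement yields different diagnoses depending on race:
print(is_impaired(3.5, 5.0, "white"))  # True  -> flagged; may be compensated
print(is_impaired(3.5, 5.0, "black"))  # False -> same lungs; claim denied
```

The point of the sketch is that the bias lives not in the measurement itself but in the software's reference standard: the same measured capacity clears the impairment threshold for one patient and not for another.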
But inequities are also often baked into technologies unintentionally, simply through neglect. Think about the pulse oximeter. It tests blood oxygen levels by shining light through the skin and measuring how much is absorbed, and when you are measuring light through skin, skin tone matters. For people with darker skin tones, the pulse oximeter is not only likely to give inaccurate results; it tends to overestimate their blood oxygen levels. So someone might go to the hospital concerned about their lungs, something that has happened quite a bit with COVID-19, and the pulse oximeter will show blood oxygen levels higher than they actually are. This is really dangerous: it can influence whether someone is actually admitted to the hospital.

Similarly, we might think about the Aadhaar system in India. Aadhaar is a biometric identification system, based on fingerprints and iris scans, and people who want to access government services have to present their Aadhaar card, which links to all of this computerized information. However, those who are most marginalized often have the hardest time providing this biometric information. People who work in the fields, for example, may have their fingerprints worn away, or they may have had poorer access to health care and so have conditions that interfere with an iris scan. The paradox is that those who are most marginalized in the Aadhaar system are often the very people who most depend on the social services the government provides. This, again, is a problem that isn't intentional and isn't based on bias; in fact, the system was meant to provide these services more equitably. But the way the system is designed ends up harming these marginalized communities further.

A third way of thinking about equity and technology is procedural inequity. What I mean by that is fairness in decision-making: is the process fair, and in particular, do the most affected communities have a say? One of the biggest problems we are seeing right now is algorithmic bias and algorithmic injustice. These algorithms, these new forms of automation, often have historical data embedded in them, and from that historical data they produce predictions about how the world will work today. That might be in hiring systems, in law enforcement and predictive policing, or even in facial recognition technology matching driver's license photographs against what a facial recognition camera captures. But those matches, those ideas about who should be hired and who should not, who is a risk in terms of criminal justice and who is not, are all based on historical data, and they are also based on the particular categories in the data that developers, computer scientists, and electrical engineers decide are the most important when they build the algorithms that make these predictions. The problem, of course, is twofold. First, our histories are full of inequities, which are to some degree the design inequities just discussed. Second, the developers don't reflect society in an egalitarian way, and because they don't, they aren't necessarily thinking about what the appropriate categories would be to ensure that inequities are addressed. The sketch after this paragraph shows how a model trained on biased historical decisions reproduces that bias.
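Here is a minimal sketch of that mechanism. Everything in it, the data, the "proxy" feature, the numbers, is invented for illustration; the point is only that a model trained on biased historical decisions reproduces the bias, even when the protected attribute itself is left out of the training data.

```python
# Minimal sketch of how historical bias leaks into a predictive model.
# Data, feature names, and numbers are all invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected attribute (0/1) and a correlated proxy, e.g. neighborhood.
group = rng.integers(0, 2, n)
proxy = (group + rng.normal(0, 0.3, n) > 0.5).astype(float)
skill = rng.normal(0, 1, n)  # what we would actually want to predict

# Historical hiring decisions favored group 0 regardless of skill.
hired = ((skill > 0) & ((group == 0) | (rng.random(n) < 0.3))).astype(int)

# Train WITHOUT the protected attribute; the proxy still carries the bias.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
scores = model.predict_proba(X)[:, 1]

for g in (0, 1):
    print(f"group {g}: mean predicted hire probability = "
          f"{scores[group == g].mean():.2f}")
# Equally skilled candidates receive systematically lower scores in group 1,
# because the model learned the historical pattern through the proxy.
```

Auditing a model's outputs by group, as in the final loop, is one simple way such disparities can be surfaced; the design choice of which features to include is exactly the "categories" question raised above.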
Then finally, it's really important to think about historical legacy when we think about equity and technology. One of the great myths about the relationship between technology and society is that emerging technologies have unintended consequences, that we can't predict their implications. But many researchers have demonstrated that technologies in fact follow historical legacies. After all, they are created in particular societies, and societies are quite durable in their values. An important example here is the history of surveillance technologies, particularly the surveillance of Black communities. For a long stretch of United States history, people of color, specifically Black people, and often those who were enslaved, had to carry lanterns at night, essentially so that they could be surveilled. These lantern laws were an eighteenth-century mechanism of surveillance. If we understand those early lantern laws as forms of surveillance, it helps us understand the likely implications of today's Ring doorbells, closed-circuit televisions, and facial recognition technologies. In fact, we find today that those kinds of technologies are used disproportionately to surveil historically disadvantaged communities of color, specifically Black communities, and often to bring those communities further into the criminal justice system.

The irony is that a technology like the Ring doorbell is used very differently depending on the community in which it is deployed. In wealthier communities, these technologies are used to check whether a package has arrived and stays on the doorstep; in less advantaged communities, these same technologies are used for surveillance and for deepening connections to the criminal justice system. So this is another way of thinking about, really disaggregating, the relationship between technology and equity. The idea is to think about the relationship in all of these different ways in order to enhance how technologies can actually address concerns of inequity and injustice.