Welcome to the third week of this course. This week we will dive into the basics of incorporating safety into autonomous vehicle design. Throughout this module, we'll discuss some of the recent autonomous vehicle crash reports, then we will formally define safety concepts for self-driving cars and discuss the most common sources of hazard that occur. We'll discuss some industry perspectives on safety, and finally, we'll wrap up by discussing some of the common frameworks that are used in safe system design. In this lesson, we will discuss some of the first incidents of self-driving car crashes from 2017 and 2018. Then, we will define some basic safety concepts, list some of the most common causes of autonomous vehicle hazards, and discuss the low-level and high-level requirements for safety. We should point out that the material in this module is mostly taken from the published guidelines by the International Organization for Standardization (ISO). You'll find a more comprehensive version of these frameworks online. Let's start with a discussion of some of the more prominent autonomous vehicle failures to date. In March 2016, a self-driving Google car, now Waymo, ran into the side of a bus when it attempted to pull out from behind an obstacle in its way. The Google car had moved over to the right of its lane in preparation for a turn, and a bus approaching from the rear was aiming to pass it within the same lane. The Google car software believed the bus would not attempt to pass it, as the gap between itself and the cars in the next lane was too narrow. It turns out buses habitually shoot through smaller gaps than Google anticipated, leading to the crash in this case. By the time the Google car could react to the new measurements of the bus location, it was too late. This is just one example of how hard it is to predict all other vehicles' actions before they happen.
A year later, an Uber self-driving vehicle overreacted during a minor collision caused by another vehicle and ended up overturning. Since the dynamic models of the vehicle don't assume significant disturbance forces from other vehicles acting on the car, the controller had likely not been tested for such a scenario and overreacted. This crash highlights the need for robustness integrated into the control system and for exploratory testing that covers as many foreseeable events as possible. In late 2017, a GM Cruise Chevy Bolt knocked over a motorcyclist after it aborted a lane change maneuver. After the Bolt initiated the maneuver, the gap it was hoping to enter closed rapidly due to a braking lead vehicle in the adjacent lane. The motorcyclist, who was lane splitting, moved forward beside the Bolt and blocked the return maneuver. The Bolt was stuck in a dilemma situation: collide with the motorcycle, or crash into both cars in the adjacent lane. It's not clear that a specific decision was made here to choose one or the other outcome, and a lawsuit has buried the details of the case. However, because other agents are also predicting the self-driving car's actions, it is very challenging to assess what the right action is in many situations. It's possible the merge could have succeeded with a more aggressive driving style, or that a slightly delayed abort might have left enough time to avoid the motorcyclist. This tight interaction of decision-making is still a big challenge in self-driving cars. Finally, we should talk a little bit about the Uber crash that led to a pedestrian fatality in early 2018. Operating in Tempe, Arizona, Uber had an extensive testing program at the time, with safety drivers monitoring the autonomy software. The incident occurred on a wide multilane divided road at night, where a pedestrian was walking her bicycle across the road in an unmarked area. The victim, Elaine Herzberg, was a 49-year-old woman from Tempe.
This is the car and the scene depicted from a bird's eye view. You can see the pedestrian entering from the left and the vehicle traveling along the roadway from the bottom of the image. The preliminary investigation revealed that there were multiple failures that led to the incident. Let's walk through the different contributing factors. First, there were no real-time checks on the safety driver. In this case, the safety driver was inattentive and allegedly watching Hulu at the time. The safety driver could have been doing anything, and Uber didn't have any way in the vehicle to assess the driver's attentiveness. Because watching an autonomous driving system operate is a difficult task to stay focused on, it is really important to have a safety driver monitoring system. Second, there was significant confusion in the software detection system. Upon initial detection at six seconds to impact, the victim was first classified as an unknown object, then misclassified as a vehicle, and then misclassified as a bicycle. In the end, the decision made by the autonomy software was to ignore the detections, possibly because they were too unreliable. Perception is not perfect, and the switching classifications should not have led the vehicle to ignore an object like that completely. Finally, 1.3 seconds before the crash, the Volvo emergency braking system did detect the pedestrian and would have applied the brakes rapidly to reduce the impact speed, potentially saving the life of Elaine Herzberg. However, it is not safe to have multiple collision avoidance systems operating simultaneously during testing, so Uber had disabled the Volvo system when in autonomous mode. Ultimately, the autonomous vehicle did not react to the pedestrian's path, and the inattentive driver was unable to react quickly enough to avoid the collision.
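To make the classification-switching failure concrete, here is a minimal sketch of one way a tracker could avoid discarding an unstable detection. All names here (`Track`, `is_obstacle`, the threshold of three detections) are illustrative assumptions, not Uber's actual software: the point is only that persistence of a detection, not the stability of its class label, should drive the decision to treat something as an obstacle.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """A tracked object whose class label may change between frames."""
    positions: list = field(default_factory=list)  # (x, y) per frame
    labels: list = field(default_factory=list)     # classifier output per frame

    def update(self, position, label):
        self.positions.append(position)
        self.labels.append(label)

    def is_obstacle(self, min_detections=3):
        # Treat any persistently detected object as an obstacle,
        # even if its class label keeps switching between frames.
        return len(self.positions) >= min_detections

# Detections roughly like those in the Uber incident: same object,
# three different class labels across successive frames.
track = Track()
for pos, label in [((10.0, 2.0), "unknown"),
                   ((9.5, 2.1), "vehicle"),
                   ((9.0, 2.2), "bicycle")]:
    track.update(pos, label)

assert track.is_obstacle()  # persistence, not class, drives the safety decision
```

A real system would of course fuse positions and velocities over time rather than count detections, but even this simple rule would not have dropped the object.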
The combination of the failure of the perception system to correctly identify the pedestrian with her bicycle and of the planning system to avoid the detected object even though its class was uncertain led to the autonomy failure, and the lack of human or emergency braking backup ultimately led to the fatality. So, we can see from this set of incidents that every aspect of the autonomous driving system, the perception, planning, and control, can lead to failures and crashes, and that often the interaction of multiple systems or multiple decision-makers can lead to unanticipated consequences. In fact, there are many more ways an autonomous system can fail. It is clear that we need rigorous and exhaustive approaches to safety, and both industry and the regulators are tackling the safety challenge head-on. Okay. Now that we have a sense for the challenges of safety assessment, let's formally define some basic safety terms. We will use the term harm to refer to the physical harm to a living thing, and we will use the term risk to describe the probability that an event occurs combined with the severity of the harm that the event can cause. We can now describe safety as the process of avoiding unreasonable risk of harm to a living thing. For example, driving into an intersection when the traffic signal is red would be unsafe, as it leads to unreasonable risk of harm to the occupants of the vehicle and to other vehicles moving through the intersection. Finally, a hazard is a potential source of unreasonable risk of harm, or a threat to safety. So, if my system software has a bug that could potentially cause an accident, the software bug would be a hazard. Now, what do you think are the most common sources of autonomous vehicle hazards? Well, hazards can be mechanical, so maybe incorrect assembly of a brake system causing a premature failure. They can be electrical, so faulty internal wiring leading to a loss of indicator lighting.
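As a small illustration of the risk definition above, one common way to quantify risk is as the product of event probability and harm severity. The probabilities and the severity scale below are made-up numbers for illustration only, not values from any standard:

```python
def risk(probability, severity):
    """Risk combines how likely an event is with how much harm it can cause."""
    return probability * severity

# Two hypothetical hazardous events (all numbers are illustrative):
run_red_light = risk(probability=0.01, severity=100)  # rare but severe
clip_a_curb   = risk(probability=0.20, severity=1)    # common but minor

# The rare, severe event carries more risk here than the frequent, minor one.
assert run_red_light > clip_a_curb
```

This is why a low-probability event like running a red light can still represent an unreasonable risk: the severity term dominates.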
Hazards could also be a failure of the computing hardware chips used for autonomous driving. They can, as described earlier, be due to errors or bugs in the autonomy software. They might be caused by bad or noisy sensor data or inaccurate perception. Hazards can also arise due to incorrect planning or decision-making, inadvertently selecting hazardous actions because the behavior selection for a specific scenario wasn't designed correctly. It's also possible that the fallback to a human driver fails by not providing enough warning to the driver to resume responsibility, or maybe a self-driving car gets hacked by some malicious entity. These are the main categories of hazards that are regularly considered: mechanical, electrical, computing hardware, software, perception, planning, driving-task fallback, and cybersecurity. Each of these hazards requires different approaches when assessing overall system safety. We'll see more on how to deal with these categories in later videos. Now that we know the basic terminology involved in safety, let's think about the following question. How do we ensure our self-driving car is truly safe? That is, how do we take the complex task of driving and the many hazards that can occur, and define a safety assessment framework for a complete self-driving system? In the US, the National Highway Traffic Safety Administration, or NHTSA, has defined a twelve-part safety framework to structure safety assessment for autonomous driving. As we'll see in the next videos in this module, this framework is only a starting point, and different approaches that combine multiple existing methods and standards have already emerged in the industry. So, let's first discuss the NHTSA's safety recommendations. This framework was released in 2017 as a suggested, not mandatory, framework to follow. The framework itself consists of 12 areas or elements that any autonomous driving company should focus on, or rather, is encouraged to focus on.
First, a system design approach to safety should be adopted, and this really permeates the entire framework document. Well-planned and controlled software development processes are essential, and existing SAE and ISO standards from automotive, aerospace, and other industries should be applied where relevant. The remaining 11 areas can be organized loosely into two categories: autonomy design, which requires certain components to be included and considered in the autonomy software stack, and testing and crash mitigation, which covers approaches to testing the autonomy functions and ways to reduce the negative effects of failures, as well as learning from them. In the autonomy design category, we see some components we're already familiar with. The NHTSA encourages a well-defined operational design domain, so that the designers are well aware of the flaws and limitations of the system, and can make an assessment as to which scenarios are supported and safe in advance of testing or deployment. Next, it encourages a well-tested object and event detection and response system, which is critical to perception and crash avoidance. Then, it encourages the car to have a reliable and convenient fallback mechanism by which the driver is alerted or the car is brought to safety autonomously. It is essential to develop this mechanism keeping in mind that the driver may be inattentive. So, some thought should go into how to bring the system to a minimal risk condition if this happens. The driving system should also be designed such that all federal, state, and local traffic laws are followed and obeyed within the ODD. Next, the framework encourages designers to think about cybersecurity threats, and how to protect the driving system from malicious agents. Finally, there should be some thought put into the human machine interface, or HMI.
So, the car should be able to clearly convey the status of the machine at any point in time to the passengers or the driver. Important examples of status information that can be displayed are whether all sensors are operational, what the current motion plans are, which objects in the environment are affecting our driving behavior, and so on. We now move to the testing and crash mitigation areas. First and foremost, the NHTSA recommends a strong and extensive testing program before any service is launched for the public. This testing can rely on three common pillars: simulation, closed-track testing, and public road driving. Next, there should be careful consideration of methods to mitigate the extent of injury or damage that occurs during a crash event. Crashes remain a reality of public road driving, and autonomy systems that can minimize crash energy and exceed passenger safety standards in terms of restraints, airbags, and crashworthiness should be the norm. Next, there should be support for post-crash behavior. The car must be rapidly returned to a safe state: for example, brought to a stop with the fuel pumps securing the fuel, first responders alerted, and so on. Further, there should be an automated data recording function, or black box recorder. It is very helpful to have this crash data to analyze and design systems that can avoid the specific kind of crash in the future, and to resolve questions about what went wrong and who was at fault during the event. Finally, there should be well-defined consumer education and training: courses for the fallback driver during testing, and training for consumer drivers and passengers to better understand both the capabilities and limits of the deployed autonomous system. This final step is essential to ensuring our natural overconfidence in automation does not lead to unnecessary hazards being introduced by early adopters. Keep in mind that these are suggested areas that any company should work on.
They are not mandatory requirements, yet. The main objective of the NHTSA is to guide companies building self-driving cars without overly restricting innovation or pre-selecting technologies. As entrants to the market start to emerge, it is likely that more definitive requirements for safety assessment will also emerge. Okay. Let's summarize. In this video, we discussed a few of the first accidents that the self-driving industry has seen, and revealed the many ways in which autonomy software can fail. We then formally defined harm, risk, hazard, and safety, and listed out the major sources of autonomous vehicle hazards. We then reviewed the NHTSA safety framework. In the next video, we'll discuss some industry perspectives on self-driving safety, as well as some safety recommendations for self-driving cars. See you then.