Here's an experiment where we're studying how to use teams of robots in emergency response. Imagine there's a 911 call, or a disaster that might have occurred naturally, and you want eyes and ears on the scene before human first responders or law enforcement officials arrive. You can get robots with downward-facing cameras and microphones to the scene, monitoring everything before the humans arrive. We're interested in determining how to coordinate the actions of teams of robots and how to create artificial swarms. What information must robots communicate to each other? How do they coordinate their actions? How do they cooperate to perform tasks that they individually cannot perform? And how do they collaborate with different types of robots?

Certainly nature has many examples in which individuals coordinate, cooperate, and collaborate to perform tasks that they individually cannot accomplish. This is an example of desert ants, where they cooperate to carry objects back to their nest. Here's another example, where you see half a million to a million starlings off the coast of Denmark coordinating to form these incredibly complex patterns in the sky. Or this example, where army ants create bridges with their own bodies, going from a region which might be flooded by water to drier land. In all of these examples, individuals coordinate, collaborate, and cooperate. We're interested in learning from these examples and building artificial swarms.

Here are three organizing principles for collective behavior. First, it's clear in all these examples that individuals act independently. There is no leader in the swarm telling every individual what to do. Second, all actions are based only on local information. Again, there is no mechanism for sharing information so that every individual has the same global picture. Third, there's a paradigm of anonymity at play here. The individuals are anonymous: it really doesn't matter who you're collaborating with or who you're coordinating your actions with. When we build robots, we try to preserve these three overarching principles.

So here is an example that illustrates a cooperative task in which robots coordinate their actions to build a three-dimensional structure. In this particular experiment, the robots determine on the fly who gets to pick up which object, each object being a truss-like element, and who gets to place it in which position. By coordinating their actions, they're able to build complex three-dimensional trusses. In each of these examples, the robots only have information about the CAD model of the truss, and they're able to determine how to coordinate their actions and how to plan their trajectories in a safe manner in order to accomplish the task.

Let's look at the mathematics of this problem. This is a complex task. Imagine you have n robots and m obstacles. The dimensionality of the state space grows with the number of robots. For one robot, you might have a state space consisting of its configuration and its velocity. For n robots, you essentially have n copies of that state space. So the complexity of whatever operation you need to perform, whatever calculations you need to conduct, will grow as n increases. The number of potential interactions each robot must consider also increases; in fact, the number of pairwise robot-robot interactions grows as n squared.
In addition, each robot must worry about the obstacles in the environment, and if there are m obstacles, there are m possible interactions per robot. In a team of n robots, the number of such interactions is m times n. Thus, the complexity of the task of coordinating motions and avoiding obstacles grows as mn plus n squared. Finally, there's the question of who gets to do what. This is the so-called task assignment problem. If the robots have a number of goal positions to choose from, the problem of which robot gets to go to which goal position can be formulated as a combinatorial problem, and the complexity of this grows as n factorial.

Here's a cartoon that illustrates that particular problem. In this cartoon, I only have three robots, starting at positions s1, s2, and s3, and there are three goal positions. The goal positions are not labeled. The first decision the robots have to make is who goes to which goal position. So now this is the labeled problem, where the robots have decided: the labels g1, g2, and g3 designate the goal position g1 assigned to robot 1, g2 to robot 2, and g3 to robot 3. Mathematically, this can be described by an assignment matrix filled with 0s and 1s. The (i, j) entry of this matrix is equal to 1 if robot i is assigned the goal destination j, and it's 0 if robot i is not assigned the goal destination j.

The second sub-problem is planning the trajectories. Here I've stacked all the state vectors into one big state vector, and the problem of planning trajectories that are safe must be formulated in this joint state space. As I mentioned earlier, the dimensionality of the state space grows as n. The safety constraint is expressed by an inequality: assuming the robots are circular or spherical, we require robots i and j to be separated by at least two times the radius. For all pairs of robots i and j, this inequality must hold, and it must hold at all instants of time. And then finally, as before, we would like these trajectories to be optimal. Once again, we assume that there's a functional that describes our measure of optimality, and we're looking for trajectories that minimize the value of this functional.

The key problem is to find the right assignment and to plan the trajectories, and both of these need to be done concurrently. One of them grows as n factorial; the other grows exponentially with n.
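To make this concrete, here is one compact way of writing the joint problem described above. This is a sketch in my own notation, assuming the cost functional is the total integrated squared velocity; it is not necessarily the exact formulation on the slide.

```latex
\min_{\phi,\; x(\cdot)} \; \sum_{i=1}^{n} \int_{0}^{T} \lVert \dot{x}_i(t) \rVert^{2} \, dt
\quad \text{subject to} \quad
\begin{cases}
x_i(0) = s_i, \qquad x_i(T) = \sum_{j} \phi_{ij}\, g_j, \\[2pt]
\lVert x_i(t) - x_j(t) \rVert \ge 2R \quad \text{for all } i \ne j \text{ and all } t \in [0, T], \\[2pt]
\phi_{ij} \in \{0, 1\}, \qquad \sum_{j} \phi_{ij} = 1, \qquad \sum_{i} \phi_{ij} \le 1.
\end{cases}
```

Here phi is the assignment matrix, x_i(t) is the trajectory of robot i, s_i and g_j are the start and goal positions, and R is the robot radius. The assignment (n factorial possibilities) and the trajectories (a space whose dimension grows with n) appear in the same optimization, which is why the two sub-problems are coupled.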
There are four key ideas that we use to solve this problem. First, as I mentioned before, we combine these two individually difficult problems: the problem of assigning goals to robots and the problem of determining the optimal trajectories. Second, we use the concept of a leader-follower network to formulate interactions between robots and their neighbors. Third, we embrace the paradigm of anonymity, where robots can be exchanged with one another, so that the specific identities of the robots don't matter. And fourth, we think about how robots should share information, enabling them to make decisions based only on local information. Let's see how each of these ideas works out in practice.

First, we want to be able to assign goals and plan collision-free trajectories. Let's consider a very simple example: we have two robots, a red robot and a blue robot, starting off from designated start positions with designated goal positions. If you make a bad assignment, then even if you plan the best trajectory possible, and in this case let's assume the straight line is the best trajectory, you might get a collision, because these straight-line trajectories intersect. If you were instead to think about the problem of finding the assignment and the optimal trajectories concurrently, you might come up with a different way of approaching the problem. So here was the assignment with the optimal trajectories, and this turned out to be harmful because the robots get in each other's way. What if you instead decided to go with a different assignment? Now the optimal trajectory, the straight-line trajectory, is one in which neither robot gets in the other's way. So you have two possible assignments to consider: phi 1, where the trajectories intersect, resulting in collisions, and phi 2, where the trajectories don't intersect. Instead of first finding the assignment and then solving for the optimal trajectory, we solve both of these problems concurrently.

Before solving these two problems concurrently, let's make an assumption: that the start and goal positions for the robots are not too close to each other. In other words, if I take any start position s sub i for the ith robot and any goal position g sub j for the jth robot, we want these positions to be separated by a distance that's proportional to R, the robot radius; the constant of proportionality here happens to be 2 times the square root of 2. With this assumption, we explore all possible assignments of robots to goal positions. You can verify that for three robots and three goals, there are six such assignments. Now we find the best possible trajectory, but this time we're minimizing the functional over all possible trajectories and all possible assignments. As a practical matter, you can go through each of these assignments, and there are six of them, and for each assignment find the best possible trajectory. It turns out that if you solve this problem, the resulting solution will be safe and you're guaranteed not to have collisions. This is a simple result, but it allows you to solve large-scale problems without explicitly enforcing collision avoidance, and that reduces the complexity of the planning problem.

Of course, it's not practical to explore all possible assignments for large numbers of robots. Fortunately, there's a well-known algorithm called the Hungarian algorithm that allows us to search through the set of assignments, provided we know the optimal trajectories for each assignment. So in the new algorithm, we first find the set of optimal trajectories from each start position to all goal positions. Note that each computation is for a single robot. For N robots and N goals, there are N squared such computations. We then find the assignment with the lowest total cost. The Hungarian algorithm, which is an order N cubed algorithm, gives us the solution for the best assignment. If you solve this concurrent assignment and planning of trajectories (CAPT) problem using the approach I just described, you will find that the time required to compute the solution does not grow exponentially, or as N factorial. In fact, there are two reference curves here, straight lines showing cubic growth and quadratic growth of computation time as a function of the number of robots. Over hundreds of experiments, the results we've obtained suggest a complexity somewhere between quadratic and cubic.
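As a rough illustration of this pipeline, here is a minimal Python sketch. It assumes straight-line trajectories whose cost is the squared Euclidean distance from start to goal, and it uses SciPy's linear_sum_assignment, a Hungarian-style solver, to find the minimum-cost assignment; the function names and the example data are mine, not from the lecture.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_goals(starts, goals):
    """Concurrent-assignment sketch: the cost of sending robot i to goal j is the
    squared straight-line distance; a Hungarian-style solver then returns the
    assignment with the lowest total cost (an O(N^3) computation)."""
    starts = np.asarray(starts, dtype=float)   # shape (N, d)
    goals = np.asarray(goals, dtype=float)     # shape (N, d)
    # N^2 single-robot costs: squared distance from every start to every goal.
    diffs = starts[:, None, :] - goals[None, :, :]
    cost = np.sum(diffs**2, axis=2)
    rows, cols = linear_sum_assignment(cost)
    return cols, cost[rows, cols].sum()        # goal index for each robot, total cost

# Example: three robots and three unlabeled goals in the plane.
starts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
goals = [(2.0, 2.0), (0.0, 2.0), (1.0, 2.0)]
assignment, total_cost = assign_goals(starts, goals)
print(assignment, total_cost)   # e.g. [1 2 0] with the minimum total squared distance
```

Under the separation assumption stated above (start and goal positions at least 2 times the square root of 2 times R apart), the straight-line trajectories corresponding to this minimum-cost assignment are the ones the result guarantees to be collision-free.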
Now, let's move to the second key idea, which is the concept of leader-follower networks. The idea is simple: each robot monitors the distance to its neighbors. More precisely, it looks at the separation as a vector. So if you take the robot in the middle, call it i, for every neighbor j it monitors the difference between xi and xj. Of course, there are many neighbors, and there are many robots. Each robot needs to regulate its position so that the separation from each of its neighbors is at a desired value. This is done using the control algorithms that were described earlier; a minimal sketch of such a separation-regulating controller appears after the examples below. Now, in addition to this, from a practical standpoint you may have to have designated leaders in the team that actually use absolute information, and not just relative information with respect to their neighbors. You may have one leader, two leaders, or many leaders. But without at least one leader, you will not have an absolute sense of where the team is, or where the formation is.

This idea is demonstrated in this experiment, where an external user is manipulating the single designated leader. Because all the robots monitor relative separations, they are able to regulate them and follow the leader as the leader is being manipulated by the human user. Note again that in this case the human's actions really don't matter. What really matters is the robots talking to each other and monitoring their separations. At every instant they're trying to regulate their positions so that these separations are at specified values.

The third key idea is the paradigm of anonymity. Here, the robots have been asked to maintain a circular formation, so they try to maintain the desired separations in order to form a circle. As you can see in this video, as robots are placed into the formation or removed from it, their presence is detected by their neighbors, and the neighbors then adjust their positions to ensure that the relative separations are maintained. They are agnostic to who their neighbors are; they only react to the presence of the neighbors and, again, to the physical positions of the neighbors relative to where they are.

If you combine some of these ideas, you can require that the shape of the formation be specified and changed as a function of time. So in this video, you actually see the shape of the formation change, and also the position of the group change as a function of time. Robots start off in a rectangular formation, change into an ellipse, flatten out into a straight line, before deforming back into an ellipse and then into a circular shape. The same idea can be extended to more complex shapes. Here you see a 20-robot team, and then a 16-robot team that changes its shape: it's able to go from a three-dimensional shape to a two-dimensional shape, and then adapt the shape as it goes through this window. And here you see the H-shaped formation, where the robots are again very precisely controlling their positions even as they come within inches of their neighbors.
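To make the separation-regulation idea concrete, here is the minimal controller sketch promised above. It is not the controller used in these experiments: it assumes simple single-integrator robots, a proportional control law, and illustrative names and gains.

```python
import numpy as np

def formation_step(positions, neighbors, desired_offsets, leader_ids, leader_targets,
                   k_rel=1.0, k_abs=1.0, dt=0.05):
    """One control update for a team regulating relative separations.

    positions: dict robot_id -> np.array position
    neighbors: dict robot_id -> list of neighbor ids (local information only)
    desired_offsets: dict (i, j) -> desired separation vector x_i - x_j
    leader_ids / leader_targets: only leaders use absolute position information
    """
    new_positions = {}
    for i, x_i in positions.items():
        u = np.zeros_like(x_i)
        # Followers: react only to relative separations from current neighbors.
        for j in neighbors[i]:
            error = (positions[j] + desired_offsets[(i, j)]) - x_i
            u += k_rel * error
        # Leaders additionally anchor the formation in the world frame.
        if i in leader_ids:
            u += k_abs * (leader_targets[i] - x_i)
        new_positions[i] = x_i + dt * u   # simple single-integrator model
    return new_positions
```

Because the neighbor lists are re-read at every step, a robot that is added to or removed from the formation simply appears in or disappears from its neighbors' lists, which is one way to reflect the anonymity described above; only the designated leaders ever use absolute position information.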
So this technology is useful as we think about practical tasks such as creating robot first responders. Here you see a team of outdoor robots. They're able to react to, for example, a 911 call, and they do that by positioning themselves around the building from which they think the 911 call has originated. Robots like this could also respond to active shooters without humans telling them where to go. The minute a gunshot occurs, you can imagine these robots taking up positions outside the building where they think the source of the gunshot is. This is shown on the top left: you see the robots coordinate their motions to take up positions, using the concurrent assignment and planning algorithm. They use the paradigm of anonymity: they really don't worry about the identities of their neighbors, and they're able to take up positions to get information about the building. On the top right, you see the video from their cameras being assembled into a mosaic. And at the bottom, you see a three-dimensional reconstruction of the scene being developed by the robots.

So far we've been talking about tasks which require the robots to coordinate with each other to perform the task more efficiently. In each of the examples we've looked at thus far, an individual robot might have been able to perform the task on its own; the presence of multiple robots results in greater efficiency, and better coordination guarantees that the efficiency improves. What about tasks like this, where the robots actually cannot perform the task on their own? Imagine payloads that are too heavy for individual robots to carry. Here you can see robots cooperate to pick up heavy objects and then transport them. How do we get robots to cooperate in tasks like these?

Here's another example, where a flying robot must collaborate with a ground robot in order to build a map of the interior of a building. How do robots like these collaborate to autonomously create three-dimensional maps? Remember, an aerial robot can view the scene from a vantage point that is very different from what a small ground robot can. Likewise, the ground robot can position its sensors underneath tables to get viewpoints that the aerial robot cannot. We achieve collaboration by thinking about this task in terms of information. The robots are driven by the common goal of reducing uncertainty about the map. At every step, they think about the information they can gain by positioning their sensors to get new information.

Here's an equation that describes the essential strategy being used by the robots. In this equation, the robots are trying to determine their paths. The decision variable designates the path of the robot, and the path is chosen from a set of available paths: those that conform to the environment, are safe, and respect the dynamics of the robot. In the numerator you see the quantity which represents information gain, so the robot is trying to find the path that maximizes the information gain. The information gain is defined in terms of two random variables. The first random variable is the map. The map is a complex object, but it consists of many different elements, each representing features in the environment. It's a random variable because every one of these features is a random variable: there's uncertainty associated with every one of them. The second random variable we want to consider is the measurement. Every time the robot moves, it generates measurements, and what the robot really needs to do is choose its motion so that the measurements yield maximum information about the map. In fact, these measurements should reduce the uncertainty in the map. And then finally, we want to minimize the time taken to gain that information, so the duration of the trajectory also enters the objective.
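One way to write the objective just described is sketched below, as information gain per unit travel time maximized over the admissible paths. The symbols are my own notation (tau for a candidate path, m for the map, z_tau for the measurements collected along the path, T(tau) for its duration), not necessarily those on the slide.

```latex
\tau^{*} \;=\; \arg\max_{\tau \,\in\, \mathcal{T}_{\text{safe, feasible}}} \;
\frac{I\!\left(m \,;\, z_{\tau}\right)}{T(\tau)}
```

Here I(m; z_tau) is the mutual information between the map and the measurements the robot expects to gather along the path, so the chosen path is the one expected to reduce map uncertainty most quickly.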
On the bottom right, you see the robot reasoning about the multiple trajectories it has at its disposal, and on the center right, you see the sensor models the robot is using to reason about the uncertainty and about its means to reduce that uncertainty. So here's the small ground robot that we have in our lab. In this video, you see what the robot is seeing through its cameras. The robot starts off from a location in the environment, and of course it doesn't know where it is relative to its surroundings. But based on what it sees, and by integrating information from multiple views, it starts building a map. This video is sped up 20 times, but you get the picture. At every step, the robot is reasoning about the possible trajectories it can execute and the information it is likely to gain if it were to execute one of them, and it assimilates all of this information into a three-dimensional map. Of course, this strategy, when used with a flying robot, will yield different information. Here's the same thing being done by a flying robot. When it takes off, it reasons about the different trajectories available to it; those trajectories are different from those available to a ground robot, and the kinds of information it can get are also different. But together with the ground robot, they collaborate and ensure that they can cover the entire environment and build a complete three-dimensional map.

Techniques like these can enable collaboration between aerial and ground robots. In this video you see an aerial and a ground robot collaborating to build a map of a multistory building that has been damaged by an earthquake. The robots travel as a team, but whenever one of the robots gets obstructed, as in this example because of a collapsed doorway, the aerial robot can take off and traverse areas that are unavailable to the ground robot. In the process, they share information, and they are able to build a complete three-dimensional map. Maps like these can be built autonomously, without any intervention by a human operator. This is a map of a multi-room, single-story building, and here's another map of a multistory building. Swarms of robots can build maps like these very quickly and really change the way we think about emergency response.