Let's get started by designing Google Cloud networks and load balancers. Google runs a worldwide network that connects regions all over the world. You can use this high-bandwidth infrastructure to design your cloud networks to meet your requirements, such as location, number of users, scalability, fault tolerance, and latency.

Let's take a closer look at Google Cloud's network. This map represents Google Cloud's reach. At a high level, Google Cloud consists of regions, which are the icons in blue; points of presence, or PoPs, which are the dots in gray; a global private network, which is represented by the blue lines; and services. A region is a specific geographical location where you can run your resources. This map shows several regions that are currently operating, as well as future regions and their zones. As of this recording, there are 21 regions and 64 zones.

The PoPs are where Google's network is connected to the rest of the internet. Google Cloud can bring its traffic closer to its peers because it operates an extensive global network of interconnection points. This reduces costs and provides users with a better experience. The network connects regions and PoPs, and is composed of a global network of fiber-optic cables with several submarine cable investments.

In Google Cloud, VPC networks are global, and you can either create auto mode networks that have one subnet per region, or create your own custom mode network where you specify which regions to create subnets in. Resources across regions can communicate using their internal IP addresses without any added interconnect. For example, the diagram on the right shows two subnets in different regions with a server on each subnet. They can communicate with each other using their internal IP addresses because they are connected to the same VPC network. Selecting which regions to create subnets in depends on your requirements.
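As a sketch of the two network modes just described, here is how they might be created with the gcloud CLI; the network and subnet names, the region, and the IP range are placeholders, not values from the course:

```shell
# Auto mode: one subnet is created automatically in every region,
# each with a predetermined range carved out of 10.128.0.0/9.
gcloud compute networks create auto-net --subnet-mode=auto

# Custom mode: no subnets exist until you define them, so you
# choose the regions and internal IP ranges yourself.
gcloud compute networks create custom-net --subnet-mode=custom
gcloud compute networks subnets create us-subnet \
    --network=custom-net \
    --region=us-central1 \
    --range=10.0.1.0/24
```

Because both networks are global, VMs placed on subnets of the same network in different regions can reach each other on their internal IPs without any extra interconnect.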
For example, if you are a global company, you will most likely create subnetworks in regions across the world. If your users are within a particular region, it may be suitable to select just one subnet in the region closest to those users, and maybe a backup region close by. Also, you can have multiple networks per project. These networks are just a collection of regional subnetworks, or subnets.

To create custom subnets, you specify the region and the internal IP address range, as illustrated in the screenshots on the right. The IP ranges of the subnets don't need to be derived from a single CIDR block, but they cannot overlap with other subnets of the same VPC network. This applies to both primary and secondary ranges. Secondary ranges allow you to define alias IP addresses. Also, you can expand the primary IP address space of any subnet without any workload shutdown or downtime. Once you've defined your subnets, machines in the same VPC network can communicate with each other through their internal IP addresses, regardless of the subnets they're connected to.

Now, a single VM can have multiple network interfaces connecting to different VPC networks. This graphic illustrates an example of a Compute Engine instance connected to four different networks, covering production, test, infrastructure, and an outbound network. A VM must have at least one network interface, but can have up to eight, depending on the instance type and the number of vCPUs. A general rule is that with more vCPUs, more network interfaces are possible. All of the network interfaces must be created when the instance is created, and each interface must be attached to a different network.

Shared VPC allows an organization to connect resources from multiple projects to a common VPC network. This allows the resources to communicate with each other securely and efficiently using internal IPs from that network.
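The subnet and multi-interface features above can be sketched with gcloud as follows. All names and IP ranges are illustrative, and the second network (`test-net` with subnet `test-subnet`) is assumed to already exist:

```shell
# Create a custom subnet with a secondary range, which can be
# used for alias IP addresses.
gcloud compute networks subnets create app-subnet \
    --network=custom-net \
    --region=europe-west1 \
    --range=10.0.2.0/24 \
    --secondary-range=alias-range=192.168.0.0/20

# Expand the primary range in place, with no workload downtime.
# The new prefix length must describe a larger block (/23 > /24).
gcloud compute networks subnets expand-ip-range app-subnet \
    --region=europe-west1 \
    --prefix-length=23

# Create a VM with two network interfaces, each attached to a
# different VPC network. All interfaces must be declared here,
# at instance creation time; they cannot be added later.
gcloud compute instances create multi-nic-vm \
    --zone=europe-west1-b \
    --network-interface=subnet=app-subnet \
    --network-interface=network=test-net,subnet=test-subnet
```

Note that the maximum number of interfaces (up to eight) depends on the machine type's vCPU count, so the instance type chosen for a multi-NIC VM matters.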
This graphic shows a scenario where a shared VPC is used by three other projects, namely service projects A, B, and C. Each of these projects has a VM instance that is attached to the shared VPC. Shared VPC is a centralized approach to multi-project networking, because security and network policy administration occurs in a single designated VPC network. This allows network administrator rights to be removed from developers, so that they can focus on what they do best. Meanwhile, organization network administrators maintain control of resources such as subnets, firewall rules, and routes, while delegating the control of creating resources such as instances to service project administrators or developers.
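A minimal Shared VPC setup along these lines might look like the following sketch; the project IDs, subnet name, region, and user email are hypothetical, and running these commands requires organization-level Shared VPC admin permissions:

```shell
# Designate the host project whose VPC network will be shared.
gcloud compute shared-vpc enable host-project-id

# Attach a service project to the host project, so its resources
# can use the shared network.
gcloud compute shared-vpc associated-projects add service-project-a \
    --host-project=host-project-id

# Delegate: let a service project developer create instances on a
# specific shared subnet, without any broader network admin rights.
gcloud compute networks subnets add-iam-policy-binding shared-subnet \
    --region=us-central1 \
    --member=user:developer@example.com \
    --role=roles/compute.networkUser
```

This mirrors the division of responsibility described above: subnets, firewall rules, and routes stay under the host project's network administrators, while service project developers only receive the `compute.networkUser` role needed to place instances on the shared subnets.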