Now, let's talk about network connectivity. Network connectivity is quite an interesting one as well when it comes to GKE On-Prem. With GKE On-Prem, you have two modes: one of them is Island mode, and the other one is flat IP mode. With Island mode, what you would find is that the pod IP addresses are not routable in your data center, so there is an isolation: there is a cluster, and it is completely isolated from the rest of your on-premises environment. Your mainframe or other virtual machines or databases cannot reach the pods directly. They have to use an endpoint, a Service in Kubernetes as you're all familiar with, so you expose the pods through a Service endpoint. The pod IP addresses themselves are not routable. The other option, which is still being worked on and is not GA I believe, is flat IP mode, which roughly equates to a VPC-native cluster: your pods are routable in your data center, and they're all reachable as well.

So let's talk about Island mode for a moment, because it has a bit more interesting infrastructure to discuss. Here you can see two user clusters, and the communication between the different virtual machines is done by BGP internally, inside your on-premises environment, inside the cluster. The pods, again, are not reachable directly from any workloads in your on-premises environment; you have to go through an endpoint, which is a Service in Kubernetes. There is direct pod-to-pod communication inside the cluster, so within the cluster pods can communicate and reach each other, as we all know and love, but outside of the cluster it doesn't work. There is a full node-to-node mesh. The pod CIDR block is non-routable, and the node IP addresses must be routable, of course: the IPs that you allocate to the nodes must be routable inside your data center, while the pod IPs do not have to be. That's the general idea here.
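Since the pod IPs are not routable in Island mode, anything outside the cluster has to reach the pods through a Service on a routable address. As a minimal sketch (the names `orders-api` and `app: orders`, and all the port numbers, are hypothetical), a NodePort Service exposing pods on the routable node IPs could look like this:

```yaml
# Hypothetical Service exposing non-routable pods to the data center.
# The node IPs are routable, so a NodePort (or a load balancer in front
# of it) gives external workloads a reachable endpoint.
apiVersion: v1
kind: Service
metadata:
  name: orders-api
spec:
  type: NodePort          # reachable at <routable-node-IP>:<nodePort>
  selector:
    app: orders           # the pods behind this endpoint
  ports:
    - port: 80            # Service port inside the cluster
      targetPort: 8080    # container port on the pods
      nodePort: 30080     # port opened on every (routable) node IP
```

A mainframe or database on the data-center network would then talk to `<node-IP>:30080` rather than to any pod IP directly.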
So if you want to use two different clusters and you want communication between the pods, they will have to go through a Service, as we're all familiar with in Kubernetes.

Okay, so: control plane hybrid connection. I've separated the connection for the control plane and the data plane. The way the control plane connection works is that you have a GKE Connect agent installed in your cluster, and that is over here at the bottom. You can see that you have GKE On-Prem and you have the API server, which is also called the control plane or the master, and that communicates with the GKE Connect agent. The Connect agent quite cleverly uses an outbound connection over TLS, so there is no need for you to open firewall rules or anything in your on-premises network in order to allow connections from us. The connection is established outbound, so it can traverse NATs, it can traverse those firewall rules, and you don't have to open any ports for Google's inbound communication; we initiate the connection from that agent. It is really helpful for a lot of big enterprises. There are basically no public IP addresses required for the Connect agent: it just creates that outbound connection, and it is authenticated and encrypted, of course, with a service account and all the rest of it. You will be able to see how it works in the lab. And users can interact with that Connect agent: when you go into the GKE Workloads web page, which is the GKE dashboard, it communicates all the way down here with your credentials.

So let's see how that works: user admin access. I'm a user and I have a connection to a cluster somewhere in your on-premises environment. I need to authenticate my connection with my credentials: maybe it is a username and password, maybe it's OpenID, or anything like that. I create that connection, and then in the dashboard I am able to see only the things that I am allowed to see.
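The Connect agent described above is typically deployed when you register the cluster. As a hedged sketch (the cluster name, kubeconfig context, and key-file path are made-up placeholders, and the exact flags may differ between gcloud versions), registration could look roughly like this:

```shell
# Hypothetical registration of an on-prem cluster with GKE Connect.
# The Connect agent deployed by this command dials out to Google over TLS,
# which is why no inbound firewall rules or public IPs are needed on-prem.
gcloud container hub memberships register onprem-user-cluster-1 \
    --context=onprem-user-cluster-1-context \
    --service-account-key-file=./connect-sa-key.json
```

The service account key authenticates the agent to Google; after registration the cluster appears in the GKE dashboard alongside your cloud clusters.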
I'm not able to see anything that I don't have RBAC access to. So RBAC, again, is basically what provides that visibility: if I'm a user and I can only see namespace B, then I can only see namespace B in the dashboard as well. That is what it allows you to do: it provides you with this transparency and fine-grained control, and you also have Kubernetes audit logs from everywhere.

So, data plane hybrid connection. This is about the data plane now: the workloads themselves, the connection that carries your business logic, the sensitive stuff that you have to secure in some way. We talked about the connection between your on-premises environment and the cloud, and this can be secured in three different ways: you can use Cloud VPN, a Partner Interconnect, or a Dedicated Interconnect for the workload data, the actual connection from your business logic. Partner Interconnect and Dedicated Interconnect are physical connections into Google's data centers: one is through a partner, the other one is directly into Google. The questions that you have to ask yourself are how much bandwidth you need, and whether you have physical access to Google's data centers in terms of location. These are the two factors to think about when choosing between a Partner Interconnect and connecting directly to Google. For instance, if you have a data center right next to Google but you only need a 100-megabit-per-second connection, you will go with the Partner Interconnect, because it wouldn't make economic sense for you to create a direct connection to Google. That is something you want to keep in mind when you choose such a connection.
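The namespace-scoped visibility described above is plain Kubernetes RBAC. As a sketch (the user `jane`, the namespace `namespace-b`, and the role names are all hypothetical), restricting a user to read-only access in a single namespace could look like this:

```yaml
# Hypothetical RBAC: "jane" can only view resources in namespace-b,
# so namespace-b is all she sees in the dashboard.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: viewer
  namespace: namespace-b
rules:
  - apiGroups: ["", "apps"]                       # core + apps API groups
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch"]               # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jane-viewer
  namespace: namespace-b                          # binding is namespace-scoped
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: viewer
  apiGroup: rbac.authorization.k8s.io
```

Because the Role and RoleBinding are namespace-scoped, requests from this user against any other namespace are denied, which is exactly the dashboard behavior described above.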
So, to conclude this module: we've been talking about the container orchestration layer and the containers themselves, and how they work across on-premises and the public cloud. The distinction that I would like to make here is that these two clusters, one on-premises and one in the cloud, are not one continuous cluster. They are two isolated clusters that run in two different places, with two different control planes, in two different environments. The control plane data that egresses between them is the connection that you can see here. The data plane, the workload information that you may have to share across them, is something that we will talk about later in the talk.