- [Russ] Let's say your monolithic application has grown to the point where it's a little too big and is starting to cause you some headaches with its current implementation. We are now considering a migration to a microservices architecture. We have spoken about the benefits of microservices. Microservices can be scaled and deployed independently of each other. Smaller teams can be built around building and maintaining a microservice. Teams can choose the languages that best suit the problems they're solving. Each team can use the benefits of containerization for deployment and isolation. That all sounds great, but we do now need to consider some networking requirements. My monolithic application made calls to functionality in one big codebase. Now that I have services talking to services, those calls have been replaced with network communications, and that brings some extra challenges for a microservices architecture.

A service mesh is application-level networking that makes it easy for your services to talk to each other. A service mesh also builds in a lot of observability for your applications, along with some high availability features. AWS App Mesh is a managed service mesh from AWS that does all of this and more. None of the challenges with microservice networking are insurmountable. Let's understand my requirements before we start talking about a solution.

We have multiple smaller services that now comprise a new application. We need to design and manage traffic policies within the application. I'll want some documentation to know which services are dependent on others. I'll want to build some observability into communications. If service A is talking to service B, how is it performing? What is my average latency in that communication? Am I seeing any errors in communications? My services will be hosted in multiple containers that can fail and run out of resources. I want to be able to detect this and route around failure.

Related to this is my ability to deploy new versions of a service. Service B is getting upgraded. When it completes, I'll need to reconfigure, so all the services previously talking to service B know to use the new version. Or maybe I'll want to route a percentage of traffic to the new deployment for a canary-style deployment. We can run the new service with 10 percent of production traffic for a week and gather some metrics to see how it's performing. Once we're happy with the new service, I want to switch it to 100 percent.

I could build all of these features into my services, but wouldn't it be nicer if the service mesh could handle this for me, so I can write services that focus on the business logic of my application? A service mesh can do exactly this. We implement it with a sidecar proxy. If the network calls between services are routed via proxies, the service mesh can implement the monitoring, routing, discovery, and deployment features, and your application code doesn't need to know anything about it. Using a proxy is also language independent: if your language of choice can make network requests, then it can work with a proxy. As a service developer, I no longer worry about implementing something like retry logic. My service just talks to another service, and the communication goes through a proxy where retry logic is implemented. I'm convinced my application would really benefit from a service mesh, and I've decided to use AWS App Mesh for a managed solution.
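To make that canary idea concrete, here's a minimal sketch of a weighted App Mesh route using the boto3 SDK in Python. The names used here (demo-mesh, serviceB-router, serviceB-v1, serviceB-v2) are hypothetical placeholders for illustration, not resources from this course.

```python
import boto3

# Hypothetical names for illustration; your mesh and node names will differ.
appmesh = boto3.client("appmesh")

# Shift 10 percent of traffic to the new version of service B (the canary),
# leaving 90 percent on the current version.
appmesh.create_route(
    meshName="demo-mesh",
    virtualRouterName="serviceB-router",
    routeName="serviceB-canary",
    spec={
        "httpRoute": {
            "match": {"prefix": "/"},  # match all requests to service B
            "action": {
                "weightedTargets": [
                    {"virtualNode": "serviceB-v1", "weight": 90},
                    {"virtualNode": "serviceB-v2", "weight": 10},
                ]
            },
        }
    },
)
```

Once the canary has proven itself, you would update the same route to send 100 percent of the weight to the new node. No application code changes, because the proxies pick up the new configuration from the control plane.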
Your interaction with AWS App Mesh will happen in two places: Envoy Proxy, the sidecar proxy deployed alongside your service containers, and the App Mesh control plane. Envoy Proxy is a cloud-native, open-source, high-performance service proxy. Envoy was accepted into the Cloud Native Computing Foundation in 2017. Envoy is an established and mature proxy, very well suited to building a service mesh. When you upgrade your application to use AWS App Mesh, you add an Envoy sidecar proxy to your ECS tasks or Kubernetes Pods.

Who configures this proxy I have sitting with all of my services? This is exactly what the control plane of App Mesh does. You design how you want your service mesh to look, and the App Mesh control plane converts this into configuration and deploys that configuration to all of your service proxies. The control plane builds and distributes the initial configuration for your proxies, and it also monitors your services for any dynamic state changes. For example, a health check discovers a failed node in a service, and the control plane builds a new configuration to route around the failure and sends it out to the proxies.

App Mesh uses the proxies to build observability into your service mesh. Observability is made possible with App Mesh logging, tracing, and metrics. Logging: we can access logs on the Envoy Proxies to see all the requests going into an App Mesh virtual node. Tracing: my monolithic application gave me full stack traces when an exception occurred. Now, a single request in my microservices application can travel through many services, so having a full trace of the request is critical for the debugging I want to do. App Mesh integrates with tracing products like AWS X-Ray, Jaeger, and Datadog for an aggregated trace of every hop between services. Metrics: Envoy tracks a lot of metrics built from the communications going into and out of the proxy. We use this to gather metrics like connection count, bytes in, bytes out, requests, response time, and HTTP error codes. Envoy Proxy metrics can be delivered to Amazon CloudWatch, Prometheus, and Datadog.

I need to describe my service mesh requirements to App Mesh, and I do this by creating App Mesh resources. Let's start with the first resource: a mesh. A mesh is simply the logical boundary for network traffic between your services. You describe your service mesh by creating resources inside a mesh. A mesh has a name and a setting for the egress filter. An egress filter allows or denies traffic to non-AWS resources outside the mesh. A virtual node is a pointer to a task group, for example, an ECS service or a Kubernetes deployment. You configure a listener for any inbound traffic that will be coming into the node, and if the node communicates with any services in your mesh, you define backends. When you are configuring a proxy that is inside your ECS service or Kubernetes deployment, you configure which node it belongs to. A virtual service is a representation of a real service provided in your mesh. The service can be a direct link to a virtual node, or it can be connected to a virtual router. A virtual router contains routes that direct incoming traffic to nodes using rules, like weights and matching on URL paths.

Hopefully, you now have a better idea of the advantages of building a service mesh for your microservices. I'll be back in the next video to walk through a demo where we can see App Mesh resources configured for a simple service mesh.
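If you'd like a head start before the demo, here is a minimal sketch of how those resources might be created with boto3. All names, ports, and DNS hostnames below are assumptions for illustration only; they are not the values used in the demo.

```python
import boto3

appmesh = boto3.client("appmesh")

# A mesh: the logical boundary for network traffic between your services.
appmesh.create_mesh(
    meshName="demo-mesh",
    # Deny traffic to non-AWS resources outside the mesh.
    spec={"egressFilter": {"type": "DROP_ALL"}},
)

# A virtual node pointing at a task group, with a listener for inbound
# traffic and a backend for the service it talks to.
appmesh.create_virtual_node(
    meshName="demo-mesh",
    virtualNodeName="serviceA",
    spec={
        "listeners": [{"portMapping": {"port": 8080, "protocol": "http"}}],
        "backends": [{"virtualService": {"virtualServiceName": "serviceB.local"}}],
        "serviceDiscovery": {"dns": {"hostname": "serviceA.local"}},
    },
)

# A virtual router; routes attached to it direct incoming traffic to nodes.
appmesh.create_virtual_router(
    meshName="demo-mesh",
    virtualRouterName="serviceB-router",
    spec={"listeners": [{"portMapping": {"port": 8080, "protocol": "http"}}]},
)

# A virtual service provided by the router, so consumers address the
# service name rather than a specific node.
appmesh.create_virtual_service(
    meshName="demo-mesh",
    virtualServiceName="serviceB.local",
    spec={"provider": {"virtualRouter": {"virtualRouterName": "serviceB-router"}}},
)
```

Each call maps directly to one of the resources described above: the mesh, a virtual node, a virtual router, and a virtual service.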