An application and its dependencies are called an image. A container is simply a running instance of an image. By building software into container images, developers can easily package and ship an application without worrying about the system it will be running on. You need software to build container images and to run them. Docker is one tool that does both. Docker is an open-source technology that lets you create and run applications in containers, but it doesn't offer a way to orchestrate those applications at scale, as Kubernetes does. In this course, we'll use Google's Cloud Build to create Docker-formatted container images.

Containers are not an intrinsic, primitive feature of Linux. Instead, their power to isolate workloads is derived from the composition of several technologies. One foundation is the Linux process. Each Linux process has its own virtual memory address space, separate from all others, and Linux processes can be rapidly created and destroyed. Containers use Linux namespaces to control what an application can see: process ID numbers, directory trees, IP addresses, and more. By the way, Linux namespaces are not the same thing as Kubernetes namespaces, which you'll learn more about later in this course. Containers use Linux cgroups to control what an application can use: its maximum consumption of CPU time, memory, I/O bandwidth, and other resources. Finally, containers use union file systems to efficiently encapsulate applications and their dependencies into a set of clean, minimal layers.

Now let's see how that works. A container image is structured in layers. The tool you use to build the image reads instructions from a file called the container manifest. In the case of a Docker-formatted container image, that's called a Dockerfile. Each instruction in the Dockerfile specifies a layer inside the container image. Each layer is read-only. When a container runs from this image, it will also have a writable, ephemeral, top-most layer.

Let's take a look at a simple Dockerfile. This Dockerfile contains four commands, each of which creates a layer. At the end of this discussion, I'll explain why this Dockerfile is a little oversimplified for modern use. The FROM statement starts out by creating a base layer pulled from a public repository. This one happens to be the Ubuntu Linux runtime environment of a specific version. The COPY command adds a new layer containing some files copied in from your build tool's current directory. The RUN command builds your application using the make command and puts the result of the build into a third layer. Finally, the last layer specifies what command to run within the container when it's launched. Each layer is only a set of differences from the layer before it. When you write a Dockerfile, you should organize the layers from those least likely to change to those most likely to change.

By the way, I promised that I'd explain how the Dockerfile example you saw here is oversimplified. These days, the best practice is not to build your application in the very same container image that you ship and run. After all, your build tools are, at best, just clutter in a deployed container and, at worst, an additional attack surface. Today, application packaging relies on a multi-stage build process, in which one container builds the final executable image and a separate container receives only what's needed to actually run the application. Fortunately for us, the tools that we use support this practice.
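To make those four layers concrete, here is a minimal sketch of such a Dockerfile. The Ubuntu version, paths, and application names are illustrative placeholders (and the base image is assumed to already contain make and python), not the exact file used in the course:

    # Base layer: an Ubuntu Linux runtime environment of a specific version
    FROM ubuntu:18.04
    # Second layer: files copied in from the build tool's current directory
    COPY . /app
    # Third layer: the output of building the application with make
    RUN make /app
    # Final layer: the command to run inside the container when it's launched
    CMD python /app/app.py

And here is a rough sketch of the multi-stage variant just mentioned, in which a build stage compiles the application and a separate run stage receives only the finished artifact; the stage name, artifact path, and binary name are hypothetical:

    # Build stage: contains the compilers and other build tools
    FROM ubuntu:18.04 AS build
    COPY . /app
    RUN make /app

    # Run stage: receives only what's needed to actually run the application
    FROM ubuntu:18.04
    COPY --from=build /app/bin/myapp /usr/local/bin/myapp
    CMD ["/usr/local/bin/myapp"]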
When you launch a new container from an image, the container runtime adds a new writable layer on top of the underlying layers. This layer is often called the container layer. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin, writable container layer. It's ephemeral: when the container is deleted, the contents of this writable layer are lost forever. The underlying container image itself remains unchanged. This fact about containers has an implication for your application design: whenever you want to store data permanently, you must do so somewhere other than a running container. You'll learn about the several choices you have later in this specialization.

Because each container has its own writable container layer, and all changes are stored in this layer, multiple containers can share access to the same underlying image and yet have their own data state. The diagram here shows multiple containers sharing the same Ubuntu 15.04 image. Because each layer is only a set of differences from the layer before it, you get smaller images. For example, your base application image may be 200 megabytes, but the difference to the next point release might be only 200 kilobytes. When you build a container, instead of copying the whole image, it creates a layer with just the differences. When you run a container, the container runtime pulls down the layers it needs. When you update, you only need to copy the differences. This is much faster than running a new virtual machine.

It's very common to use publicly available, open-source container images as a base for your own images, or for unmodified use. For example, you've already seen the Ubuntu container image, which provides an Ubuntu Linux environment inside of a container. Alpine is a popular Linux environment in a container, noted for being very, very small. The nginx web server is frequently used in its container packaging. Google maintains Container Registry, gcr.io. This registry contains many public, open-source images, and Google Cloud customers also use it to store their own private images in a way that integrates well with Cloud IAM. Because Container Registry is integrated with Cloud IAM, you can, for example, use it to store images that aren't public; instead, they're private to your project. You can also find container images in other public repositories: Docker Hub Registry, GitLab, and others.

The open-source Docker command is a popular way to build your own container images. It's widely known and widely available. One downside, however, of building containers with the Docker command is that you must trust the computer that you do your builds on. Google provides a managed service for building containers that's also integrated with Cloud IAM. This service is called Cloud Build, and we'll use it in this course. Cloud Build can retrieve the source code for your builds from a variety of different storage locations: Cloud Source Repositories, Cloud Storage (which is GCP's object storage service), or Git-compatible repositories like GitHub and Bitbucket. To generate a build with Cloud Build, you define a series of steps. For example, you can configure build steps to fetch dependencies, compile source code, run integration tests, or use tools such as Docker, Gradle, and Maven. Each build step in Cloud Build runs in a Docker container.
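For a sense of what those build steps can look like, here is a minimal, hypothetical Cloud Build configuration. The file name cloudbuild.yaml, the image name my-app, and the single build step are illustrative assumptions, not taken from the course:

    # cloudbuild.yaml (illustrative sketch)
    steps:
      # Each step runs in its own container; this one uses the Docker builder
      # to build an image from the Dockerfile in the submitted source directory.
      - name: 'gcr.io/cloud-builders/docker'
        args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
    # Images listed here are pushed to Container Registry when the build succeeds.
    images:
      - 'gcr.io/$PROJECT_ID/my-app'

Additional steps could fetch dependencies, run tests, or invoke tools such as Gradle or Maven, each running in its own builder container.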
Then Cloud Build can deliver your newly built images to various execution environments: not only GKE, but also App Engine and Cloud Functions.