Welcome to the Cloud Architecture Components: Part 1 module. By the end of this module, you should be able to describe the abstraction layer in a cloud architecture.

Cloud infrastructure is a term that describes the components required to support cloud computing. The five typical components of a cloud infrastructure are explained here.

Hardware: Physical resources such as servers, processing units, graphics processing units (or GPUs), power supplies, memory, and more.

Storage: The storage hardware stack abstracted into a cloud resource, so that adding or removing storage volumes or drives does not require manually provisioning servers each time. Common cloud storage formats include block storage, object storage, and file storage. In block storage, data is stored as fixed-size blocks across storage systems; because individual blocks can be updated in place, it is most suitable for data that changes frequently, such as databases. In object storage, data is stored as whole data objects, each uniquely identified by an identifier and stored together with its metadata; because objects are written and replaced as a whole, it is most suitable for static data assets such as images, media, and backups. File storage is associated with network-attached storage (or NAS) and is configured with a single data path.

Network: Network equipment such as physical wires, switches, routers, firewalls, load balancers, and more that provide the underlying network connectivity. Virtual networks are created over the physical resources to provide network connectivity between cloud services and resources.

Virtualization: An abstraction of hardware resources that creates a pool of virtual resources available for use by cloud services.

Management and Automation Tools: Tools, controls, and software that define, configure, run, and manage the cloud resources.

An abstraction layer in software enables you to create virtual, isolated resources over physical hardware such as compute, storage, and network. Processes, disk and file system usage, user management, and networking are segregated between resources.
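The distinction between block and object storage described above can be sketched in a few lines of Python. This is a conceptual illustration only and assumes nothing about any real cloud API; the BlockDevice and ObjectStore classes and their methods are invented for this example.

```python
import uuid

# Conceptual sketch only: these classes are invented for illustration,
# not taken from any real cloud storage API.

class BlockDevice:
    """Block storage: fixed-size blocks addressed by number, updatable in place."""
    def __init__(self, num_blocks, block_size=512):
        self.block_size = block_size
        self.blocks = [bytes(block_size) for _ in range(num_blocks)]

    def write_block(self, index, data):
        # Overwriting a single block updates data in place --
        # which is why block storage suits frequently changing data.
        self.blocks[index] = data.ljust(self.block_size, b"\x00")

    def read_block(self, index):
        return self.blocks[index]


class ObjectStore:
    """Object storage: whole objects stored with metadata under a unique ID."""
    def __init__(self):
        self._objects = {}

    def put(self, data, metadata):
        object_id = str(uuid.uuid4())  # unique identifier for the object
        # Objects are written and replaced as a whole, never patched in place --
        # which is why object storage suits static assets like images or backups.
        self._objects[object_id] = {"data": data, "metadata": metadata}
        return object_id

    def get(self, object_id):
        entry = self._objects[object_id]
        return entry["data"], entry["metadata"]


dev = BlockDevice(num_blocks=4)
dev.write_block(0, b"hello")
print(dev.read_block(0)[:5])  # b'hello'

store = ObjectStore()
oid = store.put(b"<jpeg bytes>", {"content-type": "image/jpeg"})
data, meta = store.get(oid)
print(meta["content-type"])  # image/jpeg
```

The design difference in the sketch mirrors the real trade-off: block devices expose small addressable units for in-place updates, while object stores expose only whole-object put and get operations keyed by an identifier.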
Virtualization abstracts away hardware and provisions a pool of virtual resources for disparate operating systems to run upon, while containerization abstracts away the operating system and enables you to run isolated applications.

Virtualization is an abstraction layer in software that enables you to create virtual resources over physical hardware such as compute, storage, and network. It enables multiple operating systems and applications to run on the same hardware while remaining segregated from one another in terms of processes, disk and file system usage, user management, and networking. This enables more efficient utilization of hardware resources, faster provisioning, ease of management, resiliency, and many other benefits.

Popular virtualization types include: Server virtualization, which emulates server resources such as processors, memory, and I/O for use by multiple virtual machines and applications in isolation at the same time. Storage virtualization, which is the abstraction of multiple physical storage devices into a single, centrally managed storage cluster. And network virtualization, which is the virtualization of one or more hardware appliances to provide a specific network function such as a router, switch, firewall, or load balancer.

Virtualization is the foundation of cloud computing. The ability to create virtual servers, infrastructure, devices, applications, networks, computing resources, and more, while decoupling resources from hardware and changing the hardware/software relationship, helps realize the potential of cloud computing to the fullest.

A hypervisor, or virtual machine (VM) manager, is the software layer that serves as an interface between the virtual machines and the underlying physical hardware. It ensures that each virtual machine gets its allocated resources and runs in isolation. It also manages the scheduling of virtual resources such as CPU cycles, memory, I/O, and network traffic against the physical hardware.
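The hypervisor's role of granting each VM its allocation from the physical pool, and reclaiming it when a VM is destroyed, can be illustrated with a toy model. This is a minimal sketch, not real hypervisor code; the Hypervisor class and its methods are invented purely for illustration.

```python
# Conceptual sketch only: a toy model of a hypervisor allocating physical
# resources to virtual machines. All names are invented for illustration.

class Hypervisor:
    def __init__(self, total_cpus, total_memory_gb):
        self.free_cpus = total_cpus
        self.free_memory_gb = total_memory_gb
        self.vms = {}

    def create_vm(self, name, cpus, memory_gb):
        # The hypervisor only grants resources the physical host can back.
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError("insufficient physical resources")
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        # Each VM sees only its own allocation, isolated from the others.
        self.vms[name] = {"cpus": cpus, "memory_gb": memory_gb}

    def destroy_vm(self, name):
        # Destroying a VM returns its resources to the shared pool.
        vm = self.vms.pop(name)
        self.free_cpus += vm["cpus"]
        self.free_memory_gb += vm["memory_gb"]


host = Hypervisor(total_cpus=16, total_memory_gb=64)
host.create_vm("web", cpus=4, memory_gb=8)
host.create_vm("db", cpus=8, memory_gb=32)
print(host.free_cpus)  # 4
```

Real hypervisors also time-slice CPU cycles and schedule I/O and network traffic, which this bookkeeping sketch deliberately leaves out.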
There are two types of hypervisors. Type 1, or bare-metal, hypervisors run directly on the host's hardware, replacing the traditional OS. VM resources are scheduled directly against the hardware by the hypervisor, making this type very efficient. Examples of Type 1 hypervisors include VMware ESXi, Microsoft Hyper-V, Citrix Hypervisor, Nutanix AHV, and more. Type 2 hypervisors run as an application on an underlying host OS. VM resources are scheduled against the host OS, which then executes them against the hardware, introducing a performance overhead. Examples of Type 2 hypervisors include VMware Workstation, Oracle VirtualBox, Parallels Desktop, and more.

One of the most popular open source virtualization technologies is the Quick Emulator with Kernel-based Virtual Machine (or QEMU-KVM). KVM is built into the host kernel as a module and provides hypervisor support, enabling you to run multiple isolated environments called guests, or virtual machines. It works with hardware that supports virtualization extensions such as Intel VT and AMD-V. It is highly cost-efficient and powerful compared to commercial offerings and is preferred by many other open source cloud technologies such as OpenStack. QEMU is an emulator of the hardware, including peripherals. It can also emulate user-level processes, enabling applications compiled for one architecture to run on another. QEMU-KVM enables you to run a guest OS at near-native speed by taking advantage of the virtualization extensions on the hardware.

A container is a packaged application with the code and all its dependencies, such as executables, binaries, libraries, and configuration files. It represents an OS-level virtualization technique. Containers are lightweight and portable, with lower overhead and improved performance. Containers do not include a full OS.
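The idea of a container as code packaged together with all of its dependencies can be made concrete with a Dockerfile. This is an illustrative sketch under assumed names: the base image tag and the files requirements.txt and app.py are inventions for this example, not taken from the module.

```dockerfile
# Illustrative sketch: the base image tag and file names are assumptions.
FROM python:3.12-slim

WORKDIR /app

# Bundle the application's dependencies (libraries, binaries) into the image.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself.
COPY app.py .

# The process started when the container runs.
CMD ["python", "app.py"]
```

Everything the application needs ships inside the resulting image, but no OS kernel does: at run time the container shares the host's kernel, which is what keeps it lightweight compared to a virtual machine.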
An organization using containers can migrate existing applications into modern cloud architectures, develop new cloud-native applications, support a microservices architecture, fit better into the DevOps methodology, and take advantage of automation and orchestration.

A container engine uses Linux kernel containment features such as namespaces, SELinux, cgroups, chroot, and more to provide application isolation. A container platform such as Docker is used to create and build applications inside containers using images, and to run the containers from the terminal. Container runtime environments such as containerd and CRI-O run containers on the host and manage images, delegating the low-level work of starting containers to runtimes such as runc; they provide a high-level container interface to orchestrators such as Kubernetes. A typical production environment may require hundreds of containers running simultaneously, which can become quite complicated to manage. Orchestration tools like Kubernetes can help create and manage the containers, manage resources, scale workloads, and provide resiliency.

Yet another popular open source cloud technology for virtualization is Docker. It lets you package and run an application in the loosely isolated environment of a container, enabling you to run many containers simultaneously on a host. Docker provides a platform to manage the lifecycle of a container by enabling you to: develop an application and its components using containers; distribute and test an application using containers; and port containers across data centers, cloud providers, or a hybrid of the two. Docker also makes it possible to manage workloads dynamically, scaling them on demand and in near real time. Production workloads can be managed by orchestrators like Kubernetes.

It is important to understand the difference between containers and virtual machines. Some of the key differences are explained here.
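The declarative scaling and resource management that orchestrators provide can be sketched as a Kubernetes Deployment manifest. This is an illustrative sketch only; the names, labels, image tag, and resource values are assumptions for this example, not part of the module.

```yaml
# Illustrative sketch: names, labels, image tag, and values are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # Kubernetes keeps three container replicas running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27  # container image to run in each replica
          resources:
            requests:
              cpu: "100m"    # per-container resource management
              memory: "128Mi"
```

Rather than starting containers by hand, you declare the desired state; the orchestrator creates the containers, restarts failed ones for resiliency, and scaling is a matter of changing the replica count.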
While containers abstract at the application layer, packaging code and its dependencies, virtual machines abstract physical hardware into virtual resources to optimize utilization. Multiple containers share the OS kernel while running as isolated processes, whereas each virtual machine has its own processes, disk and file system, user management, and networking, all allocated from a virtual resource pool. Containers do not package an entire OS, and thus take up less space than virtual machines; virtual machines, on the other hand, package the OS along with the application, binaries, and libraries, making them bulkier in terms of both space and boot time. Note, though, that it is possible to have containers and virtual machines work together for greater flexibility in the deployment and management of an application.