You've probably been hearing people talk about the cloud more and more. There are public clouds and private clouds and hybrid clouds, and rain clouds, but those aren't really relevant here. There are cloud clients and cloud storage and cloud servers, too. You might hear the cloud mentioned in newspaper headlines and TV advertisements. The cloud is the future, so we're told, and IT support specialists really need to keep up with the latest innovations in tech in order to support them.

But what exactly is the cloud? The truth is the cloud isn't a single technology or invention or anything tangible at all. It's just a concept, and, to throw in another cloud joke, a pretty nebulous one at that. The fact that the term 'the cloud' has been applied to something so difficult to define is pretty fitting. Basically, cloud computing is a technological approach where computing resources are provisioned in a shareable way, so that lots of users get what they need, when they need it. It's an approach that leans heavily on the idea that companies provide services for each other using these shared resources.

At the heart of cloud computing is a technology known as hardware virtualization, a core concept of how cloud computing technologies work. It allows a physical machine and a logical machine to be abstracted away from each other: a single physical machine, called a host, can run many individual virtual instances, called guests. An operating system expects to be able to communicate with the underlying hardware in certain ways. To make that possible, hardware virtualization platforms employ what's called a hypervisor. A hypervisor is a piece of software that runs and manages virtual machines while also offering these guests a virtual operating platform that's indistinguishable from actual hardware. So, with virtualization, a single physical computer can act as the host for many independent virtual instances.
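To make the host/guest relationship concrete, here's a tiny Python sketch of the idea. The `Host` and `Guest` classes are invented for illustration; a real hypervisor (like KVM, Hyper-V, or ESXi) does this at the hardware and OS level, not in application code.

```python
# Toy model of hardware virtualization: one physical host hands slices
# of its real RAM out to several guests, each running its own OS.
# All names here are hypothetical, for illustration only.

class Guest:
    def __init__(self, name, os, ram_gb):
        self.name = name
        self.os = os          # each guest runs its own independent OS
        self.ram_gb = ram_gb

class Host:
    """Stands in for the hypervisor: it owns the physical RAM and
    presents a share of it to each guest as 'virtual hardware'."""
    def __init__(self, total_ram_gb):
        self.total_ram_gb = total_ram_gb
        self.guests = []

    def ram_in_use(self):
        return sum(g.ram_gb for g in self.guests)

    def start_guest(self, guest):
        # The hypervisor can't hand out more physical RAM than exists.
        if self.ram_in_use() + guest.ram_gb > self.total_ram_gb:
            raise MemoryError(f"not enough physical RAM for {guest.name}")
        self.guests.append(guest)

host = Host(total_ram_gb=16)
host.start_guest(Guest("mail", "Windows", 8))
host.start_guest(Guest("dns", "Linux", 1))
print([g.name for g in host.guests])  # two different OSes, one physical box
```

Notice that the Windows guest and the Linux guest happily share one physical machine, which is exactly the trick the server-buying example below can't pull off with bare hardware.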
They each run their own independent operating system and, in many ways, are indistinguishable from the same operating systems running on physical hardware. The cloud takes this concept one step further: if you build a huge cluster of interconnected machines that can all function as hosts for lots of virtual guests, you've got a system that lets you share resources among all of those instances.

Let's try explaining this in a more practical way. Say you need four servers. First, you need an email server. You've carefully analyzed things and expect this machine will need eight gigabytes of RAM to function properly. Next, you need a name server. The name server barely needs any resources, since it doesn't do anything computationally intensive. But you can't run it on the same physical machine as your email server, since your email server needs to run on Windows and your name server needs to run on Linux. The smallest server configuration your hardware vendor sells is a machine with eight gigabytes of RAM, so you have to buy another one with those specifications.

Finally, you have a financial database. This database is normally pretty quiet and doesn't need too many resources during normal operations, but for your end-of-month billing processes to complete in a timely manner, you determine the machine will need 32 gigabytes of RAM. It has to run on a special version of Linux designed just for the database, so the name server can't run on this machine either. So you order a server with that much RAM, and then a second one with the same specifications to act as a backup.

In order to run your business this way, you have to purchase four machines with a grand total of 80 gigabytes of RAM. That seems pretty outrageous, since it's likely that only 40 gigabytes of this total RAM will ever be used at one time; most of the month, you're using much less. That's a lot of money spent on resources you're either never going to use or rarely use.
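The arithmetic in that example is worth checking on the back of an envelope. This little Python snippet just tallies the numbers from the scenario above; the server names are the ones from the example.

```python
# RAM purchased per physical machine in the example above (GB).
purchased = {
    "email server":        8,
    "name server":         8,   # smallest box the vendor sells
    "financial database": 32,
    "database backup":    32,
}
total_purchased = sum(purchased.values())
print(total_purchased)  # 80 GB bought in total

# Peak RAM ever in use at one time: email (8 GB) + database at
# end-of-month billing (32 GB). The name server needs almost nothing,
# and the backup sits idle.
peak_in_use = 8 + 32
print(peak_in_use)                    # 40 GB
print(total_purchased - peak_in_use)  # 40 GB paid for but never used at once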
So let's forget about that model. Instead, imagine a huge collection of interconnected servers that can host virtualized servers. The virtual instances running on this collection of servers can be given access to the underlying RAM as they need it. Under this model, the company that runs the collection of servers can charge you to host virtual instances of your servers instead of you buying the four physical machines, and it could cost much less than what you'd spend on those four physical servers. The benefits of the cloud are obvious.

But let's take it a step further. The cloud computing company that hosts your virtualized instances can also offer dozens of other services. So instead of worrying about setting up your own backup solution, you can just employ theirs. It's easy. And if you need a load balancer, you can just use their solution. Plus, if any underlying hardware breaks, they just move your virtual instance to another machine without you even noticing. To top it all off, since these are all virtual servers and services, you don't have to wait for physical hardware you ordered to show up; you just need to click a few buttons in a web browser. That's a pretty good deal.

What we just described is an example of a public cloud: a large cluster of machines run by another company. A private cloud takes the same concepts, but it's used entirely by a single large corporation and is generally hosted physically on its own premises. Another term you might run into, a hybrid cloud, isn't really a separate concept. It's just a term used to describe situations where companies run things like their most sensitive proprietary technologies on a private cloud while entrusting their less sensitive servers to a public cloud.

Those are the basics of what the cloud is. It's a new model in computing where large clusters of machines let us use the total resources available in a better way.
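The scheduling idea behind that shared cluster, placing instances wherever capacity exists and moving them when hardware breaks, can be sketched in a few lines. This is a deliberately simple first-fit model with made-up host names and sizes; real cloud schedulers are vastly more sophisticated.

```python
# Toy first-fit scheduler for a cluster: each virtual instance lands on
# the first host with enough free RAM, and instances get evacuated to
# the surviving hosts when a machine fails. All names are hypothetical.

def place(instance_ram, hosts, placements):
    """Put an instance on the first host with enough free RAM (GB)."""
    for host, capacity in hosts.items():
        used = sum(ram for (h, ram) in placements if h == host)
        if used + instance_ram <= capacity:
            placements.append((host, instance_ram))
            return host
    raise RuntimeError("cluster is full")

hosts = {"host-a": 48, "host-b": 48}
placements = []
for ram in (8, 1, 32):            # email, name server, database
    place(ram, hosts, placements)

# Simulate host-a breaking: the provider quietly moves its instances
# onto the remaining hardware, and the customer never notices.
failed = [(h, r) for (h, r) in placements if h == "host-a"]
placements = [(h, r) for (h, r) in placements if h != "host-a"]
del hosts["host-a"]
for _, ram in failed:
    place(ram, hosts, placements)

print(placements)
```

Two hosts with 48 GB each comfortably cover the workload that took 80 GB of dedicated hardware in the earlier example, which is the whole economic argument for the cloud in miniature.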
The cloud lets you provision a new server in a matter of moments and leverage lots of existing services instead of having to build your own. To sum up, it's blue skies ahead for anyone using the cloud. Sorry, I couldn't resist.