Edited: 2020-01-07 15:20:50 -0500
Caveat: The underlying assumption of this article is that you want to host your own virtual infrastructure rather than rely on third parties, whether out of concern for privacy, security, protection of secrets, or cost, or for other reasons.
In short, OpenStack takes a lot of time and expertise, and therefore distracts from the focus of your small business or startup. Finally, there is a certain minimum amount of hardware required for redundancy and failover.
Time Sink / Cost of Expertise
Learning how to configure OpenStack for production is non-trivial. Yes, there are various tools for getting started quickly, but the gap between those and a production-grade system you understand is very large. Correspondingly, the cost of third-party consultants and OpenStack expertise is quite high, and tends not to be ‘set and forget’ but rather an ongoing expense.
Unless you go big enough to build redundancy and failover into each of the components, there are many points of failure. And even once you’ve got it working, a bad configuration change can bring down the entire infrastructure.
Minimum Deployment is Significant (for a SOHO / Startup)
- Storage redundancy takes at least three storage nodes, each with equal and significant storage capacity.
- If you have only one controller (and supporting daemons such as the database, memcached, etc.), it becomes a choke-point which can bring the system to spectacular failure or, worse, a gradual and continuing lack of responsiveness and inability to complete tasks.
- You need enough compute nodes to
- migrate on failure
- amortize the cost of the support infrastructure (since the whole point of the exercise is to ‘do useful things’).
- host all the VMs you want to host (if that’s not very many, then the cost to compute ratio of a small cluster is rather high).
But: It’s More ‘Real Cloud’-like
What Does That Get You?
The reality is that hosting a small deployment is quite different from operating at scale, so believing you’re preparing for scale by using ‘big iron’ while you are small is foolhardy. Further, you would need to apply ‘big iron’ approaches to your entire development and deployment effort. Finally, going ‘big iron’ creates a lot of overhead that will interfere with agility and the ability to change quickly.
In addition, if one uses tools such as Ansible and ‘cloud’ images with cloud-init on libvirt, then one is already using the most significant parts of a typical cloud stack for small to medium deployments.
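As an illustration, a minimal cloud-init user-data file is enough to turn a generic cloud image into a ready-to-use VM; the hostname, user name, and key below are hypothetical placeholders:

```yaml
#cloud-config
# Hypothetical values -- substitute your own.
hostname: app01
users:
  - name: admin
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
      - ssh-ed25519 AAAAexamplekey admin@workstation
package_update: true
packages:
  - qemu-guest-agent
```

The same file works whether the image boots under plain libvirt or under OpenStack, which is exactly why so little of the ‘cloud feel’ is lost.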
Headless Libvirt & Remote Virt-manager: a Potential Solution
I’ll be writing some articles on this in upcoming entries.
Easy to Configure
Setting up Libvirt on Debian or Enterprise Linux (CentOS or Red Hat) is quite easy, even for small production-ready deployments.
- Install the packages.
- For initial development you only need SSH and libvirt-clients on your controlling host (plus SSH on both the local host and the Libvirt host). For added ease of use you can use virt-manager (Virtual Machine Manager).
- Configure networking:
- Set up bridges and VLAN filtering (this is part of the secret sauce) so that you isolate internal from external virtual machines. (Note: for best effect you will need a managed switch, but $100-$200 CAD models are sufficient, and a router capable of handling VLANs. OpenWrt will let you go COTS; otherwise you will need a more expensive business-class router. The same is true of OpenStack.) You could forgo the VLAN filtering, but that’s a bad idea for production from a security standpoint.
- Assuming you want TLS instead of SSH (for production):
- Create certificates (e.g. using FreeIPA or other CA management tools). You need to do this for a production OpenStack deployment as well, and in production OpenStack there is more infrastructure to protect.
- Install certificates on controlling hosts and libvirt hosts.
- Edit configuration appropriately.
- Use libvirt-clients, virt-manager, and Spice or VNC clients.
- Create and destroy VMs using generic cloud images for QEMU/KVM from your preferred OS vendor (or build your own), with an ISO providing the cloud-init data. (To be detailed in a future article, but web searches are your friend.)
- Repeat for the (small) number of compute nodes you need (since we are specifically comparing small deployments). There are no overhead hosts for the Libvirt infrastructure.
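To make the create-and-destroy step concrete, here is a sketch of bringing up one VM from a vendor cloud image; the names, sizes, and paths are illustrative, and it assumes qemu-img, cloud-localds (from cloud-utils), and virt-install are installed on the libvirt host:

```shell
# Thin-provisioned disk backed by the downloaded vendor cloud image (path is illustrative).
qemu-img create -f qcow2 -b /var/lib/libvirt/images/debian-12-generic-amd64.qcow2 \
    -F qcow2 /var/lib/libvirt/images/app01.qcow2 20G

# Pack the cloud-init user-data and meta-data files into a seed ISO.
cloud-localds /var/lib/libvirt/images/app01-seed.iso user-data meta-data

# Define and start the VM, attaching the seed ISO so cloud-init runs on first boot.
virt-install --name app01 --memory 2048 --vcpus 2 \
    --disk /var/lib/libvirt/images/app01.qcow2 \
    --disk /var/lib/libvirt/images/app01-seed.iso,device=cdrom \
    --import --network bridge=br0 --graphics none --noautoconsole

# Tear down when done.
# virsh destroy app01 && virsh undefine app01 --remove-all-storage
```

From the controlling host the same commands work remotely by prefixing a connection URI, e.g. `virsh -c qemu+ssh://admin@virt1/system list --all` (host and user here are placeholders).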
Easy to Maintain for Small Deployments
With so few moving parts relative to OpenStack, it’s easier to keep track of the state of configuration and deployment. It’s also less complicated, so less expertise is required to manage it. In addition, with fewer hosts there are fewer underlying operating systems to maintain.
Can Still Use Cloud Images for a Cloud ‘Feel’
OpenStack images and/or QEMU/KVM images are almost certain to work with this setup, as long as you provide the cloud-init data. You can also use the scripting capabilities of tools like Terraform to automate processes, albeit with more custom work than with OpenStack.
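As a sketch of that automation, the community dmacvicar/libvirt Terraform provider (a third-party provider, not part of Terraform itself; the names, paths, and URI below are illustrative) lets you declare VMs much as you would cloud resources:

```hcl
terraform {
  required_providers {
    libvirt = { source = "dmacvicar/libvirt" }
  }
}

# Illustrative remote connection to the libvirt host over SSH.
provider "libvirt" {
  uri = "qemu+ssh://admin@virt1/system"
}

# Volume cloned from a locally downloaded vendor cloud image.
resource "libvirt_volume" "app01" {
  name   = "app01.qcow2"
  source = "/var/lib/libvirt/images/debian-12-generic-amd64.qcow2"
}

resource "libvirt_domain" "app01" {
  name   = "app01"
  memory = 2048
  vcpu   = 2

  disk {
    volume_id = libvirt_volume.app01.id
  }
}
```

The custom work is in wiring up storage, networking, and cloud-init yourself; OpenStack’s Terraform support handles more of that for you, at the cost of running OpenStack.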
But: Doesn’t Scale
The downside is that even with automation there is less internal bookkeeping and more onus on you to keep track of your deployment. This doesn’t scale as well. In addition, managing shared storage can be more of a challenge since doing so is not built into the framework.
Cross The Bridge When You Get to It
Ultimately you should do what lets you focus on your main business objectives and avoid getting bogged down in things like ‘big iron’ and cumbersome administrative processes before it’s relevant.