Published on August 25, 2016 by Guest blogger
This post covers the details of fully-automated deployments using Ansible and the SysEleven Stack, which is based on OpenStack.
Scaling is done by the cloud, but what do humans do? They code! In this post, I'll present how to define your infrastructure as software code – network, storage, and virtual machines – and go over options for utilizing that infrastructure to rapidly roll out Magnolia instances based on SysEleven’s "Automate–Build–Run!" approach.
The Old Way
I’m a system engineer at SysEleven, a Berlin-based hosting company. Up until now, we’ve mostly offered traditional hosting services. If a client wanted a cluster or a service, they would email, call, or open a ticket in our ticketing system, and we’d start setting the service up for them. This can be a tedious process, and it could take some time to get everything up and running.
The customer then installs their application and runs some unit tests. Every setup is essentially built individually. We use automation software like Puppet, which automates around 80 to 90% of the work. But there are always config files that people have touched by hand, not to mention individual configurations, both of which are very difficult to reproduce.
The customer might come back later and need a relaunch or request more services. In the worst-case scenario, the engineer who set up the initial cluster is not available, and you'd find that they had made changes, modifications, or even just a small tweak in a config file. You’d find yourself doing diffs between files and comparing installed packages between the new node and the old node, trying to figure out why it doesn’t work.
We’ve come up with a better approach. In this post, I’ll go over how to run scalable setups on SysEleven’s cloud platform. Companies like Amazon and Google have already shown how it can be done. We took that approach and invested a lot of time and effort in evaluating the market and various vendor solutions. At the end of the day, it came down to OpenStack.
We put together a team to get an OpenStack installation running for us. We found that with 10 or 15 boxes together, you can’t even saturate a 10 gigabit network link, and there were high latencies due to the software-defined networking (SDN) layer. That’s not good enough for our clients. We examined all the components and optimized several of them, including the vendors for the underlying storage systems and the SDN. By changing these and the hardware we work with, we achieved a result that we’re satisfied with.
The Automate, Build, Run method
Our approach to getting clusters and systems up and running has three phases. “Automate” is infrastructure as code, where you describe your setup in a templating language. “Build” is the process of assembling components through the OpenStack API. The SysEleven Stack, just like OpenStack, has an API. We send our requests as infrastructure code to the API. Finally, there’s the “run” phase, in which the full service, including every component, is running.
What is automated infrastructure?
Automation means describing our infrastructure as code. This can be a single component, or it can be a complete setup.
Take a single service. It consists of many different components: a hard disk which you need to describe; the amount of RAM you require; computing resources like CPU usage you may want to limit; different resources for a load balancer. You’ll need resources for an application server or database nodes. There’s the underlying Software Defined Networking, basically a virtual network, as well as routers that allow machines without a public IP to communicate with the Internet to run updates or to contact your external ERP system. These, as well as things like firewalls, floating IPs, fail-overs, and security groups can be described in the template.
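As a rough illustration, a Heat (HOT) template on an OpenStack-based platform like the SysEleven Stack could describe such a setup as follows. This is a minimal sketch: the image, flavor, CIDR, and external network name are placeholder assumptions, not actual production values.

```yaml
heat_template_version: 2016-04-08

description: Minimal sketch of one app server with its network plumbing

parameters:
  image:
    type: string
    default: Ubuntu 16.04        # placeholder image name
  flavor:
    type: string
    default: m1.small            # the flavor defines CPU and RAM

resources:
  net:
    type: OS::Neutron::Net       # the virtual (SDN) network
  subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: net }
      cidr: 10.0.0.0/24
  router:
    type: OS::Neutron::Router    # lets machines without a public IP reach the Internet
    properties:
      external_gateway_info: { network: ext-net }   # public network name is an assumption
  router_iface:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: router }
      subnet: { get_resource: subnet }
  secgroup:
    type: OS::Neutron::SecurityGroup   # firewall rules as code
    properties:
      rules:
        - protocol: tcp
          port_range_min: 443
          port_range_max: 443
  data_volume:
    type: OS::Cinder::Volume     # the hard disk you need to describe
    properties:
      size: 50                   # GB of block storage
  app_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - network: { get_resource: net }
      security_groups:
        - { get_resource: secgroup }
```

Sending this template to the API is all it takes to bring up the network, storage, and server together as one stack.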
Once all of these components are described in code, you can set everything up, including the number of servers that you need. You can create different sized setups for staging, production, or (in Magnolia terms) your user acceptance test. The diagram shows what that would look like, including the identity service keystone. The identity service enables different levels of access for different team members, so a senior developer would be able to change the live setup, while a junior or backend developer can run testing clusters or unit tests.
All of the different components, like block storage, object (S3) storage, CPU, and RAM, can be viewed in the dashboard. They can be managed there too, but it’s more effective to do it on the command line. For actions like checking status or looking at CPU allocation, the dashboard is helpful.
The last step is to run our infrastructure setup. To do this, we need a few more additions. We need a software component, like an Nginx load balancer, an Apache web service, or in the case of Magnolia, even a Tomcat server. We need a deployment mechanism to get the application running, and also an orchestration tool. We’ve decided to use Ansible because it covers many of the issues we’ve had. With Ansible, you can set up your virtual machines through the API, then install and configure your services, all with a single tool.
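A minimal sketch of what such a playbook could look like, using Ansible's OpenStack `os_server` module; the image, flavor, keypair, and network names are placeholder assumptions:

```yaml
# Play 1: provision the VM through the OpenStack API
- name: Provision a load balancer instance
  hosts: localhost
  tasks:
    - name: Boot the instance
      os_server:
        name: lb1
        image: Ubuntu 16.04        # assumed image name
        flavor: m1.small           # assumed flavor
        key_name: deploy-key       # assumed keypair
        network: app-net           # assumed network
      register: lb

    - name: Add the new host to the in-memory inventory
      add_host:
        name: "{{ lb.server.public_v4 }}"
        groups: loadbalancers

# Play 2: install and configure the service on the new machine
- name: Configure the load balancer
  hosts: loadbalancers
  become: true
  tasks:
    - name: Install Nginx
      apt:
        name: nginx
        state: present
```

The same run that creates the machine also configures it, which is exactly the "one tool for both" point above.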
The issue we brought up at the top with altered config files can’t happen anymore. Everything is either in an infrastructure template or an Ansible playbook, which can be replayed at need. Once your testing setup is running, you can reproduce it for your user acceptance test, your staging environment, and your production environment.
With the SysEleven Stack approach, you end up with three key advantages: reliability, fast time to market, and reproducibility.
The setup is reliable, because you can run it over and over again, with the same result.
Because every step is automated, the time to market is very fast. There’s no manual configuration: all machines are started via the API, and Ansible handles the configuration, setting up load-balancing services, DNS, databases, or whatever else is needed. Once they’re no longer needed, they can be shut down right away or restarted instantly.
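For example, the number of application servers can itself be a stack parameter, so scaling becomes a template change rather than manual work. This is a hedged sketch with placeholder names, not an actual production template:

```yaml
heat_template_version: 2016-04-08

parameters:
  app_count:
    type: number
    default: 2          # e.g. two servers for production, one for a test stack

resources:
  app_servers:
    type: OS::Heat::ResourceGroup   # stamps out N copies of the same server
    properties:
      count: { get_param: app_count }
      resource_def:
        type: OS::Nova::Server
        properties:
          name: app-%index%          # app-0, app-1, ...
          image: Ubuntu 16.04        # assumed image name
          flavor: m1.small           # assumed flavor
```

Growing the cluster is then just a stack update with a new `app_count` value, and shrinking it again is the same call in reverse.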
Finally, it’s reproducible. Our best practice setup is integrated into the recipe, and we'll never lose it. The tweak that a sysadmin found two years ago is in our recipe. It’s not hidden somewhere in a config file; it can’t be lost. It’s there to use every single time.
We worked with Magnolia to create Ansible modules for the SysEleven Stack for the ideal setup. Those include modules for Tomcat, the load balancer, as well as Magnolia’s software stack, which you can use to upgrade software, do backups, checkout your Git repository, and more.
You can have different environments: integration, user acceptance test, and production, all of which you can run through your Magnolia Now instance in the cloud.
What do we need to get Magnolia Now up and running? A standard cluster setup takes 3-3.5 minutes to build the servers. Then we need at least one load balancer. In an ideal world, you'd also want a standby load balancer with a floating IP on it. If the load balancer fails, you can fail over to the hot or cold standby server so your setup stays online.
We need at least two public servers for a high-availability setup. We can do the load balancing with the Nginx software load balancer, or with a hardware load balancer if you require additional features, and distribute the load between your two servers. On top of that, we need an author server for whoever is adding content to Magnolia. You can have a cold standby if you need it, but in a standard setup one author server is enough. Everything is in the cloud, so we can rebuild the server if we need to. We also need all the Ansible modules developed for this, and the very last component is the Magnolia software itself.
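That topology maps naturally onto an Ansible inventory. A sketch in YAML inventory format, with hypothetical host names:

```yaml
all:
  children:
    loadbalancers:
      hosts:
        lb1:        # active Nginx load balancer, holds the floating IP
        lb2:        # cold standby for failover
    public:
      hosts:
        public1:    # two public instances serving visitors
        public2:    # gives us high availability
    author:
      hosts:
        author1:    # single author instance for content editing
```

Each group then gets its own set of roles in the playbook, so the same run configures the load balancers, the public instances, and the author instance appropriately.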
Finally, to really enhance the experience of using SysEleven Stack with Magnolia Now, we added a bit of DNS magic. We have a service where you can distinguish the UAT and production environments via a URL, so you don’t need to use your own domain. We provide you with a domain until you’re up and running, so you don’t have to point your production domain at your test setup.
Getting all of that up and running took 9.5 minutes until you could log into the backend. We’re looking into improving that in the future.
By the way, this is something we can do for any kind of application. If you need help getting an application running in the cloud, SysEleven can support you with the process.
Simon Pearce has been a Senior System Engineer at SysEleven for more than four years. His team is responsible for the smooth daily operation of customer projects. Besides his technical work on running applications, Simon is a highly skilled technical consultant with experience in setting up Linux cluster environments. He has broad, deep knowledge of setting up complex IT projects. Getting to know the people behind the projects is an important aspect of his work, as are constant exchange about technical subjects and successful project management.
Magnolia has an amazing community of partners and clients, among them quite a few wordsmiths. From time to time, they put their expertise into blog posts and share them on this platform.