Tuesday, April 27, 2010

Notes on simple design

Clusters and server farms are complex. So are the applications that run on them (truisms, aren't they?). So why add even more complexity with management? The sysadmin's job is to simplify things as much as possible; his work should leave the smallest possible fingerprint on the whole setup. Simple cluster designs may not have a high-tech look&feel. They don't run bleeding-edge, untested software with funky features. They follow the KISS rule and stick to primitive designs, and they don't require as much attention as complicated clusters.

Here are some interesting quotes I've come across:

"Autopilot: Automatic Data Center Management"
Michael Isard
Microsoft Research
"We believe that simplicity is as important as fault-tolerance when building a large-scale reliable, maintainable system. Often this means applying conservative design principles: in many cases we rejected a complex solution that was more efficient, or more elegant in some way, in favor of a simpler design that was good enough. This requires constant discipline to avoid unnecessary optimization, and unnecessary generality. ―Simplicity‖ must always be considered in the context of the entire system, since a solution that looks simpler to a component developer may cause nightmares for integration, testing, or operational staff.

21st Large Installation System Administration Conference (LISA ’07)
On Designing and Deploying Internet-Scale Services
James Hamilton
"Keep things simple and robust. Complicated algorithms and component interactions multiply the difficulty of debugging, deploying, etc. Simple and nearly stupid is almost always better in a high-scale service-the number of interacting failure modes is already daunting before complex optimizations are delivered. Our general rule is that optimizations that bring an order of magnitude improvement are worth considering, but percentage or even small factor gains aren’t worth it."

Here goes what I do to keep the setup simple:

Automation
When automating (for example server deployment), I usually avoid writing one big script which does everything. Deploying a server usually takes several steps and a few different scenarios. You choose different installations based on machine type, change the machine's vlan, make it network boot, supply the image, and so on. The scenarios differ too: sometimes a new server arrives and you need to introduce it to the cluster, sometimes you need to redeploy a machine which is broken, and sometimes you redeploy a machine and assign it to a different server farm. A single script to tackle all of this would be error prone and also difficult to test in production (you don't test error-prone things on production, right?).
Instead, I break the automated action into small pieces. For each of them I write a small script (one script to take care of the networking setup, one for pxe-booting a server, one for updating cluster membership, etc.). Then I test these scripts. That's easy, as they are small and unlikely to have errors. I also try to base them on primitive protocols, e.g. using telnet or ssh with an expect library. Then I glue them together into bigger stuff.
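A minimal sketch of such glue, assuming the small helper scripts already exist (their names here are made up):

#!/bin/bash
# redeploy.sh - glue the small pieces together; the helper script names are hypothetical
set -e                                   # stop at the first failing step
HOST=$1
FARM=$2
./set-vlan.sh "$HOST" deploy             # move the switch port to the deployment vlan
./pxe-boot.sh "$HOST"                    # enable network boot and power-cycle via the management processor
./wait-for-ssh.sh "$HOST"                # poll until the freshly installed system answers on ssh
./set-vlan.sh "$HOST" "$FARM"            # move the port to the production vlan
./update-membership.sh "$HOST" "$FARM"   # register the machine in its new farm

Each piece stays independently testable, and the wrapper itself is trivial enough to read in one glance.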

Operating systems
Operating systems are simple when they are standard installations. With a large number of servers, you should not customize any element of them by hand. Instead, I have found it quite good to package all your custom scripts into a software package like rpm or deb and roll it out. For example, you may have a package called "my-custom-setup-.rpm" which places custom scripts into /usr/local. It also installs all the dependencies - rpm handles that. In general, it's also a nice idea to have a local copy of your distro's repo.
To distribute /etc/ contents, puppet from Reductive Labs is a great tool. I usually use puppet to ensure the latest version of my custom rpm is installed and to make changes to /etc. (puppet is not good at distributing large files, so syncing binaries with it is no good - rpms handle that much better)
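The puppet side of this might look roughly like the snippet below; the manifest path, resource names and file contents are just an example, not taken from a real setup:

cat /etc/puppet/manifests/common.pp

package { "my-custom-setup":
  ensure => latest,                  # always roll out the newest build of the custom rpm
}

file { "/etc/motd":
  content => "managed by puppet\n",  # small /etc files are fine to distribute with puppet
}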

Networking
If possible, it's desirable to have all the machines within a single network, without vlans. Vlans introduce complexity into the server deployment process. Usually you deploy a server within one vlan where you have PXE and tftp. When deployment is finished you need to change the vlan, which involves changing it on your switch. You run a large number of machines, so you need to automate this process, and you also need to keep a mapping between your servers and switch ports. After you change the vlan, you cut off access to your server, so now you need to access its management processor to reset it so that it gets a new address - which means you also need to know the management address of every server. In most cases vlans are a must and cannot be avoided, but if you don't need them, don't use them ;-)
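If you do end up automating the vlan change, the idea is more or less the sketch below. The switch CLI here is invented (the real syntax depends on the vendor), and the port mapping file and management host naming are hypothetical:

HOST=compute002
PORT=$(awk -v h=$HOST '$1 == h {print $2}' /etc/cluster/switch-ports.map)   # flat file mapping servers to switch ports
ssh admin@switch01 "set vlan 20 port $PORT"    # invented switch CLI - substitute your vendor's commands
ssh admin@mgmt-$HOST "power reset"             # reset through the management processor so the box picks up an address in the new vlan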

Agents
To effectively manage the servers, you usually install some agents on them. For me the best agent is sshd with pubkey authentication. It's rock stable, which gives you two things: first, you don't have problems accessing your servers; second, if you do, you can safely assume that the whole machine is down for some reason (swapped out, turned off, etc.). OK, I also run puppet clients. But the general idea is that management agents should be simple, known-for-years stuff.
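In practice that boils down to something like this (the key path and host list are just an example):

ssh-keygen -t rsa -f ~/.ssh/cluster_key             # one management key for the whole farm
for h in $(cat hosts.txt); do
    ssh-copy-id -i ~/.ssh/cluster_key.pub root@$h   # push the public key to every server once
done
ssh -i ~/.ssh/cluster_key root@compute002 uptime    # from then on: run anything, no password prompts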

Reliability
A cluster's reliability is based on the multitude of its nodes. As a rule, I don't guarantee my users any failover mechanism; it's up to the application to handle it. I don't build any HA into the systems themselves.

Information
It's common that in the lifecycle of a cluster some machines come and go, and some migrate between clusters. It's crucial to keep this information somewhere. A simple, "primitive" approach is the one described in the "Using DNS for inventory tracking" post below: DNS is a database, simple, widely used, guaranteed to work (replication built-in ;-), and the client is built into any linux.

It's very easy to become "innovative" in the bad sense when it comes to clusters. I mean that being "bleeding edge" often means "complicated". Many ready-to-use datacenter solutions are all-in-one bulky software which promises everything but is in fact very complicated and tailored to specific systems and platforms (take a look at HP SIM or IBM Director).

In fact, most sysadmins I know end up with custom tools built on open-source components with simplicity in mind.

Sunday, April 18, 2010

Degrees of control

This post was inspired by what happened to me lately at work. A guy from security came in and said it would be great if we could allow only certain packages to be installed on our linux boxes. Everything not specified in the machine's profile would be automatically erased.

When I look at this situation, I come to think that there are times and setups when you want to control every change that happens on your server farm, and others when you only want to control some parameters of your machines.

So there are basically two approaches:
-> "God-mode": you have a reference server image to which you introduce changes and then sync your servers to this image (changes entered manually on your servers are overwritten)
-> "modelling-mode": you say: this server must have an httpd & postfix running, also group apache needs to be present, etc. . You care only about httpd, postfix and apache group - the rest can be modified freely.

Approach 1 you can use if you run a homogeneous server farm, just like an HPC cluster where you have a head node and a number of similar worker nodes. This approach does not deal well with situations where you have a mixture of different OSes, hardware and machine types. On the upside, you always know what you are running. The security guy is always happy ;-) Also, the tools used here are quite simple. All you gotta do is sync your clients with the reference image.

Approach 2 you use if you run a more diverse environment (who'd have supposed ;-). I mean a bunch of large websites serving different domains, several database configurations, proxies etc.; here you might easily have over 10 installation types, each of them possibly running a different OS, hardware etc. When you think about it, it is easy to realize that controlling this mess with approach 1 is impossible, especially when there are several admins, each controlling his own domain of expertise. It's likely that your database admins don't know your configuration tools, and they know databases better than you. So it's reasonable only to ensure that the postgres or mysql package is installed on their machines and leave the rest of the system tuning to your fellows.

Some words about tools that can be used here:

For approach 1:
-> systemimager - a cluster deployment and management suite. You store images of your servers in a central repository. They are plain directories so you can chroot into them, install some software, add users, etc. and then propagate changes to your clients. All of this is done with rsync so you don't interrupt your farm members' work.

-> starting machines with a common nfs root - machines mount a common root filesystem from an NFS server. Whatever you change on the nfs share is immediately propagated to the clients

For approach 2:
I recommend running puppet + nagios. With puppet you ensure that certain aspects of your servers are the way you want them (i.e. apache installed, user apache present, etc.). However, puppet is weak at reporting, so you need nagios checks to monitor whether puppet actually imposes your configuration. All the rest is in the hands of your fellow admins.
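One crude way to do that monitoring is to alert when puppet's state file gets stale. A sketch of such a check (the path and threshold are assumptions, not a ready-made nagios plugin):

#!/bin/bash
# check_puppet_freshness - go CRITICAL if puppet has not completed a run recently
STATE=/var/lib/puppet/state/state.yaml     # touched at the end of every puppet run; the path may differ per distro
MAXAGE=7200                                # two hours
AGE=$(( $(date +%s) - $(stat -c %Y "$STATE" 2>/dev/null || echo 0) ))
if [ "$AGE" -gt "$MAXAGE" ]; then
    echo "CRITICAL: last puppet run ${AGE}s ago"
    exit 2
fi
echo "OK: last puppet run ${AGE}s ago"
exit 0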

Comments and suggestions highly welcome ;-)

Thursday, April 15, 2010

Using DNS for inventory tracking

This post will be quite short - rather a tip ;-).
Recently I started using DNS to store information about my assets. It turned out to be a primitive but very handy way to do it.

You usually keep track of your hardware in some kind of spreadsheet or database, updated automatically or manually. Ready-made tools for this are ocsinventory or puppet storeconfigs. This is cool, but to access the data you need to launch your browser, fill in fields, etc. It all takes time, particularly when you need to look up information on only one host.

I found things turn much simpler if you put vital hardware data into your DNS (the hinfo and txt custom fields are just for that). You can access them later on using the "host -t" command from any unix-like OS.

For one of my hosts the output might look like this:

host -t txt compute002
descriptive text "location: Rack103a-5"
descriptive text "role: compute-node"
descriptive text "hardware: ibm, 8gb ram, 2xIntel Xeon 5540"

Wednesday, April 14, 2010

Linux clusters - first aid kit

Here is a list of tools to build and manage linux clusters and farms. Some of them I used a lot, some I only know to be respected among fellow sysadmins. All of them are free and/or GPL-ed.

Distros

-> OSCAR - quoting the webpage "OSCAR allows users, regardless of their experience level with a *nix environment, to install a Beowulf type high performance computing cluster. It also contains everything needed to administer and program this type of HPC cluster.". If you are in the HPC business and need MPI, scientific libraries etc. installed out of the box - this one is for you ;-)

-> ROCKS - quote "Rocks is an open-source Linux cluster distribution that enables end users to easily build computational clusters, grid endpoints and visualization tiled-display walls. Hundreds of researchers from around the world have used Rocks to deploy their own clusters". AFAIK this one is based on redhat. A job queueing system, MPI and scientific apps are included in so-called "rolls" (see the doc).

Administrator toolbox

-> cobbler - deployment tool using kickstart/preseed. Redhat-centric.

-> systemimager - one of my personal favourites. A tool for massive server deployments, using rsync and torrents. It supports many cool features, like cloning a server without the need to bring it down. Hardware-independent imaging is possible with a few hacks. When you run a homogeneous server farm it's also a very convenient server management tool - images are stored as directory structures, so it's possible to chroot into them, make changes, and sync the changes to live systems! ;-) Some killer apps like a parallel multithreaded shell and a file distribution tool are also included.

-> capistrano - an automation tool. I have not used it personally, but I plan to check it out some day.

-> TORQUE resource manager - no HPC cluster can live without a proper queueing system. Torque maintains your job queue, determines the free resources (cpus, ram) on your compute nodes and distributes computations across the cluster nodes.

-> mcollective - A job execution framework. I have not used it personally, but plan to give it a shot. From the webpage: "The Marionette Collective aka. mcollective is a framework to build server orchestration or parallel job execution systems. Primarily we'll use it as a means to programmatically execute actions on clusters of servers."

-> puppet - a language to describe your datacenter. Just one thing to say about it - A MUST-HAVE. It's hard to describe all the things this tool does (definitely expect a detailed post on it soon). It is a language to describe server configuration: you can group machines into classes and describe which services to run and which users to have on various types of nodes with various operating systems. If you seek a simpler replacement for cfengine - this is the choice ;-)

-> ganglia - a monitoring framework designed for clusters. Supports aggregation functions for multiple nodes.

-> freeIPA - an authentication framework based on kerberos and ldap.

If you think the list is not complete, please comment ;-)

Sunday, April 11, 2010

How to migrate linux between different hardware

Any experienced systems administrator must have come across this issue at some point. Your server became outdated 3 years ago, with all RAM and disk slots already filled. You need more power. Gotta buy a new server, set it up, install the software, configure it and run the services on the new machine.

But wait a minute... The old redhat 3.x you are presently using is hardly available now. And what about the code your fellow admin wrote 3 years ago before leaving soon afterwards (the code still works, but you have no idea about its dependencies etc.)?

The best idea is to clone the server and deploy the clone on the new one. But cloning is usually hardware dependent, which means you cannot redeploy the clone on servers which require different drivers.

Fortunately, linux handles hardware changes quite nicely and it is easy to set up hardware-independent imaging.

However some conditions apply:

-> on the new server, the only things that will change are probably the kernel modules used, along with the initrd. These are hardware dependent.

-> you will be running grub1 on the destination server (grub2 is not supported by systemimager AFAIK - see later)

-> you might experience some minor problems with udev, which may fail to start during boot. From my experience, this is usually not a major problem.

Software:

-> systemimager
-> an iso of your linux distribution

Hardware:
-> old server (ServerA)
-> new server (ServerB)
-> a third server (ServerC) to run systemimager-server - just for the migration time

Procedure overview:

We first get the image from ServerA and transfer it to systemimager-server on ServerC. We also install a basic, plain operating system on serverB.
Then we overwrite serverB with ServerA's image excluding parts which are hardware dependent (modprobe.conf, modprobe.d). We regenerate initrd and reboot ;-)


Diving in

-> install your linux distro onto ServerB from iso

-> On ServerC, download and install the systemimager-server package (instructions are available on the systemimager website; my ubuntu has it out-of-the-box in apt). The server on which you install it should have enough disk capacity to hold ServerA's filesystem. Start the systemimager-server-rsyncd service.
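On my ubuntu that step boils down to roughly this (package and service names as above; the image directory mentioned in the comment is the default one and may differ on your distro):

apt-get install systemimager-server             # pulls in the rsync-based image repository tools
/etc/init.d/systemimager-server-rsyncd start    # retrieved images end up under /var/lib/systemimager/images by default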

-> download and install systemimager-client on ServerA and ServerB

-> on serverA, turn off the firewall and shut down production services (ideally as many daemons as you can). Then run:
si_prepareclient --server serverC

-> on serverC:
si_getimage --image image-serverA --golden-client serverA

this will start image retrieval which is done with rsync

-> while the image is being cloned, log in to serverB and specify which files must not be overwritten. On serverB, edit the file /etc/systemimager/updateclient.local.exclude and add the following lines:

/etc/modprobe.d/

/etc/modprobe.conf


-> when image retrieval is finished, on serverB run:

si_updateclient --server serverC --image image-serverA --no-bootloader

this will transfer the image from serverC onto your new server serverB

-> to support the new hardware on serverB, you need to regenerate the initrd and check that all grub entries are correct. mkinitrd reads /etc/modprobe* to determine which hardware modules are needed to start the system; these files were not overwritten by the cloning because you excluded them earlier. (redhat provides an easy command for the whole process, called new-kernel-pkg). On redhat it also works to reinstall the kernel with the rpm --force option.
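On a redhat-style system the regeneration step might look something like this (the kernel version is just whatever the cloned image carries):

KVER=$(uname -r)                            # e.g. 2.6.18-194.el5
mkinitrd -f /boot/initrd-$KVER.img $KVER    # rebuild the initrd against the new hardware's modules
cat /boot/grub/grub.conf                    # eyeball the entries: kernel, initrd and root device must match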

-> after this is finished, reboot.

Conclusion

That's it. In practice this has worked for me several times. I would mostly reimage an IBM eServer (from /dev/sda) onto an HP ProLiant (onto /dev/cciss/c0d0) and vice versa. It also worked well on dell blades. I have successfully done V2V and P2V migrations on vmware, xen and kvm. The systems were redhat 4 and 5, and debian.
However, I expect some problems might arise if you tried to advance too much in kernel versions (for example running an ancient redhat with a modern kernel on new hardware). An idea to handle this is to exclude from imaging not only the modprobe* stuff but also the whole /boot partition and /lib/modules/*. This is still to be tested ;-)