Sunday, December 27, 2015

Linux | Guidelines for setting up a Linux Server as a VM

These are plain guidelines for setting up a Linux server as a virtual machine. As time passes I keep seeing the same mistakes from fellow admins over and over again, so I decided to write some of them down.

Feel free to add anything else in the comments, or point it out should I be making a mistake somewhere along the way ;)

First things first: you have to decide what the use of the VM will be. Will it be a file server, a web server, or a database server? Perhaps it will be a dedicated VM for a couple of users. Whatever your choice, if you are setting up the server as a virtual machine in your infrastructure, there are a few things to keep in mind. These are a few golden rules for setting up a Linux VM so that you don't end up crying later when things go bad.




Before reading further I would like to point out that this is just "my way of doing things". I mention it because I noticed that some people disagree with this method. I don't say that this is the right way and yours is wrong; I merely point out that this has always worked for me. I've never been burned doing it this way, but I have seen problems doing what most people do. Then again, 5+5=10, but so is 8+2, and so is 3+7.



Golden Rule #1

Always set up each of the partitions on their separate virtual drives!


Yes! That is the way to go with virtual machines. Over and over again I keep seeing Linux VMs with one large virtual disk divided into several smaller partitions. The time of physical servers is slowly coming to an end, but even back when there were only physical servers, people knew that separating the partitions onto their own drives is the best way to go.

When I posted this article two weeks ago, I immediately noticed that many people disagree with this method. I don't know why, but this is how I set up my VMs according to my past experience, not only with Linux but with Windows too. Think about it a little: this way, not only can you provision more space on each of the partitions as needed, but you can also distribute (Storage vMotion) all of your vdisks across your storage systems.


This is a bad example of a Linux VM setup:

As you can see from the image, this is an Ubuntu VM with a single 40G virtual hard drive. Running df -h on this VM gave the following result:

# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       504M  128M  352M  27% /boot
/dev/sda2        39G  1.7G   38G   2% /
devtmpfs        912M     0  912M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M  8.5M  912M   1% /run

What could possibly happen here is the root file system filling up. Then the server comes to a stop, and trust me, it will happen at the worst possible time. In times like these, with all the possibilities that virtualization offers, there is really no need to set up a single vdisk on any VM, not just Linux.
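When the root file system does fill up, the first step is finding out what is eating the space. A quick sketch (the find_big function name is mine, not a standard tool):

```shell
# List the largest top-level entries under a mount point, sizes in KB.
# -x keeps du on one file system, so it won't wander into other mounts
# when scanning /. Usage: find_big <mount-point> [how-many]
find_big() {
    du -xsk "$1"/* 2>/dev/null | sort -rn | head -n "${2:-5}"
}

# find_big /    -> the five biggest space consumers on the root fs
```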

This is a somewhat more appropriate setup of a Linux VM:


In the above image, HD1 hosts the /boot and /(root) file systems. HD2 holds the /var file system, as this is a database server. And HD3, which is 8G, holds the swap space.

Because this is a DB/web server, an even better version of this VM would have 6 hard disks:
HD1 (thin) - 1G - /boot
HD2 (thin) - 8G - /(root)
HD3 (thin) - 8G - /tmp (keep in mind that /tmp should be at least twice the size of the RAM)
HD4 (thick) - 16G - /var 
HD5 (thick) - 8G - /var/log
HD6 (thick) - 8G - swap
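To make the idea concrete, the six-disk layout above could end up looking something like this in /etc/fstab (the device names are illustrative; on a real system you would use the actual devices or, better, UUIDs):

```
/dev/sda1   /boot      ext4    defaults    0 2
/dev/sdb1   /          ext4    defaults    0 1
/dev/sdc1   /tmp       ext4    defaults    0 2
/dev/sdd1   /var       ext4    defaults    0 2
/dev/sde1   /var/log   ext4    defaults    0 2
/dev/sdf1   none       swap    sw          0 0
```

With one partition per virtual drive, each line maps cleanly to one vdisk, which is exactly what makes later resizing and Storage vMotion painless.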

And even another example for a file server:
HD1 (thin) - 1G - /boot
HD2 (thin) - 16G - /(root)
HD3 (thick) - 500G - /home
HD4 (thin) - 16G - /tmp
HD5 (thick) - 8G - swap


Golden Rule #2

Unless you know exactly what you are doing, and what you want to achieve, don't use LVM (Logical Volume Manager) on virtual machines!


When I first said this, people misunderstood me, so let me paraphrase a bit: unless you know exactly what the purpose of the LVM will be, don't set up your VMs using LVM. This doesn't mean that it is wrong to use it; it just means that it is wrong to use it without an intended purpose.

I mention this here because many Linux distros use LVM by default. CentOS is one example. If you boot a CentOS installation ISO and quickly click Next, Next, Finish through the installer, you will end up with a Linux system using LVM on its partitions. For a physical server this may be somewhat OK, but in most cases it is bad for a VM.

It is true that if you set up your VM using LVM, you can provision more space on your partitions on the fly. But think about it: exactly how critical can this VM be for you to say "I must never reboot this VM"?

Using LVM on a VM can potentially lead to problems later. During my time as a sysadmin I've only once had a "need" for LVM, and I used it for one partition only, because the agreement was that we had to provision some space immediately and some more later (the client was unsure of how much space they needed, and I was also happy to experiment). The reality is that later I found out I didn't need to use LVM anyway.

Q: Then how can I add more space to the VM should I need to extend a partition?
A: Use a gparted bootable ISO to extend your partitions.
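With plain partitions you don't even always need to boot from an ISO. After growing the vdisk in vSphere, tools like growpart (from the cloud-utils / cloud-guest-utils package) and resize2fs can extend things online. A sketch, assuming the file system is ext4 and the partition being grown is the last one on /dev/sda:

```
# 1. Grow the virtual disk in vSphere, then make the guest rescan it
echo 1 > /sys/class/block/sda/device/rescan

# 2. Grow partition 2 of /dev/sda to fill the new space
growpart /dev/sda 2

# 3. Grow the ext4 file system to fill the partition (works while mounted)
resize2fs /dev/sda2
```

These commands need root and a real disk behind them, so treat this as a checklist rather than a copy-paste script.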


Golden Rule #3

If you use LVM, /boot and /(root) should always be set up traditionally.

If you have ever had to rescue a server that fully utilizes LVM, then, my friend, you share the pain. To avoid these problems, always set up the /boot and /(root) file systems using the traditional approach. Also make sure that any other critical partition holding working system files uses the traditional approach as well.

HD1 - 1G - /boot
HD2 - 8G - /(root)
HD3 - LVM partitions
HD4 - LVM partitions
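For the disks that do carry LVM, the setup is a short command sequence like the following (the volume group and logical volume names, sizes, and devices are made up for the example):

```
pvcreate /dev/sdc /dev/sdd           # the LVM disks become physical volumes
vgcreate vg_data /dev/sdc /dev/sdd   # one volume group across both disks
lvcreate -n lv_data -L 20G vg_data   # carve out a 20G logical volume
mkfs.ext4 /dev/vg_data/lv_data       # put a file system on it
```

Note that /boot and /(root) stay out of this entirely, which is the whole point of the rule.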

You've been warned about this!

Golden Rule #4

Partitions/disks holding databases should always be set up as thick provisioned


Yes, including the swap space. Swap is essentially a database that Linux uses to store some of the RAM contents, so swap should also be thick provisioned. The partition holding the database tablespaces is usually /var/lib. As a general rule of thumb, you always want to set up at least /var on a separate partition. If you plan on being even more precise, you can set up /var/lib on one partition and /var/log on another, smaller partition, because /var/log holds the log files and is constantly under pressure from the syslog service.

This is an example of a precisely set up database server (note that I don't do this that often, but it is a correct way to do it):

HD1 (thin) - 1G - /boot
HD2 (thin) - 8G - /(root)
HD3 (thin) - 8G - /tmp
HD4 (thin) - 8G - /var
HD5 (thick) - 80G - /var/lib
HD6 (thin) - 8G - /var/log
HD7 (thick) - 8G - swap
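Once such a server is installed, it is worth verifying that each mount point really landed on its own device. findmnt (part of util-linux) makes this quick; a loop like the one below is just a convenience sketch:

```shell
# Print the backing device for each mount point in the plan above.
# A "(not a separate mount)" line means that path still lives on /.
for mp in /boot / /tmp /var /var/lib /var/log; do
    src=$(findmnt -no SOURCE "$mp" 2>/dev/null) || src="(not a separate mount)"
    printf '%-10s %s\n' "$mp" "$src"
done
```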

If this is to be a highly utilized database, then you should consider putting the /var/lib hard disk on faster storage, maybe with its own dedicated LUN.


Golden Rule #5

/tmp and swap partitions should be at least twice the size of your server's RAM

If your RAM is 8G, then /tmp and swap should each be 16G. This is just a basic guideline for setting up any Linux server, not just a Linux VM.

Some users pointed out to me that this is wrong, and after thinking about it a little, I will say that I somewhat agree.

But I will stand my ground that the swap should be at least the size of the RAM (up to a certain amount of RAM).

For example:
If the VM has 2G of RAM, then I would put in 4G of swap. I apply this rule up to 8G of RAM. For anything above that, I make the swap at least the size of the RAM. But if it has 128G of RAM, then using 128G of swap would be wasted space. So for larger amounts of RAM, you can always provision less swap and add more should the need arise.
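The rule of thumb above is easy to express as a tiny helper (the function name and the 8G cut-off are just my rule, not any official formula):

```shell
# Suggested swap size in GB: double the RAM up to 8G of RAM,
# then match the RAM size (trim further for very large machines).
swap_size_gb() {
    if [ "$1" -le 8 ]; then
        echo $(( $1 * 2 ))
    else
        echo "$1"
    fi
}

# swap_size_gb 2   -> 4
# swap_size_gb 16  -> 16
```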

As for the /tmp partition, many older books also recommend that /tmp be twice the size of the RAM. Of course, with today's technology this may be pointless. In any case, many applications use the /tmp partition to store temporary files, so that folder should always be on its own separate partition.

I handle /tmp similarly to swap: up to 8G of RAM I make /tmp twice the size of the RAM, and above that, the size of the RAM or less.


Golden Rule #6

Never skip the installation of VMware Tools. Ever!


"I am just going to install this VM now and I will install the applications and I will install the VMware Tools later" 

I'll make it like this now and make it right later. Sure you will! No! That is one of the biggest sysadmin lies. Just as you "fixed later" that patch cable that you ran over three cabinets and through the passage from the switch to that server at the end of the server room, that same way you will "install" the VMware Tools "later". No, my friend. The time to install VMware Tools is exactly THE FIRST THING AFTER you have finished the installation of the VM.

Why? Because, my friend, this is Linux. The installation of the VMware Tools rebuilds kernel modules and the initrd image, so you can never be sure that installing VMware Tools later (read: after the server has gone into production) will not screw up anything with whatever custom apps that server might be running. Plus, it requires a reboot, so you will have to go through the whole email procedure of requesting permission to install and reboot the server at 00:00h.

Without VMware Tools, ESXi will continuously be swapping to disk, and that adds extra overhead to your infrastructure. Those who have used the Veeam ONE monitoring solution know what I am talking about. Trust me, with VMware Tools on each of your VMs, your ESXi servers will be grateful.
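On current distributions, the easiest way to honor this rule is to install open-vm-tools, the open-source implementation that VMware recommends over the legacy tarball installer for modern Linux guests:

```
# Debian/Ubuntu
apt-get install open-vm-tools

# RHEL/CentOS
yum install open-vm-tools
```

It comes straight from the distro repositories, so it gets updated with the rest of the system instead of needing a manual reinstall after every kernel change.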






