Virtualization Cluster With CentOS 7

Posted on March 2, 2024 (Last Updated On: April 13, 2024)

This article is an overview of the steps taken to create a small virtualization cluster built to meet personal infrastructure requirements like file sharing, syncing, and trying new applications. I chose a cluster over a single server to eliminate a single point of failure and to allow hardware maintenance without interrupting services. The goal was something resilient that is easily maintained and simple to operate.

The Hardware

I’m a hardware guy. It’s not everyone’s cup of tea, but I’m the kind of guy that remembers the specs of every system he’s ever had. As such, this is a section that can be skimmed by those who just want the list, or skipped by those who have no interest. Over the years, I’ve become accustomed to having a Windows desktop that I use for the rare things I find necessary and a Linux desktop that I use for everything else. Additionally, I’m an AMD fanboy. Though I am well aware of Intel’s superior performance in both processing power and energy consumption, I just can’t seem to shake “the feels” I get when buying AMD. That said, when it came time to build, my sensibilities (I’m a middle-of-the-road kind of guy when it comes to performance) and loyalties sent me down the path of purchasing a couple of motherboard/processor combos that included an FX-6300. This served me well for a year, and then a regional computer store had a sale on FX-8320s, which came with free motherboards.

I quickly purchased the upgrades and ended up with a couple of combinations lying around. This was ripe for projects, but with the holiday season approaching, using some of the parts to make a Christmas gift became too tempting. Shortly after, I decided I would consolidate into a single desktop that I would dual-boot, and this left me with an additional computer. Upgrading the processor in my main desktop (since it would be my only one) to an FX-8370 left me with the two FX-8320/motherboard combinations. For each of these, I purchased 16GB of RAM, two 120GB solid state drives, and two 3TB hard disks. With this equipment, a couple of cheap ATX power supplies, and a quick order of two cheap 2U cases from a popular website, I was ready to go.

The Software

Though Ubuntu will likely always have a soft spot in my heart, being a Linux systems administrator in the US, I have a predilection for CentOS. With version 7 available and still gaining ground in my production environment, this seemed like a solid project for getting familiar with the nuances of the new version, and it would provide a long-supported, stable environment for my hypervisor. My method is to install the base OS with OpenSSH and build from there.

Once the installation was complete, I logged in and proceeded to install using Package Groups, a feature of Yum, Red Hat’s package manager. These are logical groupings of packages for completing a specific task. The “Virtualization Host” group contains all the packages necessary for running a minimal hypervisor. The following command initiates the install:

yum groupinstall "Virtualization Host"
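
If you want to see what other groups are available, or check what a group will pull in before installing it, yum can list and describe them:

# List all available package groups
yum grouplist

# Show the packages contained in a specific group
yum groupinfo "Virtualization Host"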

Once the required packages were installed, I moved on to the preparation and configuration of my cluster.

The Storage

Gluster is the storage solution I decided to go with, as it is versatile and allows for easy migration of stored images across cluster nodes. What is Gluster? It’s a clustered file system. Think of it as a distributed NAS. Storage nodes can be added and configured as mirrors, stripes, or combinations of the two (similar to RAID1, RAID0, and RAID10, respectively). Additionally, multiple nodes can be added to the storage cluster to expand storage without nodal redundancy. For this project, I decided to use a mirrored configuration. This creates a level of redundancy, since both nodes will contain the same data. One of the benefits of Gluster is that the protocol writes to both nodes at the same time, so there is no syncing delay.

An interesting aspect of Gluster is that it uses a configured file system as the basis of its storage. Logical Volume Management (LVM) is my preferred method of configuring Linux storage, and the following is an overview of the configuration I decided to build on the back end. I divided one of the 120GB SSDs into three partitions: 512MB as sda1 for /boot, 18GB as sda2 for a Physical Volume for the OS Volume Group, and the remaining space as sda3 for a Physical Volume for a Fast Storage Volume Group. I then added the second SSD as another Physical Volume for the Fast Storage Volume Group. A Logical Volume (LV) for the root (/) file system was carved out of the OS Volume Group, followed by an LV for VM Storage from the Fast Storage Volume Group. A Data Storage Volume Group was created out of the two 3TB hard disk drives, and a small (64GB) logical volume for ISO Storage as well as a striped logical volume for Data Storage were created. All logical volumes were formatted with XFS and then mounted as /glusterfs/vmstore, /glusterfs/isostore, and /glusterfs/datastore.
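
For reference, below is a rough sketch of the LVM commands behind a layout like this. The device names (/dev/sda and /dev/sdb for the SSDs, /dev/sdc and /dev/sdd for the hard disks) and the volume group names are assumptions for illustration; the /boot and root volumes would normally be created during the CentOS install itself.

# Fast storage: remainder of the first SSD plus the entire second SSD
pvcreate /dev/sda3 /dev/sdb
vgcreate vg_fast /dev/sda3 /dev/sdb
lvcreate -n lv_vmstore -l 100%FREE vg_fast

# Data storage: the two 3TB hard disks
pvcreate /dev/sdc /dev/sdd
vgcreate vg_data /dev/sdc /dev/sdd
lvcreate -n lv_isostore -L 64G vg_data
lvcreate -n lv_datastore -i 2 -l 100%FREE vg_data  # striped across both disks

# Format with XFS and mount under /glusterfs
mkfs.xfs /dev/vg_fast/lv_vmstore
mkfs.xfs /dev/vg_data/lv_isostore
mkfs.xfs /dev/vg_data/lv_datastore
mkdir -p /glusterfs/vmstore /glusterfs/isostore /glusterfs/datastore
mount /dev/vg_fast/lv_vmstore /glusterfs/vmstore
mount /dev/vg_data/lv_isostore /glusterfs/isostore
mount /dev/vg_data/lv_datastore /glusterfs/datastore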

After configuring the storage volumes on the server, it’s time to set up Gluster. The first step is to add the Gluster repo to your package manager. This can be done using the following command:

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo

Once the repo has been added, you can install the server using yum:

yum -y install glusterfs-server

This will need to be completed on both of the servers to facilitate the redundancy that Gluster provides. After installing it on the second server, I proceeded to configure Gluster.

First, I started the Gluster daemon on both servers:

systemctl start glusterd
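
To make sure the daemon comes back after a reboot, it’s also worth enabling it on both servers:

systemctl enable glusterd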

Next, from the first server, I initiated the cluster by probing the second node:

gluster peer probe server2

So that the configuration uses hostnames in both directions, I ran the corresponding command on the second server:

gluster peer probe server1
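
At this point, running the following on either node should show the other as a connected peer, confirming the probes worked:

gluster peer status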

We can now create the volumes:

gluster volume create gfs_vmstore replica 2 server1:/glusterfs/vmstore/brick1 server2:/glusterfs/vmstore/brick1
gluster volume create gfs_isostore replica 2 server1:/glusterfs/isostore/brick1 server2:/glusterfs/isostore/brick1
gluster volume create gfs_datastore replica 2 server1:/glusterfs/datastore/brick1 server2:/glusterfs/datastore/brick1
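
One step worth calling out explicitly: a Gluster volume must be started before it can be mounted, so each new volume needs a start command, and the info command will confirm that all three report “Status: Started”:

gluster volume start gfs_vmstore
gluster volume start gfs_isostore
gluster volume start gfs_datastore
gluster volume info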

Once the Gluster volumes have been created and started, they need to be mounted to be used. In this case, these machines are going to be both the clients and the servers, so the mounts need to be added to the fstab using the following lines:

server1:/gfs_vmstore /data/vmstore glusterfs defaults,_netdev,backupvolfile-server=server2 0 0
server1:/gfs_isostore /data/isostore glusterfs defaults,_netdev,backupvolfile-server=server2 0 0
server1:/gfs_datastore /data/datastore glusterfs defaults,_netdev,backupvolfile-server=server2 0 0

Once the mounts have been added to the fstab, they need to be mounted:

mount -a
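
A quick df will confirm that the Gluster volumes are mounted:

df -h /data/vmstore /data/isostore /data/datastore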

Now that the hypervisor has been installed and the storage has been enabled, the server is ready to have the virtualization environment configured. This can be done entirely from the command line, but graphical tools on a workstation make creating VMs faster and easier to follow visually.

The Interface

Virtual Machine Manager, or Virt-Manager, is a graphical tool for the Linux desktop that allows interfacing with libvirtd (an API for managing a myriad of hypervisors, notably KVM, which was installed in the first part). I use Ubuntu flavors as my main desktop. As such, the command for installing the application on my workstation is as follows:

sudo apt-get install virt-manager

Once the application was installed, I started it and configured it to manage my virtualization cluster. The cluster servers need to be added to the interface. This is done by going to the File menu and selecting “Add Connection.” In the dialog box that opens, I left the default under “Hypervisor” (QEMU/KVM), checked “Connect to remote host,” left the method as SSH and the username as root, and set the hostname to server1. I then did this again using the hostname of server2.

[Screenshot: Add Connection dialog for server2]
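
The same remote connections can also be sanity-checked from a terminal using virsh, libvirt’s command line client. This is just a verification sketch, assuming SSH access as root like the Virt-Manager connections above:

# List all VMs (running or not) on each remote hypervisor
virsh -c qemu+ssh://root@server1/system list --all
virsh -c qemu+ssh://root@server2/system list --all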

Once the hosts are added to the interface, double-clicking them opens a dialog for entering the root password you set at install.

[Screenshot: Virt-Manager main view with both hosts connected]

Next, right-clicking on the host and selecting “Details” brings up a screen for configuring the host. At this point, I configured the storage.

After clicking the “Storage” tab and then the plus (+) button, a new dialog opens with a wizard for configuring a storage pool. I selected “File System Directory” and gave it the name “VMStore.”

[Screenshot: naming the VMStore storage pool]

On the next page I set the path of the directory to /data/vmstore.

[Screenshot: setting the VMStore target path]

After clicking “Finish,” the new pool is available in the storage list. I proceeded to configure the rest of the shares (DataStore at /data/datastore and ISOStore at /data/isostore).

[Screenshot: storage pool list for the node]
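
The same pools can also be defined from the command line with virsh, which is handy for scripting the second node. A sketch, using the same pool names as the wizard:

# Define, start, and autostart a directory-backed pool
virsh pool-define-as VMStore dir --target /data/vmstore
virsh pool-start VMStore
virsh pool-autostart VMStore
# Repeat for ISOStore (/data/isostore) and DataStore (/data/datastore)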

Following the same steps on the second host makes the Gluster-backed directories the storage pools on both nodes, which is what allows VMs to be migrated between the hosts.
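
With the shared storage in place, a migration can also be triggered from a terminal. A sketch, assuming a hypothetical VM named vm1 currently running on server1:

# Live-migrate vm1 from server1 to server2 over SSH
virsh -c qemu+ssh://root@server1/system migrate --live vm1 qemu+ssh://root@server2/system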

Now that the cluster nodes are configured in Virt-Manager, I am able to proceed with uploading ISO files and provisioning my first virtual machine. To do this, I used SCP to upload the CentOS 7 ISO to the /data/isostore directory on server1.

scp ~/Downloads/CentOS-7-x86_64-Minimal-1503-01.iso root@server1:/data/isostore

Doing this makes the file accessible from either host. In the interest of observing Gluster functionality, the first VM will be a CentOS 7 server on server2, using the ISO file that was uploaded to server1. Right-click on the host and select “New,” then select “Local install media” and click “Forward.”

[Screenshot: New VM wizard, selecting Local install media]

The next screen lets you choose which ISO should be used for the installation.

[Screenshot: New VM wizard, install media selection]

Clicking “Browse” opens a dialog that looks like the storage tab of the details menu. Selecting the ISOStore pool, it’s apparent that Gluster is functioning, since the CentOS ISO that was uploaded to server1 is available on server2.

[Screenshot: selecting the CentOS ISO from the ISOStore pool]

Next, the OS type and version need to be selected. This loads specific configurations for the VM to improve performance.

[Screenshot: selecting OS type and version]

Set the memory and processor count.

[Screenshot: setting memory and CPU count]

On the storage page, choose “Select or create custom storage.”

[Screenshot: storage selection page]

Clicking “Manage” brings up the familiar storage dialog. Here, I go to the VMStore pool and click the new volume button (the “+” above the storage volume contents), then name the volume, set the capacity, and click “Finish.”

[Screenshot: creating a new volume in the VMStore pool]

Highlight the newly created volume, and click “choose volume.”

[Screenshot: choosing the new storage volume]

Once the volume is selected, I’m taken back to the New VM wizard, and I click “Forward” to the last screen.

[Screenshot: final page of the New VM wizard]

On the final page, I name the VM and click “Finish.”

[Screenshot: naming the VM and finishing the wizard]

This opens a new dialog, and after entering the root password, the console of the new VM comes up. From here, I can begin installing the OS on my new VM.

[Screenshot: console of the new VM showing the CentOS installer]

Proceed with the normal CentOS installation process.
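
As an aside, the entire wizard sequence can also be done in one shot with virt-install from the workstation. The following is a rough equivalent of the VM created above; the VM name, memory, CPU count, and disk size are assumptions for illustration:

virt-install \
  --connect qemu+ssh://root@server2/system \
  --name centos7-test \
  --ram 2048 \
  --vcpus 2 \
  --disk pool=VMStore,size=20,format=qcow2 \
  --cdrom /data/isostore/CentOS-7-x86_64-Minimal-1503-01.iso \
  --os-variant centos7.0 \
  --graphics vnc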


Steven Van Setten
Steven Van Setten is a Senior System Administrator and DevOps engineer in Chicago. His background is in manufacturing and construction, but he started in IT after the housing market declined in ‘08. He’s an avid photographer, having worked in a studio early in adulthood, and has a growing interest in media production. He still enjoys working with his hands, whether it’s projects around the house or installing new computer hardware.

Comments

Adrian Bradshaw:

Very cool but I can’t help thinking you could have used oVirt 🙂 which would add a great UI, live migration etc etc

Wolf Paul:

In this section above:

| We can now create the volumes:
|
| gluster volume create gfs_vmstore replica 2 server1:/glusterfs/vmstore/brick1 server2:/glusterfs/vmstore/brick1
| gluster volume create gfs_isostore replica 2 server1:/glusterfs/isostore/brick1 server2:glusterfs/vmstore/brick1
| gluster volume create gfs_datastore replica 2 server1:/glusterfs/datastore/brick1 server2:glusterfs/datastore/brick1

you have a typo, the second commandline should have “isostore” in the server2 path as well.

It would also make this article more helpful for newcomers to RHEL/CentOS 7 if you could provide sizes for the physical volumes on the SSD and for the root LV.