P2V: Debian Testing PXE server and VMware Workstation 11

MARCH 2017 – so I’ve had an old PXE server, an ASUS Spresso (S-presso) with a Pentium 4, sitting around for the last few years, and got a wild hair to do a P2V (Physical-to-Virtual) migration of it. First, I had to make sure it still worked; the plan was then to upgrade its OS, Debian Testing, to modern standards (in a VM). The last time this box was booted was November 2013, and it was provisioned with “Dreamlinux 5” back in January of 2012.

Thankfully, the SATA drive and CMOS battery have survived with apparently no ill effects, since the box has been moved around through a couple of house changes with no special storage arrangements – it’s basically been “unpowered” sitting in a corner.

ASUS Spresso

ASUS Spresso Open

ASUS Spresso back

( NOTE: these are stock images. My unit has a PCI slot on the right holding a Gig Ethernet card, and an unused AGP video card slot on the left. The motherboard only supports 100Mbit Ethernet. )

A few notes about PXE:

REFERENCE – https://en.wikipedia.org/wiki/Preboot_Execution_Environment

PXE HOWTO – https://debian-administration.org/article/478/Setting_up_a_server_for_PXE_network_booting
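The HOWTO linked above covers a full dhcpd + tftpd setup; as a rough point of reference, the moving parts of a small PXE server can also be expressed in a few lines of dnsmasq configuration. This is just a sketch with hypothetical interface names, addresses and paths – not the actual config running on the Spresso:

# /etc/dnsmasq.conf – minimal PXE server sketch (hypothetical values)
interface=eth1                              # the isolated segment PXE clients live on
dhcp-range=192.168.2.100,192.168.2.150,12h  # hand out leases to clients
dhcp-boot=pxelinux.0                        # tell clients which bootloader to fetch
enable-tftp                                 # serve it via dnsmasq's built-in TFTP server
tftp-root=/srv/tftp                         # pxelinux.0 and pxelinux.cfg/ live here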

Tools used:

– FSArchiver

– VMware Workstation 11

– SystemRescueCd

– KNOPPIX

– GParted

STEP 1: BACKUP the physical PC

This 32-bit box is so old that I had to install FSArchiver (my preferred bare-metal backup program) manually. Incidentally, it also weighs a ton with a DVD drive and a standard SATA hard drive, so removing the DVD and converting to an SSD (or laptop drive) to save weight would be recommended if I wanted to haul it around.

(Commands entered on Physical box, bash shell, as root)

apt update && apt install fsarchiver -y

ddir=/mnt/extra
outfile=spresso-p2v-backup-20170302.fsarchive.fsa
rootdev=/dev/sda3
# -o overwrite existing archive, -A allow a mounted filesystem,
# -z 1 light compression, -j 2 compression jobs
time fsarchiver -o -A -z 1 -j 2 savefs \
$ddir/$outfile \
$rootdev
cd $ddir
# archinfo writes to stderr, hence the 2> to capture the contents listing
fsarchiver archinfo $outfile 2> flist--$outfile.txt

I then copied “spresso-p2v-backup-20170302.fsarchive.fsa” to my ZFS file server (method omitted), since we will need it later for the restore.

Also, since all of my Squid cache and ISO files for PXE are on /mnt/extra on the Spresso, I made a tar backup of those to my ZFS server.

cd /mnt/extra
time tar czpf /mnt/bkpspace/spresso-mnt-extra-bkp-20170302.tgz *

STEP 2: Set up the P2V VM in VMware Workstation 11 and restore the backup

I tried as much as possible to duplicate the physical box in the VM settings, but in this case I already had a P2V VM of my daily workstation (Xubuntu 14.04 LTS) that I wasn’t using, so adding a 3rd virtual drive to the existing VM and reusing the existing GRUB saved me some steps. Running ‘ update-grub ‘ on the Ubuntu side after restoring enabled me to boot my restored environment.

VM details:

RAM: 1.5GB

Processors: 1

SCSI1: 23.5GB (this is the Ubuntu drive)

SCSI2: 20GB ( this is /home and swap )

SCSI3: This was added “as new” and then Expanded from 10GB to 60GB (after running out of space)

Net1: Bridged (gets DHCP address from my LAN)

Net2: Started out as Host-only and ended up as LAN Segment – this enabled the “client” bare VM to boot over the network.

Details for the Spresso PXE P2V VM

I used a SystemRescueCd ISO to boot the VM, issued ‘ startx ‘ and used ‘ fdisk ‘ to make 2 partitions that fairly closely matched my physical layout.

PROTIP: In hindsight, I should have made the root partition around 15-20GB because of all the package upgrades that needed to be downloaded. (I ran out of space on root once during the upgrade and had to issue an ‘ apt-get clean ‘ to free up space.)

sdc3: 10GB ext4 (restored root) – Advice: Make yours bigger.
sdc4: 50GB ext4 (/mnt/extra)

PROTIP: As root, run ‘ tune2fs -m1 /dev/sdc3 ‘ — this drops ext4’s reserved-blocks percentage from the default 5% to 1%, giving you some extra usable free space.

Now, since I had plenty of space to work with, I copied my FSArchiver backup file from my ZFS server to the 50GB VM partition.

(Still in SystemRescueCd, in the VM)

mkdir /mnt/tmp
mount /dev/sdc4 /mnt/tmp
cd /mnt/tmp
ifconfig # Make note of my VM's IP address: 192.168.1.95
nc -l -p 32100 |tar xpvf -
##Netcat - Listen on port 32100 and untar whatever is received

(Now on my ZFS server)

tar cpf - spresso-p2v-backup-20170302.fsarchive.fsa |nc -w 5 192.168.1.95 32100
## (tar to stdout and stream a copy of the backup file to the VM's IP)

This is basically a quick-and-dirty way to transfer files over the network without resorting to FTP or slow SSH file copies – the fsarchive backup file ended up being around 3GB and transferred in less than a minute over Gig Ethernet.
Netcat REF: http://www.terminally-incoherent.com/blog/2007/08/07/few-useful-netcat-tricks/

Now to restore the backup:

(still in SystemRescueCd, in the VM)

time fsarchiver restfs spresso*.fsa id=0,dest=/dev/sdc3 # restore archive id 0 onto the new root partition (sdc3, per the layout above)

And that’s half the battle right there. Now we just need to make some changes to the restored /etc/fstab so it will boot. ( If we were doing a full migration, we would also need to adjust things like /etc/network/interfaces , /etc/rc.local , /etc/hostname , and /etc/hosts )

For completeness, here’s more info on how to do a Linux bare-metal backup and restore:
REF: http://crunchbang.org/forums/viewtopic.php?id=24268

So now I ‘ reboot ‘ into the VM’s already-installed Ubuntu 14.04 and edit my Dreamlinux /etc/fstab.

mkdir /mnt/tmp
mount /dev/sdc3 /mnt/tmp
screen -aAO # PROTIP: GNU screen is invaluable for switching between virtual terminal windows
fdisk -l # check out our disks
blkid # get partition labels and UUIDs
/dev/sdb2: LABEL="swapb" UUID="f0eb7148-4ff2-4eeb-a82c-349d384a5255" TYPE="swap" PARTUUID="ff1ca075-02"
/dev/sdc3: LABEL="root" UUID="38f1d4be-3293-4272-ab79-4ad76cbd5a36" TYPE="ext4" PARTUUID="3a3b65f3-03"
/dev/sdc4: LABEL="extra" UUID="603a61fc-4436-4cb0-baac-ef9170754228" TYPE="ext4" PARTUUID="3a3b65f3-04"

NOTE that FSArchiver will by default restore the same UUID and filesystem label, so no worries – we don’t have to modify the VM’s fstab entry for the root filesystem.

( Now Hit Ctrl-A, then c to create a new Screen )

cd /mnt/tmp/etc
jstar fstab

( Use your own editor here. I happen to like Wordstar keybindings. )

Now we can switch between those 2 virtual terminal windows and even copy/paste text without using the mouse. (See ‘ man screen ‘ for more details.)

To make a long story short, I added or verified the following to SpressoVM’s /etc/fstab to enable my existing swap partition and double-check that the root filesystem would be mounted as expected.

LABEL=swapb none swap sw,pri=2 0 0
LABEL=extra /mnt/extra ext4 defaults,noatime,rw 0 2

While I was here, I also restored the /mnt/extra files from their tar backup.

umount /mnt/tmp # we're done with restored root
mount /dev/sdc4 /mnt/tmp
cd /mnt/tmp
nc -l -p 32100 | tar xzpvf -

( then on my ZFS Server )

cd /mnt/bkpspace; time cat spresso-mnt-extra-bkp-20170302.tgz |nc -w 5 192.168.1.95 32100

( now back in the VM )

update-grub # make sure ubuntu knows how to boot dreamlinux
reboot

And that’s pretty much it. After that, to make a long story short (again), I went through several cycles of ‘ apt upgrade ‘ and ‘ apt-get dist-upgrade ‘, making VM Snapshots along the way, and I had to make allowances for files that were provided in more than one package.
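For the curious, each cycle looked roughly like the following – a sketch of the process rather than a transcript, with dpkg’s --force-overwrite option as the standard escape hatch for those duplicate-file errors:

# Repeat until apt reports nothing left to upgrade; snapshot the VM between rounds
apt update
apt upgrade -y
apt-get dist-upgrade -y
# If dpkg aborts with "trying to overwrite ..., which is also in package ...":
apt-get -o Dpkg::Options::="--force-overwrite" -f install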

I also upgraded the kernel to linux-image-4.9.0-1-686. The full saga is documented with errors and fixes, so email me if you want to know more (but it’s a pretty long read.)

apt-cache search 4.9.0 |awk '{print $1}'
apt-get install linux-headers-4.9.0-1-686 linux-headers-4.9.0-1-686-pae \
linux-headers-4.9.0-1-all linux-headers-4.9.0-1-common linux-support-4.9.0-1 \
linux-image-4.9.0-1-686-pae

STEP 3: I tested PXE booting with another VM using a dedicated network segment.

The end result of all this: I created a “blank” VM with the capability to boot from the network (I needed to modify the VM’s BIOS for this by pressing F2 at boot), and after switching the 2nd network adapter from “Host only” to “LAN segment” I successfully booted the VM from PXE! Mission Accomplished!

Now, since I’ve done all the heavy lifting in the VM and my original box still works the same as it did (but running old software), I can use pretty much the same procedure to do a V2P (Virtual to Physical) onto a spare 500GB laptop drive instead of repeating the upgrade all over again.

For brevity, these are the instructions I include in my bkpsys-2fsarchive script on how to Restore:

time fsarchiver restfs backup-root-sda1--ubuntu1404*.fsa id=0,dest=/dev/sdf1
Statistics for filesystem 0
* files successfully processed:....regfiles=159387, directories=25579, symlinks=49276, hardlinks=25, specials=108
* files with errors:...............regfiles=0, directories=0, symlinks=0, hardlinks=0, specials=0
real 4m26.116s
( 3.9GB )
 mkdir /mnt/tmp2
 mount /dev/sdf1 /mnt/tmp2
 grub-install --root-directory=/mnt/tmp2 /dev/sdf
mount -o bind /dev /mnt/tmp2/dev; mount -o bind /proc /mnt/tmp2/proc; mount -o bind /sys /mnt/tmp2/sys
chroot /mnt/tmp2 /bin/bash
update-grub
[[
Generating grub configuration file ...
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.2.0-36-generic
Found initrd image: /boot/initrd.img-4.2.0-36-generic
Found linux image: /boot/vmlinuz-3.19.0-25-generic
Found initrd image: /boot/initrd.img-3.19.0-25-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
Found Ubuntu 14.04.4 LTS (14.04) on /dev/sda1
done
]]
grub-install /dev/sdf # from chroot

^D
umount /mnt/tmp2/dev /mnt/tmp2/proc /mnt/tmp2/sys
umount /mnt/tmp2
# DON'T FORGET TO COPY /home and adjust fstab for swap / home / squid BEFORE booting new drive!
# also adjust etc/network/interfaces , etc/rc.local , etc/hostname , etc/hosts

Restored PXE-server Spresso running in VMware

Details of the "blank" VM that boots from PXE

BIOS details for the "client" PXE-booting VM

Once I had KNOPPIX up and running in the “blank” VM, I used ‘ GParted ‘ to make a 768MB Swap partition and used the rest as a “data” ext4 partition.

–Here’s the “client” VM booting over the network/PXE from my migrated Spresso VM:

VM booting over the network/PXE

–NOTE that while the PXE box is running a 32-bit processor and environment, CLIENT boxes can boot a 64-bit kernel and environment as long as the CLIENT processor is capable.

–The way my PXE environment is set up, clients only have a limited network connection through the PXE server and have to download everything (including OS install packages) via the Squid proxy cache running on the Spresso’s port 3128 for extra security – client boxes can’t ping out to random websites, and all downloads are logged.
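–For reference, pointing a Debian-based client at a proxy like that is a one-line APT setting. A minimal sketch, assuming the Spresso answers at 192.168.2.1 on the client segment (substitute the real address):

# /etc/apt/apt.conf.d/01proxy on a PXE client (hypothetical address)
Acquire::http::Proxy "http://192.168.2.1:3128/";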

Squid proxy REF: https://en.wikipedia.org/wiki/Squid_%28software%29

Knoppix DVD 64-bit (selection #99) in the "blank" VM

( After booting into text mode, I issued a ‘ startx startkde ‘ at the terminal prompt to get X up and running )

STEP 4: Upgrade the ISOs in the VM to latest Knoppix and SystemRescueCd (TODO)

STEP 5: V2P: Reverse the process and upgrade the physical box with all the updates from the VM

–I haven’t done the “V2P” – Virtual to Physical – part yet, but that’s not a high priority at this point since I have everything pretty much the way I like it right now. Maybe in a future update. 😉

AWS Outage: We Need to Talk About These Nines

AWS-Burn

I walked out of a meeting, preparing to go to lunch. One of the guys on the database team grabs me as I pass and informs me that his AWS (Amazon Web Services) permissions are broken. He’s unable to see any of his S3 buckets. I walk to my workstation, sit down, log into the console and find that none of our S3 buckets seem to exist. First things first: let my boss, the director of technology, know, then run to the development directors to inform them. Grabbing my laptop, it’s back to the conference room with my director and a fellow sysadmin. Amazon’s status site asserts that everything is actually okay. Operations is doing what they do best – scrambling. Email notifications are going out to the technology department, the sales department, and the customer management and support teams. Two of the team members from Operations and Help Desk were at an AWS conference. They chime in on the email threads, chortling digitally: the presenters had just finished explaining that eleven nines of availability meant the S3 service would only go down once every 10 million years when their presentation ground to a halt, because S3 was… unavailable.

This was a small outage for us. But an outage is an outage and they happen. Ultimately, they’re unavoidable because nothing is flawless. Mistakes will always happen eventually and Murphy has a law out there that’s still on the books. Within a few hours, the services were back up and our products recovered. It was a shock when it happened, but once we got our bearings, there was nothing we could do but accept it and wait. Notifications were out, and so it was time to monitor and send new ones when things were back up. This is both the benefit and burden of relying on someone else’s infrastructure. The aftermath seen in the headlines the next day is where it gets interesting. The tech world was awash in scolding sentiments about redundancy and proper architecture. There was a considerable amount of finger waving and condescension exclaiming that all those companies that suffered outages should have used multiple providers or at the very least multiple regions. But in all fairness, that’s not what these cloud service companies sell us.

Everyone in the industry knows that buzzwords are just that, words. Those of us in the trenches hear them day in and day out. They’re sometimes what gets a company to buy into a brilliant new project, and other times the thing that gets a company to push a futile, terrible and frustrating new project. The latest thing, for now, is to brag about the 9s of uptime. Five-nines has become a misused claim so pervasive that very few people consider what it actually means anymore. Truly offering that level of uptime would mean that something is only unavailable a total of 5.26 minutes every year. So at eleven-nines, a five-hour outage on S3 would be valid only if the service didn’t have another outage for about 57.04 million years. I don’t think there’s anyone who doesn’t realize this is hyperbolic and that the vendors are really just trying to express an extremely high level of confidence (read hubris) in their product.
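If you want to sanity-check that arithmetic yourself, it fits in a couple of shell one-liners with bc (a quick sketch; the figures agree with the ones above):

# Allowed downtime per year at five nines (99.999% availability), in minutes
echo "scale=4; 365.2425*24*60 * (1 - 0.99999)" | bc              # ~5.26 minutes
# Years between five-hour outages at eleven nines (99.999999999%)
echo "scale=4; 5 / (1 - 0.99999999999) / (24*365.2425)" | bc     # ~57039729 years, i.e. ~57.04 million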

Technologists who work in an enterprise environment understand how important high availability is and the consequences of not configuring the proper level of redundancy. Those of us who have made the shift to a public cloud infrastructure are cautious about how readily we believe the claims these providers make. Every one of us has been burned at some point or another but we try to recognize why it happens and make sure we find a way to correct it or work around it. While we may balk at their exaggerations, at some point you have to trust your vendor. Companies aren’t in the habit of purchasing an EMC and then going out and buying a 3Par as a backup when getting new SAN storage, so why shouldn’t the same mindset apply to infrastructure as a service?

While I understand that companies like Netflix invest large sums of money into building suites of applications specifically made to cause outages so they can work to ensure their reliability, I also understand that not every company can afford the time, effort or assets to do this. The promise of services such as AWS, Azure, Google Cloud and Digital Ocean is that companies can have access to the necessities of technological infrastructure in order to create, grow and innovate without the unattainable initial capital required to do so just ten years ago. With these services comes a certain expectation from their customer base that is in no way unwarranted. Amazon themselves got burned by this outage, as their status page was incorrect because it relied on the very services that went down. Most of the people who work with these things already knew they were relying on something that could fail and were taking a calculated risk in doing so, and those who didn’t, know now. Unfortunately, mitigating those risks often takes a considerable effort by several teams, and the driving forces behind it don’t always have the ability to allocate the necessary resources outside of their central team. Moments like this can sometimes be leveraged as proof of value for spending those resources, and those of us who lead these public infrastructure migrations try to do just that.

However, I question the validity of the allegations that customers should “know better.” These services are boasted as being unrealistically reliable, and while reason dictates that they can’t actually live up to their declarations, the reality shouldn’t be as far a departure from the claims as it is. If a company states it has x-nines of reliability, it is asking to be relied upon. And when it’s a company like Amazon, an established technological powerhouse which not only embraces its position as the leader in public cloud offerings but redefined the market, those assertions should come with a level of accountability and expectation. We have to trust our vendors, so maybe it’s time they assess their claims and make them a bit more realistic.

Virtualization Cluster With CentOS 7

newvm-8

This article is an overview of the steps taken to create a small virtualization cluster built for fulfilling personal infrastructure requirements like file sharing, syncing, and trying new applications. Reasons for creating a cluster instead of a single server are to eliminate a single point of failure, and to allow hardware maintenance without interrupting services. The goal was to have something resilient that is easily maintained and simple to operate.

The Hardware

I’m a hardware guy. It’s not everyone’s cup of tea but I’m the kind of guy that remembers the specs of every system he’s ever had. As such, this might be a section that can be skimmed by those who just want the list, or skipped by those who have no interest. Over the years, I’ve become accustomed to having a Windows desktop that I use for the rare things I find necessary and a Linux desktop that I use for everything else. Additionally, I’m an AMD fanboy. Though I am well aware of Intel’s superior performance both in processing power and energy consumption, I just can’t seem to shake “the feels” I get when buying AMD. That said, when it came time to build, my sensibilities (I’m a middle-of-the-road kind of guy when it comes to performance) and loyalties sent me down the path of purchasing a couple of motherboard/processor combos that included an FX-6300. This served me well for a year, and then a regional computer store had a sale on FX-8320s which came with free motherboards.

I quickly purchased the upgrades and ended up with a couple of combinations lying around. These were ripe for projects, but with the Holiday Season approaching, using some of the parts to make a Christmas gift became too tempting. Shortly after, I decided that I would consolidate into a single desktop that I would dual-boot, and this left me with an additional computer. Deciding to upgrade the processor in my main desktop (since it would be my only one) to an FX-8370 left me with the two FX-8320/motherboard combinations. For each of these, I purchased 16GB RAM, two 120GB solid state drives, and two 3TB hard disks. With this equipment, a couple of cheap ATX power supplies and a quick order of a couple of cheap 2U cases from a popular website, I was ready to go.

The Software

Though Ubuntu will likely always have a soft spot in my heart, being a Linux Systems Administrator in the US, I have a predilection for CentOS. With version 7 being available but still gaining usage in my production environment, this seemed like a solid project for getting familiar with the nuances of the new version, and it would provide a long-supported and stable environment for my hypervisor. My method employs installing the base OS with OpenSSH and building from there.

Once the installation was complete, I logged in and proceeded to install using Package Groups, a feature in Yum, Red Hat’s package manager. These are logical groupings of packages for completing specific tasks. The “Virtualization Host” group contains all the necessary packages for running a minimal hypervisor. The following command is used to initiate the install:

yum groupinstall "Virtualization Host"
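Before moving on, it’s worth confirming that the pieces the group install pulled in are actually live; a quick sanity check along these lines (assuming systemd defaults):

lsmod | grep kvm          # kvm plus kvm_amd (or kvm_intel) should be loaded
systemctl enable libvirtd # start the management daemon now and at every boot
systemctl start libvirtd
virsh version             # confirms libvirt can talk to the hypervisor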

Once the required packages were installed, I moved on with the preparation and configuration of my cluster.

The Storage

Gluster is the storage solution I decided to go with, as it is versatile and allows for easy migration of stored images across cluster nodes. What is Gluster? It’s a clustered file system; think of it as a distributed NAS. Storage nodes can be added and configured as mirrors, stripes, or combinations of the two (similar to RAID1, RAID0, and RAID10 respectively). Additionally, multiple nodes can be added to the storage cluster to expand storage without nodal redundancy. For this project, I decided to use a mirrored configuration. This creates a level of redundancy since both nodes will contain the same data. One of the benefits of Gluster is that the protocol writes to both nodes at the same time, so there is no syncing delay.

An interesting aspect of Gluster is that it uses a configured file system as the basis of its storage. Logical Volume Management (LVM) is my preferred method of configuring Linux storage and the following is an overview of the configuration I decided to build on the back end. I divided one of the 120GB SSDs into three partitions: 512MB as sda1 for /boot, 18GB as sda2 for a Physical Volume for the OS Volume Group, and the remaining space as sda3 for a Physical Volume for a Fast Storage Volume Group. I then added the second SSD as another Physical Volume for the Fast Storage Volume Group. A Logical Volume (LV) for the Root (/) file system was carved out from the OS Volume Group, followed by an LV for VM Storage from the Fast Storage Volume Group. A Data Storage volume group was created out of the two 3TB hard disk drives, and a small (64GB) logical volume for ISO Storage as well as a striped logical volume for Data Storage were created. All logical volumes were formatted with XFS and then mounted as /glusterfs/vmstore, /glusterfs/isostore, and /glusterfs/datastore.
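Condensed into commands, that back-end build looks roughly like the following. Treat it as a sketch: the device names match the layout described above, the volume group and LV names are illustrative, and the OS volume group is handled by the installer.

# Fast storage: rest of SSD1 (sda3) plus all of SSD2 (sdb)
pvcreate /dev/sda3 /dev/sdb
vgcreate vg_fast /dev/sda3 /dev/sdb
lvcreate -l 100%FREE -n lv_vmstore vg_fast

# Data storage: the two 3TB drives; -i 2 stripes across both
pvcreate /dev/sdc /dev/sdd
vgcreate vg_data /dev/sdc /dev/sdd
lvcreate -L 64G -n lv_isostore vg_data
lvcreate -i 2 -l 100%FREE -n lv_datastore vg_data

# Format with XFS and mount where the Gluster bricks will live
mkfs.xfs /dev/vg_fast/lv_vmstore
mkfs.xfs /dev/vg_data/lv_isostore
mkfs.xfs /dev/vg_data/lv_datastore
mkdir -p /glusterfs/{vmstore,isostore,datastore}
mount /dev/vg_fast/lv_vmstore /glusterfs/vmstore
mount /dev/vg_data/lv_isostore /glusterfs/isostore
mount /dev/vg_data/lv_datastore /glusterfs/datastore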

After configuring the storage volumes on the server, it’s time to set up Gluster. The first step is to add the Gluster repo to your package manager. This can be done using the following command:

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo

Once the repo has been added, you can install the server using yum

yum -y install glusterfs-server

This will need to be completed on both of the servers to facilitate the redundancy that Gluster provides. After installing it on the second server, I proceeded to configure Gluster.

First I started the Gluster Daemon on both servers

systemctl start glusterd
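I also enabled it to start at boot on both nodes, so that a reboot doesn’t silently drop a node out of the pool

systemctl enable glusterd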

Next, from the first server, I probed the second node to form the trusted storage pool

gluster peer probe server2

In order for both nodes to know each other by hostname, I ran the matching probe from the second server

gluster peer probe server1

We can now create the volumes:

gluster volume create gfs_vmstore replica 2 server1:/glusterfs/vmstore/brick1 server2:/glusterfs/vmstore/brick1
gluster volume create gfs_isostore replica 2 server1:/glusterfs/isostore/brick1 server2:/glusterfs/isostore/brick1
gluster volume create gfs_datastore replica 2 server1:/glusterfs/datastore/brick1 server2:/glusterfs/datastore/brick1
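One step that’s easy to miss: newly created Gluster volumes sit in a “Created” state and have to be started before clients can mount them.

gluster volume start gfs_vmstore
gluster volume start gfs_isostore
gluster volume start gfs_datastore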

Once the Gluster volumes have been established in a cluster, they will need to be mounted to be used. In this case, the servers are going to act as both clients and servers, so the mounts need to be added to the fstab using the following lines.

server1:/gfs_vmstore /data/vmstore glusterfs defaults,_netdev,backupvolfile-server=server2 0 0
server1:/gfs_isostore /data/isostore glusterfs defaults,_netdev,backupvolfile-server=server2 0 0
server1:/gfs_datastore /data/datastore glusterfs defaults,_netdev,backupvolfile-server=server2 0 0

Once the mounts have been added to the fstab, they need to be mounted

mount -a
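Before building VMs on top of this, a quick read-only health check from either node doesn’t hurt (a sanity-check sketch):

gluster peer status   # the other node should show "Peer in Cluster (Connected)"
gluster volume info   # each volume should show Type: Replicate and Status: Started
df -h /data/vmstore /data/isostore /data/datastore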

Now that the hypervisor has been installed and the storage has been enabled, the server is ready to have the virtualization environment configured. This can all be done from the command line, but using graphical tools from a workstation makes creating VMs faster and easier to follow visually.

The Interface

Virtual Machine Manager, or Virt-Manager, is a graphical tool for the Linux desktop that allows interfacing with libvirtd (the daemon side of libvirt, an API for managing a myriad of hypervisors, notably the KVM stack installed in the first part). I use Ubuntu flavors as my main desktop. As such, the command for installing the application on my workstation is as follows:

sudo apt-get install virt-manager

Once the application is installed, I was able to start it and configure it to manage my virtualization cluster. The cluster servers need to be added to the interface. This is done by going to the file menu and selecting “Add Connection.” On the dialog box that opens, I left the default under “Hypervisor” (QEMU/KVM), checked “Connect to remote host,” left the method as SSH, left the username as root, and set the hostname to server1. I then did this again using the hostname of server2.

virtmgr-addserv2
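(Incidentally, the same remote connections can be exercised from a terminal with virsh, which is handy for scripting; a sketch assuming root SSH access to both nodes:)

# List VMs on each node over SSH – the same URIs Virt-Manager uses
virsh -c qemu+ssh://root@server1/system list --all
virsh -c qemu+ssh://root@server2/system list --all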

Once the hosts are added to the interface, double clicking them will open a dialog for entering the root password you set at install.

virtmgr-baseview

Next, right clicking on the host and selecting “details” brings up a screen for configuring the host. At this point, I configured the storage.

Clicking the “storage” tab, and then the plus (+) button, a new dialog opens with a wizard for configuring a storage pool. I selected “File System Directory” and gave it a name of “VMStore.”

datastore-vmstore-name

On the next page I set the path of the directory to /data/vmstore.

datastore-vmstore-path

After clicking finish, it is now available in the storage list. I proceeded to configure the rest of the shares (DataStore – /data/datastore and ISOStore – /data/isostore).

Node-storage

Following the same steps on the second host makes the gluster directory the storage pool on both nodes, which allows migrating VMs between the hosts.

Now that the cluster nodes are configured in Virt-Manager, I am able to proceed with uploading ISO files and provisioning my first virtual machine. To do this, I used SCP to upload the CentOS 7 ISO to the /data/isostore directory on server1.

scp ~/Downloads/CentOS-7-x86_64-Minimal-1503-01.iso root@server1:/data/isostore

Doing this will cause the file to be accessible from either host. In the interest of observing Gluster functionality, the first VM will be a CentOS 7 server on server2, using the ISO file that was uploaded to server1. Right click on the host and select “New”, then select “Local install media” and click forward.

newvm-1

The next screen lets you choose which ISO should be used for the installation.

newvm-2

Clicking “browse” opens a dialog that looks like the storage tab of the details menu. Selecting the ISOStore pool, it’s apparent that Gluster is functioning, since the CentOS ISO that was uploaded to server1 is available on server2.

newvm-selectiso

Next, the OS type and Version need to be selected. This loads specific configurations for the VM to improve performance.

newvm-3

Set the memory and processor count.

newvm-4

On the storage page, choose to “select or create custom storage”.

newvm-5

Clicking Manage greets you with a familiar storage dialog screen. Here, I go to the VMStore pool and click new volume (the “+” button above the storage volume contents), then I name the volume, set the capacity and click finish.

newvm-newvolume

Highlight the newly created volume, and click “choose volume.”

newvm-storagevolumesel

Once selected, it takes me back to the New VM wizard, and I click forward to the last screen.

newvm-7

On the final page, I name the VM, and click finish.

newvm-8

This opens a new dialog, and after entering the root password, the console of the new VM comes up. From here, I can begin installing the OS on my new VM.

newvm-console

Proceed with the normal CentOS installation process.

The Novelty of KDE Neon

KDE Neon

The good folks at KDE managed to engage a market of Linux desktop users underserved by other distribution models. Or, maybe it’s just me.

KDE has a long history in the desktop ecosystem. It was the first Linux desktop I was exposed to back in 2006. Back then, it was on OpenSUSE and it was clean and functional. For some reason after that, installing KDE had never really appealed to me. I’ve tested it out briefly when poking around at what the OpenSUSE guys were doing and I’ve run Kubuntu for brief snippets. For years, I’ve been trying to find out what type of desktop user I am and which distro fits my needs.

I’ll admit that I’m a moving target. What I like changes depending on workflow expectations and machine capabilities. I’ve started to notice that I expect a desktop to have modern features while still maintaining a familiarity that doesn’t cost me days to adjust to the new environment. In this space, there are a lot of really neat contenders. GNOME has got a great desktop, but it renders slowly on X. Budgie on Solus is a cool project with a lot of momentum, but isn’t ready for what I’d like to do (yet). Ubuntu MATE got me back to running Linux full time and it’s so familiar it’s hard to not use it. In addition, the software boutique and MATE Tweak features gave me some really great access to modern features and software. Why the software boutique isn’t installed by default on every distro is just beyond my comprehension. That thing is just amazing!

MATE failed me when I got a new Hi-DPI machine with a backlit keyboard. In some cases, there were solutions I could apply to the issues arising from this new machine, but over time it became too cumbersome to keep searching to find them. I had to get work done. I knew the team was working to fix them, but the gap between the team fixing the issues and me needing to be functional was too long for my patience. The sour onion in my sandwich was when I had trouble grabbing the edge of a window to resize it and realized how many attempts it took to perform this simple task. Sure, I scaled up my fonts, but that didn’t scale up the window edges.

I poked around at other distros and eventually landed on KDE Neon.

I know that to gongoozle is a verb that means to stare idly at a canal or watercourse. It’s an oddly specific verb and it’s even more odd that I know what it means. That being said, I’ve never bothered to learn how the community labels some things as distributions and some things as not distributions.

Here’s what I can gather:

  • Neon is not a distribution. It’s a desktop that sits on top of Ubuntu LTS (currently 16.04).
  • It’s not Kubuntu. Kubuntu is a Ubuntu flavor.
  • The word flavor reminds me of ice cream. So I guess when you’re trying a flavor you’re licking it?

So I guess the way to look at this is that Neon is an open-faced sandwich where the bread was made by the Ubuntu bakery. It’s good bread.

What I get on Neon is a desktop that’s updating and becoming more refined while still maintaining the underpinnings of what makes Ubuntu so marketable. This is exactly what’s missing from the Ubuntu ecosystem. In that ecosystem, you can run a dated desktop for several years and watch its wrinkles become more frustrating over time. Or you can run the nightlies as your OS and watch things break and get fixed. You’ll have the latest desktop the good folks have selected, but it may not work the way you’d expect. I did this for several months and it was unpolished but quite enjoyable.

So then there’s Neon. The desktop updates as needed, and the underpinnings of 16.04 still get you snaps, ZFS, and a great repository of software. Since I’m human, I interface with the machine through the desktop (and the occasional command line). I don’t directly interface with the code underneath. I want clean lines and elegant functional design. I want to be able to resize my windows on the first try. In Neon, scaling for Hi-DPI is easy, font management is excellent, the alt+space launcher is awesome, super-key search is flawless (it even works when I misspell things), and not only are there elegant lines, there’s an amazing amount of design thought in the way everything works and works together. I get that, and all the familiarity of the Ubuntu stack underneath.

You might pick on me for touting font management, but it’s a serious indicator of a polished desktop. If the font management is good, it’s likely because the design team had people on it who understand fonts. So you’re only likely to see this on a more polished desktop. Font management is also never the priority. So if the developers got around to getting it done, then it means they’ve worked through quite a large stack of issues to get fonts going. So yeah, for me you can tell the quality of the desktop by the way it manages fonts.

It might just be me, but I believe these KDE guys are on to something. They’ve been able to federate the effort of one of the most popular Linux distros (Ubuntu) and marry that to their effort on the desktop. They’ve created a wonderful balance for a symbiotic relationship that benefits both parties. Canonical should be advertising this solution while their users are waiting for Unity 8.

Groke is another old-fashioned word with Scottish origins. It’s a verb that means to gaze at someone while they’re eating in the hopes that they will share their food with you. Folks who have been working with Ubuntu’s scheduled releases have certainly benefited from their professionalism over the years, but I’ve heard of many who’ve been groking at those with rolling-release desktops. Now, finally, it seems they can have both. KDE Neon is where the rolling-release desktop meets a stable foundation, and it’s where a great team hit the moving target of what I want in a distro. Thank you! Now I have more time to gongoozle.

Linux Digital Audio Workstation Roundup

ardour

In the world of home studio recording, the digital audio workstation is one of the most important tools of the trade. Digital audio workstations are used to record audio and MIDI data into patterns or tracks. This information is then typically mixed down into songs or albums. In the Linux ecosystem, there is no shortage of digital audio workstations to choose from. Whether you wish to create minimalist techno or full orchestral pieces, chances are there is an application that has you covered.

In this article, we will take a brief look into several of these applications and discuss their strengths and weaknesses. I will try to provide a fair evaluation of the DAWs presented here but at the end of the day, I urge you to try a few of these applications and to form an opinion of your own.

Ardour

ardour

Ardour is one of the best known digital audio workstations to be released as open source software. It is also one of the most professional and full featured applications of its class. Ardour’s features include audio and MIDI recording, an intuitive single window interface, and excellent support for LADSPA, LV2, DSSI and Linux VST plugins. Ardour’s only weakness is that it does not feature clip or pattern-based recording, which may be a nonstarter for those looking to migrate from FL Studio or Ableton Live. But for those who prefer a more traditional work flow based on tracks and timelines, Ardour is an excellent choice.

Ardour is freely available for download in most Linux repositories. Its source code can also be obtained from its project website along with Windows and Mac versions.

Audacity

audacity

Audacity is a powerful cross-platform audio editor for Windows, Mac and Linux. While it is not, strictly speaking, a digital audio workstation, it supports an extensible collection of effect plugins and tools that make editing audio a snap. Audacity can be used as an application on its own to produce podcasts, or as a way to create and edit audio clips that can be used by a sampler or another DAW.

Audacity is freely available for download in most Linux repositories. Windows and Mac versions can be obtained from the project website.

Bitwig Studio

bitwig

Bitwig Studio is one of the two proprietary DAWs available for Linux that I will mention in this article. Like Ardour, Bitwig features audio and MIDI recording and a single-window interface, but it was designed so a user could effortlessly move between clip- and timeline-based recording. In addition to having excellent support for LADSPA, LV2, DSSI and Linux VST plugins, Bitwig also brings its own toys to the party in the form of 62 additional instrument and effect plugins. With this being said, Bitwig is not cheap. At the time of this writing, version 1 retails for $299, but for those coming from a Windows or Mac-based production environment, tools like Bitwig provide an additional impetus to make the switch.

Versions of Bitwig Studio for Windows Mac and Linux can be purchased from Bitwig’s company website.

LMMS

lmms

Linux Multimedia Studio (LMMS) is a pattern-based sequencer, designed for modern music production. Out of the box, LMMS features several of its own native synthesizers in addition to supporting numerous LADSPA, LV2, DSSI and Linux VST effect plugins. One thing that separates LMMS from the other DAWs covered here is its support of some Windows-based instruments through the use of wine, Carla Patchbay and Vestige plugins. Additional Linux instrument VSTs can also be brought into an LMMS environment using the Carla Rack plugin. Although LMMS does not support live instrument recording, the application is downright fun to use. For best results download it from the KXStudio repositories.

LMMS is freely available for download in most Linux repositories. Windows and Mac versions can be obtained from the project website.

Qtractor

qtractor

Qtractor is a timeline-based MIDI and audio recorder and sequencer that features support for LADSPA, LV2, DSSI and Linux VST plugins. Although its feature set is quite similar to Ardour’s, Qtractor uses a simple multi-window interface which will be very familiar to those who used Cubase back in the day. The one thing that makes Qtractor stand apart from other DAWs is how easily the application can connect to external MIDI instruments and audio effects through its own internal JACK connections. For those who want a full-featured DAW without dealing with a complex learning curve, Qtractor is a sure win.

Qtractor is freely available for download in most Linux repositories. For details, visit the project website.

Rosegarden

rosegarden

If you want truly granular control of MIDI file creation, you cannot go wrong with Rosegarden. Like Qtractor, Rosegarden uses a timeline interface that is reminiscent of Cubase, but it allows users to edit MIDI data through the use of a piano roll editor, a musical notation editor, and for those who take their MIDI seriously, an events-list editor. Rosegarden does provide facilities for audio recording and supports LADSPA, LV2 and DSSI effect plugins, but does not have its own features for any in-depth audio editing. Instead, it lets Audacity do the heavy lifting. With this being said, Rosegarden is an excellent choice for those who largely rely on MIDI sequencing to get their work done.

Rosegarden is freely available for download in most Linux repositories. For details, visit the project website.

Seq24

seq24

Seq24 does only one thing, but it does it well. That thing is pattern-based MIDI sequencing. Unlike the other DAWs covered here, Seq24 features a truly minimalist user interface. It does not support audio recording, and does not support any plugins. Instead, it sends MIDI data to software synths already installed on your system. Audio is then recorded from these instruments using another DAW, like Ardour, Audacity or Qtractor. For those who wish to begin and end music production in the same application, Seq24 is a bit of a non-starter. But for those who prefer a more modular workflow, Seq24 is definitely worth looking into.

Seq24 is freely available for download in most Linux repositories. For details, visit the project website.

Tracktion T7

tracktion

Tracktion T7 is yet another timeline-based MIDI and audio recorder and sequencer that features support for LADSPA, LV2, DSSI and Linux VST plugins. However, like Bitwig Studio, Tracktion T7 brings some of its own effect and instrument plugins to the party. Of the DAWs I have looked at thus far, Tracktion T7 has some innovative features that I have not seen in any other digital workstation. Although it does not fully support pattern/clip-based sequencing, it features a way to preview audio samples simultaneously for easy arrangement. Tracktion T7 also features non-destructive wave editing, and track automation features that are not to be missed. Prices for Tracktion T7 start at $60 for the base DAW and can go as high as $200 for a bundle that includes Tracktion T7 and various plugins. Alternatively, a slightly less-featured, free version of Tracktion T5 is also available from the company website.

Tracktion T7 is available for purchase from Tracktion’s company website.

Final thoughts on Linux Digital Audio Workstations

The Linux ecosystem is rich in options for recording your musical ideas into a solid, professional product. In this article we looked at several choices for recording music and audio under Linux. Some of these offerings, like Audacity, LMMS, Rosegarden and Seq24, lend themselves well to specific tasks like wave editing and MIDI Sequencing, while other solutions like Ardour, Bitwig Studio, Qtractor and Tracktion are full featured visigoths, ready to take your next project from start to finish.

What is your favorite Linux-based DAW? Leave a comment below.


Corrections and Errata

After the article was published, I noticed that I made a few factual errors. My apologies to those who may have been misled, and a special thanks to those who pointed out these inaccuracies. A list of corrections follows.

  • Bitwig Studio does not support LADSPA, LV2 and DSSI out of the box, but this functionality can be achieved through the use of the Carla Patchbay plugin.
  • LMMS only supports LADSPA plugins out of the box. LV2, DSSI, VST and VSTi plugin functionality can be achieved through the use of the Carla Patchbay plugin, provided that you use a special build of the application available from the KXStudio repos.
  • Rosegarden only has LADSPA functionality for effects, and DSSI for softsynths. Other instruments and effects can be used by sending MIDI out to external applications.

Linux Mint 18.1 Is The Best Mint Yet

Mint-super

The hardcore Linux geeks won’t read this article. They’ll skip right past it… They don’t like Linux Mint much. There’s a good reason for them not to; it’s not designed for them. Linux Mint is for folks who want a stable, elegant desktop operating system that they don’t have to constantly tinker with. Anyone who is into Linux will find Mint rather boring because it doesn’t get anywhere close to the bleeding edge of computer technology. That said, most of those same hardcore geeks will privately tell you that they’ve put Linux Mint on their Mom’s computer and she just loves it. Linux Mint is great for Mom. It’s stable, offers everything she needs and its familiar UI is easy for Windows refugees to figure out. If you think of Arch Linux as a finicky, high-performance sports car then Linux Mint is a reliable station wagon. The kind of car your Mom would drive. Well, I have always liked station wagons myself and if you’ve read this far then I guess you do, too. A ride in a nice station wagon, loaded with creature comforts, cold blowing AC, and a good sound system can be very relaxing, indeed.

MINT JUST GOT BETTER… AGAIN

I had no intention of writing this article at all until I upgraded one of my machines from Linux Mint 17.3 to 18.1 the other day. Frankly, I didn’t think there’d be all that much to talk about but I have been more than just a bit surprised at just how smooth and elegant Linux Mint 18.1 “Serena” is. To put it simply, this is the best Mint ever. Linux Mint 18 was released with much fanfare last year and I have tried it on various systems. I found it to be just a bit shaky… A lot of that shakiness can be attributed to the fact that Ubuntu 16.04 LTS landed with numerous bugs and issues and, since Linux Mint 18 is based on Ubuntu 16.04, some of that filtered down from upstream. Ubuntu has addressed most of those bugaboos over the last year and the first “dot release” in the LM 18 series seems to have successfully smoothed over whatever rough edges might be left in Ubuntu. Everything just works. Not a trace of the notorious Ubuntu Network Manager bug can be found in the new Mint. Wi-Fi is rock solid and dependable.

LINUX MINT IS MY DAILY DRIVER

I look at a lot of Linux distros for YouTube videos and to keep up with what’s going on. Some people think I am a notorious distro hopper who can’t make up his mind because of those videos. The truth of the matter is that I have been using Linux Mint almost constantly since the 17 series came along in 2014. I switched to Ubuntu MATE on a couple of my machines for a while and I consider Ubuntu MATE to be my second favorite distro. Mint is not perfect – no Linux distro is – but it’s proven itself to be super stable. I run the Cinnamon Desktop everywhere and I have become very accustomed to how it works. It offers a nice balance between the oversimplification that GNOME 3 has become and the complexity of KDE Plasma. Mint has some very nice, simple tools included that I end up using more often than I thought I would. There is a USB stick formatter that will format any external drive with just a couple of clicks. There is also a handy USB Image writer tool that will create a bootable USB drive from any ISO Image. It’s just the dd command with a GUI front end, folks. Insidiously simple but very powerful and convenient.

X-APPS RULE

Linux Mint forked a number of applications with the introduction of the 18 series and these are called X-apps. The name comes from Linux Mint’s standard Mint-X theme. So far, the X-apps offer an image viewer called Xviewer, a basic media player called Xplayer, a text editor called Xed and a photo manager/editor called Pix. The introduction of X-apps raised more than a few eyebrows in the Linux world. Some folks didn’t see the need for the duplication of effort but I have come to realize that the Mint folks had some very good reasons. Let’s take Pix as an example. Pix is a fork of the GNOME project’s gThumb application. I like gThumb a lot and I’ve used it for years but GNOME has made some pretty radical changes to gThumb’s UI in the latest versions. The program is still great but the new interface is awkward for many and oversimplified. Pix has taken the nice technical advances of the later gThumb version but kept the more traditional interface. The same can be said for the other X-apps. They are all very familiar and easy to use for anyone who knows their way around a computer. All the buttons and menus are where you expect them to be. I moved directly from 17.3 to 18.1 and didn’t miss a beat. Just today, I used Pix to create some thumbnails for a web project and I breezed through 20 photos in no time. On the other hand, I was using the latest gThumb in Ubuntu GNOME 16.10 a couple of weeks ago and I kept having to click things just to figure out what they did… It was flat-out annoying!

LET THE MINT MUSIC PLAY

Linux Mint has shipped with Banshee as its main media manager for years, but the Mint team decided to dump Banshee in favor of Rhythmbox. I have been gravitating towards Rhythmbox lately myself, so I’m happy about the change. Rhythmbox is more focused on music than Banshee and doesn’t play videos… It also offers a nice podcast manager and it will rip your CDs with ease. Banshee did all this and also featured a video player, but Banshee isn’t as configurable. For instance, Banshee didn’t offer quality and file format settings for audio files, whereas Rhythmbox makes it easy to set that stuff up just the way you like.

I find the way Rhythmbox does playlists to be easier to work with too. Banshee won’t let you drag files from music folders into new playlists. You can only choose songs from the imported library. If you have a large music collection like I do, you probably have your music folder laid out in such a way that music is grouped by genre and era. Rhythmbox does let you drag files from folders right into the new playlist whether they are currently in the main library or not. I created some playlists that had everything in my collection from my favorite artists. All I had to do was use the search function in the Nemo file manager to list all the songs I had from each artist and then drag them into a fresh new playlist for each one. Cool, huh? Now if I just gotta hear a bunch of Billy Joel, I can click on the Billy Joel playlist and it’s all there. I can also just have the computer serenade me with a divine shuffle of whatever it chooses next.

Long-time Banshee users might not be too keen on setting everything up again in Rhythmbox, but if they upgraded to 18.1 in place then all that is needed is to install Banshee and everything will be just as it was.

Those who don’t like Rhythmbox or Banshee can install whatever they like, of course. Linux is all about choice.

ALL LINUX MINT, ALL THE TIME

I was looking back on the last year and I realized that just about every new client who came to me through ezeelinux.com had asked for help with Linux Mint. Also, many of the clients I had put on Ubuntu in the past came back to me during 2016 and asked me to help them move to Mint. Considering that and the fact that Linux Mint 18.1 was proving itself to be just plain awesome, I made the decision to change my focus from supporting any Debian/Ubuntu-based distro to supporting Linux Mint exclusively. Now, this does not mean I won’t help someone with Ubuntu if they ask for it specifically but they are gonna have to come up with a really good reason why they want Ubuntu for me to not try to talk them into using Mint. Mint just offers a more polished and cohesive user experience and it is very well documented. New users who start with Mint tend to have fewer questions for me than those who are on anything else. This is good for me because I am busier than a one-armed paper hanger and it’s also good for them because it builds confidence as they learn more about Linux. I’m sure that some will want to move on to more advanced-user-focused forms of Linux and that’s fine with me. If they should decide that Arch or Fedora is where they want to be then I can assume that they are ready to deal with the challenges they will face with those more cutting-edge distributions. My job will be done then and a new Linux Geek will be born!

SNAPPY MINT

Linux Mint does have its quirks… One of them is when it comes to installing applications. You get full access to the Ubuntu repositories and the Ubuntu PPA system. This is a good thing but that also can cause some issues when you try to install some apps that are not quite compatible with Mint or cause conflicts with Mint’s native apps. It’s rare but it can happen. Also, Mint does something that bugs me. They ship with the APT package management system set to ignore recommended packages when installing software. Linux developers often use existing programs to add features or functionality to their own application. While these programs aren’t absolutely necessary for the main app to run properly, not having them installed can make for unexpected behavior and limited usability. Ubuntu ships with APT set to consider the recommended packages as dependencies, thus avoiding these vexing issues. I always advise clients to change this setting in Synaptic Package Manager before they start adding software to a fresh install of Linux Mint.
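For those who prefer a terminal to Synaptic, the same change is a small APT configuration drop-in; a minimal sketch (the file name is arbitrary):

# /etc/apt/apt.conf.d/99recommends – treat Recommends like dependencies again
APT::Install-Recommends "true";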

One very cool new feature of Linux Mint 18.1 is that it introduces Ubuntu’s Snappy package system. Programs distributed through Snappy are called Snaps and can be installed with just a few simple commands. Snap packages are different from APT’s .deb packages in that they include everything in the snap that it needs to run. Snaps run in containers that are isolated from the main system, which makes them more secure. Removing a snap won’t leave any extra packages behind or remove something that another program might need to keep working. Snappy is growing fast and many major distros are offering snap support. I’m hopeful that it will help eliminate some of the software pitfalls that Linux Mint users have had to deal with.
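Trying a snap really does only take a couple of commands; for example, with the little hello-world demo snap (purely illustrative):

sudo apt install snapd        # if it isn't already present
snap find hello               # search the snap store
sudo snap install hello-world # install the demo snap into its own container
hello-world                   # run it
sudo snap remove hello-world  # remove it cleanly, leaving nothing behind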

MINT MOVING FORWARD

Linux Mint has definitely become the distro that geeks give their Moms and it’s the Noob’s best choice for their introduction to the world of Linux. It’s also great for lazy folks like me who have a house full of computers that they just want to keep up and running with a minimum of fuss.

I personally applaud the Mint team’s efforts to keep things consistent and familiar. I do not agree with those who think that a desktop computer’s UI should look and act like a tablet or smartphone’s. Those are different devices with very different use cases. Also, the trend where menus are being replaced by buttons and settings and features once considered necessary are hidden or removed is disturbing to me. Developers get so excited about making it all look slick and clean that they miss things that users need. As an example, I found that the printer configuration application in GNOME 3 dropped all mention of sharing a printer on the network. Those who want to set up CUPS sharing either have to log into the CUPS server through a browser, or they can open a terminal and pull up the old familiar CUPS printer manager app and set it there. I’m happy to report that Linux Mint still uses the little CUPS app and it has all the sharing features, just like it always has.

For more about Linux Mint: www.linuxmint.com.