Supercharge Your Command Line With Fish Shell

Linux has no lack of command-line shells. There’s ‘sh’, ‘bash’, ‘ksh’, ‘zsh’ and more. To get the most out of them, however, requires a bit of learning and work; something many of us don’t have time for. For those of us who want the power and the pretty without the pain, there’s one shell that might be just what the doctor ordered.

At one time or another most Linux users will—either out of need or curiosity—venture onto the command line. Unless their distribution has changed or modified the defaults, for most users that will mean being confronted with a pretty stark Bash prompt. Bash has been the de facto default shell for the majority of Linux distros for good reason. It’s extremely stable, powerful and flexible, while also being almost 100% POSIX standards-compliant. Some distros go the extra step and try to make Bash a bit more user-friendly, with coloring and things like tab completions to make finding commands and files easier. For the most casual of users, that alone might be enough. For those who would like a little more usability and friendliness from their terminal but aren’t command-line enthusiasts, trying to modify or enhance Bash can be akin to trying to learn cuneiform. There are other shells, as mentioned above, like the very popular Zsh. However, just like Bash, getting some ease-of-use and friendliness out of them is going to require a fair amount of tinkering. Fortunately, for those of us who want some of those cool features without having to get intimate with configuration files, there’s fish.

I’m not sure if the intro to fish’s website is supposed to be some kind of pun or what, but believe me, fish is a modern, up-to-date and actively-developed shell. What really sets it apart is its user friendliness. Out of the box, it has most of the features the casual user would want, and expanding on those doesn’t require a degree in programming. A refreshingly great feature of fish is its ‘help’ system.

Unlike most Linux shells that provide no more than cryptic ‘man’ files, fish comes with a complete HTML user manual. When a user types ‘help’ at fish’s command prompt, instead of a somewhat vague in-terminal listing of commands coming up, or something telling the user to “enter ‘man XXX’ for more”, fish opens your browser and presents you with a hyperlink-filled web page of all you may need. Features, options and commands are all well explained, most with examples to facilitate understanding. If you want help with a particular command, say with creating a function, at fish’s prompt you can simply type ‘help function’, and fish will open your browser at the section explaining functions.

Don’t like the default command prompt? No problem. Just type ‘fish_config’ and fish will open your browser with a configuration page where you can view examples of prompts and choose one you like, plus much more. This isn’t some remote server doing the configuration either. All of this is right on your own system. The developers of fish put the most common things people might want to change in their shell into an easy-to-use interface. With just these two things alone, one can see what they mean by “user-friendly”, but there’s more. The fish shell website also has a well-done and comprehensive tutorial with screenshots, so the user can see how the topic being explained would look.

Of course, once installed and first used, the most obvious immediate difference from what one may be used to is the colors. Fish is very colorful without having to set a thing. Out of the box, fish also does command completion, meaning it will give suggestions, “guesses” at what you want to do, when you start typing. When first installed, though, it naturally doesn’t know what commands are on your computer, so you have to get it to create its completion list. It does know its own commands, so to get an idea of what it’s capable of, just type ‘fish_’ and hit <TAB>. You’ll be presented with a list of commands fish already has matching what you typed, along with a short description of what each does.

The first one you’ll want to execute is the “fish_update_completions” command. Simply add the “u” to the end of the “fish_” string you already typed, and fish will start showing the rest of the command as a suggestion, ahead of what you’re typing. Press either the right-arrow key or CTRL+F to let fish finish it for you. Press <ENTER> and watch fish start parsing all the man pages on the system, creating a database for its use. In a few moments the command prompt will come back, and fish will now know all the system’s commands. To get an idea of what it can now do, type “fc” at the prompt; you’ll see fish automatically append the “-” to it. Fish knows all the “fc-” (fontconfig) commands start with “fc-”, so it adds the dash right away.

Below the command prompt you’ll see all the “fc-” commands, with a short explanation of what they do. As before, you just start typing the next letter from the one in the list that matches what you wanted, and fish will place it after what you’ve already typed. All you need to do is hit right-arrow or CTRL+F to accept it. Or you can just hit CTRL+C to cancel it and return to the command prompt. If a command search produces several with the same next letter, just type however many more letters are needed to get to the one wanted; fish will constantly narrow the list down as you type.

You’ll notice as you use fish that it seems to be learning: every time you start to type something, fish suggests a completion for it. Fish does this by first trying to match what you’re typing with a recent command, something you did earlier; if there isn’t a match, it looks for one in its completion database. Like most shells, fish keeps a history of all the commands you’ve entered, saving you from having to type the same thing twice. The history is searchable. You can just type the first few characters of a previous command and, if fish’s default suggestion isn’t the one you wanted, press the up-arrow key. Fish will start replacing the command with ones that share the same starting string of characters. Find the one you want, then press <ENTER> for fish to execute it again.

This works for any command, including moving around the file system. Fish is very good at that. It can offer completions for navigating the file structure either from its history, or from the file structure itself. Oh, and for those times you might not want fish to remember where you’ve been or what you’ve done, just precede the command with a space. Anything started with a space character isn’t put into the history. Great for the aluminum-foil hat crowd.

For me, one of the really useful and fun features of fish is the ability to create functions on the fly, then save them if I feel I’ll use them again, all without leaving the command-line. For example, one of the things I do often from the command-line is update my package list and system. I could do it from a GUI, but I’m one of those guys who likes to see and know what’s happening. Currently I’m running Linux Mint, so that involves two operations, ‘sudo apt update’ and ‘sudo apt upgrade’. While not really a lot to type each time, it would be nice to make it easier and faster. I believe computers should work for us, not the other way around. So, at the fish command prompt I created a function, as seen in this screenshot.

With fish, when you type the keyword “function”, it automatically goes into ‘edit’ mode. I gave the function a short, easy-to-remember name, “suds”, my acronym for “sudo update system”. I typed “function suds” then pressed <ENTER>; fish automatically jumps to the next line and indents it like proper code. Next I typed the first command I wanted my function to run, “sudo apt update”. I pressed <ENTER> again, then typed “and sudo apt upgrade”. Notice the inclusion of the word “and”? Fish has a special user-friendly way of combining commands that is not only easy to read, but quite powerful. The “and” keyword tells fish not to execute the next command until the first one is done, and not to execute it at all if the first command produced an error.

If there is an error on the first command, instead of running the next one, fish outputs the error to the terminal, so the user can see it and know what went wrong, then exits the function. When I finished the second command line, I hit <ENTER> and then typed “end”, another keyword fish knows, so it exits edit mode and returns to the command prompt. I can then run the command “suds” as though it were a system one. Because it’s a sudo command, fish prompts me for the administrative password, then apt updates the package cache. If all went well, apt then checks for upgrades and prompts me to accept any if they exist. I type “y” for “yes”, and apt installs them.

Now with most shells, if I wanted to keep this function beyond the current shell session, I’d have to open some resource or config file with an editor, then type it or copy-n-paste it over again there to keep it. Otherwise, once I exit the shell or logout, the function would be lost, as it only resides in system memory. Not terribly difficult, but I’m lazy, and fish has a better, easier way. I simply type “funcsave suds” at the command prompt, and fish saves my new function in its configs for me. Shiny!
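For reference, ‘funcsave’ writes the function into its own file under ~/.config/fish/functions/, where fish automatically loads it in every future session; the file name must match the function name for that autoloading to work. For the “suds” example above, the saved file would look something like this:

```fish
# ~/.config/fish/functions/suds.fish -- created by 'funcsave suds'
function suds
    # refresh the package lists first
    sudo apt update
    # 'and' runs the upgrade only if the update succeeded
    and sudo apt upgrade
end
```

If you ever want to tweak it later, you can edit that file directly, or use fish’s ‘funced suds’ to edit it right from the prompt.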

In this article I’ve only scratched the surface of what fish is capable of. There are tons more features it has to make using the command-line easier and enjoyable, all of it covered in its excellent help system and website. While it might not pull long-time bash and zsh users away from their lovingly-massaged and configured shells, it’s definitely worth checking out. Especially if you’re an occasional CLI user who’d like a bit more than default bash, but doesn’t want to delve too much into editing resource and config files to get it. You don’t have to commit to anything or make any changes to try it out.

Most distros have it in their repositories, and like any shell, you can switch to it from an open terminal to try it without making it permanent. Install it, open a terminal, and at your current prompt type “fish”. Your session shell will be switched, and you’ll be greeted by fish. Try it out, kick the tires, and when you’re done, simply type “exit”. Fish will return control to your original shell, nothing permanently changed. If you decide you like it, you can make it permanent; fish can even help you do that. Just type “help” at the fish prompt, and look for the section titled “How do I make fish my default shell?” It’s under the “Frequently asked questions”. If you decide it isn’t for you, uninstall it, no harm done. If you spend any time trying it, especially if you changed any of the defaults, it will have created a user configuration folder in “~/.config/fish”. If you’re not going to keep it on the system, you’ll likely want to delete that too.

There is one word of warning I should mention: using fish can be addictive. Enjoy!

What Are Linux Meta-packages?

I was recently in a discussion about meta-packages, and realized many users don’t know what they are or what they do. So, let’s see if we can clear-up the mystery.

Meta-packages in a nutshell

A ‘meta-package’ is a convenient way to bulk-install groups of applications, their libraries and documentation. Many Linux distributions use them for a variety of purposes, from seeding disk images that will go on to become new releases, to creating software “bundles” that are easy for a user to install. A meta-package rarely contains anything other than a changelog and perhaps copyright information; it contains no applications or libraries within itself. The way they work is by having a list of “dependencies” that the package manager reads. The package manager then goes to the repositories to find the dependencies and installs them.
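As a sketch of what that dependency list looks like, here is control data in the style of a Debian-format meta-package. The field values and package names here are illustrative, not a copy of the real ubuntu-mate-core file:

```
Package: ubuntu-mate-core
Version: 1.0
Architecture: amd64
Depends: caja, marco, mate-panel, mate-session-manager, pluma
Description: Core metapackage for the Ubuntu MATE desktop (illustrative)
 Installing this one package causes the package manager to fetch
 and install everything listed in the Depends line above.
```

The package itself carries no binaries; the Depends line does all the work.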

In the above screenshot, we see as an example the ‘ubuntu-mate-core’ meta-package and what it actually contains: a changelog and copyright documents.
In this next shot, we see what the same package has for dependencies. This one meta-package is going to install a lot of software that would be hard for a user to know they needed.

As an example, let’s imagine we’re running Lubuntu with the LXDE desktop, and we want to try Ubuntu-MATE. We already have the Ubuntu base system, so all we need is the Ubuntu-MATE desktop. There are a lot of packages that make up the default MATE desktop! There’s Marco, mate-panel, Pluma, Atril, Engrampa, EOM, Caja, etc., plus the libraries, plugins, applets, documentation… simply a lot of stuff. Trying to install it package-by-package would be a real pain in the butt. In the end we’d almost certainly be missing bits that should be there for a proper Ubuntu-MATE experience. Rather than go through that exercise in frustration, we can simply install the meta-package ‘ubuntu-mate-desktop’. The package manager will then go out and install all that we need for a proper MATE install. We install one package, it does the rest. Simple, convenient.

Meta-packages are for more than just installing desktop environments. They can be used to group together and install application “bundles” for particular tasks, like photographic processing, video processing, web development, digital art, and more. Anything that requires several applications and their support libraries could be a candidate for a meta-package. In fact, many distributions use them for that purpose.

Is that all they do?

That depends on what is being installed. Generally a software “bundle” meta-package is a one-time-use thing. Many will say in their description “After installation this package can be safely removed.” Once the software itself is installed, the package manager will take care of updates for the installed applications and libraries, so the meta-package is no longer required. Those can be removed with no penalty.

Others, like ones that install an entire desktop environment, will depend more on the user’s long-term goals. Meta-packages are often used for more than installing the current versions of software. They can also be used for upgrading.

Going back to our example of installing Ubuntu-MATE from a meta-package, let’s say we really like MATE and plan on using it well into the future. We’ve heard that the next release version of MATE is going to have some cool new applications, so we’re looking forward to trying them out. We also plan on just upgrading our existing install when the new release hits the servers instead of doing a complete reinstall from an iso image. When we do the upgrade, the meta-package will get upgraded along with everything else. In the new meta-package version will be dependencies for the new applications, so the package manager will install them too, even though they weren’t already on our system. The meta-package ensures our upgrade gets everything that makes up the newest version of MATE desktop – all the existing applications and all the new ones. So we’d now have the newest MATE with all the newest stuff.

“I want to remove application Y, but the package manager says it’s a dependency of meta-package Z. Can I remove Z too?”

Again, that depends on what the meta-package was designed to do and what you plan to do long-term. Let’s say the meta-package was for installing a photographic processing tool suite. When the meta-package is viewed in a package manager, it should state in its description that it can be safely deleted, in which case the answer is “yes”, you can remove it. If the meta-package doesn’t explicitly state it is safe to remove, then we’re probably dealing with one that’s designed to help with upgrades. In that case, we have a choice to make. We can remove any meta-package; removing it will not affect our currently installed system and software. But we’ll also have removed an upgrade tool. If a new release comes out with new software (apps and libraries that weren’t in the original install), then without the meta-package, only currently installed software will be upgraded when we do the upgrade. None of the new stuff will get installed unless, by chance, one of the existing upgraded apps now depends on it.

If we plan on doing a network upgrade instead of re-installing from an iso image, and we want to ensure a smooth experience and get all the newest apps then it’s best to leave the meta-package installed. If there’s an application the meta-package installed that we don’t want to use, just leave it and install what we want. With today’s large hard drives, the few megabytes the unwanted application occupies on it isn’t worth worrying about. If having the unwanted application show up in the main menu is annoying, check with the desktop’s documentation on how to edit the menus and remove its entry.

NOTE: Some applications will require changing file associations, so the desktop and file manager know to use the new application for working with those files. That’s generally done by right-clicking with the file manager on one of the associated files and changing its properties, usually something titled “Open With” or “File Association”. If unsure, check with the desktop’s documentation or on their website for how to change file associations.

If on the other hand disk space is at a premium, and/or you don’t care about future upgrades, then yes, you can safely remove the meta-package. Removing the meta-package does not remove any other currently installed software or cause dependency issues with any other currently installed software. The only thing potentially broken is the ability to do a complete release upgrade.

As you can see in this screenshot, there is nothing that depends on the meta-package except the meta-package itself.

“I removed the meta-package and now my package manager says all the ‘automatic’ installs are ‘manual’. What gives?”

Applications and libraries that are installed as dependencies of something else are called “automatic” installs in some package managers. They were installed because some other package needed them – in our case, the meta-package. Packages we install ourselves are considered “manual” installs. If we install an application that wasn’t on the original iso image, it will show up in some package managers as “installed, manual” or similar wording. All “manual” means is that the user chose to install it, rather than some other package pulling it in as a dependency. When we remove a meta-package, any application or library that was installed as a dependency of it will be re-marked as manually installed. It’s nothing to worry about. Nothing will be removed or fail to update because of it.

For example, let’s say we install Cherrytree, a note-taking application, on Ubuntu-MATE. Cherrytree doesn’t come with MATE; we installed it. It shows up in the Synaptic package manager as “Installed (manual)” under the Status section. Unless we ourselves remove it, Cherrytree will be there. Doing an ‘apt-get autoremove’ to clean up old residuals or doing a system upgrade will not affect Cherrytree at all. We installed it, we want it there and the package manager is smart enough to know that. Same holds true for all the packages that were ‘automatic’ and are now ‘manual’ because we removed the meta-package. The package manager no longer sees them as dependencies of the meta-package. Instead, it sees them as something the user installed and treats them accordingly. They continue to get updates and be treated just like any other package we install ourselves.

In summary, meta-packages make user installation of what could be a confusing mass of software easy. They make seeding disk images for new releases simpler for distributions. And, they often provide a convenient method for ensuring smooth, complete release upgrades. If you’ve read this far, hopefully we’ve shed some light on the mystery of meta-packages for you.

Exploring Tiling Window Managers


If you have a low-resource computer, one with a small screen like some laptops, or are even someone just looking for something different to try, a tiling window manager could be a good option. They’re not for everybody, but then they’re not just for command-line commandos either.

I’m a keyboard guy; I like using keyboard shortcuts and keeping my hands on the keyboard as much as possible. Besides, I suck at touch typing, and reaching for the mouse constantly just throws me all out of whack when I go back to the keyboard. It’s one of the reasons I was a KDE user for such a long time. KDE’s Plasma Desktop is probably the most keyboard-customizable desktop environment out there. Plasma had some quirks, though; there were bugs and oddities that annoyed me quite often. So I found myself in the “looking for something different” category.

I decided to give Gnome 3 a try. It’s a very nice desktop, though not as keyboard-friendly as Plasma, but close. It’s slick, well thought-out and quite customizable. I used it for a while, but—maybe it’s just me—it seemed like there was a lot of wasted space in the user interface. When putting applications side-by-side or corner-tiling them, so much of the screen was taken up with UI elements that it was annoying. That, and every time there was an update, something would break. Generally what broke were the extensions I had installed for it, and sometimes the theme I was using. I tried Enlightenment E17 next. It was also very nice, but it had a confusing settings manager and was just buggy. In time, I probably would have learned the configuration, sorted out the various bugs, and made better use of it. But time wasn’t a luxury I had much of. So then I tried MATE, and liked it a lot. Still do, and have it installed on my machine, though I rarely use it anymore. What I started to run into were theming quirks, mainly caused by GTK 3 apps and the constantly shifting way they’re themed. I run a very mixed set of applications, some Qt, some GTK 3 and some GTK 2, with a smattering of command-line ones. It was getting really hard to find a theme that would work well with all my apps and not pollute my ~/.xsession-errors file with warnings and errors. MATE is a great desktop and I still use it from time to time just for a change. But there were still some personal irks (quirks?) I had with it that kept me looking for another option. Since I’d been seeing a lot about tiling window managers, I figured what the heck, let’s give one of them a try. That was about 2 years ago, and I’ve been using one ever since.

There seems to be a general idea that tiling window managers are for the geeky, that they’re only for command-line gurus and those who choose to live in a CLI world. Frankly, most screenshots we see of them in action do nothing to dispel the idea; tons of terminals open in little tiles showing code and system stats are the common fodder. The plain fact is this is simply not true. GUI apps work wonderfully well in a tiling window manager. There are trade-offs, though. Tilers are very keyboard-centric, though you can still use a mouse for many things. If you’re a mouse maven, you probably don’t want to try one. They’re very basic, though many are infinitely expandable and customizable. Out of the box, they have few if any system-tray widgets, fancy menus or glitzy themes. They also don’t have compositors, at least none that I know of (yet), so if you want effects like drop-shadows and transparency, you’ll have to install a separate compositor. Probably the biggest sticking point for many is that while some tilers are usable right away, many require some configuration before you can even use them. All of them will eventually require digging into configuration files to get the most out of them. For these trade-offs, what you get is a window manager that makes ultimate use of your screen space, is blazingly fast, uses very little memory or system resources, and is totally customizable, but initially as bare-bones as it gets. Again, they’re not for everyone, and they don’t try to be. But if you’re looking for something that might be a better fit for your needs than your current desktop, and you’re not afraid to dig into a configuration file to make changes, then a tiling window manager just might be worth checking out.

There are dozens of tilers available; a quick web search for “Linux tiling window manager” will give pages and pages of results. So which should you try first? I think that would depend on your needs and skill level. If you’ve never edited a configuration file, touched a script or spent any time on the command line, but would still like to try one, I recommend one that is usable right out-of-the-box. That way you can try it out for a while in its default form and decide if you’d like to go further with it. For more experienced users, I think just reading up on the various ones you’re interested in should lead you in the right direction. You know your abilities. Looking at the tiler’s website or GitHub page should give you enough information to decide if you’d like to try it. For anyone’s first adventure in a tiler, I would recommend sticking with those that are in your distribution’s repositories. They’re easy and quick to install, and if you decide one’s not for you, it’s a lot easier to uninstall. However, if you’re comfortable compiling or have experience installing software from source, go with whatever interests you.

I’ll point out right away that I have not tried every tiling window manager out there, and I would not be surprised if there were others better suited for inexperienced users. If you know of one, hit the comments below and tell us about it. I would also strongly recommend that anyone who is going to try a tiler read about its keyboard shortcuts before trying it. Not knowing at least the basic keystrokes for opening an application and moving around within the interface is the biggest reason people who’ve tried tilers give up right away. Write them down, print them out, whatever works for you. Just have them ready before you start your experiment. I made that mistake the first time I tried one: I was greeted with a blank screen and had no idea how to do anything with it. Don’t make my mistake. Get the keyboard shortcuts first, and have them handy.

For those who would like to try a tiler with minimum fuss, I’d recommend one called i3. It has very few dependencies, is in almost every repository on the planet, is extremely popular with a large user base and, perhaps most importantly for first-time users, is usable right after install. In fact, it should show up right away as an option in your login screen after installing it. When you first log into it, it asks only one question: which key would you like to use as your default “meta” key? The “meta” key in tilers is the one you press first, combined with some other key, to generate a command the tiler will execute, like opening an applications menu. I recommend choosing the “Windows” key, as it’s generally not used for anything else in any other Linux application.

i3 Tiling Window Manager

This is a screenshot of the default i3 window manager. On the bottom is a bar showing the current workspace number (1) and some system statistics. By default, you move to workspaces by pressing the “meta” key and a number; Meta+2 would move you to workspace 2, etc. The current workspace number is displayed in the bottom-left corner, and any that are occupied with applications will also show up there in a different color. You open a menu of applications by pressing Meta+d. (Note: in default i3, as well as most of the others, the application menu lists all executable files on the system, both terminal apps and GUI ones, by their actual names. Tread carefully. Many tilers have an optional menu you can install that will only read the ‘*.desktop’ files DEs use, producing a more user-friendly menu, but it’s not included by default in most distributions.) A bar will appear at the top of the screen with all the installed applications in alphabetical order. Start typing an app name and the list will narrow down until you find what you wanted. Hit <ENTER> and the app will open, taking up the whole screen minus the bottom panel. Press Meta+d again, pick another application, and it will open to the right of the current one, each occupying half the screen. If you have the keyboard shortcuts (you did get them first, right?) you can manipulate the windows, switch them around, put them one on top of the other (called ‘stacking’), or tab them like in a web browser. You can open more windows, pre-select how i3 will tile new windows (horizontally or vertically), or switch to another workspace and open some there. There’s no wallpaper, though there are various means to put one on the desktop, covered in i3’s documentation. And like all the tilers I’ll mention here, there’s no compositing, meaning no drop-shadows or other effects, though that too can be added. i3 has truly great documentation on its website. Take the time to read through its basic use or watch one of the videos to see what you can really do with it. i3 may be one of the easiest tiling window managers to install and customize to your needs, yet it’s still very powerful and extensible. Plus, with its large user base, there’s plenty of help and ideas available. There’s good reason it’s one of the most popular tilers in use today—if not the most popular.
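To give a feel for how approachable i3’s single plain-text configuration file is, here are a few representative lines in the style of ~/.config/i3/config. These assume the “Windows” key was chosen as Meta; treat the exact bindings as a sketch rather than a copy of the shipped defaults:

```
# use the 'Windows' key as the modifier
set $mod Mod4

# open the application launcher (dmenu by default)
bindsym $mod+d exec dmenu_run

# switch to workspace 2
bindsym $mod+2 workspace number 2

# choose how the next window will be tiled
bindsym $mod+h split h
bindsym $mod+v split v
```

Every binding follows the same readable “bindsym keys action” pattern, which is a big part of why i3 is so easy to customize.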

The second minimum-fuss tiler I’d recommend is Awesome. Like i3, Awesome is usable right away. The difference is that Awesome doesn’t ask which key you’d like to use for Meta; it defaults to the ‘Windows’ key, though this is configurable if you later decide to change it. It also has a scrollable menu accessed via the Meta+d key like i3. Additionally, Awesome has extensive documentation on its website and a large user base. But it’s not quite like i3 when you first start it, as you can see in the screenshot.

Awesome Window Manager

By default, Awesome puts a panel on the top, with a menu icon and all the workspaces in the top left. At first the clickable menu icon shows a very sparse menu, but it’s extensible via the configuration. Workspaces that are currently occupied have a little square icon above them, and the currently viewed one is in a lighter color. On the right, instead of a lot of system information like i3, it has the current date and time, and an icon showing the current tiling mode. With Awesome, how it tiles applications can be changed both by keyboard shortcuts and by clicking on that top-right icon. One of the ’tiling’ modes is actually a free-floating type, much like a regular stacking desktop. In that mode, applications are opened as regular windows and you can hold the Meta key down while left-clicking anywhere on them to move them around the desktop; Meta+right-clicking on them allows you to resize them. Unlike i3, Awesome does use wallpapers out-of-the-box, but there’s no compositing by default. Awesome is extremely configurable, but there’s a catch. Its entire user interface is configured in Lua script, so making changes and configuring it to your needs is going to require a bit more work. Basically, you’re going to need to learn how to do it in Lua. Awesome also uses several files for its configuration, as opposed to i3’s single one, so it can be a bit confusing if you have no experience working with multiple configuration files. Awesome makes up for it by having extensive documentation for every aspect, so a little time reading and experimenting can produce great results. Despite being easier to get started with than many of the other tilers out there, it’s still not as easy as i3 for newbies to configure. That being said, I still recommend it for those who want to try a tiler out. It needs no configuration to just try it. If you decide you’d like to go further with it, the documentation and hundreds of user-contributed tools and widgets will help you create a desktop that rivals any other out there.
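To illustrate the Lua point, here is a heavily trimmed sketch in the style of Awesome’s rc.lua. The real file is much longer, and these particular bindings are illustrative rather than a copy of the shipped defaults:

```lua
-- rc.lua: Awesome's entire interface is configured in Lua
local awful = require("awful")
local gears = require("gears")

modkey = "Mod4"  -- the 'Windows' key

-- key bindings are built up as a Lua table
globalkeys = gears.table.join(
    -- spawn a terminal with Meta+Return
    awful.key({ modkey }, "Return",
        function () awful.spawn("xterm") end),
    -- cycle to the next tiling layout
    awful.key({ modkey }, "space",
        function () awful.layout.inc(1) end)
)
root.keys(globalkeys)
```

Compare that with i3’s “bindsym” lines and you can see why Awesome takes more effort to customize, but also why it’s so much more programmable.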

For more experienced users, there are plenty of options out there to choose from. Both of the tilers already mentioned are excellent choices for users of any level, so I'd recommend them just as heartily to veterans. Others worth considering include Xmonad. It's very popular, though it requires configuration in Haskell, so it's not for the faint-hearted. Herbstluftwm is another one gaining popularity. Or you might consider the 'original', Ratpoison, one of the first tilers, if not the first. A quick web search will give you some idea of how many there are to choose from. Your experience level will dictate more than anything which you choose to try. For me, it ended up being one called "bspwm", an acronym for "Binary Space Partitioning Window Manager". A friend of mine suggested it about a year ago, and once I got it configured to my liking, I've been using it ever since. While I'd say it falls into the "experienced user" camp, it's not overly complicated to install and use. The defaults for the window manager itself are sane and usable out of the box. It's getting a panel to do what most users would want that can be daunting, at least with the default recommendations. If you're competent in Bash scripting, configuring the default panel should be no problem. For myself, I took an easier route and installed tint2, then configured it to my needs.
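To give an idea of how approachable bspwm's own configuration is, here's the sort of thing a minimal bspwmrc might contain. This is only a sketch; the workspace names and values are examples, not recommendations:

```shell
#! /bin/sh
# Hypothetical ~/.config/bspwm/bspwmrc fragment.
bspc monitor -d web mail code irc misc   # five named workspaces
bspc config border_width 2               # window border in pixels
bspc config window_gap   8               # gap between tiled windows
bspc config focus_follows_pointer true   # focus the window under the mouse
```

Keyboard shortcuts live in a separate sxhkd configuration file, which is part of why a little shell-scripting comfort helps with bspwm.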

My bspwm tiling window manager

You can see in the screenshot the tint2 panel at the top, showing all ten of my named workspaces, the icons for the apps currently running in them, the system tray icons, and the current date and time. You'll also notice that tilers work just fine with GUI applications. Here I have QuiteRSS (a Qt application), along with the Atril document viewer and the Pluma text editor (both GTK apps), all getting along very well. Not a terminal in sight. I do use terminals; if you look carefully at the tint2 panel you'll see about ten of them open, running various things in different workspaces. But I don't live exclusively in them just because I'm using a tiling window manager. I use what I've found over time works for me, regardless of whether it's terminal-based or GUI.

Earlier, I mentioned compositing. I don't know of any tiler written for the X windowing system that has it by default; if you know of one, please hit the comments below and let us all know about it. However, you can get basic compositing while using a tiler. Xcompmgr and Compton are two of the best known, though Xcompmgr hasn't seen any updates in a while. I use Compton myself. To get one working with your tiler, install it, then (depending on which tiler you're using and how you want to start it) load it via your tiler's startup configuration, a script the startup calls, or your X startup script. Check your tiler's documentation to find out the best way to start it.

These compositors are very basic. You can get drop-shadows and transparency with them, but little else; if you're after wobbly windows or flying cubes, you're going to be out of luck. Also, while the compositor will work out of the box in most cases, it'll take some configuration to get the right look and performance out of it. My feeling is: while a compositor is not required to use or try out a tiler, if you're going to be running a lot of GUI apps in it, you might consider adding one at some point. Most modern GUI apps expect a compositor and can look pretty ugly without one. There are some, though few, that just plain won't work right without one; pretty much anything that requires xrender or OpenGL is going to have problems or be unusable. Remember, it's not required to simply try the tiler. I wouldn't recommend installing one until you're sure you'd like to go deeper into your experimentation; it'd just be something else you'd end up having to uninstall. The GUI apps might not look like they're supposed to, but most are perfectly usable for trying things out. More than a few full-time tiler users run GUI apps and never use a compositor, caring less about how an app looks than how it functions.
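As an example, if you start X with startx, a couple of lines like these in ~/.xinitrc will bring up Compton before the tiler. This is just a sketch: the flags only enable basic client-side shadows, and bspwm stands in here for whatever tiler you actually use:

```shell
# ~/.xinitrc fragment (sketch): start the compositor, then the tiler.
compton -b -c &    # -b: run as a daemon, -c: enable client-side shadows
exec bspwm         # replace with your tiler's startup command
```

Some tilers prefer that you launch it from their own autostart file instead; either way, the compositor just needs to be running in the background of the X session.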

There are other trade-offs with a tiling window manager to consider when deciding whether you'd like to try one. Most have no fancy menus, no notification pop-ups, no desktop widgets, and no system tray applications of their own, or at most only very limited and basic ones. These little tools, such as a sound volume control, a clipboard manager, or informational pop-ups for the apps that use them, are pretty standard on most DEs but not on tilers (a known exception is Awesome, which has its own notification pop-ups). You can, however, install basic stand-alone ones that will get the job done. I use Parcellite myself as a clipboard manager, a great little app that's light and easy to use, and volumeicon as my volume control. Dunst fills in for the notifications. All are readily available in most repositories, and there are scores more to choose from. Some of the tilers, like Awesome, have user-contributed scripts and widgets native to the tiler that do the same thing. There are lots of options, but be forewarned that out of the box you're going to be facing a very basic, bare-bones desktop.

Another trade-off: if you're going to get the most out of a tiler, you're going to need to edit configuration files. Some, like i3's, are pretty simple to understand, with great documentation to help you along and a large user base to lend a hand on sticky points. Others, like Xmonad's, are much more complex, so it's going to take some time and learning to get things where you'd like them. Your skill level and how far you want to take it will also be deciding factors in whether a tiler is something you want to try, and if so, which one you choose.

In the end, I can only say what was attractive to me, and why I chose to use one as my daily driver. First, as I've already mentioned, I tend to be keyboard-centric, and tilers are nothing if not keyboard-centric. Second, while I have a fairly new, powerful computer with a large screen, I like the fact that tilers are light. I like having that much more memory available for my apps, as well as having fewer background processes taking up CPU time. Third, they just make great use of the screen real estate. Fourth, while a tiler's main purpose is to tile windows and make maximum use of screen space, all offer the option of "floating", or popping a window out of its tile to work in. I can then resize the window to whatever makes it easier or more convenient to use. I can also full-screen any app for as long as I need to. Whether it's full-screen or floating, once I'm done I can press a key combo and either close it or tile it back in place. With my preferred tiler, bspwm, I can also choose exactly where and what size a new tiled window will be before actually opening the application. All in all, I've come to love the lightness, speed and efficiency a tiling window manager offers, and I find I really don't need or miss any of the bells and whistles of fuller desktop environments.

What about the future? With all the currently favored tiling window managers firmly based in the X window environment, what will happen as Wayland gains adoption? While most of the current tilers show little to no interest in porting to Wayland, there are plenty of new tilers being written especially for Wayland, as well as some ports of existing ones. There's orbment on the horizon, an i3-compatible one called sway, Orbital and several others. While some (even many) of the current tilers may not be around once most DEs and distributions have made the transition to Wayland, there certainly won't be a lack of tilers to try or use. On the plus side, once Wayland becomes the standard we won't need a separate compositor anymore; Wayland itself handles that.

A tiling window manager isn’t for everyone. If you’re a mouse-maven, extensively use a touch screen, or just can’t live without wobbly-windows and exploding menus, then a tiler is not for you. There are trade-offs and a bit of a learning curve with any that you might decide to try. But, if you’re on a low-resource computer, have one with a small screen that seems a little cramped with most DE’s, or are just a bit adventurous and want to try out something different, a tiling window manager might be what you’re looking for. Despite what you may have heard, read or seen in screen-shots, they’re not just for command-line aficionados and ultra-geeks. You might even find they’re exactly what you’ve been looking for. I did.

The X First Aid Kit For Linux

tmux with weechat, mc and w3m in panes

If you spend some time on Linux-based social media sites or forums, you inevitably see posts where a user is having trouble logging into their desktop. Either the boot finishes and they’re left looking at a black screen, or they can’t seem to get past the display manager login.

“I enter my user name and password, the screen goes black for a second, then it goes back to the login screen.”

One of the most common 'fixes' I see suggested is to reinstall the distro, or various parts of it. I cringe every time I see this. There's really no reason to reinstall the OS unless you've been screwing around in the system to the point that it's trashed beyond repair, or the installation ISO was bad to begin with; and in that case, reinstalling from the same ISO won't fix anything. For the second instance, here's a suggested rule of thumb: always check the hash signature of a downloaded ISO against the one posted on the distro's website before trying to install. Most distros put the signature right on the download page, though with some you might have to search for it elsewhere. Take the time to find and check it; it can save hours of headaches later.
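The check itself is a one-liner. This sketch uses a stand-in file so it's self-contained; in real use, 'distro.iso' would be your download, and the SHA256SUMS file would come from the distro's download page rather than being generated locally:

```shell
# Create a stand-in "ISO" so the example is self-contained.
printf 'pretend ISO contents\n' > distro.iso

# The distro would publish this checksum file; here we generate it ourselves.
sha256sum distro.iso > SHA256SUMS

# The actual verification step: compares the file's hash to the published one.
sha256sum -c SHA256SUMS && echo "checksums match"
```

If the hashes differ, sha256sum reports FAILED and exits non-zero, and the download should go straight in the bin.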

Another common scenario occurs after an update of a perfectly working system, when the X server, display manager or desktop refuses to start. In either case, whether a new install or an update has gone bad, reinstalling, uninstalling or rolling-back shouldn’t be the first course of action. You can probably fix the problem yourself with a few tools and a little time.

Let me state here though, if you have a phobia or complete aversion to the command-line (which I will refer to with the acronym ‘CLI’ from here on), you might as well stop reading this now. Same goes for those who have no real interest in how their system works or trying to fix it themselves.

One of the beauties of Linux is that it was designed from the start to be a multi-user environment. My first install of Linux back in the Dark Ages didn't have a GUI, but I quickly learned that if I pressed CTRL+ALT and a function key, I could log into another session and have a whole new screen to work in. Pressing CTRL+ALT and a different function key let me switch back and forth between sessions where I was doing different things, much like the virtual desktops in a GUI. A bit crude perhaps by today's standards, but it worked, and it was instrumental in hooking me on Linux. Fortunately for those whose GUI has failed them, this still works on any Linux system. Even now, while you're reading this, you should be able to press CTRL+ALT+F1 and switch to a command console with a prompt for you to log in. Most modern distros hook the X session to F5 or F7; you'll probably have to try it to see which function key brings you back to the GUI.

A WORD OF WARNING: on some distros when using the proprietary drivers (nVidia or ATI/AMD) switching to a console can cause the driver to unload, destroying your X session. If you use one of the proprietary drivers, don’t try switching to a console, especially if you have anything running in the GUI that you can’t or don’t want to lose. If you want to try it, start from a fresh, empty session. If you find you can’t switch back to the GUI, or X has died, do a ‘sudo reboot’ from the console to restart the system. Yes, there are ways of restarting the X server, but they can differ depending on the distro, so you would have to check with your particular one to find out how to do it. A reboot should work everywhere.

Now, we’re not going to discuss how to fix a broken X or display server here. There’s simply too many differences in how it gets started in the various distros, because what works in one may not in another. What we are going to cover is some tools almost all distros will have in their repositories. Tools I recommend installing that can help in the event of a failed GUI They’ll not only help you find and fix the problem, but more importantly some allow you to connect with the outside world to get help and information. They’re CLI programs that are the first things I add to a new install. After years of trying tons of different ones, I’ve chosen these due to the fact you don’t need to be an uber-geek to use them. They’re all fairly simple to understand and work with.

The first thing I recommend is to learn how to use the distro's CLI package manager. All distros have a command-line utility for installing, removing, updating or repairing packages. Whether it's apt-get, pacman, zypper, urpmi or whatever they call it, it's there and installed by default. Learning the basics can make all the difference when you find yourself stuck in console mode. Often a problem stems from a broken package or missing dependency, so learning to make at least basic use of the CLI package manager is well worth the effort. With some distros it's the only way of working with packages, since they have no GUI tool; others wrap it in a pretty GUI, and many users never touch the CLI version. Try it; in fact, try installing some or all of the following CLI applications with it. You might find you prefer it to the GUI one, and if not, at least you'll know how to use it in an emergency.
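As a cheat-sheet, and only a sketch (check your own distro's documentation for the authoritative syntax), the same everyday tasks look like this on a few common package managers, using mc as the example package:

```shell
# Debian/Ubuntu (apt-get)
sudo apt-get update && sudo apt-get upgrade   # refresh and update everything
sudo apt-get install mc                       # install a package
sudo apt-get -f install                       # attempt to fix broken dependencies

# Arch (pacman)
sudo pacman -Syu                              # refresh and update everything
sudo pacman -S mc                             # install a package

# openSUSE (zypper)
sudo zypper refresh && sudo zypper update     # refresh and update everything
sudo zypper install mc                        # install a package
```

Knowing just the install and update forms for your own distro covers most console-rescue situations.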

Midnight Commander


Top of my list is Midnight Commander, otherwise known as 'mc'. This is a dual-pane file manager on steroids. There are plenty of others, and you may want to try some of them to compare for yourself. I chose mc because it's intuitive, and most operations can be done via menus or function keys. The most common file-operation function keys are shown right at the bottom of the screen. It has a built-in help system, so it's easy to find out how to do something you need that isn't in the menus. It also has a built-in text editor, and although most distros install a text editor like nano, vi or vim (or all of them) by default, I find being able to navigate to a file with mc and then edit it, all within the same application, fast and convenient. The built-in editor might not have all the power and options of the others, but for simple script editing it's more than adequate. Like the file manager itself, the most common text operations are mapped to the function keys listed at the bottom of the screen, while others can be found via the help screens.

MC Editor

Fixing a bug in a startup usually involves editing a script or configuration file, and mc's editor will do that easily. You'll also likely find yourself looking through log files to track down a problem; again, mc makes this much easier than having to remember a bunch of terminal commands. Another nice feature: if you need to issue a terminal command while in mc, you can do that too and view the output, all without having to quit mc.

As with all the applications I'll suggest here, or any you might choose to use instead, take some time to learn the basics of its use. After installing, pop it open in a terminal on your desktop and try it out. Get to know how to navigate around with it, open a file for editing, view a file, search for one, etc. A little time spent with it once in a while will help tremendously if and when you actually need it. Who knows, you might just like it enough to use it regularly. I do. I find myself using mc in a terminal just as much as, if not more than, my GUI file manager, simply because for many things it's faster and more powerful. Also, I tend to be keyboard-centric, keeping my hands on the keyboard, and GUI file managers require a lot of mouse work. With mc it's pop open a terminal, type 'mc' at the prompt, hit ENTER and go to work.


The next CLI application I install is htop. I think every Linux distro on the planet installs 'top' by default, and it is a great tool for seeing what processes are running (or not running) and much more; it's truly a Swiss Army knife of process investigation. There are tons of things it can do, but it's not very intuitive or user-friendly. And while there are plenty of other process-viewing applications around, I find htop the easiest to read and most intuitive to use. Like mc, it has common operations bound to function keys listed at the bottom of the window. Often when starting to diagnose a problem, one of the first things to do is find out what is running and what isn't. At a glance you can see if the X server is running, if the display manager is, or any other process you're interested in. Using htop's tree view, you can see the parent/child relationships of processes, what process spawned what, giving an understanding of how they all connect. With its sort command you can sort by PID to see what order processes started in, since startup order can matter for things that depend on something else. You can sort by CPU usage to see if something's hogging the system, and in many other ways to find out what you want to know. You can see each process's status, whether it's running, sleeping or "zombied", and much more. While htop can't fix anything, it's an easy-to-use tool for getting an idea of the current state of the system, which can help in diagnosing the problem.
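A handful of htop's stock key bindings cover everything described above (these are the defaults in recent versions, and they're also listed on htop's bottom bar):

```
F5  toggle tree view (parent/child relationships)
F6  choose the sort column (PID, CPU%, MEM%, ...)
F3  search for a process by name
F9  send a signal (e.g. kill) to the selected process
q   quit
```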

w3m CLI Web Browser

For many users, the computer they're having problems with is the only computer they own. If X or the display manager or desktop fails to start, they lose access to their best resource for help: their web browser. You can use a smartphone to seek help, but if you've ever tried to read something like the Arch Wiki or Ask Ubuntu on a phone, it's not all that easy. Fortunately, there are several very good CLI browsers available. My personal choice is w3m, mostly because it's fairly easy to use and it can display graphics. Lynx is another popular CLI browser. No CLI browser will give you the pretty laid-out view of a web page that a GUI browser will, but when we're trying to find information, text is really all we need. w3m has the added benefit that if a web page has a graphic, like a screenshot showing something as an example, w3m can display it. As you can see in the screenshot here, a CLI browser is pretty basic in rendering a web page, but it will get the job done.

To invoke w3m at a console (or a GUI terminal), type 'w3m web-address', 'web-address' being the website you want. In the example, when the page came up, I typed in my search term (in this case 'systemd'); the page displayed is the result. Arrow keys, PGUP and PGDN all work to move around the page, TAB moves from one link to another, and ENTER follows a link. Admittedly it's not the easiest way to browse the web, but CLI browsers are fast and light and will get the job done. Again, it helps to spend at least a little time using the browser in a terminal window from your desktop, just to get familiar with its basic use; that way you're not faced with figuring out how to work it when you actually need it. And while I wouldn't want to use w3m or any other CLI browser as my daily driver, it and Google have bailed me out more than once when I was stuck at a console. I consider a CLI browser a must-have for any Linux first-aid kit.
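Two invocations cover most rescue work with w3m (the addresses here are only examples):

```shell
w3m duckduckgo.com                 # open a site interactively in the terminal
w3m -dump example.com > page.txt   # render a page to plain text and save it
```

The -dump form is handy when you want to keep a set of instructions on disk to read later while you work.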

Weechat IRC

There are times when we just can't seem to find the problem on our own, and we need more help than we're finding on the web. This is where IRC can make the difference. It's one of the oldest forms of social networking. Almost every distro out there has an IRC channel, and many of the larger distros have several. Getting on a distro's IRC channel and asking for help can sometimes make the difference in whether we get the system going again or not. Linux is blessed with many great CLI IRC clients. The two most popular are probably irssi and weechat. I prefer weechat myself, because it can split its own window into several panes, so you see multiple connected channels at the same time. I like weechat so much that it's actually my daily driver on IRC. Getting the full benefit of what it's capable of takes some time in configuring and learning it, but it's worth the effort in my opinion. However, as part of a first-aid kit it doesn't need any configuration; it's perfectly usable with its default settings.

Whichever CLI IRC client you decide to use, learn how to connect to an IRC server and join a channel with it. If you've never used IRC before, now's a great time to try it! Any of the many online IRC primers will teach you the basics. Then check your distro's website to see what IRC channels they offer, what network they're on, and anything else they think is important for you to know. When ready, take your CLI IRC client for a test drive and join the channel. You don't have to say anything; you can idle on the side and watch how things work if you wish, or jump right in and say "Hi".
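For reference, connecting and joining from inside weechat takes only a few commands. This is a sketch: the network, port and channel below are examples, so substitute the ones listed on your distro's website (and note that newer weechat versions use -tls where older ones used -ssl):

```
/server add libera irc.libera.chat/6697 -ssl   # define the network once
/connect libera                                # connect to it
/join #yourdistro                              # join your distro's channel
/quit                                          # leave when you're done
```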

When using IRC for problem solving help: be patient. When you ask a question on IRC, it’s not uncommon for some time to pass before anyone answers. Even when you see several people chatting away on the channel, most will not answer a query when they honestly don’t know the answer themselves. Likely there’s someone there who can help, but they may not be looking at the IRC chat at the moment and don’t see the request. Give it some time, 5 or 10 minutes minimum to see if there’s a response. If not, then try again, politely. If you’ve been watching someone helping another with their problem, you could ask them directly if they have any experience with yours. Something like this:

“I’m having a problem with X. You seem to know quite a lot, think you could help? I’d greatly appreciate any advice.”

Most importantly, be patient and be polite. Over the years I’ve had great luck with help from IRC users, but admittedly it often took a while before someone came along who could help.

tmux with weechat, mc and w3m in panes

The last app for our first-aid kit is totally optional, but I find it quite handy. It's what's known as a console or terminal multiplexer. As I stated earlier, in Linux you can have several console sessions open on different terminals, switching with the CTRL+ALT+F# keys. You could have mc running in one, htop in another, w3m in another, weechat in another, and so on. The problem is, sometimes we're following instructions from the web on something that's a bit complex, and switching to another screen where we can't see the original instructions can be an issue. Trying to remember a complicated sequence of commands can lead to errors, possibly making things worse. The truth is, we're quite spoiled by our GUIs, where we can have multiple windows open on a single screen and follow the instructions directly. Fortunately, we can also do this in a CLI console with a multiplexer.

The two most used, and available in pretty much any distro, are 'screen' and 'tmux'. Personally, I use tmux. With tmux you can have multiple screens, similar to tabs in a browser, as well as splits in the current screen, much like a tiling window manager. Like all the other apps I've suggested, tmux can do many more things, but for our purposes the most important is the ability to place several of the other tools on a single screen, as shown in the screenshot. It lets us do what we need while reading instructions, all on the same screen. And like the other applications, tmux is extremely configurable but requires no configuration to use; the example in the screenshot is plain out-of-the-box tmux. Learning a few of its default key bindings, so we can create splits in the window and move between them, is really all that's needed.
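Out of the box, a handful of default tmux bindings are enough to build a screen like the one in the screenshot. You can also script a layout up front; the session below is only a sketch, and it assumes the tools from earlier in this article are installed:

```shell
# Default key bindings (press the Ctrl+b prefix first, then the key):
#   %   split the current pane left/right
#   "   split the current pane top/bottom
#   o   jump to the next pane
#   x   close the current pane
#   d   detach from the session

# Or script a rescue layout in one go:
tmux new-session -d -s rescue 'mc'       # new detached session running mc
tmux split-window -h -t rescue 'htop'    # side-by-side split running htop
tmux split-window -v -t rescue 'w3m duckduckgo.com'
tmux attach -t rescue                    # attach and work in the combined screen
```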

Admittedly, trying to diagnose and fix a problem with X, or anything else, from a command console is not for everyone. There are many of us who just want to use the computer and couldn't care less about how it works. But if you're like me, interested in how it functions and willing to try and learn new things, these tools can be invaluable when the GUI goes south. For many, the command line is some mysterious place only uber-geeks go; these tools put a more familiar, friendlier face on it. With them we can move around in our system, see what it's doing, and connect to the outside world. Over the years I've tried many utilities, and these are the ones I've settled on, because they're simple to use and get the job done. Even though I've used Linux for nearly 20 years now, I'm not a programmer or command-line guru. But I do like to tinker, and I've broken my systems more times than I can remember. Being able to find help from the web at the console has been priceless. And honestly, most fixes have been very simple once I found where the problem was: things like wrong permissions on an executable, a script calling something that doesn't exist or had its name changed in an update, or a missing package. Only once in all those years have I screwed things up so badly that the only recourse was to completely reinstall the system. Besides, there's a real feeling of accomplishment when we can fix something ourselves.

What do you do with bugs?

Software bugs are an everyday fact of life, and the sad truth is we’re never going to be free of them. What causes them, and what we can do when we encounter them is what we’ll explore here.

Anyone who develops software starts on their own computer. They have a specific make and model of machine, with all the hardware that manufacturer decided to incorporate in building it. The programmer is using whatever distro they prefer, with that distro's choices of lower-level software (kernel, modules, libraries), and that distro's spin on their desktop environment of choice. The programmer is also using the support libraries for whatever environment they're programming for (GTK+, KDE, etc.). When they're at the point where everything they're trying to accomplish with their application works, it's usually pushed out for some process of review, where they hope to catch any glaring bugs and fix them before release.

Think about all the variables involved in that process. Start with the programmer’s computer. How many different makes and models of computers are there? All computer manufacturers have one thing in common, they’re trying to make money while at the same time keeping costs low. They mix and match pieces, change hardware configurations, use differing makes and models of peripherals, not to mention core components like the motherboard, chips, bus types, etc. The sheer number of possible hardware variations is staggering.

Next think about the programmer’s distro of choice to work in. Every distro does something different with their core components to try and stand out, be what they feel is better than the others. They can and do change how they compile their kernels, modules, libraries, as well as how they interact with each other. Source code versions can be different, compile time flags are changed, init systems, what’s started, how and when, can all be modified. Another mind-numbing amount of variations is possible.

Then there’s the toolkit the programmer is using. Most will work with whatever is current and supported by the environment they’re targeting, and make any support libraries their application needs “requirements” for packaging. Unfortunately, it’s the distro we choose to use that has to compile and package those requirements, and they might do something different from what the programmer desired or used themselves.

The plain fact is, there are so many variables involved in computers and computer software that a programmer or team of developers can't possibly foresee or account for them all. The very best developers can't program for, nor anticipate, all the possible environmental and hardware variations their software is going to encounter in the wild. Many larger projects have QA teams or similar, which "torture test" software to find and fix bugs before release. While that certainly helps a lot (much is caught and fixed in testing), it's still impossible to account for every variation. We as users are going to find bugs sooner or later.

So before you go on a tirade in social media or other public venue about “buggy crap” software (you can insert your own vile and demeaning adjectives here), think about what the developers are up against. Think about what you are doing. In the FOSS ecosystem you’re railing against people who gave their time to create something good and useful, then gave it freely to the world. Often at some personal and/or economic costs to themselves. In essence, you’re dragging over the coals someone who is trying to help you!

Consider this: Apple controls both hardware and software creation on their platform. They build the hardware, so they know exactly what is in it. They create the software to work with that hardware. They control the toolkits, libraries, kernel, modules, init, everything. They have an extensive and thorough QA process that all software has to pass. And they STILL have bugs!

As I said when I started, bugs are a computing fact of life. There will always be bugs, the sheer number of variables in modern computing makes that inevitable. However, we in the FOSS world have an advantage. We can do something about the bugs we encounter. We can be proactive, and we don’t need to be a programmer to do it. Almost all software projects have bug trackers, systems where users can report bugs and help find a solution for them. Those that don’t will at least have an email address or IRC channel, some form of contacting the developers. All that’s required is a little patience and a willingness to spend some time helping find a solution. Time and patience, these two things are vital to squashing bugs. If we take a few minutes to just browse through a bug tracking system, we’d see hundreds (if not more) cases of someone reporting a bug with little to no information useful to the developers, and never following up. Or worse, ranting about a bug without even trying to be helpful in squashing it.

The only way a developer can kill a bug is if they understand exactly what’s causing it. The only route to understanding is if the user supplies all the information necessary, and follows up with whatever further tests or information the developer needs. It requires time and patience. It also helps to have a positive attitude and be polite. In the course of trying to kill the bug, the developer may ask you to do or try something you don’t understand. If and when that happens, tell them you don’t understand and ask for guidance. Remember, you’re both asking for help with a problem. They need your help to fix the problem, so don’t be shy in asking for their help in return. Most importantly, stick with it. There IS a solution, it just might take some time to find it. Chances are it’s working properly on their system, so they need to diagnose what is different between your system and theirs, and then figure out what to do about it.

Not all users can be bug chasers. Many of us just don’t have the time or patience for it. That’s okay, because we can still be helpful. How? By NOT trashing the software or developers because we have a bug. Put yourself in the developer’s shoes. You’re trying to create something useful and giving it freely to the world, and someone is publicly flaying you for it? How would you feel? Don’t be a troll. If you don’t have the time or patience to help with a bug, that’s fine. But don’t be a troll instead.

For those of us who do have the time and patience, chip in! That's the beauty of FOSS: even those of us who aren't programmers can still be actively involved in software development. Report the bug. If it's already been reported, add your information. Most bug trackers have a guide for how to submit information; follow it and include yours, don't just add a "Yeah, I have that bug too." The more information the developers have on how a bug is affecting users, the more likely they are to find a solution. Then take the time to follow up on it to its resolution. By staying actively involved until it's fixed, you'll be able to honestly say, "Yep, I helped in the development of that." You'll have changed from being a "user" to being a "contributor". Then the real reward happens: you get to soak up the feel-good that comes with knowing you've helped on something the whole world can use.