A Quick Comparison of Ubuntu MATE vs Linux Mint MATE

Ubuntu MATE 15.10

Today John writes:

I’m really liking the new Ubuntu MATE 16.04 and strongly considering switching over to it after the official LTS comes out. I’m currently using Linux Mint MATE and really liked it, but the look of Ubuntu MATE takes me back to my early Ubuntu days. How do I explain to those in my Linux SIG why I’m considering the change? Other than the Ubuntu MATE top panel design, what are the differences between the two, in terms like you did with the engine? Maybe it’s the new Software Boutique, maybe it’s the Welcome app, but I’m sure there are better reasons that I want to go with Wimpy’s team’s development.

This is an excellent question, John, and while others reading this will disagree, I’ll share my opinion of what makes the two distros different. Keep in mind, this is my opinion. Your mileage from this opinion may vary. Also, this is by no means an exhaustive comparison. This is merely touching on important elements that I think are worth noting.

LTS vs latest release

One difference between the two distros is that one is based on Ubuntu’s latest release and the other is based on Ubuntu LTS. There are advantages and disadvantages to both. If you’re struggling with software bugs, the latest release option might be a better fit for you. Otherwise, if the LTS model is working for you, awesome – there are no concerns to worry about.

Provided tools

  1. Welcome Menus – Ubuntu MATE and the Linux Mint MATE edition both provide their own tools. However, the biggest difference is absolutely Ubuntu MATE’s Welcome app. What you see in Ubuntu MATE 15.10 doesn’t do it justice. It’s mind blowing in the 16.04 beta. For new users, it walks the user through every important aspect of setting up/customizing their user experience. Instead of merely providing links and access to help online, Welcome goes about ten steps farther.
  2. Getting Started Section – In the Getting Started section for example, a complete Linux newbie is taken by the hand and provided push-button access to software updates, drivers, input settings, backups, firewall, customization, keyboard shortcuts and, of course, a troubleshooting section.
  3. Troubleshooting Section – The troubleshooting area inside of Getting Started provides an attractive, human-readable display of the components your PC is made up of. For deeper troubleshooting, the Utilities section provides everything you need to drill down into any problem areas that might be affecting your system. From here, take any discoveries you make back to the Chat Room and Community buttons on the front page of Welcome for help. I’ll stop there, as this doesn’t even include the Software Boutique…which uses a Windows-to-Linux approach for software discovery. Mint’s welcome app shares some of the same functionality, but lacks Ubuntu MATE’s more advanced built-in features.
  4. Software Boutique – With regard to software installation, Ubuntu MATE starts off at the software boutique and then allows you, within the boutique, to install whatever type of software installation GUI you want. Mint provides its own Software Center. Neither approach is better or worse than the other, just different.

Little under the hood tweaks

  1. Working Touchpad Settings – When I install Ubuntu MATE onto any laptop, my touchpad is disabled while typing. This prevents cursor jumps when typing and is a wonderful feature. This feature was not included with Mint by default. I’m sure someone could argue that Mint’s touchpad settings can do this for you, but in my experience the GUI for this feature rarely works.
  2. Power Management – Then there is power management. In 2016, it’s grounds for flogging not to have TLP installed on a system by default. Ubuntu MATE has this whereas Mint does not. This makes Mint fine for desktops, but on laptops it means a user must know TLP exists and then they’ll need to install it. TLP is critical for power management on laptops as it detects when you’re connected to your power supply and when you’re running off the battery. From there, it provides you the best power settings to maximize your battery life for each session. It’s also fully automatic and works on just about any laptop.

That’s what does it for me. Even though I don’t “need” Welcome, for example, that doesn’t mean I value it any less. My own mom uses Ubuntu MATE and Welcome – it’s a great distro. So I guess my advice is this – if what you’re using works for you, awesome. But if you’re wanting to try something different, Ubuntu MATE is a great choice.

Do you have Linux questions you’d like Matt to help with? Hit the link here and perhaps you too, can Just Ask Matt!

What’s The Difference Between GNOME, Unity And MATE?

Today Skip writes…

I’ve been using Ubuntu (with an NVIDIA card of some sort) for ten years now, but I’ve NEVER understood the difference between GNOME (I thought I liked that), Unity, and MATE (THREE thumbs UP!). Is there a simple explanation?


Boy Skip, this is a tall order. I’ll do my best though.

MATE and GNOME are considered desktop environments. Unity…and this is where it gets confusing…is a graphical shell that sits on top of the GNOME platform, taking the place of GNOME Shell. Confused yet? Okay, bear with me as I break this down.

Think of a car

So we have car. This car has stuff like an engine, a radio, a gas pedal and the brake pedal. The GNOME desktop is basically your car radio, heater/AC, gas pedal, brake pedal, steering wheel, seat belts…stuff you would use to start, drive, stop and generally click buttons/turn knobs to make stuff happen in your car. The same exact thing would apply to MATE, as it’s the same brand of car, but a slightly different model with different features.

Both GNOME and MATE share one thing in common – the car’s engine. As we know from cars, they may share an engine type…but sometimes they’re slightly different models. In this case, our engine is called the GNOME Shell.

When I use the gas pedal, the engine responds. The same thing when I turn the key, the engine responds. And when I hit the brakes, the engine will go into an idle once the car’s at a complete stop. Each component of the car I use (very loosely speaking, obvious exceptions applied here) affects the engine at some level.

Then we have Unity. To use Unity, we need to use an engine lift to remove the existing engine (GNOME Shell), so we can drop in a new engine called Unity. This new engine isn’t really better or faster than your old one. It’s just a different type of engine. This different engine still uses the same gas, brake, radio, AC and other components as it did with the previous engine.

Wait, so what is the Linux kernel in car speak?

Ugh, this is going to be where my entire analogy will get people on edge – but let’s try this.

Add a new muffler? How about new belts, a lift kit and professional grade shocks and struts? This would be your kernel. Each time you upgrade your kernel, your car gets enhancements that make it drive better. Sometimes that upgrade means you’re able to support something it couldn’t previously; like adding a hitch to tow a trailer, for example. That would be like the kernel allowing you to run a hardware device previously out of reach due to incompatibility.

Some will say the Linux kernel is closer to an engine, and normally, I’d be inclined to agree. However, since we’re talking about visible components here…my analogy is better suited for a greater understanding. The single biggest key to this isn’t how close I am in my analogy. It’s whether you understand the difference between GNOME, MATE, and Unity.

By the way, if someone was to install the GNOME desktop on Ubuntu MATE installation, this is on par with dropping the engine from one car manufacturer into another. Sure, I guess it’s sometimes possible…but what would be the point? 🙂

Do you have Linux questions you’d like Matt to help with? Hit the link here and perhaps you too, can Just Ask Matt!

There Is No Chrome For Pi

Today, Victor asks…

Hi Matt,

I have the Raspbian OS installed on a Raspberry Pi 2 and have been trying to install the Google Chrome browser on it. I’ve been trying apt-get and dpkg, but I get an error message. I understand that the OS is an embedded OS, but what command should I use?

Sadly Victor, Chrome isn’t available for the Raspberry Pi. There are, however, various methods for installing Chromium onto your Pi if you’re needing something it provides. I must warn you though: you’re not going to be watching much browser-powered video on your Pi 2. If you do, you will be pretty disappointed.

Instead, might I encourage you to do the following: install an alternative browser like QupZilla or Midori. For YouTube videos, you might try something like SMTube or Minitube. Personally, I’d suggest Minitube as it provides the best overall experience. Still, watching videos on a Pi 2 is going to depend on what else you have running at the time.

If I was a betting man, I’d say you were hoping to watch Netflix via Chrome on your Pi. This simply isn’t possible….or is it? Again, I’m presuming a bit here. According to this tutorial, you can indeed go through a lot of steps to watch Netflix on a Pi. It won’t be very pleasant, but apparently it’s doable via ChromeOS, a lot of patience and the realization that you would do better to buy a Roku or Amazon Fire TV. In any event, hopefully this steers you in a better direction.

Do you have Linux questions you’d like Matt to help with? Hit the link here and perhaps you too, can Just Ask Matt!

Affiliate links for products mentioned above

Raspberry Pi – http://amzn.to/1r4rnKo

Roku – http://amzn.to/1TYA5V6

Fire TV – http://amzn.to/1U0JFbj

How To Set Up a Recipe Server

Today Gary asked…

Would you mind doing a walk-through on how to setup a file server, perhaps alongside my existing media server (movies and music) from a fairly new Linux user perspective.

Will different types of servers coexist without problems? I would like to setup a file server for recipes so that my wife could access them on a tablet that is mounted in the kitchen. How would you setup a menu that would allow her to quickly access a recipe when there are perhaps hundreds on the server? Would I also need a database of some kind to do this efficiently or is there a simpler way of getting this done? Will I need an app on the tablet (Android OS) to access the server?



Hi Gary,

The attached video will answer your questions while this article will help you get your server set up.

Before getting started, some things to remember: One, I’m assuming you’ll have local access to the server in question, so you won’t be needing SSH, etc. Two, Plex, ownCloud and WordPress all work great on a tablet. Just enter the server’s IP address or, better yet, set up static IP addresses on your router. Bookmark them and you’re golden.

Step #1 – What are you going to run your media/recipe server on – an old PC tower? As I talked about in the video, you could potentially run the entire thing on something as light on resources as a Raspberry Pi. A better alternative might be an old desktop or laptop you’re not using. In the end, I’m a big fan of using what’s available vs needlessly spending money on yet another computer. If you have money to burn, you could even buy a NAS…but it sounds like you’re willing to use what’s already available. For the video, I used Ubuntu Server 14.04 on a Raspberry Pi 2.

Step #2 – Pick a media server. Based on what you’re describing, Plex might be the best bet. Now after following the linked guide for its installation, here are some additional things that will make setting up Plex much easier on a PC such as a laptop or a tower. I realize you have an existing media server. So if you’re happy with it, great. But Plex is a great option if you’re not already using it.

Note: Freedom Penguin’s own editor, Eric Beyer, has informed me that his “Plex-on-Pi” experiences have been less than stellar: annoying to install, performance issues, etc. So while a media server can run on a Pi, it’s probably not the best solution overall; you might want to consider an actual PC if you plan on enjoying your media successfully.

To get Plex running successfully, enter the following commands so as to avoid common user/group/permission headaches. This assumes Ubuntu 14.04 LTS is being used as your distro of choice; on a distro using systemd, the service commands below become systemctl calls, but the rest applies unchanged. For a media/recipe server, I highly recommend sticking with Ubuntu 14.04 LTS.

Create a Plex group:

sudo groupadd plex

Add yourself to the new group (note that adduser, not addgroup, is the command for adding an existing user to a group):

sudo adduser YOUR-USER plex

 Change group:

sudo chgrp -R nogroup /var/lib/plexmediaserver/

Change ownership:

sudo chown -R plex /var/lib/plexmediaserver/

Use restart as it may already be running:

sudo service plexmediaserver restart

Finally, ensure that your firewall is allowing Plex to do what it needs to do (Plex listens on TCP port 32400 by default).
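One gotcha with the group commands above: new group membership only takes effect when you start a new login session, so log out and back in (or reboot) before testing Plex. You can verify which groups a user currently belongs to with `id`; shown here for the current user, but you can substitute any username:

```shell
# List the group names the current user belongs to.
# After re-logging in, "plex" should appear in this list.
id -Gn "$(whoami)"
```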

Step #3 – Once you have Plex, or whatever media server you’ve decided to use, you’re ready to install ownCloud. This will allow you to import and manage your EXISTING recipes. I believe ownCloud is a great choice because it’s fantastic for tablet usage, plus it doesn’t require an app…although one is available if need be. Everything from here forward assumes that Ubuntu 14.04 LTS is being used.

sudo apt-get update && sudo apt-get upgrade -y

We’re going to use MariaDB over MySQL as I’ve found it performs better:

sudo apt-get install mariadb-server
sudo apt-get install apache2 php5 php5-mysql

The above apt install should cover the following automatically…but just in case:

sudo apt-get install php5-gd php5-json php5-curl php5-intl php5-mcrypt php5-imagick

Some or all of the items above may already be installed; perhaps you’re running a LAMP stack already, negating the efforts above. If you have MySQL installed already, that’s fine: database creation and the like are similar with both MariaDB and MySQL. In short, don’t worry about it, the next steps work with both options. When you installed either MariaDB or MySQL, you were prompted to create a root password. Remember it, you’ll need it.

Let’s create a new database and user for your upcoming ownCloud installation. You can choose your own database name and user, but for the sake of simplicity, I’m using ownclouduser and ownclouddb so you can spot the differences. Also, some of the items below are case sensitive, so to be safe, assume everything below is.

sudo mysql -u root -p
CREATE DATABASE ownclouddb;
CREATE USER 'ownclouduser'@'localhost' IDENTIFIED BY 'yourpassword';
GRANT ALL ON ownclouddb.* TO 'ownclouduser'@'localhost';
FLUSH PRIVILEGES;
EXIT;

Now it’s time to install ownCloud. Gary, usually I’d suggest getting the latest version from ownCloud directly. However for the sake of keeping things simple, we’ll simply use a reliable repo. After digging a bit, the best approach turned out to be as follows.

cd /tmp
wget http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_14.04/Release.key
sudo apt-key add - < Release.key

Now we’re ready to add the repository to your sources.

sudo sh -c "echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/xUbuntu_14.04/ /' >> /etc/apt/sources.list.d/owncloud.list"
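Why the `sudo sh -c "…"` wrapper? The `>>` redirection is performed by your own shell before sudo ever runs, so a plain `sudo echo '…' >> /etc/apt/sources.list.d/owncloud.list` fails with “Permission denied”. Wrapping the whole command in `sh -c` makes the redirect happen inside the root shell. Here is a harmless demonstration of the pattern against a scratch file in /tmp (no root required, and the repo URL is just a placeholder):

```shell
# The entire quoted string, including the >> redirect, runs inside
# the child shell started by sh -c.
sh -c "echo 'deb http://example.org/repo /' >> /tmp/demo-sources.list"
cat /tmp/demo-sources.list
```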

And finally, let’s install ownCloud itself.

sudo apt-get update
sudo apt-get install owncloud

To be on the safe side, restart the apache server.

sudo a2enmod rewrite
sudo service apache2 restart

This is a good time to make sure your ownCloud instance is configured.

cat /etc/apache2/conf-enabled/owncloud.conf

Running this command should present you with something like this.

Alias /owncloud "/var/www/owncloud/"

<Directory "/var/www/owncloud">

Options +FollowSymLinks

AllowOverride All

</Directory>

I’ve found that since I’m running WordPress as well, that it’s best to configure it as follows using nano/vim/whatever. Unless you’re familiar with it, I recommend nano for most people.

sudo nano /etc/apache2/conf-enabled/owncloud.conf

Change the contents to look like this.

Alias /owncloud "/var/www/owncloud/"

<Directory "/var/www/owncloud">

Options +FollowSymLinks

AllowOverride All

SetEnv HOME /var/www/owncloud

SetEnv HTTP_HOME /var/www/owncloud

</Directory>

Full disclosure: I’m not an Apache expert, nor do I claim to be. I simply know how to get things done with the outcome I’m looking for. So feel free to use the above environment variables or not. For myself, I’ve found that the configuration with the environment variables meets my expectations perfectly.

Whatever changes you make to owncloud.conf, be sure to refresh everything after every change.

sudo a2enmod rewrite
sudo service apache2 restart

Step #4 – Now that ownCloud has been installed, you can access it either locally on that machine via your Web browser:

http://localhost/owncloud

or by its LAN IP address:

http://YOUR-LAN-IP/owncloud
First, we need to create an administrator username and password. Anything is fine, but make sure you remember it. Next, look below the username and password boxes. You’re going to click on Storage & Database. You’ll notice that your data folder has already been selected for you. However, if it’s not, make sure it’s pointing to /var/www/owncloud/data or you’re going to have some issues.

Next, select the correct database option. In your case, this should be MySQL/MariaDB. Remember the database credentials you used earlier when building your new ownCloud database? You’re going to need them for the boxes below.

Username: ownclouduser

Password: Whatever you chose for your database password earlier, NOT your administrator password

Database name: ownclouddb

Host: localhost

Congrats, you’ve just installed ownCloud to be used for recipe hosting, locally speaking.

Step #5 – Odds are better than fair that you’re going to want something a bit more “easy on the eyes” for your yet-to-be-discovered recipes. As we discussed in the video, WordPress is a great option in this regard. So if you’re also wanting to install WordPress alongside ownCloud, here are the steps to take.

Now we already have MariaDB installed, along with all the other required prerequisites needed to run WordPress. Our next step is to begin by creating our database for WordPress.

mysql -u root -p
CREATE DATABASE wordpressrocks;
CREATE USER 'yourwordpressuser'@'localhost' IDENTIFIED BY 'create a password';
GRANT ALL PRIVILEGES ON wordpressrocks.* TO 'yourwordpressuser'@'localhost';
FLUSH PRIVILEGES;
EXIT;

With our WordPress database set up, the next step is to get a copy of WordPress. There are two ways of doing this. If you’re running the PC with a GUI, you could use your browser to download the package needed. Otherwise, use wget instead and extract the download afterward.

cd ~
wget http://wordpress.org/latest.tar.gz
tar xzvf latest.tar.gz
cd ~/wordpress
cp wp-config-sample.php wp-config.php
nano wp-config.php
// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'wordpressrocks');
/** MySQL database username */
define('DB_USER', 'yourwordpressuser');
/** MySQL database password */
define('DB_PASSWORD', 'created password from above');

After modifying the file, get ready to copy the files over to your web root. Now you could use cp to copy the files over. But I personally like using rsync for stuff like this.

sudo rsync -avP ~/wordpress /var/www/html/
cd /var/www/html

Check your work to make sure everything was copied over.

ls /var/www/html/wordpress/

This should present you with all the expected files and directories for your WordPress installation.

Because this wasn’t one of those smooth apt installations where the magic happens and all the ownership/permissions are setup for you, there are a few things left to configure before we begin the actual WordPress setup.

Start off by looking at the user that appears in your terminal. In my case, it’s matt – yours may not be a name at all. Whatever it is, this is considered the user that we’ll use to work with WordPress. Not in the login sense, rather from an ownership perspective. I’ll assume yours is called “user” for the basis of the remainder of this tutorial. Confused? Don’t sweat it, just follow along.

sudo chown -R user:www-data /var/www/html/wordpress

Now let’s make our upload directory.

mkdir /var/www/html/wordpress/wp-content/uploads

And then give it needed ownership information.

sudo chown -R :www-data /var/www/html/wordpress/wp-content/uploads

Don’t forget to set the permissions for your wp-content directory.

sudo chmod -R 755 /var/www/html/wordpress/wp-content
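A quick note on the `chown -R :www-data` form used above: the leading colon means “change only the group, and leave the file’s owner alone”. Here is a small demonstration of that syntax using the current user’s own primary group, since changing files to an arbitrary group like www-data requires root (the /tmp filename is just a placeholder):

```shell
# Create a scratch file, then change ONLY its group.
touch /tmp/demo-upload
chown ":$(id -gn)" /tmp/demo-upload

# Owner is untouched; only the group field was (re)set.
stat -c '%U:%G' /tmp/demo-upload
```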

At this point, you should be able to browse over to the IP of your machine hosting WordPress and complete the installation. But this is an install built to be LAN-specific, not a public WWW website. FTP isn’t really a great fit, in my opinion, for situations like this. Sure, you could install libssh2-php and move files back and forth via SSH. But if you simply want to upload files from your home directory to /var/www/html/wordpress/wp-content, then clearly a direct file move is preferred.

Using nano, simply open up the config file…

nano /var/www/html/wordpress/wp-config.php

and add the following at the very bottom of the file, which tells WordPress to write files directly rather than prompting for FTP credentials:

define('FS_METHOD', 'direct');
While you could have done this earlier, I intentionally left it out in case you wanted to use FTP or SSH to upload files.

Now, before we go any further, this is a good time to complete our WordPress installation.

From the local Web browser:

http://localhost/wordpress

or if on another machine:

http://YOUR-LAN-IP/wordpress
You’ll be asked to provide a site title (doesn’t matter what), a username and password, and then an email address. Now, since this isn’t a public-facing site, there is no advantage to checking the box allowing search engines to index the site.

Once all of this is done, simply click on Install WordPress, then login when prompted.

Step #6 – Setting up Apache Rewrite is going to make your life a lot easier. Doing stuff like setting up custom permalinks and other related activities depend on it. We already did this with our ownCloud install, now let’s give our WordPress install the same courtesy.

sudo nano /etc/apache2/sites-available/wordpress.conf

Then give it the following contents:

Alias /wordpress /var/www/html/wordpress

<Directory /var/www/html/wordpress/>

Options +FollowSymlinks

AllowOverride All

<IfModule mod_dav.c>

Dav off

</IfModule>

SetEnv HOME /var/www/html/wordpress

SetEnv HTTP_HOME /var/www/html/wordpress

</Directory>
As above (see ownCloud examples), the environment variables are a matter of personal choice. I use them because I’ve found they seem to play nicely together.

Now let’s pop this into enabled status.

sudo a2ensite wordpress
sudo a2enmod rewrite
sudo service apache2 restart

Now let’s create the needed .htaccess file that Apache expects.

sudo touch /var/www/html/wordpress/.htaccess
sudo chown :www-data /var/www/html/wordpress/.htaccess
sudo chmod 664 /var/www/html/wordpress/.htaccess

By chmod’ing this to 664, you can change the permalink structure within WordPress without needing to edit the .htaccess file manually.
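To unpack what 664 means: read and write for the owner, read and write for the group (www-data, i.e. Apache, which is why WordPress can rewrite the file), and read-only for everyone else. A quick self-contained check on a scratch file:

```shell
# Create a scratch file and apply the same mode as the .htaccess above.
touch /tmp/demo.htaccess
chmod 664 /tmp/demo.htaccess

# Print the octal permissions to confirm.
stat -c '%a' /tmp/demo.htaccess   # prints 664
```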

But Matt, what about Nginx or _____?

As I wrap this up, I wanted to touch on a few things. First, there are a hundred different ways to do this. Yes, I could have suggested that Gary use Nginx instead. At a later date, I may even write up an article on setting up an Nginx WordPress installation. The process is very similar to Apache’s, with some minor differences under the hood in the config.

Personally, for casual projects like this one, it really doesn’t matter. The advantages between Nginx and Apache for a personal project are non-existent, merely a matter of preferred workflow (in my opinion). The only time you should care is when you’re working on a larger and/or public-facing project.

Gary, while this was incredibly long, I hope it serves you well and you’re able to get your media/recipe server up and running. I realize this seems like a ton of information, but I’m sure if you follow this guide closely you can have that recipe server set up in no time.

Do you have Linux questions you’d like Matt to help with? Hit the link here and perhaps you too, can Just Ask Matt!

Old iMac Ubuntu Studio Installation

On September 3rd, 2015 Joseph asked…

Hi Matt,

I have the opportunity to install Linux at work on a 5 year old iMac rarely used due to its age. My gut tells me I should go with Ubuntu Studio as it has access to just about every type of creative and office software imaginable upon installation. I see this as an opportunity to grab a few entry level users with varied needs. What would you install on an older system collecting dust that gets used maybe once a week?


Hi Joseph,

I definitely think you’re on the right path for getting more life out of the old Mac. I recently did some testing with a 2010 Macbook Pro (6,2 version) and Ubuntu MATE. It’s more involved than simply installing Linux onto a PC, but it’s totally possible. The process should be a bit easier with the iMac than it was with the Macbook Pro, however.

That article is coming out soon and addresses how to install Ubuntu MATE onto a Macbook Pro under UEFI mode. Needless to say, I was able to get the proprietary graphic drivers working despite some known issues (black screen). The existing fixes found on Google are not compatible with Ubuntu 15.10 (nomodeset and other related hacks) and will likely create new issues, hence, why my upcoming article addresses this directly. Okay, back to your question.

Preparing for the installation

First and foremost, you should prep the hard drive for the Ubuntu Studio installation while booted into OS X. This OS X article provides a relevant guide for preparing a second partition using Disk Utility. The key here is to shrink down the existing OS X partition. I’d shrink it down substantially, since we’re only keeping this partition for access to recovery tools, should you need them in the future. The newly freed space on the drive should be left unformatted, since we’re going to be using the Ubuntu installer to handle the file system creation. As for the USB flash drive, make sure it’s formatted as Mac OS Extended (Journaled) and that its partition table is set to GUID.

The next step is to install a new boot manager. Yes, you could “try” the old “hold down the Option key” approach and hope it works. Historically, I’ve found this approach to be very hit and miss. By installing the boot manager known as rEFInd (download the zipped package), a USB flash drive with Ubuntu will show up as bootable. Simply follow these instructions for installation and you should be good to go. We’re avoiding the Option key approach as I found it was not working reliably.

At this stage, you are ready to take your downloaded Ubuntu ISO and install it to a USB flash drive. Personally, I’ve always done this from the Mac itself. Here’s how on OS X, using an Apple keyboard.

Cmd+Space, then type terminal and hit Return. (On a PC keyboard, Cmd is the Windows key and Return is Enter.)

hdiutil convert /path/to/ubuntu.iso -format UDRW -o /path/to/target.img

Don’t worry about adding dmg to the file, this happens automatically.

Now you’ll want to list the existing drives available. For newer users, I recommend running the command twice – once without the flash drive plugged in and a second time with it. If you feel comfortable in doing so, once is fine, as you’ll be able to recognize the drive.

diskutil list

In the list, you should see something like /dev/disk#
(The # might be a 2 or a 3)

When we plugged in the flash drive, OS X automatically mounted it. We need to unmount it.

diskutil unmountDisk /dev/disk#

(Remember to replace # with the correct number corresponding to the flash drive)

If it doesn’t eject correctly and you see something like “Unmount failed”, I’d try dragging it to the trash to unmount it.

Okay, the next step is to dd the new dmg over to the flash drive.

sudo dd if=/path/to/target.img.dmg of=/dev/disk# bs=1m

(Yes, it appended .dmg to the .img file. In the interest of just getting things moving, I left it.)

At this stage, your flash drive has an OS X bootable copy of Ubuntu on it. All that’s left is to unmount the flash drive.

diskutil eject /dev/disk#

(Or drag it to the trash – when you completed the dd, the drive re-mounts itself and needs to be unmounted.)


Reboot the Mac. Since we shrank down the OS X partition, freeing up a ton of space for Ubuntu, the installation process is painfully simple.

After rebooting, you should be staring at rEFInd. Simply arrow key over to the icon representing the flash drive. If there are multiple entries, it’s usually the first one.

Once booted, you’ll be asked to select Try or Install. Choose Try, then run GParted. Historically, I’ve found that the installer sometimes hangs. To avoid this, I usually set up an ext4 partition in GParted, close the program and THEN run the installer.

When prompted, choose “Install alongside OS X” and the rest of the process is exactly like a normal Ubuntu installation. Once completed, reboot.

Pro tip: Nine times out of ten, you’ll find that you’re presented with a GRUB menu instead of rEFInd. When this happens, I hold down the power button to shut down, then restart the Mac. This time, I hold down the Option key (Alt on a PC keyboard) and select the OS X partition. Once back in OS X, I run the terminal script for rEFInd again (see above). This gives me my rEFInd menu at boot again, with GRUB available thereafter.

That said, this may not be critical. If you don’t plan on using the OS X option frequently, you may be able to boot into Ubuntu without the above step – I’ve never tried, so I don’t know for sure. If it fails, the above steps will get you going.

Getting stuff working in Ubuntu

With my tested Macs, I found everything worked great. The only exception in my case was the brightness keys on the Macbook Pro. With your Mac being a 2010 iMac, you’re likely looking at an AMD graphics card, AirPort Extreme wireless and gigabit Ethernet. Out of the box, everything “should” work, with everything defaulting to the non-proprietary drivers, from the wifi to the GPU.

The most common issue with iMacs of that vintage is audio not working. I doubt this is an issue with today’s latest kernels, but just in case…keep reading.


ONLY DO THIS IF: after jumping into the Sound Preferences dialog, you’ve confirmed that (A) everything is turned up or (B) options in Output or Hardware are grayed out. I highly doubt this is needed any longer…but just in case it is, I’ve provided some things to try. Also double-check alsamixer (run it in a terminal) to verify nothing is muted. Any time you use alsamixer, the things to watch for are Master, PCM, Front and Surround; make sure the right card is selected and everything is unmuted. Use the function keys listed in the upper portion of the screen to navigate.

From your Ubuntu terminal:

cat /etc/modprobe.d/options

or

cat /etc/modprobe.d/options.conf

If the file doesn’t exist:

sudo touch /etc/modprobe.d/options.conf

Next, paste or type the following into the file:

options snd-hda-intel model=imac24

Then press Ctrl+X, confirm the save with Y, and reboot.

If that failed to work after verifying that Sound Preferences shows you all the sound toggles are up all the way, then try this next.

sudo rm /etc/modprobe.d/options.conf

Try pasting the same line:

options snd-hda-intel model=imac24

at the bottom of the file opened by:

sudo nano /etc/modprobe.d/alsa-base.conf

Then reload ALSA:

sudo alsa force-reload

And like before, run alsamixer to see if things are working and make sure stuff is unmuted.

To reiterate, I do not believe this will be an issue for most people using the modern kernel(s). This is just in case, as a grab bag of stuff to try.

In my case, the iSight camera worked out of the box with Cheese and other apps. If however, it doesn’t work for you, I recommend taking this for a spin.


Parting thoughts

In my case, everything except some of the function keys worked great. I seriously doubt you’ll have any need to fool with the audio tweaks above. The keys for brightness controls and whatnot are usually bound to the proprietary video drivers.
In an upcoming article, I’ll share exactly how I got the proprietary drivers working while avoiding a black screen after installing them!

Until next time…keep an eye out for the MacBook Pro Ubuntu MATE article. I’ll show you how to avoid the black screen of death using NVIDIA proprietary drivers on an Intel/NVIDIA MBP.

Do you have Linux questions you’d like Matt to help with? Hit the link here and perhaps you too, can Just Ask Matt!

FTC required disclosure of Material Connection: The eBay product links in the post above are “affiliate links.” This means if you click on the link and purchase the item, Freedom Penguin will receive an affiliate commission.

How to Set Up SSH Keys on a Linux System


On September 9th, 2015 Chris L. asked…

Hello Matt and the Freedom Penguin staff! I have a question about generating RSA public and private keys under Linux. Is there a Linux/Open Source equivalent to PuTTYgen? A PuTTY GUI is available in Ubuntu GNOME 15.04 (what I am using), but is there a PuTTYgen GUI that I can install over the CLI? If not, can you give a short tutorial on how to do this in the command-line? Congratulations on the site, added to my favorites! I miss you on LAS, but I’m glad you are back with this awesome idea.

Chris (all the way from Japan)

Hi Chris!

Allow me to let you in on a little secret – I have never used PuTTY or PuTTYGen. All of my key generation has always been done in the Linux command line. Lucky for you my familiarity with the latter is going to help you overcome the former.

First, allow me to acknowledge that most documentation is convoluted. Despite interesting details being presented, often it comes across as a wall of text to Linux newcomers or those simply new to certain aspects of Linux.

On the client side, Ubuntu comes with the SSH client already installed. You can’t see it, because it’s not a GUI application. For the server (the remote computer you wish to SSH into), you’ll need to install the SSH server software.

The steps we’re going to be taking break down as follows:

1) Generate a key on your local machine.
2) Install OpenSSH Server on remote machine.
3) Send the key to your remote machine.
4) Lock down the remote machine by removing password authentication.

Step #1 – From your local machine, you need to create RSA keys. This provides a private key for your local machine and a public one for your remote machine.

On your local machine, in a terminal:

mkdir ~/.ssh

If it already exists, that’s great; let’s make sure the permissions are correct.

chmod 700 ~/.ssh
cd ~/.ssh

Now let’s create our keys.

ssh-keygen -t rsa

This will kick out the following cryptic tidbit:

Generating public/private rsa key pair.

Let’s give the keys a name like this.

Enter file in which to save the key (/home/USER/.ssh/id_rsa): type-something-clever-here

Next, you’re going to want to provide a pass phrase to protect the private key that resides on your local machine. This isn’t to be confused with the SSH server password or anything related. The entire purpose of this pass phrase is to protect the private key on your local machine in case of theft.

Now if you get an error like the one below, try a longer pass phrase. To do this, type ssh-keygen -t rsa and redo the process.

Error example mentioned above:
passphrase too short: have 4 bytes, need > 4
Saving the key failed:

If everything went correctly, your ~/.ssh directory should contain two files: somethingclever.pub and somethingclever. To see the ~/.ssh directory, browse to /home/USER/ in your file manager and press Ctrl+H to make hidden files and directories visible.
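For the record, the whole of Step #1 can also be done non-interactively in a single command. A sketch, with the file name and passphrase as placeholders matching the “somethingclever” example above:

```shell
# -t rsa   key type
# -b 4096  key length in bits
# -f       where to save the private key (the .pub lands alongside it)
# -N       the passphrase protecting the private key on disk
ssh-keygen -t rsa -b 4096 -f ~/.ssh/somethingclever -N 'a-long-passphrase-here'
```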

Step #2 – If the remote machine is a desktop PC, you’ll likely need to sit in front of it and install OpenSSH server yourself. If this is an Ubuntu server provided by a web hosting company, however, you’re most likely already set to go. Here’s a tidbit no one ever talks about: if the remote machine is a desktop PC, the SSH password is your user’s password. The same applies to a server, except that on a server you may be looking at a root user. Do NOT use the root user for SSH. It’s asking for trouble and completely unnecessary. It’s best to follow this guide (hat tip to DigitalOcean) and set up a regular user with sudo privileges instead.

Regardless of which type of remote machine it happens to be, let’s get OpenSSH Server installed next.

sudo apt-get install openssh-server

This will install the server component and start the service for you. If for any reason you don’t see ssh start/running or the process appearing, you can manually start the server. (If you’re root, you can forgo the sudo in each command.)

Ubuntu 15.04+

sudo systemctl restart ssh

Ubuntu 14.04

sudo service ssh restart
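Either way, you can confirm the server is actually up and listening before moving on. The exact output will vary by system; this is just a quick sanity check:

```shell
# Ask systemd for the service state (15.04+); look for "active (running)"
systemctl status ssh --no-pager

# Or check the listening socket directly; sshd uses port 22 by default
ss -tln | grep ':22'
```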

This will get the OpenSSH server running on your system. Now that we have the server running, we need to send the public key over to the remote machine from the local machine.

Step #3 – Now we need to send your public key to the remote machine. To do this, run the following command from the client machine.

ssh-copy-id username@host

The host is going to be the local IP address for the remote machine. During this process, you will be prompted for a password – it will be the password for the remote machine.
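One gotcha worth flagging: since we gave the key a custom name back in Step #1, point ssh-copy-id at that specific public key with -i, and use the matching -i flag with ssh when you connect later. The user name and IP address below are placeholders:

```shell
# Copy the named public key into the remote machine's authorized_keys
ssh-copy-id -i ~/.ssh/somethingclever.pub username@192.168.1.50

# When connecting, tell ssh which private key matches
ssh -i ~/.ssh/somethingclever username@192.168.1.50
```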

Step #4 – At this point, SSH works to access the remote machine from the local one. The next step is to disable password authentication as it’s very insecure. With the public key installed on the remote machine, it’s time to allow it to handle the SSH authentication.

First, SSH into the remote machine:

ssh username@host

After entering your password again, go ahead and use the nano editor to edit the SSH config on the remote machine. (Remember, if you ARE root, you can drop the sudo.)

sudo nano /etc/ssh/sshd_config

Scroll down and look for #PasswordAuthentication yes
Next, change the entry accordingly:

#PasswordAuthentication yes

into this

PasswordAuthentication no

At this point, you’re ready to save the file. Press Ctrl+X. When prompted to “Save modified buffer”, press Y. When it presents you with “File name to write”, just hit Enter. This modifies your SSH configuration and ensures you will only be able to log in using your SSH key.
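One detail worth adding: sshd only reads its config at startup, so the change won’t take effect until the service is restarted (your current session stays connected). A sketch of the restart, matching the commands from Step #2:

```shell
# Optional but wise: check the config file for syntax errors first
sudo sshd -t

# Then restart the service - Ubuntu 15.04+ (systemd):
sudo systemctl restart ssh
# ...or Ubuntu 14.04:
sudo service ssh restart
```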

Final words of advice

I imagine this seems like a ton of information. After all, this is all keyboard and no GUI. But once you complete it you will be shocked at how simple it really is.

The only issue you might run into could be ufw blocking port 22 (both locally and potentially on the remote machine). If you can’t connect, it’s usually one of three things: you uploaded the public key to the wrong user, you’ve been trying to SSH to the wrong host IP, or port 22 is blocked someplace. Another issue to consider is trying to SSH into a remote host with an encrypted home directory, or perhaps the remote machine’s ~/.ssh permissions are screwy. The latter means accessing the machine through other means and adjusting the permissions for the affected directories:

chmod go-w ~/
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
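And if ufw does turn out to be the blocker, checking and opening port 22 is quick. Run these on whichever machine is doing the blocking:

```shell
# See whether the firewall is active and what it currently allows
sudo ufw status verbose

# Open the standard SSH port
sudo ufw allow 22/tcp
```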

I hope this is helpful and best of luck in your Linux SSH adventures!
