TeamViewer Alternatives for Linux


If you follow tech news at all, you’ve heard about “the happening” over at TeamViewer and of the “stuff” the victims of this exploit inadvertently purchased for the bad guys. Now, some of you might be thinking that this is old news. After all, this was like a month ago. What’s in the past stays in the past – wrong.

I disagree and believe this instead highlights the need to further examine all of the X-compatible remote desktop options for Linux out there for accessing one’s desktop or for providing remote support. And that’s exactly what I’m going to do. Below, I’ll list all of the options I’ve had experience with and we’ll go through them to determine their merits on the Linux desktop.

TeamViewer….er wait, nope. Scratch that. Let’s play it safe and leave them off this list. I think it’s fair to say this is no longer an option I feel comfortable with.

Splashtop…hang on….no, they turned their backs on Linux users some time ago. But hey, if you’re interested in running their app on Ubuntu 12.04, it’s a fantastic option. Yes, I am absolutely messing with you. Don’t do this. Instead, use software that is actually compatible with your system.

X2Go – If you’re comfortable exposing your SSH port (whether this is 22 or something else) to the wilds of the Internet, then this is one option for remote desktop over the Internet. You’ll want to make sure you’re also using fail2ban as you will have strangers knocking at your ports. Not ideal, but it works.
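Getting fail2ban in place is a quick job. Here’s a sketch for Debian/Ubuntu-style systems; package and service names may differ on your distro, so check before copying blindly.

# Debian/Ubuntu-style sketch; adjust package names for your distro.
sudo apt-get install fail2ban
sudo systemctl enable --now fail2ban   # the stock sshd jail is enabled by default on most Debian-family setups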

If you plan on using this to offer support to your family, well, you also had better make sure you control their router and port forwarding as well. Did I mention Dynamic DNS for easier connections when the IP address changes? Yeah, you’ll want that too. But once it’s set up, you know for a fact that security is all on you. This is a good thing! Best of all, no central server to hack! To install X2Go, simply locate and install it using your distro’s repositories.
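On Debian or Ubuntu, for example, it breaks down like this (package names may vary elsewhere):

sudo apt-get install x2goserver x2goserver-xsession   # on the machine you want to reach
sudo apt-get install x2goclient                       # on the machine you're connecting from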

Mikogo – Surprisingly, not bad at all. Despite its Java underpinnings, the only extra dependency I needed to install was on my 64-bit machines. Running the following on 64-bit Ubuntu will get you ready to run the executable once it’s downloaded.

sudo apt-get install libxtst6:i386

The first thing that became apparent is that Mikogo is designed for meetings first, everything else second. Still, it does offer TeamViewer-like functionality, but without the concerns of using a previously exploited application. The obvious downside to using Mikogo is that it appears to be a closed-source product…just like TeamViewer. Then again, it’s lesser known and may not be as big of a moving target.

Which option would you trust? Speaking for myself, I think I’ll be using X2Go for my own needs with limited use of Mikogo for remote support. What say you? Maybe you’re screaming “NoMachine!” as you read this? Hit the comments and sound off.

Tar vs Rsync


Files getting lost or corrupted? A most heinous challenge, dude! So when strange things are afoot at the Linux-workstation, we totally hit our backups. We need to get started…but what commands should we use? We can use:

 

1. tar to make daily backups of all files,
2. or combine find and tar to back up changed files…
3. or use the magical rsync.

Some need more scripting to back up and some need more scripting to restore. Let’s hop in the phone booth and zoom through our options…

Tar: Whole Directories

Excellent: The clear advantage to tarring whole directories is that you are accounting for missing files when you restore from any particular point in time. You can also do a wholesale recovery very quickly, just by expanding the tarball for that directory. Below is an example of how to apply parallel compression to speed up your archive compression:

tar cf $backup.tbz2 --use-compress-program lbzip2 .

Bogus: Many of your data files are pretty large: megabytes, even gigabytes. Backing one of those up every day is likely going to be costly, especially when it rarely changes.

Tar: Changed Files

Excellent: We immediately start saving space. We can even detect when directories have had no changes since the last backup and avoid making a misleading empty archive. You can detect when files disappear by creating a list of files with find:

find -mindepth 0 -maxdepth 1 -type f | sort > .files
find -type f -newer .recent > backup-list    # .recent marks the previous backup; touch it once before the first run
touch .recent
tar cf $backup.tbz2 --use-compress-program lbzip2 -T backup-list

The .files file will be picked up in the backup-list file. And while we’re here, let’s make shortcut functions for our tar commands, so we can save our righteous keystrokes:

function Tarup() {
   tar cf "$1" --use-compress-program lbzip2 "${@:2}"
}
function Untar() {
   tar xf "$1" --use-compress-program lbzip2 "${@:2}"
}

Bogus: Restoring from this type of backup strategy is more difficult. You have to first start with a full backup (a grandfather backup, yearly full, monthly full, or such). Then you have to apply each archive file in order, and at the end of that series, use your .files list to remove any files that were not present during the last backup.

cd /home
yesterday=2016-11-02
Untar /mnt/backups/yearly-home.tbz2
for f in /mnt/backups/home-*.tbz2 ; do
   Untar $f
   [ x$f == x/mnt/backups/home-$yesterday.tbz2 ] && break
done
find -mindepth 0 -maxdepth 1 -type f | sort > .newfiles
diff .files .newfiles | grep '^>'
read -p "These files will be deleted, proceed? " ANS
if [ x$ANS == xy ] ; then
   diff .files .newfiles \
   | grep '^>' \
   | tr -d '>' \
   | while read F; do rm -f $F ; done
fi

You will have to verify this process. File names with spaces and subdirectories might not work with this example as I have coded it. This is why you totally verify your backup and restore process!

Rsync and daily backups

Excellent: There are a lot of advantages to rsync:

  • If files are modified but their mtime doesn’t change, they can still get backed up (rsync also compares size, and the --checksum switch catches everything else).
  • For simple backups you typically need little to no scripting.
  • It is most excellent with ssh!
  • A clever exclude syntax that comes with the command (sketched below).
  • With the --delete switch, files you’ve deleted on your disk get removed from the backup as well.

So, rsync is great if you want to mirror directories.
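Here’s a minimal sketch of that kind of mirror, with the exclude syntax thrown in. The paths and excludes are only examples; point them at your own source and destination.

rsync -a --delete \
   --exclude='.cache/' --exclude='.gvfs' \
   /home/user/ /mnt/backups/home-mirror/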

Totally cool: rsync can also do hard links across directories! You can save radical space by providing a previous backup directory on the backup file system to keep only one copy of each unchanged file. You use the --link-dest switch:

rsync -a --delete --link-dest=/mnt/backups/home-$yesterday.d \
/home /mnt/backups/home-$today.d

This requires a home-2016-11-06.d directory and creates the home-2016-11-07.d directory. Files deleted on the seventh are still there in the sixth’s directory. Files that are the same are just hard links between the directories home-2016-11-06.d and home-2016-11-07.d. (Refer to Jeff Layton’s article about using rsync for incremental backups.)
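If you want to script that into a daily job, something like this works. A sketch only: it assumes GNU date and that yesterday’s snapshot already exists under /mnt/backups.

#!/bin/bash
today=$(date +%F)                  # e.g. 2016-11-07
yesterday=$(date -d yesterday +%F)
rsync -a --delete \
   --link-dest=/mnt/backups/home-$yesterday.d \
   /home /mnt/backups/home-$today.d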

Bogus: Rsync might not be excellent for your needs:

  • No on-disk compression of backups (the -z switch only compresses data in transit)
  • A point-in-time set of backups uses a new directory for each day with the hard-link technique above. This requires scripting.
  • Be careful with the -b option! It means backup, but it renames each old file in the destination directory you’re backing up to. Keep that up for two years and you’ll have 730 copies:
.ssh/known-hosts.729
.ssh/known-hosts.728
.ssh/known-hosts.727
...
.ssh/known-hosts
  • Millions of small files? Whoa, dude: rsync can slow down with very large sets of small files. You might need to run a few rsync commands in parallel.
  • Limited memory? I’ve seen rsync take up hundreds of megabytes of memory when backing up hundreds of thousands of files (it has to hold the list of file attributes for comparison). You might have to script your rsync(s) so they walk the directory tree and only back up hundreds of files at a time; see the sketch below.
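Here’s a rough sketch of that last idea: one rsync per top-level directory, four at a time, so no single run has to hold a giant file list in memory. The paths are examples; adjust to your own layout.

find /home/user -mindepth 1 -maxdepth 1 -type d -print0 \
   | xargs -0 -P 4 -I{} rsync -a --delete {} /mnt/backups/home-mirror/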

Remember to backup! Stay Excellent!

Dude! Where’s My Data!


Little in life sucks more than moving to a new desktop environment or distro, only to realize your data for specific applications didn’t make it. This usually happens with stuff like desktop-specific note taking apps or clipboard managers.

In today’s quick tip, I’ll show you some common places where you can recover this data or how to simply migrate it to another installation.

Parcellite

Some folks will tell you that what goes into Parcellite never comes out unless it’s installed. This is false. First off, Parcellite and many other clipboard managers store data in ~/.local/share/

Tip: For those who don’t know, clipboard managers allow us to take copy and paste to the next level. Instead of merely copying one instance at a time, you can use a clipboard manager like Parcellite to copy multiple/separate text entries, images, even entire directories.

For this particular clipboard manager, you need only worry about ~/.local/share/parcellite/history.

Did you try reading the file? Garbled enough for ya? Don’t sweat it, just do this in your terminal.

strings ~/.local/share/parcellite/history

Nine times out of ten, you’ll have the entire clipboard history…garble free. Wait, you need a text file?

strings ~/.local/share/parcellite/history > history.txt

Now you have a history.txt in whichever directory you ran the command from. Awesome pants.

Sticky Notes

There are a number of possible places depending on the specific app; however, the notes themselves are usually going to be in ~/.local/share/ while configuration data is often kept in ~/.config.

In my case, I was (stupidly) using MATE’s sticky notes. Contrary to what you might think, the data for this applet was kept in ~/.config/mate/stickynotes-applet.xml

At any rate, you only need to open this up in your preferred text editor and the formatting will remain intact.
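If you’d rather have it pretty-printed before you go digging, xmllint (from libxml2-utils on Debian/Ubuntu) will do the job:

xmllint --format ~/.config/mate/stickynotes-applet.xml | less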

Browser data

Browser data is a wild beast as its storage location depends on the browser. Chrome, Midori and others are likely to be kept in ~/.config while Firefox is kept in ~/.mozilla. It’s just that simple.

Everything else

Obviously there are downsides to restoring an entire home directory if you’re switching desktop environments. However, if you’re sticking to the same distro and desktop, generally speaking it’s perfectly fine to back up your entire home while excluding stuff like Dropbox, .cache, etc.
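Something along these lines does the trick; the excludes are only examples, so season to taste.

cd ~
tar cf /mnt/backups/home-$(date +%F).tbz2 \
   --use-compress-program lbzip2 \
   --exclude='./.cache' --exclude='./Dropbox' .

Good luck!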

How to Verify the Integrity of a File with SHA-1

(Matt’s addendum to Eric’s video) There are other Secure Hash Algorithms available, however for the demonstration in this video, Eric will be using SHA-1.

Raspbian, for example, publishes exactly what you need to verify your download using SHA-1. While SHA-1 is deprecated compared to newer Secure Hash Algorithm options, it still gives you an idea of how to verify the media you’ve downloaded before installing it.
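Checking it from a terminal is one command. The file names below are only examples; compare the output against the checksum published on the project’s download page.

sha1sum 2016-09-23-raspbian-jessie.zip
sha1sum -c raspbian.sha1   # or, if you saved the published checksum to a file in the same directory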

 

Backing Up Only Recent Files


 

Gnarly Backup School Series

Getting backups done is important. Sometimes what you have to back up and what you want to back up make quite the contrast. Consider that I never want to back up my .mozilla/firefox//cache directory. Let’s cover how to avoid that. Because if you don’t, things get really bunk, little dude.

Before you freak out, man, prepare for some regular expressions. More regular than “huh” and “eh.” Everything separated by the gnarly pipe | is a pattern that will be matched by egrep.

IGNORE="$LOGNAME/(\\.mozilla/firefox/aq0d3bz0xy/cache|\\.gvfs|\\.cache|\\.Trash)"
cd $HOME/..
find $HOME -type f \
| egrep -v "$IGNORE" \
> /tmp/backup_list
tar cf /mnt/backups/backup.tbz2 --use-compress-program lbzip2 -T /tmp/backup_list

That will back up your whole home directory except anything you slap into IGNORE. If that includes a whole git tree or two, well, you might be waiting for a long time.
Therefore, let’s concentrate on things that just changed since the last backup.

cd $HOME/..
if [ ! -f $HOME/.recent ]; then
   find $HOME -type f \
   | egrep -v "$IGNORE" \
   > /tmp/backup_list
else
   find $HOME -type f \
   -newer $HOME/.recent \
   | egrep -v "$IGNORE" \
   > /tmp/backup_list
fi
touch $HOME/.recent
tar cf /mnt/backups/backup.tbz2 --use-compress-program lbzip2 -T /tmp/backup_list

Now we have a way to back up only recent things, using all our cores courtesy of lbzip2.

Making a Non-Spamming Alert using Diff


Being on pager duty is a downer. And it’s a big mistake that ruins the on-call or first-responder position when you allocate no time to actually invest in a useful reporting platform. Like many before me, I used to create exactly that evil: write a bash script, and throw it in cron.

Example of the Wrong Method

MAILTO=pager.list@example.com
*/5 * * * * curl -sf -o /dev/null http://www.example.com/ || echo "website down"

There’s a pretty good technique to reduce the occurrence of duplicate pages, and that’s to use diff. Let’s work through an example of checking a web page for errors.

PAGE_1="http://localhost/example.php"
PREV_DIR=/var/spool/pager_prev
RECENT_DIR=/var/spool/pager_recent
REF_DIR=/var/spool/pager_refs
REF_PAGE="$REF_DIR/example.txt"
THRESHOLD_DELTA=3    # how many changed lines it takes to trigger an alert
DO_ALERT=0
ALERT_FILE=/tmp/example.mail
MAILTO=server.alerts@example.com

That sets up a series of directories: things presently getting tested get downloaded into the recent directory. Results from the previous test get moved to the previous directory.

[ -f $RECENT_DIR/example.txt ]   && mv $RECENT_DIR/example.txt $PREV_DIR
[ ! -f $RECENT_DIR/example.err ] && rm -f $PREV_DIR/example.err
[ -f $RECENT_DIR/example.err ]   && mv $RECENT_DIR/example.err $PREV_DIR

That moves the previous page request, and any error output, out of the way.

curl -sS -o $RECENT_DIR/example.txt "$PAGE_1" &> $RECENT_DIR/example.err
[ $? -ne 0 -a ! -f $PREV_DIR/example.err ] && DO_ALERT=1

That will tell us if this was the first time we haven’t been able to finish a page request.

diff $PREV_DIR/example.txt $RECENT_DIR/example.txt > $RECENT_DIR/example.diff
if [ $? -ne 0 ] ; then
   lines=$( wc -l < $RECENT_DIR/example.diff )
   [ $lines -gt $THRESHOLD_DELTA ] && DO_ALERT=1
fi

That recorded the change in the page if there was one. The end of the page might still say “java.IOException” but we don’t need to get paged unless that error actually changes.

the_diff=`head $RECENT_DIR/example.diff`
the_error=''
[ -f $RECENT_DIR/example.err ] && [ `wc -l < $RECENT_DIR/example.err` -gt 0 ] && the_error=`cat $RECENT_DIR/example.err`
if [ $DO_ALERT -ne 0 ] ; then
   mail -s "[monitor] example.diff changed" $MAILTO <<EOF
$the_diff
$the_error
EOF
fi
And that emails us the results of a change in page structure.
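Drop the whole thing into a script and let cron run it quietly. The script path here is hypothetical; point it wherever you keep the checks above.

*/5 * * * * /usr/local/bin/check_example_page.sh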