Managing User Accounts and Passwords

Back when I was just another user…

While reviewing the video that goes along with this article, my mind drifted back to my first real paying job in radio. It was a big station with a big news department and what makes it pertinent to this discussion is the fact that they had a mini computer running Unix. This machine kept up with stories coming in off the UPI and AP news wires and the reporters used it to write local stories. There was a terminal in each on-air studio and more back in the newsroom. I can still remember the meeting I had with the sysadmin to get my very own user account on the system. Believe it or not, I can also still remember the password I chose after nearly 30 years. And no, I’m not telling.

Up to that point, my experience with computers mainly consisted of playing games on a Commodore 64 and typing term papers into my brother’s Tandy 1000. At the time, I remember thinking that it was cool to have access to a “real computer.” This was before the Internet. It had no GUI, and the printers were tractor-fed dot matrix that could only print plain text. It was basically a giant word processor. What I found fascinating about it was the way I could sit down at any terminal in the building and log in to find all my stuff just the way I left it. I could write a story, save it to a file, and send it to a printer, all while the guy in the next studio was doing something else with the same system.



I remember Mike, the sysadmin at WTAR, telling me all about file permissions and what groups I was in but to be honest, I didn’t pay much attention. I just wanted to know how to get the ball scores. I didn’t know it then, but there was a guy named Linus Torvalds in Finland using a very similar system in his computer science class who wanted to have one just like it at home. He was so fascinated with it that he went on to write the kernel that is still the center of the GNU/Linux operating system. If I had any clue what a big role Linux would play in my life today, I would have paid more attention to what Mike was saying and maybe poked around in that Unix system a bit more.

You’ve got the power – Use it!

Linux includes all the same tools those Unix systems used to manage user accounts. Unlike those big Unix shops, though, these days we are our own sysadmins if we run Linux at home. We have to manage the accounts on our own machines, especially when friends and family use our computers. Fortunately, the classic tools make it quite easy, and we now have GUI tools that make it simpler still. Even so, there are some rules we need to keep in mind to keep our systems secure:

1. Every user should have their own account.

There is an old saying that goes, “Good fences make good neighbors.” It certainly applies to multi-user computer systems. Each user can have a configuration that works best for them, they can be assured that no one else has access to their files, and they won’t have easy access to other users’ files. Separate accounts are also good for the sysadmin who wants to keep tabs on what others are doing with the computer. Each user has their own browser history, so you can check on Internet activities, especially if that user is a kid and you want to know what they are looking at. You can also lock a user’s account if you want to deny access temporarily, or delete it entirely.

2. The sysadmin is the only person who should know all the passwords.

Users should be strongly discouraged from sharing passwords and, if you’re working in an enterprise environment, those passwords should be changed often. Most people reading this will be using Linux at home, but if you have a lot of people using your system or you suspect anyone’s intentions, you may want to change passwords yourself and assign them to each user. Users can change their own passwords, but the sysadmin can always gain access without knowing the password the user has chosen. (Nice, huh? Being a Super User can really stroke your ego.) The quickest way to deny access to a user is to change the password on the account to something they don’t know.

3. Users should log out when they are done using the computer.

Leaving a computer logged in and walking away is a security risk and it makes life hard for the next person who wants to use the machine. If the machine is set to suspend or lock the screen, the next user may be forced to switch users to get to their account. Having multiple accounts logged in will suck up system resources and bog things down for those who are actually trying to use the machine.

4. Make users log in.

I am not a fan of setting a computer to log in automatically at startup, even if I am the only person who has an account. If you have multiple users automatically logged in, it completely defeats the purpose of having separate accounts. Some Linux distros offer this option when you’re setting up the admin account at install, and others actually select this option automatically when you create additional user accounts. Yeah, it’s convenient, but what happens if someone breaks into your house or steals the machine? You’ve just removed the first line of defense! While it is true that passwords and encrypted files won’t keep someone who really wants your data from getting it, don’t make it any easier for them.

Going beyond the GUI

I used a combination of GUI tools and terminal commands to demonstrate some of the basic things you might want to do with accounts on your machine. It goes way beyond that, though. Remember, these tools were designed to work with hundreds of users on mainframe computers. Here is a short list of commands you may want to get to know better, with a quick sketch of them in action after the list:

adduser – Creates a new user account.
deluser – Removes a user account.
passwd – Sets passwords and locks user accounts.
chage – Sets a minimum and maximum age for a user’s password.
su – Switches user accounts in a terminal.
groupadd – Creates a new group.
groups – Shows which groups a user belongs to.
usermod – Modifies a user account, including its group memberships.
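Here’s a quick sketch of those commands in action. The user “sally” and the group are just examples, and adduser is the Debian-style command (some distros use useradd instead):

sudo adduser sally             # create a new account
sudo passwd sally              # set or reset sally's password
sudo passwd -l sally           # lock the account (passwd -u unlocks it)
sudo chage -m 7 -M 90 sally    # password age: minimum 7 days, maximum 90
sudo usermod -aG audio sally   # append sally to the audio group
groups sally                   # show which groups sally belongs to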

Have fun!

Using Command Line Aliases – Frequently

Back in 1994, when I had been using SunOS for three years and Linux for almost two, people started talking about how you could make DOS command-line abbreviations. What swill. By that time I already had some very useful keystroke-saving aliases for the command line, because command-line usage makes up the majority of my workday.

Here are some of my frequently-used aliases:

List your files with extra info, in columns:

alias ,='ls -CFs'

Jump back a directory:

alias ..='cd ..'

List what’s most recently changed:

alias r='ls -ltr'

How much space do we have?

alias h='df -h'

How much space are we using here?

alias d='du -sh .'

What is using the most space here?

alias D='du -s * | sort -n'

Plain old vi is hard to use:

alias vi='vim'

Let’s edit our aliases and use them immediately:

alias va='vi ~/.bash_aliases && . ~/.bash_aliases'

You know about [ctrl-r], right? That lets you search your command history. Also, there is:
control-u : delete to start of line
control-e : jump to end of line
control-p : previous command

If you really want to use your “scroll back” efficiently, have the shell ignore those short alias commands, because they’re typically just noise in your history. You can level up immediately by adding:

export HISTIGNORE=",:..:r:h:d:D:vi:va"

This cleans up your command history and saves me a ton of typing. My current list of aliases is pretty long, so I use a perl one-liner just to generate my HISTIGNORE variable:

alias | perl -ne '/alias (..?)=/ && print "$1:";'
,:..:00:B:Fw:G.:GC:GD:GP:GS:GU:Gt:H:L:NB:R:SS:Ss:d:h:l:la:ll:ls:r:s:sb:va:vf:vh:vi:vv:xx
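If you want to wire that output straight into your shell startup, something like the following line in ~/.bashrc would do it. This is a sketch that assumes, as above, that the aliases you want hidden are the one- and two-character ones:

export HISTIGNORE="$(alias | perl -ne '/alias (..?)=/ && print "$1:";')"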

So, sysadmin much? What’s your alias for sudo -s?


Schedule FiOS Router Reboots with a Pogoplug

Pogoplug Mobile

There are few things in life more irritating than having your Internet go out, and often the cause is a router that needs a reboot. Sadly, not all routers are created equal, which complicates things a bit. At my home, for example, we have FiOS Internet, and the connection from my ONT to my FiOS router is coaxial (coax cable). Why does this matter? Because if I were connected from the ONT via CAT6, I could use the router of my choosing. A coaxial connection doesn’t easily afford me that opportunity.

So why don’t I just switch my FiOS over to CAT6 instead of using the coaxial cable? Because I have no interest in running CAT6 throughout my home. This means I must get as much as possible out of my ISP-provided router.

What is so awful about using the Actiontec router?

1) The Actiontec router overheats when handling both WiFi and routing duties.
2) The router has a small NAT table, which means frequent rebooting is needed.

Thankfully, I’m pretty good at coming up with reliable solutions. To tackle the first issue, I simply turned off the WiFi portion of the Actiontec router and connected to my own personal WiFi instead. The second problem was a bit trickier. Having tested the “Internet Only Bridge” approach for the Actiontec and watched it fail often, I finally settled on using my own personal router as a switch instead. It turned out to be far more reliable, and I wasn’t having to mess with it every time my ISP issued a new IP address. Trust me when I say I’m well aware of ALL of the options, and this is what works best for me. Okay, moving on.

Automatic rebooting

As reliable as my current setup is, there is still the issue of the Actiontec’s small NAT table. Being the sort of person who likes simple, I usually just reboot the router when things start slowing down. It’s rarely needed; however, getting to the box is a pain in the butt.

This led me on a mission: how could I automatically reboot my router without buying any extra hardware? I’m on a budget, so simply buying one of those IP-enabled remote power switches wasn’t something I was going to do. After all, if the thing stops working, I’m left with a useless brick.

Instead, I decided to build my own. Looking around in my “crap box,” I discovered two Pogoplugs I had forgotten about. These devices provide photo backup and sharing for the less tech-savvy among us. All I needed to do was install Linux onto the Pogoplug device.

Why would someone choose a Pogoplug over a Raspberry Pi? Easy: the Pogoplugs are “stupid cheap.” According to the current listings on Amazon, a Pi Model B+ is $32 and a Pi 2 will run $41 USD. Compare that to $10 for a new Pogoplug and it’s obvious which option makes the most sense. I’d much rather free up my Pi for other duties than have it merely manage my router’s ability to reboot itself.


Installing Debian onto the Pogoplug

I should point out that most of the tutorials about installing Debian (or any Linux distro) onto a Pogoplug are missing information, half-wrong, and almost certain to brick the device. After extensive research I found a tutorial that provides complete, accurate information. Based on that research, I recommend the tutorial for the Pogoplug v4 (both Series 4 and Mobile). If you try the linked tutorial on other Pogoplug models, you will “brick” the Pogoplug.

Getting started: when running the curl command (for dropbear), if you are getting errors, leave the box plugged in and Ethernet-connected for at least an hour. If you continue to see the error “pogoplug curl: (7) Failed to connect to”, you need to contact Pogoplug support to have them de-register the device.

Pogoplug Support Email

If installing Debian on the Pogoplug sounds scary, or you’ve already got a Raspberry Pi running Linux that you’re not using, just use the Pi; either way, you’re ready for the next step.

Setting up your router reboot box

(Hat tip to Verizon Forums)

Important: After you’ve installed Debian onto your Pogoplug v4 (or set up your existing Raspberry Pi instead), you would be wise to set up a regular non-root user for casual SSH sessions. Even though this box sits behind your router’s firewall, you’re still running a Linux box as root with various open ports.

First up, log in to your Actiontec MI424WR (or similar) FiOS router, browse to Advanced, click Yes to acknowledge the warning, then click Local Administration on the bottom left. Check “Using Primary Telnet Port (23)” and hit Apply. This is for local administration only and is not to be confused with the Remote Administration settings.

Go ahead and SSH into your newly tweaked Pogoplug. Next, you’re going to want to install a package called “expect.” Assuming you’re not running as root, we’ll be using sudo for this demonstration. I first discovered this concept on the Verizon forums last year; even though it was scripted for a Pi, I found it also works great on the Pogoplug. Once you’re logged in:

cd /home/non-root-username/
sudo apt-get install expect -y

Next, open nano in a terminal and paste in the following contents, editing any mention of /home/non-root-username/ and your router’s LAN IP address to match your personal details.

spawn telnet 192.168.1.1
expect "Username:"
send "admin\r"
expect "Password:"
send "ACTUAL-ROUTER-password\r"
expect "Wireless Broadband Router> "
sleep 5
send "system reboot\r"
sleep 5
send "exit\r"
close
sleep 5
exit

Now name the file verizonrouterreboot.expect and save it. You’ll note that we’re saving this in your /home/non-root-username/ directory. You could call the file anything you like, but for the sake of consistency, I’m sticking with the file names as I have them.

The file we just created accesses the router via telnet (locally), then, using hard returns (\r), logs into the router and reboots it. Clearly this file on its own would be annoying, since executing it just reboots your router. However, it provides the executable for our next file, so that we can automate when we want it to run.

Let’s open nano in the same directory and paste in the following contents:

#!/bin/bash
{
cd /home/non-root-username/
expect -f verizonrouterreboot.expect
echo "\r"
} > /home/non-root-username/verizonrouterreboot.log 2>&1
echo "Nightly Reboot Successful: $(date)" >> /home/non-root-username/successful.log
sleep 3
exit
Now save this file as verizonrouterreboot.sh so it can provide you with a log file and run your expect script.

As an added bonus, I’m going to also provide you with a script that will reboot the router if the Internet goes out or the router isn’t connecting with your ISP.

Once again, open up nano in the same directory and drop the following into it:

#!/bin/bash
if ping -c 1 208.67.220.220
then
: # colon is a null and is required
else
/home/non-root-username/verizonrouterreboot.sh
fi

Save this file as pingme.sh and it will make sure you’ll never have to go fishing for the power outlet ever again. This script is designed to ping an OpenDNS server on a set schedule (explained shortly). If the ping fails, it then runs the reboot script.

Before I wrap this up, there are two things that must still be done to make this work. First, we need to make sure these files can be executed.

chmod +x verizonrouterreboot.sh
chmod +x verizonrouterreboot.expect
chmod +x pingme.sh
Pogoplug Debian

Now that our scripts are executable, the next step is to put them on their appropriate schedules. My recommendation is to run verizonrouterreboot.sh at a time when no one is using the computer, say 4 AM, and to run pingme.sh every 30 minutes. After all, who wants to be without the Internet for more than 30 minutes? You can set this up with a cron job, as sketched below, and then verify your schedule is correct.
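Here’s what those two cron entries might look like. This is a sketch; it assumes the scripts live in /home/non-root-username/ as above, and you would add the lines with crontab -e on the Pogoplug:

# m   h    dom mon dow   command
0     4    *   *   *     /home/non-root-username/verizonrouterreboot.sh
*/30  *    *   *   *     /home/non-root-username/pingme.sh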

Are you a cable Internet user?

You are? That’s awesome! As luck would have it, I’m working on two different approaches for automatically rebooting cable modems. If you use a cable modem and would be interested in helping me test these techniques, HIT THE COMMENTS and let’s put our heads together.

I need to be able to test both the “telnet method” and the “wget to url” method with your help. Ideally if both work, this will cover most cable modem types and reboot methods.

The X First Aid Kit For Linux

tmux with weechat, mc and w3m in panes

If you spend some time on Linux-based social media sites or forums, you inevitably see posts where a user is having trouble logging into their desktop. Either the boot finishes and they’re left looking at a black screen, or they can’t seem to get past the display manager login.

“I enter my user name and password, the screen goes black for a second, then it goes back to the login screen.”

One of the most common ‘fixes’ I see is the suggestion to reinstall the distro, or various parts of it. I cringe every time I see this. There’s really no reason to reinstall the OS unless one has been screwing around in the system to the point of trashing it beyond repair, or the installation ISO was bad to begin with. And in the latter case, reinstalling won’t fix anything when you’re using the same ISO to do it. For that one, here’s a suggested rule of thumb: always check the hash signature of a downloaded ISO against the one posted on the distro website before trying to install. Most distros put the signature right on the download page, but not all; with some you might actually have to search for it elsewhere. Take the time to find and check it, because it can save hours of headaches later.

Another common scenario occurs after an update of a perfectly working system, when the X server, display manager or desktop refuses to start. In either case, whether a new install or an update has gone bad, reinstalling, uninstalling or rolling-back shouldn’t be the first course of action. You can probably fix the problem yourself with a few tools and a little time.

Let me state here, though: if you have a phobia or complete aversion to the command line (which I will refer to by the acronym ‘CLI’ from here on), you might as well stop reading now. The same goes for those who have no real interest in how their system works or in trying to fix it themselves.

One of the beauties of Linux is that it was designed from the start to be a multi-user environment. My first install of Linux back in the Dark Ages didn’t have a GUI, but I quickly learned that if I pressed CTRL+ALT plus a function key, I could log into another session and have a whole new screen to work in. Pressing CTRL+ALT and a different function key allowed me to switch back and forth between sessions where I was doing different things, much like the virtual desktops in a GUI. A bit crude perhaps by today’s standards, but it worked, and it was instrumental in hooking me on Linux. Fortunately for those whose GUI has failed them, this still works on any Linux system. Even now, while you’re reading this, you should be able to press CTRL+ALT+F1 and switch to a command console with a prompt for you to log in. Most modern distros hook the X session to F5 or F7; you’ll probably have to try it to see which function key brings you back to the GUI.

A WORD OF WARNING: on some distros when using the proprietary drivers (nVidia or ATI/AMD) switching to a console can cause the driver to unload, destroying your X session. If you use one of the proprietary drivers, don’t try switching to a console, especially if you have anything running in the GUI that you can’t or don’t want to lose. If you want to try it, start from a fresh, empty session. If you find you can’t switch back to the GUI, or X has died, do a ‘sudo reboot’ from the console to restart the system. Yes, there are ways of restarting the X server, but they can differ depending on the distro, so you would have to check with your particular one to find out how to do it. A reboot should work everywhere.

Now, we’re not going to discuss how to fix a broken X or display server here. There are simply too many differences in how it gets started across distros, and what works in one may not in another. What we are going to cover is some tools almost all distros have in their repositories. These are tools I recommend installing because they can help in the event of a failed GUI. They’ll not only help you find and fix the problem; more importantly, some let you connect with the outside world to get help and information. They’re CLI programs, and they’re the first things I add to a new install. After years of trying tons of different ones, I’ve chosen these because you don’t need to be an uber-geek to use them. They’re all fairly simple to understand and work with.

First, I recommend learning how to use the distro’s CLI package manager. All distros have a command-line utility for installing, removing, updating or repairing packages. Whether it’s apt-get, pacman, zypper, urpmi or whatever they call it, it’s there and installed by default. Learning the basics can make all the difference when you find yourself stuck in console mode. Often a problem stems from a broken package or missing dependency, so learning at least basic use of the CLI package manager is well worth the effort. With some distros it’s the only way of working with packages, since they have no GUI tool. Others wrap it in a pretty GUI, and many users never touch the CLI version. Try it; in fact, try installing some or all of the following CLI applications with it, as sketched below. You might find you prefer it to the GUI one, and if not, at least you’ll know how to use it in an emergency.
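For instance, installing the tools covered in this article looks something like this, depending on the family your distro belongs to (package names can vary slightly, so treat these as examples):

sudo apt-get install mc htop w3m weechat tmux    # Debian, Ubuntu and friends
sudo pacman -S mc htop w3m weechat tmux          # Arch and derivatives
sudo zypper install mc htop w3m weechat tmux     # openSUSE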

Midnight Commander

Top of my list is Midnight Commander, otherwise known as ‘mc’. This is a dual-pane file manager on steroids. There are plenty of others, and you may want to try some of them to compare for yourself. I chose mc because it’s intuitive, and most operations can be done via menus or function keys. The most common file-operation function keys are shown right at the bottom of the screen. It has a built-in help system, so it’s easy to find out how to do something you need that isn’t in the menus. It also has a built-in text editor, and although most distros install a text editor like nano, vi or vim (or all of them) by default, I find being able to navigate to a file with mc and then edit it within the same application fast and convenient. The built-in editor might not have all the power and options of the others, but for simple script editing it’s more than adequate. Like the file manager itself, the most common text operations are mapped to the function keys listed at the bottom of the screen, while others can be found via the help screens.

MC Editor

Fixing a bug in startup files usually involves editing a script or configuration file, and mc’s editor will do that easily. You’ll also likely find yourself looking through log files to track down a problem; again, mc makes this much easier than having to remember a bunch of terminal commands. Another nice feature is that if you need to issue a terminal command while in mc, you can do that too and view the output, all without having to quit mc.

As with all the applications I’ll suggest here, or any you might choose instead, take some time to learn the basics of its use. After installing, pop it open in a terminal on your desktop and try it out. Get to know how to navigate around with it, open a file for editing, view a file, search for one, etc. A little time spent with it once in a while will help tremendously if and when you actually need it. Who knows, you might just like it enough to use it regularly. I do. I find myself using mc in a terminal just as much as, if not more than, my GUI file manager, simply because in many things it’s faster and more powerful. Also, I tend to be keyboard-centric, keeping my hands on the keyboard, and GUI file managers require a lot of mouse work. With mc it’s pop open a terminal, type ‘mc’ at the prompt, hit ENTER and go to work.

Htop

The next CLI application I install is htop. I think every Linux distro on the planet installs ‘top’ by default, and it is a great tool for seeing what processes are running (or not running) and much more. It’s truly a Swiss Army Knife of process investigation, but it’s not very intuitive or user-friendly. And while there are plenty of other process-viewing applications around, I find htop the easiest to read and the most intuitive to use. Like mc, it has common operations bound to function keys listed at the bottom of the window. Often, one of the first steps in diagnosing a problem is finding out what is running and what isn’t. With a glance you can see if the X server is running, if the display manager is, or any other process you’re interested in. Using htop’s tree view you can see the parent/child relationships of processes, which process spawned which, and get an understanding of how they all connect. With its sort command you can sort by PID to see what order processes started in, since start order can matter for things that depend on something else. You can sort by CPU usage to see if something is hogging the system, and many other ways, to see what you want to find out. You can see each process’s status, whether it’s running, sleeping or “zombied”, and much more. While htop can’t fix anything, it’s a great, easy tool for getting an idea of the current state of the system, which helps in diagnosing the problem.
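A few of htop’s default key bindings cover most of what you’d need in an emergency. These are the stock bindings in recent versions; the F1 help screen in your build has the full list:

htop    # start it from any console
# F5 toggles the process tree view
# P, M and T sort by CPU, memory and time respectively
# F3 searches for a process by name
# F9 sends a signal (kill) to the selected process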

w3m CLI Web Browser

For many users, the computer they’re having problems with is the only computer they own. If X or the display manager or desktop fails to start, they have no access to their best resource for help: their web browser. While one can use a smart phone to seek help, if you’ve ever tried to read something like the Arch Wiki or Ask Ubuntu on a phone, it’s not all that easy. Fortunately, there are several very good CLI browsers available. My personal choice is w3m, mostly because it’s fairly easy to use and it can display graphics. Lynx is another popular CLI browser. No CLI browser will give you the pretty laid-out view of a web page that a GUI browser will, but when we’re trying to find information, text is really all we need. w3m has the added benefit that if a web page has a graphic, like a screen-shot showing something as an example, w3m can display it. As you can see in the screen-shot here, a CLI browser is pretty basic in rendering a web page, but it will get the job done. To invoke w3m at a console (or in a GUI terminal), type ‘w3m web-address’, “web-address” being the web site you want. In the example, I entered ‘w3m https://google.com’. When the page came up, I typed in my search term (in this case, ‘systemd’), and the page displayed is the result. Arrow keys, PGUP and PGDN all work to move around the page, TAB moves from one link to another, and ENTER follows a link. Admittedly, it’s not the easiest way to browse the web, but CLI browsers are fast, light, and get the job done. Again, it helps to spend at least a little time using the browser in a terminal window from your desktop, just to get familiar with its basic use. That way you’re not faced with trying to figure out how to work it when you actually need it. And while I wouldn’t want to use w3m or any of the other CLI browsers as my daily driver, it and Google have bailed me out more than once when I was stuck at a console. I consider a CLI browser a must-have for any Linux first-aid kit.

Weechat IRC

There are times when we just can’t seem to find the problem on our own, and we need more help than what we’re finding on the web. This is where IRC can make the difference. It’s one of the oldest forms of social networking. Almost every distro out there has an IRC channel, with many of the larger distros having several. Getting on a distro’s IRC channel and asking for help can sometimes make the difference in whether we get the system going again or not. Linux is blessed with many great CLI IRC clients. The two most popular are probably irssi and weechat. I prefer weechat myself, because it can split its own window into several panes, so you see multiple connected channels at the same time. I like weechat so much that it’s actually my daily driver on IRC. Getting the full benefit of what it’s capable of takes some time in configuring and learning it, but it’s worth the effort in my opinion. As part of a first-aid kit, though, it doesn’t need any configuration; it’s perfectly usable with its default settings.

Whichever CLI IRC client you decide to use, learn how to connect to an IRC server and join a chat with it, as sketched below. If you’ve never used IRC before, now’s a great time to try it! You can learn most of the basics of IRC here and here. Then check with your distro’s website to see what IRC channels they offer, what network they’re on, and anything else they think is important for you to know. When ready, take your CLI IRC client for a test drive and join the channel. You don’t have to say anything; you can idle on the side and watch how things work if you wish, or jump right in and say “Hi”.
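In weechat, connecting takes only a handful of commands. The network address and channel below are placeholders; substitute whatever your distro’s site lists:

weechat                                # start the client in a console
/server add libera irc.libera.chat     # define the network (example address)
/connect libera                        # connect to it
/join #yourdistro                      # join your distro's support channel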

When using IRC for problem solving help: be patient. When you ask a question on IRC, it’s not uncommon for some time to pass before anyone answers. Even when you see several people chatting away on the channel, most will not answer a query when they honestly don’t know the answer themselves. Likely there’s someone there who can help, but they may not be looking at the IRC chat at the moment and don’t see the request. Give it some time, 5 or 10 minutes minimum to see if there’s a response. If not, then try again, politely. If you’ve been watching someone helping another with their problem, you could ask them directly if they have any experience with yours. Something like this:

“I’m having a problem with X. You seem to know quite a lot, think you could help? I’d greatly appreciate any advice.”

Most importantly, be patient and be polite. Over the years I’ve had great luck with help from IRC users, but admittedly it often took a while before someone came along who could help.

tmux with weechat, mc and w3m in panes

The last app for our first-aid kit is totally optional, but I find it quite handy. It’s what’s known as a console or terminal multiplexer. As I stated earlier, in Linux you can have several console sessions open on different terminals, switching with the CTRL+ALT+F# keys. You could have mc running in one, htop in another, w3m in another, weechat in another, and so on. The problem is, sometimes we’re following instructions from the web on something that’s a bit complex, and switching to another screen where we can’t see the original instructions can be an issue. Trying to remember a complicated sequence of commands can lead to errors, possibly making things worse. Truth is, we’re quite spoiled by our GUIs, where we can have multiple windows open on a single screen and follow the instructions directly. Fortunately, we can also do this in a CLI console with a multiplexer. The two most used, and available in pretty much any distro, are ‘screen’ and ‘tmux’. Personally I use tmux. With tmux you can have multiple screens, similar to tabs in a browser, as well as splits in the current screen, much like a tiling window manager. Like all of the other apps I’ve suggested, tmux can do far more, but for our purposes the most important ability is placing several of the other tools on a single screen, as shown in the screen-shot. It enables us to do what we need while reading or following instructions, all on the same screen. And like the other applications, tmux is extremely configurable but requires no configuration to use; the example in the screen-shot is plain out-of-the-box tmux. Learning a few of its default key bindings, so we can create splits in the window and move between them, is really all that’s needed.
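Those default bindings are enough to reproduce the layout in the screen-shot. Every tmux command starts with the prefix CTRL+b (these are the stock bindings):

tmux    # start a session from any console
# CTRL+b %    split the current pane left/right
# CTRL+b "    split the current pane top/bottom
# CTRL+b o    jump to the next pane
# CTRL+b c    create a new window (like a browser tab)
# CTRL+b d    detach; 'tmux attach' brings it all back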

Admittedly, trying to diagnose and fix a problem with X, or anything else, from a command console is not for everyone. Many of us just want to use the computer and couldn’t care less about how it works. But if you’re like me, interested in how it functions and willing to try and learn new things, these tools can be invaluable when the GUI goes south. For many, the command line is some mysterious place that only uber-geeks go; these tools put a more familiar, friendlier face on it. With them we can move around in our system, see what it’s doing, and connect to the outside world. Over the years I’ve tried many utilities, and these are the ones I’ve settled on, because they’re simple to use and get the job done. Even though I’ve used Linux for nearly 20 years now, I’m not a programmer or a command-line guru. But I do like to tinker, and I’ve broken my systems more times than I can remember. Being able to find help from the web at the console has been priceless. And honestly, most fixes have been very simple once I found where the problem was: things like wrong permissions on an executable, a script calling something that doesn’t exist or had its name changed in an update, or a missing package. Only once in all those years have I ever screwed things up so badly that the only recourse was to completely reinstall the system. Besides, there’s a real feeling of accomplishment when we can fix something ourselves.

Sharing Files with Samba the Easy Way

Samba

Yeah, I hear you talking, but I don’t believe a word. “You say you’re gonna keep it simple. You say you’re gonna stick to just having one Linux machine to play around with, and you’re not gonna need to think too much about networking.” Well, let me tell you, it just doesn’t work that way at all.

You see, there is one dirty little secret that long time Linux users know but keep to themselves. Something no one tells newbies. But I will. So here it is: Linux is highly addictive, and having one Linux machine is like eating just one potato chip. You can’t do it, I tell you! Second-hand computers are cheap and Linux is a free download, so there’s nothing stopping you from finding yourself with a house full of happy Linux boxes, all humming away. There’s always another reason to add one more machine… Just wait, you’ll see.

Another dirty little secret they won’t tell you is that Samba, the almost universally accepted software used to interface Linux with Windows-style peer-to-peer networks, is a complex program. It’s hard to configure and is, at times, a source of frustration for even the most advanced Linux user. Yes, there are GUI-based tools that promise a point-and-click setup of Samba shared folders, but they always disappoint in the end. To make things worse, the average home user will find a dizzying array of setup options and configuration schemes when they look online for guidance. I struggled with Samba for a long time until I ran into a very simple solution that makes working with it a breeze. This article and video will show you how to set up a client/server-style network that will work whether you have 2 machines or 200.

Getting the server software is as easy as installing the samba package from your distro’s repositories. Once it’s installed, the hard part is getting it configured to do your bidding. If we want to have a trouble free networking experience, it’ll take a bit of thought and we need to set some goals.

Goal One: Keep it simple

I once took a job at a large radio and TV facility with several studios and offices. Needless to say, there were a lot of computers; some were for general office work and others were highly specialized production and automation machines. The production network was well designed, following the guidelines set by the company that provided the automation software. No worries for me there. However, the office network was a mess. Basically, every desk had a PC, and every PC was set up to share files and printers in a Workgroup, making it so everyone could share everything. There were lots of machines on the network with lots of folders shared on each one. There was no way to know where anything was, so you’d have to call the person you wanted to share something with and have them click where you told them to find it. It was a mess, and something had to be done. The solution was super simple, and I have set up every network the same way since.

I grabbed one of the extra PCs collecting dust in the storage room and stripped it down to be a lean, mean serving machine. All this thing did was sit in the corner and share files; as an added bonus, it needed very little attention. This machine shared one folder only. Each person in the building had a folder within that folder, and everyone had access to everything. Instead of each workstation sharing a folder, they just had a shortcut on their desktop to the server. Anyone who wanted to share files could simply drag them into their own corresponding folder, or drop the file in someone else’s folder on the server. By doing so, anyone on the network could access them. Files on the server could be added, deleted or modified in place. Everyone knew that what they put on the server was public, and that they needed to keep local copies of anything they didn’t want to lose, just in case something got blown off the server or it crashed. With a memo and a meeting to explain how it worked, the office network was suddenly much more efficient, and people finally knew where to look for the files they wanted to share. I was a star and everybody loved me.

Most home users won’t want to set aside a separate machine just to serve files. Luckily, that’s no big deal, because a simple Samba server can run in the background on whatever machine you choose, preferably the one that is online the most. You can also share things like music and video folders in such a way that anyone can read or copy the files, but they can’t delete or change them. You don’t even have to be logged into the account you set up the server and shared files under; those files will be available on the network as long as that machine is up and running. Someone else can even be using their own account on it at the same time. The Samba server doesn’t care.

One of the really nice things about networking this way is that only one machine has to be configured as a server. Pretty much every Linux distro comes with a Samba client already set up, and all you have to do is browse the network from a file manager to connect to any machine serving files, as sketched below. You could even set up multiple servers if you so desired.
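From a client machine, reaching the server usually takes nothing more than its name or address. The host name here is just a placeholder:

smbclient -L //dellserver -N    # list the shares a server offers, no password
# or, in most GUI file managers, enter this in the location bar:
# smb://dellserver/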

Goal Two: Keep it real

Do you have a bunch of wireless devices that need to be hooked up to the Internet? Do you have more than one computer that needs Internet access? Most likely, the answer is yes and you probably do all that with a router. That router probably acts as a hardware firewall and the only way for anyone to get to your devices would be for them to get into your local network. Which is fine, so long as you have a strong Wi-Fi password and don’t give it out to everyone that passes by. If your password is secure, then you can feel relatively safe when it comes to local network shares. That said, there’s really no need to make users log into a shared folder just to play a song or copy a picture that you yourself have designated as shareable. Many of the pitfalls we encounter with Samba have to do with security features, so why not just turn them off?

Another major issue with Samba, even when sharing with Windows computers, is file permissions. By default, Samba tries to preserve file permissions. With the standard Samba setup, files you pull off the network have to be copied to a local folder to make them belong to you, or manually edited to change permissions, using root privileges in most cases. This is a pain and can be really confusing for less savvy users on your network. What most home users want is to just copy a file onto a local machine and have it be theirs… so that’s what we’ll do.

A Word About Windows

Samba was designed to share files between Windows and Linux, and it works quite well most of the time. Setting up a Windows machine to work with Samba is not something I’m prepared to get into, but it is worth noting that Windows 7 and up present some challenges. Microsoft came up with a new networking scheme called HomeGroup that is totally incompatible with Samba. You’ll have to configure your Windows machine to use standard Workgroup-style networking to interact with Samba. There’s lots of info available on how to do this, and it’s not too terribly hard to get going.

A curious quirk of sharing files with Windows is that sometimes a Linux file name won’t work with Windows. The naming conventions are just slightly different: Windows doesn’t care about the case of the text when dealing with files, and Linux does. Also, there are some characters that Linux will gladly let you use in a file name that Windows won’t. Samba tries to reconcile these issues, with mixed results. You may find yourself having to rename a file every once in a while to get it through the Samba network, even when going from one Linux machine to another. One way around these bugaboos is to take any files that absolutely must retain the exact same properties and throw them into a tar.gz file before putting them on the network, as shown below. This is a good thing to keep in mind when using Samba to share configuration files that you want to import into another machine.
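Bundling and unbundling for the trip across the network is a one-liner each way; the file names here are just examples:

tar -czf myconfigs.tar.gz .bashrc .vimrc    # pack, preserving names and permissions
tar -xzf myconfigs.tar.gz                   # unpack on the destination machine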

https://www.youtube.com/watch?v=n-uSzkGhHDY

The Configuration Process

The goals we talked about above will be achieved by creating custom configuration files from scratch. I go through how to install the Samba server and create your own custom Samba configuration files step by step in the video, but here are some sample files to show you what needs to be in them. Use this as a quick reference when you’re configuring your own network:

Contents of /etc/samba/smb.conf

[global] 
        server string = Dell Desktop 
        workgroup = WORKGROUP 
        security = user 
        map to guest = Bad User 
        name resolve order = bcast hosts wins 
        include = /etc/samba/smbshared.conf

Contents of /etc/samba/smbshared.conf

[Joe's Music] 
       force user = joe 
       path = /home/joe/Music 
       writable = no 
       public = yes 

[Network File Server] 
       force user = joe 
       path = /home/joe/Public
       writable = yes 
       public = yes

 

You can change the share names and the name of the Workgroup, of course. It’s all in the video. After editing, it’s worth checking your work and reloading the server, as sketched below.
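Samba ships a syntax checker, and the server has to be restarted (or reloaded) to pick up config changes. The service name varies by distro, so treat these as examples:

testparm                        # check /etc/samba/smb.conf for errors
sudo systemctl restart smbd     # Debian/Ubuntu service name
sudo systemctl restart smb      # Fedora/openSUSE service name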

One Last Thing…

You may get a message asking whether you want Samba to automatically configure itself when you install an updated version of the Samba server. Don’t! Just say no, to make sure your custom config files aren’t overwritten. It’s a good idea to keep a backup copy of your configs as well. I keep a document file with the config text, and cut and paste from it whenever I need to re-install Samba after a change of distro or an upgrade to the OS on the machine running the server. It takes two minutes at the most, and then I’m back in business.

Good luck and don’t have too much fun playing with Linux file sharing!

Web Server File Permissions Mystery Solved

Ever wonder if this whole Linux thing is actually unholy devil worship? If you’ve ever worked on web server file permissions, you might think so. Solving a permissions problem takes just the right amount of searching, and just short of too much coffee. (I’ve probably had too much already, as I just caught myself bobbing up and down.) Continue reading to find out about a strange problem that might happen to you as well. I’d been watching a problem occur on a web server for about a month, where a file from a customer registration becomes unusable because the file permissions are wrong. Not just a little wrong either, bonkers wrong:

 

unable to open file 
/var/tmpcgi/registration.txt at /usr/local/bin/updater.pl line 33.
total 12
drwxr-xr-x. 34 wheel       wheel         4096 Aug 17 10:24 ..
--w--wx-wT.  1 wwwuser     wwwuser         30 Aug 27 05:51 registration.txt
drwxrwsr-x.  2 wheel       wheel         4096 Aug 27 05:51 .

What makes an error like that? All my scripts were setting wide-open permissions on files for this process. (I know: tsk, tsk.) Despite this, the problem bugged me for weeks. I created a script just to find the oddball permissions. Actually, that wasn’t a great solution, because even my find command was wrong. What could be wrong with:

find -type f -perm -220

Well, for starters, it didn’t do what I wanted. So this morning I finally searched for what creates files with “--w--wx-wT” and stumbled across something helpful. I found forum posts chastising a user for filing bug reports caused by his own mistake: using chmod with the string “666” instead of the octal 0666.
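For what it’s worth, matching on that oddball sticky bit directly would have been closer to what I wanted. A sketch:

find . -type f -perm -1000    # files with the sticky bit set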

Unlike that user, I *know* that I don’t want to use string 666, but it did give me something to search for:

$ grep -r 666

 

Now, how often do you search for the devil in the details? I wasn’t using a string, but it was still wrong. In perl, saying chmod 666, $filename; is just as bad. The 666 is decimal. Devil horns, that won’t work! Use chmod 0666, $filename; and you escape hell. Not to heaven, but to octal.
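Here is the whole mystery in two lines of perl. Decimal 666 is octal 1232, which is exactly the sticky-bit mess in the listing above:

chmod  666, $filename;   # WRONG: decimal 666 == octal 1232 == --w--wx-wT
chmod 0666, $filename;   # right: octal 0666 == rw-rw-rw-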

Later, I found something in my search results that you geeks will probably find useful: there is a table of unholy permissions low down in the documentation for the Stat::lsMode perl module. I recommend putting this in your hat for future reference!