More Networking Tricks


I consider the residential gateway overloaded. Your ISP is in the business of shipping you the cheapest possible computer to act as a gateway device. Current devices also include a WiFi radio so they can double as a home access point. By setting up a Raspberry Pi as your DHCP server and DNS forwarder, you can remove that burden from the residential gateway and just let it pass packets.

Separate WiFi?

Your standard residential gateway might have an 802.11 a/b/g/n 1×1 radio, maybe a 2×2. That alone doesn't cripple the gateway, but the radio reception can be poor, and it probably won't cover your whole house. You will likely get much better reception from one or more inexpensive 802.11 a/b/g/n 3×3 MIMO access points. Three antennas, when pointed in different directions, are much more effective.

Even more effective WiFi coverage comes from putting multiple APs in common areas at medium power to spread the coverage out. There are plenty of WiFi planning applications to help you. One basic technique is a "hex cell" design: choose three channels that don't overlap (channels 1, 6, and 11 are the common choices). As long as no two adjacent hex cells are on the same channel, you should have reasonable protection from channel interference. Using medium power also means that people right next to an AP won't suffer from an over-driven signal, which cripples performance just as badly.

More DHCP Tricks

There are a lot of things you can do with DHCP. And if you avidly tinker with your computers (and I encourage you to tinker), dnsmasq is a good way to explore what your DHCP server can do. Different network interfaces on your DHCP server can be granted different address pools. Specific MAC addresses can be given persistent IP addresses outside of an address pool. You can manage machine names from the /etc/hosts file of the DHCP server, and at the same time this approach allows computers to announce their own names. That lets you connect to hamster1 by name on the network, where hamster1 is the name set locally on that machine. You can also publish routes to networks outside of your LAN, such as your VPNed networks.
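As a sketch of how those tricks look in dnsmasq (the MAC address, names, and ranges here are made-up examples):

```
# /etc/dnsmasq.conf -- illustrative values only

# One pool per subnet; dnsmasq serves each range on whichever
# interface carries that subnet:
dhcp-range=192.168.0.100,192.168.0.199,12h
dhcp-range=192.168.10.100,192.168.10.199,12h

# Pin a persistent address (outside the pools) to a specific MAC:
dhcp-host=00:04:3a:af:32:0a,hamster1,192.168.0.20

# Resolve names from this server's /etc/hosts, qualified with a domain:
expand-hosts
domain=lan

# Publish an extra route (DHCP option 121), e.g. to a VPNed network
# reachable through 192.168.0.3:
dhcp-option=121,10.8.0.0/24,192.168.0.3
```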

This might also be useful for separating your own "IoT" network. Most of the wee devices that want to connect to a WiFi controller typically join your default network. A shrewd network administrator will create a separate DHCP lease pool for the MAC address patterns used by their devices and assign them a completely different network range that has no gateway address. Don't use your APs to serve DHCP. Your AP doesn't care how many different addresses pass through it, so use your Raspberry Pi dnsmasq server to serve your DHCP addresses.
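That separation is a few lines of dnsmasq configuration; in this sketch the vendor MAC prefix, range, and tag name are all hypothetical:

```
# Tag devices matching a vendor MAC prefix as "iot":
dhcp-host=a4:cf:12:*:*:*,set:iot

# Give tagged clients their own range on a separate network...
dhcp-range=tag:iot,192.168.66.50,192.168.66.150,24h

# ...and suppress the router option (3) so they get no gateway:
dhcp-option=tag:iot,3
```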

Controlling Ads

There is a wonderful project, hBlock, which maintains a curated list of adware/malware domains. It generates a hosts file where all the ad server names get mapped to 127.0.0.1 or 0.0.0.0.

In your /etc/dnsmasq.conf file, you would add an 'addn-hosts' directive to point to the hBlock output file. You have to edit the hBlock output file a bit, but it's not difficult. I separate my own network hosts entries into three files:

  • Entries for 127/8 and localnet and own hostname:
    addn-hosts=/etc/hosts-localhost
  • Entries for my LAN hostnames:
    addn-hosts=/etc/hosts-bitratchet.net
  • Entries to block:
    addn-hosts=/etc/hosts-hblock

This keeps things pretty organized. It also protects all members of my home network from getting things from crapware sites.

Are Proxies Worth the Effort?

Back in 2010, before HTTPS Everywhere was a Firefox plugin, a squid proxy was a great way to save on bandwidth. You can configure your DHCP server to announce a WPAD address, and every browser on your network should auto-configure itself to use your local HTTP proxy. Now, with most popular sites using HTTPS, the value of an HTTP proxy as a form of acceleration is diminished for those sites. You can create a certificate for your own proxy and install it on your machines if you want to proxy HTTPS on your local network, but that won't help guests. Also, in a public setting, these proxies can become a ripe target.
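For the WPAD piece, dnsmasq can hand out DHCP option 252 pointing at your proxy auto-config file; this sketch assumes a web server on 192.168.0.3 hosting wpad.dat:

```
# DHCP option 252: URL of the proxy auto-configuration (WPAD) file
dhcp-option=252,"http://192.168.0.3/wpad.dat"
```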

There are still many sites that are plain HTTP, notably Linux distribution repositories. All those .deb and .rpm files are mostly still available over plain HTTP, and caching them does your distribution's mirrors a favor. If you regularly do updates, play with VM images of Linux, or run an office of Linux machines, you might even consider installing IntelligentMirror on your squid server. You can point your package managers at the proxy with entries in /etc/apt/apt.conf or /etc/dnf/dnf.conf.
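The client side is one line per package manager; the proxy host and port (squid's default 3128) are assumptions here:

```
# /etc/apt/apt.conf (Debian/Ubuntu)
Acquire::http::Proxy "http://192.168.0.3:3128/";

# /etc/dnf/dnf.conf, under [main] (Fedora/RHEL)
proxy=http://192.168.0.3:3128
```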

Access Control and Virus Scanning on Proxies

It is also possible to kick your kids off the Internet at certain times of day using squid, by combining access control rules that match their IP addresses with rules that match times of day. Getting this working effectively might take some tuning.
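A sketch of what that looks like in squid.conf (the address range and schedule are invented):

```
# The kids' machines, by IP range:
acl kids src 192.168.0.50-192.168.0.59

# School nights, 9pm to midnight
# (day codes: S M T W H F A = Sun..Sat, H is Thursday):
acl latenight time SMTWH 21:00-23:59

# Deny that combination ahead of your normal allow rules:
http_access deny kids latenight
```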

It is also possible to make squid use ClamAV to scan the files requested. I've not done this before, but there are how-tos out there. This would be a very effective complement to using hBlock.

Is DNS What Makes My Internet Slow?


Recently, we saw what happened with the botnet attack on Dyn, but instead, let's talk about a fine day on a happy Beatrix Potter-like Internet, shall we? We start on your local network: your DSL modem (or cable box) is your residential gateway. There's a little hedgehog inside, helping the hamster in your computer fetch those silly cat photos from the Internet. The problem: your ISP ships you a pretty sad hedgehog inside that residential gateway.

So my modem makes the Internet slow?

When you turn on your home computer, the hamster inside gets a zap, and starts spinning the CPU wheel really fast, providing energy to your network interface. To get a network address, the hamster blows a horn on your network card, requesting an address. This is a DHCP request.

Hearing this DHCP Discovery announcement, the hedgehog in your residential gateway puts on its glasses, straightens his bow-tie, and consults the network address leases ledger. This ledger associates your hamster's MAC address (like 00:04:3a:af:32:0a) with an IP address (like 192.168.0.2). He writes the DHCP offer on a slip of oiled paper with india ink and quietly passes the note to the hamster. Inevitably, the ink smears off the note and the hamster will have to blast the horn again. But before the ink smears, the hamster has enough info to fetch a silly cat picture.

What does DHCP have to do with DNS?

Your Linux computer is running dhclient (DHCP client) to ask for an IP address, and your residential gateway is running dhcpd (DHCP daemon). When your computer gets a DHCP Offer, it contains the IP of your network gateway, the IPs of other network gateways, and the IP of a DNS server. This determines where you ask for DNS answers. An offer might contain:

Your Address:    192.168.0.10
Gateway:         192.168.0.1
Options {
    1 (Netmask): 255.255.255.0
    3 (Router):  192.168.0.1
    6 (DNS):     205.171.2.25
   51 (Lease time): 86400 seconds
}

And the DNS part?

So when your hamster asks for www.bing.com, it zaps a DNS query to 205.171.2.25. How do we know that number? Our dhclient daemon rewrites /etc/resolv.conf with that DNS server IP.

 $ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.0.1
nameserver 205.171.2.25

If you ping 205.171.2.25, you’ll be lucky to get a time as good as 35 milliseconds (ms).

 $ ping 205.171.2.25
64 bytes from 205.171.2.25: icmp_seq=1 ttl=60 time=24.1 ms
64 bytes from 205.171.2.25: icmp_seq=2 ttl=60 time=279 ms
64 bytes from 205.171.2.25: icmp_seq=3 ttl=60 time=302 ms
64 bytes from 205.171.2.25: icmp_seq=4 ttl=60 time=325 ms
64 bytes from 205.171.2.25: icmp_seq=5 ttl=60 time=41.1 ms
^C

Compare Google’s DNS:

 $ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=59 time=25.3 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=59 time=129 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=59 time=556 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=59 time=176 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=59 time=200 ms
^C

Compare my residential gateway:

 $ ping 192.168.0.1
64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=6.12 ms
64 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=5.46 ms
64 bytes from 192.168.0.1: icmp_seq=3 ttl=64 time=5.32 ms
^C

Compare two hosts on a gigabit Ethernet switch:

 $ ping 192.168.100.1
64 bytes from 192.168.100.24: icmp_seq=1 ttl=64 time=0.238 ms
64 bytes from 192.168.100.24: icmp_seq=2 ttl=64 time=0.269 ms
64 bytes from 192.168.100.24: icmp_seq=3 ttl=64 time=0.188 ms

(It looks like the modem I'm behind, or the ISP I'm using, enforces a rate limit on ICMP: the first response came back at full speed, and the later responses were significantly delayed.)

Gone are the days when network times were measured only in milliseconds. A typical time to ping your network gateway over Ethernet is about 250 microseconds. So even if 205.171.2.25 returns an answer immediately, a 300 ms round trip means the lookup takes 300.25 ms.

If you try that over wireless, a ping to your gateway is more likely around 6 ms, and a DNS result lands at 306.25 ms. Repeat that for ten host names referenced in a web page, and you can spend 3 seconds just asking for IPs before making any connections. What you have there is a rather dumb hedgehog.
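The back-of-the-envelope math, assuming the worst case of ten serial lookups:

```shell
# Ten DNS lookups at ~300 ms each (remote resolver) versus
# ten at ~0.25 ms each (a caching resolver one Ethernet hop away).
awk 'BEGIN {
  printf "upstream: %.2f s\n", 10 * 0.300
  printf "cached:   %.4f s\n", 10 * 0.00025
}'
```

Three whole seconds versus a couple of milliseconds, before a single page byte arrives.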

You can use dig to measure DNS lookup times.

 $ dig www.bing.com @8.8.8.8 2>&1 | grep -i 'query time'
;; Query time: 249 msec

 $ dig www.bing.com @205.171.2.25 2>&1 | grep -i 'query time'
;; Query time: 208 msec

Looks like Google and CenturyLink are not so far off, mostly transit time. Let’s compare against our residential gateway:

 $ dig www.bing.com @192.168.0.1 2>&1 | grep -i 'query time'
;; Query time: 26 msec

 $ dig www.bing.com @192.168.0.1 2>&1 | grep -i 'query time'
;; Query time: 152 msec

 $ dig www.bing.com @192.168.0.1 2>&1 | grep -i 'query time'
;; Query time: 25 msec

Wow, was that inconsistent or what? Now, let’s not be too hasty…that was conducted over a WiFi network. Anytime you do that you can get a series of radio retransmissions…but still. Now I’ll compare it to an Intel Atom powered firewall running dnsmasq:

 $ dig www.bing.com @192.168.100.1 2>&1 | grep -i 'query time'
;; Query time: 20 msec

 $ dig www.bing.com @192.168.100.1 2>&1 | grep -i 'query time'
;; Query time: 0 msec

OH SNAP, as my local hedgehogs say, a zero-millisecond query time! Actually, it's about a 236-microsecond response, because that's our average ping time on that network. This is the value of a caching forwarder! Compare the craptastic performance of my ISP's residential gateway: its 25 ms is four times longer than the 6 ms ping time, which is all a cached response should cost.

Upgrading your Hedgehog

dnsmasq is both a DHCP server and a caching DNS forwarder. It makes your hedgehog smarter. If you set up another computer on your network to serve DHCP and DNS, remember to turn off DHCP on your residential gateway. Get it right and you get a prompt speedup; leave both running and you have dueling hedgehogs confusing your hamster until it passes out! Your computers will get conflicting responses from both DHCP servers, and the faster hardware doesn't always answer first.

Let's set up a Raspberry Pi with a fixed IP of 192.168.0.3 and configure dnsmasq on it. Your new dnsmasq DHCP offer would look like this:

Your Address:    192.168.0.10
Gateway:         192.168.0.1
Options {
    1 (Netmask): 255.255.255.0
    3 (Router):  192.168.0.1
    6 (DNS):     192.168.0.3
   51 (Lease time): 86400 seconds
}

Your Raspberry Pi will answer DNS queries by asking 205.171.2.25 whenever the answer isn't cached, so the first lookup of any name is always the slowest. But once a snappy hedgehog can return the cached IP of www.bing.com in microseconds, looking up the ten host names in a web page costs only a couple of milliseconds of LAN round trips.
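A minimal /etc/dnsmasq.conf for that Pi might look like this; the range, gateway, and upstream resolver come from the example above, while the cache size is my own assumption:

```
# Hand out leases, still advertising the residential gateway as router:
dhcp-range=192.168.0.10,192.168.0.199,24h
dhcp-option=3,192.168.0.1
# (dnsmasq advertises itself, 192.168.0.3, as the DNS server by default)

# Forward cache misses to the ISP's resolver:
server=205.171.2.25

# Keep a healthy cache of answers:
cache-size=1000
```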

This performance gain is one reason I prefer to move my DNS service off my residential gateway. I'll have more on dnsmasq, and ideas on splitting up your network devices, in further posts.

Write Your Own CPU Meter in Bash

Here’s a fun little project that is a pretty good combination of array use and pattern manipulation.

Bash CPU Meter
#!/bin/bash
# Draw one bar per core, scaled to the current clock speed in /proc/cpuinfo.
function get_mhz() {
   local line
   while read -r line; do
      if [[ $line =~ cpu\ MHz ]]; then
         # Split "cpu MHz : 3500.000" on whitespace; field 3 is the speed.
         local hunks=(${line})
         local ahz=${hunks[3]}
         # Scale the integer MHz to the terminal width (3500 MHz = full bar).
         local bars=$(( (${ahz%%.*} * (COLUMNS - 12)) / 3500 ))
         printf "%s:%${bars}s\n" "$ahz" '=' | tr ' ' '='
      fi
   done < /proc/cpuinfo
}
while true; do
   # Re-read the terminal size each pass, in case the window was resized.
   stty_line=($(stty size))
   COLUMNS=${stty_line[1]}
   get_mhz | sort -rn
   echo ""
   sleep 1
done

I could go on and on about it, but I’d rather you just ask me questions.

Slow SSD 3: Beyond Memory Testing

Part 1 | Part 2

Welcome, ensign, to our debrief. It sounds like you had troubling times…using…computers. Starfleet reports that citizens apparently have already encountered a form of your file system corruption problem. They report…unexplained crashes, file system corruption but no corresponding SMART data or bad blocks as a cause.

I finally got to a point where I could put tasks aside and do a memory test. I chose the Memory Test option on the Grub boot menu, let it run all night…and no problems.

Next day…apt-get update…read only file system! Grrrrr… If it’s not my memory, maybe there’s a problem with my system stability in general. I’ve seen problems with my system in the past acting up and then…pow! My power supply blew up! Captain! Hull integrity compromised! Warp core is Off Line!

Well, getting a quality power supply is a fine investment. I plunk down for more than I need and get an EVGA 750W Platinum. Impressive bit of kit, easily sufficient for a 180W processor, full RAM slots, two SSDs, six spinning hard drives and a water cooler plus all the fans the case will fit.

Klingon bird of prey de-cloaking off the starboard—damned read-only mode in /var again! That was frustrating. So…if I have good power, maybe it's the motherboard. So, I buy an ASUS Sabertooth TUF Shit Thingy. Reinstall components. Run a memory test, test passes. File system…read only again! OK, order new drives. I find a pair of refurbished 256GB Corsair drives on Newegg for a fine price, and a few days later I've installed Ubuntu 16.04 Server edition with them in RAID1. Get it booting and…donk…read-only filesystem. I bet there was nothing wrong with my pair of OCZ Vertex3s anyhow; I never saw any ATA errors logged from them. And on the fourth boot of that system…one of those Corsair drives disappears. Dead.

(Hrm…more kit to the little warriors later, huh?)

OK…light some incense. Pray that Sto-vo-kor will allow me in and consider what other alterations I could try whilst sharpening my mek'leth…and sharpen…and read. Memory is possible. CPU is possible. I find both 32GB of ECC RAM and another FX-8350 won't break the bank. And while I'm doing this research, eagerly awaiting my new shipment of photon torpedoes…I start seeing some red-shirt conversations about different versions of Memtest86. …there're two memtests?

Well, yes…and many releases. There are Memtest86 and Memtest86+. Memtest86+ forked and pulled ahead a few years ago, but Memtest86 (now maintained by PassMark) has since had two major versions: 5.1.0 and 6.1.2, and the latest allows better parallel-CPU memory testing. Well, I put the bat'leth back on the wall and make a new USB boot stick.

The Passmark Memtest86 boots in two different modes: a Legacy BIOS mode and a UEFI aware mode. Only the UEFI mode will allow version 6.1.2 to run. Nice interface. Larger selection of tests, and clearly a way to punish all those dishonorable RAM sticks and all the unworthy processing cores in parallel. Sto-vo-kor, I hear thy song!

Load…choose parallel…start test…and parallel CPU testing locks and reboots my system in about 90 seconds. In a day, new RAM and CPU arrive. Mix new CPU, old ram: passes single core test, fails parallel test. Old CPU, new RAM, same thing. What have I done to dishonor the Empire? My shame, so research tells me, is that the AMD FX series on-die memory controllers are dishonorable, second-rate Ferengi knock-off technology barely good enough to run a 30 year old holosuite!

Over-clocking the ram is clearly a poor idea. How about under-clocking the RAM? Does not pass memory test. Over-volt the RAM? Does not pass memtest. Over-volt, under-clock ram and under-clock and over-volt the CPU? Does not pass memtest.

Tribble feces! The impudence of AMD! They are now my blood enemy…and when next we meet….

But, I have a system that I’m amazed doesn’t crash more, and have a lot of programming to do on it. Well, it’s about as safe as it gets for now, until I find something that is actually server grade. And that leaves some Vulcan Intel Xeon technology to find at DS9.

While storytelling is a great Klingon tradition…to drag this tale out to another episode would be a waste of time. First, I found an unused SuperMicro X10SRI-F motherboard and I bought a used Xeon E5-1650 for it. But, I didn’t understand the difference between the X9 and X10 series of motherboards and the X10 series is Skylake, and what arrived was a Haswell processor. Another problem is that SuperMicro motherboards lack a lot of useful things like … USB3 and audio chipset and standard PC case jumpers.

Back to Amazon and Newegg. I don’t want new Skylake equipment. Those processors are hundreds of dollars more expensive than what I need and I already have plenty of DDR3 that shouldn’t actually have a problem. And what do you know…the Asus X79 platform is still around as used parts. This was the first bit of clear sky I saw on Rurapente in some time. When the UPS truck pulls away, I pounce on the front porch, growling, covering my new motherboard with my arms lest some filthy Romulans walk by.

By this time, I’ve picked up a old used tower case from the nearby computer repair shop, and have placed my old power supply and some new SSDs in it. I don’t want to yank out all the components (water cooler, motherboard, etc) from the previous case again. That’s just pointless time and wear. Add two more (new) 256GB Kingston SSDs, new processor and RAM and …boots. Whew…

This Bird of Prey, with a Xeon E5-1650 warp core and 32GB of ECC in the front collector array is sent straight to battle against the Memtest86 monster! Latest version, all parallel, immediately. Parallel transposition patterns passes…yes, that’s the test that fails after 90 seconds on the AMD. Random patterns passes…yes, never made it this far on the AMD. Its late…time for sleep.

We come out of warp around planet Earth, having passed the row-hammer tests! This is so much better than I expected. Now I know why Intel has totally crushed it in the server space.

And Sol rises above the limb of the planet, shining its dawn light across the oceans and glaring on the view screen. Computer…half brightness. Helm, disengage warp core, one quarter impulse, take your time to dry dock. We need to swap in all the other new components and give the used bits to the kids to use for their gaming systems. Number one, arrange shore leave for the crew. I’ll be in my ready room.

Slow SSD #2: Protection for your Partitions


Part 3 | Part 1


The beautiful spring weather was denied me by my flaky computer. I would be working away, notice that I needed a new package, and then get an

error: read only file system

failure message when trying it. Once, I even rebooted into an (initrd) prompt telling me to fsck my root partition. Not good…not happy…worried about my system blowing out the magic smoke and getting stuck in the muddy trenches of computer recovery.

To maximize the safety of my system in the short term (after adding the fsck check I described in part one), I decided I might need to reconfigure the particle phase inverter…uh…mount options on my partitions. The first thing to do is to turn on journaled data mode (data=journal), so that data is written to the journal before being written in place in the file system. Since I'm running SSDs (and I don't know if the SSDs are going bad or not), this seemed like it would probably be a safe enough thing to do.

First we inject bosons into the nacelle plasma emitter…just turn on the journaled-data flag using tune2fs:

$ sudo tune2fs -o journal_data /dev/md1
$ sudo tune2fs -o journal_data /dev/mapper/lv_root
$ sudo tune2fs -o journal_data /dev/mapper/lv_var

When I did this, I also updated my /etc/fstab file. It's a good idea to remove any mount options that would conflict, like data=journal or data=writeback; some distros add these at install time, and a mismatch can block the partition from actually mounting at boot. Just remove any data=X mount options and the journaling mode set in the superblock will be used on mount.

While I was in the Jefferies tube…fstab, I added the synchronous options to the impulse engines: sync,dirsync. These make your processes wait for writes to actually reach the disk. They are not performance options; you don't want to game with them, but you can get work done with them. Now my fstab looks more like:

/dev/md0    /boot   ext4    noatime,sync,dirsync 0 0
/dev/md1    /       ext4    noatime,sync,dirsync 0 0
/dev/md3    /var    ext4    noatime,sync,dirsync 0 0

There were other entries, but unrelated.

           __
         /| 
        / \ 
        \  \ 
        }]::)==-{) 
        /  / 
        \ / 
         \|__ 

And I did an update-grub2, and found it definitely went more slowly. But did it help? I thought it did, until I saw hull integrity drop to 30%…more in part 3.

Slow SSD #1: Recover your Boot and Root


Part 2 | Part 3

File-system integrity! Boring words, right? You’ve got work to finish. Your system froze in the middle of doing updates. When was my last backup? About as boring as a heart attack when your system won’t boot because your / (root) partition is corrupt.

All of those thoughts raced through my head when I started getting file system corruption on my home workstation. Ever see one of these errors?

bogus-inode warning

Use dmesg to find file-system warnings.

ext4_iget: bogus i_node
remounting as read-only

How did I find that message? Using dmesg. You can grep dmesg output if you want:

$ dmesg | egrep -i '(ext|mount)'

Or you can use the mount command to find things mounted read-only:

 $ mount | grep "ro,"

If you’re caught in this predicament and cannot finish your boot sequence because you can’t find fsck on your hard drive, because you’re stuck in initrd…I hope you have a USB stick with Ubuntu MATE on it! (initrd is the early-boot ramdisk environment the kernel runs in before mounting your real root file system.) My advice is to go make a MATE live-CD right now…very useful!

Make a MATE LiveCD image today!

When you’re booted into your live-CD environment, your desktop has not yet done the storage discovery (RAID assembly, LVM activation) that fsck needs before it can get to work. In my case, my /boot and volume groups were all on top of mdraid partitions. You’ll want fdisk, mdadm, and lvm2 to get started:

$ sudo apt install mdadm fdisk lvm2

Once those install, we can scan our partitions using partprobe or kpartx. (I really appreciate that the MATE live CD allows me to install new packages.)

$ sudo partprobe
$ sudo fdisk -l /dev/sda

 Device Boot Start End Sectors Size Id Type
 /dev/sda1 * 2048 1953791 1951744 953M fd Linux raid autodetect
 /dev/sda2 1953792 3907583 1953792 954M fd Linux raid autodetect
 /dev/sda3 3909630 250068991 246159362 117.4G 5 Extended
 /dev/sda5 3909632 101564415 97654784 46.6G fd Linux raid autodetect
 /dev/sda6 101566464 140625919 39059456 18.6G 83 Linux
 /dev/sda7 140627968 141125631 497664 243M 83 Linux
 /dev/sda8 141127680 250068991 108941312 52G 83 Linux

Because mirrored partitions (sda1, sdb1) must be modified at the same time, you should NOT fsck them directly. You need to assemble the raid devices first:

$ sudo mdadm --assemble --scan

Then check which are assembled in /proc/mdstat:

$ cat /proc/mdstat

md1 : active raid1 sdb2[1] sda2[0]
 976320 blocks super 1.2 [2/2] [UU]

md2 : active raid1 sdb5[1] sda5[2]
 48794624 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
 975296 blocks super 1.2 [2/2] [UU]

My md0 device was mounted on /boot. Let’s run fsck right on it:

$ sudo fsck.ext4 -vfy /dev/md0

The other items are all managed by LVM. We need to do a vgscan to activate them:

$ sudo vgscan

That should detect your logical volumes and assemble them, but you may need to activate them at some point with

sudo vgchange -ay $your_vgname

My volume name is vg_cholla_root. Those logical volumes are accessible through the Device Mapper, so look in /dev/mapper/. Let’s fsck our root volume:

$ sudo fsck -vfy /dev/mapper/lv_cholla_root

That should not take very long if you’re on an SSD, especially a mirrored SSD. Log out of your live-CD environment, remove your USB stick, reboot the machine, and your original system should come back up.

To Skip that Tedious Step

You don’t really want to do that twice. This actually kept happening to me (hence part 1). Before your system wedges again, you want to edit /etc/default/grub and add some on-boot fsck options:

GRUB_CMDLINE_LINUX_DEFAULT="fsck.mode=force fsck.repair=yes"
GRUB_CMDLINE_LINUX="fsck.mode=force fsck.repair=yes"

To be super cautious, also make sure both of your boot sectors (/dev/sda, /dev/sdb) have the grub boot loader installed:

$ sudo grub-install /dev/sda
$ sudo grub-install /dev/sdb

Then run sudo update-grub2. Every boot-up will now force a file system check; on my system that takes about four seconds. I have my /home directory on a separate (ZFS) mount point, and that doesn’t get checked at boot.

Next episode, I’ll cover some of the additional steps I’ve taken to protect my workstation file-systems from being brutalized by the vagaries of my staggering hardware. Stay tuned to Freedom Penguin!