The Eternal Journeyman

It seems like every “luminary” in the world of software development has chimed in on this one at some point, so why not one more, right? Sure, I’m not a famous “known name” in the software world. I’ve done some cool and important things, but you’ve probably never seen or used them, at least not personally. If you live in Ohio then you’ve definitely been a consumer of my code, but you’ve still never seen it. It’s invisible stuff running in multiple Ohio government agencies, quietly shuttling your information from place to place. If you have a concealed-carry license, parts of your background information totally passed through my code. And before you ask, no, I didn’t siphon any of it off.

I work on stuff that affects lots of people, but I’m not well known. So what do I have to offer to the conversation? What I have is another perspective, from a guy who aspires to the ideals of software craftsmanship, but has enough impostor syndrome to keep him humble, and certainly enough to stop him from declaring himself a “Master Craftsman” and telling you how to do things. Claiming the title of “Master Craftsman” actually flips your bozo bit pretty quickly in my head. I’m just a guy who writes code, and sometimes I share things with others.

I am not a master craftsman, and neither are you. Nor is anyone in our entire industry. How could we be when everything we know becomes obsolete every two years? That’s my point. That’s my perspective. I work in an industry where none of us is ever truly going to “get there”. We will never achieve mastery.

We, as humans, have a lot of industries down to a science. We’ve been building actual physical buildings for millennia, and we’ve got that pretty much figured out, right? Every now and then someone comes up with some new high-tech composite material and shifts the landscape a bit. We develop large-scale computer modeling, CNC milling, thinner glass, and suddenly our art museums start looking less like boxes and more like melty organic blobs. That’s just the skin though. The fundamental knowledge of how to safely prop up a structure and not impede the traffic flow within it hasn’t really changed that much. We’ve got the core science down, and we’ve had it down for generations.

Manufacturing, automotive, consumer electronics: these are all industries that undergo constant evolution, but the core ideas of what we’re doing and how we do it are pretty much smoothed out. The rough edges have been sanded off, and the general “shape” of the industry doesn’t change that much. We get better at making chips smaller and smaller. We make more efficient CPUs by stacking more transistors in less space, but we don’t just throw out the transistor altogether and start using frob nodules instead. At least not yet. Some major shift will happen somewhere over the horizon and it will change how we do things, but for now it’s transistors and heatsinks.

Compare that to the software world. We’ve only been talking to computers since the 1960s. This industry is still in its infancy, and we’re changing our minds about the right way to do things on a daily basis. We don’t have as many languages in play as we used to, and the business world has largely settled on .Net, Java, and PHP as the main ways we get things done, but we still have these major tectonic shifts happening every now and then. The last big one was when everyone got all excited about functional languages and how they were going to change everything we do. Except they didn’t. They dominated our user group and convention topics for a year or so, changed how we do a few things out on the periphery of actual business, and then they faded out of the limelight. When’s the last time your local user group hosted an F# talk? Yeah, that’s what I thought.

And now it’s everything “in the cloud”. But which cloud? Azure? AWS? Should these things we’re putting in the cloud be containerized? Which container? Docker? Do we need Kubernetes? How do I even begin to pick one? What if I pick wrong? What happens if I advocate for building a client’s critical systems on top of Azure and then Microsoft loses interest and walks away like they’ve done with so many other things? Anyone remember Zune? Yeah… I have three or four of those. Windows Phones? Same thing. I have a drawer full of old Windows Phones. Microsoft changes directions like a crack-addled squirrel trying to cross a busy intersection. What if I’d told a client to build their front end on Silverlight? I’d be stuck being the “Silverlight guy” while everyone else moved on to newer things.

Years ago, I noticed a pattern forming. Any new Microsoft technology that I personally got behind would get killed off. I am apparently the kiss of death for all things Microsoft, so much so that friends made me promise not to get a HoloLens because they wanted it to be a thing. They still want it to be a thing… and it still isn’t. Maybe I should just go ahead and buy one just to put a bullet in its head once and for all. I’m frankly surprised that the Surface line is still around since I actually bought one of those. But I digress.

The point is that our industry is nowhere near settled on what we do. I’ve spent the last few years actively avoiding the front end of web applications because there’s still so much churn going on over there. Knockout, Angular, React, Ember, Vue… everyone wants to change the world of web applications, and I’ve tried to avoid the whole mess until the dust settles. It’s not that I’m jaded. I’ve just backed too many losing horses in the past and I’m experienced enough to know that, in all likelihood, none of these frameworks will emerge as the eventual winner, so I’m not hitching my wagon to any of them.

My prediction is that something far less revolutionary will come along. It will seem quiet and tame by comparison. It will make just enough sense that it will quietly take over as the boring but safe choice for actually getting stuff done, in the same way that jQuery and Bootstrap became the de-facto tools in their areas. I also predict that someday we’ll look back on the chaos that was the front-end landscape of the early 21st century, and we’ll regret every single choice we made, no matter how right it seemed at the time.

Despite the metaphor that occasionally gets thrown around by leaders in our industry, this isn’t like being a Samurai in feudal Japan. You can’t just demonstrate a few katas at a new dojo (job) to easily establish your rank and standing, because all of your katas are so two years ago. This is more like working for years to finally achieve your black belt in a particular style, only to suddenly find that your country is being invaded by foreigners from some strange new land, who have a non-standard number of arms and a totally different center of gravity. Everything you know is wrong and you have to start all over again. You’re not a master anymore. You’re just a highly experienced apprentice.

You know those old-timey black-and-white films of men crashing their ill-conceived “flying machines”? We laugh at their idiocy, at the fact that they tried to fly without the most basic high-school-level understanding of aerodynamics and lift. What did they think they were doing? Yeah… well, that’s us. We have absolutely no idea what we’re doing. History will look back at our feeble efforts and laugh mercilessly at us.

In the midst of all this constant change, I refuse to believe that any of us can call ourselves master craftsmen. We are all journeymen at best, and will be for the rest of our careers. The only people who can call themselves “master” are those who keep doing the same thing for a significant period of time. They are the Fortran and COBOL programmers of the world. The ones who came out of retirement and commanded ridiculous salaries in the late ’90s preparing for Y2K, because it was easier to dust them off and overpay them than it was to convince fresh, new developers to train up on skills that they’d be throwing away in a couple of years’ time once the crisis was over.

Many of you are too young to remember when Y2K was the big scary monster that was about to bring everything crashing down. Evangelicals prepared themselves for the end times, and normally level-headed families stockpiled food and ammunition. We were expecting to wake up January 1st, 2000 with no power and no phones. Our banks were going to be on fire, and the fire trucks weren’t going to start. Violence and looting in the streets, dogs and cats living together, mass pandemonium. I was convinced that the phone system would crash, not because of the actual bug, but because we were going to simply overload the thing when everyone phoned their Mom first thing in the morning on 1/1/00 to make sure everything was okay, and vice versa.

But none of that happened, and you want to know why? Because there were armies of true masters of an obsolete craft out there who scrambled to rewrite the world in time to save us all. These were men and women who still held a mastery over COBOL long after most of them had been forced to retire or move on to new and unfamiliar languages and idioms, and in that regard, they were not modern masters. In their new roles, and in their new languages, they were just like the rest of us, scrambling to keep up. But they were masters at their particular game. A game no one else was playing anymore. They were Samurai. They were Jedi. And for the last year of the twentieth century, they were gods.

Someday, our great-great-grandchildren may have our industry well and truly sorted out once and for all, and maybe they’ll be able to call themselves master craftsmen, but not us. No way. We need to come to terms with the fact that only the very core motivations of our industry are settled. The general approach has been worked out, but not the specifics of implementation. The implementation is our best flailing attempt to build something for a client using the primitive stone tools we have available at the moment. We’re just coming into the bronze age here, and we think we see, ever so vaguely on the dim and foggy horizon, what the future looks like, and we’re still probably wrong about that. Machine Learning, Artificial Intelligence, Natural Language, all of that is becoming commonplace. You can’t attend a software conference without tripping over an ML presenter these days, but are we really going to use it for the day-to-day business of moving money around and balancing accounts? I don’t know… maybe?

So should we give up on the idea of Craftsmanship? Should we cut corners, and just do “whatever it takes” to get something (anything) shipped? Do we give up on getting things “right” and just get them done? I mean… why bother if we’re just going to throw it all out in the next big rewrite anyway, right?

Wrong. We still need to be taking pride in what we do, and we need to leave behind code that the next guy can understand and improve on. That is how we learn and move forward. We still need to build to the best of our abilities, even though we know that someone will roll their eyes at our code in the future. With any luck, that person is just us, and the thought in our head will be “What was I thinking?” and not “What idiot wrote this?” We need to feel good today about the code we’re writing, even if we probably won’t feel the same way about it two years from now. If there’s one thing we can learn from the other industries, it’s this:

If it’s worth building, it’s worth building well.

Notice that I don’t say “correctly” or “right” here, because whatever we do will inevitably be wrong in a few years’ time, but it needs to be as correct as we can get it for now. The better you build it, the longer it lasts. Do you think your grandchildren will be fighting over who gets your Ikea desk after you die? What about great-grandpa’s handmade, roll-top writing desk with the white oak and black walnut compass rose inlay? They’re the same thing, right?

Craftsmanship is not a destination, it’s a journey. You will probably never reach true mastery in your lifetime. But that’s not the point, is it? Who wants to be “done” anyway? Where’s the fun in that? I mean, sure, I’d like the occasional “vacation project” where it’s all stuff I already know, and I get to feel super smart for a few months, but I’ll always find that the world has moved on while I was enjoying my sense of mastery.

When someone asks me “What do you want to be doing in five years?”, my answer is always the same: “This, but better”.

Pluralsight Course Updated

For those who have watched my Pluralsight course: it has recently been updated to include the changes brought by Raspbian Stretch. I wasn’t able to completely refresh the content end to end, but the former CrashPlan module has been completely replaced, and now talks about setting up remote backups using Duplicati.

Everything else up through Module 9 has been refreshed, updated, and had content replaced where possible. In some cases, this is simply an overlay on the video indicating changes, but there are numerous places where narration and video have been updated in-place to bring the course up to the current OS and software.

So, of course, they released a new Raspbian mere days after my updates. <sigh/>

Anyway, if you’ve watched my course in the past, I thank you, and suggest that you might want to go check out the updates. If you haven’t seen it, and you have a Pluralsight subscription, then you should check it out.

Thanks.

External SSH access on the Raspberry Pi

Over the course of this series, I’ve shown you several different ways to access your Raspberry Pi Home Server remotely. We’ve looked at OpenVPN, for connecting to your home network, RealVNC for opening a remote desktop session, and SSH for opening a remote terminal window. The first two options have worked whether you are on the same network at the time or not, but the last option only works when you’re at home, or already connected through VPN, at least so far.

In this post, I’m going to punch a hole in my router’s firewall to allow external SSH access to my server. I am doing this to support some other tools that I’ll discuss in future posts, but for now, I’m just going to get basic external SSH access up and running. If you’ve enabled SSH on your Pi, and can already connect to it over the local network, then all that’s required is to open up your router’s admin page, and map a port from the outside to port 22 on the Pi.

There, that was simple. It’s also asking for trouble. If your pi user’s password is nice and strong, then it’s not asking for a lot of trouble, but there’s always the possibility that someone’s going to come knocking and try to brute-force their way into your Pi, and therefore your network.

My first piece of advice would be to NOT simply map port 22 on the outside world to port 22 on the Pi. That’s the first thing a hacker’s going to try. The second thing they’ll try is port 222, then port 2222 and so on. Let’s not make it TOO easy, right? Apart from security concerns, there’s also the simpler problem that you can only map each external port on the router to a single device on the local network, so if you have multiple devices that you’d like to connect to using the same protocol, they can’t all use the same port. You’ll need some way to tell them apart.

For this example, I’m going to go with a simple convention of one or two digits to identify the device I want to connect to, and three digits for the port. If my server’s internal IP address is 192.168.1.5, and I want to talk to port 22 (SSH), then my external port might be 5022. A different server, at 192.168.1.15, would use 15022 for the external SSH port. Get it? This also means that I can more easily remember which port goes to which computer. This starts to fall apart for higher-numbered ports, since port numbers only go up to 65535, so you might need to abbreviate things later on, but let’s at least start with something vaguely mnemonic.
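
Once the mapping is in place, connecting from outside is just a matter of using the external port. A quick sketch, assuming the convention above and a hypothetical home address of home.example.com:

ssh -p 5022 pi@home.example.com

PuTTY users would do the same thing by putting home.example.com in the Host Name box and 5022 in the Port box.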

That’s some basic security through obscurity, but we can do much better than that. Odds are, you’re using a fairly basic password for your pi account. Maybe you’ve added some special characters and some capitalization to strengthen things a bit, but you need to ask yourself “Is this password strong enough to protect all my stuff from evil?”. If you’re not totally confident in your password’s strength, then it’s time to take it to the next level.

Rather than using passwords to secure SSH access, let’s set up public/private key-based authentication instead. You can think of a public/private key pair as kind of like a super-password. The private key is a way of asserting your identity, and the public key is a way of verifying that assertion. The private key is way more complex than you could ever hope to remember, and certainly more than anyone using current technology could brute-force their way through within our lifetimes.

You may have used keys such as this already in order to connect with systems like GitHub. If you have already generated a key, then you can skip this step and use the keys you already have. There’s no reason you can’t use the same key pair for any number of different services. 

Check your home directory for a hidden subdirectory called “.ssh”. For Linux users, this will be at “~/.ssh”, for Windows users, it will be at “C:\Users\USERNAME\.ssh”. If there are already files in that directory called “id_rsa” and “id_rsa.pub”, then you already have a key pair. If you’re missing just the public key, then keep reading. The public key is easily recreated, and we’ll get to that in just a minute.
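
If you’d rather check from a terminal than dig through hidden folders, a quick directory listing will tell you. On Mac or Linux:

ls -l ~/.ssh

If the command complains that the directory doesn’t exist, you definitely don’t have a key pair yet.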

Generate a key pair

Assuming you don’t already have a key pair on the device you want to connect from (the client, not the server), you’ll need to generate one. For Windows users (like me), this will be very different than it is for other operating systems. I’ll start with the command line instructions for Mac and Linux users first.

Mac & Linux Users

All Mac and Linux users need is one command.

ssh-keygen

The tool should prompt you for everything you need. You can accept the default for the filename, which will be id_rsa, and stored in your home folder, under a “.ssh” directory. You’ll also be prompted for a passphrase. This is optional, but assigning a passphrase means that even if someone got access to your computer, they still wouldn’t be able to SSH into your server without knowing that passphrase. This is your choice, and I won’t judge you if you leave it blank. You should now have two files in the ~/.ssh directory called “id_rsa” and “id_rsa.pub”.
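
If you’d like more control, ssh-keygen also takes explicit options for the key type and size. These are standard flags, and the comment at the end is just a label for your own reference:

ssh-keygen -t rsa -b 4096 -C "home-server access"

That’s it, you’re done. Skip ahead to “Installing the public key”.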

Windows Users

For Windows users, I’m assuming you’ve already installed PuTTY, since I’ve used it for this entire series so far. If not, go install that now. It’s not the prettiest website in the world, but the tool is the de-facto standard for SSH in the Windows world, although I hear true OpenSSH is on the way for Windows users. We’ll need the “puttygen” tool that gets installed along with PuTTY. You can simply press the Windows key, and type “puttygen” to run it.

Press the Generate button, and move the mouse around in the blank area until the key is generated.

The public key appears in plain text in the large textbox in the middle of the PuTTYgen window. It’s also conveniently highlighted, so you can simply right-click it and copy it to your clipboard. We’ll need it in just a minute.

Press the “Save private key” button, and save this file to the .ssh folder under your home directory (e.g. C:\Users\Mel\.ssh). Create the folder if it doesn’t exist already, and call the file “id_rsa” by convention.

Missing .pub file?

If, for some reason, you have a private key file (id_rsa), but you don’t have the matching public key file, then there’s an easy fix. Remember that the public key is just a way of validating the private key. All the information needed to generate a public key is contained in the private key. For Mac and Linux users, you can recreate the public key file like this:

ssh-keygen -y -f ~/.ssh/id_rsa > ~/.ssh/id_rsa.pub

For Windows users, click the “Load” button in PuttyGen, and load up the id_rsa file. The rest of the UI will fill in, and you can right-click in the public key text box, and “select all”, then right-click again and copy it to the clipboard. You can also save the public key into a file, but it won’t be in the right format for the Pi to consume. What you really need is right there in that textbox, so just copy it to the clipboard.

Installing the public key

Next, you’ll need to install the public key onto the Pi, which will allow the Pi to validate the private key when it sees it. You do this by tacking it on to the end of a file that may not exist yet. Remote into the Pi either through SSH or VNC, get to a command line, and edit the authorized_keys file.

sudo nano /home/pi/.ssh/authorized_keys

Windows users can just paste in the public key we copied to the clipboard above. Mac and Linux users will need to get its contents from the id_rsa.pub file we generated earlier. Copy its entire contents to your clipboard on the client computer where you generated it, and then paste it into the nano editor on the Pi. Close and save the file.
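
One gotcha worth checking while you’re in there: sshd is picky about file permissions, and will refuse to use an authorized_keys file if the file or the .ssh directory is writable by anyone but its owner. A root-owned file (easy to end up with when you create it using sudo) will also make it awkward to edit later. To be safe, run this on the Pi:

sudo chown -R pi:pi /home/pi/.ssh
chmod 700 /home/pi/.ssh
chmod 600 /home/pi/.ssh/authorized_keys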

For Mac and Linux users, this is all you should need. Windows users will need to install the new private key into PuTTY itself. Load up an existing profile, or create a new one with the internal IP address of the server, expand the “SSH” section on the left, and then click on the “Auth” node.

Click the “Browse” button, and then go find the id_rsa file in your .ssh folder. Scroll the left-hand section back up to the top, and click on Session. Give the new session a name, and save it. If you loaded an existing session, then clicking Save will update it. Either way, PuTTY should remember the private key now. Click “Open”, and you should get a login prompt as usual, but after you enter the username, you won’t be prompted for a password. That’s it. You’re authenticating using keys.

Locking it down

We’ve laid the groundwork for securely accessing your Pi from outside your own network now, but it’s still possible to log in using a plain name and password. If you were to SSH to your server from a different computer (or PuTTY profile), it would just go on asking for a name and password like it always has. We’ve made logging in more convenient if you have a key, but we’re not yet requiring a key. We need to turn off password-based authentication next.

Edit the SSH configuration file

sudo nano /etc/ssh/sshd_config

Note the name carefully: the server’s settings live in “sshd_config”, while the similarly-named “ssh_config” configures the outgoing SSH client. If the file opens up blank, you’ve probably mistyped the name, and nano has just created a new empty file for you.

Scroll through the file, and look for the following values (or use ctrl-w to search for them), and set them accordingly.

PermitRootLogin no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no

If this is the first time you’ve edited this file, then some of these lines may be commented out. Remove the pound sign from the beginning of a line to uncomment it. Finally, reload the SSH service to apply the changes.

sudo service ssh reload
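
Before you close your current session, it’s worth verifying both halves of the change from the client, so you don’t accidentally lock yourself out. You can also ask sshd to sanity-check the file for syntax errors first. A quick sketch, using the internal address from earlier:

# On the Pi: no output means the config file parses cleanly
sudo sshd -t

# From the client: this should log you straight in with the key
ssh -i ~/.ssh/id_rsa pi@192.168.1.5

# And this should be refused, proving password logins are really off
ssh -o PubkeyAuthentication=no pi@192.168.1.5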

All that’s left is to map that external port if you haven’t already, and you’re ready to connect from the outside world. Windows users will need to make a new PuTTY profile with the key, using the external address of your home network rather than the internal address of the server. Unless you have a static IP address at home, you’ll need some kind of dynamic IP service such as no-ip.org, which I covered in the OpenVPN post.

CrashPlan is dead

It looks like Code42 isn’t interested in home users anymore, and they’ve announced that they are shifting focus to enterprise users. Even if you were backing up to your own computers, your account is still going away, and with it your ability to back up your stuff.

Now is the time to start looking for alternatives. I don’t have a recommendation yet, but I’m looking into it. I’m interested in hearing what everyone else is using, and how it’s been working out. I’d like to be able to recommend a drop-in replacement for the CrashPlan workflow, but nothing has quite fit the bill just yet.

For the short term, Windows 10 users like me can use Windows’ built-in backup system with a network share living on a Raspberry Pi. You can also use Resilio Sync or SyncThing to mirror your important files to the Pi.

Upgrading from Jessie to Stretch

On Wednesday, Aug 16th 2017, a new major Raspbian OS version was released. The “Stretch” release replaced the previous “Jessie” version, and makes a number of changes that may or may not affect you. I’m editing this post as I go through the process of upgrading my servers to the Stretch release, to let you know what I saw.

This will likely be another “live” post for a little while until I’m sure everything is stable, so remember to check back now and then, and please mention any problems you’ve seen in the comments.

Known issues (so far)

This upgrade is not quite ready for everyone to just jump in and do. I’ve already found a few things that aren’t working correctly.

Samba shares

My shares were gone after the upgrade, and even purging and reinstalling Samba couldn’t bring them back. I tried for hours to figure out what the problem was before deciding to go whole hog and do a dist-upgrade, which I normally don’t recommend. A lot of things can go wrong with a dist-upgrade since you’ll sometimes get newer versions of packages that aren’t quite ready for the real world. In this case, it worked. My shares are back online. DO NOT try this without a full backup. You’ve been warned.

Network UPS Tools (NUT)

The first time I tried the upgrade process, I got a rather troubling error message at the end saying that there were errors while processing “nut-client”, “nut-server”, and “nut”, so I gave apt-get upgrade a second pass just to make sure everything else was updated properly. Only these three packages had failed to upgrade, and it appears that’s because the nut-client service failed to restart: the configuration files were overwritten by the upgrade, and no longer had any useful information in them. After I restored my configuration settings, I was able to complete the installation. See below for more details.

Take a Backup

I shouldn’t even have to tell regular readers to do this. We’re about to make major changes to the OS itself. You should shut down the Pi, and take a backup of the SD card before going any further. If you’re booting from a hard drive, then you’ll want to attach that to another computer and back up the root partition as well. After all, that’s where most of your stuff actually lives.
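
If you need a refresher, the short version for the SD card looks something like this, run from another Linux machine with the card in a reader. The device name here is an assumption; check what your card actually shows up as first, because dd pointed at the wrong device is a disaster:

# Find the card's device name (here I'm assuming it turns out to be /dev/sdb)
lsblk

# Image the entire card to a file
sudo dd if=/dev/sdb of=pi-backup.img bs=4M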

Consider upgrading from the desktop

If you can boot to the desktop, then I’d consider doing that. There are a couple times during the upgrade process where I found it convenient to open a second terminal window to examine something the upgrade process was going to change. If you connect through SSH, then you can use multiple sessions to achieve the same effect. If you’re running Raspbian Lite, and only use a directly-attached monitor and keyboard, then you may want to keep a note pad nearby in case you need to take note of proposed config file changes so you can restore your customizations afterward.

Update APT sources

To get the new software packages, APT will need to know where they live first. All you need to do is edit the two “source” files to point to the new “stretch” repositories.

sudo nano /etc/apt/sources.list

Change all instances of “jessie” to “stretch” and save the file. You can do this by hand, or you can let nano do some of the work for you. To do a search and replace, press ctrl-\. That’s the control key and the backslash. You’ll be prompted for the text to search for (jessie), and what to replace it with (stretch). If you haven’t done much to this file, then you should only find two matches, one on the first line, and one on the last line, commented out. Press “Y” for each match, or “A” to replace them all at once.

When you’re done, close the file, saving your changes (ctrl-x, y, enter). Next, do the same thing again, for a second file.

sudo nano /etc/apt/sources.list.d/raspi.list

Change all the “jessie”s to “stretch”es, and save the file. Finally, do a standard update/upgrade.

sudo apt-get update
sudo apt-get upgrade
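
Incidentally, if you’d rather not hand-edit the sources next time, sed can make the same edits in one shot. This sketch has the same effect as the nano changes above:

sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list /etc/apt/sources.list.d/raspi.list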

What to expect during the upgrade

This is a pretty massive update, so you can expect this to take a while. Don’t just leave and come back later, though, because you’ll be prompted for answers several times during the upgrade process. The first prompt will be to approve the proposed changes, just like every time you do an upgrade, but the list will be huge. I’m talking hundreds and hundreds of packages with updates. You’ll also be prompted when configuration files have updates. I got prompted for changes to the following files:

  • /etc/skel/.bashrc
    I’ve never made any changes to this file by hand, so I just took the new version by pressing “y” and then “enter” when prompted. Note, these kinds of changes will default to “n”, so fight the urge to just hit enter like you’re used to. If you’ve made any customizations to this file, then you might consider opening a copy of the file in another window, and then reapplying your changes by hand when the update is complete. See the dhcpcd.conf step below for details.
  • /etc/login.defs
    I took the new version of this file as well since I’ve never touched it myself.
  • Graphical prompt for the keyboard language
    I let this one “guess”, which is the default option.
  • /etc/dhcpcd.conf
    Now this IS a file that I’ve messed with. It’s how you set up static IP addresses on the Pi these days, so I first used the “D” option to examine the differences between the proposed new version and what I currently have. There were changes to things other than defaults; things I had modified by hand. I opted to open a copy of the file in nano from a second command prompt, and then hand apply my customizations when the upgrade completed. Press “y” to take the new version of the file. Don’t forget to come back and apply your customizations later on, though.
  • /etc/lightdm/lightdm-gtk-greeter.conf
    Another file I haven’t touched by hand. I took the new version.
  • /etc/nut/nut.conf and /etc/nut/upsmon.conf
    I’ve definitely customized these as part of installing the battery backup (see Network UPS Tools), but the customizations aren’t that extensive. I opened a couple new terminal windows, opened the files in nano, and then took the new version of the file (“Y” option). The installation will fail to complete, but once we restore the customizations to these files, you’ll be able to pick back up and complete the process.

    • /etc/nut/ups.conf
      The part you’re interested in is at the bottom, and it’s where you set up the driver for your particular UPS. Mine looks like this:

      [RPHS]
       driver = usbhid-ups
       port = auto
       desc = "CyberPower SX550G"
    • /etc/nut/upsmon.conf
      This sets up the UPS monitor that’s in charge of actually shutting down the Pi when the power goes out. There are a few sections you’ll need here.
      The first is the MONITOR section. Mine looks like this:

      MONITOR rphs@localhost 1 upsmon NOTMYREALPASSWORD master

      The second section is the NOTIFYCMD. You will only have touched this part if you set up email notifications for power events. Mine looks like this:

      NOTIFYCMD /etc/nut/upssched-cmd.sh

      Finally, there’s the NOTIFYFLAG section. This tells NUT which power events you’re interested in getting notifications for. Not just email notifications though, this includes “wall” messages. Mine looks like this:

      NOTIFYFLAG ONLINE SYSLOG+WALL+EXEC
      NOTIFYFLAG ONBATT SYSLOG+WALL+EXEC
      NOTIFYFLAG LOWBATT SYSLOG+WALL+EXEC
      # NOTIFYFLAG FSD SYSLOG+WALL
      # NOTIFYFLAG COMMOK SYSLOG+WALL
      # NOTIFYFLAG COMMBAD SYSLOG+WALL
      # NOTIFYFLAG SHUTDOWN SYSLOG+WALL
      # NOTIFYFLAG REPLBATT SYSLOG+WALL
      # NOTIFYFLAG NOCOMM SYSLOG+WALL
      # NOTIFYFLAG NOPARENT SYSLOG+WALL

      It’s not important that your configuration looks like mine, and it probably won’t. The important thing is that you’re writing down, saving off, or opening a second window with your customizations so that we can restore them later on.

  • Graphical prompt for the “/etc/apt/apt.conf.d/50unattended-upgrades” file.
    I haven’t touched this one either, so I took the new version.

Removing PulseAudio

The Jessie release of Raspbian used the PulseAudio library for Bluetooth audio. If you’re not using it, you can safely remove it.

sudo apt-get -y purge pulseaudio*

Restoring NUT

I decided to take the new version of the configuration files simply because I don’t know what else has changed in the newer versions, and my own customizations aren’t that extensive. Taking the new files will cause the upgrade to fail because part of the upgrade involves restarting the services, but the new configuration files are missing all of the vital information about your UPS. We’ll restore these files one at a time.

/etc/nut/ups.conf

sudo nano /etc/nut/ups.conf

Scroll to the bottom and restore your UPS information from above.

sudo nano /etc/nut/upsmon.conf

Restore the MONITOR, NOTIFYCMD, and NOTIFYFLAG sections from above. Then, we’re ready to take another shot at completing the NUT upgrade. Pick up where we left off with the following command.

sudo dpkg --configure -a

Apt-get is just a polite shell around the dpkg command, which is really doing all the work behind the scenes. This command tells dpkg to finish configuring any outstanding packages. You’ll get prompted again to keep or overwrite your files. We’ve already overwritten them once, and then reapplied our customizations, so this time, choose the default option of “N” to keep your configuration files the way they are now, and the installation should complete successfully.

Restoring Samba

As I mentioned above, my Samba shares stopped working after the upgrade, and the only thing that seems to have helped bring them back to life is this:

sudo apt-get dist-upgrade

Normally, I’d say don’t do this. I used to do it all the time until I got burned. dist-upgrades are the bleeding edge of upgrades. Not everything has been tested to make sure it gets along well. Most of the time you’re probably okay, but it’s that one time in ten that takes your machine down that you can avoid by only doing normal upgrades.

Checking one last time

Just to be sure nothing got left behind, I did another apt-get upgrade. I noticed a note about packages that were no longer needed, and decided to leave them alone for now. There was also an extensive list of packages that had been “kept back”. You can force these packages to update with a “sudo apt-get dist-upgrade”, but I advise against that. The practical explanation is that dist-upgrade can leave your system pretty broken.

You’re welcome to try it if you’re feeling daring, but I’ve had bad luck with it in the past and generally avoid it. There’s definitely no way you should even consider this without a fresh backup. You’ve been warned.
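
A middle ground worth knowing about: naming a kept-back package explicitly will upgrade just that package, pulling in whatever new dependencies caused it to be held back, without the whole-system gamble of a dist-upgrade:

# Substitute a package name from the "kept back" list
sudo apt-get install package-name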

Changes in Stretch

One change I’ve read about is to the way network interfaces are named. From what I’ve read, this only affects new installations, and upgrades will retain their previous naming scheme, so an upgrade should be safe. Previously, you could count on the Pi’s Ethernet port being named “eth0”. Much like hard drives though, if you happened to have more than one Ethernet port, there is the possibility that their names could end up in a different order on any given day. That’s a pretty rare case though. Most Pis are only ever going to have the one port that they came with.

Raspbian Stretch Released

The next major version of the Raspbian OS was released today, and I wanted to take a moment to let you know that I’ll be walking my way through the entire series again on a fresh install, as well as upgrading my existing installations in-place. I’ll write a follow-up post detailing what I’ve found, but it might take me a while to complete.

In the meantime, if you try it on your own, please let me know whether you’ve found it to be better or worse, and what issues you’ve run into along the way. Instructions for doing an in-place upgrade are available on the official Raspberry Pi blog (https://www.raspberrypi.org/blog/raspbian-stretch/), but it boils down to:

  1. Take a backup
  2. Edit your apt-get source lists and change all the instances of “jessie” to “stretch”.
  3. apt-get update
  4. apt-get upgrade

The instructions then tell you to purge the pulseaudio library unless you’re using it for Bluetooth audio. I’m not sure what that’s all about, since they didn’t go into detail, but maybe more details will come later on.

CrashPlan Experiment

Important: This is an experiment, and should still be considered preliminary. I’m putting this out there “live” because I know a lot of people are interested in this topic. I’ll be updating this post as the experiment progresses, but just know up front: I’m not promising that this is a viable long-term solution. I have something that seemed to be working, but I’m still testing whether it’s stable or not (and it looks like “not”). Make sure you read all the way to the bottom, and pay attention to the updates there. These will be a running log of the experiment until such time as I’ve determined whether or not this is a viable solution. At that point, you can expect this whole post to be updated with the final results.

Things I’ve learned so far:

  • This didn’t work very well at all on a Pi 2, but seemed happier on a Pi 3
  • It ran very well on the Pi 3, and then stopped. The basic CrashPlan logs have nothing interesting to say, but the service.log.0 file is positively massive, and not entirely happy
  • Just when you’re about to give up on it, it starts working again.
  • Just when you think it’s working, it stops again.

Basically, yes, it runs, but not for very long. Maybe it’s a memory restriction, maybe it’s some other instability, I don’t know just yet, but one thing’s for sure: CrashPlan won’t stay running long enough to count.
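
If you want to poke at that service.log.0 file yourself, it lives under CrashPlan’s install directory. Assuming you took the installer’s default location, something like this will show you the most recent activity:

tail -n 100 /usr/local/crashplan/log/service.log.0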


Ever since Code42 started distributing pre-compiled binaries as part of CrashPlan, I’ve been hoping that they would realize just how many people want to run CrashPlan on Raspberry Pis and other similar devices, and start compiling binaries for ARM processors. I realize that they have no real incentive to do so since their business model relies on subscriptions to their cloud services, but still, it would be nice. That does not appear to be happening, and so I’ve been looking for other alternatives.

My first solution was to configure the Raspberry Pi like any normal file share, and simply use it as a backup location rather than counting on it to actually run CrashPlan. This has been working, but has two major problems.

  1. It requires a read/write share for storing the backups. If your main computer gets hit with a virus, it could destroy the files in this location, including the backups of other computers. You could mitigate this by creating different shares for different computers, and assigning them different permissions.
  2. The Pi itself has no way to back up its own files. We can fix this through regular backups of the SD card, or the boot partition on the hard drive, but it’s not automatic.

I’ve been looking for other alternatives, and found something that I thought would work. A company called Eltechs makes a product called Exagear Desktop that is an x86 emulator. I’m not talking about emulating an entire computer the way you would use RetroPie to run an old arcade game. Exagear Desktop is more of an emulated execution environment. I don’t want to get into splitting semantic hairs here, but it’s a way for us to run the regular x86 desktop Linux versions of programs like CrashPlan on the Raspberry Pi. We’re going to take a performance hit, of course, but the hope is that it’ll get us our automated backups again… once the kinks have been worked out.

It’s worth noting that Exagear Desktop is not free. While I’ve been able to pull off everything in this series at little or no cost so far, this is one area where you’ll need to spend some more money. A perpetual license for a single Raspberry Pi costs between $17 and $33, depending on which model of Pi you’re buying it for. In some cases, that’s more than the Pi itself costs, but consider this: a $35 Pi 3 plus a $33 license is still only $68, and uses a penny a day in electricity (not counting the hard drive), so it’s still a cheaper option than dedicating a regular desktop machine to the job. If you have an old leftover desktop machine, then by all means repurpose that, but if you’re stuck on the idea of a CrashPi like I am, then Exagear Desktop may (eventually) be a viable option. You can even try it out for free for three days to see how it performs on your Pi before deciding whether to spend the money or not.

Note: I’m not getting paid for this or anything. When I told them about the blog series, and specifically about me wanting to test CrashPlan on Exagear, Eltechs did give me a longer license for a single machine to do my testing with, but I didn’t get a free perpetual license or anything. If a viable free alternative x86 emulator comes out, I’ll be covering that too. This just seems to be the only emulator that actually works right now. Also, while I was just beginning the process of writing this, they posted their own blog on exactly this topic, so they beat me to it, but I don’t care much for their installation instructions, which are terribly manual, and won’t result in a free trial license, so I’m continuing on with my own post.

I’m examining the possibility that following their strict instructions may result in a more stable system… just not this weekend. I burned all of Saturday trying to coax this thing to life, without success.

Ready? Let’s get started.

Install Exagear Desktop

As I mentioned, there is a blog post on Eltech’s site already (drat, they beat me to it) about getting CrashPlan up and running, but I don’t advise following their instructions for installing their own product, because their instructions only work for a purchased, licensed copy, and I’m presuming you want to try it before you buy it.

If you have a license for the product, then following their instructions will possibly get you a more recent version, but installing through apt-get will mean more convenient updates later on. You’ll only get one shot at a trial per machine though. Even if you wipe yourself back to a previous image, the trial will fail on a second attempt. I found this out while trying to verify my installation instructions. Still, if you just want to see how it will behave, this is the only way I know of to kick off the three-day trial period. The regular installation instructions on Eltech’s blog post will not get you a trial, you’ll need a full license for them to work.

First, make sure your Pi is either hooked up to a monitor, or you are connected through a VNC connection. Part of the installation process is going to require you to interact with a pop-up dialog in order to start your trial, so you need to be doing this from a desktop environment. If you already boot to the desktop, then great. Otherwise, you should be able to start the desktop manually with “startx” from the command line. If you built your Pi from a Raspbian Lite image, then you have no desktop, so you won’t be able to get a trial. Sorry. You could probably still install the full version using Eltech’s instructions, but you’ll need a license.

We’re going to install Exagear Desktop using apt-get, and we won’t even have to go messing around in the source files this time because it’s already in the public repositories. Open up a terminal window, and install Exagear Desktop.

sudo apt-get install exagear-desktop

When the installation is complete, you’re ready to start Exagear, but first, check your current processor architecture.

arch

The output should indicate that you are on an “ARM” architecture. The exact string will vary depending on the model of Raspberry Pi you’re using. I’m installing CrashPlan on a Model 2B that used to be my main server, but got demoted to being the CrashPi when the Model 3s came out, so my output says “armv7l”.

Now, start the virtualized x86 environment.

exagear

Since this is the first time we’ve run Exagear Desktop, we’ll get a pop-up asking for contact information, and for us to accept the EULA. Fill in your information, and click “Activate Trial”. After a couple of seconds you’ll see a confirmation dialog. Click “close” to dismiss it, and you’ll find yourself back at the terminal window again. Nothing looks any different, but you’re now operating in an x86 “guest” environment. Check the processor architecture to convince yourself this is really happening.

arch

It should say i686, which is a total lie, but exactly what we’re looking for. That’s it, we can run x86 Linux programs now. If you really want to get weird, you could even install Wine, and run an emulated Windows environment from your Raspberry Pi. Don’t expect any kind of performance though.

Install CrashPlan Prerequisites

There are a few prerequisites, but we need to make sure we are downloading the proper versions first. Since we’re in a virtualized x86 environment, the repositories that apt-get knows about right now are for a different architecture. Update your sources and install the prerequisites like this:

sudo apt-get update
sudo apt-get install lxrandr libgtk2.0-0 libxtst6 cpio

You can expect this to take a while. The prerequisites have their own prerequisites and so on. It’s prerequisites all the way down. If we were installing things directly on the Pi’s native Raspbian environment, a lot of these things would already be in place. So yeah, we’re going to end up with two copies of some stuff, but it’ll be worth it.

Install CrashPlan

We’re now ready to install CrashPlan, and the instructions are very much like what I originally wrote years ago, only without the hacking of Java files and the swapping of binaries. We’re down to a very clean install now.

Since you’re already in a desktop environment on the Pi, just go ahead and use the browser to download CrashPlan from https://www.crashplan.com/en-us/thankyou?os=linux. The downloaded file should end up in your Downloads folder on the Pi. While we’re here, we may as well make use of the convenient built-in archiver as well to extract the downloaded CrashPlan_?.?.?_Linux.tgz file. Right-click on the file and select “Extract Here”.

Return to your terminal window, which should still be emulating an x86 execution environment, and navigate to the new “crashplan-install” folder. From there, run the CrashPlan installation script.

Note: You need to do this from the emulated environment. If you have multiple Terminal windows open, make sure you’re in the right one with the “arch” command again before continuing.

cd ~/Downloads/crashplan-install/
sudo ./install.sh

Just like in the good old days, you should take the default answers for everything except the backup location. Make sure you point that somewhere on your external hard drive where you want the backups to go. In my case, that’s /mnt/data/backups/.

Let the CrashPlan installer complete, and it will ask if you want to run CrashPlan Desktop now. This part won’t work quite right, so answer “no”. There will also be a CrashPlan shortcut on your desktop. It won’t work either, so you may as well delete that. The only shortcut that’s going to work is the one that got created in the start menu, but don’t launch it just yet.

If you’re running this on a fresh Raspberry Pi, or at least one that isn’t trying to run the whole “Home Server” show, you may see a warning about how many files it is configured to watch in real time. We addressed this issue in the MiniDLNA post, but if your Pi isn’t running MiniDLNA, then you most likely didn’t configure the “watches” on it. If you’re only planning to use this machine as a backup destination, and it won’t be backing up its own contents somewhere, then you can ignore this warning, since CrashPlan won’t need to pay attention to local files anyway.

If you want to configure the Pi to watch more files at once, we’ll need to edit the sysctl.conf file. This virtualized environment doesn’t know what nano is though. I’ve gotten used to nano, and I’m sure it will come up again in the future, so I’m going to install it and use it to make the required changes.

sudo apt-get install nano
sudo nano /etc/sysctl.conf

At the bottom of the file, add a line that says

fs.inotify.max_user_watches = 65536

That should do it. Save the file and exit nano (ctrl-x, y, enter). Next, double check that the CrashPlan service is running.

sudo service crashplan status

You should see a sad, grey message that the service is inactive (dead), so I guess the installer didn’t get that part quite right either. It should have set the service up to run automatically though, and we’ll want to check that this is working, so all you should need to do is reboot. You don’t want to issue this command from the emulated command line, though. Either use the desktop menu to reboot the system, or exit the emulated x86 environment first.

exit
sudo reboot

When the system has restarted, get to a plain old terminal window (no need to run exagear again) and check the CrashPlan service.

sudo service crashplan status

It should be a happier green color now. Not only that, but if you read carefully, you’ll even notice the word “exagear” in the path to the service. Even though the service runs in the virtualized x86 environment, it can auto-start and be seen from the native Raspbian environment. Cool, right?

Now that the background CrashPlan service is running, let’s get it configured and attached to your CrashPlan account. Remember, you don’t have to pay for a subscription, but you do need to establish an account. That’s what allows all of your different machines to discover and talk to each other in order to coordinate things.

Start CrashPlan desktop using the main desktop menu, not the shortcut on the desktop, which you should just go ahead and delete if you didn’t do that already. You may want to go grab a snack or a drink or something. CrashPlan desktop is going to take a while to start up. Even in the original post on this topic, before we introduced any kind of emulation into the mix, this step took ages. Now, with the emulated x86 execution environment thrown into the mix, it takes even longer. Just keep looking at the CPU meter in the upper right corner of the desktop. As long as it looks busy, there’s still hope. Mine stayed at 30%-50% for a few minutes before I finally saw the CrashPlan splash screen.

After you’ve waited long enough, or left and come back, you should see the CrashPlan sign-up page.

You can refer back to the original article if you want, but I’ll summarize here. Either start a new account, or sign in to your existing account if you already have one. Given how slow the Pi can be sometimes, I’d establish an account using your desktop computer, and then choose “Existing Account” here, but that’s just me. Either way, there’s going to be another very long delay, so be prepared to wait a little while. Keep your eye on the CPU meter again if you want assurance that the system hasn’t gone to sleep.

In fact, just about everything having to do with the CrashPlan UI is going to be incredibly slow on the Pi. You’ll just have to get used to that. Fortunately, once CrashPlan is up and running, you won’t need the desktop application very often. When the sign-in page finally goes away, you should be able to set up backups, listen for incoming backups from other computers, and all the other things you’re used to.

So far, I’ve noticed that after rebooting the Pi, it isn’t always visible on the network right away. I’ve had to reboot my main computer before it has picked up on the CrashPi being available, but maybe I just wasn’t being patient enough.

Please make sure you give this a test drive using a trial installation before plunking down money on a license. I have not quite figured out the magic to getting the system to be totally stable yet. Sometimes kicking off the desktop application seems to wake things up, but not always. I imagine the CrashPlan engine just has a lot of housekeeping to do on startup, and it’s just not ready to listen for connections for a while after a reboot. The testing is continuing on this end.

I’ll be updating this post as time goes by to review how well the system is performing over time. Keep watching.

Update 1: It’s only been a few hours, and the CrashPi disappeared on me. The process seems to have stopped, so I rebooted the Pi, but not before making some tweaks. I overclocked the Pi (Model 2B) to 1000 MHz, and reduced the GPU memory to 16. That should free up some resources, I hope. Things may run better on a Pi 3. I’ll be experimenting with that in the near future. Also, this particular Pi does not boot from the hard drive, which means that its swap file is still on the SD card. I’ll be moving things to the hard drive next to see if that helps.
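
For reference, both of those tweaks live in /boot/config.txt; mine now has lines like these (raspi-config can make the same changes from its overclock and memory-split menus, and your Pi’s safe overclock settings may differ):

arm_freq=1000
gpu_mem=16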

Update 2: A few more hours, and I’m already wishing I hadn’t posted this so early. Initial tests looked good. It was slow to start, but it worked. After leaving it running for a few hours, I’m very disappointed with the stability, at least on the Pi 2. I’ve moved the SD card to a Pi 3 to see if it makes any difference, and so far it’s looking pretty good. Things seem to start up a bit faster. One thing to note is that the engine still takes some time to get up and running, so there’s still a good five to ten minutes after rebooting when the desktop application will keep complaining that it can’t reach the backup engine. Give it some time, tell it to retry, and eventually it will connect. On the Pi 3, my backup is running once again, so it may just be that the Pi 3 is the minimum viable configuration. Since I simply moved the card from one machine to the other, it’s not actually running the right version of ExaGear though. There is a separate version for the Pi 3, which may perform even better. I’ll be looking into upgrading that as soon as my backup has completed. I’m a bit out of date, and safety comes first.

Update 3: Everything was going pretty well. Then I walked away to go do some stuff around the house, came back, and it was all gone again. The CrashPlan service was “active (exited)”, so I restarted the service (sudo service crashplan restart), and waited to see if the backup would resume on its own. 10 minutes went by with no connection, so I started the desktop application to see what was going on. 20 minutes later, still no connection, but the service is still up and running, and the desktop application on the Pi says it’s listening for inbound connections. I’m going to try something I wasn’t really expecting, but would not surprise me. I’m trying a better power supply next. Maybe it’s something simple like that. This Pi happens to have a screen on it (the official 7″ screen), and maybe that’s pulling more power than this supply would like. I have seen the lightning bolt in the corner, after all. But first, another round of update/upgrade just in case the move to the Pi 3 requires an update to ExaGear.

Update 4: Everything was going so well. And then it wasn’t. Rebooting both my laptop and the Pi doesn’t get me anywhere. They just don’t seem to want to talk anymore. I’ve changed to a beefier power supply, tried to update/upgrade everything, and there’s just nothing happening. The next thing to try is to boot from the hard drive and increase the size of the swap file. That may free up enough resources, and stop hammering my SD card long enough to get somewhere. That may happen tomorrow. I’ll post about it when it happens.

Update 5: I went ahead and changed it to boot from the hard drive tonight, and expanded the swap file to 2GB. It didn’t look like it had any effect, but once again, after sitting there for a good long time, it was suddenly ready to communicate. It’s been up for 18 minutes straight so far, backing up my stuff. I’m trying not to jinx it, but it might work after all. Not very fast, and not always when you’re watching, but eventually, silently in the background, it might just back some stuff up.

Update 6: Well, that’s it for this weekend. Up next is a clean install following Eltech’s instructions to the letter. Maybe it’ll make a difference. We’ll find out later this week, assuming I find the free time.

Update 7: It took a little longer than I’d hoped to make some sort of reportable progress. There’s a new version of ExaGear Desktop out, and supposedly people have had greater success using v2.2 than previous versions. I thought I’d give it a try, and last night I updated the version on the CrashPi to 2.2. It does seem to work better, but it’s still not what I’d call stable. I was able to get CrashPlan working again, and left it going overnight. This morning, the service was still up and running, but the desktop application had died. When I restart it, it says “Unable to connect to the backup engine, retry?”, even though the service still seems to be running.

I have rebooted the CrashPi, and after giving it plenty of time to settle, eventually got the desktop application to connect. It looks like the backup of the CrashPi itself completed, so I’m going to try reconfiguring the CrashPi to boot to the command line in order to conserve resources, and try using it as a backup destination for another computer with more stuff on it.
