Saturday, July 10, 2021

Migrating from a static volume to a storage pool in QNAP

I bought a QNAP TS-451+ NAS a number of years ago.  At the time, you could only set up what are now called "static volumes": volumes composed of a number of disks in some RAID configuration.  A later firmware update introduced "storage pools", which act as a layer between the RAIDed disks and the volumes on top of them.  Storage pools can do snapshots and some other fancy things, but the important thing here is that QNAP now steers you toward storage pools, and I had a static volume.

I wanted to migrate from my old static volume to a new storage pool.  I couldn't find any examples of anyone who had performed such a migration successfully; most of the advice on the Internet boiled down to "back up your stuff and reformat".  Given that my volume was almost full and that QNAP does not support an in-place migration, I figured that if I added some extra storage in the form of an expansion unit, I could pull it off with minimal hassle.

(The official QNAP docs generally agree with this.)

tl;dr It was pretty easy to do, just a bit time-consuming.  I'll also note that this was a lossless process (other than my NFS permissions); I didn't have to reinstall anything or restore any backups.

Here's the general workflow:

  1. Attach the expansion unit.
  2. Add the new disks to the expansion unit.
  3. Create a new storage pool on the expansion unit.
  4. Transfer each folder in the original volume to a new folder on the expansion unit.
  5. Write down the NFS settings for the original volume's folders.
  6. Delete the original volume.
  7. Create a new storage pool with the original disks.
  8. Create a new system volume on the main storage pool.
  9. Create new volumes as desired on the main storage pool.
  10. Transfer each folder from the expansion volume to the main volume.
  11. Re-apply the NFS settings on the folders on the main storage pool's volumes.
  12. Detach the expansion unit.

Some details follow.

QNAP sells expansion units that can host additional storage pools and volumes, and the QNAP OS integrates them pretty well.  I purchased a TR-004 and connected it to my TS-451+ NAS via USB.  I had some new drives that I was planning to use to replace the drives currently in the NAS, so instead of swapping them in right away, I put them all in the expansion unit and created a new storage pool there (let's call this the expansion storage pool).

I originally tried using File Station to copy and paste all of my folders to a new volume on the expansion unit, but I kept getting permission-related errors, and I didn't want to deal with individual files when there were millions to transfer.  QNAP has an application called Hybrid Backup Sync, and one of its features is a one-way sync "job" that properly copies everything from a folder on one volume to a folder on another (preserving all the file attributes, etc.).  So I created new top-level folders on the expansion volume and used Hybrid Backup Sync to copy all of my data from the main volume to the expansion volume.

For more information on how to use Hybrid Backup Sync to do this, see this article from QNAP.

(If you're coming from a static volume and you've set up a storage pool on the expansion unit, QNAP does have a feature that transfers a folder on a static volume to a new volume in a storage pool.  But it only works in that one direction, from static volume to storage pool; you can't use it later to transfer between storage pools.)

I then wrote down the NFS settings that I had for my folders on the main unit (it's pretty simple, but I did have some owner and whitelist configuration).
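For reference, my notes looked something like this.  This is standard NFS exports syntax, and the share paths and hosts are made-up examples (the QNAP GUI manages the real configuration for you):

/share/CACHEDEV1_DATA/Multimedia  192.168.1.0/24(rw,async,no_root_squash)
/share/CACHEDEV1_DATA/homes       192.168.1.10(rw,async,root_squash)

Whatever form your notes take, the important things to capture for each folder are the allowed hosts (the whitelist) and the squash/owner options.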

Once everything of mine was on the expansion volume, I deleted the main (system) volume.  QNAP was okay with this and didn't complain at all.  Some sites I had read claimed that you'd have to reboot or reformat if you did this, but at least on a modern QNAP OS, it's fine with you deleting its system volume.

For more information on deleting a volume, see this article from QNAP.

I created a new storage pool with the main unit's existing disks, and then I created a small, thin volume on it to see what would happen.  QNAP quickly decided that this new volume would be the new "system" volume, installed some applications on its own, and then it was done.  My guess is that it set up whatever base configuration it needs to operate on that new volume and perhaps migrated over the few applications that I already had.

(I then rebooted the QNAP just to make sure that everything was working, and it ended up being fine.)

On the expansion unit, I renamed all of the top-level folders to end with "_expansion" so that I'd be able to tell them apart from the ones that I would make on the main unit.

Then I used Hybrid Backup Sync to copy my folders from the expansion volume to the main volume.  Once that was done, I modified the NFS settings on the main volume's folders to match what they had been originally.

I tested the connections from all my machines that use the NAS, and then I detached and powered down the expansion unit.  I restarted the NAS and tested the connections again, and everything was perfect.  Now I had a storage pool with thin-provisioned volumes instead of a single, massive static volume.

Monday, July 5, 2021

Working around App Engine's bogus file modification times in Go

When an App Engine application is deployed, the files on the filesystem have their modification times "zeroed"; in this case, they are set to Tuesday, January 1, 1980 at 00:00:01 GMT (with a Unix timestamp of "315532801").  Oddly enough, this isn't January 1, 1970 (with a Unix timestamp of "0"); they're adding 10 years and 1 second.  January 1, 1980 is also the FAT/ZIP timestamp epoch, and the extra second is probably there to avoid actually zeroing out the date.
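If you want to double-check that math, a few lines of Go will confirm it:

package main

import (
	"fmt"
	"time"
)

func main() {
	// App Engine's "zeroed" modification time, as a Unix timestamp.
	fmt.Println(time.Unix(315532801, 0).UTC()) // 1980-01-01 00:00:01 +0000 UTC
}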

If you found your way here by troubleshooting, you may have seen this for your "Last-Modified" header:

last-modified: Tue, 01 Jan 1980 00:00:01 GMT

There's an issue for this particular problem (currently they're saying that it's working as designed); to follow the issue or make a comment, see issue 168399701.

For App Engine in Go, I've historically bypassed the static-files stuff and just had my application serve the files with "http.FileServer", and I've disabled caching everywhere to play it safe ("Cache-Control: no-cache, no-store, must-revalidate").  Recently, I've begun experimenting with a "max-age" of 1 minute, aligned on 1-minute boundaries so that every response within a given minute shares the same expiration time; that way I get a bit of help from the GCP proxy and its caching powers without letting stale copies of my files linger all over the Internet.

This caused me a huge headache recently when my web application wasn't updating in production, even though the new version had been deployed for over 24 hours.  It turned out that the browser (Chrome) was including the "If-Modified-Since" header in its requests, and my application was responding with 304 Not Modified.  No matter how many times my service worker tried to fetch the new data, the server kept telling it that what it had was perfect.
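Here's a self-contained Go sketch of the trap (the file name and contents are placeholders); it recreates the 1980 modification time locally and shows "http.FileServer" answering 304 to a revalidation request:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"os"
	"path/filepath"
	"time"
)

func main() {
	// Recreate App Engine's situation: a freshly deployed file whose
	// modification time is stuck at January 1, 1980.
	dir, err := os.MkdirTemp("", "demo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	path := filepath.Join(dir, "app.js")
	if err := os.WriteFile(path, []byte("console.log('new version');"), 0o644); err != nil {
		panic(err)
	}
	mtime := time.Unix(315532801, 0)
	if err := os.Chtimes(path, mtime, mtime); err != nil {
		panic(err)
	}

	srv := httptest.NewServer(http.FileServer(http.Dir(dir)))
	defer srv.Close()

	// A client that cached the old version revalidates with If-Modified-Since.
	req, _ := http.NewRequest("GET", srv.URL+"/app.js", nil)
	req.Header.Set("If-Modified-Since", "Tue, 01 Jan 1980 00:00:01 GMT")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println(resp.StatusCode) // 304; the server insists nothing has changed
}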

The default HTTP file server in some languages lets you tweak how it responds ("ETag", "Last-Modified", etc.), but not in Go: "http.FileServer" has no configuration options at all.

What I ended up doing was wrapping "http.FileServer"'s "ServeHTTP" in another function; this function had two main goals:

  1. Set up a weak ETag value using the expiration date (ideally, I'd use a strong value like the MD5 sum of the contents, but I didn't want to have to rewrite "http.FileServer" just for this).
  2. Remove the request headers related to the modification time ("If-Modified-Since" and "If-Unmodified-Since").  "http.FileServer" definitely respects "If-Modified-Since", and because the modification time is bogus in App Engine, I figured that just quietly removing any headers related to that would keep things simple.

Here's what I ended up with:

staticHandler := http.StripPrefix("/", http.FileServer(http.Dir("/path/to/my/files")))

myHandler.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
	// Cache all the static files aligned at the 1-minute boundary.
	expirationTime := time.Now().Truncate(1 * time.Minute).Add(1 * time.Minute)
	w.Header().Set("Cache-Control", fmt.Sprintf("public, max-age=%0.0f, must-revalidate", time.Until(expirationTime).Seconds()))
	// The ETag is weak ("W/" prefix) because it'll be the same tag for all encodings.
	w.Header().Set("ETag", fmt.Sprintf("W/\"exp_%d\"", expirationTime.Unix()))

	// Strip the request headers that `http.FileServer` would otherwise honor
	// based on modification time.  App Engine sets all of the timestamps to
	// January 1, 1980.
	r.Header.Del("If-Modified-Since")
	r.Header.Del("If-Unmodified-Since")

	staticHandler.ServeHTTP(w, r)
})
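
For completeness, here's a rough sketch of the strong-ETag approach I passed on; it's not what I deployed, and it assumes the static files fit in memory and that URL paths map directly onto the directory layout.  It turns out you don't have to rewrite "http.FileServer" for this: it checks "If-None-Match" against whatever "ETag" header you've already set, so precomputing the sums is enough.

package main

import (
	"crypto/md5"
	"fmt"
	"io/fs"
	"net/http"
	"os"
	"path/filepath"
)

// computeETags walks root and returns a strong ETag (the quoted MD5 sum of
// each file's contents) keyed by URL path.
func computeETags(root string) (map[string]string, error) {
	etags := make(map[string]string)
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		data, err := os.ReadFile(p)
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(root, p)
		if err != nil {
			return err
		}
		etags["/"+filepath.ToSlash(rel)] = fmt.Sprintf("\"%x\"", md5.Sum(data))
		return nil
	})
	return etags, err
}

func main() {
	root := "/path/to/my/files"
	etags, err := computeETags(root)
	if err != nil {
		panic(err)
	}

	staticHandler := http.StripPrefix("/", http.FileServer(http.Dir(root)))
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if tag, ok := etags[r.URL.Path]; ok {
			w.Header().Set("ETag", tag) // strong: the tag changes whenever the bytes change
		}
		staticHandler.ServeHTTP(w, r)
	})
	http.ListenAndServe(":8080", nil)
}

One caveat (and the reason my weak tag has the "W/" prefix): a strong ETag is supposed to change when the representation changes, and the GCP proxy may compress responses on the way out, so a contents-based tag like this one is technically too strong for that setup.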

Anyway, I fought with this for two days before finally realizing what was going on, so hopefully this will let you work around App Engine's bogus file-modification times.