Friday, March 17, 2017

Shrinking a disk to migrate to a smaller SSD

I have finally migrated all of my storage to solid state drives (SSDs).  For my older machines, the 256GB SSDs were bigger than their normal hard drives, so Clonezilla worked perfectly well.  The very last holdout was my 2TB Proxmox server, since my largest SSD was 480GB.

Since I was only using about 120GB of space on that disk, my plan was to shrink the filesystems, then shrink the partitions, and then clone the disk with Clonezilla.

tl;dr both Clonezilla and grub can be jerks.

Even after shrinking everything down, Clonezilla refused to clone the disk because the SSD was smaller than the original (even though all of the partitions would fit on the SSD with plenty of room to spare).  Clonezilla did let me clone a partition at a time, so I did that, but the disk wouldn't boot.  To make a long story short, you need to make sure that you run "update-grub" and "grub-install" after pulling a disk-switcheroo stunt.  And yes, to do that, you'll need to use a live OS to chroot in (and yes, you'll need to mount "/dev", etc.) and run those commands.

Just that paragraph above would have saved me about six hours of time, so there it is, for the next person who is about to embark on a foolish and painful journey.

Shrink a disk

My Proxmox host came with a 2TB spinner disk, and I foolishly decided to use it instead of putting in a 256GB SSD on day one.  I didn't feel like reinstalling Proxmox and setting all of my stuff up again, and I certainly didn't want to find out that I didn't back up some crucial file or couldn't remember the proper config; I just wanted my Proxmox disk migrated over to SSD, no drama.

Remember, you aren't actually shrinking the disk itself; rather, you're reducing the allocated space on that disk by shrinking your filesystems and partitions (and, if you're using LVM, your logical volumes and physical volumes, too).

So, the goal is: shrink all of your partitions down to a size that they can be copied over to a smaller disk.
Remember, you don't have to shrink all of your partitions, only the big ones.  In my case, I had a 1MB partition, a 500MB partition, and a 1.9TB partition.  Obviously, I left the first two alone and shrank the third one.

First: Shrink whatever's on the partition

How easy this step is depends on whether you're using LVM.  If you are not using LVM and just have a normal filesystem on the partition, then all you need to do is resize the filesystem to a smaller size (obviously, the space in use must be less than the new size).

If you are using LVM, then your partition is full of LVM structures: the partition acts as a physical volume (PV), which holds all of the data for the logical volumes (LVs) that reside on it.
Note: you cannot mess with a currently-mounted filesystem.  If you need to resize your root filesystem, then you'll have to boot into a live OS and do it from there.
(From here on, we'll use "sda" as the original disk and "sdb" as the new SSD.)

 Option 1: Filesystem

Shrink the filesystem using the appropriate tool for your filesystem.  For example, let's assume that we want to shrink "/dev/sda3" down to 50GB.  You could do this with an EXT4 filesystem (which must be unmounted; "resize2fs" will also insist on a clean "e2fsck -f" check first) via:
e2fsck -f /dev/sda3
resize2fs -p /dev/sda3 50G

Option 2: LVM

Shrink whatever filesystems are on the logical volumes that are on the physical volume that you want to shrink (again, each filesystem must be unmounted and freshly checked with "e2fsck -f").  For example, let's assume that we want to shrink the "your-lv" volume down to 50GB.
e2fsck -f /dev/your-vg/your-lv
resize2fs -p /dev/your-vg/your-lv 50G

Then, shrink the logical volume to match:
lvresize /dev/your-vg/your-lv --size 50G

Now that the logical volume has been shrunk, you should be able to see free space in its physical volume.  Run "pvs" and look at the "PFree" column; that's how much free space you have.

Now shrink the physical volume.  Note that the new size has to be big enough to hold all of the logical volumes plus a bit of LVM metadata, so leave some headroom (here, 60G for a 50G logical volume):
pvresize /dev/sda3 --setphysicalvolumesize 60G

Second: Shrink the partition

Be very, very sure of what you want the new size of your partition to be.  Very sure.  The surest that you've ever been in your life.

Then, use "parted" or "fdisk" to (1) delete the partition, and then (2) add a new partition that's smaller.  The new partition must start on the exact same spot that the old one did.  Make sure that the partition type is set appropriately ("Linux" or "Linux LVM", etc.).  Basically, the new one should look exactly like the old one, but with the ending position smaller.
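If you want to practice the delete-and-recreate dance safely first, you can run it against a scratch image file instead of a real disk.  This sketch uses "parted" with made-up sizes; on the real disk you'd point "parted" at "/dev/sda" and also restore any flags (e.g. "set 3 lvm on" for an LVM partition):

```shell
# Practice on a scratch image file; nothing real is touched.
truncate -s 100M /tmp/practice.img
parted -s /tmp/practice.img mklabel gpt
parted -s /tmp/practice.img mkpart primary ext4 1MiB 90MiB

# Note the exact Start value before deleting the partition.
parted -s /tmp/practice.img unit MiB print

# Delete, then recreate with the SAME start but a smaller end.
parted -s /tmp/practice.img rm 1
parted -s /tmp/practice.img mkpart primary ext4 1MiB 50MiB
parted -s /tmp/practice.img unit MiB print
```

The second "print" should show the same Start but the new, smaller End.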

Clone a disk

Use Clonezilla (or any other live OS, for that matter) to clone the partitions to the new disk.  With Clonezilla, I still had to make the partitions on the new disk myself, so while you're there, you might as well just "dd" the contents of each partition over.

Create the new partitions and copy the data

On the new disk, create partitions that will correspond with those on the old disk.  You can create them with the same size, or bigger.  I created my tiny partitions with the same sizes (1MB and 500MB), but I created my third partition to take up the rest of the SSD.

Copy the data over using whatever tool makes you happy; I like "dd" because it's easy.  Be very, very careful about which disk is the source disk (input file, or "if") and which is the target disk (output file, or "of").  For example:
dd if=/dev/sda1 of=/dev/sdb1 bs=20M
dd if=/dev/sda2 of=/dev/sdb2 bs=20M
dd if=/dev/sda3 of=/dev/sdb3 bs=20M

If you create larger partitions on the new disk, everything will still work just fine; the filesystems or LVM physical volumes will simply keep their original sizes.  Once everything is copied over, you can simply expand the filesystem (or, for LVM, expand the physical volume, then the logical volume, and then the filesystem) to take advantage of the larger partition.

Remove the original disk

Power off and remove the original disk; you're done with it anyway.

If you're using LVM, that original disk will only get in the way: LVM will see duplicate physical volumes and volume groups (double the number of everything) and won't set up your volumes correctly.

(From here on, we'll use "sda" as the new SSD since we've removed the original disk.)

Bonus: expand your filesystems

Now that everything's copied over to the new disk, you can expand your filesystems to take up the full size of the partition.  You may have to run "fsck" before it'll let you expand the filesystem.

For a normal filesystem, simply use the appropriate tool for your filesystem.  For example, let's assume that we want to grow "/dev/sda3" to take up whatever extra space there might be.  You could do this with an EXT4 filesystem via:
resize2fs /dev/sda3

In the case of LVM, you'll have to expand the physical volume first, then the logical volume, and then the filesystem (essentially, this is the reverse of the process to shrink them).  For example:
pvresize /dev/sda3
lvresize -l +100%FREE /dev/your-vg/your-lv
resize2fs /dev/your-vg/your-lv
Note: the "-l +100%FREE" arguments to "lvresize" tell it to expand to include all ("100%") of the free space available on the physical volume.

Fix grub

I still don't fully grok the changes that UEFI brought to the bootloader, but I do know this: after you pull a disk-switcheroo, grub needs to be updated and reinstalled.

So, using your live OS, mount the root filesystem from your new disk (and any other dependent filesystem, such as "/boot" and "/boot/efi").  Then create bind mounts for "/dev", "/dev/pts", "/sys", "/proc", and "/run", since those are required for grub to install properly.
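As a sketch of those mounts, assuming the new disk's root filesystem is on "/dev/sda3" and "/boot" is on "/dev/sda2" (adjust the device names to your layout; for an LVM root, activate the volume group with "vgchange -ay" and mount the logical volume instead):

```shell
# From the live OS: mount the new root (and /boot, if separate)...
mount /dev/sda3 /mnt
mount /dev/sda2 /mnt/boot      # only if /boot is a separate partition

# ...then bind-mount the virtual filesystems that grub needs.
for fs in /dev /dev/pts /sys /proc /run; do
   mount --bind "${fs}" "/mnt${fs}";
done
```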

Finally, chroot into the root filesystem and run "update-grub" followed by "grub-install" (giving it the path to the new disk):
update-grub
grub-install /dev/sda

Conclusion

That should do it.  Your new disk should boot properly.

For a decent step-by-step guide on shrinking an LVM physical volume (but with the intent of creating another partition), see this article from centoshelp.org.

Live-updating a Java AppEngine project's HTML with Maven

I recently switched all of my Java AppEngine projects to use Maven once Google updated its Eclipse plugin to support the Google Cloud SDK (instead of the AppEngine SDK); see the getting-started with Maven guide.  The Google Cloud SDK is much more convenient for me because it gets regular updates via "apt-get" as the "google-cloud-sdk" package, plus it has a nice CLI for working with an app (great for emergency fixes via SSH).

Maven is relatively new to me; I tried it years ago, but it didn't play well with the AppEngine plugin for Eclipse, so I gave up.  Now that it's fully supported, I am very happy with it.  tl;dr is that it's the Java compiler and toolkit that you always thought existed, but didn't.

There is exactly one problem with the new Eclipse plugin for AppEngine/Maven support: the test server doesn't respect any of the plugin settings in my "pom.xml" file.  This means that it won't run on the port that I tell it to run on, etc.
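For reference, those settings live in the plugin's "configuration" block in "pom.xml"; something like this (the "port" parameter and the version number here are illustrative; check the appengine-maven-plugin documentation for your version):

```xml
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>appengine-maven-plugin</artifactId>
  <version>1.2.1</version>
  <configuration>
    <!-- Honored by "mvn appengine:run"; ignored by the Eclipse test server. -->
    <port>8081</port>
  </configuration>
</plugin>
```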

That's not a huge deal because I can just run "mvn appengine:run" to run it perfectly via the command line (and I love my command line).
Aside: the command is actually slightly more involved since AppEngine needs Java 7 (as opposed to Java 9, which is what my desktop runs).
JAVA_HOME=/usr/lib/jvm/java-7-oracle/ mvn appengine:run
This worked perfectly while I was editing my Java source files.  The default "pom.xml" has a line in it that tells Eclipse to compile its Java code to the appropriate Maven target directory, which means that the development server will reload itself whenever the Java source code changes.
In case you're wondering, that line is:
<!-- for hot reload of the web application -->
<outputDirectory>${project.build.directory}/${project.build.finalName}/WEB-INF/classes</outputDirectory>
However, when I later had to update the HTML portion of my app, my changes weren't being applied.  I'd change a file, reload a million times, and nothing would change.  Restarting the development server would pick up my changes, but that seemed crazy to me: why would the development server support a hot reload for Java source code changes but not for trivial static files (such as the HTML files)?

I spent hours and hours troubleshooting this very thing, and I finally learned how this whole Maven thing works:
  1. Maven compiles the application (including the Java code) to a "target".
  2. For development purposes, the target is a directory that can be scanned for changes; for production purposes, the target is a "jar" or "war" file, for example.
  3. That "outputDirectory" tag in "pom.xml" tells Eclipse to compile its Java changes to the appropriate target directory, allowing a running development server to hot reload the Java changes.
  4. When the target directory is built, all of the static files are copied into it once.  When you edit such a file in Eclipse, you're editing the original file in the "src/main/webapp" folder, not the copy in the target folder that the development server is running from.

To solve the problem (that is, to add hot reload for static files), you simply need to mirror whatever change you made in the original folder to the copy in the target folder.

I spent too many hours trying to figure out how to do this in Eclipse and eventually settled on a short bash script that continually monitors the source directory using "inotifywait" for changes and then mirrors those changes in the target directory (as I said, I love my command line).

My solution is now to run "mvn appengine:run" in one terminal and this script in another (it uses "pom.xml" to figure out the target directory):

#!/bin/bash

declare -A commands;
commands[inotifywait]=inotify-tools;
commands[xmlstarlet]=xmlstarlet;

for command in "${!commands[@]}"; do
   if ! which "${command}" &>/dev/null; then
      echo "Could not find command: ${command}";
      echo "Please install it by running:";
      echo "   sudo apt-get install ${commands[$command]}";
      exit 1;
   fi;
done;

artifactId=$( xmlstarlet sel -N my=http://maven.apache.org/POM/4.0.0 --template --value-of /my:project/my:artifactId pom.xml );
version=$( xmlstarlet sel -N my=http://maven.apache.org/POM/4.0.0 --template --value-of /my:project/my:version pom.xml );

sourceDirectory=src/main/webapp/;
sourceDirectoryLength="${#sourceDirectory}";
targetDirectory=target/"${artifactId}-${version}"/;

echo "Source directory: ${sourceDirectory}";
echo "Target directory: ${targetDirectory}";

inotifywait -m -r "${sourceDirectory}" -e modify -e moved_to -e moved_from -e create -e delete 2>/dev/null | while read -r -a line; do
   echo "Line: ${line[@]}";
   fullPath="${line[0]}";
   operation="${line[1]}";
   filename="${line[2]}";
   
   path="${fullPath:$sourceDirectoryLength}";

   echo "Operation: ${operation}";
   echo "   ${path} :: ${filename}";

   case "${operation}" in
      CREATE*|MODIFY*|MOVED_TO*)
         cp -v -a -f "${fullPath}${filename}" "${targetDirectory}${path}${filename}";
         ;;
      DELETE*|MOVED_FROM*)
         rm -v -r -f "${targetDirectory}${path}${filename}";
         ;;
      *)
         echo "Unhandled operation: ${operation}";
         ;;
   esac;
done;