Monday, October 16, 2017

Hidden dependencies with Gerrit and Bazel

This is going to be very short.

I was trying to build the "go-import" plugin for Gerrit (because apparently no one wants to build it and publish it), and I spent the past four hours trying to get "bazel build ..." to actually run.

If you're getting errors like these:
root@25f962af2a00:/gerrit# bazel fetch //...
ERROR: /gerrit/lib/jetty/BUILD:1:1: no such package '@jetty_servlet//jar': Argument 0 of execute is neither a path nor a string. and referenced by '//lib/jetty:servlet'
ERROR: /gerrit/lib/codemirror/BUILD:11:1: no such package '@codemirror_minified//jar': Argument 0 of execute is neither a path nor a string. and referenced by '//lib/codemirror:mode_q_r'
ERROR: /gerrit/lib/codemirror/BUILD:11:1: no such package '@codemirror_minified//jar': Argument 0 of execute is neither a path nor a string. and referenced by '//lib/codemirror:theme_solarized_r'
ERROR: /gerrit/lib/codemirror/BUILD:11:1: no such package '@codemirror_original//jar': Argument 0 of execute is neither a path nor a string. and referenced by '//lib/codemirror:theme_elegant'
ERROR: Evaluation of query "deps(//...)" failed: errors were encountered while computing transitive closure
Building: no action running

root@25f962af2a00:/gerrit# bazel build --verbose_failures plugins/go-import:go-import

ERROR: /gerrit/lib/BUILD:176:1: no such package '@jsr305//jar': Argument 0 of execute is neither a path nor a string. and referenced by '//lib:jsr305'
ERROR: Analysis of target '//plugins/go-import:go-import' failed; build aborted

Then you probably also need these packages:

  1. nodejs
  2. npm
  3. zip
I was building in a Docker container, and it didn't have any of those.
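In my case (a Debian/Ubuntu-based container), installing them looked something like this; adjust the package names for your distribution:
apt-get update && apt-get install -y nodejs npm zip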

The "Argument 0 of execute" line appears to be referring to some internal call to "execute" that looks something like "execute("npm",...)" or "execute("zip",...").

Anyway, that instantly fixed my problem.

Friday, March 17, 2017

Shrinking a disk to migrate to a smaller SSD

I have finally migrated all of my storage to solid state drives (SSDs).  For my older machines, the 256GB SSDs were bigger than their normal hard drives, so Clonezilla worked perfectly well.  The very last holdout was my 2TB Proxmox server, since my largest SSD was 480GB.

Since I was only using about 120GB of space on that disk, my plan was to shrink the filesystems, then shrink the partitions, and then clone the disk with Clonezilla.

tl;dr both Clonezilla and grub can be jerks.

Even after shrinking everything down, Clonezilla refused to clone the disk because the SSD was smaller than the original (even though all of the partitions would fit on the SSD with plenty of room to spare).  Clonezilla did let me clone a partition at a time, so I did that, but the disk wouldn't boot.  To make a long story short, you need to make sure that you run "update-grub" and "grub-install" after pulling a disk-switcheroo stunt.  And yes, to do that, you'll need to use a live OS to chroot in (and yes, you'll need to mount "/dev", etc.) and run those commands.

Just that paragraph above would have saved me about six hours of time, so there it is, for the next person who is about to embark on a foolish and painful journey.

Shrink a disk

My Proxmox host came with a 2TB spinner disk, and I foolishly decided to use it instead of putting in a 256GB SSD on day one.  I didn't feel like reinstalling Proxmox and setting all of my stuff up again, and I certainly didn't want to find out that I didn't back up some crucial file or couldn't remember the proper config; I just wanted my Proxmox disk migrated over to SSD, no drama.

Remember, you aren't actually shrinking the disk itself; rather, you're reducing the allocated space on that disk by shrinking your filesystems and partitions (and, if you're using LVM, your logical volumes and physical volumes, too).

So, the goal is: shrink your partitions down so that they can be copied over to a smaller disk.
Remember, you don't have to shrink all of your partitions, only the big ones.  In my case, I had a 1MB partition, a 500MB partition, and a 1.9TB partition.  Obviously, I left the first two alone and shrunk the third one.

First: Shrink whatever's on the partition

How easy this step is depends on whether you're using LVM.  If you are not using LVM and just have a normal filesystem on the partition, then all you need to do is resize the filesystem to a smaller size (obviously, the space in use must be less than the new size).

If you are using LVM, then your partition is full of LVM structures: the partition is the physical volume (PV), and it holds all of the data needed to make up the logical volumes (LVs) that reside on it.
Note: you cannot mess with a currently-mounted filesystem.  If you need to resize your root filesystem, then you'll have to boot into a live OS and do it from there.
(From here on, we'll use "sda" as the original disk and "sdb" as the new SSD.)

 Option 1: Filesystem

Shrink the filesystem using the appropriate tool for your filesystem.  For example, let's assume that we want to shrink "/dev/sda3" down to 50GB.  You could do this with an EXT4 filesystem via:
resize2fs -p /dev/sda3 50G
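Note that "resize2fs" will typically refuse to shrink a filesystem until it has been checked, so you may need to run a forced check first (the same applies to the LVM case below):
e2fsck -f /dev/sda3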

Option 2: LVM

Shrink whatever filesystems are on the logical volumes that are on the physical volume that you want to shrink.  For example, let's assume that we want to shrink the "your-lv" volume down to 50GB.
resize2fs -p /dev/your-vg/your-lv 50G

Then, shrink the logical volume to match:
lvresize /dev/your-vg/your-lv --size 50G

Now that the logical volume has been shrunk, you should be able to see free space in its physical volume.  Run "pvs" and look at the "PFree" column; that's how much free space you have.
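For example (plain "pvs" is enough; the extra columns just make the relevant numbers explicit):
pvs -o pv_name,vg_name,pv_size,pv_free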

Now shrink the physical volume:
pvresize /dev/sda3 --setphysicalvolumesize 50G

Second: Shrink the partition

Be very, very sure of what you want the new size of your partition to be.  Very sure.  The surest that you've ever been in your life.

Then, use "parted" or "fdisk" to (1) delete the partition, and then (2) add a new partition that's smaller.  The new partition must start on the exact same spot that the old one did.  Make sure that the partition type is set appropriately ("Linux" or "Linux LVM", etc.).  Basically, the new one should look exactly like the old one, but with the ending position smaller.

Clone a disk

Use Clonezilla (or any other live OS, for that matter) to clone the partitions to the new disk.  With Clonezilla, I still had to make the partitions on the new disk myself, so while you're there, you might as well just "dd" the contents of each partition over.

Create the new partitions and copy the data

On the new disk, create partitions that will correspond with those on the old disk.  You can create them with the same size, or bigger.  I created my tiny partitions with the same sizes (1MB and 500MB), but I created my third partition to take up the rest of the SSD.
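For example, a rough sketch with "parted", assuming a GPT label and my 1MB / 500MB layout (mirror whatever your old disk actually has, including any partition types and flags):
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 1MiB 2MiB
parted /dev/sdb mkpart primary 2MiB 502MiB
parted /dev/sdb mkpart primary 502MiB 100%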

Copy the data over using whatever tool makes you happy; I like "dd" because it's easy.  Be very, very careful about which disk is the source disk (input file, or "if") and which is the target disk (output file, or "of").  For example:
dd if=/dev/sda1 of=/dev/sdb1 bs=20M
dd if=/dev/sda2 of=/dev/sdb2 bs=20M
dd if=/dev/sda3 of=/dev/sdb3 bs=20M
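Afterward, a quick sanity check that the new disk's partitions show the expected filesystems (this isn't a full verification, just a quick look):
lsblk -f /dev/sdb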

If you create larger partitions on the new disk, everything will still work just fine; the filesystems or LVM physical volumes simply will have the original sizes.  Once everything is copied over, you can simply expand the filesystem (or, for LVM, expand the physical volume, then the logical volume, and then the filesystem) to take advantage of the larger partition.

Remove the original disk

Power off and remove the original disk; you're done with it anyway.

If you're using LVM, that original disk will only get in the way: LVM will get confused by seeing two copies of everything and won't set up your volumes correctly.

(From here on, we'll use "sda" as the new SSD since we've removed the original disk.)

Bonus: expand your filesystems

Now that everything's copied over to the new disk, you can expand your filesystems to take up the full size of the partition.  You may have to run "fsck" before it'll let you expand the filesystem.

For a normal filesystem, simply use the appropriate tool for your filesystem.  For example, let's assume that we want to grow "/dev/sda3" to take up whatever extra space there might be.  You could do this with an EXT4 filesystem via:
resize2fs /dev/sda3

In the case of LVM, you'll have to expand the physical volume first, then the logical volume, and then the filesystem (essentially, this is the reverse of the process to shrink them).  For example:
pvresize /dev/sda3
lvresize -l +100%FREE /dev/your-vg/your-lv
resize2fs /dev/your-vg/your-lv
Note: the "-l +100%FREE" arguments to "lvresize" tell it to expand to include all ("100%") of the free space available on the physical volume.

Fix grub

I still don't fully grok the changes that UEFI brought to the bootloader, but I do know this: after you pull a disk-switcheroo, grub needs to be updated and reinstalled.

So, using your live OS, mount the root filesystem from your new disk (and any other dependent filesystems, such as "/boot" and "/boot/efi").  Then create bind mounts for "/dev", "/dev/pts", "/sys", "/proc", and "/run", since those are required for grub to install properly.
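As a rough sketch, assuming the new root filesystem is "/dev/sda3" (or your LVM logical volume) and you're mounting everything under "/mnt":
mount /dev/sda3 /mnt
mount /dev/sda2 /mnt/boot        # only if you have a separate /boot (and likewise /boot/efi)
for fs in dev dev/pts sys proc run; do mount --bind "/${fs}" "/mnt/${fs}"; done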

Finally, chroot in to the root filesystem and run "update-grub" followed by "grub-install" (and give it the path to the new disk):
chroot /mnt
update-grub
grub-install /dev/sda

Conclusion

That should do it.  Your new disk should boot properly.

For a decent step-by-step guide on shrinking an LVM physical volume (but with the intent of creating another partition), see this article from centoshelp.org.

Live-updating a Java AppEngine project's HTML with Maven

I recently switched all of my Java AppEngine projects to use Maven once Google updated its Eclipse plugin to support the Google Cloud SDK (instead of the AppEngine SDK); see the getting-started with Maven guide.  The Google Cloud SDK is much more convenient for me because it gets regular updates via "apt-get" as the "google-cloud-sdk" package, plus it has a nice CLI for working with an app (great for emergency fixes via SSH).

Maven is relatively new to me; I tried it years ago, but it didn't play well with the AppEngine plugin for Eclipse, so I gave up.  Now that it's fully supported, I am very happy with it.  tl;dr is that it's the Java compiler and toolkit that you always thought existed, but didn't.

There is exactly one problem with the new Eclipse plugin for AppEngine/Maven support: the test server doesn't respect any of the plugin settings in my "pom.xml" file.  This means that it won't run on the port that I tell it to run on, etc.

That's not a huge deal because I can just run "mvn appengine:run" to run it perfectly via the command line (and I love my command line).
Aside: the command is actually slightly more involved since AppEngine needs Java 7 (as opposed to Java 9, which is what my desktop runs).
JAVA_HOME=/usr/lib/jvm/java-7-oracle/ mvn appengine:run
This worked perfectly while I was editing my Java source files.  The default "pom.xml" has a line in it that tells Eclipse to compile its Java code to the appropriate Maven target directory, which means that the development server will reload itself whenever the Java source code changes.
In case you're wondering, that line is:
<!-- for hot reload of the web application -->
<outputDirectory>${project.build.directory}/${project.build.finalName}/WEB-INF/classes</outputDirectory>
However, when I later had to update the HTML portion of my app, my changes weren't being applied.  I'd change a file, reload a million times, and nothing would change.  Restarting the development server would work, but that seemed crazy to me: why would the development server support a hot reload for Java source code changes but not for trivial static files (such as the HTML files)?

I spent hours and hours troubleshooting this very thing, and I finally learned how this whole Maven thing works:
  1. Maven compiles the application (including the Java code) to a "target".
  2. For development purposes, the target is a directory that can be scanned for changes; for production purposes, the target is a "jar" or "war" file, for example.
  3. That "outputDirectory" tag in "pom.xml" tells Eclipse to compile its Java changes to the appropriate target directory, allowing a running development server to hot reload the Java changes.
When the target directory is built, all of the static files are copied to it.  When you edit such a file in Eclipse, you're editing the original file in the "src/main/webapp" folder, not the target folder that the development server is running from.

To solve the problem (that is, to add hot reload for static files), you simply need to mirror whatever change you made in the original folder to the copy in the target folder.
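For a one-off change, that mirroring can be as simple as copying the file by hand (the paths here are illustrative; the target folder name comes from your artifact ID and version):
cp -a src/main/webapp/index.html target/my-app-1.0-SNAPSHOT/index.html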

I spent too many hours trying to figure out how to do this in Eclipse and eventually settled on a short bash script that continually monitors the source directory using "inotifywait" for changes and then mirrors those changes in the target directory (as I said, I love my command line).

My solution is now to run "mvn appengine:run" in one terminal and this script in another (it uses "pom.xml" to figure out the target directory):

#!/bin/bash

declare -A commands;
commands[inotifywait]=inotify-tools;
commands[xmlstarlet]=xmlstarlet;

for command in "${!commands[@]}"; do
   if ! which "${command}" &>/dev/null; then
      echo "Could not find command: ${command}";
      echo "Please intall it by running:";
      echo "   sudo apt-get install ${commands[$command]}";
      exit 1;
   fi;
done;

artifactId=$( xmlstarlet sel -N my=http://maven.apache.org/POM/4.0.0 --template --value-of /my:project/my:artifactId pom.xml );
version=$( xmlstarlet sel -N my=http://maven.apache.org/POM/4.0.0 --template --value-of /my:project/my:version pom.xml );

sourceDirectory=src/main/webapp/;
sourceDirectoryLength="${#sourceDirectory}";
targetDirectory=target/"${artifactId}-${version}"/;

echo "Source directory: ${sourceDirectory}";
echo "Target directory: ${targetDirectory}";

inotifywait -m -r "${sourceDirectory}" -e modify -e moved_to -e moved_from -e create -e delete 2>/dev/null | while read -r -a line; do
   echo "Line: ${line[@]}";
   fullPath="${line[0]}";
   operation="${line[1]}";
   filename="${line[2]}";
   
   path="${fullPath:$sourceDirectoryLength}";

   echo "Operation: ${operation}";
   echo "   ${path} :: ${filename}";

   case "${operation}" in
      CREATE|MODIFY|MOVED_TO)
         cp -v -a -f "${fullPath}${filename}" "${targetDirectory}${path}${filename}";
         ;;
      DELETE|MOVED_FROM)
         rm -v -r -f "${targetDirectory}${path}${filename}";
         ;;
      *)
         echo "Unhandled operation: ${operation}";
         ;;
   esac;
done;

Sunday, January 8, 2017

Running transmission as a user in Ubuntu 16.04

I recently bought a QNAP TS-451+ NAS for my house.  This basically meant that all of my storage would be backed by NFS mounts.  QNAP (currently) only supports NFSv3, so the UIDs and GIDs on any files have to match the UIDs and GIDs on all of my boxes for everything to work.

The "standard" first user (in my case, "codon") in Ubuntu has a UID of 1000 and a GID of 1000 (also called, uncreatively "codon").  I have this user set up on every box in my network, so this means that any "codon" user on any box can read and write to anything that any other "codon" user created.

However, I use transmission to download torrents.  Transmission runs as the "debian-transmission" user, with a UID of 500 and a GID of 500.  Because of this, "codon" had to use "sudo" to change the ownership of any downloaded files before moving them to their final destinations.  In addition, my Plex server couldn't see the files (because the "plex" user was in the "codon" group with GID 1000).

Without jumping through a bunch of hoops, I figured that the simplest solution would be to run transmission as the "codon" user.  This should have been the simplest way, but I ran into a bunch of problems and outdated guides.  Here, I present to you how to run transmission as a user in Ubuntu 16.04.

Background: systemd

Ubuntu 16.04 uses systemd for its "init" system.  Systemd is great, but it's a whole different game than the older SysV init or now-defunct Upstart init systems.

The magic of systemd is that you do not have to modify any system-owned files; this means that you will not get conflicts when you upgrade.  Instead, you simply supplement an existing unit (a "unit" is what systemd operates on, not an "init script") by creating a new file in a particular location.

Step 1: run as a different user

Explanation

The systemd unit for transmission is called "transmission-daemon", and it is defined here:
/lib/systemd/system/transmission-daemon.service
In principle, you could simply modify that file to change the "User=debian-transmission" line, but that's not the systemd way.  Instead, we're going to create a supplement file in "/etc/systemd/" that has a different "User=..." line.  When systemd reads its configuration, it'll read the original "transmission-daemon.service" file, and then it'll apply the changes in our supplement file.

The appropriate place to put these supplement files is the following directory:
/etc/systemd/${service-path}.d/
Here, ${service-path} refers to the path after "/lib/systemd".  In transmission's case, this is "system/transmission-daemon.service".
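As an aside, systemd can create and open this kind of drop-in for you via "systemctl edit"; it should be equivalent to the manual steps below, which I'll spell out so it's clear what's actually happening:
sudo systemctl edit transmission-daemon;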

What to do

Stop transmission (if it's already running).
sudo systemctl stop transmission-daemon;
Create the supplement file directory for transmission.
sudo mkdir -p /etc/systemd/system/transmission-daemon.service.d;
Create a new supplement file called "run-as-user.conf".
sudo vi /etc/systemd/system/transmission-daemon.service.d/run-as-user.conf
and put the following text in it.
[Service]
User=codon
Obviously, use your desired username and not "codon".

Tell systemd to reload its units.
sudo systemctl daemon-reload;
To confirm that the changes went through, ask systemd to print the configuration that it's using for transmission.
sudo systemctl cat transmission-daemon;
You should see something like this:
# /lib/systemd/system/transmission-daemon.service
[Unit]
Description=Transmission BitTorrent Daemon
After=network.target 
[Service]
User=debian-transmission
Type=notify
ExecStart=/usr/bin/transmission-daemon -f --log-error
ExecReload=/bin/kill -s HUP $MAINPID 
[Install]
WantedBy=multi-user.target 
# /etc/systemd/system/transmission-daemon.service.d/run-as-user.conf
[Service]
User=codon
You can see that our supplement file was appended to the end.

Step 2: get the configuration directory setup

Explanation

Transmission loads its configuration from the user's home directory, in particular:
~/.config/transmission-daemon/
Since Ubuntu runs transmission as the "debian-transmission" user, the transmission configuration resides in that user's home directory, which happens to be:
/var/lib/transmission-daemon
And here's where things get strange.

The normal place for system-wide configuration files is "/etc/", and transmission tries to pass itself off as a normal, system-wide daemon.  To get around this, the main configuration file for transmission appears to be "/etc/transmission-daemon/settings.json", but it's actually:
/var/lib/transmission-daemon/.config/transmission-daemon/settings.json
The Ubuntu configuration just happens to symlink that file to "/etc/transmission-daemon/settings.json".
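You can see this for yourself on a stock install:
ls -l /etc/transmission-daemon/settings.json;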

Now that we're going to be running our transmission as our user and not "debian-transmission", the configuration will be loaded from our user's home directory (in my case, "/home/codon").  Thus, the fake-me-out "/etc/transmission-daemon/settings.json" configuration file will not be used.

When transmission starts, if it doesn't have a configuration directory set up already, it will set one up on its own.  So here, we want to start transmission briefly so that it configures this directory structure for us.

What to do

Start transmission and then stop transmission.
sudo systemctl start transmission-daemon;
sudo systemctl stop transmission-daemon;
You should now have the following directory in your user's home directory:
.config/transmission-daemon/

Step 3: change "settings.json" and start transmission

Explanation

Transmission will load its configuration from "~/.config/transmission-daemon/settings.json" when it starts, so just update that file with whatever your configuration is and then start transmission.

Note that transmission will overwrite that file when it stops, so make your changes while transmission is not running.  If you make changes while transmission is running, then simply issue the "reload" command to it and it'll reload the configuration live.
sudo systemctl reload transmission-daemon;

What to do

Update "~/.config/transmission-daemon/settings.json" to suite your needs.

Then, start transmission:
sudo systemctl start transmission-daemon;
Make sure that everything's running okay:
sudo systemctl status transmission-daemon;
You should see something like this:
● transmission-daemon.service - Transmission BitTorrent Daemon
   Loaded: loaded (/lib/systemd/system/transmission-daemon.service; disabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/transmission-daemon.service.d
           └─run-as-user.conf
   Active: active (running) since Sun 2017-01-08 19:28:27 EST; 1min ago
 Main PID: 3271 (transmission-da)
   Status: "Uploading 54.18 KBps, Downloading 0.17 KBps."
   CGroup: /system.slice/transmission-daemon.service
           └─3271 /usr/bin/transmission-daemon -f --log-error

Jan 08 19:28:27 transmission systemd[1]: Starting Transmission BitTorrent Daemon...
Jan 08 19:28:27 transmission systemd[1]: Started Transmission BitTorrent Daemon.