Friday, November 8, 2024

X-Commit-Powered-By "Header" - Updated!

It's been a while since I've written a post here, but I stopped by to pick up my script that adds an X-Commit-Powered-By header to my git commit messages using a commit-msg hook and realized it's hopelessly broken now. If you want to read what it does, you should read the original post. I decided it was time to update it for the 2020s! This post covers the most interesting thing I learned while doing that.

The original script was:

#!/bin/sh
# Adds the currently playing iTunes track to the commit message

# Add a blank line
echo >> "$1"
state=`osascript -e 'tell application "iTunes" to player state as string'`
if [ "$state" = "playing" ]; then
    artist=`osascript -e 'tell application "iTunes" to artist of current track as string'`
    track=`osascript -e 'tell application "iTunes" to name of current track as string'`
    echo "X-Commit-Powered-By: $artist - $track" >> "$1"
else
    echo "X-Commit-Powered-By: Silent Meditation" >> "$1"
fi

It clearly needed to be updated, at the very least because iTunes.app had long since been replaced by Music.app. A quick test of the osascript commands told me that replacing iTunes with Music worked just fine. Since I was going to update the script anyway, I wanted to improve it a bit further for myself.

I decided I wanted to keep a copy of the script in my ~/bin directory at all times for easier linking as a git hook. To that end, I needed a version of the script that would print the X-Commit-Powered-By header to standard output when I run it directly from ~/bin, but append it to the commit message file when it runs as a git commit-msg hook.

As you can see from the script source, however, it expects a filename as the first argument, $1, and appends the message to that file. Here you can find more details about that and other client-side git hooks. I could wrap each echo statement in an if statement that checks whether $1 is set and branches to a version of that echo statement that outputs to the right location.

However, the ideal scenario would be to redirect standard output to a file if one was specified. After all we can do this when calling the script manually by just appending >> /path/to/filename after the script name. Why can't we do this from within the script?

I found myself on this Stack Overflow response to a similar question. Essentially, all I had to do was close the file handle for standard output (file handle 1) and re-open that same file handle so it appends all its output to a file - $1 in my case. The bonus is that I no longer have to append >> $1 to every echo line - that just happens automatically.
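To see the trick in isolation, here's a minimal sketch (the script body and file names are made up for the example):

```shell
#!/bin/sh
# If a filename was given, re-open standard output so it appends to that file
if [ "$1" != "" ]; then
    exec 1>>"$1"
fi

# From here on, a plain echo goes to the file (or the terminal if no file was given)
echo "hello from the script"
```

Run it with no arguments and the line prints to the terminal; run it with a filename and the same line gets appended to that file instead, with no per-echo redirection.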

With all that, the updated version of the script is:

#!/bin/sh
# Adds the currently playing Music track to the commit message

displayHelp() {
    echo "enable this as a git hook using:"
    echo
    echo "ln -s $0 /path/to/.git/hooks/commit-msg"
}

if [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
    displayHelp
    exit 1
fi

if [ "$1" != "" ]; then
    # We were given a file - presumably from git
    # Let's ensure all echos go to this file
    
    # https://stackoverflow.com/a/20564208/3766784
    # Close standard error file descriptor
    exec 2<&-

    # Open standard output to append to $1 file for write
    exec 1>>"$1"
fi

# Add a blank line
echo

state=`osascript -e 'tell application "Music" to player state as string'`
if [ "$state" = "playing" ]; then
    artist=`osascript -e 'tell application "Music" to artist of current track as string'`
    track=`osascript -e 'tell application "Music" to name of current track as string'`
    echo "X-Commit-Powered-By: $artist - $track"
else
    echo "X-Commit-Powered-By: Silent Meditation"
fi

Put this script anywhere in your path and chmod +x it so it can be executed and linked directly as your .git/hooks/commit-msg. You can then run it without any arguments to test it out on standard output. Call it with a filename and it'll append its output to that file. Call it with the -h or --help argument to see how to link it into your .git/hooks directory.
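In practice the installation looks something like this (the ~/bin/commit-msg.sh and ~/projects/demo paths are just examples - use your own):

```shell
# Make the script executable
chmod +x ~/bin/commit-msg.sh

# Run it with no arguments to test; the header prints to standard output
~/bin/commit-msg.sh

# Link it into a repository as its commit-msg hook
ln -s ~/bin/commit-msg.sh ~/projects/demo/.git/hooks/commit-msg
```

Because the hook is a symlink, updating the copy in ~/bin updates every repository that links to it.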

One thing to keep in mind in 2024: run the script at least once from the Terminal before installing it as a commit hook. It'll ask for permission to let Terminal access your Music app, which I figure is necessary for it to run as a git commit hook - at least if you run git from the Terminal like I do.

Tuesday, July 28, 2015

X-Commit-Powered-By "Header"

Just wrote this commit-msg hook for git, and it has me looking forward to every commit to the project.  It's only for OS X, and its sole purpose is to add a tag that looks like a custom HTTP header showing what song was playing when the user created the commit.  It only works with iTunes and can handle iTunes being paused.

Here's an example of how it looks:
X-Commit-Powered-By: Cake - Short Skirt / Long Jacket
When it detects that your iTunes is paused, it outputs this instead:

X-Commit-Powered-By: Silent Meditation

You can change all this, of course, with a little bash-fu.

Without further ado, here's the script.  Move it to your git repository's .git/hooks directory and rename it to "commit-msg".  Throw in a chmod +x .git/hooks/commit-msg and you're good to go!  If you already have a commit-msg hook, you can add this snippet to the end of it (minus the hash-bang line).

#!/bin/sh
# Adds the currently playing iTunes track to the commit message

# Add a blank line
echo >> "$1"

state=`osascript -e 'tell application "iTunes" to player state as string'`
if [ "$state" = "playing" ]; then
    artist=`osascript -e 'tell application "iTunes" to artist of current track as string'`
    track=`osascript -e 'tell application "iTunes" to name of current track as string'`
    echo "X-Commit-Powered-By: $artist - $track" >> "$1"
else
    echo "X-Commit-Powered-By: Silent Meditation" >> "$1"
fi

Happy committing!

Monday, October 1, 2012

Testing Validations using RSpec

I just ran into an interesting issue while testing a Rails application with RSpec.  A spec with the following line in it was failing:

    ar2.should_not be_valid

Here ar2 is a model that was constructed in a way that violated a logical constraint.  I attempted to make this spec pass by adding a custom validation method to the model:

class Registration < ActiveRecord::Base
  validate :fields_match

  private

  def fields_match
    return true if model2.nil?
    model2.model1.id == model1.id
  end
end

All field names have been changed to protect their identity.

This didn't work.  As it turns out, merely returning false from the validation method isn't enough to mark a model as invalid.  This answer on StackOverflow helped me identify the problem: I wasn't adding an error to the model's list of validation errors.

An updated version where I add an error finally made everything pass:

class Registration < ActiveRecord::Base
  validate :fields_match

  private

  def fields_match
    return true if model2.nil?
    return true if model2.model1.id == model1.id
    errors.add(:model2_id, "model2 doesn't match model1")
  end
end

Wednesday, May 23, 2012

Setting up LVM on LUKS

I recently worked with Raju Chauhan to set up encrypted storage for a database and related files.  He brought up an interesting requirement: could the encrypted storage grow with the data as needed?  I hadn't dealt with that specific requirement in the past so I figured I'd see what my options were.  Thanks for that requirement, Raju; I don't think I would've thought of doing this were it not for that :-)

After preliminary performance testing in which LUKS barely edged out TrueCrypt, I chose LUKS for the setup since it's integrated into the Linux kernel and seemed to be a better choice for larger filesystems.  For those who don't know, LUKS is a disk-encryption specification that is implemented using cryptsetup and the dm_crypt module in modern Linux kernels.

My solution: LVM on a bunch of LUKS devices to get the encryption and dynamic growth working together.  Here is how to play with that on your own machines if you have about 5G of space to work with and want to see how it looks.

Pre-requisite Packages

I did this on an Ubuntu system so the following pre-requisite package installation instructions are for that. You'll need to ensure the appropriate packages for your distribution are installed before proceeding.

aptitude install cryptsetup-luks lvm2

Creating LVM over LUKS Setup

Create 4 1G files corresponding to physical volumes:
  1. for i in 0 1 2 3; do dd if=/dev/zero of=/pv0$i.luks bs=1M count=0 seek=1000; done
  2. ls -l /pv*.luks
Attach all of them to loopback devices:
  1. for i in 0 1 2 3; do losetup /dev/loop$i /pv0$i.luks; done
  2. losetup -a
Setup all devices as LUKS volumes (answer all prompts):
  1. for i in 0 1 2 3; do cryptsetup luksFormat /dev/loop$i; done
Open all LUKS devices:
  1. for i in 0 1 2 3; do cryptsetup luksOpen /dev/loop$i pv0$i.luks.device; done
Create LVM Physical Volumes from each LUKS device:
  1. for i in 0 1 2 3; do pvcreate /dev/mapper/pv0$i.luks.device; done
  2. pvdisplay
Create LVM Volume Group from all LUKS PVs:
  1. vgcreate vg0 `for i in 0 1 2 3; do echo /dev/mapper/pv0$i.luks.device; done`
  2. vgdisplay
Carve out a LVM Logical Volume from the vg0 Volume Group:
  1. lvcreate --size 3000M --name demolv vg0
  2. lvdisplay
At this point you have an LVM Logical Volume named demolv, carved out of the vg0 Volume Group, sitting on top of four encrypted Physical Volumes - one for each LUKS device.  You can give each LUKS device a different password to increase security, or you can give them all the same password to increase convenience.

Format and Mount Logical Volume

Format and mount the demolv Logical Volume with whatever filesystem you choose:
  1. mkfs.ext4 /dev/vg0/demolv
  2. mkdir /demo
  3. mount /dev/vg0/demolv /demo
At the end of all this, demolv will contain a filesystem that can be expanded by adding more LUKS volumes to the mix.  Feel free to create files and/or use this volume in any way you can think of with the knowledge that all the data you're storing is encrypted on disk.  Yes, this is quite cool!
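The expansion itself might look something like this sketch.  The fifth backing file /pv04.luks and the sizes are made up for the example, and the commands assume root just like the steps above:

```shell
# Create and attach a fifth 1G backing file
dd if=/dev/zero of=/pv04.luks bs=1M count=0 seek=1000
losetup /dev/loop4 /pv04.luks

# Turn it into a LUKS device and open it (answer the prompts)
cryptsetup luksFormat /dev/loop4
cryptsetup luksOpen /dev/loop4 pv04.luks.device

# Add it to the Volume Group, then grow the Logical Volume and its filesystem
pvcreate /dev/mapper/pv04.luks.device
vgextend vg0 /dev/mapper/pv04.luks.device
lvextend --size +900M /dev/vg0/demolv
resize2fs /dev/vg0/demolv
```

resize2fs can grow an ext4 filesystem while it's mounted, so the database on /demo doesn't even need to come down for this.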

Unmount and Detach

Once you're done playing with it (or when you're ready to shut down your system) you can run the following commands to unmount and detach everything.  These steps assume you followed the steps in this tutorial to the letter without changing any names.  If you changed names, you should change the corresponding names in the commands below:
  1. for lv in /dev/vg0/*; do lvchange -an $lv; done
  2. vgchange -an /dev/vg0
  3. for i in 0 1 2 3; do cryptsetup luksClose pv0$i.luks.device; done
  4. for i in 0 1 2 3; do losetup -d /dev/loop$i; done

Saturday, October 15, 2011

Smoother MKV playback with VLC

If you're seeing stuttering in any MKV videos you've ripped, try out wipe0wt's suggestion and turn off loop filters in VLC altogether.  It results in a dramatic improvement in playback quality.  The gist is:
  1. Open up VLC and choose Tools -> Preferences from the menu
  2. In the bottom left corner of the preferences window you'll see a "Show settings" area.  Make sure you change it from "Simple" to "All".  This will change the left side of the preferences window so it's a tree view instead of a collection of icons.
  3. Navigate to the following preferences group from the tree on the left: Input / Codecs -> Video codecs -> FFmpeg.  The right side of the preferences window will now change to "FFmpeg audio/video decoder".
  4. Check the "Allow speed tricks" checkbox
  5. Set the "Skip the loop filter for H.264 decoding" to "All"
  6. Click on the Save button on the bottom right of the preferences window.
Enjoy!

Friday, August 12, 2011

Mounting MSDOS/FAT filesystems under Solaris

I needed to copy over a bunch of photographs to my EON NAS so I put them on a USB stick and attached the stick directly to the NAS to get the maximum speed while copying.  It turns out, while on Linux you type something like:
mount -t vfat /dev/sdd1 /tmp/usbstick

to mount the FAT or FAT32 filesystem from /dev/sdd1 to /tmp/usbstick, that command doesn't work on Solaris, which is what EON NAS runs on.  Here are all the steps I took to mount the USB stick under Solaris:
  • Run the "format" command to see the device name of the new USB stick.  The output looks like:
# format
Searching for disks...
The current rpm value 0 is invalid, adjusting it to 3600
done

c3t0d0: configured with capacity of 465.74GB

AVAILABLE DISK SELECTIONS:
       0. c0t0d0
          /pci@0,0/pci1458,b002@11/disk@0,0
       1. c0t1d0
          /pci@0,0/pci1458,b002@11/disk@1,0
       2. c0t2d0
          /pci@0,0/pci1458,b002@11/disk@2,0
       3. c0t3d0
          /pci@0,0/pci1458,b002@11/disk@3,0
       4. c3t0d0
          /pci@0,0/pci1458,5004@13,2/storage@5/disk@0,0
Specify disk (enter its number): ^C

Use Ctrl-C to break out of the format command.  Based on its output, I know my Seagate FreeAgent GoFlex USB drive is /dev/dsk/c3t0d0.
  • Create a mount point for that USB stick using:
mkdir /tmp/usbstick
  • Mount the FAT filesystem on the first partition of /dev/dsk/c3t0d0 (the :c suffix tells pcfs to use the first primary DOS partition) using the command:
mount -F pcfs /dev/dsk/c3t0d0s0:c /tmp/usbstick

Et voila!  You should have the disk mounted and writable.  Finish copying to/from the disk and then issue a umount /tmp/usbstick command to unmount.  Don't forget to clean up and remove the /tmp/usbstick directory.

Saturday, April 30, 2011

cwRsync, Windows 7 and UNIX Targets

Lately, whenever I've had to rsync anything to my EON-based NAS from my Windows 7 machine, I've had permissions issues on the NAS.  Specifically, any sub-directories I sync over are created with ridiculous permissions, e.g. 0500 or something odd.  No files can be transferred until I manually log in to the NAS and run something similar to:

find . -type d -exec chmod 0755 {} \;

That got annoying very quickly.

I came across the rsync --no-perms flag which alleviated the problem to some extent in that the directories were at least writable during the initial transfer but the permissions still had to be resolved before transferring anything else over.

I'd gotten used to the whole issue, but then came across this post and self-researched answer by karikas that talked about exactly my situation.  It turns out it's a known issue with Cygwin (which is what the cwRsync application uses) and Windows 7.  The solution is to set the following environment variable in Windows 7:

set CYGWIN=nontsec