Monday, August 30, 2010

CSV Exports in Rails

If you're looking for an elegant way to generate CSV files from your index views (or search views, or anything else for that matter), look no further than this StackOverflow post by rwc9u.

The gist is to add a format.csv line to the respond_to block of your controller's index action, then create an index.csv.erb file that generates the actual CSV using inline Ruby, just as your index.html.erb view generates the HTML.  To retrieve the data in CSV format you simply append ".csv" to your normal index path, e.g. /test/widgets becomes /test/widgets.csv.
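As a rough sketch (using a hypothetical WidgetsController with a Widget model that has name and price columns, not the exact code from that answer), the controller and view end up looking something like this:

# app/controllers/widgets_controller.rb
class WidgetsController < ApplicationController
  def index
    @widgets = Widget.all

    respond_to do |format|
      format.html   # renders index.html.erb as usual
      format.csv    # renders index.csv.erb when the request ends in .csv
    end
  end
end

<%# app/views/widgets/index.csv.erb %>
Name,Price
<% @widgets.each do |widget| -%>
<%= widget.name %>,<%= widget.price %>
<% end -%>

A real-world template would also need to quote or escape any fields that might contain commas or newlines.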

Keep in mind that you'll need to restart your Rails server if you create the initializer file that he suggests.
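If the initializer in question is the usual CSV MIME type registration (an assumption on my part; it's only needed if your version of Rails doesn't already register :csv), it would look something like this:

# config/initializers/mime_types.rb
# Register text/csv so respond_to knows what to do with format.csv
Mime::Type.register "text/csv", :csv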

Friday, August 27, 2010

Exception handling for Net::SSH

I'm writing a bit of Ruby automation code that requires me to connect to multiple servers over SSH and gather specific information from them, using a loop like so:

require 'net/ssh'

hostList.each do |h|
  Net::SSH.start(h, "user", :password => "password", :timeout => 10) do |ssh|
    # run commands with ssh.exec! and collect their output here
  end
end


However, my test script kept dying at various points due to issues with the SSH connection to specific servers.

Naturally I thought of wrapping the Net::SSH.start call in a begin/rescue/end, but couldn't for the life of me find any information about the exceptions that the start method could raise.  Finally, after a bit of digging on Google, I came across this page, which details them rather handily :-)  In short, here's how I have it wrapped now:

hostList.each do |h| 
  begin
    Net::SSH.start(h, "user", :password => "password", :timeout => 10) do |ssh|
    end 
  rescue Timeout::Error
    puts "  Timed out"
  rescue Errno::EHOSTUNREACH
    puts "  Host unreachable"
  rescue Errno::ECONNREFUSED
    puts "  Connection refused"
  rescue Net::SSH::AuthenticationFailed
    puts "  Authentication failure"
  end
end


This works fabulously since I don't really need to handle the exceptions; I just want to know when they occur.

Monday, May 3, 2010

Building a Diskless MythTV Frontend with Mythbuntu 10.04 - Lucid Lynx - Part 1

I just upgraded to Mythbuntu 10.04, which was released a few days back on April 29th.  Since I was already running Mythbuntu 9.10 on my MythTV backend server, I was able to upgrade it to Lucid using the well-documented and fairly simple steps on the ubuntu.com site:

$ sudo apt-get install update-manager-core
$ sudo do-release-upgrade

After upgrading the backend server I decided to finally figure out why my diskless frontend machine wasn't working.  After a day of investigation it turned out to be a faulty stick of memory.  With that replaced, I needed to configure a diskless frontend on the MythTV backend server so I could use PXE to boot into a frontend without having to worry about anything else.  Since Mythbuntu hasn't had a graphical control panel for creating a diskless frontend since Karmic, this blog post documents everything I had to do to get my diskless frontend up and running.  Some of the setup already existed, but I'm documenting it here for future use.

Requirements for PXE Booting
PXE is a way for computers to boot up using resources found on the network.  On some computers you have to press F12 or some other key to trigger a PXE boot, but on any computer whose network card supports PXE you should be able to set the first boot option in the BIOS to something like "Network" so that it boots from the network automatically.
During a PXE boot the network card will perform the following tasks without any intervention from an OS installed on the computer:
  1. automatically obtain an IP address,
  2. automatically obtain a boot kernel and initial ramdisk, and
  3. boot up the computer using that kernel and initial ramdisk 
Once the kernel has booted with the initial ramdisk in step 3, the OS can use a built-in hard drive or the computer's RAM as its root disk, or it can NFS-mount a volume from a remote server and use that as the root disk.  The NFS scenario is the most common one for computers that PXE boot and is the one I use.
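For the curious, the NFS-root case is normally wired up in the PXELINUX configuration that the frontend downloads over TFTP.  Here's a minimal sketch; the server IP, the /opt/ltsp/i386 path and the file names are illustrative assumptions, not my actual setup:

# /var/lib/tftpboot/ltsp/i386/pxelinux.cfg/default (path is an assumption)
DEFAULT mythfrontend
LABEL mythfrontend
  KERNEL vmlinuz
  # root=/dev/nfs plus nfsroot= tells the kernel to mount its root filesystem over NFS
  APPEND ro initrd=initrd.img root=/dev/nfs nfsroot=192.168.1.10:/opt/ltsp/i386 ip=dhcp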

For a computer to be able to automatically obtain an IP address you need a DHCP server on your network.  However, this DHCP server has to be configured so it can tell the PXE-booting computer where the kernel and initial ramdisk files are located.  Those kernel and initial ramdisk files are hosted on a TFTP server.  Finally, the NFS-mounted root disk needs an NFS server that exports the directory of files which will become the root directory of the diskless MythTV frontend.
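To make that concrete, here's a rough sketch of what the relevant pieces of the ISC dhcpd configuration and the NFS exports file might look like.  The addresses, ranges and paths below are illustrative assumptions, not my actual configuration:

# /etc/dhcp3/dhcpd.conf (on the DHCP server)
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
  next-server 192.168.1.10;           # the TFTP server holding the kernel and initrd
  filename "ltsp/i386/pxelinux.0";    # boot loader path, relative to the TFTP root
}

# /etc/exports (on the NFS server)
/opt/ltsp/i386  192.168.1.0/255.255.255.0(ro,no_root_squash,async,no_subtree_check)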

In my home network I have a separate DHCP server hosted on a Linux machine that used to be my main Linux server.  Now it just serves DHCP until I can move that functionality into my Cisco router.  The other two pieces of the puzzle, i.e. the TFTP server and the NFS server, are handled by the Mythbuntu backend server and are closely tied to each other.  However, there is no reason they can't be split onto their own servers.

The next post will focus on the steps needed to build a diskless image using the Linux Terminal Server Project's utilities.  The post after that will focus on configuring the individual servers needed to put it all together.  These three posts should serve as a more or less complete guide to setting up a diskless Mythbuntu frontend.

Saturday, March 20, 2010

Switching your Rails Database from SQLite3 to PostgreSQL or MySQL

I deployed a minimal new Rails application that I wrote over a couple of hours and started testing it with valid production data.  It's basically a very simple CRUD application for a very specific audience.  Very basic stress testing (using Apache's ab) showed that it didn't work well when requests were coming in with a concurrency greater than 2.  I realized I'd started the project using SQLite3 as the database, and that's probably where the bottleneck was, since the app handles requests with a concurrency of 1 just fine.  The application is deployed using mod_passenger (my first experience with it) and is configured to never tear down application instances due to idle timeouts.  With 6 application instances listening, a concurrency of 6 should have been a cinch.
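For anyone who hasn't used ab before, the kind of invocation I mean looks something like this (the URL and the numbers are placeholders, not my actual test):

$ ab -n 1000 -c 6 http://myserver.example.com/widgets

Here -n is the total number of requests to send and -c is how many of them are issued concurrently.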

In any case, now that I needed to switch to a real database like PostgreSQL or MySQL, there were no obvious solutions that would let me keep the data already in my production database.  Everything out there talks about moving from development to test to production, each of which gets its own schema, but nothing else.  Migrations let you keep your production data in place, but that's not the same as dumping and reloading it.  And database-specific dump/load utilities can produce SQL that needs to be tweaked before it will load into another database type.

In comes the Yaml Db plugin by Orion Henry and Adam Wiggins.  It does exactly what one would expect: in the same way that schema.rb is database-agnostic, Yaml Db dumps and loads your data in a database-agnostic format.  One of the specific use cases mentioned on that site is "One common use would be to switch your data from one database backend to another."
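The workflow, as I understand it from the plugin's documentation (so treat the exact task names and dump file name as assumptions), goes roughly like this:

$ rake db:data:dump      # with database.yml still pointing at SQLite3; writes db/data.yml

Then edit config/database.yml to point at the new PostgreSQL or MySQL database, and:

$ rake db:schema:load    # recreate the tables in the new database from schema.rb
$ rake db:data:load      # reload the dumped data from db/data.yml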

Excellent!  Btw, thank you George for the tip!  You know who you are :)

Monday, March 15, 2010

Spaces in /etc/fstab

I needed to mount a Samba share from my Windows gaming / media center PC on my MythTV backend so I could navigate to all my videos from a single location instead of having to worry about which server they were on.  To that end I shared out the Videos folder from my account on the Windows PC.  Since my user on the Windows PC is "Shahbaz Javeed", that presented a problem when trying to auto-mount it using fstab on my MythTV host: spaces aren't allowed in any field of /etc/fstab because any whitespace is treated as a field delimiter.  The solution is to escape each space as \040, the octal ASCII code for a space character.  My /etc/fstab entry now looks like this:

//frey/Users/Shahbaz\040Javeed/Videos   /var/lib/mythtv/videos/frey cifs     guest,ro       0       0

This works swimmingly!