The ramblings of Steve-0

Using the Nexenta SA-API – nexenta-health plugin for Nagios

This is the first script I created using the NexentaStor SA-API; the install is detailed in my previous post here. We use a combination of Nagios and Zabbix to monitor our infrastructure, and a very helpful component of that has been the check_netapp plugin available here, which reports any sort of failure on the NetApp, including array health, disk space/quotas, etc. NetApp provides most of this data with a custom SNMP MIB, but unfortunately NexentaStor doesn’t have one (yet, at least). I wanted the same functionality for our NexentaStor boxes, without having to remember to add each new volume to our monitoring system every time I create one on the storage.

This script examines the state of each volume and the space used/free for each folder with a quota, and reports back any errors. The layout is copied from the check_netapp plugin and is fairly simple to follow. Of note, be sure to set the appropriate @INC path for your installation, and the location of your public key if using PKI with NexentaStor. Unfortunately some of the API is not documented and the functionality is incomplete – I had to play around with the $folder->execute() method to find the correct command to get the space used and quota allocated. It seems like this should be a built-in, documented method in the API, so here’s hoping it makes it in.

BEGIN { unshift (@INC, "/opt/build/NexentaStor-SDK-Linux/examples/perl"); }
use NZA::Common;
use strict;
$ENV{'NEXENTA_ID_RSA'} = '/home/nagios/.ssh/';
# Configure space % threshold here for warnings
my $space_threshold = .9;
my $IPADDR = $ARGV[0];
if (!$IPADDR) {
        print "Usage: $0 <IP-ADDR>\n";
        exit 1;
}
my $volume = nms_remote_obj($IPADDR, '/Root/Volume');
my $folder = nms_remote_obj($IPADDR, '/Root/Folder');
my @errors;

# Check the state of each volume
my $volnames = $volume->get_names('');
for my $vol (@$volnames) {
        #errors: No known data errors
        #state: ONLINE
        my $volstatus = $volume->get_status($vol);
        if (${$volstatus}{'state'}[0] ne 'ONLINE') {
                push (@errors, "Bad volume status for $vol: " . ${$volstatus}{'state'}[0]);
        }
        if (${$volstatus}{'errors'}[0] !~ /No known data errors/) {
                push (@errors, "Errors for $vol: " . ${$volstatus}{'errors'}[0]);
        }
}

# Check space used against quota for each folder
my $folder_names = $folder->get_names('');
for my $fol (@$folder_names) {
        my $quota;
        my $used;
        my $lines = $folder->execute($fol, "get -H used $fol");
        for my $line (@$lines) { $used .= $line; }
        $lines = $folder->execute($fol, "get -H quota $fol");
        for my $line (@$lines) { $quota .= $line; }
        my @data = split (/\s+/, $used);
        $used = $data[2];
        @data = split (/\s+/, $quota);
        $quota = $data[2];
        next if ($quota =~ /none/i);
        $used = convert_space_units($used);
        $quota = convert_space_units($quota);
        #print "$fol - used: $used, quota: $quota\n";
        if ($quota > 0) {
                if ( ($used/$quota) > $space_threshold ) {
                        push (@errors, "Volume almost full: $fol");
                }
        }
}

if (scalar(@errors) > 0) {
        print "Nexentastor healthcheck failed! ";
        for my $error (@errors) {
                print "  $error.";
        }
        print "\n";
        exit 2;
}
else {
        print "Nexentastor healthcheck OK\n";
        exit 0;
}

sub convert_space_units($) {
        my $string = shift;
        # K, M, G, T (decimal units)
        my %factor = (
                K => 1000,
                M => 1000000,
                G => 1000000000,
                T => 1000000000000,
        );
        if ($string =~ /([.\d]+)(\w+)/) {
                if ($factor{$2}) {
                        return $1 * $factor{$2};
                }
        }
        return "";
}
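To wire the script into Nagios, a command and service definition along these lines should work. Note that the plugin path, command name, and host name below are all made up for illustration – adjust for your own install:

```
# commands.cfg -- plugin path here is hypothetical
define command {
    command_name    check_nexenta_health
    command_line    /usr/local/nagios/libexec/nexenta-health $HOSTADDRESS$
}

# services.cfg
define service {
    use                   generic-service
    host_name             nexenta01
    service_description   NexentaStor Health
    check_command         check_nexenta_health
}
```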


Setting up the Nexentastor SA-API on Centos/RHEL 5

NexentaStor, a great storage appliance based on Solaris/ZFS, has a pretty powerful API built in for remote management, but building the API on anything other than Ubuntu is an exercise in frustration. Included in this post are my notes on getting it installed and working, after a good bit of work by myself and their support/development team. I’m not going to clean them up too much, but if you follow the steps below on a Centos 5 machine you should be able to get it working. The API tarball has a couple good READMEs that you will definitely want to read/reference, especially for setting up API authentication.

yum install gcc make patch iso-codes-devel libnotify-devel NetworkManager-glib-devel libxml2 libxml2-devel dbus-libs dbus-glib-devel dbus-glib dbus-devel expat-devel perl-XML-Parser perl-XML-Twig

cd /root
tar -xf NexentaStor-SDK-Linux.tar.gz
cd NexentaStor-SDK-Linux/


cd ../examples/C/

Enter appliance's IP address:

Connection is successful

perl -MCPAN -e shell
cpan> o conf prerequisites_policy follow
cpan> o conf commit
cpan> install Carp::Assert
cpan> install Net::DBus

#Back up /libXX directory, replace dbus libs
mkdir /root/lib64-dbus.bak
cd /lib64
rsync -av *dbus* ~/lib64-dbus.bak/
cd /root/NexentaStor-SDK-Linux/lib
rsync -av *.so* /lib64/

cd /root/NexentaStor-SDK-Linux/examples/perl/

To set up authentication using ssh keys, do the following:
On the client machine, generate an RSA key if one doesn’t already exist:
ssh-keygen -N "" -q -f ~/.ssh/id_rsa
And on the NexentaStor appliance, add that key. The Key ID should be a name you recognize to keep track of the key, and the Key Value should start with ssh-rsa AAAAzxvdsa….. and end with XXX== – make sure you leave off the comment/hostname at the end of the key; this part is easy to get wrong:

nmc@nexentastor:/$ setup appliance authentication keys add
Key ID : myclient
Key Value : ssh-rsa AAAAB3Nza
.... 1I89Q==

Authentication key 'myclient' added


PostgreSQL: Debugging “missing chunk number X for toast value Y” errors

If you found this through a search, you probably have seen something like this in your logs:
ERROR: missing chunk number 0 for toast value 87246446

Searching for this issue on the net, I found that this is usually caused by a corrupt row in the database. The solution is to find the corrupt row, and delete it (and/or update it with proper, non-corrupt values).

Unfortunately, there is no easy way to find the row that is giving you grief (or even the table if you don’t know what is causing this error). You basically have to narrow down your search by looking at parts of the database. If you have some sort of date or timestamp column in your table, you can narrow it down a lot more easily, if you know approximately when the error started:

mydb=# select id from mytable where date_created > timestamp '2009-02-27 00:00:00'
mydb-#     and date_created < timestamp '2009-03-01 23:59:00';
                  id
--------------------------------------
 3ab226cf-a972-463d-b5a1-148fe39672b5
 10daca73-b2b3-470c-a258-5c92d21cfbb6
(2 rows)

Luckily it's only two rows, and the first one is the culprit:

mydb=# select * from mytable where id='3ab226cf-a972-463d-b5a1-148fe39672b5';
ERROR: missing chunk number 0 for toast value 87246446

Now, on to delete it:
mydb=# delete from mytable where id='3ab226cf-a972-463d-b5a1-148fe39672b5';

If you don’t have a date/timestamp column in your table, you could use LIMIT and OFFSET to narrow it down. I.e. a series of statements like (increasing the offset each time):

select * from mytable order by id limit 1000 offset 5000;
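The LIMIT/OFFSET narrowing can be automated as a binary search: "the first N rows select cleanly" is a yes/no test you can bisect on. A sketch in shell – the check function here is a simulated stand-in (a fake 10000-row table with row 7321 corrupt); in real use it would run something like psql -U postgres -c "select * from mytable order by id limit $1" mydb and return its exit status:

```shell
# Bisect for the first unreadable row. check N must exit 0 iff the
# first N rows of the table can be selected without error.
check() {
    # Simulation: rows 1..7320 are fine, row 7321 is corrupt.
    # Real version (hypothetical table/connection):
    #   psql -U postgres -c "select * from mytable order by id limit $1" mydb >/dev/null 2>&1
    [ "$1" -le 7320 ]
}

lo=0        # largest prefix known to select cleanly
hi=10000    # total row count; this prefix is known to fail
while [ $((hi - lo)) -gt 1 ]; do
    mid=$(( (lo + hi) / 2 ))
    if check "$mid"; then lo=$mid; else hi=$mid; fi
done
echo "first bad row is number $hi; fetch it with: limit 1 offset $lo"
```

This takes about log2(rowcount) queries instead of a linear scan; note it assumes a stable ORDER BY so offsets mean the same thing on every query.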


Terminal tips for OSX

I’ve been meaning to get around to it for the longest time, but I finally switched my Macs over to use bash as the default shell (previously I had it set to tcsh). I’ve become more accustomed to bash over the past few years, as it’s the default shell on pretty much any unix/linux nowadays, so it was time to migrate.

The migration itself was pretty easy, but I wanted to gain some of bash’s features, specifically the emacs-like keystrokes for moving forward/back a word on the command line, and to the beginning and end of the line. The keystrokes, among others, are:

ctrl-a Move cursor to beginning of line
ctrl-e Move cursor to end of line
meta-b Move cursor back one word
meta-f Move cursor forward one word
ctrl-w Cut the last word
ctrl-u Cut everything before the cursor
ctrl-k Cut everything after the cursor
ctrl-y Paste the last thing to be cut
ctrl-_ Undo
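These all come from readline’s emacs editing mode, which bash uses by default. If one of the meta bindings doesn’t work in your terminal, you can pin it down in ~/.inputrc using standard readline syntax (\e is the escape byte the terminal sends for meta):

```
# ~/.inputrc
set editing-mode emacs
"\eb": backward-word
"\ef": forward-word
```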

I’m a big fan of iTerm – I’ve used it as my primary terminal for the last few years, as it has matured in terms of performance and has a lot of great features. For me, the most compelling feature it has over the built-in OSX terminal is that you can specify which characters to include when selecting words. A common operation in a terminal is to select a path, i.e. /home/myuser/testfile. In the built-in terminal, if you double-click on home, it will just select home – so you have to go back, double-click on the first “/” character, and drag right to get the whole word. With iTerm, you can set a preference with characters to include when selecting, i.e. “/”, so when I double-click on home in iTerm, it selects the full path. Specifically, in iTerm -> Preferences -> Mouse, I have “Characters considered part of word:” set to /-_.

One thing I couldn’t figure out was how to get the “meta” key in iTerm set to the alt/option key on the Mac keyboard. Turns out it’s buried in a non-intuitive place, but it can be done. In iTerm, go to Bookmarks->Manage Profiles, and under Keyboard Profiles select Global. On the right side, you need to set Option Key as “+Esc” for this to work – yes, it’s strange that it has to be Esc instead of Meta, but hey, at least it all works.


Setting up Oracle VM Server 2.2 with a NFS Repository

Oracle’s documentation on this is not very well organized, and the process simply doesn’t work out of the box, at least with the 2.2 install CD that’s available for download. The main problem is that the version of ovs-agent that ships with Oracle VM 2.2 is broken, so following the instructions you’ll end up frustrated and wondering what you did wrong, i.e.:
[root@oraclevm8 /]# /opt/ovs-agent-2.3/utils/ --init
Cluster not available.

To set up OracleVM 2.2 Server, with a NFS repository for VM’s:

  • Download the ISO, and run through the install process
  • Once it is up and you are logged in:
  • Download and install the updated ovs-agent rpm from oracle here:
    • wget
    • rpm -Uvh ovs-agent-2.3-31.noarch.rpm
  • Add your repository with the tool, make it the root repository, and initialize it:

    # /opt/ovs-agent-2.3/utils/ --new
    # /opt/ovs-agent-2.3/utils/ -l
    # /opt/ovs-agent-2.3/utils/ -r 7c885f7e-7dfe-4108-bae4-4dc329e2f017
    # /opt/ovs-agent-2.3/utils/ -i

At this point, you are ready to go, and can continue setting things up (including your Oracle VM Manager VM).


Creating an NFS-root VM template for Xen / Oracle VM

Following up on my previous post about building a Xen / NFS-root kernel, this will take you through creating a VM template capable of using that kernel (or a P2V process for converting existing VM’s / linux servers to nfs-root). I think the intro from the last article still applies, so I’ll include it here:

Over the past year or two, we have transitioned all our servers and hosting to Xen – specifically, we use the Oracle VM management tools on top of xen, and most of our VM’s are Centos 5 x86_64. Since we use NFS NAS’ as storage across our infrastructure, it would be very convenient if we could use an NFS volume as the root drive for VM’s. With the NFS root, we gain things like easy use of filer snapshots, and on-the-fly volume resizing – if we’re ever running short on space in a given VM, it’s a single command (or a click on a web page) to expand the root drive.

The steps below are the result of a lot of work – the RHEL5 kernel has code in it for an NFS root, but I was never able to get it to work correctly, at least under xen. In the end, after lots of experimenting, I was able to build a new kernel from kernel source that is compatible with Xen and an NFS root. Repeat: you cannot build a custom RHEL5 kernel that is capable of booting from NFS under xen.

There are shortcomings/tradeoffs with this approach – you are not able to do any NFS exports from the nfs-root vm, and there is a bit more performance overhead with an NFS-root vm. If you are using a VM to host a high-transaction DB, for instance, I wouldn’t recommend a NFS-root, but for most purposes, it works and performs just fine.

  • Install linux / kickstart / etc to get a good, minimal install of linux, and configure it as you would any linux server. I would suggest a minimal install for your requirements, turn off any unneeded services, lock down permissions, firewall, and any other deployment process you usually go through. Since this is going to be a template, you want this to be ready for any task with minimal configuration changes.
  • On that configured linux system, mount a NFS share that will serve as your future NFS root (or storage for your VM template) to /mnt/tmp
  • Shut down as many running services as possible, so there are no file conflicts/open databases/etc.
  • Copy over the kernel modules from your nfs-root kernel build:rsync -av /path/to/kernel/modules/kernel-ver /mnt/tmp/lib/modules/
  • #copy all files over to nfs mount
    cd /
    cp -ax /{bin,dev,etc,lib,lib64,opt,root,sbin,usr,var,folders} /mnt/tmp
    mkdir /mnt/tmp/{home,proc,sys,tmp}
    chmod 777 /mnt/tmp/tmp
  • Edit /mnt/tmp/etc/fstab to look something like this, main change here is to /:

    /dev/nfs                /                       rootfs  defaults        0 0
    tmpfs                   /dev/shm                tmpfs   defaults        0 0
    devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
    sysfs                   /sys                    sysfs   defaults        0 0
    proc                    /proc                   proc    defaults        0 0
    rpc_pipefs              /var/lib/nfs/rpc_pipefs rpc_pipefs defaults     0 0
  • Edit /mnt/tmp/etc/sysconfig/network-scripts/ifcfg-eth0 as needed to setup the network
  • Create a vm.cfg file for the Xen vm. If using this as an Oracle VM template, make sure to add a zero-byte disk image, and put this under the seed_pool directory. Edit options as needed for your infrastructure.
    dhcp = 'off'
    extra = 'nfsroot=,noacl,nfsvers=3,tcp,rsize=32768,wsize=32768 selinux=0 acpi=off noapic'
    gateway = ''
    hostname = ''
    ip = ''
    kernel = '/OVS/P2V_STUFF/nfsboot-custom/vmlinuz-'
    memory = '512'
    name = 'nfsroot-vm'
    netmask = ''
    on_crash = 'restart'
    on_reboot = 'restart'
    root = '/dev/nfs'
    uuid = '9af0a816-0123-4567-ad50-bc32be92bff7'
    vcpus = 2
    vfb = ['type=vnc,vncunused=1,vnclisten=,vncpasswd=password']
    vif = ['bridge=vlan100,mac=00:16:3E:31:FF:37,type=netfront']
    vif_other_config = []
  • Start up the vm with Oracle VM manager, or start on the command line with xm create vm.cfg, and cross your fingers!

    Build a custom Xen kernel capable of booting from a NFS Root Filesystem

    Over the past year or two, we have transitioned all our servers and hosting to Xen – specifically, we use the Oracle VM management tools on top of xen, and most of our VM’s are Centos 5 x86_64. Since we use NFS NAS’ as storage across our infrastructure, it would be very convenient if we could use an NFS volume as the root drive for VM’s. With the NFS root, we gain things like easy use of filer snapshots, and on-the-fly volume resizing – if we’re ever running short on space in a given VM, it’s a single command (or a click on a web page) to expand the root drive.

    The steps below are the result of a lot of work – the RHEL5 kernel has code in it for an NFS root, but I was never able to get it to work correctly, at least under xen. In the end, after lots of experimenting, I was able to build a new kernel from kernel source that is compatible with Xen and an NFS root. Repeat: you cannot build a custom RHEL5 kernel that is capable of booting from NFS under xen.

    There are shortcomings/tradeoffs with this approach – you are not able to do any NFS exports from the nfs-root vm, and there is a bit more performance overhead with an NFS-root vm. If you are using a VM to host a high-transaction DB, for instance, I wouldn’t recommend a NFS-root, but for most purposes, it works and performs just fine.

    First, you need to build a kernel that supports both xen and an NFS-root – you’ll need a linux machine with a complete build environment, i.e. gcc, make, etc. I have been through this process with 2.6.31, but I would guess that the latest stable kernel version available at will work just fine.
    – Download the latest kernel source
    – unzip, cd into source directory
    – Copy in attached .config as starting point ( copy as …./linux-(version)/.config )
    – make menuconfig

    Important config options are listed here:

    Most importantly, nfs client options and nfs_root need to be built into kernel (not as modules).  Also need to make sure to build Xen modules, and select most of the iptables filters (state is an important one). 
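    I don’t have the attached .config handy, but based on mainline 2.6.31 Kconfig names, the relevant options should look roughly like this – NFS client and NFS root built in (not modular), kernel-level IP autoconfiguration, the Xen frontends, and conntrack state matching. Treat this as a starting point, not a complete config:

    ```
    CONFIG_NFS_FS=y
    CONFIG_NFS_V3=y
    CONFIG_ROOT_NFS=y
    CONFIG_IP_PNP=y
    CONFIG_IP_PNP_DHCP=y
    CONFIG_XEN=y
    CONFIG_XEN_BLKDEV_FRONTEND=y
    CONFIG_XEN_NETDEV_FRONTEND=y
    CONFIG_NF_CONNTRACK=y
    CONFIG_NETFILTER_XT_MATCH_STATE=y
    ```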

    – make
    – make modules_install

    Once the make and install is complete, you will have a kernel capable of nfs-booting under xen. To collect all the pieces needed:
    – copy over the file vmlinux in the base build directory (this one is ~85MB), this is the kernel (can’t use bzImage with xen nfs boot).
    – Tar up /lib/modules/kernelVer to distribute to nfs client vm.

    See the next article for creating a proper vm.cfg under xen / Oracle VM.


    Mounting a xen disk image file on dom0

    If you need to get in and edit some files on your xen domU instance, i.e. it isn’t booting up properly, etc., here’s how to mount it on dom0. In this case, we want to mount the second partition on the virtual disk (our root partition):

  • Print out the partition layout:
    fdisk -l /path/to/img/file.img

    Disk System.img: 0 MB, 0 bytes
    255 heads, 63 sectors/track, 0 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot Start End Blocks Id System
    System.img1 1 4 32098+ 83 Linux
    System.img2 * 5 619 4939987+ 83 Linux
    System.img3 620 750 1052257+ 82 Linux swap / Solaris

  • create directory to mount image under: mkdir /mnt/tmp
  • mount the image using the byte offset from the fdisk output. fdisk is reporting in cylinder units here, so for partition 2 (starting at cylinder 5) the offset is (5 - 1) cylinders times 8225280 bytes per cylinder:
    mount -o loop,offset=$(( (5 - 1) * 8225280 )) /path/to/img/file.img /mnt/tmp
  • Edit files as needed, and unmount when done.
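    Since fdisk is reporting in cylinder units here, the byte offset of a partition is (start cylinder - 1) times the cylinder size. A quick shell sanity check for partition 2, with the numbers taken from the fdisk output above:

    ```shell
    # Byte offset of a partition that starts at cylinder N: (N - 1) * cylinder_size
    cyl_size=$((16065 * 512))            # 8225280, from the fdisk "Units" line
    start_cyl=5                          # partition 2's Start column
    offset=$(( (start_cyl - 1) * cyl_size ))
    echo "mount -o loop,offset=$offset /path/to/img/file.img /mnt/tmp"
    ```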

    Synchronizing multiple iTunes libraries

    On and off over the past few years, I’ve looked for a good way to synchronize 2 or more iTunes libraries, and until recently, never found a good solution. The main problem is synchronizing library metadata – syncing the files themselves isn’t that difficult, but syncing things like ratings, play counts, etc. is.

    My previous solution was simple – I considered one computer my “master”, and simply sync’ed all the files from it to my other computers. Since I do most of my itunes listening (and rating) at work, I used my iMac as the master and my macbook and home desktop were the slaves. This has its downfalls though, as anything I added or rated on my other systems never made its way back to the master system, and I had to make those changes manually.
    I decided to search again a few days ago, and found two applications that do exactly what I want:

    myTuneSync from SocketHead studios
    Syncopation from Sonzea

    I checked them both out, and they both work as advertised (both offer free time-limited trials). You install the application, point it at the other itunes library to keep in sync with, and set them up for automatic updates. Syncopation is the clear winner for me, with its cleaner interface, faster performance, and ability to synchronize file deletions as well. Syncopation works only on macs, so if you have a mix of mac and Windows (or only Windows), myTuneSync is for you.

    Interestingly, I found these programs two days before the new iTunes 9 was released, and looking at the keynote, I wondered if I had just wasted money on something that was now a standard feature of iTunes. Unfortunately, it turns out that the syncing included with iTunes is limited to purchases from the iTunes store, and doesn’t include anything added by you. It also seems that the metadata does not get transferred over, only the purchased song files.


    Creating a true read-only user in PostgreSQL

    I develop a product at work that (among many other things) allows users to easily create and manage databases to interact with other packages on their appliance. One feature that was requested and green-lighted is the ability to create a read-only user for existing databases – a user that can connect to a given database and access all tables and views in that DB, and nothing else.

    Since PostgreSQL is our database of choice, I started researching the process, thinking it would be a couple mysql-esque GRANT statements and that would be it. Turns out that it is a huge PITA in postgres – even the solution I’m documenting here has its shortcomings, but as far as I know it is the best / only way to accomplish this task. I ran across quite a few sites that helped with pieces of this, but none that actually tied the process together for production usage. I am by no means a Postgres expert, but I do have a good bit of experience mucking around with back-end settings and figuring out how to script common tasks.

    There are two main things to watch out for when trying to create a read-only user in PostgreSQL, especially if you come from a DB like MySQL:

  • PostgreSQL only sets permissions on objects, not on databases, so you need to grant read access to all your tables/views/etc, and if you add a table down the line, you need to remember to manually grant read access to it after creating the table.
  • I’m guessing 95% of postgres users just use the default “public” schema, and as such, you need to revoke create privileges from the PUBLIC group. Otherwise, your “read-only” user will still be allowed to create tables that it owns, even if you’ve only given it read only access to all other objects in your database.
  • For this example, we’ll use database name “mydb”, database user/owner “mydbuser”, and we’ll create a read-only user named “mydbuser_ro”. This assumes that you did not define a schema for your database and are using the default “public” schema.

  • Revoke default permissions from the public group:
    REVOKE CREATE ON SCHEMA public FROM PUBLIC;
  • Add back permissions for your database owner:
    GRANT CREATE ON SCHEMA public TO mydbuser;
    GRANT USAGE ON SCHEMA public TO mydbuser;
  • Create the new user via the command line, or pgadmin/etc:
    psql -U postgres -t -c "create role mydbuser_ro password 'abc123' NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN;"
  • Grant usage permissions for your read-only user:
    GRANT USAGE ON SCHEMA public TO mydbuser_ro;
  • Grant select permissions on all database tables from the command line:
    psql -U postgres -qAt -c "select 'grant select on ' || tablename || ' to \"mydbuser_ro\";' from pg_tables where schemaname = 'public'" mydb | psql -U postgres mydb
  • Setup remote access for the read only user in pg_hba.conf as appropriate
  • Once complete, you can verify the settings with a quick sql query. You should see something like this, with the user=UC (for Usage/Create), and user_ro=U (for Usage):
    mydb=> select * from pg_namespace where nspname='public';
     nspname | nspowner | nspacl
    ---------+----------+--------
     public  |       10 |


    Syncing Adium chat logs across multiple Macs, v2.0

    I previously posted about a rather convoluted method to use MobileMe and iDisk to sync Adium logs across computers, but after using that for a while, it turned into an exercise in frustration, as it was not reliable, and iDisk is horribly slow.
    I just came across a new solution to the same problem that is very simple, costs nothing, and performs great. Basically, let dropbox do the syncing for you. Instructions as follows:
    Sign up, download and install dropbox here
    – Shutdown adium (or other chat client)
    – Open a Terminal session
    – Move your adium logs folder to dropbox:
    mv ~/Library/Application\ Support/Adium\ 2.0/Users/Default/Logs ~/Dropbox/Private/AdiumLogs
    – create a symbolic link from your adium folder to your dropbox private folder:
    ln -s ~/Dropbox/Private/AdiumLogs ~/Library/Application\ Support/Adium\ 2.0/Users/Default/Logs

    On any other computer you’d like to keep in sync with:
    – sync over your existing logs with rsync:
    rsync -avl ~/Library/Application\ Support/Adium\ 2.0/Users/Default/Logs/ ~/Dropbox/Private/AdiumLogs
    – Move your old logs folder out of the way:
    mv ~/Library/Application\ Support/Adium\ 2.0/Users/Default/Logs ~/Library/Application\ Support/Adium\ 2.0/Users/Default/Logs.old
    – create a symbolic link from your adium folder to your dropbox private folder:
    ln -s ~/Dropbox/Private/AdiumLogs ~/Library/Application\ Support/Adium\ 2.0/Users/Default/Logs

    It really is that simple, and works great – thanks Dropbox for a great bit of software.


    Converting a Physical Windows Server to Xen / Oracle VM, by way of VMware

    At work, we are moving from a huge datacenter space (about 12′ x 12′, 12 cabinets) to less than 1 cabinet. This will be a great change in all respects, mostly financially, for all the saved space, but also environmentally, for all the saved power. We probably use about the same amount of power to run our whole infrastructure as we previously used to run just two or three servers.

    We are using an IBM Bladecenter, with fully loaded blades – 5 for now, expandable to 14. Each blade has 8 CPU cores and 32GB of memory. It would be a waste to run a server OS on the blade itself – it would not be properly utilized, and we’d be wasting compute power, electrical power, and space. So, we’re moving ahead with deploying everything on Oracle VM Server – a Xen-based virtualization platform that does some cool stuff like load balancing, failover, live migration, etc.

    In our old infrastructure, we have a few Windows servers, whose functionality we need to keep around, but we’re not too excited to move over these old, Pentium III waste-of-space servers. Last week my main task was to figure out how to get these old servers moved into Xen virtual machines.

    There’s not much information out there on making this conversion – there are some commercial tools that will do everything for you, but they are quite expensive. The included P2V functionality in Oracle VM is documented only for linux, and for systems with large hard drives with lots of free space, is pretty wasteful.

    The process I came up with requires no downtime on the server to be cloned, only minimal changes beforehand (installing the VMware converter software package), and allows you to resize hard disks as needed. Once you get the hard disk cloned, you change the drivers, installed software, etc. on the VM only, so the original machine still works and is your backup/fail safe. The process isn’t all that time consuming either – it mostly depends on your network and disk speed to clone the image, and convert it.

    The basic process is as follows:
    – Clone the physical server using VMware converter – you need to install the VMware converter software on a management host, and the agent on the physical server to clone. You need to provide proper login credentials, and a samba/cifs share that both machines can access. Full instructions are available here:

    – Once you have the vmware image (.vmdk and .vmx files), open them in VMware Server, Fusion, or whatever version of VMware you have installed / converted to.
    – Install vmware tools to make your life easier.
    – Just in case, install the Windows recovery console:
    – Run MergeIDE – this is the secret sauce that copies in and installs the proper basic IDE drivers in the Windows registry.
    – Shutdown the VM.
    – Copy vmdk files over to Oracle VM server (or Xen flavor of choice)
    – Create a new windows VM in Oracle VM manager, power off after creating
    – Delete the created disk file (System.img)
    – Convert the vmware disk image with qemu-img: qemu-img convert vmwarefile.vmdk -O raw System.img
    – Start vm in Oracle VM, let it detect devices and install drivers
    – Use and enjoy being unbound from crappy old hardware!


    Cheat code to unlock all songs in Guitar Hero World Tour

    If you’re like me, you just brought home Guitar Hero 4, and want to play the songs listed on the box. Unfortunately, it comes with only about half of the available songs unlocked. I found this elsewhere, although on other sites it’s not too obvious that this is the code you’re looking for.

    To unlock all songs:
    Blue, Blue, Red, Green, Green, Blue, Blue, Yellow.

    This was tested on my Wii, and should work on all other consoles as well. You get there by going to Options->Cheats->Enter new code. Once you enter it, you have to find it in the list and turn it “on”. I.e., if you just enter the code, the cheat is by default off.

    Other cheat codes can be found here:


    Use iDisk to synchronize IM Chat Logs

    Like email, my IM chat logs have become a critical reference and database for me. With both OS X’s Spotlight and Adium’s chat transcripts, searching for a conversation in these logs is quite easy. However, not so much so when I’m on my laptop at home, and the chat took place on my work computer. I am looking into services that act as a proxy and store your chat logs on their servers – but these don’t have the search interface I’m used to, and I’m not sure I’m ready for yet another company to have access to my personal communications.

    So, here’s the procedure I am using to sync my logs using my iDisk included with my MobileMe subscription. It is pretty rudimentary, but works nicely for what I want it to do.

    Here are the relevant references I used when putting this process together, as I didn’t feel like spending much time on this:

    First, I built the latest rsync based on the link above – not that it’s really needed, but I figured it wasn’t a bad idea to have the latest, greatest, and most efficient.

    Second, you need the actual script, here’s mine, with some names removed:


    # Sync all data from Chat logs to MobileMe iDisk

    export LOG=/Users/myname/idisk.log
    rm -f $LOG
    echo `date` > $LOG
    echo "Starting copy of Adium Chat History to iDisk..." >> $LOG
    export IDISK=/Users/myname/idiskmount
    export PWFILE=/Users/myname/bin/idiskpw.txt

    cat $PWFILE | mount_webdav -a0 $IDISK

    rsync -a -E -4 -u --exclude=.DS_Store --stats --progress /Users/myname/Library/"Application Support"/"Adium 2.0"/Users/Default/Logs/ ${IDISK}/ChatLog/ >> $LOG
    rsync -a -E -4 -u --exclude=.DS_Store --stats --progress ${IDISK}/ChatLog/ /Users/myname/Library/"Application Support"/"Adium 2.0"/Users/Default/Logs/ >> $LOG

    umount $IDISK

    echo "Backup of Chat Logs to iDisk complete..." >> $LOG
    echo "" >> $LOG
    echo `date` >> $LOG

    exit 0

    Now, I created a new job in Lingon to run the sync script. I named it com.myname.idisk.rsync, selected the script I had created (saved in ~/bin), and set it to run every 2 hours. After logging out and back in, everything was up and running. Status can be checked in the log file – after a long initial run, things run very quickly.

    You’ll want to repeat this process on any other computer you have for it to work properly. After spotlight reads in all the new files, you should have searchable chat logs on all your macs.

    Notes: You’ll need to create the appropriate directories here (idiskmount), and the funky webdav password file. This process is detailed in the macosxhints forum post, it involves typing a few non-std characters in your editor of choice.

    Firefox 3 – Opening a URL in new tab using command-enter

    Download/Install UseMetaKeys FireFox Extension

    In celebration of the release of Firefox 3 yesterday (and since I had forgotten to install my extension previously), I am posting this quick, dirty, and oh-so-helpful Firefox extension.
    Firefox has long been my preferred browser, although for the last year or so Safari had been gaining ground. Firefox 2 was just too bloated and slow, especially on OS X, and Safari was much faster. I always had a hard time choosing between the additional functionality Firefox provides and the simplicity and speed of Safari. With Firefox 3, that's pretty much over, and it is back in its place as my primary browser.
    However, one thing that has always bugged me about Firefox on Mac OS X is that you can't open a URL in a new tab using Command+Enter; it only works with Option(Alt)+Enter. I couldn't find a fix a year or two ago, and with some quick searching today, I still did not find a good way to reassign the key sequence. So I looked up a couple of how-tos, found a sample/donor project, and whipped up a simple extension with a single purpose: remap Command+Enter to open a URL in a new tab (i.e. when typing in the address or search bars). As expected, no support is provided, no warranty intended, etc., but if you're using Firefox on OS X, I highly suggest installing this.

    For some reason, I called it “UseMetaKeys”, and now, a year later, I am too lazy to change the name.

    Updating RedHat/CentOS Kickstart with new drivers

    At work, we have a kickstart setup we have been using for a couple years now, with probably 150 servers out in the field based on this install. Our distro of choice is CentOS, a RedHat clone, and we are at version 4.4. This is out of date now, but it still works great for our needs, as security fixes are regularly back-ported. It would also be a major pain to upgrade our existing installations, and/or support multiple OS versions.

    On to the issue at hand: we recently received some new server models that we’ll be supporting, both of which have hardware not supported in CentOS 4.4. One machine has a RealTek RTL-8110 ethernet chip, and the other has a 3Ware 9650SE RAID controller. As I later discovered, this presented two unique problems with the kickstart – without the proper storage controller driver, one server didn’t find any disk to install on, and without the proper network driver, the other server couldn’t connect to our kickstart server at all.

    So, as you might guess, there are two different solutions here. The more elegant one is for the storage controller: we can create a driver disk with the proper drivers and make it available on the network during the kickstart. The network driver is more difficult – we need to insert it into the initrd image we provide for PXE boot, and then somehow copy it over after installation (this is an updated driver, r8169.ko, that exists in CentOS 4.4 but doesn’t support our newer card).

    Adding a RAID/Storage Card Driver to the Kickstart:
    For the driver disk, things are especially easy, as 3Ware provides a driver-disk-compatible download, although not yet in the correct format to share over the network.

    The driver provided by 3ware includes the following:

    -rwxr-xr-x 1 stever stever 66B Oct 10 2007 modinfo*
    -rwxr-xr-x 1 stever stever 249B Oct 10 2007 modules.alias*
    -rw-r--r-- 1 stever stever 377K Oct 10 2007 modules.cgz
    -rwxr-xr-x 1 stever stever 28B Oct 10 2007 modules.dep*
    -rwxr-xr-x 1 stever stever 463B Oct 10 2007 modules.pcimap*
    -rwxr-xr-x 1 stever stever 192B Oct 10 2007 pci.ids*
    -rwxr-xr-x 1 stever stever 339B Oct 10 2007 pcitable*
    -rwxr-xr-x 1 stever stever 37B Oct 10 2007 rhdd*

    This is all you need on a driver disk, so all you need to do is create a disk image, and copy these files over:

    #Create a blank 2MB image (plenty of room for these few files)
    dd if=/dev/zero of=/root/driverdisk.img bs=1M count=2
    #Format the image with ext2
    mkfs -t ext2 -q /root/driverdisk.img
    #mount it and copy the files over
    mount -o loop /root/driverdisk.img /mnt/tmp
    cp /root/3ware/* /mnt/tmp/
    umount /mnt/tmp

    Now, copy the image over to somewhere accessible on your kickstart server, and update your ks.cfg with the following:
    driverdisk --source=nfs:servername:/vol/kickstart/CentOS-4.4-x86/drivers/driverdisk.img

    On network kickstart, anaconda should grab the driver, load it, and proceed normally. This should work for any non-network-card driver you need.

    Adding a Network Card Driver to the Kickstart:
    This is considerably more arduous, but not too difficult with the magic commands. Much of the information here comes from my friend Steve.

    There is no nicely packaged/built driver provided by RealTek, just some source code with instructions for compiling.

    I downloaded the driver here:

    After untar’ing/unzip’ing it, I ran make with the default settings, and then manually changed the kernel version to build an SMP driver as well (assuming you’re building on a single-CPU system):

    [root@lb4 ~]# cd r8169-6.006.00/
    [root@lb4 r8169-6.006.00]# make
    [root@lb4 r8169-6.006.00]# mv src/r8169.ko r8169.ko.2.6.9-42.EL
    [root@lb4 r8169-6.006.00]# make clean
    (edit src/Makefile, changing the line "KVER := $(shell uname -r)" to "KVER := 2.6.9-42.ELsmp")
    [root@lb4 r8169-6.006.00]# make
    [root@lb4 r8169-6.006.00]# mv src/r8169.ko r8169.ko.2.6.9-42.ELsmp

    Now you should have two .ko module files compatible with the different kernels – we need to get these inserted into the initrd image. An initrd is basically a disk image that holds various drivers and programs needed to pre-boot your system. It is usually a gzipped disk image file, so it's nothing too special. Basically, you need to gunzip & loop-mount the initrd image, gunzip/cpio the modules.cgz file inside it, make the required changes, and package everything back up.

    Here’s those steps in gory detail:

    mkdir /mnt/initrd
    mkdir -p /var/tmp/work/bootnet
    gunzip < /root/tftpboot/initrd.img > /var/tmp/work/bootnet/initrd.img.ungzipped
    cd /var/tmp/work/bootnet/
    mount -o loop initrd.img.ungzipped /mnt/initrd
    cd /mnt/initrd/modules
    gunzip < modules.cgz | (cd /var/tmp/work/bootnet && cpio -idv)
    cd /var/tmp/work/bootnet/2.6.9-42.EL/i686
    cp /root/r8169-6.006.00/r8169.ko.2.6.9-42.EL r8169.ko
    cd /var/tmp/work/bootnet/2.6.9-42.ELsmp/i686
    cp /root/r8169-6.006.00/r8169.ko.2.6.9-42.ELsmp r8169.ko
    cd /var/tmp/work/bootnet/
    find 2.6.9-42.EL 2.6.9-42.ELsmp | cpio -ov -H crc | gzip > /mnt/initrd/modules/modules.cgz
    #edit /mnt/initrd/modules/pcitable
    #add this:
    0x10ec 0x8167 "r8169" "Realtek|RTL-8110 Gigabit Ethernet"
    umount /mnt/initrd
    gzip < initrd.img.ungzipped > initrd.r8169.img
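    If you want to sanity-check the repack step without touching the real initrd, modules.cgz is just a gzipped crc-format cpio archive. Here's a throw-away demo using scratch paths and a dummy (empty) module file:

    ```shell
    # Build a dummy module tree, pack it the way anaconda expects
    # (cpio -H crc piped through gzip), then list the archive back.
    mkdir -p /tmp/cgz-demo/2.6.9-42.EL/i686
    touch /tmp/cgz-demo/2.6.9-42.EL/i686/r8169.ko
    cd /tmp/cgz-demo
    find 2.6.9-42.EL | cpio -o -H crc 2>/dev/null | gzip > modules.cgz
    gunzip < modules.cgz | cpio -it 2>/dev/null
    ```

    If the listing shows the kernel-version/arch/module path (e.g. 2.6.9-42.EL/i686/r8169.ko), your real repack should be structured the same way.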

    I had to boot up DSL and run lspci & lspci -n to get the ID to put in here – the third column shows 10ec:8167, which is what we need.
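    If you'd rather not hand-transcribe the ID, the vendor:device pair can be pulled out of `lspci -n` output and formatted into a pcitable entry with a bit of shell. The sample line below is illustrative only – the exact `lspci -n` output format varies between versions:

    ```shell
    # Extract "vendor:device" from a (sample) lspci -n line
    # and format it as a pcitable entry.
    sample='02:00.0 Class 0200: 10ec:8167 (rev 10)'
    id=$(echo "$sample" | grep -oE '[0-9a-f]{4}:[0-9a-f]{4}')
    printf '0x%s 0x%s "r8169" "Realtek|RTL-8110 Gigabit Ethernet"\n' \
        "${id%%:*}" "${id##*:}"
    ```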

    So now you can replace your initrd.img with the one you just created. The kickstart should work fine now, but upon reboot, the system will not be able to find the right driver. After the kickstart, you need to copy the .ko files to the appropriate directory – we added a line to our post-install script that copies the .ko file into /lib/modules/`uname -r`/kernel/drivers/net/ for us.

    Hopefully this is useful to someone – I couldn’t find a good, comprehensive guide on how to do this, and had to pull data from a bunch of different sources.

    Photos from our trip now online

    I finally fought the jet-lag, and got around to posting photos from most of our trip.

    All of them are available on my main Picasa page.

    And now for individual galleries:
    I added some underwater photos to our snorkeling gallery:

    A great Hike we went on in Taiwan – Caoling Historic Trail
    Along the Chao Phraya in Bangkok – River Cruise
    Pattaya, Thailand
    Misc Thailand Photos
    Around Taipei

    New photos from Malaysia

    I posted some new photos from our trip to Malaysia – I’ll be writing something about the trip in a day or two, but for now, enjoy the pictures :).

    Picasa Photo Gallery

    Gaggia Baby Pressure Adjustment with OPV Valve

    When I first brought home my Gaggia Baby espresso machine, it had been used for a number of years, and not cleaned very well. I tried a few shots from it, but no matter what I did, they always came through too fast. When I adjusted my grinder to the point that the burrs were touching, the espresso would come out slowly at first (and look good), and get gradually faster until the shot was done after about 15 seconds. If you are having these issues too, it's time to take a close look at the brew pressure of your machine.

    The OPV, or over-pressure valve (also known as a pressure relief valve), is fitted to most mid-level home machines with vibratory pumps. Vibe pumps aren’t really precision-adjustable devices – they are basically either on or off, and inside an espresso machine, whatever pressure they’re operating at is the pressure you’re getting through your espresso. Espresso machine marketing literature loves to advertise pump power – 16 bar in this one, 18 bar in that one – but this really doesn’t matter: espresso needs about 9 bar, give or take, to be brewed properly, and this is where the OPV comes in. As the pressure rises, the valve gradually opens to maintain a set pressure, usually about 9 bar in an espresso machine. Excess water is routed back to the reservoir, and the group head (and thus your espresso puck) sees the proper pressure. Gaggia Baby and Classic models are fitted with an adjustable OPV; other models don’t have one, but if you can find one, it’s a great upgrade for a Carezza or Espresso.

    As you can read in some other posts here, the Gaggia machines are very easy to take apart and repair/clean, so while going through that process, I spent extra time on cleaning / restoring the OPV valve. It turns out that mine was completely sealed shut from past years’ scale deposits, and it wasn’t opening at all. I took the valve apart completely, and soaked it in a durgol bath and in a citric acid bath about 5 or 6 times – there was a lot of crud on there, and it took a while to get it off. Afterwards, things looked pretty good, and it was very easy to put back together.

    When my machine re-assembly was complete, I hooked up my newly built pressure gauge and proceeded to dial in the correct pressure. This is a bit complicated because the valve is not easily accessible, and adjustment requires disassembly of the valve. So it’s: turn on the machine, read the pressure, turn it off, unscrew the valve, adjust (while not burning your hands on the hot boiler), reassemble, and repeat. It took 4 or 5 iterations to get it the way I wanted, moving about 1/2 turn of the adjustment nut each time. I think these come from the factory set very high, so I’m guessing you could improve your results just by loosening the nut a turn or two from factory-tight. Espresso is now much easier to make (and much better tasting), and with the pressure gauge, I know any problems are my fault, and not the machine’s.

    Details of the pressure gauge: I had a friend weld a piece of stainless tubing to a blank filter basket (could probably get this done at a welding shop for $10 and a six pack as well), and attached to a tee, a needle valve, and a liquid-filled pressure gauge. I can adjust the flow to approximate espresso flow rates, and dial in the pressure from there.

    Blood, Sweat, and Sledgehammers

    Call us crazy, but this weekend we started on our third bathroom remodel (for those keeping count, that’s the third of three bathrooms in the house). It’s always fun when you can grab a three-pound hammer and go to town on the walls, but it does draw both blood and sweat (I sliced up my hands even with gloves on). Now that we’re seasoned pros when it comes to this stuff ;), things move pretty quickly, and I’d say we spent 6 or 7 hours total on the demo, and basically stripped the bathroom to walls and floor only.

    In the weeks to come, we’ll be installing new plumbing (fixtures, etc), new tub, drywall/backer board, tile, new pedestal sink, wainscotting, etc. We’ll post photos and notes along the way, here are some pics of this weekend’s adventures.
