How to build a WormBase Virtual Machine
Revision as of 20:24, 4 October 2007
Overview
WormBase Virtual Machines are created for each release of the database. This process is almost entirely scripted, created from base virtual machines that run all the time and are automatically kept up-to-date with the production nodes.
To simplify the download and update process, WormBase virtual machines are split into modules. The primary virtual machine contains all software and configuration running under CentOS 5 (for servers) or Ubuntu 6.06 (for desktops). Databases are maintained as virtual disks (VMDKs).
Creation of a new VM requires three steps:
1. Syncing the software to the staging rsync module hosted on the main WormBase development site.
2. Creating VMDKs for the available databases.
3. Tarring and gzipping the result.
The process is described in more detail below.
Quick Start Guide
1. Log on to the host machine
2. Shutdown the appropriate guest.
vmware-cmd <cfg> shutdown
Tip: The vmware-cmd command provides many options for interacting with running VMXs. Run vmware-cmd --help for details.
3. Attach some virtual disks
./prepare_virtual_machine.sh WSXXX
4. Reboot the guest
vmware-cmd <cfg> start
5. Log on to the guest
vmware-cmd <cfg> getguestinfo "ip"
ssh wormbase@[ip]   (password: wormbase)
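Step 5 can be scripted so the guest IP is captured and reused directly. This is a sketch, not part of the official tooling: vmware-cmd is stubbed here so the pipeline can be shown end to end, the <cfg> path is an assumed example, and the output format ("getguestinfo(ip) = <address>") is assumed from VMware Server 1.x; on a real host, drop the stub.

```shell
#!/bin/sh
# Sketch: capture the guest IP from vmware-cmd output, then ssh in.
# vmware-cmd is stubbed for illustration; remove the stub on a real host.
vmware-cmd() { echo "getguestinfo(ip) = 192.168.1.50"; }   # stub for illustration

CFG=/usr/local/vmx/wormbase-live-server/wormbase.vmx       # assumed location
IP=$(vmware-cmd "$CFG" getguestinfo ip | awk '{print $NF}')
echo "$IP"
# then:  ssh "wormbase@$IP"   (password: wormbase)
```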
6. Update the software (optional: should already be up-to-date as this runs under cron)
~/wormbase/bin/pull_software.sh
7. Build the VMDKs
~/wormbase/bin/build_vmdks.sh
8. Shutdown the guest and package the new VMX from the host
./package_vmx.sh WS180 YYYY.MM.DD
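The packaging in step 8 can be sketched as a plain tar-and-gzip of the VMX directory together with the release's database disks. This is a hypothetical illustration of the layout, not the real package_vmx.sh: the tarball name and the placeholder files are stand-ins.

```shell
#!/bin/sh
# Hypothetical sketch of the packaging step (package_vmx.sh WS180 YYYY.MM.DD):
# bundle the VMX directory and the release databases into one dated tarball.
# Empty placeholder files stand in for the real VMX and VMDKs.
cd "$(mktemp -d)"
mkdir -p wormbase-live-server WS180-databases
: > wormbase-live-server/wormbase.vmx
: > WS180-databases/acedb.vmdk

tar czf wormbase-WS180-2007.10.04.tgz wormbase-live-server WS180-databases
tar tzf wormbase-WS180-2007.10.04.tgz
```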
Core Virtual Machines
Currently, I maintain one core virtual machine, running CentOS 5 and configured primarily for use as a server. It also includes a number of extras, superfluous for a server, that make it useful as a desktop.
This core virtual machine is essentially a virtualized production node, which makes it very convenient for development, testing, and even stop-gap emergency server recovery.
The key difference between the core machine and a production server is the location of the database directories. On live nodes, the databases are maintained in the local file system. In the virtual machines, databases are maintained as separate virtual disks (VMDKs). This makes it possible to update the software and databases independently, a great advantage when it comes to maintaining mirror sites.
Since databases are maintained as virtual disks, the virtual machine needs to know where to find them in order to launch. For the core virtual machine, the directory structure looks like this:
During build:
WSXXX/
|
--wormbase-live-server/wormbase.vmx
--databases/
  |
  --acedb/
  --autocomplete/
  --c_elegans/
  --other_species/
  --support/
--current_databases -> databases
Thus, the virtual machine expects the databases to be located at :
../current_databases/acedb/20GB.vmdk
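The relative layout above can be checked by hand. The sketch below recreates it in a scratch directory with a placeholder disk and confirms that the path the VMX resolves (../current_databases/...) actually reaches a VMDK; names are taken from the layout above.

```shell
#!/bin/sh
# Sketch: rebuild the expected layout in a scratch directory and check
# that the relative path seen from the VMX directory resolves.
cd "$(mktemp -d)"
mkdir -p wormbase-live-server databases/acedb
: > databases/acedb/20GB.vmdk              # placeholder for the real disk
ln -s databases current_databases

# The .vmx file lives in wormbase-live-server/, so resolve from there:
( cd wormbase-live-server && test -e ../current_databases/acedb/20GB.vmdk && echo "disk visible" )
```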
Updating the software
Software on the base virtual machines is kept in sync with the shell script ~wormbase/bin/pull_software.sh, which syncs against the production nodes and runs daily under a non-privileged cron job.
0 2 * * * /home/wormbase/bin/pull_software.sh
Building virtual disks for a new release
Shutdown the core virtual machine:
vmware-cmd <cfg> shutdown
Run the prepare_virtual_machine.sh script:
prepare_virtual_machine.sh WSXXX
This will set up a directory structure like this and untar some empty VMDKs:
wormbase-live-server/wormbase.vmx
WSXXX-databases/
current_databases -> WSXXX-databases
It's important that the databases maintain this relative structure or they will not be available to the VMX.
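What prepare_virtual_machine.sh arranges can be sketched as follows. This is a hypothetical illustration, not the real script (which also untars empty VMDKs); the release name is a stand-in.

```shell
#!/bin/sh
# Hypothetical sketch of the per-release setup: a fresh databases
# directory plus the current_databases symlink repointed at it.
RELEASE=WS181                                     # stand-in release name
cd "$(mktemp -d)"
mkdir -p "${RELEASE}-databases"
ln -sfn "${RELEASE}-databases" current_databases  # -f replaces any stale link
readlink current_databases
```

Because the symlink is relative, the whole tree can be moved without breaking the path the VMX resolves.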
Reboot and log on to the core virtual machine.
Run the database script.
~wormbase/bin/build_vmdks.sh WSXXX
You will need to be me. Sorry, I haven't fixed this yet.
Establishing the Virtual Machine
Build and install VMware Server (currently version 1.0.3):
cd ~/build
tar xzf ../src/vmware-server-1.0.3.tar.gz
sudo ./vmware-install.pl
For WormBase, I place the virtual machines in /usr/local/vmx.
Installing the OS
Fetch a suitable ISO. From the console interface, edit options for the CD-ROM. Attach the ISO and make sure the "Connect on Startup" option is checked.
Users and groups
WormBase virtual machines have a slightly different user and group arrangement than we have traditionally used.
The main user is WormBase User:
Login: wormbase
Password: wormbase
Home: /home/wormbase
To keep things copacetic with WormBase proper, I've created a symlink: /usr/local/wormbase -> /home/wormbase
Preparing a VMX for release
Periodically, it's good to shrink the size of the core WormBase virtual machines.
1. Start the guest OS.
2. In the guest, purge unneeded files such as access logs and tarballs.
3. Shrink the disk in the guest by first zeroing empty space
todd> sudo dd if=/dev/zero of=/empty_file
todd> rm /empty_file
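The zeroing trick works because the later shrink step can only reclaim blocks that contain zeros, so the filler file forces all free space to be zeroed before it is deleted. This bounded sketch demonstrates the mechanics safely: unlike the real command, it caps the write at 4 MB in /tmp instead of filling the whole filesystem.

```shell
#!/bin/sh
# Bounded demonstration of the zeroing step. The real command omits
# count= so dd runs until the filesystem is full; capped here at 4 MB.
dd if=/dev/zero of=/tmp/empty_file bs=1M count=4 2>/dev/null
SIZE=$(wc -c < /tmp/empty_file)
rm /tmp/empty_file
echo "$SIZE bytes of zeros written"
```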
4. Shutdown the guest
5. Defragment the disk from the VMWare console:
Edit options > Hard Disk > Defrag the disk
6. Restart the guest
7. Finish shrinking the disk using the vmware-toolbox:
todd> vmware-toolbox (select shrink)