How to build a WormBase Virtual Machine

Latest revision as of 23:33, 13 August 2010

Overview

WormBase Virtual Machines are created for each release of the database. The process is almost entirely scripted: each release is built from base virtual machines that run continuously and are automatically kept in sync with the production nodes.

To simplify the download and update process, WormBase virtual machines are split into modules. The primary virtual machine contains all software and configuration running under CentOS 5 (for servers) or Ubuntu 6.06 (for desktops). Databases are maintained as virtual disks (VMDKs).

Creation of a new VM requires three steps:

1. Syncing the software to the staging rsync module hosted on the main WormBase development site.

2. Creation of VMDKs for available databases.

3. Tarring and gzipping.

The process is described in more detail below.

Quick Start Guide

1. Log on to the host machine

2. Shut down the appropriate guest.

vmware-cmd <cfg> shutdown

Tip: vmware-cmd has many options for interacting with running VMXs. Try vmware-cmd --help for more information.

3. Attach some virtual disks

./prepare_virtual_machine.sh WSXXX

4. Reboot the guest

vmware-cmd <cfg> start

5. Log on to the guest

vmware-cmd <cfg> getguestinfo "ip"
ssh wormbase@[ip] ; pass = wormbase
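Steps 4 and 5 can be glued together with a small polling helper, since a freshly started guest may take a moment before it reports an IP. The helper below is an assumption (not part of the distributed scripts); in real use the polled command would be `vmware-cmd <cfg> getguestinfo "ip"`:

```shell
# wait_for_ip: run a command repeatedly until it prints a nonempty value,
# then echo that value. Hypothetical helper; in practice cmd would be:
#   vmware-cmd <cfg> getguestinfo "ip"
wait_for_ip() {
    cmd=$1
    tries=${2:-30}
    i=0
    while [ "$i" -lt "$tries" ]; do
        ip=$(eval "$cmd" 2>/dev/null)
        if [ -n "$ip" ]; then
            echo "$ip"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1            # guest never reported an IP
}

# Assumed usage:  ssh "wormbase@$(wait_for_ip 'vmware-cmd <cfg> getguestinfo ip')"
```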

6. Update the software (optional: should already be up-to-date as this runs under cron)

~/wormbase/bin/pull_software.sh

7. Build the VMDKs

~/wormbase/bin/build_vmdks.sh [VERSION]

Note: you will need root privileges on the VM to mount/unmount the VMDKs, and ssh access to transfer the databases from brie3!

8. Shut down the guest and package the new VMX from the host

./package_vmx.sh WS180 YYYY.MM.DD
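The naming convention implied by the arguments above can be sketched as a small helper. This is a hypothetical reconstruction of what package_vmx.sh might do, under the assumption that it tars the release directory into wormbase-VERSION.DATE.tgz; the real script's behavior may differ, so the sketch only prints the command it would run:

```shell
# Hypothetical sketch of package_vmx.sh's naming step (assumption: the real
# script tars the release directory into wormbase-VERSION.DATE.tgz).
# Usage: package_vmx WS180 2008.01.15
package_vmx() {
    version=$1                               # e.g. WS180
    reldate=$2                               # e.g. 2008.01.15
    name="wormbase-${version}.${reldate}"
    echo "tar czf ${name}.tgz ${version}/"   # dry run: print, don't execute
}
```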

Core Virtual Machines

Currently, I maintain a single core virtual machine, running CentOS 5 and configured primarily for use as a server. It also contains a number of extras that make it useful as a desktop.

This core virtual machine is essentially a production node virtualized. This makes it very convenient for development, testing, and even stop-gap emergency server recovery.

The key difference between the core machine and a production server is the location of the database directories. On live nodes, the databases are maintained on the local file system. In the virtual machines, databases are maintained as separate virtual disks (VMDKs). This makes it possible to update the software and databases independently, a great advantage when it comes to maintaining mirror sites.

Since databases are maintained as virtual disks, the virtual machine needs to know where to find them in order to launch. For the core virtual machine, the directory structure looks like this:

During build:

WSXXX/
    |
    --wormbase-live-server/wormbase.vmx
    --databases/
      |
       --acedb/
       --autocomplete/
       --c_elegans/
       --other_species/
       --support/
    --current_databases -> databases

Thus, the virtual machine expects the databases to be located at:

 ../current_databases/acedb/20GB.vmdk
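Before starting the VMX it is worth confirming the release directory matches this layout. The helper below is an assumption (not one of the distributed scripts); the paths it checks are taken from the tree above:

```shell
# check_layout: verify that a WSXXX release directory has the structure the
# VMX expects. Hypothetical helper; paths come from the tree shown above.
check_layout() {
    dir=$1
    [ -f "$dir/wormbase-live-server/wormbase.vmx" ] || { echo "missing wormbase.vmx"; return 1; }
    [ -d "$dir/databases" ]                         || { echo "missing databases/"; return 1; }
    [ -L "$dir/current_databases" ]                 || { echo "missing current_databases symlink"; return 1; }
    echo "layout ok"
}

# Assumed usage:  check_layout /usr/local/vmx/WS180
```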


Users and groups

The core virtual machines have the following users and groups:

The main user is WormBase User:

Login: wormbase
pass: wormbase
home: /home/wormbase

The root password is wermbace. Don't tell anyone.

Updating the software

Software on the base virtual machines is kept in sync by the shell script ~wormbase/bin/pull_software.sh. It syncs against the production nodes and runs once per day under the non-privileged user's cron:

0 2 * * * /home/wormbase/bin/pull_software.sh
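pull_software.sh is essentially an rsync wrapper. The module name and flags below are assumptions; so that the sketch is self-contained and runnable, it is demonstrated against local temporary directories rather than the real rsync module:

```shell
# Sketch of a pull_software.sh-style sync. In production the source would be
# the staging rsync module on the development site, e.g. (module name is an
# assumption):
#   rsync -az --delete dev.wormbase.org::staging/ /usr/local/wormbase/
# Demonstrated here with local directories:
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo 'use lib "/usr/local/wormbase/extlib";' > "$SRC/perl.startup"

if command -v rsync >/dev/null 2>&1; then
    rsync -a --delete "$SRC/" "$DST/"   # mirror SRC into DST, pruning extras
else
    cp -a "$SRC/." "$DST/"              # fallback so the sketch still runs
fi
```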

Updating the databases

"Updating" the databases for the core virtual machine is a bit of a misnomer: what we really do is populate new, empty virtual disks with the current databases. Here's how.

Shut down the core virtual machine:

vmware-cmd <cfg> shutdown

Run the prepare_virtual_machine.sh script:

 prepare_virtual_machine.sh WSXXX

This will set up a directory structure like this and untar some empty VMDKs:

 wormbase-live-server/wormbase.vmx
 databases/
 current_databases -> databases

It's important that the databases maintain this relative structure or they will not be available to the VMX.
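The layout step can be sketched as follows. This is a hypothetical reconstruction of what prepare_virtual_machine.sh does for the directory structure; the untarring of the empty VMDK templates is elided:

```shell
# Hypothetical sketch of prepare_virtual_machine.sh's layout step. The real
# script also untars empty VMDK templates into databases/.
prepare_layout() {
    release=$1                                   # e.g. WSXXX
    mkdir -p "$release/wormbase-live-server" "$release/databases"
    # Relative symlink, so the structure survives being moved or tarred;
    # the VMX resolves databases via ../current_databases/.
    ln -sfn databases "$release/current_databases"
}
```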

Reboot and log on to the core virtual machine.

Run the database build script:

~wormbase/bin/build_vmdks.sh WSXXX

You will need to be me. Sorry, I haven't fixed this yet.

House-cleaning of the core virtual machine

It's good to periodically clean the guest OS. This includes defragging and purging temporary files to keep the size of the virtual machine in check. Here's a general outline.

1. Start the guest OS.

2. In the guest, purge things like access logs, tarballs, etc.

3. Shrink the disk in the guest by first zeroing empty space

 todd> sudo dd if=/dev/zero of=/empty_file
 todd> rm /empty_file
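The zero-fill step above can be made slightly safer by syncing before the delete. The sketch below is runnable because it writes only 1 MB to a temporary directory; on the actual guest you would omit `count=` and write to `/empty_file` so dd fills all free space and exits when the disk is full:

```shell
# Zero free space so the later shrink step can reclaim it. count=1 limits
# this demo to 1 MB; on the guest, omit count and let dd run to disk-full.
WORKDIR=$(mktemp -d)
dd if=/dev/zero of="$WORKDIR/empty_file" bs=1M count=1 2>/dev/null
sync                           # flush the zeros to disk before deleting
rm "$WORKDIR/empty_file"
```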

4. Shut down the guest

5. Defragment the disk from the VMWare console:

   Edit options > Hard Disk > Defrag the disk

6. Restart the guest

7. Finish shrinking the disk using the vmware-toolbox:

 todd> vmware-toolbox (select shrink)