Software Life Cycle: 1. Updating The Development Server

This is a quick description of how to update the development server with a new release of the database. In general, the update process involves mirroring a large number of files from Sanger, unpacking them, and in many cases massaging them into a format suitable for driving the website.

Development Server

The current development server is

 brie3: brie3.cshl.org / dev.wormbase.org

System paths referred to in this document:

      FTP : ~ftp/pub/wormbase (on brie3 this is /usr/local/)
 WORMBASE : /usr/local/wormbase
    ACEDB : /usr/local/acedb

Update Pipeline Code

The update pipeline code is available in the WormBase admin module:

 brie3> svn co svn+ssh://brie3.cshl.org/usr/local/svn/admin

Change into the development directory:

 brie3> cd admin/update/development 

The contents are:

  logs/      -- the logs directory for each step/update
  README.txt -- nothing 
  steps/     -- Perl scripts that launch each step
  Update.pm  -- the top level module for the update process
  Update/    -- Perl modules corresponding to each step
  updatelog.conf  -- a configuration file for the update process
  update.sh  -- master script that fires off each step of the pipeline
  util/       -- various helper scripts for the update process

A single shell script fires off all steps of the process.

 brie3> ./update.sh

The steps it performs are:

  • Purge disk space
  • Create necessary directories
  • Mirror and unpack ACeDB from Sanger
  • Mirror ontology files from Sanger
  • Compile ontology resources for the site
  • Create BLAST databases for available species
  • Create BLAT databases for available species
  • Create ePCR databases for select species
  • Load genomic GFF databases for available species
  • Build and load GFF patches
  • Convert GFF2 into GFF3
  • Create a GBrowse-driven genetic map
  • Create a GBrowse-driven physical map
  • Create dump files of common datasets
  • Load the CLUSTALW database
  • Mirror annotation files from Sanger to the FTP site

Each step is described below.
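
For orientation, update.sh is just a thin wrapper that runs the step scripts in sequence. The sketch below illustrates the idea only and is not the real script; the step script names are taken from the steps/ listing and Usage lines on this page, and the release argument is an example.

 #!/bin/sh
 # Sketch only: run each step of the update pipeline in order for one release.
 # The real update.sh is run without arguments; the release is taken as an
 # argument here purely for clarity.
 RELEASE=$1   # e.g. WS198
 ./steps/purge_disk_space.pl                        || exit 1
 ./steps/create_directories.pl          $RELEASE    || exit 1
 ./steps/mirror_acedb.pl                $RELEASE    || exit 1
 ./steps/mirror_ontology_files.pl       $RELEASE    || exit 1
 ./steps/compile_ontology_resources.pl  $RELEASE    || exit 1
 ./steps/create_blast_databases.pl      $RELEASE    || exit 1
 ./steps/create_blat_databases.pl       $RELEASE    || exit 1
 ./steps/create_epcr_databases.pl       $RELEASE    || exit 1
 ./steps/load_genomic_gffdb.pl          $RELEASE    || exit 1
 # (The GFF-patch step runs here; its script name is not listed in this document.)
 ./steps/convert_gff2_to_gff3.pl        $RELEASE    || exit 1
 ./steps/load_gmap_gffdb.pl             $RELEASE    || exit 1
 ./steps/load_pmap_gffdb.pl             $RELEASE    || exit 1
 # (The common-dataset dump step runs here; its script name is not listed either.)
 ./steps/load_clustal_db.pl             $RELEASE    || exit 1
 ./steps/mirror_annotations.pl          $RELEASE    || exit 1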

Check logs and resolve issues

Log files are written under admin/update/development/logs, and the logging levels are set to record ERRORS and WARNINGS.
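
A quick way to scan a release's logs for problems, run from the admin/update/development directory (WS198 is an example release):

 brie3> grep -n -E 'ERROR|WARNING' logs/WS198/*.log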

Update Steps

Purge Disk Space

Remove obsolete files from the staging FTP site on the development server. These files have already been mirrored to the production FTP site and no longer need to be kept on the development server.

 Usage  : ./steps/purge_disk_space.pl

Create necessary directories

Create staging directories for the update process.

  Usage : ./steps/create_directories.pl {WSRELEASE}
 Output : Directories in ${WORMBASE}/databases (primarily)
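
A quick check that the staging directories exist after this step (WS198 is an example release; the path follows the table at the top of this page):

 brie3> ls -l /usr/local/wormbase/databases/WS198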

Mirror and unpack ACeDB from Sanger

Mirror and unpack the new release of the database from Sanger, then add the appropriate control files for the new acedb database (serverpasswd.wrm, passwd.wrm, serverconfig.wrm), pulled from the checked-out development source (/usr/local/wormbase/wspec).

Files will be mirrored and unpacked to /usr/local. Please make sure that there is sufficient space in this directory! You will most likely need approximately 25 GB of disk space. Possible places to free up disk space:

 /usr/local/mysql/data
 /usr/local/acedb/tmp
 ~{you}/mp3s

  Usage : ./steps/mirror_acedb.pl {WSRELEASE}
  Input : Files mirrored from Sanger to ${ACEDB}/tmp
 Output : Unpacked AceDB files in ${ACEDB}/elegans_{WSRELEASE}
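
Before starting the mirror, confirm that the target volume actually has the ~25 GB of headroom mentioned above:

 brie3> df -h /usr/local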

Note: This can take a *long* time. You might want to run this step in a screen session:

 brie3> screen
 brie3> ./steps/mirror_acedb.pl WS198
   (to disconnect your screen)
 brie3> ^a ^d
   (to resume your screen)
 brie3> screen -r   
 
 screen command reference

When complete, check to make sure that the following directory and symlink exist:

 ${ACEDB}/elegans -> elegans_{WSRELEASE}
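
One way to check (WS198 is an example release):

 brie3> ls -ld /usr/local/acedb/elegans

The listing should show the symlink pointing at elegans_WS198 (or whichever release was just unpacked).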

Troubleshooting notes:

  • gzip version incompatibilities may break this step.

Mirror ontology files from Sanger

Mirror OBO files from Sanger. These are necessary for the ontology searches.

  Usage : ./steps/mirror_ontology_files.pl {WSRELEASE}
  Input : none
 Output : Files mirrored to ${WORMBASE}/databases/{WSRELEASE}/ontology

Compile ontology resources for the site

Take the mirrored files and compile them into the databases for the ontology searches.

  Usage : ./steps/compile_ontology_resources.pl {WSRELEASE}
  Input : OBO files mirrored earlier in ${WORMBASE}/databases/{WSRELEASE}/ontology
 Output : Files written to ${WORMBASE}/databases/{WSRELEASE}/ontology:

  • anatomy_association.{WSRELEASE}.wb
  • anatomy_ontology.{WSRELEASE}.obo
  • gene_association.{WSRELEASE}.wb
  • gene_association.{WSRELEASE}.wb.cb
  • gene_association.{WSRELEASE}.wb.ce
  • gene_association.{WSRELEASE}.wb.cjp
  • gene_association.{WSRELEASE}.wb.ppa
  • gene_association.{WSRELEASE}.wb.rem
  • gene_ontology.{WSRELEASE}.obo
  • phenotype_association.{WSRELEASE}.wb
  • phenotype_ontology.{WSRELEASE}.obo
  • id2association_counts.txt
  • id2name.txt
  • id2parents.txt
  • name2id.txt
  • parent2ids.txt
  • search_data.txt
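
A quick sanity check on the compiled resources (WS198 is an example release; any of the files above can be inspected the same way):

 brie3> cd /usr/local/wormbase/databases/WS198/ontology
 brie3> wc -l name2id.txt id2parents.txt search_data.txt
 brie3> head -3 name2id.txt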

Create BLAST databases for available species

Notes...

 Usage: ./steps/create_blast_databases.pl {WSRELEASE}
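
The step script builds the databases for every species automatically. For reference only, rebuilding a single nucleotide BLAST database by hand with NCBI's legacy formatdb would look roughly like this (the filenames are hypothetical, and this is not necessarily the exact tool or flags the script uses):

 brie3> formatdb -i c_elegans.WS198.dna.fa -p F -n c_elegans.WS198.dna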

Create BLAT databases for available species

Notes...

 Usage: ./steps/create_blat_databases.pl {WSRELEASE}
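
For reference only, BLAT normally serves sequence from .2bit files, which can be rebuilt by hand with faToTwoBit from the UCSC/BLAT tool set (hypothetical filenames; the step script may work differently):

 brie3> faToTwoBit c_elegans.WS198.dna.fa c_elegans.WS198.2bit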

Create ePCR databases for available species

Notes...

 Usage: ./steps/create_epcr_databases.pl {WSRELEASE}


Load genomic GFF databases for available species

Mirror genomic GFF files from Sanger and load them into the genomic GFF databases.

 Usage : ./steps/load_genomic_gffdb.pl {WSRELEASE}
 Input : GFF : ${FTP}/${SPECIES}/genome_feature_tables/GFF2/${SPECIES}.${VERSION}.gff.gz
         DNA : ${FTP}/${SPECIES}/sequences/dna/${SPECIES}.${VERSION}.dna.fa.gz 
 Output: (This script both creates/mirrors and uses the files above).
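
The step script drives the mirroring and loading itself. For reference only, loading a single species into a Bio::DB::GFF MySQL database by hand is usually done with BioPerl's bp_load_gff.pl; the database and file names below are hypothetical, and the script's actual invocation may differ:

 brie3> bp_load_gff.pl --create -d c_elegans \
          --fasta c_elegans.WS198.dna.fa c_elegans.WS198.gff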


Troubleshooting notes:

  • File and directory names must be consistent with the format specified in Update.pm (circa line 36).
  • If necessary (e.g., if files were incorrectly named), download them manually from the source site, uncompress them, and rename them correctly.
  • Progress can be monitored by checking the log file admin/update/development/logs/{WSRELEASE}/load genomic feature gff databases.log and by watching the MySQL database files being built in /usr/local/mysql/data.

Build and Load GFF patches

Notes...


Convert GFF2 into GFF3

Notes...

 Usage: ./steps/convert_gff2_to_gff3.pl {WSRELEASE}

Create a GBrowse-driven genetic map

Notes...

 Usage: ./steps/load_gmap_gffdb.pl {WSRELEASE}

Create a GBrowse-driven physical map

Notes...

 Usage: ./steps/load_pmap_gffdb.pl {WSRELEASE}

Create dump files of common datasets

Notes...


Load the CLUSTALW database

Notes...

 Usage: ./steps/load_clustal_db.pl {WSRELEASE}

Mirror annotation files from Sanger to the FTP site

Notes...

 Usage: ./steps/mirror_annotations.pl {WSRELEASE}
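
If a one-off manual mirror is ever needed, wget can pull a release directory recursively; the URL below only illustrates the form of such a command and is not a confirmed path:

 brie3> wget -m -np ftp://ftp.sanger.ac.uk/pub/wormbase/WS198/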

