Software Life Cycle: 1. Updating The Development Server
Revision as of 15:33, 31 May 2011
Contents
- 1 Overview
- 2 Document Conventions
- 3 Staging Pipeline Code
- 4 Running the Update Pipeline
- 4.1 Log Files
- 4.2 Executing the Pipeline
- 4.3 Update Steps
- 4.3.1 Mirror a new release
- 4.3.2 Create necessary directories
- 4.3.3 Unpack Acedb
- 4.3.4 Create BLAST databases
- 4.3.5 Create BLAT databases
- 4.3.6 Load genomic GFF annotations
- 4.3.7 Compile Gene Resources
- 4.3.8 Compile Ontology Resources
- 4.3.9 Unpack and Load the CLUSTALW database
- 4.3.10 Create files of commonly requested datasets
- 4.3.11 Last step: steps/update_symlinks.pl
- 4.4 Update Steps
- 4.4.1 Compile Orthology Resources
- 4.4.2 Compile Interaction Data
- 4.4.3 Create ePCR databases for available species
- 4.4.4 Build and Load GFF patches
- 4.4.5 Convert GFF2 into GFF3
- 4.4.6 Create a GBrowse-driven genetic map
- 4.4.7 Create a GBrowse-driven physical map
- 4.4.8 Mirror annotation files from Sanger to the FTP site
- 4.5 Compiled File Table
- 4.6 Update Records
Overview
This document describes the process of staging a new release of WormBase on the development server.
The automated staging pipeline consists of:
- a harness that handles logging, error trapping, and basic shared functions
- a suite of modules -- one per step -- that implement the step or make calls to helper scripts
- helper scripts in Perl or shell that assist in implementation
You can control the pipeline in several ways:
- Launch the full pipeline via the control script, the preferred and automated method.
- Run individual steps in the context of the pipeline using control scripts in steps/, useful if the pipeline fails at a specific point.
- Directly run helper scripts outside of the logging facilities of the pipeline, useful if you need to rebuild something quickly.
Document Conventions
The current development server is
wb-dev: wb-dev.oicr.on.ca (FQDN); aka: dev.wormbase.org
Where indicated, substitute the current release ID (e.g. WS226) for WSXXX, also written as ${RELEASE}.
System paths referred to in this document:
FTP      : /usr/local/ftp/pub/wormbase
WORMBASE : /usr/local/wormbase
ACEDB    : /usr/local/wormbase/acedb
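A quick illustration of how these path shorthands combine with a release ID; WS226 is just an example release, not necessarily the current one:

```shell
# Illustrative expansion of the path shorthands used throughout this document.
FTP=/usr/local/ftp/pub/wormbase
WORMBASE=/usr/local/wormbase
ACEDB=/usr/local/wormbase/acedb
RELEASE=WS226   # example release ID; substitute the real one

echo "$FTP/releases/$RELEASE"       # where the mirrored release lives
echo "$ACEDB/wormbase_$RELEASE"     # where acedb is unpacked
```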
Staging Pipeline Code
The update pipeline code is available in the website-admin module on github:
tharris> git clone git@github.com:WormBase/website-admin.git
tharris> cd website-admin/update

lib/        -- the shared library suite that handles updates
staging/    -- code related to staging data on the development site
production/ -- code related to releasing data/code into production
The contents are:
logs/          -- the logs directory for each step/update
bin/           -- Perl scripts for manually launching individual steps
README.txt     -- directory listing
updatelog.conf -- a configuration file for the update process
update.sh      -- master script that fires off each step of the pipeline
util/          -- various helper scripts for the update process
Running the Update Pipeline
Log Files
The Staging Pipeline creates informative logs for each step of the process. Logs are located at:
/usr/local/wormbase/logs/staging/WSXXX
master.log -- master log tracking all steps; useful for a meta-view of the pipeline. Contains INFO, WARN, ERROR, and FATAL messages.
master.err -- master error log tracking ERROR and FATAL messages encountered across all steps.
Each individual step creates its own log file capturing STDERR and STDOUT containing informative messages from the pipeline. These are useful for tracking progress and triaging problems. For example:
/usr/local/wormbase/logs/staging/WSXXX/build_blast_databases/
step.log -- step-specific log tracking everything from TRACE on up.
step.err -- step-specific error log tracking ERROR and FATAL messages. Good place to check if a step breaks.
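When a run fails, the fastest triage is usually to grep every step's error log for ERROR/FATAL lines. A self-contained sketch (a temp directory stands in for the real log tree, and the log contents are invented):

```shell
# Sketch: find which step broke by scanning all step.err files.
# $logs stands in for /usr/local/wormbase/logs/staging/WSXXX.
logs=$(mktemp -d)
mkdir -p "$logs/build_blast_databases" "$logs/unpack_acedb"
echo "INFO  building databases"       > "$logs/build_blast_databases/step.err"
echo "FATAL could not unpack tarball" > "$logs/unpack_acedb/step.err"

# Lists only the files that contain ERROR or FATAL messages.
grep -l -E 'ERROR|FATAL' "$logs"/*/step.err
```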
Executing the Pipeline
A single script fires off all steps of the process. You should run it inside a screen.
tharris> screen
tharris> ./stage_via_pipeline.pl WSXXX

(to disconnect your screen)
tharris> ^a ^d

(to resume your screen)
tharris> screen -r
Monitor progress of the update by following the master log file:
tharris> tail -f /usr/local/wormbase/logs/staging/WSXXX/master.log
screen command reference
Update Steps
The steps that comprise the pipeline, the control script that launches each one, and the module that implements it are listed below.
step | control script | helper scripts | module
---|---|---|---
Mirror a new release | steps/mirror_new_release.pl (manual) | | W::U::Staging::MirrorNewRelease
Unpack ACeDB | steps/unpack_acedb.pl (manual) | | W::U::Staging::UnpackAcedb
Create directories | steps/create_directories.pl | | W::U::Staging::CreateDirectories
Create BLAST databases | steps/create_blast_databases.pl | helpers/create_blastdb_nucleotide.sh, helpers/create_blastdb_protein.sh | W::U::Staging::CreateBlastDatabases
Create BLAT databases | steps/create_blat_databases.pl | | W::U::Staging::CreateBlatDatabases
Load Genomic GFF databases | steps/load_genomic_gff_databases.pl | | W::U::Staging::LoadGenomicGFFDatabases
Compile Gene Summary resources | steps/compile_gene_resources.pl | | W::U::Staging::CompileGeneResources
Compile Ontology resources | steps/compile_ontology_resources.pl | | W::U::Staging::CompileOntologyResources
Unpack and Load the ClustalW database | steps/unpack_clustalw_database.pl | | W::U::Staging::UnpackClustalWDatabase
Create commonly requested datasets | steps/dump_annotations.pl | | W::U::DumpAnnotations
Additional steps not yet listed in the table above:
- Compile orthology resources
- Compile interaction resources
- Create ePCR databases for select species
- Build and load GFF patches
- Convert GFF2 into GFF3
- Create a GBrowse-driven genetic map
- Create a GBrowse-driven physical map
- Update strains database
- Mirror annotation files from Sanger to the FTP site
Mirror a new release
New releases are mirrored directly from the Hinxton FTP site to the primary WormBase FTP site hosted on wb-dev:/usr/local/ftp. This process is run via cron but can also be run manually.
# Mirror the next incremental release newer than what we already have:
./steps/mirror_new_release.pl

# Or mirror a specific release to /usr/local/ftp/pub/wormbase/releases/WS150:
./steps/mirror_new_release.pl WS150
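The "next incremental release" can be determined from the release directories already mirrored. A rough, self-contained sketch of that idea (a temp directory stands in for ${FTP}/releases; the real logic lives in mirror_new_release.pl and may differ):

```shell
# Sketch: compute the next WSnnn release from the ones already on disk.
releases=$(mktemp -d)                      # stands in for ${FTP}/releases
mkdir -p "$releases/WS224" "$releases/WS225"

latest=$(ls "$releases" | sort | tail -1)  # highest release mirrored so far
num=${latest#WS}                           # strip the WS prefix -> 225
next="WS$((num + 1))"
echo "$next"
```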
Create necessary directories
Create staging directories for the update process.
Usage  : ./steps/create_directories.pl ${RELEASE}
Output : Directories in ${WORMBASE}/databases
Unpack Acedb
Unpack AceDB from the new release. Customize the new installation with skeletal files located at /usr/local/wormbase/website/classic/wspec. Files will be unpacked to /usr/local. Please make sure that there is sufficient space in this directory! You will need approximately 25 GB of disk space per release.
via pipeline : ./steps/unpack_acedb.pl ${RELEASE}
via helper   : helpers/unpack_acedb.sh ${RELEASE}
Input        : Files staged at ${FTP}/releases/${RELEASE}/species
Output       : Unpacked AceDB files in ${ACEDB}/wormbase_${RELEASE}
When complete, you should have a new acedb directory containing:
-- database
-- wgf
-- wquery
-- wspec
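A quick sanity check for that layout can be scripted; this sketch runs against a temp directory standing in for ${ACEDB}/wormbase_${RELEASE}:

```shell
# Sketch: verify the unpacked acedb directory has the expected subdirectories.
acedb=$(mktemp -d)   # stands in for ${ACEDB}/wormbase_${RELEASE}
mkdir -p "$acedb/database" "$acedb/wgf" "$acedb/wquery" "$acedb/wspec"

ok=yes
for d in database wgf wquery wspec; do
    [ -d "$acedb/$d" ] || { ok=no; echo "missing: $d"; }
done
echo "check: $ok"
```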
It is also good to check for a functional database: try connecting to acedb via a test script that creates a database handle. It may be necessary to restart the database:
> ps -ax | grep acedb             ## get the acedb process number
> kill -9 {AceDB proc number}     ## stop the current acedb process
> sudo /etc/init.d/xinetd restart
> saceclient localhost -port 2005
Create BLAST databases
Build BLAST databases. We automatically build nucleotide and protein BLAST DBs for species with genomic sequence and conceptual translations. In addition, for C. elegans and C. briggsae, we build blast databases for ESTs and genes.
Usage  : ./steps/create_blast_databases.pl ${RELEASE}
Input  : Genomic sequence and protein FASTA files staged at:
         ${FTP}/releases/species/${SPECIES}.${RELEASE}.genomic.fa.gz
         ${FTP}/releases/species/${SPECIES}.${RELEASE}.protein.fa.gz
         Gene and EST sequences derived from AceDB
Output : BLAST databases in ${WORMBASE}/databases/${RELEASE}/blast/${SPECIES}
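A dry-run sketch of the per-species loop this step performs: it prints, rather than executes, the kind of makeblastdb commands involved. The species list, filenames, and exact flags are illustrative; the real work lives in helpers/create_blastdb_nucleotide.sh and create_blastdb_protein.sh.

```shell
# Dry-run sketch: print the BLAST database build commands per species.
# Nothing is executed; $cmds collects the command lines for inspection.
RELEASE=WS226
cmds=""
for SPECIES in c_elegans c_briggsae; do
    cmds="$cmds
makeblastdb -dbtype nucl -in $SPECIES.$RELEASE.genomic.fa
makeblastdb -dbtype prot -in $SPECIES.$RELEASE.protein.fa"
done
echo "$cmds"
```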
Create BLAT databases
Build BLAT databases of genomic sequence.
Usage  : ./steps/create_blat_databases.pl ${RELEASE}
Input  : Genomic sequence FASTA files staged at:
         ${FTP}/releases/species/${SPECIES}/${SPECIES}.${RELEASE}.genomic.fa.gz
Output : BLAT .nib files in ${WORMBASE}/databases/${RELEASE}/blat/${SPECIES}
Load genomic GFF annotations
Convert GFF files into Bio::DB::GFF (GFF2) or Bio::DB::SeqFeature::Store (GFF3) databases.
Usage : ./steps/load_genomic_gff_databases.pl ${RELEASE}
Input : GFF and FASTA files staged at:
        GFF : ${FTP}/releases/species/${SPECIES}/${SPECIES}.${RELEASE}.gff[2|3].gz
        DNA : ${FTP}/releases/species/${SPECIES}/${SPECIES}.${RELEASE}.genomic.fa.gz
Compile Gene Resources
Create precompiled gene page files specifically to populate the Phenotype tables.
Usage  : ./steps/compile_gene_resource.pl ${RELEASE}
Input  : AceDB data
Output : Files in ${WORMBASE}/databases/${RELEASE}/gene
- gene_rnai_pheno.txt
- gene_xgene_pheno.txt
- phenotype_id2name.txt
- rnai_data.txt
- variation_data.txt
Compile Ontology Resources
TODO: This step relies on a number of external helper scripts that should ALL be folded into CompileGeneResources. They are located at
staging/helpers/gene_summary
Take the mirrored ontology files and compile them into the databases for the ontology searches.
Usage  : ./steps/compile_ontology_resources.pl ${RELEASE}
Input  : OBO files staged at /usr/local/ftp/pub/wormbase/releases/WSXXX/ONTOLOGY
         Compiled data files from the Compile Gene Resources step
Output : Files in ${WORMBASE}/databases/${RELEASE}/ontology:
- anatomy_association.RELEASE.wb
- gene_association.RELEASE.wb.ce
- gene_ontology.RELEASE.obo
- name2id.txt
- search_data.txt
- anatomy_ontology.RELEASE.obo
- gene_association.RELEASE.wb.cjp
- id2association_counts.txt (ontology/tree_lister)
- id2total_associations.txt (ontology/tree_lister)
- parent2ids.txt (ontology/tree_lister)
- gene_association.RELEASE.wb
- gene_association.RELEASE.wb.ppa
- id2name.txt (ontology/tree_lister)
- phenotype_association.RELEASE.wb
- gene_association.RELEASE.wb.cb
- gene_association.RELEASE.wb.rem
- id2parents.txt (ontology/tree_lister)
- phenotype_ontology.RELEASE.obo
Unpack and Load the CLUSTALW database
Usage  : ./steps/load_clustal_db.pl {WSRELEASE}
Input  : ${FTP}/releases/${RELEASE}/COMPARATIVE_ANALYSIS/wormpep${RELEASE}.clw.sql.bz2
Output : A new mysql database called clustal_${RELEASE}
Create files of commonly requested datasets
Usage  : ./steps/dump_annotations.pl {WSRELEASE}
Output : Datasets in ${FTP}/releases/${RELEASE}/annotations and species/annotations
The staging harness will automatically run scripts in annotation_dumpers/*. These scripts should abide by the following conventions:
1. Be located in update/staging/annotation_dumpers.
2. Be named either dump_species_* for species-level data (like brief IDs) or dump_resource_* for resource-level data (like laboratories).
3. Follow existing examples, including available parameters.
4. Dump to STDERR and STDOUT.

Notes:

1. dump_species_* scripts will be called for each species managed by WormBase and will end up in ${FTP_ROOT}/releases/[RELEASE]/species/[G_SPECIES]/annotation/[G_SPECIES].[RELEASE].[DESCRIPTION].txt; dump_resource_* scripts will be called once and end up in ${FTP_ROOT}/datasets-wormbase/wormbase.[RELEASE].[DESCRIPTION].txt.
2. The filename will be created by stripping off dump_species_ or dump_resource_. Species-specific resources will be prepended with the appropriate species.
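The filename-derivation convention above amounts to plain string stripping, which can be illustrated with shell parameter expansion. The dumper name dump_species_brief_ids.pl is hypothetical, invented here for the example:

```shell
# Sketch of the naming convention: strip the dump_species_ (or
# dump_resource_) prefix from the dumper's filename to get [DESCRIPTION].
RELEASE=WS226            # example release
G_SPECIES=c_elegans      # example species

dumper=dump_species_brief_ids.pl   # hypothetical dumper script name
desc=${dumper#dump_species_}       # strip prefix  -> brief_ids.pl
desc=${desc%.pl}                   # strip suffix  -> brief_ids
echo "$G_SPECIES.$RELEASE.$desc.txt"
```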
Last step: steps/update_symlinks.pl
update_symlinks.pl WSXXX
This script creates symlinks in species/ pointing at the WSXXX files. In addition, it creates *current* symlinks pointing at the provided release.
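The *current* symlink flip can be sketched as below; a temp directory stands in for the real databases tree, and the layout is illustrative rather than the script's actual behavior:

```shell
# Sketch: repoint a "current" symlink at the newly staged release.
root=$(mktemp -d)                 # stands in for ${WORMBASE}/databases
mkdir -p "$root/WS225" "$root/WS226"

# -sfn replaces an existing link in one step rather than failing or
# creating a link inside the old target directory.
ln -sfn "$root/WS226" "$root/current"
readlink "$root/current"
```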
TODO: This also needs to handle
- mysql databases
- acedb symlink
Update Steps
Compile Orthology Resources
Create precompiled files for the orthology and disease display and search pages.
Usage  : ./steps/compile_gene_data.pl ${RELEASE}
         ./steps/compile_ortholog_data.pl ${RELEASE}
         ./steps/compile_orthology_resources.pl ${RELEASE}
Input  : AceDB data; omim.txt and morbidmap files from OMIM; ontology resource files
Output : Files in ${WORMBASE}/databases/${RELEASE}/orthology (the page that uses each file is listed in parentheses)
- all_proteins.txt
- disease_page_data.txt
- disease_search_data.txt (orthology/search)
- full_disease_data.txt
- gene_association.$RELEASE.wb.ce
- gene_id2go_bp.txt
- gene_id2go_mf.txt
- gene_id2omim_ids.txt (orthology/disease)
- gene_id2phenotype.txt
- gene_list.txt
- go_id2omim_ids.txt (orthology/disease)
- go_ids2descendants.txt (orthology/gene)
- hs_ensembl_id2omim.txt (orthology/gene)
- hs_proteins.txt
- id2name.txt (orthology/disease, orthology/gene)
- last_processed_gene.txt
- name2id.txt (orthology/disease)
- omim2disease.txt
- omim_id2all_ortholog_data.txt (orthology/disease)
- omim_id2disease_desc.txt (orthology/disease)
- omim_id2disease_name.txt (orthology/disease)
- omim_id2disease_notes.txt (orthology/disease)
- omim_id2disease_synonyms.txt
- omim_id2disease.txt (orthology/gene)
- omim_id2gene_name.txt (orthology/search)
- omim_id2go_ids.txt
- omim_id2phenotypes.txt
- omim_reconfigured.txt
- ortholog_other_data_hs_only.txt
- ortholog_other_data.txt
Compile Interaction Data
DEPRECATED. NO NEED TO MIGRATE THIS INTO THE NEW STAGING PIPELINE.
Create precompiled gene page files specifically to populate interaction listing pages.
Usage  : ./steps/compile_interaction_data.pl ${RELEASE}
Input  : AceDB interaction data
Output : Files in ${WORMBASE}/databases/${RELEASE}/interaction
- compiled_interaction_data.txt
Create ePCR databases for available species
Build ePCR databases for each species.
Usage  : ./steps/create_epcr_databases.pl ${RELEASE}
Input  : Genomic sequence files mirrored from Sanger to ${FTP}/genomes/${SPECIES}/sequences/dna/${SPECIES}.dna.fa.gz
Output : ePCR databases in ${WORMBASE}/databases/${RELEASE}/epcr
Build and Load GFF patches
Create and load a number of patches for the c_elegans GFF database, including protein motifs and genetic limits.
Usage  : ./steps/load_gff_patches.pl ${RELEASE}
Input  : Files created in ${FTP}/genomes/c_elegans/genome_feature_tables/GFF2
Output : The files created above
Convert GFF2 into GFF3
Notes...
Usage: ./steps/convert_gff2_to_gff3.pl ${RELEASE}
Create a GBrowse-driven genetic map
Notes...
Usage: ./steps/load_gmap_gffdb.pl ${RELEASE}
Create a GBrowse-driven physical map
Notes...
Usage: ./steps/load_pmap_gffdb.pl {WSRELEASE}
Mirror annotation files from Sanger to the FTP site
DEPRECATED. NO NEED TO MIGRATE THIS TO THE NEW STAGING PIPELINE
Notes...
Usage: ./steps/mirror_annotations.pl {WSRELEASE}