WormBase-Caltech Weekly Calls

= Previous Years =

[[WormBase-Caltech_Weekly_Calls_2009|2009 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2011|2011 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2012|2012 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2013|2013 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2014|2014 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2015|2015 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2016|2016 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2017|2017 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2018|2018 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2019|2019 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2020|2020 Meetings]]
= 2021 Meetings =

[[WormBase-Caltech_Weekly_Calls_January_2021|January]]

[[WormBase-Caltech_Weekly_Calls_February_2021|February]]

[[WormBase-Caltech_Weekly_Calls_March_2021|March]]
  
== Feb 4th, 2021 ==

==== How the "duplicate" function works in OAs with respect to object IDs (Ranjana and Juancarlos) ====

* A word of caution: when you duplicate a row in an OA that assigns object IDs (e.g., WBGenotype00000014), the object ID gets duplicated as well; it does not advance to the next ID the way the PGID does (see the sketch below)
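A toy illustration of the distinction (a hypothetical helper, not the OA's actual code): advancing an ID increments its numeric suffix, whereas the duplicate function copies the ID string verbatim.

<pre>
# Hypothetical sketch (Python) of what "advancing" a WB object ID would mean;
# the OA duplicate function does NOT do this -- it copies the ID verbatim.
import re

def next_object_id(object_id):
    """Increment the numeric suffix of an ID like WBGenotype00000014."""
    prefix, number = re.match(r"([A-Za-z]+)(\d+)$", object_id).groups()
    return prefix + str(int(number) + 1).zfill(len(number))

print(next_object_id("WBGenotype00000014"))  # WBGenotype00000015
# Duplicating a row instead yields a second row that also carries
# WBGenotype00000014, which must then be corrected by hand.
</pre>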
 
  
== January 28, 2021 ==

=== String Matching Pipelines ===

* Old pipelines on textpresso-dev are not compatible with the new TPC system
* The new TPC API does not support string matching
* A new Python library (wbtools), used by the variation pipeline, supports batch processing of WB literature and regex matching (see the sketch below)
* Email extraction
** No longer needed for the concise description community curation tracker
** Juancarlos, Valerio, and Chris will meet to establish a new, streamlined email address extraction pipeline
* Old AFP Display CGI (http://tazendra.caltech.edu/~postgres/cgi-bin/author_fp_display.cgi)
** Still uses old textpresso-dev; is it still needed? Probably not. Karen can look at whether there's anything there worth keeping (nothing critical)
* Valerio will determine priorities (e.g., antibody work first) and send issues to curators as needed
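For a flavor of what batch regex matching over the literature looks like, here is a self-contained sketch; the corpus and patterns are made up, and this is not the actual wbtools API.

<pre>
# Sketch of batch regex matching over paper fulltexts (Python).
# The corpus and patterns are illustrative; wbtools' real interface differs.
import re

papers = {  # hypothetical paper ID -> fulltext
    "WBPaper00000001": "Contact the author at jdoe@example.edu for strains.",
    "WBPaper00000002": "The e1370 allele was outcrossed six times.",
}

patterns = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "allele": re.compile(r"\b[a-z]{1,3}\d+\b"),  # crude allele-style token
}

for paper_id, text in papers.items():
    for name, pattern in patterns.items():
        for match in pattern.finditer(text):
            print(paper_id, name, match.group())
</pre>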
 
  
=== Tracking interlibrary loan requests ===

* Raymond: Do we have a common place to track interlibrary loan requests? Could be useful for the Alliance/WB
=== Cengen data ===

Eduardo: I got the CeNGEN 2020 data over Christmas, and I have repackaged the full release data (all 100k cells with annotations, but prior to the SoupX processing) in an h5ad file, which I have made available here: https://wormcells.com/ (see the loading sketch below)

* This is the repo that builds that website: https://github.com/Munfred/wormcells-data
* I figured out a way to spin up the interface I made for doing differential expression through Google Colab, so now people can run it with any h5ad file they want. As an example, I wrote a notebook that runs it with the 100k cells from CeNGEN: https://colab.research.google.com/github/Munfred/scdefg/blob/main/scdefg.ipynb
* Since we are thinking about simple things that could be offered in WormBase, I will also briefly talk about a dashboard I made for a UCLA group that wanted to look at nuclei data. It uses the tissue enrichment analysis code for the bottom three plots.
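For anyone who wants to poke at the file, loading an h5ad into Python takes a few lines with the anndata package (the filename below is illustrative):

<pre>
# Quick look at an h5ad file with anndata (pip install anndata).
import anndata

adata = anndata.read_h5ad("cengen_full_100k.h5ad")  # illustrative filename
print(adata)              # summary: n_obs x n_vars plus annotation fields
print(adata.obs.head())   # per-cell annotations
print(adata.X[:5, :5])    # a corner of the expression matrix
</pre>
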
=== Tracking corresponding authors for papers at Alliance ===

* Corresponding authors are not tracked in ACEDB, because authors are just text names, not IDs
* Maybe Cecilia could link a WBPerson as corresponding author for a paper during curation?
  
== January 21, 2021 ==

=== Neural Network (NN) Paper Classification Results ===

* Link to the Paper Display tool (as opposed to the Paper Editor) from Michael's webpage for NN results (Michael will make the change)
* NN results will be incorporated into the Curation Status Form
* For AFP and VFP, there is now a table with mixed SVM and NN results ("blackbox" results); for a given paper, if NN results exist, they take priority over any SVM results (see the sketch below)
* Decision: we will omit blackbox results (at least for now) from the curation status form and just add the new NN results separately
* We have stopped running the SVMs on new papers
* The interactions SVM has performed better than the new NN results; it would be worth attempting a retraining
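The precedence rule is simple enough to state in code; the result tables below are made up for illustration, not the actual Postgres data.

<pre>
# Sketch (Python) of the "NN overrides SVM" rule for the blackbox table.
svm_results = {"WBPaper00000001": "high", "WBPaper00000002": "low"}
nn_results  = {"WBPaper00000002": "high"}   # the NN has re-scored paper 2

blackbox = {**svm_results, **nn_results}    # NN entries win on key collisions
print(blackbox)  # {'WBPaper00000001': 'high', 'WBPaper00000002': 'high'}
</pre>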
 
  
=== Community Phenotype Curation ===

* On hold for a few months to commit time to updating the phenotype annotation model to accommodate, e.g., double mutant phenotypes, multiple RNAi targets (intended or otherwise), mutant transgene products causing phenotypes, expressed human genes causing phenotypes, etc.
* Changes made for WB phenotypes may carry over to Alliance phenotype work
* [https://www.preprints.org/manuscript/202101.0169/v1 Paper out now] on the undergrad community phenotype curation project with Lina Dahlberg; we may get more requests for trying this with other undergrad classes
  
=== AFP Anatomy Function flagging ===

* Sometimes it is difficult to assess whether an author flag is correct (flags can often be wrong or absent)
* What about giving authors/users feedback on their flagging results?
* It would be good to provide the content from the paper where the data is said to exist (automatically from a Textpresso pipeline or manually from author-identified data)
* We want to be careful about how we provide feedback; we should be proactive about making improvements/modifications on our end and bringing those back to users for feedback to us
== January 14th, 2021 ==

=== PubMed LinkOut to WormBase Paper Pages (Kimberly) ===

* Other databases [https://www.ncbi.nlm.nih.gov/projects/linkout/doc/nonbiblinkout.html link out from PubMed] to their respective paper pages
* For example, https://pubmed.ncbi.nlm.nih.gov/20864032/ links out to GO and MGI paper pages
* Would like to set this up for WormBase and ultimately for the Alliance, but this will require some developer help
* Work on this next month (after AFP and GO grant submissions)?

=== Update cycle for HGNC data in the OA (Ranjana) ===

* Juancarlos had these questions for us:
<pre style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word">
There's a script here that repopulates the postgres obo_*_hgnc tables
based off of Chris and Wen's data
/home/postgres/work/pgpopulation/obo_oa_ontologies/populate_obo_hgnc.pl

It's not on a cronjob, because I think the files are not updated that
often.  Do we want to run this every night, or run it manually when
the files get re-generated ?  Or run every night, and check if the
files' timestamps have changed, then repopulate postgres ?
</pre>
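The timestamp option (recorded as the decision in the Minutes below) could run nightly from cron; a sketch in Python, with the marker file and the exact source file paths being assumptions:

<pre>
# Nightly cron sketch (Python): repopulate the obo_*_hgnc tables only when
# the source files have changed since the last run.
import os
import subprocess

SOURCE_FILES = ["/path/to/hgnc/file"]       # Chris and Wen's files (illustrative path)
MARKER = "/tmp/populate_obo_hgnc.last_run"  # hypothetical marker file
SCRIPT = "/home/postgres/work/pgpopulation/obo_oa_ontologies/populate_obo_hgnc.pl"

last_run = os.path.getmtime(MARKER) if os.path.exists(MARKER) else 0.0
if any(os.path.getmtime(f) > last_run for f in SOURCE_FILES):
    subprocess.run(["perl", SCRIPT], check=True)  # repopulate postgres
    open(MARKER, "w").close()                     # refresh the marker timestamp
</pre>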
  
=== Minutes ===

==== PubMed LinkOut to WormBase Paper Pages ====

==== Update cycle for HGNC data in the OA ====

* We will update when the Alliance updates the data
* Juancarlos will set it to check the timestamps and, if they have changed, update the OAs (as in the sketch above)

==== CENGEN ====

* Wen, Daniela, and Raymond will look at the datasets to work out how to incorporate them. Start simple.
* We will make links to pages on their site.

== April 1, 2021 ==

=== Antibodies ===

* Alignment of the antibody class to the Alliance:
** Propose to move possible_pseudonym (192) and Other_animal (37) to remarks. Those tags are not currently used for curation.
*** Other_animal is sometimes used for older annotations, e.g., when authors say that the antibodies were raised in both rats and rabbits. Standard practice would create two records, one for the rat antibody and one for the rabbit.
*** Possible_pseudonym was used when a curator was not able to unambiguously assign a previous antibody to a record (we have an Other_name -synonym- tag to capture unambiguous ones). When moving to remarks we can keep a controlled vocabulary for easy future parsing, e.g., "possible_pseudonym:"
** Antigen field: currently separated into Protein, Peptide, and Other_antigen (e.g., homogenate of early C. elegans embryos, sperm). Propose to use just one antigen field to capture antigen info.

All changes proposed above were approved by the group.

=== textpresso-dev clean up ===

* Michael has asked curators to assess what they have on textpresso-dev, as it will not be around forever :-(
* Is it okay to transfer data and files we want to keep to tazendra, and then to our own individual machines?
* Direct access may be possible via the Caltech VPN
* Do we want to move content to AWS? It may be complicated; it is still easy and cheap to maintain local file systems/machines

=== Braun servers ===

* Three servers are stored in the Braun server room; is there a new contact person for accessing these servers?
* Mike Miranda's replacement is just getting settled; Paul will find out who is managing the server room and let Raymond know

=== Citace upload ===

* Next Friday, April 9th, by end of the day
* Wen will contact Paul Davis for the frozen WS280 models file


== April 8, 2021 ==

=== Braun server outage ===

* Raymond fixed it; Spica, wobr, and wobr2 are back up

=== Textpresso API ===

* It was down yesterday, affecting WormiCloud; Michael has fixed it
* Valerio will learn how to manage the API in the future

=== Grant opportunities ===

* Possibilities to apply for supplements
* May 15th deadline
* Druggable genome project
** Pharos: https://pharos.nih.gov/
** Could we contribute?
* Visualization, tools, etc.
* Automated person descriptions?
* Automated descriptions for proteins, ion channels, druggable targets, etc.?

=== New WS280 ONTOLOGY FTP directory ===

* Changes requested here: https://github.com/WormBase/website/issues/7900
* Here's the FTP URL: ftp://ftp.wormbase.org/pub/wormbase/releases/WS280/ONTOLOGY/
* Known issues (Chris will report):
** Ontology files are provided with a ".gaf" extension in addition to ".obo"; the OBO files named ".gaf" need to be removed
** Some files are duplicated and/or have inappropriate file extensions

=== Odd characters in Postgres ===

* Daniela and Juancarlos discovered some errors with respect to special characters pasted into the OA
* Daniela would like to automatically pull micropublication text (e.g., figure captions) into Postgres
* We would need an automated way to convert special characters, like the degree symbol (°), into HTML entities such as &amp;deg; (see the sketch below)
* Juancarlos and Valerio will look into possibly switching from a Perl module to a Python module to handle special characters
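If the switch to a Python module happens, the standard library already knows the entity names; a minimal sketch of the conversion, not a drop-in replacement for the current Perl module:

<pre>
# Convert non-ASCII characters to named HTML entities where one exists,
# falling back to numeric character references (Python stdlib only).
from html.entities import codepoint2name

def to_html_entities(text):
    out = []
    for ch in text:
        cp = ord(ch)
        if cp < 128:
            out.append(ch)
        elif cp in codepoint2name:
            out.append("&%s;" % codepoint2name[cp])
        else:
            out.append("&#%d;" % cp)
    return "".join(out)

print(to_html_entities("grown at 20° with α-factor"))
# grown at 20&deg; with &alpha;-factor
</pre>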


== April 15, 2021 ==

=== Special characters in Postgres/OA ===

* Juancarlos is working on a plan to store UTF-8 characters in Postgres and the OA, which would then be converted at dump time to HTML entities (e.g., &amp;alpha;) for the ACE files
* There is still a bit of cleanup needed to fix or remove special characters (not necessarily UTF-8) that apparently got munged upon copy/pasting into the OA in the past
* Note: copy/paste from a PDF often works fine, but sometimes does not work as expected, so manual intervention would be needed (e.g., entering Greek characters by hand in UTF-8 form)
* Would copy/pasting from HTML be better than from PDF?
* For Person curation it would be good to be able to faithfully store and display appropriate foreign characters (e.g., Chinese characters, Danish characters, etc.)
* Mangolassi script called "get_summary_characters.pl", located here: /home/postgres/work/pgpopulation/grg_generegulation/20200618_summary_characters
** Juancarlos will modify the script to take a data type code as a command-line argument and return all Postgres tables (and their respective PGIDs) that have special characters, e.g.
*** $ ./get_summary_characters.pl exp
*** $ ./get_summary_characters.pl int
*** $ ./get_summary_characters.pl grg
** or it could take just the data type + field (Postgres table), e.g.
*** $ ./get_summary_characters.pl pic_description
** Juancarlos will email everyone once it's ready. (Update: it is ready and the email has been sent. The script is at /home/postgres/work/pgpopulation/oa_general/20210411_unicode_html/get_summary_characters.pl; symlink it into your directory and run it from there, and it will create its output files in the directory you run it from.)
* Action items:
** Juancarlos will update the "get_summary_characters.pl" script as described above
** Curators should use "get_summary_characters.pl" to look for (potentially) bad characters in their OAs/Postgres tables
** Need to perform a bulk (automated) replacement of existing HTML entities with the corresponding UTF-8 characters
** Curators will need to work with Juancarlos for each OA to modify the dumper
** Juancarlos will write (or append to existing) Postgres/OA dumping scripts (see the sketch below) to:
*** 1) Convert UTF-8 characters to HTML entities in ACE files
*** 2) Convert special quote and hyphen characters into simple versions that don't need special handling
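Here is a sketch of how the two dump-time conversions could fit together; the entity step mirrors the April 8 sketch above, and the quote/hyphen map is illustrative rather than complete.

<pre>
# Dump-time sketch (Python): simplify fancy quotes/hyphens to plain ASCII,
# then convert any remaining non-ASCII characters to HTML entities for the
# .ace output. The SIMPLE map is illustrative, not a complete list.
from html.entities import codepoint2name

SIMPLE = {0x2018: "'", 0x2019: "'",   # curly single quotes
          0x201C: '"', 0x201D: '"',   # curly double quotes
          0x2013: "-", 0x2014: "-"}   # en and em dashes

def ace_escape(text):
    text = text.translate(SIMPLE)     # step 2: simplify quotes and hyphens
    return "".join(
        ch if ord(ch) < 128
        else "&%s;" % codepoint2name.get(ord(ch), "#%d" % ord(ch))
        for ch in text)               # step 1: UTF-8 -> HTML entities

print(ace_escape("\u201c20\u00b0 shift\u201d \u2013 \u03b1-tubulin"))
# "20&deg; shift" - &alpha;-tubulin
</pre>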

=== CeNGEN pictures ===

* A model change went in to accommodate images from the CeNGEN project
* We want gene page images for CeNGEN data; have the specifications for such images been worked out? Maybe not yet
* Raymond and Daniela will work with the data producers to acquire images when ready

=== Supplement opportunities ===

* Money is available for software development to "harden" existing software
* It might be possible to make Eduardo's single-cell analysis tools more sustainable
* Could WormiCloud be adapted for the Alliance?
* Put Noctua on a more stable production footing? (GO cannot apply, as they are in the final year of their existing grant)

=== Student project for Textpresso ===

* Create a tool that lets a user submit text and returns a list of similar papers (see the sketch below)
* Use cases:
** a curator wants an alert for papers similar to what they've curated
** look for potential reviewers of a paper based on similar text content
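As a baseline for the project, TF-IDF vectors plus cosine similarity get surprisingly far; a sketch assuming scikit-learn, with a made-up corpus and query:

<pre>
# Baseline similar-paper ranking (Python, scikit-learn): TF-IDF + cosine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = {  # illustrative corpus: paper ID -> abstract/fulltext
    "WBPaper00000001": "daf-2 insulin signaling regulates dauer formation",
    "WBPaper00000002": "antibody staining of the adult nervous system",
    "WBPaper00000003": "insulin-like peptides modulate lifespan in C. elegans",
}

query = "insulin signaling and aging"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(papers.values())
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_matrix).ravel()
for paper_id, score in sorted(zip(papers, scores), key=lambda p: -p[1]):
    print(paper_id, round(score, 3))
</pre>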