WormBase-Caltech Weekly Calls
 
= Previous Years =

[[WormBase-Caltech_Weekly_Calls_2009|2009 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2011|2011 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2012|2012 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2013|2013 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2014|2014 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2015|2015 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2016|2016 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2017|2017 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2018|2018 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2019|2019 Meetings]]

[[WormBase-Caltech_Weekly_Calls_2020|2020 Meetings]]
= 2021 Meetings =

[[WormBase-Caltech_Weekly_Calls_January_2021|January]]

[[WormBase-Caltech_Weekly_Calls_February_2021|February]]

[[WormBase-Caltech_Weekly_Calls_March_2021|March]]
== April 1, 2021 ==

=== Antibodies ===

* Alignment of the antibody class to the Alliance:
** Propose to move Possible_pseudonym (192) and Other_animal (37) to remarks; those tags are not currently used for curation.
*** Other_animal is sometimes used for older annotations, e.g. when authors say that the antibodies were raised in both rats and rabbits. Standard practice would create two records, one for the rat antibody and one for the rabbit antibody.
*** Possible_pseudonym was used when a curator was not able to unambiguously assign a previous antibody to a record (we have an Other_name -synonym- tag to capture unambiguous ones). When moving to remarks we can keep a controlled vocabulary for easy future parsing, e.g. "possible_pseudonym:"
** Antigen field: currently separated into Protein, Peptide, and Other_antigen (e.g. homogenate of early C. elegans embryos, sperm). Propose to use just one Antigen field to capture antigen information.

All changes proposed above were approved by the group.
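If the retired tags move into remarks with a controlled-vocabulary prefix, downstream parsing stays simple. A minimal Python sketch of what that parsing could look like (the "possible_pseudonym:" prefix comes from the proposal above; the example remark strings are invented):

```python
import re

def parse_prefixed_remarks(remarks, prefix="possible_pseudonym"):
    """Extract values from remarks that start with a controlled-vocabulary
    prefix like 'possible_pseudonym:'; other remarks are ignored."""
    pattern = re.compile(rf"^{re.escape(prefix)}:\s*(.+)$")
    hits = []
    for remark in remarks:
        match = pattern.match(remark.strip())
        if match:
            hits.append(match.group(1))
    return hits

# Example with invented remark text:
remarks = ["possible_pseudonym: anti-UNC-22 serum", "raised in rabbit"]
print(parse_prefixed_remarks(remarks))  # -> ['anti-UNC-22 serum']
```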
  
=== textpresso-dev clean up ===

* Michael has asked curators to assess what they have on textpresso-dev, as it will not be around forever :-(
* Is it okay to transfer data and files we want to keep to tazendra, and then to our own individual machines?
* Direct access may be possible via the Caltech VPN
* Do we want to move content to AWS? May be complicated; it is still easy and cheap to maintain local file systems/machines
  
=== Braun servers ===

* 3 servers stored in the Braun server room; is there a new contact person for accessing these servers?
* Mike Miranda's replacement is just getting settled; Paul will find out who is managing the server room and let Raymond know

=== Citace upload ===

* Next Friday, April 9th, by end of the day
* Wen will contact Paul Davis for the frozen WS280 models file
== April 8, 2021 ==

=== Braun server outage ===

* Raymond fixed it; Spica, wobr, and wobr2 are back up

=== Textpresso API ===

* The API was down yesterday, affecting WormiCloud; Michael has fixed it
* Valerio will learn how to manage the API going forward
  
=== Grant opportunities ===

* Possibilities to apply for supplements
* May 15th deadline
* Druggable genome project
** Pharos: https://pharos.nih.gov/
** Could we contribute?
* Visualization, tools, etc.
* Automated person descriptions?
* Automated descriptions for proteins, ion channels, druggable targets, etc.?
  
=== New WS280 ONTOLOGY FTP directory ===

* Changes requested here: https://github.com/WormBase/website/issues/7900
* Here's the FTP URL: ftp://ftp.wormbase.org/pub/wormbase/releases/WS280/ONTOLOGY/
* Known issues (Chris will report):
** Ontology files are provided with a ".gaf" extension in addition to ".obo"; the ".gaf" copies of the OBO files need to be removed
** Some files are duplicated and/or have inappropriate file extensions
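The misnamed-file check could be scripted. A sketch under stated assumptions: OBO files conventionally begin with a "format-version:" header, so a ".gaf" file starting with that line is probably a mislabeled OBO file; the directory path is a placeholder, not the real FTP mirror:

```python
from pathlib import Path

def find_misnamed_obo(directory):
    """Flag .gaf files whose first line looks like an OBO header
    (a heuristic mirroring the WS280 ONTOLOGY directory issue)."""
    bad = []
    for p in Path(directory).glob("*.gaf"):
        with open(p, encoding="utf-8", errors="replace") as fh:
            if fh.readline().startswith("format-version:"):
                bad.append(p.name)
    return sorted(bad)
```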
  
=== Odd characters in Postgres ===

* Daniela and Juancarlos discovered some errors with respect to special characters pasted into the OA
* Daniela would like to automatically pull micropublication text (e.g. figure captions) into Postgres
* We would need an automated way to convert special characters, like the degree symbol (°), into HTML entities such as &deg;
* Juancarlos and Valerio will look into possibly switching from a Perl module to a Python module to handle special characters
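Before converting anything, it helps to see exactly which non-ASCII characters a field contains. A stdlib-only Python sketch (illustrative only; it is not the production Perl tooling):

```python
import unicodedata

def flag_non_ascii(text):
    """List (index, char, Unicode name) for every non-ASCII character,
    making mojibake like 'Â°' easy to spot by name."""
    return [(i, ch, unicodedata.name(ch, "UNKNOWN"))
            for i, ch in enumerate(text) if ord(ch) > 127]

print(flag_non_ascii("grown at 25°C"))  # -> [(11, '°', 'DEGREE SIGN')]
```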
 
== April 15, 2021 ==

=== Special characters in Postgres/OA ===

* Juancarlos is working on/proposing a plan to store UTF-8 characters in Postgres and the OA, which would then be converted, at dump time, to HTML entities (e.g. α to &alpha;) for the ACE files
* There is still a bit of cleanup needed to fix or remove special characters (not necessarily UTF-8) that apparently got munged upon copy/pasting into the OA in the past
* Note: copy/paste from a PDF often works fine, but sometimes does not work as expected, so manual intervention would be needed (e.g. entering Greek characters by hand in UTF-8 form)
* Would copy/pasting from HTML be better than from PDF?
* For Person curation it would be good to be able to faithfully store and display appropriate foreign characters (e.g. Chinese characters, Danish characters, etc.)
* The Mangolassi script "get_summary_characters.pl" is located here: /home/postgres/work/pgpopulation/grg_generegulation/20200618_summary_characters
** Juancarlos will modify the script to take a data type code as a command-line argument and return all Postgres tables (and their respective PGIDs) that have special characters, e.g.:
*** $ ./get_summary_characters.pl exp
*** $ ./get_summary_characters.pl int
*** $ ./get_summary_characters.pl grg
** or it could take just the data type + field (Postgres table), e.g.:
*** $ ./get_summary_characters.pl pic_description
** Juancarlos will email everyone once it's ready. (Update: it's ready; email sent. The script is at /home/postgres/work/pgpopulation/oa_general/20210411_unicode_html/get_summary_characters.pl. Symlink it into your directory and run it from there; it will create files in the directory you run it from.)
* Action items:
** Juancarlos will update the "get_summary_characters.pl" script as described above
** Curators should use "get_summary_characters.pl" to look for (potentially) bad characters in their OAs/Postgres tables
** Need to perform a bulk (automated) replacement of existing HTML entities with the corresponding UTF-8 characters
** Curators will need to work with Juancarlos for each OA to modify the dumper
** Juancarlos will write (or append to existing) Postgres/OA dumping scripts to:
*** 1) convert UTF-8 characters to HTML entities in ACE files
*** 2) convert special quote and hyphen characters into simple versions that don't need special handling
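The two dump-time conversions described above can be mimicked with Python's standard library entity tables. A sketch only: the real dumpers are Perl, and the exact entity policy is Juancarlos's call; the function names here are invented:

```python
import html.entities

def utf8_to_entities(text):
    """Convert non-ASCII characters to named HTML entities where one
    exists (e.g. ° -> &deg;), falling back to numeric entities."""
    out = []
    for ch in text:
        cp = ord(ch)
        if cp < 128:
            out.append(ch)
        elif cp in html.entities.codepoint2name:
            out.append(f"&{html.entities.codepoint2name[cp]};")
        else:
            out.append(f"&#{cp};")
    return "".join(out)

def simplify_quotes_hyphens(text):
    """Map curly quotes and en/em dashes to plain ASCII equivalents."""
    table = str.maketrans({"\u2018": "'", "\u2019": "'",
                           "\u201c": '"', "\u201d": '"',
                           "\u2013": "-", "\u2014": "-"})
    return text.translate(table)

print(utf8_to_entities("α-tubulin at 25°"))  # -> &alpha;-tubulin at 25&deg;
```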
  
=== CeNGEN pictures ===

* A model change went in to accommodate images from the CeNGEN project
* We want gene page images for CeNGEN data; have the specifications for such images been worked out? Maybe not yet
* Raymond and Daniela will work with the data producers to acquire images when ready
 
  
=== Supplement opportunities ===

* Money is available for software development to "harden" existing software
* It might be possible to make Eduardo's single-cell analysis tools more sustainable
* Could WormiCloud be adapted for the Alliance?
* Put Noctua on a more stable production footing? (GO cannot apply, as they are in the final year of their existing grant)

=== Student project for Textpresso ===

* Create a tool that allows a user to submit text and returns a list of similar papers
* Use cases:
** a curator wants an alert to find papers similar to what they've curated
** look for potential reviewers of a paper based on similar text content
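For the similar-papers idea, one common baseline is TF-IDF vectors compared by cosine similarity. A stdlib-only sketch (Textpresso would presumably use its own indexing; the example abstracts are invented):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute simple TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    shared = set(u) & set(v)
    num = sum(u[t] * v[t] for t in shared)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def rank_similar(query, corpus):
    """Rank corpus documents by TF-IDF cosine similarity to the query."""
    tokenized = [d.lower().split() for d in corpus + [query]]
    vecs = tfidf_vectors(tokenized)
    qvec = vecs[-1]
    return sorted(((cosine(qvec, v), i) for i, v in enumerate(vecs[:-1])),
                  reverse=True)
```

In practice the tokenizer and weighting would need to handle gene names and stop words, but the ranking skeleton is the same.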
== April 22, 2021 ==

=== LinkML hackathon ===

* Need to consider who works on what and how to coordinate
* Need to follow good Git practice
** Merge the main branch into your local branch before merging back into main, to make sure everything works
* How will we best handle AceDB hash structures? Likely use something like what Mark QT demonstrated
** Do we have any/many hash-within-hash structures? #Molecular_change is used as a hash, and tags within that model all reference the #Evidence hash
* GO annotation extensions offer an interesting challenge
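One way to picture the hash-within-hash question is that each AceDB hash model becomes a nested class, and a tag carrying a hash becomes a slot of that class. A hypothetical Python sketch: #Evidence and #Molecular_change are real model names from the discussion, but the slots shown are invented for illustration, and LinkML itself would express this as a YAML schema rather than code:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Evidence:
    """Stand-in for the #Evidence hash (slots are illustrative)."""
    paper_evidence: List[str] = field(default_factory=list)

@dataclass
class MolecularChange:
    """Stand-in for #Molecular_change; its tags each carry an
    #Evidence hash, i.e. the hash-within-hash case."""
    change_type: str
    evidence: Optional[Evidence] = None

@dataclass
class Variation:
    """Top-level object referencing the nested hashes."""
    name: str
    molecular_changes: List[MolecularChange] = field(default_factory=list)

v = Variation("WBVar00000001",
              [MolecularChange("Missense", Evidence(["WBPaper00000001"]))])
print(v.molecular_changes[0].evidence.paper_evidence[0])  # -> WBPaper00000001
```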
  
=== IWM workshop ===

* Need to submit a workshop schedule (who speaks about what, and when) by next Thursday, April 29th
* An initial idea was to promote data in ACEDB that may be underutilized or that many users may be unaware of
** An example might be transcription factor data: the ?Transcription_factor class and the modENCODE TF data
** Single-cell data and tools? CeNGEN, Eduardo's single-cell tools
** RNA-Seq FPKM values for genes and related data; Wen will write a script to pull FPKM values out of the SRA data and send them to Magdalena
* In addition to WB data types, we will cover the Alliance, AFP, and community curation
* Google doc for the workshop here: https://docs.google.com/document/d/1H9ARhBRMKBNuOhjyxVQ_1o6cysvpppI7uA-TJrO_UZ4/edit?usp=sharing
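For the FPKM values mentioned above, the formula itself is simple arithmetic; a sketch of just that calculation (the actual SRA parsing is Wen's script, not shown here):

```python
def fpkm(read_count, transcript_length_bp, total_mapped_reads):
    """FPKM = (reads * 10^9) / (total mapped reads * transcript length in bp):
    fragments per kilobase of transcript per million mapped reads."""
    return read_count * 1e9 / (total_mapped_reads * transcript_length_bp)

# 100 reads on a 1 kb transcript in a 1M-read library -> FPKM of 100
print(fpkm(100, 1000, 1_000_000))  # -> 100.0
```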
  
=== WB Progress Report ===

* Due April 30th
* There will be two documents: progress and plans
* Place text in the appropriate places (don't write it as a single integrated unit)
* Paul S will put together a Google doc
* We CAN include Alliance harmonization efforts
* 2020 Progress Report: https://docs.google.com/document/d/1f3ettnkvwoKKiaAA4TSrpSQPEF7FmVVn6u2UdflA_So/edit?usp=sharing
* Last year's milestone was WS276; we will compare to WS280
* Google "WormBase Grants" folder: https://drive.google.com/drive/folders/1p8x9tEOfZ4DQvTcPSdNR5-JoPJu--ZAu?usp=sharing
* 2021 Progress Report document here: https://docs.google.com/document/d/13E9k5JvDpUN4kWnrTm4M2iphnAJSTpk02ZiGl8O6bM4/edit?usp=sharing

Latest revision as of 19:06, 22 April 2021
