WormBase-Caltech Weekly Calls
Previous Years

2009 Meetings

2011 Meetings

2012 Meetings

2013 Meetings

2014 Meetings

2015 Meetings

2016 Meetings

2017 Meetings

2018 Meetings

2019 Meetings

[[WormBase-Caltech_Weekly_Calls_2020|2020 Meetings]]
= 2021 Meetings =

[[WormBase-Caltech_Weekly_Calls_January_2021|January]]

[[WormBase-Caltech_Weekly_Calls_February_2021|February]]

[[WormBase-Caltech_Weekly_Calls_March_2021|March]]

== April 1, 2021 ==

=== Antibodies ===
* Alignment of the antibody class with the Alliance:
** Propose to move possible_pseudonym (192) and Other_animal (37) to remarks; those tags are not currently used for curation.
*** Other_animal is sometimes used for older annotations, e.g. when authors say that the antibodies were raised in both rats and rabbits. Standard practice would create two records, one for the rat antibody and one for the rabbit antibody.
*** Possible_pseudonym was used when a curator was not able to unambiguously assign a previous antibody to a record (we have an Other_name synonym tag to capture unambiguous ones). When moving to remarks we can keep a controlled vocabulary for easy future parsing, e.g. "possible_pseudonym:".
** Antigen field: currently separated into Protein, Peptide, and Other_antigen (e.g. homogenate of early C. elegans embryos, sperm). Propose to use a single antigen field to capture antigen information.

All changes proposed above were approved by the group.

=== textpresso-dev clean up ===
* Michael has asked curators to assess what they have on textpresso-dev, as it will not be around forever :-(
* Is it okay to transfer data and files we want to keep to tazendra, and then to our own individual machines?
* Direct access may be possible via the Caltech VPN.
* Do we want to move content to AWS? It may be complicated; it is still easy and cheap to maintain local file systems/machines.

=== Braun servers ===
* Three servers are stored in the Braun server room; is there a new contact person for accessing these servers?
* Mike Miranda's replacement is just getting settled; Paul will find out who is managing the server room and let Raymond know.

=== Citace upload ===
* Next Friday, April 9th, by end of the day.
* Wen will contact Paul Davis for the frozen WS280 models file.

 
== April 8, 2021 ==

=== Braun server outage ===
* Raymond fixed the outage; Spica, wobr, and wobr2 are back up.

=== Textpresso API ===
* The API was down yesterday, affecting WormiCloud; Michael has fixed it.
* Valerio will learn how to manage the API going forward.

=== Grant opportunities ===
* There are possibilities to apply for supplements.
* May 15th deadline.
* Druggable genome project
** Pharos: https://pharos.nih.gov/
** Could we contribute?
* Visualization, tools, etc.
* Automated person descriptions?
* Automated descriptions for proteins, ion channels, druggable targets, etc.?

=== New WS280 ONTOLOGY FTP directory ===
* Changes requested here: https://github.com/WormBase/website/issues/7900
* Here's the FTP URL: ftp://ftp.wormbase.org/pub/wormbase/releases/WS280/ONTOLOGY/
* Known issues (Chris will report):
** Ontology files are provided as ".gaf" in addition to ".obo"; we need to remove the ".gaf" OBO files
** Some files are duplicated and/or have inappropriate file extensions

=== Odd characters in Postgres ===
* Daniela and Juancarlos discovered some errors with respect to special characters pasted into the OA.
* Daniela would like to automatically pull micropublication text (e.g. figure captions) into Postgres.
* We would need an automated way to convert special characters, like the degree symbol (°), into HTML entities such as &deg; (see the sketch below).
* Juancarlos and Valerio will look into possibly switching from a Perl module to a Python module to handle special characters.
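A minimal Python sketch of the kind of conversion discussed above; this is illustrative only (the function name and example text are made up), not the actual OA/Postgres code:

<pre>
# Sketch: convert non-ASCII characters to HTML entities, using named
# entities (&deg;, &alpha;, ...) where available and numeric references otherwise.
from html.entities import codepoint2name

def to_html_entities(text):
    out = []
    for ch in text:
        cp = ord(ch)
        if cp < 128:
            out.append(ch)                           # plain ASCII passes through
        elif cp in codepoint2name:
            out.append("&%s;" % codepoint2name[cp])  # e.g. ° -> &deg;
        else:
            out.append("&#%d;" % cp)                 # fall back to a numeric reference
    return "".join(out)

print(to_html_entities("grown at 20°C with anti-α-tubulin"))
# grown at 20&deg;C with anti-&alpha;-tubulin
</pre>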
  
== April 15, 2021 ==

=== Special characters in Postgres/OA ===
* Juancarlos is working on/proposing a plan to store UTF-8 characters in Postgres and the OA, which would then get converted at dump time to HTML entities (e.g. α to &alpha;) for the .ace files.
* There is still a bit of cleanup needed to fix or remove special characters (not necessarily UTF-8) that apparently got munged upon copy/pasting into the OA in the past.
* Note: copy/paste from a PDF often works fine, but sometimes does not work as expected, so manual intervention would be needed (e.g. entering Greek characters by hand in UTF-8 form).
* Would copy/pasting from HTML be better than from PDF?
* For Person curation it would be good to be able to faithfully store and display appropriate foreign characters (e.g. Chinese characters, Danish characters, etc.).
* Mangolassi script called "get_summary_characters.pl", located here: /home/postgres/work/pgpopulation/grg_generegulation/20200618_summary_characters
** Juancarlos will modify the script to take a data type code as a command-line argument and return all Postgres tables (and their respective PGIDs) that have special characters, e.g.
*** $ ./get_summary_characters.pl exp
*** $ ./get_summary_characters.pl int
*** $ ./get_summary_characters.pl grg
** Or it could take just the data type + field (Postgres table), e.g.
*** $ ./get_summary_characters.pl pic_description
** Juancarlos will email everyone once it's ready. Update: it's ready and the email has been sent. The script is at /home/postgres/work/pgpopulation/oa_general/20210411_unicode_html/get_summary_characters.pl; symlink it into your directory and run it from there, and it will create files in the directory you run it from.
* Action items:
** Juancarlos will update the "get_summary_characters.pl" script as described above.
** Curators should use "get_summary_characters.pl" to look for (potentially) bad characters in their OAs/Postgres tables.
** We need to perform a bulk (automated) replacement of existing HTML entities with the corresponding UTF-8 characters.
** Curators will need to work with Juancarlos for each OA to modify the dumper.
** Juancarlos will write (or append to existing) Postgres/OA dumping scripts to (see the sketch below):
*** 1) Convert UTF-8 characters to HTML entities in the .ace files
*** 2) Convert special quote and hyphen characters into simple versions that don't need special handling
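Two of the conversions above can be sketched in a few lines of Python (illustrative only; the real work happens in the Perl dumpers and in Postgres): html.unescape handles the bulk entity-to-UTF-8 cleanup, and a small translation table flattens smart quotes and long dashes.

<pre>
import html

# Bulk cleanup direction: HTML entities already stored in Postgres
# become real UTF-8 characters.
print(html.unescape("anti-&alpha;-tubulin at 20&deg;C"))
# anti-α-tubulin at 20°C

# Dump direction: flatten "smart" quotes and long dashes to plain ASCII
# (the UTF-8 -> entity step is the same idea as the earlier sketch).
SIMPLE = str.maketrans({"\u2018": "'", "\u2019": "'",
                        "\u201c": '"', "\u201d": '"',
                        "\u2013": "-", "\u2014": "-"})
print("\u201cDon\u2019t\u201d \u2013 ok".translate(SIMPLE))
# "Don't" - ok
</pre>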
  
=== CeNGEN pictures ===
* A model change went in to accommodate images from the CeNGEN project.
* We want gene page images for CeNGEN data; have the specifications for such images been worked out? Maybe not yet.
* Raymond and Daniela will work with the data producers to acquire images when ready.

=== Supplement opportunities ===
* Money is available for software development to "harden" existing software.
* It might be possible to make Eduardo's single-cell analysis tools more sustainable.
* Could WormiCloud be adapted to the Alliance?
* Put Noctua on a more stable production footing? (GO cannot apply, as they are in the final year of their existing grant.)

=== Student project for Textpresso ===
* Create a tool that lets a user submit text and returns a list of similar papers (see the sketch below).
* Use cases:
** A curator wants an alert for papers similar to what they've already curated.
** Look for potential reviewers of a paper based on similar text content.
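One simple way to prototype "return similar papers for submitted text", sketched with scikit-learn (the corpus here is a made-up toy; a real project would index the WormBase/Textpresso literature instead):

<pre>
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for the paper corpus: {paper_id: abstract or full text}
papers = {
    "WBPaper00000001": "daf-2 insulin signaling and lifespan in C. elegans",
    "WBPaper00000002": "synaptic transmission at the neuromuscular junction",
    "WBPaper00000003": "insulin/IGF-1 pathway regulation of dauer formation",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(papers.values())

def similar_papers(query_text, top_n=2):
    query_vec = vectorizer.transform([query_text])
    scores = cosine_similarity(query_vec, matrix)[0]
    ranked = sorted(zip(papers.keys(), scores), key=lambda x: -x[1])
    return ranked[:top_n]

print(similar_papers("insulin signaling mutants extend lifespan"))
</pre>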
  
== April 22, 2021 ==

=== LinkML hackathon ===
* Need to consider who works on what and how to coordinate.
* Need to practice good Git habits.
** Merge the main branch into your local branch before merging back into the main branch, to make sure everything works.
* How will we best handle AceDB hash structures? Likely use something like what Mark QT demonstrated (see the sketch below).
** Do we have any/many hash-within-hash structures? #Molecular_change is used as a hash, and tags within that model all reference the #Evidence hash.
* GO annotation extensions offer an interesting challenge.
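For intuition about the hash-within-hash question, here is a toy Python dataclass rendering of the nesting (the real hackathon artifact would be LinkML YAML; the class and slot names below are simplified illustrations, not the agreed model):

<pre>
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Evidence:                      # stands in for the AceDB #Evidence hash
    paper_evidence: Optional[str] = None
    curator_confirmed: Optional[str] = None

@dataclass
class MolecularChange:               # stands in for the #Molecular_change hash
    change_type: str                 # e.g. "Missense", "Nonsense"
    description: Optional[str] = None
    evidence: Evidence = field(default_factory=Evidence)   # a hash within a hash

@dataclass
class Variation:
    name: str
    molecular_changes: List[MolecularChange] = field(default_factory=list)

v = Variation(name="e1370",
              molecular_changes=[MolecularChange(change_type="Missense",
                                                 evidence=Evidence(paper_evidence="WBPaper00000001"))])
print(v)
</pre>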
  
=== IWM workshop ===
* Need to submit a workshop schedule (who speaks about what and when) by next Thursday, April 29th.
* An initial idea was to promote data in ACEDB that may be underutilized or that many users may be unaware of:
** An example might be transcription factor data: the ?Transcription_factor class and the modENCODE TF data.
** Single-cell data and tools? CeNGEN, Eduardo's single-cell tools.
** RNA-Seq FPKM values for genes and related data; Wen will write a script to pull FPKM values out of SRA data and send them to Magdalena.
* In addition to WB data types, we will cover the Alliance, AFP, and community curation.
* Google doc for the workshop here: https://docs.google.com/document/d/1H9ARhBRMKBNuOhjyxVQ_1o6cysvpppI7uA-TJrO_UZ4/edit?usp=sharing

=== WB Progress Report ===
* Due April 30th.
* There will be two documents: progress and plans.
* Place text in the appropriate places (don't write it as a single integrated unit).
* Paul S will put together a Google doc.
* We CAN include Alliance harmonization efforts.
* 2020 progress report: https://docs.google.com/document/d/1f3ettnkvwoKKiaAA4TSrpSQPEF7FmVVn6u2UdflA_So/edit?usp=sharing
* Last year's milestone was WS276; we will compare to WS280.
* Google "WormBase Grants" folder: https://drive.google.com/drive/folders/1p8x9tEOfZ4DQvTcPSdNR5-JoPJu--ZAu?usp=sharing
* 2021 progress report document here: https://docs.google.com/document/d/13E9k5JvDpUN4kWnrTm4M2iphnAJSTpk02ZiGl8O6bM4/edit?usp=sharing
  
== April 29, 2021 ==

=== IWM Workshop Schedule ===
* Schedule format due today (April 29th).
* [https://docs.google.com/document/d/1H9ARhBRMKBNuOhjyxVQ_1o6cysvpppI7uA-TJrO_UZ4/edit#bookmark=id.jrjo4xhfnh7b Tentative schedule here]
* The proposed format is four 15-minute talks followed by 30 minutes of open discussion / Q&A.
* We still need someone to speak (~15 minutes) about the Alliance.

=== WB Progress Report ===
* The 2021 documents are in [https://drive.google.com/drive/folders/1p8x9tEOfZ4DQvTcPSdNR5-JoPJu--ZAu?usp=sharing this Google Drive folder]
* Note: there is one [https://docs.google.com/document/d/13E9k5JvDpUN4kWnrTm4M2iphnAJSTpk02ZiGl8O6bM4/edit?usp=sharing 2021 "Progress" document] and a second (separate) [https://docs.google.com/document/d/1j0HkCwuimK6DD-ui1tAkYMNpLRhxR9xb1FdSDZXFXCI/edit?usp=sharing "Future Plans" document]
* Existing future plans text has been moved to the "Future Plans" document.

=== OpenBiosystems RNAi clone IDs ===
* A user is looking to map Open Biosystems RNAi clone names to WB clone names.
* We may need to get a mapping file from Open Biosystems.

=== FPKM data ===
* Wen has produced a CSV file of FPKM values; it can be generated as part of the SPELL pipeline (see the sketch below).
* It may be better to generate it at Hinxton.
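A quick sketch of how such a CSV could be consumed downstream (the file name and column names are hypothetical, not the actual SPELL pipeline output):

<pre>
import pandas as pd

# Hypothetical layout: one row per gene per library, with an FPKM column.
fpkm = pd.read_csv("fpkm_values.csv")   # columns assumed: gene_id, library, fpkm

# Pivot into a gene x library matrix for browsing or export.
matrix = fpkm.pivot_table(index="gene_id", columns="library", values="fpkm")
print(matrix.head())
</pre>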
=== OA Dumpers ===
* Daniela and Juancarlos have been working on the Picture OA and Expr OA dumpers.
* Inconsistencies have accumulated across the OA dumpers because each one was written separately.
* Juancarlos is working on a generalized, modular way to handle dumping.
* Should we handle historical genes in the same way across OAs?
** Sure, but we need the "Historical_gene" tag in the respective ACEDB models.
** Decision: we will continue to dump historical genes only for specific OAs, with a plan to perhaps make this consistent across OAs in the future.
* Could we retroactively deal with paper-gene connections? We could possibly look in the Postgres history tables to see which genes had been replaced previously (by Kimberly).
  
=== Gene name ambiguities ===
* Jae noticed that some gene names associated with multiple WBGene IDs (e.g. one gene's public name is the same as another gene's other name) have the same references attached.
* This may require updating the paper-gene connections for some of these genes (see the sketch below).
* One example is the cep-1 gene: the name is associated with three different WBGene IDs, which share papers in the references widget.
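A small Python sketch of how such ambiguities could be flagged from a name-to-ID mapping (the rows shown are made-up toy data, not real WormBase nomenclature):

<pre>
from collections import defaultdict

# Toy rows of (name, WBGene ID, name type); real data would come from the
# nomenclature and paper-gene association tables.
rows = [
    ("cep-1", "WBGene00000001", "public_name"),
    ("cep-1", "WBGene00000002", "other_name"),
    ("cep-1", "WBGene00000003", "other_name"),
    ("daf-2", "WBGene00000004", "public_name"),
]

ids_by_name = defaultdict(set)
for name, wbgene_id, _name_type in rows:
    ids_by_name[name].add(wbgene_id)

ambiguous = {name: ids for name, ids in ids_by_name.items() if len(ids) > 1}
print(ambiguous)   # only 'cep-1' maps to more than one WBGene ID in this toy set
</pre>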
  
=== NIH Supplement for AI readiness ===
* Could we set up curation for neural circuits using a knowledge graph (e.g. GO-CAM)?
** Maybe we could convert the anatomy function model to LinkML -> OWL statements?
** Maybe set up a graphical curation interface?
* Transcriptional regulation
** It would be good to establish a common model (for the Alliance?).
** The CeNGEN project produced lots of predictions of TF binding sites based on single-cell expression data; Eduardo: these models should be able to be regenerated each time new data sets are published, but this requires greater integration in a central, sustainable resource.
* Paul S can send a link for the supplement.

=== Variant First Pass Pipeline ===
* Valerio: Are there any existing pipelines to make allele-paper and/or strain-paper associations?
* Not sure; we should ask Karen.

== Feb 4th, 2021 ==

==== How the "duplicate" function works in OAs with respect to object IDs (Ranjana and Juancarlos) ====
* A word of caution: when you duplicate a row in an OA that has object IDs (e.g. WBGenotype00000014), note that the object ID gets duplicated as well and does not advance to the next ID the way the PGID does.


== January 28, 2021 ==

=== String Matching Pipelines ===
* Old pipelines on textpresso-dev are not compatible with the new TPC system.
* The new TPC API does not support string matching.
* A new Python library (wbtools), used by the variation pipeline, supports batch processing of WB literature and regex matching (see the sketch below).
* Email extraction
** No longer needed for the concise description community curation tracker.
** Juancarlos, Valerio, and Chris will meet to establish a new, streamlined email address extraction pipeline.
* Old AFP Display CGI (http://tazendra.caltech.edu/~postgres/cgi-bin/author_fp_display.cgi)
** Still uses the old textpresso-dev; no longer needed? Probably not; Karen can look to see if there is anything there worth keeping (nothing critical).
* Valerio will determine priorities (e.g. antibody work first) and send issues to curators as needed.
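To illustrate the kind of regex matching involved (plain Python re, not the actual wbtools API; the allele-name pattern is a simplified assumption):

<pre>
import re

# Simplified pattern for C. elegans allele names like "e1370" or "ok1234":
# a short run of lowercase letters followed by digits, as a whole word.
ALLELE = re.compile(r"\b([a-z]{1,3}\d+)\b")

text = "Animals carrying e1370 or ok1234 were compared to wild type."
print(ALLELE.findall(text))   # ['e1370', 'ok1234']
</pre>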

=== Tracking interlibrary loan requests ===
* Raymond: Do we have a common place to track interlibrary loan requests? This could be useful for the Alliance/WB.

=== Cengen data ===
Eduardo: I got the CeNGEN 2020 data over Christmas, and I have repackaged the full release data (all 100k cells with annotations, but prior to the soupX processing) in an h5ad file, which I make available here: https://wormcells.com/ (a minimal loading example is sketched below).

* This is the repo that makes that website: https://github.com/Munfred/wormcells-data

* I figured out a way to spin up the interface I made for doing differential expression through Google Colab, so now people can run it with any h5ad file they want. As an example, I wrote a notebook that runs it with the 100k cells from CeNGEN: https://colab.research.google.com/github/Munfred/scdefg/blob/main/scdefg.ipynb

* Since they are thinking about simple things that can be offered in WormBase, I will also briefly talk about a dashboard I made for a UCLA group that wanted to look at nuclei data; it uses the tissue enrichment analysis code for the bottom three plots.
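A minimal sketch of opening such an h5ad file with the anndata package (the file name is a placeholder; the actual release files are linked from https://wormcells.com/):

<pre>
import anndata as ad

# Load the single-cell matrix plus its cell and gene annotations.
adata = ad.read_h5ad("cengen_full_release.h5ad")   # placeholder file name

print(adata)              # summary: n_obs x n_vars plus annotation columns
print(adata.obs.head())   # per-cell annotations (e.g. assigned cell type)
print(adata.var.head())   # per-gene annotations
</pre>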

=== Tracking corresponding authors for papers at Alliance ===
* Corresponding authors are not tracked in ACEDB, because authors are stored as text names, not IDs.
* Maybe Cecilia could link a WBPerson as the corresponding author for a paper during curation?


== January 21, 2021 ==

=== Neural Network (NN) Paper Classification Results ===
* Link to the Paper Display tool (rather than the Paper Editor) from Michael's webpage for NN results (Michael will make the change).
* NN results will be incorporated into the Curation Status Form.
* For AFP and VFP, there is now a table with mixed SVM and NN results ("blackbox" results); for a given paper, if NN results exist, they take priority over any SVM results.
* Decision: we will omit the blackbox results (at least for now) from the curation status form and just add the new NN results separately.
* We have stopped running SVMs on new papers.
* The interactions SVM has performed better than the new NN results; it would be worth attempting a retraining (see the sketch below).
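For reference, a toy scikit-learn sketch of an SVM text classifier of the kind being compared here (illustrative only; the production SVM and NN pipelines are separate systems trained on real curation data):

<pre>
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: paper snippets labeled as containing
# interaction data (1) or not (0).
texts = [
    "yeast two-hybrid assay shows binding between LIN-12 and SEL-10",
    "genetic interaction between daf-2 and daf-16 in dauer formation",
    "expression pattern of unc-119 in neurons",
    "strain construction and growth conditions",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)

print(model.predict(["co-immunoprecipitation of two proteins"]))
</pre>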
=== Community Phenotype Curation ===
* On hold for a few months to commit time to updating the phenotype annotation model to accommodate, e.g., double mutant phenotypes, multiple RNAi targets (intended or otherwise), mutant transgene products causing phenotypes, expressed human genes causing phenotypes, etc.
* Changes made for WB phenotypes may carry over to Alliance phenotype work.
* [https://www.preprints.org/manuscript/202101.0169/v1 Paper out now] on the undergrad community phenotype curation project with Lina Dahlberg; we may get more requests to try this with other undergrad classes.

=== AFP Anatomy Function flagging ===
* Sometimes it is difficult to assess whether an author flag is correct (flags can often be wrong or absent).
* What about giving authors/users feedback on their flagging results?
* It would be good to provide the content from the paper where this data is said to exist (automatically from a Textpresso pipeline, or manually from author-identified data).
* We want to be careful about how we provide feedback; we should be proactive about making improvements/modifications on our end and bringing those back to users for their feedback to us.


== January 14th, 2021 ==

=== PubMed LinkOut to WormBase Paper Pages (Kimberly) ===
* Other databases [https://www.ncbi.nlm.nih.gov/projects/linkout/doc/nonbiblinkout.html link out from PubMed] to their respective paper pages.
* For example, https://pubmed.ncbi.nlm.nih.gov/20864032/ links out to GO and MGI paper pages.
* We would like to set this up for WormBase, and ultimately for the Alliance, but this will require some developer help.
* Work on this next month (after the AFP and GO grant submissions)?

=== Update cycle for HGNC data in the OA (Ranjana) ===
* Juancarlos had these questions for us:
<pre style="white-space: pre-wrap; white-space: -moz-pre-wrap; white-space: -pre-wrap; white-space: -o-pre-wrap; word-wrap: break-word">
There's a script here that repopulates the postgres obo_*_hgnc tables
based off of Chris and Wen's data
/home/postgres/work/pgpopulation/obo_oa_ontologies/populate_obo_hgnc.pl

It's not on a cronjob, because I think the files are not updated that
often.  Do we want to run this every night, or run it manually when
the files get re-generated ?  Or run every night, and check if the
files' timestamps have changed, then repopulate postgres ?
</pre>

=== Minutes ===

==== PubMed LinkOut to WormBase Paper Pages ====

==== Update cycle for HGNC data in the OA ====
* We will update when the Alliance updates the data.
* Juancarlos will set it up to check the file timestamps and, if they change, do an update for the OAs (a sketch of this check follows below).
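A small Python sketch of the "run nightly, repopulate only if the files changed" option (the watched file path and stamp file are placeholders; the populate script path is the one quoted above):

<pre>
import os
import subprocess

WATCHED = ["/path/to/hgnc_data_file"]          # placeholder input file(s) to watch
STAMP = "/tmp/populate_obo_hgnc.laststamp"     # where the last-seen mtime is recorded

def newest_mtime(paths):
    return max(os.path.getmtime(p) for p in paths)

def maybe_repopulate():
    current = newest_mtime(WATCHED)
    previous = float(open(STAMP).read()) if os.path.exists(STAMP) else 0.0
    if current > previous:
        # Placeholder invocation of the real repopulation script.
        subprocess.run(["/home/postgres/work/pgpopulation/obo_oa_ontologies/populate_obo_hgnc.pl"],
                       check=True)
        with open(STAMP, "w") as fh:
            fh.write(str(current))

maybe_repopulate()   # e.g. run nightly from cron
</pre>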

==== CENGEN ====
* Wen, Daniela, and Raymond will look at the datasets to work out how to incorporate them. Start simple.
* We will make links to pages on their site.