WormBase-Caltech Weekly Calls
Previous Years
2021 Meetings

January
February
March 4, 2021
Webinar Monday
- Juancarlos can send a reminder email; he just needs the text for the body of the email message
- Can we send the webinar reminder to WB staff? The staff@wormbase.org address cannot be used to register
- Raymond will try to remember to forward the reminder to staff
Neural Network (NN) evaluation
- Kimberly (and other curators) are looking through the NN results
- Previously, low-scoring SVM results(?) were prevented from being sent to authors
- We need an agreed-upon protocol for evaluation
- To avoid bias, we should randomly sample papers; most sampled papers will probably be negatives, so how do we represent high-, medium-, and low-scoring papers in the evaluation set without making curators review far more than 100 papers? (see the sampling sketch after this list)
- Michael will come up with a protocol and send around a list of papers for each data type (in a Google Doc)
- Perhaps evaluation should happen on non-curated, newer papers
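
As a concrete starting point for the sampling question above, here is a minimal sketch of stratified random sampling by NN score band, in Python. The paper IDs, the high/medium/low score cutoffs, and the per-band sample size are hypothetical placeholders, not values agreed on during the call.

```python
import random

random.seed(42)  # fixed seed so the same sample can be reproduced and shared

# Hypothetical paper records: (paper_id, nn_score). Real input would come from
# the NN output for one data type.
papers = [("WBPaper%08d" % i, random.random()) for i in range(1, 5001)]

# Assumed score bands; the actual cutoffs would need to be agreed on by curators.
BANDS = {"high": (0.9, 1.01), "medium": (0.5, 0.9), "low": (0.0, 0.5)}
PER_BAND = 30  # ~90 papers total, keeping curator workload under ~100

# Draw a random sample from each band so high-, medium-, and low-scoring
# papers are all represented in the evaluation set.
sample = {}
for band, (lo, hi) in BANDS.items():
    in_band = [p for p in papers if lo <= p[1] < hi]
    sample[band] = random.sample(in_band, min(PER_BAND, len(in_band)))

for band, picks in sample.items():
    print(band, len(picks), [pid for pid, _ in picks[:3]], "...")
```

The fixed seed keeps the draw reproducible, so the same list of papers could be circulated in the Google Doc for each data type.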