Hello! I am a bit busy. I will listen to the recording.
Hi Jian,
Hope the meeting went well and I am terribly sorry for not being able to make it today. I'll listen to the recording.
No drastic updates from my end, very minor ones:
- Imputation code has been merged, and I have started documenting methods and results from imputation. I'll upload the doc to Google Drive.
- Started comparing our imputation approach to the reference GAN publication that was posted on this forum.
In general, I have started reading about GANs for next steps, but I'm happy to pivot to anything that needs more attention.
Joining the meeting and coding session today and tomorrow will be a little difficult, since I need to tend to a healthcare issue.
Hi all -
We had a short meeting today. I did not end up recording it, but I will write an update here instead (faster this way).
- Evaluation of MICE: we looked over the mechanism of MICE and reasoned through the conditions under which it should be most effective. This exploration is summarized in these PowerPoint slides. Please take a look.
- Moving forward, I think we need to focus on evaluating the imputation, i.e., more cross-validation tests. One test that @jakubm suggested in our last meeting is to artificially remove some data (even though the data are very limited, so we need to be highly selective in doing this) and compare the imputed values with the real, artificially removed values. There may also be other statistical measures; we need a good literature review and a table of all of these measures (help needed!).
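A minimal sketch of this masking test, assuming a numeric pandas DataFrame. `mask_and_score` is an illustrative name, and the column-mean imputer is just a stand-in where a real run would plug in MICE:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def mask_and_score(df, impute_fn, n_mask=10):
    """Artificially remove n_mask observed values, impute, and compare
    the imputed values against the held-out originals via RMSE."""
    df = df.copy()
    observed = np.argwhere(df.notna().to_numpy())  # (row, col) of known cells
    picks = observed[rng.choice(len(observed), size=n_mask, replace=False)]
    truth = np.array([df.iat[r, c] for r, c in picks])
    for r, c in picks:
        df.iat[r, c] = np.nan                      # the artificial removal
    filled = impute_fn(df)
    est = np.array([filled.iat[r, c] for r, c in picks])
    return float(np.sqrt(np.mean((est - truth) ** 2)))

def mean_impute(df):
    """Column-mean fill, standing in for MICE in this sketch."""
    return df.fillna(df.mean())

demo = pd.DataFrame(rng.normal(size=(50, 5)), columns=list("ABCDE"))
print("RMSE:", mask_and_score(demo, mean_impute))
```

Other statistical measures from the literature review could be dropped in alongside RMSE inside the same loop.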
- Think about sequential imputation: we evaluated MICE and know that it currently uses a "random" imputation approach, filling data rows in random order. I think we can improve this with a non-random, sequential approach: first evaluate the existing information content of each row, then fill the rows that already have the most information first. However, this is a hypothesis; we can test it with a direct comparison. @vaishnavi.nagesh, does the library currently support this?
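The sequential idea can be sketched as follows; `sequential_impute` is a hypothetical name, and the running column mean is only a placeholder for whatever conditional model the library actually fits:

```python
import numpy as np
import pandas as pd

def sequential_impute(df):
    """Fill rows in order of existing information content: rows with the
    most observed values first, so each fill can draw on everything
    observed or already imputed (here via a running column mean)."""
    df = df.copy()
    order = df.notna().sum(axis=1).sort_values(ascending=False).index
    for idx in order:
        for col in df.columns:
            if pd.isna(df.at[idx, col]):
                df.at[idx, col] = df[col].mean()  # mean over observed + filled so far
    return df

demo = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [np.nan, 2.0, 4.0]})
print(sequential_impute(demo))
```

A direct comparison would run this ordering against a random row ordering on the same masked cells and compare the errors.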
See you all tomorrow at the coding session. I will go over the GitHub repo, and we can talk more about the dataset and the dashboard that is being developed.
Best,
Jian.
Dear all -
In preparation for the group coding session today, I want to make sure you have:
- VS Code installed on your local computer.
- A GitHub account. If you don't already have one, it might be a good idea to get one now. It takes a few steps and some minutes to set up (involving authentication with your phone). Then, log into your GitHub account from within VS Code.
- The Live Share extension installed in VS Code. This is easy: one click of a button.
Here is a test link if you need to test the connection: Visual Studio Code for the Web
I haven't tested this with a lot of people, and if we hit a bottleneck, we will switch to something else.
See you all soon,
Jian.
Hi @jgong and all: I will be able to join at 11:30 today.
Hello @jgong! This is what I saw when I tried opening the link to work on the code.
Here is the publication behind the archs4 dataset we were discussing in the meeting:
https://www.nature.com/articles/s41467-018-03751-6
It will give some insight into how it was made.
Jian.
I downloaded the most relevant data with respect to the genes for different eye conditions here.
Hi @vaishnavi.nagesh and all, I am still working on my actions from last meeting, which I summarize below. I have completed Steps 1 and 2, so here is the link to a text file with the list of genes that come from the gene sets that are related to the phenotypes. V, you could go ahead and impute these genes and see how it performs!
Step 1: Identify a list of gene sets most closely related to each phenotype. See list of gene sets here.
Step 2: Combine all the genes from the phenotype-related gene sets in a non-redundant manner.
Step 3: Use linear regression to identify which genes are most predictive of each phenotypic measurement.
Step 4: Identify how much overlap there is between the regression genes and the gene-set genes. If there's a lot of overlap (e.g., > 70%), use that list. If there isn't much overlap (e.g., < 50%), then impute both gene lists and see which imputation performs better.
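Steps 2 and 4 reduce to set operations; here is a tiny sketch with made-up gene names (the real inputs would come from the gene-set files and the Step 3 regression output):

```python
# Gene names below are invented for illustration only.
phenotype_gene_sets = {
    "photoreceptor": {"Rho", "Opn1sw", "Crx"},
    "retinal_stress": {"Gfap", "Crx", "Hspa5"},
}

# Step 2: combine all phenotype-related genes non-redundantly (set union).
combined = set().union(*phenotype_gene_sets.values())

# Step 4: fraction of regression-selected genes also found in the gene sets.
regression_genes = {"Rho", "Gfap", "Sod2"}  # stand-in for Step 3 output
overlap = len(regression_genes & combined) / len(regression_genes)
print(sorted(combined), overlap)
```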
Thanks so much Lauren! I'll get on this!
Hi @vaishnavi.nagesh and all, I solved the issue of mapping human-to-mouse ENSEMBL gene IDs that I mentioned this morning (I used the approach described here).
As a result, the text file now has a list of mouse ENSEMBL gene IDs that you can use for imputation, derived from the gene sets most closely related to each phenotype.
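For anyone reproducing this, the mapping step amounts to a join against an ortholog table; the table and IDs below are placeholders (a real table would be exported from a resource such as Ensembl BioMart):

```python
import pandas as pd

# Hypothetical ortholog table; all IDs are placeholders, not real ENSEMBL IDs.
orthologs = pd.DataFrame({
    "human_ensembl": ["ENSG_HUMAN_A", "ENSG_HUMAN_B"],
    "mouse_ensembl": ["ENSMUSG_MOUSE_A", "ENSMUSG_MOUSE_B"],
})

# Human gene IDs drawn from the phenotype-related gene sets.
human_hits = pd.DataFrame({"human_ensembl": ["ENSG_HUMAN_A"]})

# The mapping is an inner join on the human ID column.
mouse_ids = orthologs.merge(human_hits, on="human_ensembl")["mouse_ensembl"]
print(mouse_ids.tolist())
```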
I'm now working on Steps 3-4 above. Will share code once I'm finished.
Thanks so much Lauren! That Medium link is so cool!
Thanks for sharing Lauren!
Hi @jgong and team, here's an interesting idea: using images of the eye to predict/identify diseases in other parts of the body. Could we have an aspect of our digital twin that provides predictions for other health effects of spaceflight?
Dr. David Rhew, Microsoft's Global Chief Medical Officer & VP of Healthcare, speaks briefly about this idea at the Gilbert Beebe Symposium on AI: https://www.youtube.com/live/318i4lFrhHQ?si=KgkspHgV5FYLbSyT&t=1079
Here are some of the citations from his slide:
Chopra R, Chander A, Jacob JJ. The eye as a window to rare endocrine disorders. Indian J Endocrinol Metab. 2012 May;16(3):331-8. doi: 10.4103/2230-8210.95659. PMID: 22629495; PMCID: PMC3354836.
The Eyes Offer a Window into Alzheimer's Disease | UC San Francisco
Yuan TH, Yue ZS, Zhang GH, Wang L, Dou GR. Beyond the Liver: Liver-Eye Communication in Clinical and Experimental Aspects. Front Mol Biosci. 2021 Dec 24;8:823277. doi: 10.3389/fmolb.2021.823277. PMID: 35004861; PMCID: PMC8740136.
Flammer J, Konieczka K, Bruno RM, Virdis A, Flammer AJ, Taddei S. The eye and the heart. Eur Heart J. 2013 May;34(17):1270-8. doi: 10.1093/eurheartj/eht023. Epub 2013 Feb 10. PMID: 23401492; PMCID: PMC3640200.
Mendez I, Kim M, Lundeen EA, Loustalot F, Fang J, Saaddine J. Cardiovascular Disease Risk Factors in US Adults With Vision Impairment. Prev Chronic Dis. 2022 Jul 21;19:E43. doi: 10.5888/pcd19.220027. PMID: 35862513; PMCID: PMC9336192.
Glover K, Mishra D, Singh TRR. Epidemiology of Ocular Manifestations in Autoimmune Disease. Front Immunol. 2021 Nov 2;12:744396. doi: 10.3389/fimmu.2021.744396. PMID: 34795665; PMCID: PMC8593335.
20 Surprising Health Problems an Eye Exam Can Catch - American Academy of Ophthalmology
Loving this idea, Lauren! I will dig into this; it feels like a powerful application for the digital twin. Eye behavior is indeed predictive of a whole suite of other potential issues in the human body.
Even redness in the eye tells us a person is tired or didn't sleep well, and we make that inference all the time. Vision is one of our most powerful sensors of the world, but conversely, it is also a reverse sensor, a window into our own internal, holistic health.
Jian.
@dchander here is the Retinal group that was mentioned this morning.
This collaboration is a subgroup of the @AIMLawg (the meeting this morning was the @ALSDAawg).
Great idea!
Interesting! Examination of the eye is a separate section in any medical case assessment. Ranging from iron-deficiency anemia and jaundice to Kayser-Fleischer (KF) rings in Wilson's disease and anisocoria in CVAs, the eye really is a window to the body. During our internal medicine training, if we happened to skip examination of the eye, there was no doubt that the examiner would fail us. I still remember diagnoses being made from just a one-minute eye exam!
This is beautiful!
It is reminiscent of the centuries-old adage (taught in school training) that the "eyes are a window to the soul" (and, reading between the lines, to health), from "The Prince of Medicine", Galen, and "The Father of Optics", Hasan Ibn al-Haytham.
Sweet, connecting this to space
This sounds awesome! Would you like to use a CNN model that simply classifies diseased vs. healthy images, or an open-source large language model to create a chatbot that provides a Q&A facility? If we plan to use an LLM, we could train it on textual data (e.g., PDFs) describing disease etiology, preventive measures, and possible treatments, along with space medications to use on LDSF missions and the ISS. We could use research articles for training.