Dear all -
Our AI/ML digital twin subgroup is actively exploring the use of AI foundation models. I am posting this topic here as a placeholder before we start planning a larger project. I would appreciate your input and any experiences you can share.
Here is a list of some interesting models (an expanding list that I will curate and keep up to date in this post):
- TabPFN: Accurate predictions on small data with a tabular foundation model | Nature (see the usage sketch after this list)
- Geneformer: Transfer learning enables predictions in network biology | Nature
- scPlantFormer: A Lightweight Foundation Model for Plant Single-Cell Omics Analysis | Research Square
- RETFound: A foundation model for generalizable disease detection from retinal images | Nature
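As a concrete example of what "inputs and outputs" look like for one of these, here is a minimal sketch of calling TabPFN through the open-source `tabpfn` package's scikit-learn-style interface. The dataset is just a stand-in, and argument defaults may differ by version, so please check the package docs:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

# Any small tabular classification dataset works; this one is a placeholder.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# TabPFN is pre-trained: fit() essentially stores the training set as
# context, and predict() runs a forward pass conditioned on it
# (in-context learning), so there is no per-dataset training loop.
clf = TabPFNClassifier()
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```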
Questions:
- What are the usual or standard inputs for such models, and what are the expected outputs? How flexible are they across modalities (text, image, time series, tabular, biological sequences)?
- What are the expected gains from using such models compared with traditional ML methods? What are you thinking of or planning to use them for? (One way to measure this is sketched below this list.)
- What computational resources do these models require, and how much time does it take to set them up?
- Are there any other FMs that you are using? Could you please share?
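On the question of gains over traditional ML, one simple way to quantify this is to cross-validate a foundation model and a classical baseline on identical folds. A hedged sketch below, where the dataset and model choices are purely illustrative and the `tabpfn` interface is assumed to be scikit-learn compatible:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from tabpfn import TabPFNClassifier  # assumed sklearn-style estimator

# Placeholder dataset; swap in your own tabular task.
X, y = load_breast_cancer(return_X_y=True)

# Same folds and metric for both models, so the comparison is apples to apples.
for name, model in [
    ("TabPFN", TabPFNClassifier()),
    ("RandomForest", RandomForestClassifier(random_state=0)),
]:
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC {scores.mean():.3f} +/- {scores.std():.3f}")
```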
Thank you all!