Bolivian biologists building a better future for science and health
Three generations of scientists visit CZ Biohub SF to learn metagenomic sequencing techniques and analysis
For centuries, scientists have relied on staining techniques to visually enhance cellular structures for study under microscopes, from natural dyes in the 17th century to the advanced fluorescent markers in use today. However, modern staining methods for live-cell imaging are not only labor-intensive; they can also alter the biology of cells and limit the number of biological structures that can be imaged simultaneously, making it difficult for biologists to capture cellular dynamics that unfold slowly.
Can AI help? Indeed, researchers at the Chan Zuckerberg Biohub San Francisco have developed robust “virtual staining” techniques, which use deep learning to translate label-free images into fluorescence-like images. Virtually stained cells can be segmented and tracked for many hours or even days. Their paper has now been published in Nature Machine Intelligence; development was led by Shalin Mehta, a pioneer in label-free imaging and robust AI methods for dynamic imaging, and leader of the CZ Biohub SF Computational Microscopy Platform.
Shalin Mehta and Carolina Arias discuss Cytoland.
Mehta calls their robust virtual staining approach “Cytoland”: “cyto,” a prefix meaning “cell,” and “land,” referring to landmark organelles such as nuclei and cell membranes. By removing the need to put fluorescent tags on every cellular landmark, Cytoland allows scientists to reserve fluorescence for the proteins and organelles they really wish to study. What’s more, the same models generalize across different microscopes, cell types, and experimental conditions.
“By leveraging recent advances in deep learning, specifically self-supervised pre-training and data augmentations that simulate how microscopes form images, we trained networks to predict fluorescence images directly from phase contrast or brightfield images,” says Mehta, who is also an Allen Distinguished Investigator. “This frees up the light spectrum for other tasks like imaging sensors of cell function or performing photomanipulation.”
The technology is built around a novel convolutional neural network architecture known as UNeXt2. To make the model generalize across different cell types, microscopes, and experimental conditions, the team employed a combination of supervised and self-supervised learning, as well as physics-informed data augmentations. This enables the algorithm to perform reliably even in scenarios it wasn’t explicitly trained on.
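To make the idea of physics-informed data augmentation concrete, here is a minimal conceptual sketch, not the authors’ code: it shows two toy augmentations of a label-free image that mimic how microscope optics can vary between acquisitions (out-of-focus blur and phase-contrast inversion). The function names, the box-blur point-spread function, and the contrast-inversion rule are illustrative assumptions, not the transforms used in the published Cytoland pipeline.

```python
import numpy as np

def simulate_defocus(image: np.ndarray, k: int = 3) -> np.ndarray:
    """Approximate out-of-focus blur with a k x k box average,
    a crude stand-in for a defocused point-spread function."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image)
    h, w = image.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def simulate_contrast_inversion(image: np.ndarray) -> np.ndarray:
    """Phase-contrast intensity can appear inverted around the background
    level depending on the optics; flip contrast about the image mean."""
    return 2 * image.mean() - image

# Toy "phase image": random intensities around a background of 0.5.
rng = np.random.default_rng(0)
phase_image = rng.normal(loc=0.5, scale=0.1, size=(64, 64))

blurred = simulate_defocus(phase_image, k=3)
inverted = simulate_contrast_inversion(phase_image)
```

Augmentations like these expose the network to optical variation it will meet at inference time, which is one way to encourage the generalization across microscopes and conditions described above.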
Using Cytoland, scientists can observe, for example, cells as they develop in an embryo or as they respond to attack by a virus. The models are already in use by biologists at CZ Biohub SF and are accessible to all researchers on the Chan Zuckerberg Initiative’s Virtual Cells Platform, which includes a guide for running model inference and tutorials demonstrating applications in neuromasts and HEK cells. An interactive demo is also available on the Chan Zuckerberg Initiative’s Hugging Face space.
Cytoland models were built, validated, and disseminated through a collaborative effort between the interdisciplinary teams at CZ Biohub SF and CZI’s Virtual Cells Platform.