Note: This essay was originally published as part of my PhD thesis in Aug. 2022.
Deep down, we all want to do meaningful work. As a scientist, I want to do research that expands our collective knowledge and leads to a better future. But these broad aspirations are often unhelpful in figuring out which questions to work on, especially when first starting a PhD. Should you do basic or translational research? Do you pick a trendy or dormant field? While there are no universal answers, I think everyone interested in accelerating scientific progress should consider working on new technologies.
One way of thinking about how new technologies drive scientific progress is through tech trees. Tech trees are a concept from Civilization, a game that lets you play as the ruler of a global power over thousands of years. As ruler, it is your responsibility to decide which new technologies your civilization should research. At first, not all options are available: each technology has a set of prerequisites that enable it, as well as a set of successors it enables. The most advanced options, like space travel, nukes, and the Internet, are only unlocked if you spend sufficient resources early on to discover fundamentals such as mathematics, physics, and electricity. As you develop a foundation of knowledge, the dependency network branches outwards, revealing a tree of future possibilities.
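For the programmatically inclined, the structure here is just a dependency graph. Below is a minimal sketch in Python; the technologies and their prerequisites are illustrative placeholders, not taken from the game:

```python
# A minimal sketch of a tech tree as a dependency graph.
# Technology names and prerequisites are illustrative, not from the game.
TECH_TREE = {
    "mathematics": [],
    "physics": ["mathematics"],
    "electricity": ["physics"],
    "computers": ["electricity", "mathematics"],
    "internet": ["computers"],
}

def unlockable(researched: set[str]) -> set[str]:
    """Technologies whose prerequisites have all been researched."""
    return {
        tech
        for tech, prereqs in TECH_TREE.items()
        if tech not in researched and all(p in researched for p in prereqs)
    }

print(unlockable({"mathematics"}))             # {'physics'}
print(unlockable({"mathematics", "physics"}))  # {'electricity'}
```

Research enough fundamentals, and the set of unlockable options branches outwards on its own.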
While Civilization is just a game, the framework of tech trees can be helpful for thinking about scientific progress in the real world. Every technology can be seen through the lens of the foundational research that made it possible and the future discoveries it enables. However, there is one major difference between the game and reality: In the game, you can scroll to the end of the tech tree to decide whether going down a particular branch will pay dividends in the future. In the real world, the future is unknown, so it's up to us to imagine new technologies.
I find this prospect both thrilling and a bit terrifying. It's terrifying to consider that your hard work may lead to a dead end, or that advances in an adjacent field might make your efforts obsolete. But it's thrilling to believe there may be circumstances where all of the prerequisite knowledge for a solution has existed for years, just waiting for someone to fit the puzzle pieces together. To me, this possibility is where the adventure in science truly lies: how can we combine all of what we know to see into the unknown?
Creating a breakthrough technology can be likened to unlocking a sense, allowing us to perceive the world in new ways. In Sequencing is the new microscope, Laura Deming describes a recent paradigm shift between two such technologies. The first is the microscope, which was invented in the 1600s and allowed us to see cells for the first time. Early microscopes were not much more than a light source and optical lenses used to illuminate a sample. But as Deming explains, advancements in physics allowed us to devise new technologies for seeing biology in higher resolution:
As the progress of physics ramped up in the early 1900s, so did biology. JJ Thomson's device for looking at cathode rays became the first mass spectrometer. X-rays were found — now used, not just in hospitals for treatment and diagnosis, but also as invaluable biological reagents. NMR gave rise to MRI. Einstein's general relativity didn't make much of a dent, but the photoelectric effect allowed us to understand and manipulate fluorescence. Microscopes got way better — the Nobel in Physics was won for phase-contrast microscopy, then electron microscopy. Marvin Minsky patented the first confocal microscope.
The technology that supplanted microscopy was DNA sequencing, pioneered by Fred Sanger in 1977. But why was sequencing such a breakthrough? In science fiction, a common trope is that the true names of objects or people hold magical powers; in reality, biological sequences are our version of true names. If the microscope opened our eyes to the world of biology, the sequencer taught us to hear its language. By sequencing the world around us, we learned the hidden rules of how sequence encodes function. We then exploited this knowledge to co-opt everything from fluorescent jellyfish powers (GFP) to bacterial defense mechanisms (CRISPR) for our own designs. Deming theorizes that our rapid adoption of this language accelerated the rate of scientific discovery by creating what's known as a flywheel effect:
Sequencing has become the new microscope. It's easier for us to cross-link, fragment and sequence a full genome to figure out its 3-dimensional structure than it is for us to figure that out by looking at it head-on. We used to rely on photons and electrons bouncing off of biological samples to tell us what was going on down there. Now we're asking biology directly — and often the information we get back comes through a natural biological reagent like DNA. Which we then sequence using motors made from the DNA itself! […] This is important because it's a self-reinforcing loop. The more things in biology we discover today, the faster we can discover things tomorrow.
As technologies improve faster and faster, so does our ability to describe the world of biology. When Robert Hooke saw cells under a microscope for the first time, he shared his findings as detailed illustrations in his famous work, Micrographia. With modern sequencing methods, we can encapsulate individual cells in oil droplets that are uniquely labeled by DNA barcodes, and then add a cocktail of molecular motors and glues to quantify RNA, chromatin accessibility, and surface proteins all at once. This approach is aptly called DOGMA-seq because it measures each component of biology's central dogma. Unfathomably, we've gone from hand-drawn illustrations to being able to measure the expression of all 20,000 human genes inside a single cell. The cost to sequence the first human genome was $2.7 billion; 20 years later, we've pushed it under $1,000, a rate of improvement that far exceeds Moore's Law.
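A quick back-of-the-envelope calculation makes the comparison concrete (assuming the usual Moore's Law doubling period of two years):

```python
# Back-of-the-envelope comparison of sequencing cost decline vs. Moore's Law.
first_genome_cost = 2.7e9   # dollars, for the first human genome
current_cost = 1e3          # dollars, ~20 years later
years = 20

fold_improvement = first_genome_cost / current_cost   # ~2,700,000x cheaper
moore_fold = 2 ** (years / 2)                         # doubling every 2 years -> ~1,024x

print(f"Sequencing: {fold_improvement:,.0f}-fold cheaper")
print(f"Moore's Law over the same period: {moore_fold:,.0f}-fold")
```

A roughly 2,700,000-fold improvement against Moore's ~1,000-fold: three orders of magnitude faster.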
While DNA sequencing is eating the world, microscopy is quietly having a renaissance driven by advances in computer science. Even before this resurgence, however, imaging had long been the preferred technology for making certain types of measurements. For decades, pathologists have looked at cell shape, size, and structures to make medical diagnoses, while in basic research, live cell imaging is still the gold standard for reconstructing cell lineages. But the key advancement that has given microscopy new life is the idea that you don't need an actual person to analyze your images. New deep learning methods can crunch through millions of images in hours and even identify patterns invisible to the human eye. Though often overhyped, these approaches are gradually making their way into the clinic, while also being used by companies like Recursion to screen millions of drug candidates.
Progress in microscopy and sequencing shows no signs of slowing in the near future. In microscopy, we will take more images and train deeper neural nets, whereas in sequencing, we will measure more cells and add more modalities. I have no doubt there are still many, many discoveries to be made using both technologies. But if we want to turn the flywheel faster and speed up the rate of progress, we need to turn our eyes to what's next. One direction that people are particularly excited about right now is the idea that microscopy and sequencing provide largely orthogonal information. Microscopy provides spatial context, whereas sequencing reads out molecular identities; they let you touch different parts of the proverbial elephant. But what if we could zoom out and see the bigger picture by combining them?
The usefulness of combining different types of information can often be abstract, so let's try an analogy. Imagine you know absolutely nothing about the geography of San Francisco, but want to learn more. As a starting point, I provide you with two sources of info: a photo of the city taken from above, and an article describing the city.
Let's look at the photo first:
There are a thousand things you could say about this photo, but for simplicity, here are two:

- The city is surrounded by bodies of water, with two bridges connecting it to land: one red, one gray.
- Downtown is packed with skyscrapers of varying heights.
In this analogy, this photo stands in for a microscopy image. It's a snapshot of three-dimensional space that accurately shows the distance between bridges (xy) and the height of each skyscraper (z). Photos and microscopy images both have high spatial resolution. In terms of limitations, this photo does not tell you the names of things: for example, you can't identify the bridges beyond saying the one at the top of the photo is red and the other is gray. This equates to microscopy images having low molecular resolution.
Now let's take a look at a snippet of the article:
The full article is far longer, but for the sake of this exercise, here are a few tidbits:

- San Francisco is surrounded by two bodies of water: the Pacific Ocean and the Bay.
- Its most famous bridges are the Golden Gate Bridge and the Bay Bridge.
- Several islands, including Alcatraz, sit offshore.
This article is a stand-in for sequencing in our analogy. It lets us learn the names of things, similar to how sequencing lets us read out molecular identities (high molecular resolution). On the flip side, the article lacks info about physical locations, e.g. how far each island is from the mainland, just as sequencing lacks info about the 3D location of each molecule (low spatial resolution).
What can you say about San Francisco from these two sources alone? The photo and article both provide different types of information, but they are largely disconnected. You know the city is surrounded by bodies of water, and you also know these bodies of water are the Pacific Ocean and the Bay, but you have no way of knowing which one is where. To connect them, you need another source that bridges the two types of information: what you need is a map.
Maps can come in many forms: some strive to accurately represent geography, while others (such as this one) distort geography to highlight landmarks. The defining feature of maps is that they integrate a representation of space with the names of things. Using this map, we can now say the western body of water is the Pacific Ocean, while the eastern one is the Bay. The red bridge from the photo is the Golden Gate (duh), while the gray one is the Bay Bridge.
If you wanted to see the Golden Gate up close, you would look at a photo; if you wanted to know the name of the architect, you would find an article. In this sense, maps are not a replacement for photos or writing. Instead, they let you transfer information between what you see and what you read, enhancing both of the original sources. So if microscopy is analogous to photography, and sequencing is like writing, how do we create a new technology that bridges the two like a map?
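If you like to think in code, here is the analogy reduced to its essence: the map is a shared key that lets information flow between the two sources. All the toy data below are made up for illustration:

```python
# Toy illustration: a "map" links what you see (positions) to what you read
# (names), letting information flow between the two sources.
photo = {        # spatial info: appearance keyed by position (like microscopy)
    (0.2, 0.9): "red bridge",
    (0.8, 0.6): "gray bridge",
}
article = {      # identity info: facts keyed by name (like sequencing)
    "Golden Gate Bridge": "opened in 1937",
    "Bay Bridge": "connects the city to Oakland",
}
landmark_map = { # the bridge between the two: name -> position
    "Golden Gate Bridge": (0.2, 0.9),
    "Bay Bridge": (0.8, 0.6),
}

for name, pos in landmark_map.items():
    print(f"The {photo[pos]} at {pos} is the {name} ({article[name]})")
```

Neither source alone can produce that final sentence; only the shared key can.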
How does one go about creating a new technology anyhow? In the past, I assumed this was the sphere of science's chosen "geniuses", those born with more creativity in their pinky finger than the rest of us mortals combined. But lately, I've come to believe that scientific creativity is less nature, and more nurture. The person responsible for this change is Ed Boyden, an MIT neuroscientist whose research group is a renowned haven for outside-the-box thinkers. If you watch any of Ed's public seminars, you'll notice he constantly shares strategies for creative thinking (e.g. tiling trees). In doing so, he espouses a philosophy that attracts curious minds like moths to a flame: anyone can do creative science. I liken this message to the famous "anyone can cook" line from Ratatouille: not everyone can be a creative scientist, but creativity can come from anywhere.
One of Ed's favorite strategies for creativity is inversion: if you're stuck on a scientific problem, try flipping it on its head. This strategy was key to the invention of expansion microscopy. Several years ago, Ed, Paul Tillberg, and Fei Chen were brainstorming ways to image large regions of the brain at high resolution. The logical approach would be to build a bigger, faster super-resolution microscope, but these machines are crazy expensive and already operate near the limits of physics. Then came the key inversion: instead of making our microscopes see better, let's make our samples bigger! This breakthrough made the next steps much clearer. The team figured out how to embed samples in hydrogels that expand like a baby diaper when placed in water. This spreads out molecules that were originally packed close together, allowing you to distinguish them with an ordinary confocal microscope, which can image large regions as intended.
So back to our essential question: how do you both image and sequence a cell? It turns out that inversion also works here, but first, we need to understand how sequencing works. The first step is to extract and amplify DNA. Next, you load the amplified DNA onto a flowcell, which is a surface containing billions of evenly spaced nanowells. The flowcell then goes into the sequencer, where it undergoes successive rounds of four-color imaging, as seen below. In this video, each spot corresponds to an amplified DNA molecule, the four colors correspond to fluorescently labeled DNA bases (ACGT), and each frame corresponds to a round of sequencing. Finally, this video is used to read out the order of nucleotides for each DNA molecule. Now, did you catch the key point? A DNA sequencer is actually a simplified microscope! (Inversion alert!) So what if we could use a traditional microscope to perform sequencing inside of cells? This idea is known as in situ sequencing.
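To make the "sequencer as microscope" point concrete, here is a toy base-calling sketch: each spot gets four intensity values per imaging round (one per base channel), and the brightest channel calls the base. Real pipelines add image registration, crosstalk correction, and quality filtering, so treat this purely as a cartoon of the core idea:

```python
import numpy as np

# Toy base-caller: intensities[spot, round, channel], channels ordered A, C, G, T.
# Real pipelines add registration, color crosstalk correction, and QC.
BASES = "ACGT"
rng = np.random.default_rng(0)

n_spots, n_rounds = 3, 8
intensities = rng.random((n_spots, n_rounds, 4))  # stand-in for the imaging video

calls = intensities.argmax(axis=-1)               # brightest channel per spot, per round
reads = ["".join(BASES[b] for b in spot) for spot in calls]
print(reads)                                      # one 8-base read per spot
```

Everything a sequencer does beyond this is optics and fluidics; the readout itself is just imaging plus bookkeeping.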
The term in situ means in its original place or position. To perform in situ sequencing, we first use chemical fixatives to keep DNA and RNA in their original places instead of extracting them from cells. After amplification, we then mimic the sequencing chemistry that occurs within an Illumina flowcell, only here, it all occurs within cells. This lets us measure both the sequence (ACGT) and the 3D spatial position (xyz) of each molecule.
In situ sequencing was first demonstrated for RNA by Je Lee, Evan Daugharthy, and George Church with FISSEQ in 2014. But subsequent progress was slow, since sequencing inside the cluttered environment of a cell is both time-consuming and technically challenging. And while 30 rounds of sequencing are sufficient for measuring RNA, that isn't nearly enough for other modalities, such as the genome. If you're interested in the creative leaps it took to solve these challenges, check out our method for in situ genome sequencing.
In this paper, you will learn how combining sequencing and imaging in a single technology allows us to see what the genome looks like at the very first stages of life. But to avoid redundancy, here I will skip ahead to discuss what in situ technologies might look like in the future.
To imagine a new technology is to envision the future. Noubar Afeyan, founder of Flagship Pioneering (the venture firm behind Moderna), has thought at length about how to systematically create new technologies using a process called emergent discovery. The whole piece is worth a read, but my favorite part is the idea that innovation requires taking a leap of faith to break free from the status quo:
Almost by definition, breakthroughs in their embryonic stages defy existing theories, principles, and bounds of experience. As such, they should be considered leaps of faith. So to foster emergent discovery in your organization, you need to make it acceptable to consider the seemingly impossible.
Afeyan explains that once you take this leap, you can work backwards to discover what you should currently be doing. Let's try this together: Imagine you are in charge of designing the microscope-sequencer of the future, a device vastly beyond the limits of our current technology. For now, let's put aside how it will work, and focus on what it should be able to do. Personally, I can think of three fundamental capabilities:

- Identify every molecule inside a cell, across all modalities
- Resolve each molecule's position in 3D space precisely enough to capture molecular interactions, even in thick, intact samples
- Follow these measurements over time to capture cellular dynamics
Collectively, these capabilities would allow us to visualize everything happening within cells in real time, ideally without any negative effects. This technology would be invaluable for preventing disease, extending healthy lifespan, and perhaps even augmenting ourselves to explore the vast reaches of the universe. Slightly ridiculous, right? But the point of taking a leap of faith is to consider the impossible and translate that vision into tangible first steps you can take today. Here are some roadmaps towards the in situ technologies of the future.
[Despite my best efforts, the next three sections get fairly technical. If you're not super familiar with genomics, I won't be offended if you skim or skip them.]
How might we eventually identify all of the molecules within cells? Examining current technological trends might offer clues. In 2019, Nature Methods anointed single-cell multi-omics as its Method of the Year, highlighting an emerging class of assays used to make several types of measurements from the same cell. A few prominent examples include:
Though the breadth of modalities continues to rise, certain experimental obstacles may disrupt this upward trend. In particular, simultaneously measuring many proteins and epigenetic marks may be difficult because these methods rely on oligo-conjugated antibodies, which notoriously have specificity issues. Epigenetic marks are extra challenging because diploid cells carry only two copies of each genomic locus, leaving very little starting material to read from. In this case, the unique strength of sequencing, reading out nucleotide order, is simultaneously a weakness, since our ability to read out multiple modalities may ultimately be constrained by the finite quantity of DNA you can extract from a cell.
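To see why this is limiting, consider a toy calculation; the 50% capture efficiency below is an assumed number, purely for illustration:

```python
# Toy model of the finite-DNA constraint; the capture efficiency is an
# assumed number, not a measured one.
copies_per_locus = 2     # a diploid cell has two copies of each locus
p_capture = 0.5          # assumed chance an assay converts a given copy

# One assay gets both copies of a locus:
p_one_assay = 1 - (1 - p_capture) ** copies_per_locus   # 0.75

# Two assays must split the copies (one copy each); each assay now sees
# the locus only if its single allocated copy is captured:
p_both_assays = p_capture ** 2                          # 0.25

print(p_one_assay, p_both_assays)
```

Every additional extraction-based modality divides the same two molecules further, so coverage collapses quickly.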
Luckily for us, in situ technologies don't rely on DNA extraction. In a recent paper, Takei and colleagues showed it is possible to jointly measure DNA, RNA, proteins, and epigenetic marks in situ in the same sample. In this particular approach, epigenetic measurements were read out through microscopy instead of sequencing, which yielded lower genomic resolution, but enabled the authors to discover nuclear zones defined by unique combinations of histone marks. Meanwhile, in situ proteomics methods such as CODEX can sidestep issues of antibody specificity by using successive rounds of immunofluorescence to image over 50 proteins, rather than trying to sequence all of them at once. Though in situ technologies come with certain trade-offs, I'd argue they offer a much clearer path to measuring everything inside a cell than single-cell multi-omic approaches.
Why is it so important to measure everything at once, as opposed to performing many single-modality measurements? One good reason is experimental throughput. Instead of limiting ourselves to mostly normal and well-characterized disease states, we want to explore the full perturbation landscape with genome-scale CRISPR methods or vast libraries of pharmacological compounds. We'd also like to associate the molecular state of a cell and its perturbations with image-based phenotypes, such as cell division defects or cell-to-cell interactions. With single-modality measurements, we'd have to perform millions of experiments to cover every combination of modality, perturbation, and phenotype, but with molecular multiplexing, we can constrain our search space to the latter two. There's also a second, compounding advantage to measuring everything at once: by identifying each molecule's neighbors in 3D space, we can also measure all molecular interactions.
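Some rough bookkeeping illustrates the savings; every count below is an illustrative assumption rather than a real experimental number:

```python
# Rough experimental bookkeeping (all counts are illustrative assumptions).
n_modalities = 5           # e.g. RNA, DNA, protein, methylation, chromatin
n_perturbations = 20_000   # a genome-scale CRISPR library

# One modality per experiment: every (modality, perturbation) pair needs
# its own run, and phenotypes can't be matched across runs.
single_modality_runs = n_modalities * n_perturbations   # 100,000

# Multiplexed measurement: one run per perturbation covers all modalities
# (and the image-based phenotypes come from the same cells for free).
multiplexed_runs = n_perturbations                      # 20,000
print(single_modality_runs, multiplexed_runs)
```

The multiplier only grows as modality and compound libraries expand, which is how "millions of experiments" arrives so quickly.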
Measuring molecular interactions is not as straightforward as it sounds. While multi-omic sequencing approaches can read out several layers of regulation in a cell, they struggle to distinguish between direct interactions and correlative associations. Is the methylation of a regulatory region causing lower expression of a nearby gene? Or are both simply downstream products of the same pathway that never interact with each other? While most sequencing approaches cannot make this distinction, in situ technologies can identify direct interactions based on 3D distance, given sufficient spatial resolution.
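Given sufficient spatial resolution, the computational side of this is conceptually simple. Here is a sketch that uses a k-d tree to flag molecule pairs within an arbitrary 50 nm cutoff; both the coordinates and the cutoff are made up:

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy sketch: call two molecules candidate "interactors" if they sit within
# a distance cutoff in 3D. Coordinates and the 50 nm cutoff are made up.
rng = np.random.default_rng(1)
positions_nm = rng.uniform(0, 1000, size=(200, 3))  # 200 molecules in a 1 um cube

tree = cKDTree(positions_nm)
pairs = tree.query_pairs(r=50.0)                    # index pairs within 50 nm
print(f"{len(pairs)} candidate interactions")
```

The hard part is not the geometry but acquiring positions accurate enough that "within 50 nm" means something biologically.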
All microscopy methods face a fundamental trade-off between spatial resolution, plex, and throughput. For instance, electron microscopy has the highest spatial resolution (~0.1 nm), but minimal plex (only one "color"). Super-resolution methods like STORM, which use randomly blinking fluorophores to distinguish crowded molecules, enable high-resolution fluorescence imaging, but are low-throughput because samples must be imaged repeatedly. Though advancements in physics may gradually minimize these trade-offs, what paths are available to us to increase the spatial resolution of in situ technologies by several orders of magnitude?
Expansion microscopy, which we introduced earlier as an example of creative inversion, is a form of super-resolution imaging compatible with in situ technologies. With a method called ExSeq (Expansion Sequencing), the Boyden lab showed that expansion can drastically improve both the yield and spatial resolution of in situ RNA sequencing. One reason I am particularly optimistic about expansion is that, in theory, its spatial resolution scales indefinitely. By varying the chemical composition of your hydrogel, you can increase the amount it expands in water (i.e. its expansion factor) from 4-fold all the way up to 24-fold. Although larger expansion factors make gels harder to handle, these capabilities offer a theoretical path towards one day capturing molecular interactions at biology's tiniest scales. The approach is also practical today: in the ExSeq paper, even basic 4-fold expansion allowed RNA molecules to be localized to nanoscale compartments such as dendritic spines.
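The arithmetic behind this optimism is simple. Assuming a conventional confocal resolves roughly 250 nm:

```python
# Effective resolution after expansion: physically spreading molecules apart
# lets a conventional microscope resolve finer pre-expansion detail.
diffraction_limit_nm = 250  # rough confocal resolution (an assumed figure)

for expansion_factor in (4, 24):
    effective = diffraction_limit_nm / expansion_factor
    print(f"{expansion_factor}x expansion -> ~{effective:.0f} nm effective resolution")
# 4x  -> ~62 nm (enough to resolve compartments like dendritic spines)
# 24x -> ~10 nm
```

The physics of the microscope never changes; the sample simply meets it halfway.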
Another reason to be excited about expansion microscopy is that it enables in situ technologies to be applied to thick 3D samples. Right now, nearly all spatial technologies are designed for thinly-sliced tissue sections that contain a single layer of cells (10-20 microns thick). While 2D spatial information is better than nothing, biology exists in 3D, and ideally our measurements should reflect that. Because expansion involves embedding your sample in a 3D hydrogel, we can take advantage of tissue clearing techniques to remove debris and improve the diffusion of enzymes. These techniques unlock spatial profiling in thick samples where 3D context is critically important, such as embryos, organoids, or neural synapses. In the future, it may even be possible to measure all the molecular interactions within expanded intact organisms. But even if we reach this point, these measurements still represent a single snapshot in time. How can we expand our capabilities to measure the dynamic nature of cellular processes?
Now that we've considered bringing in situ measurements into 3D, let's explore the 4th dimension: time. For all of sequencing's strengths, perhaps its biggest limitation is its destructive nature, which precludes dynamic measurements of cells. Most technologies that claim to capture temporal information from sequencing actually take individual measurements of different cells at varied time points, in contrast to repeatedly measuring a single cell over time. One exception is this method that sorts cells into microfluidic wells for live imaging, followed by single-cell sequencing. While technically impressive, this approach is limited in throughput, and each cell is measured in isolation rather than in its native tissue context.
Unfortunately for temporal measurements, one technology we won't see any time soon is the ability to sequence in live cells*. Although this would seem to limit us to the combination of live imaging and sequencing described above, I believe in situ technology has the potential to make this approach far more versatile: instead of sorting cells into individual wells, we can simply perform live imaging on all of them at once, followed by fixation and multiplexed in situ measurements. Because the latter preserves spatial structure, it is then trivial to link the final cell locations from the live imaging with the corresponding fixed measurements. This framework has already been demonstrated in a paper combining calcium imaging (to record electrical activity) with RNA FISH (to identify neuronal subtypes), offering a generalizable template for linking live cell phenotypes to fixed in situ measurements.
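The linking step itself can be as simple as nearest-neighbor matching between cell centroids in the last live frame and centroids in the fixed sample, assuming the two coordinate systems have been registered. A sketch with simulated positions:

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch of linking modalities: match each cell's final centroid from the
# live-imaging movie to the nearest centroid found after fixation.
# Positions are simulated; real data needs registration between the two.
rng = np.random.default_rng(2)
live_final_xy = rng.uniform(0, 500, size=(100, 2))          # last movie frame
fixed_xy = live_final_xy + rng.normal(0, 2, size=(100, 2))  # small post-fixation shift

tree = cKDTree(fixed_xy)
dist, match = tree.query(live_final_xy)  # nearest fixed cell for each live cell
print(f"median match distance: {np.median(dist):.1f} px")
```

Each matched index then carries a full live-imaging history into the multiplexed molecular measurements.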
Since this approach to temporal measurements requires both live imaging and in situ technologies, we must also develop better methods for time-lapse imaging of multiple markers at once. One promising solution is a computational imputation technique known as label-free microscopy. To set up this technique, you first capture a basic imaging modality (e.g. brightfield, phase contrast) in parallel with immunostaining for cellular structures such as the nuclear lamina or mitochondria. Next, you train deep learning models to map from the basic modality to each immunostain. Lastly, you perform continuous live imaging in the basic modality and apply your models to predict each immunostain at every time point. Though label-free microscopy is still at the proof-of-principle stage, it offers the promise of one day letting you predict the dynamics of any protein for "free" from basic live imaging data.
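Here is a deliberately tiny sketch of that training setup in PyTorch, with a two-layer network standing in for the deep architectures real methods use and random tensors standing in for paired images:

```python
import torch
import torch.nn as nn

# Minimal sketch of label-free imputation: learn a pixel-wise mapping from a
# basic modality (brightfield) to a fluorescence stain. Real methods use
# U-Net-style networks and large paired datasets; this is a toy stand-in.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

brightfield = torch.randn(8, 1, 64, 64)  # fake paired training images
stain = torch.randn(8, 1, 64, 64)        # matched immunostain channel

for step in range(100):
    pred = model(brightfield)
    loss = nn.functional.mse_loss(pred, stain)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At inference time, apply the trained model to every frame of a live movie
# to get a predicted stain channel "for free".
```

The appeal is that the expensive staining is paid once at training time, while the live experiment only ever needs the cheap modality.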
While live imaging followed by fixed in situ measurements might reveal how past behavior affects cell state, we also want to learn how cell state predicts future behavior. Given that we can't perform live imaging of a cell after fixed measurements, how can we accomplish this? One possibility is to train deep learning models that can foresee the future. Several years ago, Buggenthin and colleagues demonstrated that live imaging can be used to predict a stem cell's lineage prior to the appearance of known molecular markers. In theory, you could train similar models for any cellular system with a heterogeneous response, such as drug resistance or epigenetic reprogramming, and then perform in situ measurements at an early time point to identify which molecular states are most commonly associated with each predicted fate.
By synthesizing the very best of microscopy and sequencing, the in situ technologies of the future will let us perceive biology at unprecedented resolution. And the new discoveries they enable will propel us to create even more advanced technologies that make the future a brighter place.
Why should we care about the future? I recently watched a video speculating on when the last human would live. The main premise was that if we manage to avoid cataclysmic events that wipe out all of humanity (a big if, but stay with me), humans will conservatively survive for at least a million years. And if we solve global warming and space travel and other future issues, we may survive many, many orders of magnitude beyond that. But given that we've existed for only 200,000 years, it's exceedingly likely we live right at the start of human history rather than towards the end. On the cosmic scale of human civilization, we are still discovering the foundational technologies.
Happy exploring,
ZC
My original plan for my PhD dissertation was to staple all of my papers together. I know many people disagree with this practice, but the alternative, rewriting published work into thesis chapters, has always seemed patronizing to me. If I've published actual articles, why should I waste time re-writing them in a form nobody will read? Then some of my collaborators' experiments failed, and during my newfound free time, I thought a lot about what it really means to share your science.
Here's what I came up with: the ideal purpose of sharing science is to stimulate discussion, inspire new ideas, and in the best cases, shift the collective consciousness.
Though Science Twitter is known for its disagreements, it's safe to say we all agree on one thing: our current system for sharing science does not live up to our ideals. We could discuss the issues with scientific journals all day, but the problem with actual papers is they are both longer and emptier than we would like. Modern papers are filled with pages and pages of supplementary figures to appease cantankerous reviewers, while being devoid of the thought-provoking speculations and musings once found in older literature. Most PhD dissertations, on the other hand, are little more than glorified lab notebooks, written more for obsessive completeness than for readability to fellow scientists.
Thankfully, many people are experimenting with better ways of sharing science. Preprints let us share our work faster and theoretically open up peer review to the public, but are still largely beholden to the formatting and style whims of the traditional publishing system. Arcadia is piloting open notebooks on PubPub, but for now, this operates at the institutional level rather than as a choice an individual can make. Personally, I believe the long-term solution is not a single approach, but a buffet of options for every circumstance. Here, I'd like to advocate for a format that may appeal more to those in academia: the scientific essay.
Scientific essays are intriguing because they are free to be everything papers are not: opinionated, informal, and dare I say, fun to read. Instead of every sentence being assembled by committee to avoid a reviewer's wrath, essays offer an opportunity for unfettered scientific expression. We even already have a platform for distributing them: Twitter! Over the past few years, Twitter is where I've discovered my favorite scientific essays, some of which I'll link here, here, and here. It doesn't escape my attention that none of these were written by scientists in academia. While I'd love for this to change overnight, I realize that academics don't have many tangible incentives to write. So for now, here's my more concrete suggestion: PhD students should write part of their dissertation as a scientific essay.
In the spirit of being the change I want to see, I have shared my own attempt here. It was certainly harder than I thought it would be! After years of writing papers, it was difficult to deprogram the jargon from my brain and write in a more accessible way. I also worry that people will think I'm boring or stupid or pretentious for believing my thoughts are worth sharing. But in the end, my goal was to write the essay I would've wanted to read as a 1st year grad student, and I feel I've put forth my best effort.