What if we could make comprehensive, high throughput analysis of the proteome accessible to all researchers and clinicians? Doing so would enable advances in our understanding of protein function, the creation of new therapeutics and diagnostics based on the proteome, and more.
Nautilus, the home of a brand-new approach to proteomic analysis, was conceived with that very goal in mind. Many successful companies are built on incremental thinking and advancement. But to accomplish our audacious goal of exploring the full proteome, we needed to start with a blank sheet of paper.
The leaps in proteomic technology and innovation that drive Nautilus are powered by cross-disciplinary thinking and collaboration — two characteristics vital to true breakthroughs. Indeed, these have become core values at Nautilus. In this series of blog posts, we’ll give you an inside view on how cross-disciplinary thinking and collaboration continue to propel us into a proteomics-powered future.
What is proteomics?
Proteomics is the study of all of the proteins inside a cell, tissue, or organism, collectively known as the proteome. Studying the proteome lets researchers see how the billions of proteins inside cells interact to carry out essential processes. By giving us new insights into biology, the applications of proteomics research could unlock new treatments for cancer, Alzheimer’s, and many other major diseases.
How the intersection of science and magic created a new path to proteomics profiling
Our scientific co-founder Parag Mallick is not just an accomplished professor and proteomics researcher at Stanford but also a professionally trained magician. His diverse expertise enabled him to conceive of the foundational thinking that would become the core of the Nautilus proteomics platform.
Parag set out to create the Nautilus Proteome Analysis Platform after being consistently frustrated by the status quo. Traditional and emerging proteomics technologies like mass spectrometry, affinity arrays, and protein sequencing have shown that the human proteome is highly dynamic and have enabled fundamental advances in our understanding of cell biology in both health and disease. However, these proteomic profiling techniques are ill-equipped to comprehensively assess the proteome for a few key reasons:
- Techniques like mass spectrometry are too insensitive to measure the majority of the proteome. In the best case, routine profiling covers only ~30% of the proteome. This makes it very difficult to conduct comprehensive protein analysis and compare across samples, as researchers may not even be able to measure the same subset of proteins in each. These techniques also typically cannot easily measure modified proteins and proteoforms.
- Current proteomics technologies are too low-throughput to meet researchers’ demands. For instance, clinical researchers may want to look at the proteomes of thousands of tumor samples to assess proteomic differences amongst patients. This could take months with current technologies.
- Current proteomics technologies have low dynamic range. Signals from more common proteins often drown out signals from other proteins, making low-abundance proteins difficult to detect. Even though they are not abundant, these proteins can still have large impacts on biological function and need to be measured.
- Many proteome profiling approaches are difficult to use. It can take a lot of time for even a highly trained postdoctoral researcher to learn to use current proteomics technologies.
Human proteomics: Conjuring a new approach
Frustrated by these issues, Parag set out to find ways to solve his proteomics conundrum. As a scientist who firmly values the deep understanding that comes from building upon past work, Parag looked for ways he could improve traditional proteomics technologies. He racked his brain and considered many options but could not conceptually tweak mass spectrometry or other technologies enough to solve the problems with proteomics research.
Feeling as if he had hit a wall, Parag decided to clear his head. He got out of the lab and went on a road trip. Separated from the trappings of the academic world, Parag’s magician mind took hold: He knew that people can be misled by the obvious or traditional. Indeed, these distractions make magic tricks possible. By putting on his magician’s hat, he could set obvious solutions aside.
Focusing on traditional proteomics research techniques made it seem like radical advances in proteomic sensitivity, throughput, and accessibility were impossible. So, like a good magician, Parag stepped away from the obvious and conceptualized an entirely new way of exploring the proteome. After returning from his road trip, he had an epiphany involving the use of relatively non-specific protein binding reagents (multi-affinity probes) that would become the theoretical basis of the Nautilus platform.
In brief, the technology behind the Nautilus platform isolates individual proteins on billions of landing pads on a nano-fabricated array — one protein per pad. This is a huge shift away from protein microarrays of the past. By repeatedly flowing reagents (multi-affinity probes) that bind to short stretches of amino acids found in many proteins and then computationally assessing their patterns of protein binding, the platform is designed to systematically identify the proteins on each landing pad. Identifications are based on predicted binding patterns developed from known protein sequences and machine learning algorithms. By simply counting up how many times a protein is identified on this hyper-dense single-molecule array, the platform is designed to determine protein abundance across a wide dynamic range, offering an unprecedented look at the proteome of a sample. The user will ultimately get a list of identified proteins and their abundances.
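To make the decoding idea above concrete, here is a deliberately simplified toy sketch in Python. It is purely illustrative, not Nautilus' actual algorithm: the protein names, epitopes, and probes are all hypothetical, each probe is assumed to bind one short amino-acid motif found in several proteins, and identification is reduced to exact matching of a pad's observed binding pattern against patterns predicted from known sequences. Abundance is then just a count of identifications across pads.

```python
# Toy illustration of multi-affinity-probe decoding (hypothetical data,
# not the actual Nautilus platform algorithm).
from collections import Counter

# Hypothetical proteins and the short epitopes each one contains.
PROTEINS = {
    "ALB":  {"AAQ", "KLV", "CPF"},
    "TP53": {"KLV", "SSS", "PLS"},
    "EGFR": {"CPF", "SSS", "GGT"},
}

# Each multi-affinity probe recognizes one short epitope shared by many proteins.
PROBES = ["AAQ", "KLV", "CPF", "SSS", "PLS", "GGT"]

def predicted_pattern(protein):
    """Binding pattern expected for a protein: True where a probe should bind."""
    epitopes = PROTEINS[protein]
    return tuple(probe in epitopes for probe in PROBES)

def identify(observed):
    """Return the protein whose predicted pattern matches the observed one."""
    for protein in PROTEINS:
        if predicted_pattern(protein) == observed:
            return protein
    return None  # pattern is ambiguous or unknown

def quantify(pad_observations):
    """Count identifications across landing pads to estimate abundances."""
    counts = Counter()
    for observed in pad_observations:
        protein = identify(observed)
        if protein is not None:
            counts[protein] += 1
    return counts

# Simulate an array where each landing pad holds one protein molecule.
pads = ["ALB", "ALB", "TP53", "EGFR", "ALB"]
observations = [predicted_pattern(p) for p in pads]
print(quantify(observations))  # Counter({'ALB': 3, 'TP53': 1, 'EGFR': 1})
```

In practice the problem is far harder than exact matching: binding is stochastic and noisy, which is why the real platform relies on probabilistic, machine-learning-based decoding rather than the lookup shown here.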
This is nothing like current proteomics technologies, which generally identify pieces of proteins (peptides) in aggregate, destroy the proteins in the process, and have poor dynamic range. According to calculations based on the size of our arrays, the known sequences of proteins in the human proteome, and the design of our affinity reagents, our single molecule analysis platform is expected to be able to quantify more than 95% of the proteome from a human cell or body fluid sample.
Our current proteomic profiling prototype, developed in the five years since Parag’s epiphany, is remarkably similar to his original vision. His cross-disciplinary thinking and willingness to look beyond the traditional set the foundations for what Nautilus is today.
In the next post in this series, Nautilus: Driving a revolution in proteomic analysis (Part 2), we’ll discuss how interdisciplinary thinking allowed us to achieve this growth.