Biotech’s Cambrian Era

Announcing BioCoder, an insider's review of DIY biotech

As I write this article, I’m reflecting on the long expanses of otherworldly playa I’ve just left, watching sandstorms pass in front of me while in altered mental states and contemplating the future of our beloved biotech industry.

I have, until recently, been living a double life, with one foot in the corporate biotech world and the other deep in the world of biohacking and radical science (working on DIY biolabs and equipment, longevity research, and ALS therapeutic development). I believe in the principles of citizen science and shared (or at least leaky) IP as a means of accelerating scientific progress, but I felt I needed to play my part in the “real” biotech industry. That changed three months ago, when I realized that to create the innovation we want in biotech, we may have to burn the bridges that got us here and re-create it ourselves, with or without the dinosaur that the current biotech industry has become.

Read more…


DNA: The perfect backup medium

DNA storage could change the way we store and archive information.

It wasn’t enough for Dr. George Church to help Walter Gilbert develop DNA sequencing 30 years ago, create the foundations for genomics, launch the Personal Genome Project, drive down the cost of sequencing, and start humanity down the road of synthetic biology. No, that wasn’t enough.

He and his team decided to publish an easily understood scientific paper (“Next-Generation Digital Information Storage in DNA”) that promises to change the way we store and archive information. While this technology may take years to perfect, it provides a roadmap toward an energy-efficient, archival storage medium with a host of built-in advantages.

The paper demonstrates the feasibility of using DNA as a storage medium with a theoretical capacity of 455 exabytes per gram. (An exabyte is 1 million terabytes.) Now, before you throw away your massive RAID 5 cluster and purchase a series of sequencing machines, know that DNA storage appears to be a very high-latency medium. Also know that Church, Yuan Gao, and Sriram Kosuri are not yet writing 455 exabytes of data; they’ve started with the more modest goal of writing Church’s recent book on genomics to a 5.27-megabit “bitstream.” Here’s an excerpt from the paper:

We converted an html-coded draft of a book that included 53,426 words, 11 JPG images and 1 JavaScript program into a 5.27 megabit bitstream. We then encoded these bits onto 54,898 159nt oligonucleotides (oligos) each encoding a 96-bit data block (96nt), a 19-bit address specifying the location of the data block in the bit stream (19nt), and flanking 22nt common sequences for amplification and sequencing. The oligo library was synthesized by ink-jet printed, high-fidelity DNA microchips. To read the encoded book, we amplified the library by limited-cycle PCR and then sequenced on a single lane of an Illumina HiSeq.

If you know anything about filesystems, this is an amazing paragraph. They’ve essentially defined a new standard for filesystem inodes on DNA: each 96-bit data block carries a 19-bit address descriptor. To read the bitstream back, they amplify the library using the polymerase chain reaction (PCR) and then sequence it. This is important because it means that reading the information involves generating millions of copies of the data in a format that has proven durable: this biological “backup system” has replication built in. Not only that, but the replication process comes with billions of years of reliability data behind it.
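To make the addressing scheme from the excerpt concrete, here is a minimal Python sketch of encoding a bitstream into 159-nt oligos (22-nt primer + 19-bit address + 96-bit data block + 22-nt primer) and decoding it back. The one-bit-per-base mapping (0 → A or C, 1 → G or T, chosen randomly to break up homopolymer runs) follows the paper’s general approach, but the exact mapping, bit ordering, and the all-A/all-T primer placeholders here are illustrative assumptions, not the authors’ actual sequences.

```python
import random

DATA_BITS = 96   # data block length, per the paper's excerpt
ADDR_BITS = 19   # address length, per the paper's excerpt
# Hypothetical 22-nt flanking primers -- placeholders, not the real sequences.
PRIMER_5 = "A" * 22
PRIMER_3 = "T" * 22

def bits_to_bases(bits, rng):
    """Encode one bit per nucleotide: 0 -> A or C, 1 -> G or T (randomized)."""
    return "".join(rng.choice("AC") if b == 0 else rng.choice("GT") for b in bits)

def encode(bitstream, seed=0):
    """Split a list of 0/1 bits into addressed 159-nt oligos."""
    rng = random.Random(seed)
    oligos = []
    for addr, start in enumerate(range(0, len(bitstream), DATA_BITS)):
        block = bitstream[start:start + DATA_BITS]
        block = block + [0] * (DATA_BITS - len(block))  # pad final block
        addr_bits = [(addr >> i) & 1 for i in range(ADDR_BITS - 1, -1, -1)]
        oligos.append(PRIMER_5 + bits_to_bases(addr_bits + block, rng) + PRIMER_3)
    return oligos

def decode(oligos):
    """Recover the bitstream: strip primers, sort blocks by address, concatenate."""
    base_to_bit = {"A": 0, "C": 0, "G": 1, "T": 1}
    blocks = {}
    for oligo in oligos:
        payload = [base_to_bit[b] for b in oligo[22:-22]]
        addr = int("".join(map(str, payload[:ADDR_BITS])), 2)
        blocks[addr] = payload[ADDR_BITS:]
    return [bit for addr in sorted(blocks) for bit in blocks[addr]]
```

Because each oligo carries its own address, the “file” can be reassembled from an unordered soup of sequencing reads, which is exactly why the inode analogy fits.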

Read more…
