Combining MAGE and synthetic metagenomics for bio-manufacturing

There have been a couple of interesting papers out recently that, to my mind, point toward where synthetic biology is going.

The first, by Travis and co. from Chris’s lab, combined available genomic data with gene synthesis to make and screen a library of methyl halide transferases. Methyl halide transferases convert S-adenosyl methionine (SAM – a metabolite that is nearly ubiquitous across organisms) to methyl halides. It turns out the oil industry already knows how to go from methyl halides to things like gasoline. Their approach is pretty cool: take a known methyl halide transferase, BLAST it against GenBank, and find all the homologs. They synthesized enzymes with as little as 18% sequence identity to the query. Then screen them all for function to find good candidates for a biosynthetic pathway.
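The homolog-filtering step can be sketched roughly like this. This is a minimal illustration, not the authors’ actual pipeline: the helper names are hypothetical, the identity calculation assumes the candidate sequences have already been aligned to the query, and the 18% cutoff is the figure mentioned above.

```python
# Sketch: given candidates already aligned to a known methyl halide
# transferase, keep anything down to ~18% identity for synthesis and
# screening. Hypothetical helpers, not the authors' code.

def percent_identity(query: str, hit: str) -> float:
    """Percent identity between two equal-length aligned sequences
    (gap characters '-' count as mismatches)."""
    assert len(query) == len(hit)
    matches = sum(1 for q, h in zip(query, hit)
                  if q == h and q != '-')
    return 100.0 * matches / len(query)

def select_candidates(query: str, hits: dict[str, str],
                      min_identity: float = 18.0) -> list[str]:
    """Names of hits at or above the identity cutoff."""
    return [name for name, seq in hits.items()
            if percent_identity(query, seq) >= min_identity]

if __name__ == "__main__":
    query = "MKTAYIAKQR"          # toy query sequence
    hits = {
        "close_homolog":   "MKTAYIAKQR",   # identical
        "distant_homolog": "MK-AYLVKQ-",   # partial identity
        "unrelated":       "GGGGGGGGGG",   # no identity
    }
    print(select_candidates(query, hits))
```

The point of the low cutoff is exactly what makes the approach interesting: gene synthesis lets you test distant homologs that would never come out of a PCR-based screen.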

The second, by Harris and co. from George’s lab, describes the automation of whole-genome editing in E. coli. (Note that the technique Harris et al. use – electroporating oligos into E. coli to make point mutations – has been published previously; by automating the process, they made it practical to make mutations genome-wide.) Harris even uses the MAGE machine to optimize the production of lycopene in cells.

So what’s the obvious path forward? Combine the two. Find the best heterologous genes for your biosynthetic pathway of choice via synthetic metagenomics and then use MAGE to tune up the chassis and pathway to optimize production. Technologies like these are what separate synthetic biology from classic genetic engineering.
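The “tune up” step in MAGE works by electroporating pools of oligos carrying degenerate bases at the positions you want to vary. A tiny sketch of what one such pool encodes (hypothetical helper names; the base codes are the standard IUPAC degenerate nucleotide codes):

```python
# Sketch: expand an IUPAC-degenerate oligo into every concrete variant
# it encodes, e.g. for a degenerate ribosome-binding-site library.
# Illustrative only -- not the MAGE software itself.
from itertools import product

# Standard IUPAC degenerate nucleotide codes (subset shown)
IUPAC = {
    'A': 'A', 'C': 'C', 'G': 'G', 'T': 'T',
    'R': 'AG', 'Y': 'CT', 'S': 'GC', 'W': 'AT',
    'K': 'GT', 'M': 'AC', 'N': 'ACGT',
}

def expand(oligo: str) -> list[str]:
    """All concrete sequences encoded by a degenerate oligo."""
    return [''.join(bases) for bases in
            product(*(IUPAC[b] for b in oligo))]

if __name__ == "__main__":
    # One degenerate (N) position in an RBS-like oligo -> 4 variants
    variants = expand("AGGNGG")
    print(len(variants), variants)
```

Each N position multiplies the library by four, so a handful of degenerate positions already gives you thousands of variants per electroporation – which is where the screening problem discussed in the comments below comes from.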

Why then, you might ask, do we need genetic parts given synthetic metagenomics and MAGE? I’ll cover that in another post. 🙂

P.S. Unfortunately both articles are paid subscription access only. If you don’t have access, you might try visiting your local university library or nicely asking the corresponding authors to post a preprint on their lab website and/or send you a copy.

Posted By: Reshma Shetty

  1. If it helps, I’ve written up a description of MAGE here: http://blog.thebiomachine.com/2009/07/harvard-hits-the-fast-forward.html

    The interesting thing about MAGE is the way it combines a form of directed evolution (via degenerate sequences) with targeted mutations. It also neatly complements large-scale gene synthesis by allowing edits to be made throughout a genome more cheaply, as long as the total fraction of changed bases stays low.

    One thing I think MAGE offers genetic part-based design is the ability to tune up RBS and promoter sites after a module has been assembled. If a promoter or RBS is not quite working out, use the degenerate replacements to find better ‘matches’.

  2. What you’re missing is the screening strategy. MAGE can generate mutants, but you still need to pick out the best ones. The Voigt lab synthesized ~80 enzymes, and could characterize each one individually. The Church lab constructed millions of mutants and screened (by eye, based on color) ~10,000 of those. Neither approach is particularly scalable/generalizable.

    1. I agree. With better DNA construction technologies like DNA synthesis, MAGE, and DNA assembly, the bottleneck in the pipeline moves from construction to screening and testing. High-throughput culture growth followed by GC analysis is a doable but imperfect solution. If only there were some lab working on developing in vivo sensors of metabolites that could be connected to a reporter system to enable visual screening of colonies for any compound of interest. 😉
