New technology, old technology

Day 2, and we’re starting to sketch out ideas and directions. Loads of ideas and thoughts, but we’re still very much in the divergent phase.

The wireless link in this venue is so flakey that we’ve ended up printing out all 65 comments to the blog and sticking them to the wall.

[Photo: Blog corner]

17 Responses to “New technology, old technology”


  1. Phillip Huggan January 9, 2007 at 5:49 pm

    An event designed to brainstorm new research proposals that will unleash new technology infrastructures is held where there is no wireless connection…

  2. NanoEnthusiast January 9, 2007 at 6:21 pm

    Judging by the previous picture, might stone walls be the culprit?

  3. Richard Jones January 9, 2007 at 7:49 pm

    I blame the continuous rain, myself.

  4. Phillip Huggan January 9, 2007 at 9:32 pm

    Don’t worry, two decades from now it will be snow instead and the reception will be improved.

  5. Martin G. Smith January 9, 2007 at 11:55 pm

    I suggest it matters not that the wireless connection was/is wonky. If what is hoped to appear actually does appear, we will all be better for it. For myself, I don’t need instant response even though I have a T3 line terminating at a Tent Squat next to a Surf Beach.
    We wait in anticipation of the good things to come and hope the adventure does not get too bogged down in Process.

  6. Tom Craver January 10, 2007 at 1:20 am

    The sequence, to develop a “true” nanofactory, would seem to have to follow a path rather like:

    1. Deterministic Chemistry (DC) methods are developed

    2. Devices are built and refined to do DC materials science

    3. Devices become good enough to begin making “nano-toys” – as happened with MEMS

    4. Simple but useful nanoscale devices are made

    5. Methods improve, complexity of nanoscale devices increases

    6. Nanoscale DC devices supplement macroscale DC devices

    7. Capabilities/complexity of nanoscale DC devices increases

    8. “Production-to-design” nanofactory, able to self-copy

    Mechanochemistry has barely started down the path.
    Synthetic biology seems to be at stage 3, approaching 4.

  7. Martin G. Smith January 10, 2007 at 1:44 am

    Tom Craver has made a good start, albeit only a start. More, and then more interesting, paths will come as things develop.

  8. Phillip Huggan January 10, 2007 at 2:24 am

    “(T.Craver wrote:)
    Synthetic biology seems to be at stage 3, approaching 4.”

    I would say synthetic biology has reached stage 4 (useful nanoscale devices), but is not capable of stage 2 (building deterministic chemistry materials science tools). IMO, deterministic chemistry (chemical bonding via mechanical positioning of a reactive moiety) requires UHV or maybe an inert gas. It is my understanding that most synthetic biology is solution-phase, or are areas like engineering clamshell simulants considered to be synthetic biology too?

  9. davidbott January 10, 2007 at 7:41 am

    What Richard didn’t mention is that the picture he posted is of the “blog wall”. We have extracted all the challenge responses as a way to expand our thinking. Yesterday, we came to realise that to move beyond the conceptual phase of the ideas factory, we need to “reduce to practice” – at least in theory. Depending on the end application, we will probably need to compile different types of matter, and that might require a different type of compiler. The way you would assemble an inorganic material (say for a high temperature application) is probably different from how you would assemble a polymeric/biological material (say for a medical application). Although we all brought ideas for such applications with us, the addition of all the blog ideas has given us a wider palette to consider.

    David Bott

  10. Phillip Huggan January 10, 2007 at 8:27 am

    There is much to be gained from interdisciplinary study, but at the very least some basic reaction site classifications need to be sectioned.
    Building up a precision covalent lattice in vacuum will be achieved differently from channeling ions in solution.

    Maybe the best way to classify desired site-specific reaction categories is by the actuator/transducer blueprint of the (desired and hopefully someday realized) nanoproduct? Ion flows and membrane chemistry versus creating an “artificial” diode atom-by-atom. I guess the heart of this classification scheme would be in whether you are trying to replace mitochondria (or whatever powers our cells), or trying to replace PZT (piezoceramic crystal that transduces SPMs and sound systems).

  11. Chris Phoenix January 10, 2007 at 9:14 am

    Sorry to be a nag, but in response to David’s note, it seems worth repeating the comment someone made a few days ago: A useful question might be whether the matter compiler can build matter compiler components. This answer will be very different for each different type of product. And the answer may be non-obvious, to the point that this may not be a useful question to ask in early stages. But if you want the system to scale up to macro-scale products, the matter compiler had better be either extremely inexpensive (which probably means extremely limited), or else largely self-building.

    For example, a proximal-probe additive-chemistry system might want to build probe tips, or perhaps tip arrays or actuators. If it’s a biopolymer-directed-assembly system, can it build biopolymer-building catalysts or molecular selectors (perhaps analogous to the tRNA binder in ribosomes) or assembly-directors?

    Coming at it from another direction, what small physical systems can build NEMS? What small physical systems can catalyze chemistry selectively and switchably? Ask these questions–then turn them around: what systems can NEMS be used to build? What can switchable chemistry be used to build? (Ned Seeman is building a programmable polymer factory.) If you can find a circuit here in any combination of systems-that-build-things-that-build-*the same* systems, then you might have a concept for a scalable matter compiler.
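
    One way to make that “circuit” test concrete is to treat “X can build Y” as an edge in a directed graph and ask whether any chain of builders leads back to where it started. A minimal sketch in Python; the systems and capabilities listed are hypothetical placeholders, not claims about any particular technique:

```python
# A toy "can-build" graph: edges mean "this system can make that system's parts".
# The systems and capabilities below are hypothetical placeholders.
from collections import defaultdict

def has_build_circuit(edges, start):
    """Depth-first search for a chain of systems-that-build-systems
    that eventually rebuilds the system you started from."""
    graph = defaultdict(list)
    for maker, product in edges:
        graph[maker].append(product)

    def dfs(node, visited):
        for nxt in graph[node]:
            if nxt == start:
                return True
            if nxt not in visited and dfs(nxt, visited | {nxt}):
                return True
        return False

    return dfs(start, {start})

# Hypothetical example: a probe array builds tips, tips (plus chemistry) build
# NEMS actuators, and the actuators can be assembled back into probe arrays.
edges = [
    ("probe_array", "probe_tips"),
    ("probe_tips", "NEMS_actuators"),
    ("NEMS_actuators", "probe_array"),
    ("probe_tips", "passive_parts"),   # a dead end, not part of any circuit
]

print(has_build_circuit(edges, "probe_array"))  # True -> a candidate for scaling up
```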

    Chris

  12. Brian Wang January 10, 2007 at 2:13 pm

    Relating what Chris is saying to the ranking survey:

    Which methods have a clearly defined pathway of improvement towards being molecularly precise?

    Which methods have a clearly defined pathway to reducing the size of the building system?

    Which methods can scale to different volumes of molecularly precise material per hour: nanograms, micrograms, milligrams, grams, kilograms, tons, etc.?
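
    A back-of-the-envelope sketch of why the last question bites: a gram of a light element contains on the order of 10^22 atoms, so the parallelism required grows enormously with the target mass per hour. The per-site placement rate below is an arbitrary assumption, chosen only for illustration:

```python
# Rough throughput arithmetic for molecularly precise construction.
# Carbon is used as a stand-in material; the deposition rate is an assumption.
AVOGADRO = 6.022e23
ATOMIC_MASS_C = 12.0                      # g/mol

atoms_per_gram = AVOGADRO / ATOMIC_MASS_C  # ~5e22 atoms in one gram of carbon
rate_per_site = 1e6                        # assumed atoms placed per second per tip/site

for mass_g, label in [(1e-9, "nanogram"), (1e-3, "milligram"), (1.0, "gram"), (1e3, "kilogram")]:
    atoms = atoms_per_gram * mass_g
    sites_needed = atoms / (rate_per_site * 3600)   # parallel sites to finish in one hour
    print(f"{label:>9}: {atoms:.1e} atoms; ~{sites_needed:.1e} parallel sites for 1 hour")
```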

  13. Tom Craver January 10, 2007 at 3:58 pm

    Phillip – I was trying for a term (Deterministic Chemistry) that contrasts with conventional chemistry that relies on random interactions of molecules and uses tricks to statistically bias the products.

    It might be mechanochemistry, but I’d also include adding snips of DNA to an organism to make it produce a specific protein.

  14. Tom Craver January 10, 2007 at 4:07 pm

    Phillip – also I want to clarify that in my stages 2&3, “devices” implicitly meant devices or systems for reliably producing objects (molecules, crystals, whatever) at the nanoscale – not nanoscale devices.

    My understanding is that synthetic biology does have that capability, i.e. has passed stage 2.

  15. Chris Phoenix January 10, 2007 at 5:51 pm

    Tom, I’m not sure there’s a sharp dividing line between deterministic and conventional chemistry.

    Some of the reaction conditions–effective concentration, for example–can vary over a dozen orders of magnitude. That can result in similar variations in reaction rate and yield. It can make two chemical systems look very dissimilar. But there’s probably a third chemical system halfway between them.

    Mechanically constrained chemical systems can attain reaction conditions that conventional chemistry can’t access. Effective concentration of 10^9 per nm^3. Pressures of 500 GPa. Direct mechanical site selectivity (which is still statistical, thanks to thermal noise). But even mechanosynthesis doesn’t guarantee zero error rate. Just low enough to allow us to speculate about 10^15 step synthetic procedures.

    Consider this range of techniques:

    1) You have a reaction that’s too slow. You attach complementary DNA strands to each reactant so that the effective concentration is increased when the strands bind.

    2) You have a molecule or particle attached to an AFM tip with a ten-base DNA link. You use the tip to bring it to a selected site among identical sites in a DNA lattice. The constrained molecule attaches to the site with a 20-base link. You pull the tip away.

    3) You cover a building block with zinc finger proteins in pairs. (Speculation ahead:) Two fingers might hold a zinc, but four fingers per zinc may be happier. So the block surface may be covered with zinc, giving the block a charge that resists aggregation. Mechanically pressing a block into a similarly covered surface (perhaps built of pre-deposited blocks) squeezes out extra zinc and forms a strong bond; without pressure it may take orders of magnitude longer for a block to bond to unwanted locations. So you immerse a scanning tip, charged so as to attract blocks from solution, in a pool of liquid covering the surface and containing a solution of blocks. You can load up and deposit blocks repeatedly in selected locations without having to move the tip out of the area.

    4) You use a strong covalent joining system between more rigid blocks. The rigidity constrains several reactions to happen simultaneously, and the reaction energy barrier is sufficiently high that the error rate from unwanted joining is low. You can again use blocks in solution, or you can load the tip elsewhere. Depending on the bond pattern, you can either constrain the block to bond only at lattice points a nm apart or more, or at overlapping positions with spacing smaller than a block width.

    5) Like 4, except the “block” is a very reactive small species, and must be bound to the tip elsewhere to prevent it from reacting with the surface all by itself.

    So there’s a continuum from a synthetic chemistry trick all the way to Drexler-style mechanosynthesis.

    Chris

  16. Tom Craver January 10, 2007 at 10:33 pm

    Chris:

    By calling it “Deterministic chemistry”, I’m trying not to specify too much about the methods, but more to specify how one would design a process to achieve some end product.

    In more conventional chemistry, my perception is that there is a huge range of known paths to a vast range of known products, and a chemist carefully designs combinations of those ‘recipes’ (and invents new ones from fundamental principles or by analogy) to bring together intermediate products to interact and ultimately yield the final product. While it is carefully planned, novel combinations of products can still result in unexpected and undesired interactions. The more complex the final product, the more likely such interactions become. Often the chemist can modify a failed step to make it work – but sometimes will have to give up on a particular approach and come at it from a different direction.

    With DC, one would expect each simple step to proceed with nearly 100% certainty of correct completion in nearly all circumstances, so that one can plan a long sequence of simple additive steps to arrive at the product, and – if one has planned it logically – reasonably expect to arrive at a desired (though novel) product.
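
    The arithmetic behind that expectation is unforgiving: if each step succeeds with probability p, an n-step sequence completes correctly with probability p^n, so the tolerable per-step error shrinks in proportion to the length of the plan. A minimal sketch (the 50% target overall yield is just an illustrative choice; the 10^15 figure echoes Chris’s earlier comment):

```python
import math

# If every step succeeds with probability 1 - error, an n-step build finishes
# correctly with probability (1 - error)**n ~ exp(-error * n).  To finish with
# probability `target`, the per-step error must satisfy roughly
# error <= -ln(target) / n  (for small error).
def max_error_per_step(n_steps, target_yield=0.5):
    return -math.log(target_yield) / n_steps

for n in (1e2, 1e6, 1e9, 1e15):
    print(f"{n:.0e} steps -> per-step error below ~{max_error_per_step(n):.1e}")
```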

  17. Chris Phoenix January 11, 2007 at 1:25 am

    Tom, now I understand, and I agree completely. And you indirectly raise a point I hadn’t raised yesterday, when I was talking about constraining the number of reactions in order to select the few that have the highest yield and thus increase the number of steps.

    What you’re talking about addresses the other half: not reliability (error rate) but predictability; not the reaction but the product. It relies on the reaction not “seeing” the increasing intricacy or complexity of the molecule thus-far built. Of course this isn’t absolute. Diamond deposition pathways probably will change as they get near an edge or other feature; conversely, there are certain features of organic molecules that don’t affect certain reactions. But it does seem to constrain the product/reaction combinations.

    One way to accomplish this is to find a set of reactions such that whatever products they produce won’t further constrain or modify the reaction set. Make a product that’s unchanging from the point of view of the reactants, even as it grows. The alternative is to learn to predict a variable product. But my impression is that computational chemistry isn’t nearly ready for that yet–if it were, mechanosynthesis simulations would be taken more seriously.

    This discussion points–again–to the utility of mechanical constraint of molecules, because that allows competing reaction sites to be ignored. Of course adding to a linear polymer, as in DNA synthesis, doesn’t increase the number of reaction sites.

    Our conversation so far has focused on covalent chemistry, which points to an alternative. DNA hybridization has a large number of very predictable “reactions” that can select among a thousand “reaction sites” yet work under essentially identical conditions. There are probably other coded-recognition schemes out there waiting to be found.
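
    For a rough sense of the address space coded recognition offers: with four bases, tags of length n give 4^n distinct sequences, so even five-base tags can in principle distinguish the thousand sites mentioned above. The sketch below reduces hybridization to exact Watson-Crick complementarity, ignoring partial matches, GC content and temperature:

```python
# Count addresses available from n-base DNA tags, and check complementarity.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def hybridizes(tag, probe):
    """Idealized test: a probe binds only the exact reverse complement of a tag."""
    return probe == reverse_complement(tag)

for n in (4, 5, 10, 20):
    print(f"{n}-base tags: {4**n} distinct addresses")

print(hybridizes("ACGTT", "AACGT"))  # True: AACGT is the reverse complement of ACGTT
```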

    Another alternative is indicated by the fact that this discussion assumes many steps of adding small things to big and growing products. What if the product is built by adding a few similar-sized pieces together at each step? This involves far fewer size levels. And there may be only a few ways for the reaction to happen at each step, greatly reducing the complexity of analyzing it. And you can still have a very large product set.

    I think this might be interesting enough to explore.

    So suppose you want to build a billion-atom product. Log2(1E9) ~ 30. Of course, it’d be impractical to have a million vials containing thousand-atom components, or a thousand vials containing million-atom components. And you have a second reason to want to reduce the number of components at intermediate levels: you know that whatever you build on the left side has to bond with whatever you build on the right side–so you want to keep the set of possible sub-blocks easily characterized–and this can be achieved by recombining a few well-understood blocks at each level.

    So you start with two blocks, call them A and B, of 16 atoms each. You can make them into AA, AB, BA, or BB, but let’s say you are going for a minimalist approach, so you only pick two: say, AA and AB. Then, the next level up, you can build any of the four:
    AA AA AB AB
    AA AB AA AB
    and again you pick two. You’re choosing among six alternatives per level, at each of 26 levels. By the time you get to a billion atoms, you’ve built one of 6^26 = 2E20 molecules, and you’ve built only 52 molecules, and you’ve only had to analyze A and B joining to each other.

    Suppose you still have only A and B to work with, but you pick three products at each level. For the first level you’re selecting three of four choices. But for the second and higher levels, you’re selecting three of nine: 84 choices, if I remember my combinatorics. Now you’ve got 4×84^25=5E48 molecules to choose from, building 26 more molecules and using no extra analysis.
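
    Those counts are easy to check. The short sketch below reproduces the arithmetic of the two schemes (keeping two blocks per level versus three), assuming 16-atom starting blocks that double in size at each of 26 levels, as above:

```python
from math import comb

LEVELS = 26                      # 16-atom blocks doubled 26 times reach ~1e9 atoms
print(f"final size: {16 * 2**LEVELS:.1e} atoms")

# Scheme 1: keep two blocks per level.  Each level offers C(4, 2) = 6 ways to
# pick which two of the four possible pairings of the current blocks to keep.
designs_2 = comb(4, 2) ** LEVELS
print(f"2 per level: {designs_2:.1e} possible designs, {2 * LEVELS} molecules built")

# Scheme 2: keep three blocks per level.  First level: choose 3 of the 4
# pairings of A and B; each later level: choose 3 of the 3*3 = 9 pairings.
designs_3 = comb(4, 3) * comb(9, 3) ** (LEVELS - 1)
print(f"3 per level: {designs_3:.1e} possible designs, {3 * LEVELS} molecules built")
```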

    There are at least two other ways to make this more interesting, besides adding choices at each level. One, of course, is to add starter blocks C, D, and E to the mix. The other, which may be easier to analyze and almost as rich, is to choose A and B such that short spatial sequences of them have different properties once assembled, but they don’t interfere with their neighbors while being joined–so you only have to analyze the joining between pairwise blocks, rather than the joining between short sequences of blocks. If the “different properties” are chemical, this could be difficult. But back to DNA again–adjacent bases don’t have much effect on each other’s pairing, but restriction enzymes can read the sequence and act on it. I could imagine making a cube of DNA, then letting enzymes eat complicated channels in it, or mixing in short complementary strands with other molecules attached.

    Possible complications:
    How do you line up the blocks when you join them?
    Do big blocks diffuse slowly enough to require mechanical assist?

    The next question is what those big molecules will be good for. That would really depend on what’s in the building blocks and how they’re joined. General capabilities that might be useful:

    1) A general-purpose block joining mechanism that could be separated from the block’s interior structure and function.
    2) Rigid structure of the final molecule. (Probably necessary anyway so that the faces of large sub-blocks line up correctly when joining.)
    3) Some way to make holes–sacrificial material (perhaps photo-lysed?)
    4) A way to conduct electricity from block to block

    And so on…

    Chris

