The Matter Compiler

The final project to go forward from the Ideas Factory on the Software Control of Matter is based on theoretical chemistry/materials science and computer science, and we anticipate that it will link strongly to the experimental activities funded through the Ideas Factory. As with the two experimental projects, a few administrative hurdles need to be jumped before EPSRC funding can be confirmed.

An ambition to assemble molecules and materials under atomically precise control demands a big leap forward in control engineering and computer science. Is it possible to anticipate the properties and needs of a ‘nano-assembler’? If so, there is a need for a high level instruction language and a computer compiler that translates commands in this language into instructions for the ‘nano-assembler’. This development will require a breakthrough in our understanding of chemical synthesis, one that embraces the radically new ‘pick and place’ assembly methods now possible with scanning probe microscopy (SPM). The Matter Compiler project is thus both an exercise in foresight, to anticipate developments in this area, and a prototype implementation for the engineering control and computer science aspects of directed molecular assembly. Its inputs are data from collaborators’ SPM experiments, energy landscapes for ‘pick and place’ reactions, and the vast knowledge base of classical synthetic chemistry, including methodologies such as retrosynthesis. These will be supplemented by reaction schemes for ‘pick and place’ reactions deduced from first-principles quantum chemistry calculations, and by the technology of object-oriented databases and inference engines.
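
To make the compiler idea concrete, here is a minimal illustrative sketch in Python of the kind of pipeline such a tool might implement. Every name, data structure and number in it is hypothetical, invented for illustration rather than taken from the project’s plans:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Reaction:
        """A vetted 'pick and place' step: the tip that performs it, the
        moiety it deposits, and the barrier from first-principles calculations."""
        tip: str
        moiety: str
        barrier_ev: float  # activation barrier in eV

    # A hypothetical knowledge base of reactions validated by quantum chemistry.
    REACTION_LIBRARY = [
        Reaction(tip="Ge-radical", moiety="C2_dimer", barrier_ev=0.4),
        Reaction(tip="H-donor", moiety="H", barrier_ev=0.2),
    ]

    def compile_structure(sites):
        """Translate a target structure, a list of (x, y, z, moiety) tuples,
        into a linear program of tip instructions, retrosynthesis-style:
        each site is matched against the library of known reactions."""
        program = []
        for (x, y, z, moiety) in sites:
            candidates = [r for r in REACTION_LIBRARY if r.moiety == moiety]
            if not candidates:
                raise ValueError("no known reaction deposits " + moiety)
            best = min(candidates, key=lambda r: r.barrier_ev)  # easiest step
            program.append(("MOVE_TIP", x, y, z))
            program.append(("PLACE", best.tip, best.moiety))
        return program

    # A two-site 'hello world': deposit a carbon dimer, then passivate with H.
    for instruction in compile_structure([(0, 0, 0, "C2_dimer"), (0, 0, 1, "H")]):
        print(instruction)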

The team is led by Dr Harris Makatsoris (Engineering, Brunel University) and comprises Professor Malcolm Heggie (Chemistry, University of Sussex), Dr Nick Holliman (Computer Science, University of Durham), Dr Helen Wright (Computer Science, University of Hull) and Professor Jeremy Ramsden (Advanced Materials, Cranfield University).

18 Responses to “The Matter Compiler”


  1. 1 Phillip Huggan January 17, 2007 at 7:37 pm

    Is this proposal to develop software for use in an SPM, or is it to accurately model SPM tip geometries for DFT simulations?

    This is the last proposal? I thought there were two speculative projects.

  2. 2 Richard Jones January 17, 2007 at 8:29 pm

    Yes, this is the last one. There were two experimental projects, and this theory/computer science one. It’s got much more general goals than just making SPM software.

  3. 3 Martin G. Smith January 17, 2007 at 11:39 pm

    Bouncing across the floor, Jaz yells ‘What is RETROSYNTHESIS?’ A unanimous ‘Look it up!’ was the response. A short while later she was running a tutorial she had found on the University of East Anglia site.
    I have said this before and suggest it bears repeating: this process has created a momentum of thinking within one group that many others will not have thought of. The potential of the work that is to come is incredible, as are the benefits, if only in advancing the knowledge base. As for the possibilities which could/will come from this project, we anxiously await the Beta.

  4. 4 Kurt9 January 17, 2007 at 11:51 pm

    This is a software project, but it also appears to be one that could figure out the basic science underlying the tip-substrate interactions in SPM. Even if this does not work out at all, it should result in useful new analytical techniques for SPM, which could then be licensed to the existing SPM manufacturers.

  5. 5 NanoEnthusiast January 18, 2007 at 1:31 am

    I believe the two types of software, simulation and control, will be intertwined above and beyond testing different tips. If you have a structure that you want to build, there will be the question of how to build it in the most chemically stable manner. For example, if you wanted to build a line of carbon atoms using carbon dimers on an unterminated diamond substrate, you would likely use the method in Freitas’s paper of staggering reaction sites; placing the dimers one after the other was just too defect-prone. I wonder, how would you build up a layer at a time? (To the best of my knowledge that has not been determined.) Should you make lines and leave gaps between them to be filled later, i.e. interlace, or should you create a pattern more like a chess board and then fill in the empty, ‘white’ squares? I know diamond is not a simple cubic lattice but a tetrahedral one, but you get my drift.

    If I understand correctly, this project could answer these types of questions. It may be that the most, or only, stable, low-defect way to build up a desired structure will look rather convoluted and cumbersome compared to macro-scale manufacturing techniques. Ideally, you would want an SPM control program to handle all these defect-minimization strategies without your having to think about it. After enough DFT simulations, you would have a library of such strategies that can be mixed and matched, with at least some degree of confidence that the computer-recommended synthesis path will be the correct one; at least that’s my reading of this proposal.
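
    As a toy illustration of the chess-board strategy above, here is a sketch, in Python, of a two-pass deposition order on a simple square grid. It is purely hypothetical, not from the proposal, and ignores diamond’s real tetrahedral lattice:

        # Two-pass 'checkerboard' ordering on an n x n grid: place all the
        # (x + y)-even sites first, then fill in the odd ones, so no site is
        # ever deposited next to a just-placed, still-reactive neighbour.
        def checkerboard_order(n):
            sites = [(x, y) for x in range(n) for y in range(n)]
            first = [s for s in sites if (s[0] + s[1]) % 2 == 0]
            second = [s for s in sites if (s[0] + s[1]) % 2 == 1]
            return first + second

        order = checkerboard_order(4)
        # Within the first pass, no two sites are edge-adjacent, by parity:
        for a, b in zip(order, order[1:8]):
            assert abs(a[0] - b[0]) + abs(a[1] - b[1]) != 1
        print(order)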

  6. 6 Chris Phoenix January 18, 2007 at 5:32 am

    Wow, what a great encore to follow the two tech projects!

    In some ways, this is the most forward-thinking and general-purpose of the three projects. It should advance molecular manufacturing in at least three ways:

    1) By making explicit the preparation for pick-and-place chemistry, it should make it clear to everyone that that type of chemistry is now “on the table” as a general-purpose synthetic method worthy of research.

    2) By working to develop a theoretical understanding of that chemistry, it should resolve a lot of uncertainty about how close we are to being able to design nanomachines that will someday be buildable.

    3) When the hardware becomes available to build the designs, a body of familiar design software should speed the application of that hardware and broaden the scope of the products.

    4) Possibly, a sufficiently successful project could allow pre-design of nanomachines. If those designs were sufficiently impressive (e.g. high-performance motors or computers) and reliable, they could increase the desire for molecular manufacturing’s capabilities. We already have molecular mechanics simulations of planetary gears that handle >10 GW/cm^3 at 99.8% efficiency. But they have no synthetic pathway, so people feel free to ignore them.

    More generally, thinking of manufacturing:

    If you’re manufacturing something, you’re going to want to build a lot of copies. Whatever you build will have to do what you expect. For the foreseeable future, that means that you’ll be building a large number of identical copies. But the copies will only have to be designed once.

    (I can imagine a design system that propagates desired functions and properties downward, all the way to the atom level, so that the nanostructures and their synthetic pathways are designed on the fly, and each cubic nanometer is custom-designed and unique. But I think we are very far away from the time when such a system could be built, and I doubt it would ever be efficient.)

    The description sounded like the software would be integrated into the control system of a nanofactory. But I think this software will be used to generate recipes that will simply be copied into the nanofactory and carried out blindly. Even the CAD package that produces nanofactory blueprints may not invoke this software; instead, I think it will usually mix and match pre-supplied and pre-tested recipes. A product designer will want low-level designs that are guaranteed to work all the time exactly as specified. I don’t think this project can advance the state of the art of computational chemistry *that* much.
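
    As a minimal sketch of what such a pre-tested, blindly executed recipe might look like as static data, with every field name hypothetical:

        # A hypothetical pre-verified 'recipe': static data that a nanofactory
        # control system could stream to the hardware without doing any
        # chemistry at run time.
        RECIPE_DIAMOND_ROD = {
            "id": "rod_C_100_v1",
            "verified_by": "offline simulation plus physical test",
            "tolerances": {"placement_nm": 0.05, "max_defect_rate": 1e-6},
            "steps": [
                {"op": "MOVE_TIP", "xyz_nm": (0.0, 0.0, 0.0)},
                {"op": "PLACE", "tip": "Ge-radical", "moiety": "C2_dimer"},
                {"op": "MOVE_TIP", "xyz_nm": (0.25, 0.0, 0.0)},
                {"op": "PLACE", "tip": "Ge-radical", "moiety": "C2_dimer"},
            ],
        }

        def execute(recipe, hardware):
            """Blind playback: just stream the validated steps, in order."""
            for step in recipe["steps"]:
                hardware.do(step)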

    I can see a number of distinct but overlapping modes of using such software.

    1) Plan synthetic sequences with a known set of reactions. For example, given a volume to be filled with diamond, in what order should the atoms be deposited? This is perhaps the simplest version, requiring not much more than searching a solution space and verifying the result. (AI researchers will be scoffing, and rightly so, at my use of the word “simple.” It is not simple at all in the general case. But in special cases it can be straightforward.) This is a post-research application, useful for product design with mostly-understood reactions. (A toy sketch of this search-and-verify pattern follows the list below.)

    2) Predict the effects of tweaks within a known reaction family. For example, given observations and models of diamond deposition on a surface, compute what will happen when depositing near an edge. This is less a problem of search among options, and more a problem of interpolation and computation: model-building. It’s sort of a meta-problem: develop the skills and techniques to predict reactions correctly from increasingly scanty data. The applied version is to design and optimize reaction trajectories for specific reactions.

    3) Predict completely novel reactions. I don’t know enough about modern computational chemistry to say how difficult this would be. A special case of this might be predicting reactions between a known tool tip and a variable surface. I don’t know if that makes the problem any easier.
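
    As flagged under 1) above, here is a toy Python sketch of the search-and-verify pattern. The single ‘support’ rule, that a site can be placed only if the site beneath it is already present, stands in for a real reaction set; everything here is vastly simplified and hypothetical:

        # Greedy search for a buildable deposition order, then an independent
        # verification pass. Sites are (x, y, z) tuples; the toy rule is that
        # a site needs the site at z - 1 beneath it before it can be placed.
        def plan_order(target):
            placed, order, remaining = set(), [], set(target)
            while remaining:
                ready = {s for s in remaining
                         if s[2] == 0 or (s[0], s[1], s[2] - 1) in placed}
                if not ready:
                    return None  # no buildable sequence under this rule set
                s = min(ready)   # deterministic tie-break
                placed.add(s); order.append(s); remaining.remove(s)
            return order

        def verify(order):
            placed = set()
            for s in order:
                if s[2] > 0 and (s[0], s[1], s[2] - 1) not in placed:
                    return False
                placed.add(s)
            return True

        # A 2 x 1 x 2 block: the planner must lay the z = 0 layer down first.
        target = [(0, 0, 0), (1, 0, 0), (0, 0, 1), (1, 0, 1)]
        order = plan_order(target)
        assert order is not None and verify(order)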

    A very powerful system might combine #3 and #1 to plan synthetic pathways for novel covalent solid structures like those in the NanoEngineer Gallery.
    http://www.nanoengineer-1.com/mambo/index.php?option=com_docman&task=cat_view&gid=30
    Just input the structure, and out pops the recipe. That sounds like what this project is aiming at. But this would require massive amounts of CPU time, as well as very reliable comp chem results, and I’d be pretty surprised if this could be accomplished in five years.

    An advanced interim system might combine #1 (search) and #2 (interpolation) into an automated system. But the system would not be general-purpose, at least at first. The reaction set would be hard-won and somewhat idiosyncratic, and a designer would have to have a pretty good idea of what was buildable–the system would just find and verify the pathway. Capability #3 (prediction) would be a big help to researchers developing new reactions in response to the needs of desired structures that lacked known synthetic pathways.

    My impression, from a brief long-ago conversation with Drexler, is that he expects the fully automated design system to be available at some point. But I don’t remember if this was before or after a basic molecular manufacturing capability makes massive amounts of computer resources and massively parallel physical testing available.

    Chris

  7. 7 Chris Phoenix January 18, 2007 at 6:35 am

    So, now that all projects have been announced, I have an observation and several questions on a theme.

    The observation is that this event, far more than anything else I’m aware of, has a good chance of being judged by history as the “official” launch of molecular manufacturing. Almost two dozen leading scientists got together, spent a solid week thinking about molecular manufacturing, and…

    On the experimental side, they came up with two startlingly innovative projects, planning to achieve some very impressive and useful capabilities in just a few years. I suspect that most nanotechnologists, if shown the published summaries with no scientific backing, would have estimated the capabilities to be 20-40 years in the future.

    On the theoretical side, they came up with a project that effectively takes the position that scanning probe chemistry is going to be important as a general-purpose method of synthesis and manufacturing, and it’s already time to start designing the tools.

    My questions are on the theme of: How much more progress is ready to be made? I’d be interested in responses to any of the questions below from any of the participants…

    Could this have happened a year ago? (Sub-questions: If the proposals had been limited to using year-old technologies and knowledge, how much (if at all) would they have been diminished? If the same Ideas Factory with the same people and attitudes had been held in January 2006, so that the week’s discussion was limited to year-old capabilities and concepts, how much more would that have limited the results? And if someone had tried to do this a year ago, with the people and attitudes and knowledge then available, what would have happened?)

    If this would not have been so successful a year ago, then what changed in 2006? Can we expect it to continue, or was it a one-time change?

    If this would have been equally successful a year ago, doesn’t that imply there were several opportunities going unexploited for at least a year? How many more opportunities are there likely to be?

    In either case, can we draw any conclusions about the likely payoff of future Ideas Factories in related areas? Extrapolating forward, what would be the results of a similar event a year or two from now?

    And finally, combining the likely results of the Ideas Factory with the field’s overview you gained during the event, and factoring in whatever judgements you may have previously heard about molecular manufacturing and whatever error correction you’d now assign to those judgements… Can you give any estimate as to the date that a manufacturing system could make products incorporating grams of nanomachines that take advantage of a large fraction of theoretically available performance? (For what it’s worth, the most detailed and defensible estimates I’m aware of are still the ones in _Nanosystems_ chapter 1, available online at http://e-drexler.com/p/04/04/0417nanosystemsDesc.html)

    Thanks,
    Chris

    Ps. One additional question: Did the Ideas Factory spend any time on social or other implications of molecular manufacturing or nanoscale technologies? If so, is any of that work going to be available?

  8. 8 Richard Jones January 18, 2007 at 1:49 pm

    We did think about social implications of this stuff, particularly in the early part of the week (and the importance we attached to it was one reason for asking Jack Stilgoe, from Demos, to be one of the mentors). This proved very helpful, I think, in framing what sort of targets and outcomes we were aiming for in the projects, but I don’t think there was anything that would be useful outside the context of the ideas factory.

    As for timing, that’s a difficult question. There’s no doubt that everything going forward really does rest on the most recent results and capabilities developed in the participants’ labs and elsewhere, but since progress is always step-by-step it’s difficult to say that it had to be this year rather than last year or next year. I think there is more progress to be made, and we’ll certainly be thinking of ways to keep the pot boiling, as it were.

  9. 9 Hal January 18, 2007 at 7:40 pm

    I have to agree with Chris that on first reading, these proposals sound like technologies that are at least 20 years off! I can’t imagine what you guys were smoking at your get-together but it must have been good stuff. It will be truly amazing if these groups can deliver on their proposals. I can’t wait to see what happens…

  10. 10 Richard Jones January 18, 2007 at 8:48 pm

    Alas, Hal, the EPSRC budget ran to no more than a couple of glasses of inexpensive wine with dinner. It’s an explicit understanding in this kind of project that there’s a significant chance of failure; as I wrote above “We were looking for a grand vision – real ambition, of a kind that scientists are sometimes reluctant to commit to. But we needed to be sure that, on the very first day of the project, it was clear exactly what the newly starting scientists on the project would do. And, while it’s obvious that for a sufficiently big vision, one can’t expect the route from here to there to be fully mapped out at the outset, we need to be sure that there are no obviously unbridgeable chasms in the way. We know, and the funders know too, that there is a significant risk of failure, but that’s as it has to be.”

  11. 11 avlo January 18, 2007 at 8:49 pm

    “I would add that there is also a need to begin patenting the ideas that will make it possible to implement such a device. That will ensure that wealth is appropriately channeled from those who create such a device to those who have a vague idea of what such a device is, and a good lawyer.” – B. Bushman

  12. 12 Phillip Huggan January 18, 2007 at 10:46 pm

    IBM owns the patent for STMs that utilize a UHV (but not at STP). I’m glad IBM doesn’t choose to hire good lawyers to aggressively enforce this patent. All the non-IBM-initiated basic nanotech research undertaken with UHV STMs would not have occurred had IBM fortified their intellectual property.

  13. 13 Martin G. Smith January 18, 2007 at 11:14 pm

    Hal and Chris – I would suggest that success will be about as far in the future as the collective minds can reach. I recall being told 2 years ago that CAD-based metal casting machines were at least 10 years off. Yet time compressed, thinking expanded, and 2 years later we have two machines replicating parts with 0.001 mm accuracy.
    Remember the words of a 900 year old Master – ‘Do, or do not. There is no try.’

  14. 14 sean January 19, 2007 at 12:15 am

    So we’re talking about developing a molecular machine language here as part of the compiler development. I would call it assembly language, to stretch the CS terminology, but that takes on a double meaning here.

    One could imagine the abstraction of a higher-order language built on top of this, with concepts like macros, functions, procedure calls, loops, side effects etc. all translated to their matter-equivalents.
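
    As a toy sketch of that idea, with Python standing in for the eventual language and every primitive hypothetical, higher-order constructs might compile down to streams of placement operations like this:

        # 'Matter language' abstraction: a primitive op, a function built from
        # primitives, and a macro composed of functions, all compiling down to
        # a flat list of placement instructions.
        def place(x, y, z, moiety):
            return [("PLACE", x, y, z, moiety)]  # one 'machine instruction'

        def row(x0, y, z, length, moiety):
            ops = []
            for i in range(length):  # a loop, compiled to repeated primitives
                ops += place(x0 + i, y, z, moiety)
            return ops

        def slab(width, depth, z, moiety):
            ops = []
            for y in range(depth):  # a 'macro' composed of rows
                ops += row(0, y, z, width, moiety)
            return ops

        program = slab(width=3, depth=2, z=0, moiety="C")
        print(len(program), "primitive ops")  # 6 primitive ops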

    It will take some effort not to conflate the terminology and to define the primitives in a manner that maintains their flexibility while still providing useful abstractions.

    I wonder if you’d even want to share your first attempts with the world. The first working language of this type will immediately enable crafting a better and more elegant language. It may be worthwhile to keep it under wraps and release it only at the nth generation, where you’ve followed the evolution of computer languages and finally start to diverge. I bet this would happen very quickly once the first working prototype emerges.

    Neat stuff. I’ll be keeping an eye on this.

  15. 15 Chris Phoenix January 19, 2007 at 1:42 am

    Sean, the higher-order abstractions you’re talking about are one of the main reasons I expect molecular manufacturing to be so powerful. If you know exactly how something is built, and exactly how it will respond to a useful range of inputs, then you can combine it with other things to make modules with higher function. Then you know (in principle) exactly what the higher-level modules will do, so you can combine them, and so on, to build quite tall towers of functionality.

    A computer has perhaps two dozen levels of abstraction between the electrons and the screen. Each level is “simple”–that’s the point of it! So you can compare levels of nanomachine design to any level of computer design you like. I have gone so far as to compare billion-atom million-feature functional blocks to CPU machine language! (Which is about halfway between the electrons and the screen.)

    I suspect that, when we start designing and building gram-scale atom-precise products, designers will want at most a few thousand options at each level. That implies at least eight levels of abstraction between the atom and the gram (a gram of product contains roughly 10^22-10^23 atoms, so if each level aggregates on the order of a thousand components from the level below, it takes about eight such steps to span atom to gram). I think it’ll actually be 15-20. And product designers will usually want to think about only two or three levels at a time, though good designers will be able to mentally encompass four or five when necessary.

    Here’s an incomplete first draft:

    Level 1: Reaction trajectories, potential energy surfaces.
    2: Covalent structures: ball-and-stick diagrams.
    3: Surface and volume structures. (Overlaps with 2 and 4.)
    4: Lowest functional parts: gears, levers, wires…
    5: Gearboxes, logic gates
    6: Machines, circuits
    7: Machine systems (e.g. assembly line, simple CPU)
    8: Nanoblocks ~100 nm-1 micron
    9: Nanoblock surface/volume structures; moving interfaces
    10: Virtual materials (10-100 micron scale)
    11: Human-interacting material properties (texture, appearance)
    12: Detailed product form and function
    13: Large-scale product form and function
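
    (A toy sketch of how such a tower might be represented in design software, with every name and number hypothetical: each component exposes a verified interface and is composed only from components one level down.)

        from dataclasses import dataclass, field

        @dataclass
        class Component:
            name: str
            level: int        # e.g. 4 = lowest functional parts, 5 = gearboxes
            n_atoms: int = 0  # leaf size; composites sum over their parts
            parts: list = field(default_factory=list)

            def add(self, part):
                # A designer at level N sees only level N-1 interfaces.
                assert part.level == self.level - 1
                self.parts.append(part)
                return self

            def atom_count(self):
                if self.parts:
                    return sum(p.atom_count() for p in self.parts)
                return self.n_atoms

        gear = Component("gear", level=4, n_atoms=5000)
        lever = Component("lever", level=4, n_atoms=2000)
        gearbox = Component("gearbox", level=5).add(gear).add(gear).add(lever)
        print(gearbox.atom_count())  # 12000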

    Molecular manufacturing researchers will have to work from the bottom up in order to make these things possible. The software described here sounds like it’s intended to integrate levels 1-2. That will provide a solid foundation for designs at levels 3 and 4.

    Product designers will have to start at high levels. Fortunately, even a few options at each level is enough to build the next higher level on. And above level 4, designers can think mechanically if they want to. So it shouldn’t take long, once large-scale hardware-building becomes available, to get a basic design capability all the way up to human-scale. Adding more options will make the product range broader and richer, but they can be filled in over time.

    Of course, different applications will need different emphases in the levels. A micro-scale medical application may not use nanoblocks. A computer will have many levels of logic: gate, register, ALU, pipeline/microcode, chip, system.

    Nanofactory control software will start at the highest level and go all the way to the lowest, by using pre-canned rules and recipes for decomposing pre-specified designs. So nanofactory control is really the opposite of the project described here. It’s like the difference between a machine language programmer and a chip designer.

    Chris

  16. 16 brian wang January 19, 2007 at 10:25 pm

    btw: I believe Chris was saying some people (like Hal) would think that these projects will take a long time to succeed. Since Chris has predicted full-blown molecular nanotech for 2015-2020, this advancement would likely be expected to slot in at 2009-2012. Certainly the researchers would be thinking that they can reasonably expect to do this in 3-5 years, as that was the guiding timeline stated going into the Sandpit.

    A lot of the capabilities exist in separate research, although possibly not in the labs that are working on this. So there could be some learning-curve mastery of known processes and research that takes some time this year. Setup, procurement and funding could also take time this year. A fair amount of what is needed is clever integration and refinement. DNA synthesis of 32,000 bases has been around since 2004, and companies sell this service at a cost of about 70 cents to 2 dollars per base. Error rates are about 2 per 10,000 bases. Movement of molecules and charge down nanowires has been done.

    The funding amounts are also such that something would be expected to be delivered in about 2-3 years. If things are not completed, more money would probably need to be obtained to continue. I think the projects are properly sized and that useful (not necessarily complete) results will be possible in 2-3 years. Good luck getting it executed.

  17. 17 Michael Handy January 20, 2007 at 4:41 am

    As an undergraduate studying the field, I’d like to say how exciting these proposals are.
    It is a breath of fresh air from the constant talk of sunscreen and spill-resistant clothes we receive in the media and elsewhere, and a step towards the concepts and possibilities that got people such as me interested in the field in the first place.

  18. 18 Andrés Alba February 9, 2007 at 9:00 am

    The 1995 science fiction novel ‘The Diamond Age’, by Neal Stephenson, describes the idea of a matter compiler.
    http://en.wikipedia.org/wiki/The_Diamond_Age

    I am glad to see someone is starting to think for real about it.

