Software control of matter – your ideas welcome

One of the purposes of this public blog for the EPSRC Ideas Factory was to open up the process to anyone interested. When the sandpit begins, on January 8, we’ll be writing about the process as it happens. But we’d also be very interested in any ideas any readers of the blog might have. You might have an opinion about how we might achieve this goal in practice; you might have thoughts about what kinds of materials one might hope to make in this way; or you might have thoughts about why – for what social benefit, or economic gain – you might want to make these materials and devices.

All readers are invited to share any thoughts they might have through the comment facility on the Ideas Factory blog. Towards the end of next week, I’ll start putting up some posts asking for comments, and if we get any suggestions, we will feed them in to the participants of the Ideas Factory, using the blog to report back reactions. One of the mentors for the Ideas Factory – Jack Stilgoe, from the thinktank Demos – will collate and report the comments to the group. Jack’s a long-time observer of the nanotech scene, but he’s not a nanoscientist himself, so he won’t have any preconceptions about what might or might not work.

Richard Jones


32 Responses to “Software control of matter – your ideas welcome”

  1. NanoEnthusiast December 31, 2006 at 10:16 pm

    Hi, Richard, Philip M, etc.

    I think this blog is a great idea, as a Yank I’m a bit jealous. We don’t, as of yet, have a project like this Ideas Factory. In the triennial review of our National Nanotechnology Initiative there is a call for funding along these lines for “experimental demonstrations that link to abstract models and guide long-term vision.” That quote refers to “Site-Specific Chemistry for Large-Scale Manufacturing.” Hopefully, when such a project begins here, we too will have a blog like this where laymen, like myself, can express their thoughts.

    I have a few thoughts on the technical and societal aspects of this technology, but I will only write about one now.

    It is quite obvious that one technology in particular, that of the scanning probe microscope, has inspired the idea of the software control and manipulation of matter for the purposes of manufacturing. It is one of those wonderful instances in science where a negative has been turned into a positive. The negative being that, at the nanoscale, it is impossible to observe anything without interacting with it. Instead of lamenting this fact, researchers embraced this phenomenon, deliberately using these devices to pick up and deposit individual atoms. But what of the original desire, to observe without interaction? It seems as though the laws of quantum mechanics allow a loophole.

    The idea in question is that of counter-factual or interaction-free measurement, also known as quantum interrogation. It is best described by the classic bomb-detector thought experiment. The purpose of the set-up is to detect a bomb which is sensitive to a single photon without detonating it. You start out with a half-silvered plane mirror (a Mach-Zehnder interferometer set-up); a photon both passes through and is reflected by the mirror according to the laws of quantum mechanics. In the part of the wave-function where the photon passes through the mirror it interacts with the bomb, thus detonating it, but even if the photon does not take this path in reality, you can still gather information about the interacting path from the non-interacting path taken where the photon is reflected by the mirror. By doing this you can determine that the bomb is not a dud without detonating it. This only works half the time, because in the other half the photon destroys what you are trying to observe. However, using the quantum Zeno effect it is possible to improve your chances arbitrarily. So it is possible to gather information about something without interacting with it.
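    The Zeno-assisted improvement can be sketched numerically. In the standard analysis, splitting the interrogation into N gentle cycles, each rotating the photon's polarization by π/2N, gives a detection success probability of cos^2N(π/2N), which tends to 1 as N grows (a toy calculation; the function name is my own):

```python
import math

def zeno_success_probability(n_cycles: int) -> float:
    """Probability of detecting a live bomb without triggering it,
    using n_cycles weak interrogation cycles (quantum Zeno analysis)."""
    theta = math.pi / (2 * n_cycles)  # polarization rotation per cycle
    return math.cos(theta) ** (2 * n_cycles)

# Success probability climbs towards 1 as the cycles get gentler:
for n in (2, 10, 100, 1000):
    print(n, round(zeno_success_probability(n), 4))
```

    With two cycles you recover the 25% of the simple interferometer; with a thousand cycles the detection is almost always interaction-free.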

    So the question is what might this mean for nanotechnology? It has been suggested that techniques like the above could be used to watch a live movie of a Bose-Einstein condensate without destroying it. No doubt, there are many other things that could be observed in this way. Specifically to the task of this Ideas Factory, could one use this in conjunction with SPMs to both manipulate and see at the same time? The use of scanning probe microscopes is a bit like a blind person “seeing” his world by touching everything. It is hard to imagine a blind man getting a job making fine watches with delicate clockwork. The simple act of checking his work would destroy it in many cases. The same could be said about the Drexlerian vision of extending the capabilities of SPMs to create a working nanofactory. It is the nanofactory, or something very much like it, that this Ideas Factory seems to be all about. Presently, it is hoped that detailed computer models will allow us to predict problems ahead of time and find solutions beforehand. A great example of this is the paper by Jingping Peng, Robert Freitas et al. where reconstruction issues involving carbon dimer placement prompted the development of a placement strategy that staggers reaction sites. Another example of this sort of thing is when a team discovered that the properties of a surface that they were studying mysteriously changed after being imaged by an SPM. It was discovered by simulation that material from the surface was being picked up by the tip, thus confusing their equipment. As wonderful as simulations are, wouldn’t it be beneficial if you could both work *and* see at the nanoscale like we do at the mesoscale?

    I am interested to know to what extent participants in the Ideas Factory from the field of microscopy, Philip Moriarty and company, are aware of and employ quantum interrogation techniques. It is my understanding that this idea is only now beginning to be exploited. Paul Kwiat of the University of Illinois has used the technique to get answers from a quantum computer without it “actually running”.

    Here are some relevant links:

    Conclusion to the triennial review of the U.S.’s National Nanotechnology Initiative:

    A paper by Paul Kwiat et al. that involves interaction-free measurement:

    I believe there is also a paper in Nature that deals with his work in quantum computing. No doubt, Ideas Factory participants are all subscribers.

    Paper on carbon dimer placement to synthesize diamond by Peng, Freitas, et al:

  2. NanoEnthusiast December 31, 2006 at 10:20 pm

    Richard, I’ve left a rather long comment with multiple links. Please fish it out of the spam filter.

  3. jim moore January 1, 2007 at 4:17 am

    You asked for half baked ideas, but I have a couple of half frozen ideas instead.

    Nano scale actuator

    Because it is possible to change the temperature of nanoscale objects very quickly, it should be possible to make a structure that contains H2O and cycle it from ice to water. This will create a shape change that can be the basis of a nanoscale actuator. (There is a 9% volume change when water freezes/melts.)
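    That 9% volume change corresponds to a more modest linear stroke. A quick back-of-envelope check, assuming the element expands isotropically (real confined ice need not behave this simply):

```python
# Linear strain of an element whose volume grows ~9% on freezing,
# assuming isotropic expansion: cube root of the volume ratio.
volume_ratio = 1.09
linear_strain = volume_ratio ** (1 / 3) - 1
print(f"linear stroke per unit length: {linear_strain:.2%}")
```

    So each freeze/thaw cycle would give roughly a 3% linear displacement, still a respectable stroke for a nanoscale actuator.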

    Using Ice to fill the negative space

    I have looked at a lot of 3D prototyping systems; they all have some sort of material to fill the “negative” space in the object they are building, which is removed later. Would it be a useful idea to assemble in an icy state, then melt the ice and drain away the unwanted liquid water? The melting and flowing of water could do the jobs of final positioning and adhesion of parts at the nano scale and larger.

  4. Chris Phoenix January 1, 2007 at 6:13 pm

    This blog is a great innovation. I hope it works as well as I think it will. Thank you for trying it!

    In talking with many scientists about molecular manufacturing, I’ve found that they frequently over-generalize limitations. I hope it won’t sound too arrogant or too obvious if I list a few of the areas where scientists have tended to overestimate problems and thus limit their own problem-solving capacity:

    1) Anything we or biology can do today establishes a lower bound, not an upper. For example, there is no fundamental reason to expect that chemical processes will be limited to the 10^-6 error rate of DNA transcription.

    2) Recently a physicist asked me whether entropy wouldn’t build up in a nano-computational system and cause errors. My answer–that modest energy input could restore digital signals, and the resulting heat could be conducted away–satisfied him immediately, but without that conversation, he might have continued to assume that entropy would be a practical problem. Even theoretical limitations often arise from overgeneralized assumptions.

    3) In fact, the word “entropy” deserves special honors. “Entropy” is a catch-all that turns into a muddle. Separate from my conversation with the physicist, an expert computer scientist has written that Babbage’s mechanical Analytical Engine could not have worked because entropy would build up and distort the signals–which is of course completely incorrect; simple detents can restore digital mechanical signals. Entropy always has a physical mechanism. Identify the mechanism, and you may find that it’s not as bad as you assumed, or that you’ve lumped together a practical problem with a much less troublesome theoretical limit–and the practical problem can be solved.

    4) I’ve found that most nanotechnologists are used to thinking in terms of complicated and finely-tuned reaction conditions to achieve intricate and useful results. I don’t know if there is a theoretical or only a practical dividing line between the following two regimes: A) Complex conditions -> complex phenomena -> complex output. B) Simple programmable operations -> repeated many times -> intricate output. Computer scientists and engineers are comfortable with B). My experience suggests that most nanotechnologists are only comfortable with A).
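    Regime B) has a familiar toy illustration from computing: a one-dimensional cellular automaton applies one trivial local rule everywhere, and repeating it produces intricate global structure (a sketch; rule 90 is chosen only because it is the simplest vivid example):

```python
def step(cells):
    """One pass of elementary cellular automaton rule 90:
    each cell becomes the XOR of its two neighbours (edges fixed at 0)."""
    padded = [0] + cells + [0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(padded) - 1)]

# A single seed cell, plus many repetitions of the same simple operation,
# yields a Sierpinski-like pattern:
row = [0] * 15 + [1] + [0] * 15
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

    The analogy is loose, but it captures why computer scientists expect simple programmable operations, repeated many times, to yield intricate output.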

    I hope that some of these points are more useful than offensive. Thank you for considering them.


  5. Robert A. Freitas Jr. January 1, 2007 at 9:40 pm

    Seeking to achieve the software control of matter seems like an excellent idea. The self-assembly of macroscale objects comprised of materials both inorganic (e.g., diamond crystals) and organic (e.g., DNA/protein-based life) is already demonstrated in nature. But self-assembly processes will probably not be sufficient to make all of the things we would like to build. As noted in the final report of the recently completed Congressionally-mandated review of the U.S. National Nanotechnology Initiative by the National Research Council (NRC) of the National Academies and the National Materials Advisory Board (NMAB): “For the manufacture of more sophisticated materials and devices, including complex objects produced in large quantities, it is unlikely that simple self-assembly processes will yield the desired results. The reason is that the probability of an error occurring at some point in the process will increase with the complexity of the system and the number of parts that must interoperate.”
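    The NRC’s error-accumulation point can be made quantitative with a first-pass model: if each of N parts must incorporate correctly and independently, with per-part error probability p, the fraction of defect-free products is (1 − p)^N (a toy model of my own, not from the NRC report; real error modes are rarely independent):

```python
def defect_free_yield(error_rate: float, n_parts: int) -> float:
    """Fraction of products assembled with zero errors, assuming an
    independent per-part error probability (a deliberately crude model)."""
    return (1 - error_rate) ** n_parts

# Even a 0.1% per-part error rate collapses the yield of large assemblies:
for n in (100, 1_000, 10_000):
    print(f"{n} parts: {defect_free_yield(1e-3, n):.2%} defect-free")
```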

    The opposite of self-assembly processes is positionally controlled processes, in which the positions and trajectories of all components of intermediate and final product objects are controlled at every moment during assembly. Positional processes should allow more complex products to be built with high quality, and should enable more rapid prototyping. Positional assembly is the norm in conventional macroscale manufacturing (e.g., cars, appliances, houses) but has not yet been seriously investigated experimentally for nanoscale manufacturing. Of course, we already know that positional fabrication will work in the nanoscale realm. This is demonstrated in the biological world by ribosomes, which positionally assemble proteins in living cells by following a sequence of digitally encoded instructions (even though ribosomes themselves are self-assembled). Lacking this positional fabrication of proteins controlled by DNA-based software, large, complex, digitally-specified organisms would probably not be possible and biology as we know it might never have arisen.

    Today, vast sums of money are already being invested in self-assembly-based biotechnology approaches to manufacturing. By contrast, only small sums are currently directed towards exploring positionally controlled molecular manufacturing using organic materials, and almost no resources are being devoted to positionally controlled molecular manufacturing using inorganic materials. Thus even a fairly large investment in the former area would probably have negligible incremental impact, while even a small investment in the latter area could have significant incremental impact, on progress in molecular manufacturing technology.

    The most important inorganic materials may be the rigid covalent or “diamondoid” solids, since these could potentially be used to build the most reliable and complex nanoscale machinery using positional assembly. Preliminary theoretical studies have suggested great promise for these materials in molecular manufacturing. The NMAB/NRC Review Committee recommended that experimental work aimed at establishing the feasibility (or lack thereof) of positional molecular manufacturing should be pursued and supported: “Experimentation leading to demonstrations supplying ground truth for abstract models is appropriate to better characterize the potential for use of bottom-up or molecular manufacturing systems that utilize processes more complex than self-assembly.” One possible rough outline for a combined experimental and theoretical program to explore the feasibility of nanoscale positional manufacturing techniques, starting with the positionally controlled mechanosynthesis of diamondoid structures using simple molecular feedstock and progressing to the ultimate goal of a desktop nanofactory appliance able to manufacture macroscale quantities of molecularly precise product objects according to digitally-defined blueprints, is available at the Nanofactory Collaboration website.

  6. Chris Phoenix January 1, 2007 at 11:58 pm

    Some ideas in various stages of bakedness:

    Silica-nucleating proteins (e.g. silicatein) might be used to make silica structures. Especially interesting would be if a proximal-probe-bound protein could be used to “grow” a silica shape, with deposition only taking place where the protein was brought into contact with the existing shape. In principle, this could be used to fabricate mechanically functional shapes, blurring the line between covalent solid machines and biomimetic systems. Eventually, it could prove useful that silica can maintain its structure in a non-solvated regime.

    Nano-manipulation (cf Zyvex) and electron-beam buckytube welding/cutting (cf Zettl and Banhart) may have reached the point where a 3D structure of buckytubes could be built up by feeding in a single buckytube, bonding it to the structure, then cutting to fit–somewhat analogous to lamp-worked glass beads. Similarly, what if the nano-manipulated tube was welded but not cut–could it serve to transmit force from the external actuator directly to the nanoscale? Of course the stiffness would be quite low, but that could be improved by pulling part of the structure against another part functioning as a stop.

    I hypothesize that efficient biological protein machines balance their energy by means of entropic springs–easily tunable by evolution, but limiting the speed of operation. This hypothesis is testable, and has implications for protein study and for nanomachine design. See for more discussion.

    With regard to Rob Freitas’s post: Positionally controlled fabrication has lots of dimensions of possibility: size of molecular building blocks or moieties; strength of bonds (weaker bonds can be compensated by parallel bonds); materials constructed; medium (polar or nonpolar solvent, supercritical CO2 or xenon, or vacuum) and of course flexibility of the deposition mechanism. Freitas’s proposal is near an extreme; very strong bonds, very small moieties, vacuum, relatively small set of operations. This does not make it bad; diamondoid appears to be a “sweet spot” in the space of machines. My point is that mechanosynthesis of structures is much broader than diamondoid or Drexler, and blends into approaches that don’t even require covalent chemistry.


  7. Richard Jones January 2, 2007 at 7:48 pm

    Thanks for your comments, everybody. I’m not going to respond to them now, but we’ll present them to the group next week. Please keep them coming, if you have any more thoughts.

    Richard Jones

  8. NanoEnthusiast January 3, 2007 at 12:37 am

    Richard, I’ll post an abbreviated version of my original post that is still awaiting moderation here:

    I am interested to know to what extent participants in the Ideas Factory from the field of microscopy, Philip Moriarty and company, are aware of and employ techniques based on counter-factual or interaction-free measurement, also known as quantum interrogation. The idea is based on the Elitzur-Vaidman quantum bomb-detector thought experiment. The question is, can you detect the presence of a bomb that explodes upon being touched by a single photon without exploding it? The intent is to gather information about a space without entering that space. This is achieved by the use of a Mach-Zehnder interferometer, where it is possible to exploit wave-particle duality to that end. The beam splitter allows a single photon to travel down two paths simultaneously. Normally, the photon would later interfere with itself, creating an interference pattern; but if there is an object blocking its path it cannot do that. Instead, the photon makes it to an additional detector perpendicular to the first one, the one that the photon reaches when it interferes with itself. (I’ll see if the spam filter allows me to post a link to a diagram in my next post.) This method only works 50% of the time because when the photon in this universe, to use the Everett interpretation, interacts with the bomb it (the bomb) is destroyed; but when it does not interact with the bomb you still get information by the absence of the other part of the photon’s path. The odds can, however, be improved arbitrarily using the quantum Zeno effect.

    So the question is what might this mean for nanotechnology? It has been suggested that techniques like the above could be used to watch a live movie of a Bose-Einstein condensate without destroying it. No doubt, there are many other things that could be observed in this way. Specifically to the task of this Ideas Factory, could one use this in conjunction with SPMs to both manipulate and see at the same time? The use of scanning probe microscopes is a bit like a blind person “seeing” his world by touching everything. It is hard to imagine a blind man getting a job making fine watches with delicate clockwork. The simple act of checking his work would destroy it in many cases. The same problem might plague the Drexlerian vision of extending the capabilities of SPMs to create a working nanofactory. (It is the nanofactory, or something very much like it, that this Ideas Factory seems to be all about.) Presently, it is hoped that detailed computer models will allow us to predict problems ahead of time and find solutions beforehand. A great example of this is the paper by Jingping Peng, Robert Freitas et al. where reconstruction issues involving carbon dimer placement prompted the development of a placement strategy that staggers reaction sites. As wonderful as simulations are, wouldn’t it be beneficial if you could both work *and* see at the nanoscale like we do at the mesoscale?

  9. NanoEnthusiast January 3, 2007 at 12:52 am

    Here is the link to a paper by Paul Kwiat on quantum interrogation, with full diagrams:

    Kwiat is the man behind the quantum computer that works better when it doesn’t “actually run”. In that experiment, he employed the same ideas.

    Richard, my first post is now redundant. Ignore it if you wish.

  10. Phillip Huggan January 3, 2007 at 6:42 am

    One idea is to utilize the interior of a carbon nanotube as a reaction site:
    Carbon nanotubes offer the potential advantage of giving only one degree of freedom for a reaction co-ordinate.
    The Higher Diamondoid diamantane (C14H20) fits ideally inside a (7,7) CNT. The paper mentions functionalizing reaction sites of adjacent diamondoids (inside the CNT) with boron and nitrogen, but I wonder if it would be possible to polymerize (combine) two adjacent diamondoids that have each had one or more of their opposing hydrogen atoms removed? Building structures of interlocking diamond wire would be simpler without the boron and nitrogen.

    My main novelty to introduce is the idea of nested CNTs. The reaction site is in the middle of a MWCNT. Each concentric interior tube “layer” consists of two tubes actuated at each end (say, by an SPM); each can be withdrawn nearly all the way or inserted right up to the reaction site as need be.
    Product buildup would consist of joining two adjacent diamantane molecules within the open-ended MWCNT that have (somehow) been functionalized or hydrogen depassivated. Then, after a long enough diamond chain has been constructed, remove some of the innermost interior CNTs and allow the diamond wire to rotate (perhaps doping and an electric field could help here) perpendicular to the CNT’s length. Then *reinsert* the innermost tubes to “clamp” the diamond wire in place. Find some method of functionalizing/depassivating a reaction site on the clamped diamond wire (run an STM over a custom-created opening in the MWCNT sidewall?), and you now have the ability, maybe, to build a 3D Higher Diamondoid mesh. “y”-shaped CNTs could be used as the reaction site if three actuating devices are required for a given reaction.

    Higher Diamondoid mesh isn’t likely to be rigid enough for some/most/any nanofac components. I have no idea how the many CNT layers would connect with SPMs at the ends of the CNT layers. Certainly this technique is beyond today’s nanotechnology mastery. I doubt diamond wire mesh would even be strong enough to form the walls of a UHV chamber.

  11. Chris Phoenix January 3, 2007 at 9:56 pm

    Re NanoEnthusiast’s suggestion: (I’m sure this isn’t news to the Sandpit people, but for others reading the comments:) Covalent structures are not generally delicate enough to require such intricate techniques. Here’s a quick analogy. A blind person might have trouble repairing a watch. But a blind person could solve a Rubik’s Cube with braille faces instead of colors–could easily read the faces without rotating the cube accidentally.

    A major reason we can’t “see” nanoscale mechanical structures is that photons are too big. Then, in the absence of photons, we have to use interactions and phenomena we’re not used to, and have to work hard to figure out what the signals mean. Also, our tools are vastly bigger than what they’re measuring, and thus hard to use. So it is not at all simple to “see” the nanoscale. But there are a lot of things of interest that we can probe without disturbing.

    Here’s a more detailed analogy. It’s not a direct physical analogy with a scanning probe microscope, but it’s pretty close. Suppose you were trying to read Braille with a toothpick. If you had good muscle control and spatial sense, you could do it pretty straightforwardly. Sure, you *could* poke the toothpick through the paper, but you wouldn’t normally worry about that. That’s the physical limit–a toothpick can poke through paper–and that’s how much you have to worry about it in theory. So what is it that makes SPM hard to do?

    Take a telephone pole and hang it sideways from milspec bungee cords so it looks like a battering ram. Put a toothpick sticking out of one end. Position a Braille book next to the toothpick*. Stand at the other end. Now try to read the Braille. You could still do it… the pole transmits force well and requires very little force to move, so it should be sufficiently sensitive… if you had the patience to move the telephone pole *very* slowly and delicately, and a good enough kinesthetic sense to track the force feedback. With a mechanical system to do the scanning and measuring, rather than human muscles and nerves, you should be able to read Braille just fine, albeit slowly… as long as the page didn’t move, and the bungee cords didn’t slowly stretch, etc. It’d certainly be a non-trivial engineering problem.

    Although it’s difficult to scan Braille with a telephone pole, the size and inconvenience of the pole (or scanning probe microscope) don’t have any direct connection to the size of things you can measure. And the delicacy of certain nanoscale or quantum phenomena doesn’t have any direct connection to the strength of covalent structures.

    Just because the telephone pole *looks* like a battering ram, doesn’t mean that the weight of the pole will drive the toothpick through the paper. Just because the system is cumbersome, doesn’t mean that it’s running into deep physical limits. Just because careless operation can easily destroy the book, doesn’t mean that paper is too delicate to be touched by toothpicks.

    And although there are probably experimenters right now doing SPM experiments that do run up against deep physical limits, there are lots of other experimenters making SPM images of atoms in crystals without coming close to knocking the atoms loose.

    * In fact, you have to bring the pole to the book rather than placing the book next to a pole hanging still, and you don’t know exactly where the book is. So you have to move the gantry crane that the pole is hanging from… while continually sensing whether it’s touching the book yet. Not easy–but SPMs actually do this. And another thing–the “toothpick” is actually more like a golf ball or thimble–much bigger than the Braille dots, but it has a lowest point that can be used to scan them.

    I should mention thermal noise. Oh yes, the book is vibrating… but it’s usually vibrating less than the width of a Braille dot, and even if it weren’t, you could sense the *average* position of the dot by looking at the *average* force between the dot and the toothpick. Now, if you had a non-covalent system like small molecules resting on a surface, the atoms might actually move around, and the microscope could be too slow to watch them, and only see them statistically. A large molecule like a buckytube won’t move by itself at room temperature, but can be pushed–this is actually quite useful.
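    The scale of that thermal vibration can be estimated from the equipartition theorem: a feature held with effective stiffness k has RMS thermal displacement √(k_B·T/k). A sketch, treating the feature as a single harmonic mode (the 10 N/m stiffness below is an illustrative value, not a measurement):

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def rms_thermal_displacement(stiffness: float, temperature: float = 300.0) -> float:
    """RMS displacement of one harmonic mode with the given stiffness (N/m)
    at the given temperature (K), from equipartition: k<x^2>/2 = kB*T/2."""
    return math.sqrt(K_BOLTZMANN * temperature / stiffness)

# A stiffly held covalent feature (~10 N/m) at room temperature jitters
# by only a fraction of an atomic diameter:
print(rms_thermal_displacement(10.0))  # about 2e-11 m, i.e. ~0.2 angstrom
```

    Stiffer structures jitter even less, which is one reason averaging over the thermal motion works so well in practice.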

    I hope this explanation isn’t too long-winded for this blog, and didn’t make too many simplifications and analogies that will annoy the real scientists. Perhaps I should have just skipped to the summary: Scanning probe microscopes scanning covalent solids are limited a lot more by practical (e.g. mechanical) problems than by foundational physical limits or quantum phenomena.


  12. Chris Phoenix January 3, 2007 at 11:43 pm

    On second thought…

    The previous comment was the kind of response I’d have given on my blog–correcting/instructing other commenters. But this isn’t my blog–and I guess it’s for brainstorming, not criticism of others’ ideas. I won’t be offended if the moderators want to delete the comment. And if/when guidelines are posted for how/whether to respond to comments, I’ll do my best to follow them.


  13. NanoEnthusiast January 4, 2007 at 3:36 am

    Yes, Chris, I am aware that the whole point of the MNT paradigm is to create idealized conditions and work with idealized materials, where it is not necessary to know everything that is going on. Models tell you what should be going on, and it is expected that in the eutactic environment there will be little deviation between theory and reality. The sort of devices imagined in the MNT program can move in only a very small number of ways. That is why it’s all about diamond or other covalent solids. Maybe the MNT program won’t need exotic methods of observation to work in practice. At least we have a theoretical framework for such methods if they are needed.

    The role counterfactual measurement might be able to play in nanotechnology is unclear. It is still a relatively new field. It will probably be integral to the design of any practical, scalable quantum computer. If we had a decent quantum computer now, this whole issue of the feasibility of a software-controlled matter compiler would be much easier to settle. We could see rather quickly what is most likely to work and not to work. As an example of where we are with simulations, take proteins. The only simulations of proteins that are practical are ones based on protein design; general folding is still a big problem. This limits you to a set of proteins that can be easily designed. If we had a mature form of microscopy based on counterfactual measurement, might we have a better understanding of how proteins fold in situ?

    I admit, Chris, you made a pretty persuasive argument in one of your previous essays that mature MNT could be very powerful even with a rather impoverished set of reactions. The analogy of a digital computer is a good one. It only does a handful of things well, but these operations can be combined in myriad ways to produce a vast array of applications. The same could be true in nanotechnology, but this early in the game let’s try not to limit ourselves to processes that are easily visualized in simulations. The problem with that is that what is easy to model and what is easy to do are totally different things. This is what I think has created so much confusion about Drexler’s ideas. They are simple and easy on one level, but on the construction front they are very hard or impossible with current tools. Hopefully, we will soon be able to test the basic reactions in diamond mechanosynthesis. (Maybe this Ideas Factory will fund such an effort!) That would be a huge achievement in itself, but it would not prove the entire enterprise. And even if you knew for sure an MNT-style, neat and clean nanofactory was possible, you would still have to build it. That is a daunting task to say the least. We don’t know how many messy processes will be needed to make the first nanofactory.

    The point I was making about being able to see like we do at the macroscale is that, at the macroscale, the interference caused by observation does not appreciably change what we are looking at. That is the feature of seeing at the macroscale that I hope quantum interrogation can bring to the nanoscale.

    In short, the technology of interaction-free measurement or quantum interrogation is still new. It may or may not be needed in the normal operation of a device that can build through the software control of matter. Indeed, it would be easier to not have to deal with the extra complexity. However, there may be phenomena of interest to nanotechnologists that can only be directly viewed in this manner; as simulations are a rather indirect way to learn about the world around us.

  14. Chris Phoenix January 4, 2007 at 7:11 am

    I’d be surprised if there were a product structure that was delicate enough to need counterfactual measurement (but could last long enough to be considered a product). But you make a good point that it’s not just final structures, but processes that have to be observed. And more generally, there are probably other applications that I don’t know enough to think up. That’s why I more-or-less retracted my comment.

    Brian Wang says that Dwave is coming out with an analog quantum computer system… this year… with 16 or maybe 64 qubits. Go to and search for dwave to find relevant stories.

    Diamondoid MM has a lot of (expected) advantages beyond being relatively easy to simulate. High strength and stiffness, and superlubricity.

    There are areas where it’s fair to accuse MNT of avoiding potential functionality to keep analysis simple. Electronics is a big one. But the choice of diamondoid isn’t. In fact, if you read Nanosystems closely, Drexler didn’t define “diamondoid” as pure-carbon diamond lattice. He was talking about a broad range of covalent solids, certainly beyond detailed analysis at the time. As far as I know, the more recent emphasis on diamond-only came from Freitas and/or Merkle, and its purpose was to make bootstrapping easier.

    As to the benefits of easily characterized systems (which I think is more about digital/multistable than about few degrees of freedom, though the two tend to go together): regardless of the manufacturing system, predictable device and subsystem operation will be extremely useful for product design. The most complex/intricate things humans have built BY FAR are computer hardware and software, and they would be impossible without levels of abstraction, which in turn depend on fully characterized (Boolean) operations.

    There will be lots of useful nano-products, even programmable-materials products, that won’t need that kind of intricate predictability. But for products that do need it, well… if you can’t characterize, you can’t design. You can punt and use evolution, but it remains to be seen whether evolution can be a general-purpose R&D tool.

    Covalent bonding is highly non-linear and in many cases produces metastable systems that might as well be called stable. In effect, it should provide a family of digital operations with ignorable error rates, making structures that are completely identical (modulo isotope-sorting, if necessary, and below the few-micron scale where background radiation causes too-high damage rates per unit). I think that fully-known nanostructures (engineered to have fully-known functionality) will be centrally important for a number of directions of nanotech advancement. And additive covalent synthesis of covalent solids is the best way I know of to achieve that. If the Ideas Factory comes up with another way to achieve fully characterized construction, that’ll be great! (Though I’ll still want to ask about material properties and exponential manufacturing.)
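    To put a number on “ignorable error rates”, here is a quick back-of-envelope sketch in Python (the per-operation error rate is my own illustrative figure, not an established value): if each placement fails independently with probability p, a structure of N atoms is perfect with probability (1 − p)^N, so p has to scale down with the part count.

    ```python
    import math

    # Illustrative arithmetic: if each covalent placement fails independently
    # with probability p, a structure of N atoms is perfect with probability
    # (1 - p)**N. The value of p below is an assumption for illustration only.
    p = 1e-12  # assumed per-operation error rate for a "digital" reaction

    for n_atoms in (1e6, 1e9, 1e12):
        # math.log1p keeps the computation accurate for tiny p
        p_perfect = math.exp(n_atoms * math.log1p(-p))
        print(f"N = {n_atoms:.0e}: P(structure is perfect) = {p_perfect:.6f}")
    ```

    Even at one error per trillion operations, a trillion-atom part comes out flawless only about 37% of the time, which is why the argument above couples near-digital reactions to fault-tolerant design for the rare remaining errors.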

    Of course, the approach I’m promoting here denies the necessity of biological complexity for making useful products. I could write several essays on why biological complexity is overrated. It’s too easy to point to biology’s successes and attribute them to its complexity, but those successes come with subtle limitations (e.g. ATP synthase needs a volume of water on each side to be efficient), the complexity doesn’t always contribute to the success (maybe it was necessary to let the success evolve), and often it’s better overall to over-engineer in order to simplify (products compete in the design cycle, not only in usage).


    Ps. In large systems, there will be faults. But I think fault tolerant design to deal with relatively rare errors in otherwise perfect constructions can preserve fully characterized operation, a lot more easily than statistical aggregate constructions can be treated as fully characterized. So what about thermal noise, which certainly provides an inescapable statistical variance? I have answers for mechanosynthesis, for noisy motion trajectories, and for mechanisms “jumping the tracks”, to explain how in each case the important features can be fully characterized and thermal noise can be functionally ignored above the atom scale, but this comment is already far too long.

  15. 15 Jeremy Baumberg January 4, 2007 at 7:26 am

    Interesting discussions on the Sandpit theme that I’ll be working on as part of next week. What seems to be missing is a sense of what we should do now. The ideas presented are often very interesting, but we know very little beyond the conceptual (and possibly some fairly primitive modelling) about practically going ahead with them. I’m rather of the view that we need to define a promising line of devices (and it may not matter too much which) and work on trying out practical construction schemes that can be built now. For what it’s worth, I am a fan of directed assembly, which combines self-assembly with a tuneable, time-dependent environment that guides assembly.


  16. 16 brian wang January 4, 2007 at 10:20 am

    The first part of the Sandpit (or even before) should map out the range of existing and near-term technologies that help provide control of matter at or near molecular scales. Then assess the current capabilities and the best near-term improvements that can be made in each. Then brainstorm around how they might be combined into complementary systems.

    You would need to specify the dimensions of control that you would have: size ranges of control, degrees of control, reaction control, speed of processes, repeatability of reactions, etc.

    DNA nanotechnology
    DNA synthesis
    RNA synthesis
    self assembly and guided self assembly
    laser manipulation
    magnetic and electrical manipulation
    STMs, AFMs, TEMs
    Dip pen lithography (arrays of 55,000 and plans for a million)

    Look at the sensor technology

    Supercomputers and quantum computers

  17. 17 Brian Wang January 4, 2007 at 5:45 pm

    Other factors by which the survey of processes should be ranked are: whether they can create devices that are powered; whether they could make a molecular actuator; whether they could make a complex synthetic material like a phonon-mediated, architected, room-temperature superconductor; how scalable the current options are; whether they are a path to exponential manufacturing, or to terahertz operation with a trillion manipulators operating in parallel; and how good a creative application of several current systems could get in 2 years, 5 years, 10 years, 20 years. Other rating factors: error rates, reliability of operations (some DNA insertion methods work 1 time out of 5), range of materials that can be worked with, current limitations, and fundamental limitations. (Note: when brainstorming, there are paths and ways to avoid solving limitations: engineer simplified but useful processes that can still achieve a goal but take longer. You can use multiple processing steps; macroscale systems can help manipulate things where the molecular scale is having problems. Flush different reactants in and out of a picoliter chamber using microchannels. Use nanopore filters to control what goes in or out. Find ways to kludge a better next stage of capability.)

    A quantum computer roadmap looks at ranking each of several promising technologies and roadmaps the work that needs to be done to take each to the next level. The 2004 roadmap did seem to miss the analog (adiabatic) approach. The Sandpit should also be aware of the advanced work on scaling ion-trap quantum computers. They have good control of ions.

    Fit metamaterials into the developing-capabilities area. When will we have useful superlenses made from metamaterials? Superlenses could be applied to laser manipulation, laser lithography, etc. Metamaterials also give better control of terahertz radiation by fine adjustment of nanowires, and could give finer magnetic and electrical control.

    How can the chemistry of some of the processes be extended? Building from DNA nanotech with more synthetic DNA, more kinds of polymers, combining that manipulation with nanotubes, silicon, metals, dopants, etc…

    There is also the work on hijacking and manipulating viruses, bacteria and cells to do engineered work; for example, viruses making batteries (6 nanometers in diameter by 880 nanometers in length).

    Some sweet spots for improved capability seem to be making STMs and other microscopes 5-10 times more precise and repeatable, down to about 1 angstrom. Then a lot of the work that Rob Freitas and Ralph Merkle are computing would be a lot easier. Freezing things could be a short-term way to make up for less ability in the tool.

  18. 18 Kurt9 January 5, 2007 at 7:18 am

    I think the best way to do this is to research completely how biological cells work, then reverse engineer them. Cells clearly self-replicate and can form a wide variety of systems (e.g. multi-cellular life-forms). Biology, being based on cells, is modular. Any kind of industrial nanotech would have to have similar modularity to be scalable from the nano-level to making macroscale products. Biology, being modular, has the needed redundancy built in, where every cell does not have to be perfectly functional for the whole system to work. Within each cell, there is built-in redundancy and modularity in the cellular processes that build and power an individual cell.

    It seems to me that biologically-derived nanotech is the way to go. Needless to say, I am very sceptical of any kind of “dry” nanotech concept. I have worked with AFMs in the past and find them a very useful analytical instrument, but their use for nanofabrication involves scaling issues that are very difficult.

  19. 19 Philip Moriarty January 5, 2007 at 10:09 am

    It’s clear that Richard’s efforts in establishing this Ideas Factory blog have paid off – there have been some fascinating and thought-provoking comments posted here. As Jeremy (Baumberg) states above, however, perhaps the key challenge for next week’s sandpit will lie in identifying schemes for the control of matter which can be implemented on a three to four year timescale (i.e. within the typical timeframe of an EPSRC grant). This will ultimately require the proposal of **well-defined** materials systems and manipulation protocols. Brian Wang’s and Kurt’s comments above are interesting in this regard. (However, Brian’s inclusion of the ability to generate “a complex synthetic material like a …room temperature superconductor” as one of the assessment criteria is perhaps just a little ambitious…!).

    My research interests of relevance to the matter compilation theme span SPM-directed manipulation of single atoms and molecules under what might be termed extreme conditions (ultrahigh vacuum/ 4K – 300K) to self-assembly/organisation of molecules and nanoparticles deposited from solution onto solid substrates (at atmospheric pressure, room temperature). With regard to atomic/molecular manipulation using scanning probes as pioneered by Eigler et al., the bottom line – as I see it – is that SPM represents the only tool currently at our disposal which can drive and characterise fundamental mechanosynthesis reactions or positionally controlled processes.

    I find much to agree with in Rob Freitas’ post (Comment 5), in that I think there is a lot of exciting science to pursue in the area of computer-driven, positionally-controlled chemistry using scanning probes. Techniques such as inelastic tunnelling spectroscopy could be performed in parallel with dynamic atomic force microscopy/spectroscopy in order to characterise the tip structure and chemical nature during positionally-controlled fabrication of nanostructures. To date, and to the best of my knowledge, a complex 3D structure (analogous to, say, one of the quantum corrals fabricated by Mike Crommie et al. in 1993) has not yet been built with scanning probes. I’m very interested in exploring whether – **with appropriate tip functionalisation** and consideration of the (complicated) potential energy landscape and associated energy barriers – autonomous atom-by-atom fabrication of 3D nanoparticles is possible using scanning probes. Diamondoid structures and hydrogen-passivated diamond (and silicon) surfaces are likely to be of particular importance.

    Hence, unlike Kurt (Comment 18 above), I believe that there is significant mileage in exploring “dry” nanotech concepts. This doesn’t mean, however, that I see scanning probe methods as being scalable to the nanofactory/molecular assembler concepts put forward by Drexler et al. (Chris and I spent a considerable amount of time discussing the “machine language” of molecular manufacturing a couple of years ago so I’m not going to repeat the arguments here.)

    Bringing together my interests in both “brute force” directed assembly using scanning probes and self-assembly/self-organisation, I’m particularly interested in the “grey area” between fully deterministic positional control and directed self-assembly and self-organisation. (I realise that “directed self-assembly” is rather an oxymoronic term but bear with me…). STM tips generate intense electric field gradients which can be used to drive the diffusion of adsorbed atoms and molecules in a given direction (Stroscio, Whitman et al. demonstrated this for Cs on GaAs(110) back in the early nineties). In liquids, many groups (including ourselves in Nottingham) are playing with dielectrophoresis as a method of tuning self-assembly of nanotubes and nanoparticles.

    A grand challenge for nanotechnology is to connect the nanoscopic and macroscopic worlds. I believe that there is particular potential in combining far-from-equilibrium pattern-forming processes with molecular design and the exploitation of non-covalent interactions to tune the structure of matter across a wide range of length scales (nanometres to microns to millimetres (?!)). If self-assembly and self-organisation can be directed via external fields, pH differences etc… then there is a broad (and tunable) parameter space to explore. In terms of specific systems, I’m keen on using functionalised metal nanoparticles. Metal nanoparticles (and metal nanoparticle assemblies) also have rather interesting plasmon-mediated optical properties which, in the spirit of Richard’s “It’s all about metamaterials” post on Soft Machines, could feed into the control of photon “flow” through a structure.

    While on the topic of directed- and self-assembly, Chris in Comment 4 suggests that most nanotechnologists are only comfortable with a regime whereby “Complex conditions->complex phenomena->complex output”. I’d quibble with this! It’s generally appreciated that very many systems whose dynamics can be described with rather simple differential equations and a handful of variables (a damped, driven pendulum or Turing reaction-diffusion systems, for example) give rise to remarkably complex outputs. Hence, simple computational models (e.g. the Ising model) can give rise to complicated and rich dynamic behaviour.
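    As a toy demonstration of that point, a minimal Metropolis simulation of the 2D Ising model fits in a dozen lines of Python (the lattice size, temperature and sweep count here are arbitrary choices of mine, for illustration only):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, beta = 32, 0.6  # lattice size and inverse temperature (arbitrary picks)
    spins = rng.choice([-1, 1], size=(N, N))

    def sweep(s):
        """One Metropolis sweep: N*N single-spin flip attempts."""
        for _ in range(N * N):
            i, j = rng.integers(N, size=2)
            # Sum of the four nearest neighbours (periodic boundaries).
            nb = s[(i + 1) % N, j] + s[(i - 1) % N, j] \
               + s[i, (j + 1) % N] + s[i, (j - 1) % N]
            dE = 2 * s[i, j] * nb  # energy cost of flipping spin (i, j)
            # Accept the flip with the standard Metropolis probability.
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] *= -1

    for _ in range(50):
        sweep(spins)
    print(f"|magnetisation| = {abs(spins.mean()):.3f}")
    ```

    The single update rule is about as simple as dynamics gets, yet sweeping beta through the critical value (~0.44) takes the system from disorder through critical fluctuations to ordered domains – exactly the simple-input, rich-output relationship described above.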


  20. 20 paulrouse January 5, 2007 at 1:53 pm

    Lots of great ideas, thoughts and suggestions coming out which will be very helpful for next week.

    A small point from the EPSRC perspective. Philip suggests above that participants might want to think in terms of typical EPSRC grants. I would discourage you from doing that in favour of concentrating on the scientific challenges. A structure will arise out of the science agenda as the week progresses.

    An IDEAS factory has no constraints (as long as we are not breaking the law!).


  21. 21 Philip Moriarty January 5, 2007 at 3:27 pm

    Hi, Paul.

    My point re. thinking about a “typical” EPSRC grant was really solely related to the time scales involved – what could we hope to achieve on a three to five year timescale? This will tend to focus discussion. Jeremy suggests something similar in his comment above – in terms of device fabrication, what might be realisable now or in the near future?

    I realise, however, that the Ideas Factory scheme is seen by EPSRC to be a mechanism entirely distinct from conventional funding mechanisms such as Responsive Mode and apologise for the lack of clarity in my previous post.

    I look forward to meeting you on Monday.


  22. 22 Chris Phoenix January 5, 2007 at 5:33 pm

    The description of the project’s goal leaves a lot of possibilities open. A “self-replicating” robot system can mean anything from installing batteries in robots to smelting metal and stepping chips; likewise, atomic control of matter on kilogram scale can mean anything from growing large salt crystals, to manufacturing kilograms of precise nanoparticles, to tabletop nanofactories.

    Granted you have to focus on things you can accomplish in 3-5 years, it wasn’t clear to me whether you intended to make nanoscale devices directly, or whether you might choose to spend the time making tools (perhaps including theoretical or computational tools) with which to make devices 6-10 years from now. Your “matter compiler” description–“output a macroscopic product in which every atom is precisely placed”–seemed to imply the latter. So I’ve been trying to answer the question, “Where are we going–What kinds of things should the matter compiler compile?”

    In terms of what you can do in the next 1-2 years–The DNA structure people are doing amazing things, and could probably be assisted by a number of cross-disciplinary tools. For example, I’m told they can build 3D structures more easily than they can verify what they’ve built. Can LEAP be used to make 3D images of flash-frozen structures? Or, what about a microfluidic DNA synthesizer that could produce molecules by the thousands instead of billions and reduce materials costs? Perhaps a scanning-probe nanopipette (borrowed from cellular surgery) might help to reduce diffusion times for building meta-structures out of large self-assembled building blocks. I don’t know–I’m not a DNA person–but my strong impression is that giving them better tools will pay big dividends.

    I am very much in favor of Philip Moriarty’s suggestion to study 3D diamond synthesis.

    It might be useful to ask the question: How much information can be delivered to the nanoscale and embodied in a product, and how rapidly? DNA synthesis is around a kilobyte per hour; I guess temperature and pH change are about the same; scanning probe chemistry with cm-scale microscopes might be similar (Philip?). Being able to select one of a thousand materials is very good. But to those who are interested in function emerging from physical structure (including, I think, a lot of DNA people), a kilobyte of structure is quite small and limited, compared to what they’d really like to play with.

    So I’d like to propose the goal of a kilobyte per minute inserted into atom-precise structures:

    What modalities can deliver a kilobyte per minute to the nanoscale? I can think of four: Scanning beam (FIB, electron microscope), microfluidic injection of pre-made DNA strands, direct electrodes, and pulsed light (photons are hard to catch and even harder to store, but color-specific actuators feeding simple computational apparatus–e.g. nanoscale stepping motors–might be useful).

    What nanoscale phenomena can latch or store a kilobyte per minute? A photoresist can do it. DNA binding *might* be able to do it in a microfluidic environment. With development, fast redox actuators might be able to, again if they were coupled to some kind of computational structure.

    What nanoscale processes can be used for atom-precise fabrication? Can any of them be driven by incoming or stored information? Which of them have operating times under fifty milliseconds? Displaced electrons or deformed molecules may be used to modulate chemistry. Bonus question: what externally-driven atom-precise fabrication processes can create 3D structures?
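    The arithmetic behind these rates is worth making explicit. A quick conversion sketch (the payload figures follow the estimates above; the one-bit-per-operation assumption is mine, introduced only to connect the data rate to the fifty-millisecond question):

    ```python
    KB = 1024  # bytes

    rates_bytes_per_s = {
        "DNA synthesis (~1 kB/hour)": KB / 3600,
        "proposed goal (1 kB/minute)": KB / 60,
        "forward-looking (1 kB/second)": KB / 1,
    }

    for name, bps in rates_bytes_per_s.items():
        bits_per_s = 8 * bps
        # If a serial process latches one bit per operation, this is the
        # step time it must sustain to keep up with the data rate.
        print(f"{name}: {bits_per_s:8.1f} bit/s -> {1000 / bits_per_s:7.2f} ms per bit")
    ```

    At a kilobyte per minute, a serial one-bit-per-operation process needs a step roughly every 7 ms, so fifty-millisecond operations would have to latch several bits each, or run in modest parallel.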

    The set of techniques that passes these tests is small, but probably is not zero. Let me add two forward-looking questions that it may be too early to ask:

    Could any nanoscale structures, produced by the previously selected technologies, be useful in enhancing the performance or convenience of high-information-rate fabrication? Do we have the luxury of making this a criterion in selecting which technologies to develop?

    Is there any point in thinking about kilobyte per second fabrication?


  23. 23 Chris Phoenix January 5, 2007 at 11:46 pm

    Philip, I’ve been thinking about your comment on simple models giving rise to rich dynamic behavior. I think that we were actually referring to the same kind of system, though I may have used the wrong language to do it. If I can select values for three analog parameters and get a wide range of structures in five distinct families (e.g. diblock copolymers), then we could say that the variables are a handful and the physical model/computation/behavior is simple. But we could also say that the relationship between inputs and outputs is, maybe not mathematically complex, but certainly non-obvious.

    I suspect that familiarity with this kind of nanoscale system, in which “simple computational models (e.g. the Ising model) can give rise to complicated and rich dynamic behaviour” is what makes some scientists say the molecular manufacturing approach is “too mechanistic” or “too simplistic,” and then start looking for the complexity they assume we’re trying to sweep under the rug.

    But there may be a deeper point. If the system’s behavior is accurately described by analog values and continuous equations, doesn’t that imply that it must involve relatively large numbers of atoms? And if there are far more atoms (more precisely, far more degrees of freedom) than can be directly specified by the precision of the input variables, doesn’t that imply that the output probably isn’t atom-precise? An exception could be if the atoms self-organize according to strong forces like ionic crystallization (e.g. salt crystals), but can that be compatible with rich dynamic behavior in self-assembly?
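    That counting argument can be made concrete with toy numbers (every figure below is my own assumption, chosen only to show the size of the gap):

    ```python
    import math

    # Information the inputs can carry: three analogue control parameters,
    # each settable to roughly one part in a thousand (assumed precision).
    n_params, levels = 3, 1000
    input_bits = n_params * math.log2(levels)  # ~30 bits of control

    # Information needed to specify the output atom by atom: a modest
    # cluster, with an assumed ~10 plausible sites per atom.
    n_atoms, sites_per_atom = 1000, 10
    structure_bits = n_atoms * math.log2(sites_per_atom)  # ~3300 bits

    print(f"inputs select among ~2^{input_bits:.0f} outcomes")
    print(f"atom-precise specification needs ~2^{structure_bits:.0f}")
    ```

    The roughly hundredfold shortfall is what strong self-organising forces (the crystallisation exception above) would have to make up, which is why rich analogue behaviour and atom-precision seem to pull in opposite directions.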

    To relate this to the Ideas Factory’s purpose: Are there classes of analog-behaving system that, while they form interesting nanoscale structures, can be excluded from consideration as goals and/or methods because their analog nature must preclude atom-precision?


  24. 24 Lee Cronin January 7, 2007 at 12:27 pm

    How fantastic – there are lots of ideas here, and amazing prospects. It looks like conceptualising and abstracting the idea of a ‘matter compiler’ and ‘software control’ is happening, which is very exciting. It will mean different things to different people, maybe? Nature is a pretty good – well, amazing – ‘matter compiler’, actually: my 9-month-old son is self-assembling rather well (he is better looking than I am already). I guess the software control comes by running through that wonderful biological polymer, DNA. Can we do better? Maybe, since we do not need to evolve against a natural fitness landscape, and we have some blueprint ideas and know a bit of the laws of physics, chemistry etc.

    One thing I am thinking about ahead of next week is a set of grand challenges that could change the world if realised. If we could make a perfect ‘matter-compiler’ then, for instance, we should be able to make a better version of photosystem II (or a similar biological light harvester) that would fix CO2 and water, or produce electricity at great efficiency. Can we precisely engineer the stiffness of materials down to the atomic level? Maybe that way we could make highly efficient heat pumps. How about making self-cooling materials that work via Brownian motors that rotate only in one direction and have a dipole fixed to the centre of rotation? Line that up with a conducting nanowire and could that be a nano-battery? Nanoscale computers presumably would best be powered by smaller batteries. Could theory help us direct the software so we can really design materials with (semi-)predictable properties? (Would that be a bit like video killing the radio star, though?)

    Can we engineer self-assembling / dynamic and even evolving entities (using a combinatorial library) and also interface this with some top-down directed assembly? Taking top-down and bottom-up assembly paradigms and putting them together is something I often hear about, but we are short on routes. One of the exciting things about next week is that the collection of people coming seems to perfectly bridge that divide.

    In my own work I self-assemble architectures in solution on multiple length scales, from molecules that are 1 nm in size up to mm, using the same building blocks (these range from pure organic to pure inorganic, metal oxide and metallic building blocks). I can change the stickiness of the building blocks to build in error correction, and even deposit them on surfaces, assemble them in 3D, and increasingly I am trying to direct their assembly at the interface and measure their physical properties and spatial organisation. But there are big gaps here; I still suffer from the illusion of control – supramolecular chemistry is great at allowing the construction of large molecular architectures, but they often do not follow my design! There are also many new physical properties that we can introduce by changing the local environment, often by encapsulation. It is possible to end up with a Russian-doll assembly of molecules within molecules – maybe this is one route to produce amazing new optical, magnetic and conducting devices etc.

    The idea of coupling directed self-assembly with pattern formation, and trying to use self-aggregating nanoparticles or clusters to build structures that can go from the nano to the mm, is interesting. For instance, can we interface self-assembling / dynamic and even evolving entities (using a combinatorial library) with some top-down directed assembly? Taking top-down and bottom-up assembly paradigms and putting them together is something I often hear about, but we are short on routes.

    One of the exciting things about next week is that the collection of people coming seems to perfectly bridge that divide. Cannot wait….

  25. 25 Lee Cronin January 7, 2007 at 12:40 pm

    Sorry – duplicated paragraph above. I was referring to the fact that people often discuss merging top-down and bottom-up assembly paradigms; this is something I often think about, but we are short on routes to do this.

  26. 26 Chris Phoenix January 8, 2007 at 4:37 am

    Lee asks whether we can do better than biology. I should hope so! Biology labors under many constraints that we do not need to mimic. It has to solve a lot of problems that engineered molecular machines don’t have to solve. This implies that other approaches can have performance significantly better than biology.

    Of course, there are other problems that anything nanoscale will face. Upper bounds do exist for machine performance. But in many cases, biology should be viewed as setting lower not upper bounds.

    The problem dimensions below are mostly independent, implying a lot of different directions for possible improvement.

    * Biological molecular machines rely on slow thermodynamic relaxation and diffusion processes for efficient operation. An engineered machine, especially if built with strong materials, could transfer stronger forces between machines and could use mechanical rather than entropic springs, balancing energy faster and in a smaller volume.
    * Biological molecular machines work in the drag of water. It’s well established that some enzymes do not need water. It should be possible to engineer chemical reaction mediators that work in vacuum.
    * All biological systems are limited to design spaces that are accessible to incremental blind improvement. Obviously, we don’t need to stick with that limitation.
    * Biological organisms must metabolize complex and varying chemical inputs. We can provide refined chemicals.
    * Organisms maintain internal state in the face of external perturbations via a series of complex feedback loops. Machines may maintain their state by being designed to be insensitive to perturbations. This may waste some matter or some energy, but that is OK for many engineering applications.
    * Organisms must self-repair. Machines don’t have to. If they’re cheap enough to rebuild, they can just be replaced when they break. (This does not preclude recycling.) If they’re numerous and redundant/fault-tolerant and break only rarely, then breakages can be ignored for a useful product lifetime.
    * Organisms must resist predators and parasites. Some machine-structure materials would be susceptible to attack by oxygen, water, or organisms, but in general machines can be protected by simple barriers.
    * Organisms must grow from smaller to larger instances, from the inside out. Machines can be built externally in their finished “adult” form.
    * Organisms must, with few exceptions, maintain the processes of life at all times. Machines, being simpler, can usually be frozen in place and restarted.

  27. 27 Phillip Huggan January 8, 2007 at 5:08 am

    “(P.Moriarty wrote:)
    Techniques such as inelastic tunnelling spectroscopy could be performed in parallel with dynamic atomic force microscopy/spectroscopy in order to characterise the tip structure and chemical nature during positionally-controlled fabrication of nanostructures”

    I thought IETS was only for metal-surface experiments. It wouldn’t be a useful tool for hydrogen-passivated diamond experiments, would it? Maybe if the diamond was doped? I’d envisioned that a mechanosynthesis experiment on diamond would require an AFM enclosed within an electron microscope (with the unknown chemistry part of the mechanochemistry being the current showstopper).

  28. 28 Chris Phoenix January 11, 2007 at 2:06 am

    Philip, I don’t know if you’ve seen this news that may be relevant to the themes you raised above: connecting the nano- and macro-worlds, and plasmon-photon interactions.

    Put a buckytube in a coax sheath made of aluminum oxide and Cr or Al, and leave the tube sticking out the end, and it’ll gather light and transmit it up to 50 microns.

    They talk about building an array of these things that gathers light at one end and puts electricity out the other, and speculate on quantum computer applications.


  29. 29 Philip Moriarty January 11, 2007 at 8:47 am


    The key idea is that IETS is used for *tip* characterisation. I have in mind a diamond and metal surface which are placed alongside each other: the metal surface is used as an appropriate substrate for IETS characterisation. Yes, this involves registration problems but, in principle, these are surmountable via closed loop technology.

    IETS is painfully slow, however, and – prompted by discussions with other members of the sand pit – I’m thinking about other “pure force” mechanisms for chemical identification.

    Chris, I hadn’t seen the paper to which you refer. I’ll try to find time to read it this evening (sandpit discussions/debate permitting!).

    Bye for now,


  30. 30 Phillip Huggan January 11, 2007 at 9:24 pm

    “(P.Moriarty wrote:)
    The key idea is that IETS is used for *tip* characterisation. I have in mind a diamond and metal surface which are placed alongside each other: the metal surface is used as an appropriate substrate for IETS characterisation. Yes, this involves registration problems but, in principle, these are surmountable via closed loop technology.”

    If possible, would it be easier to place an AFM within a TEM? The idea being to attempt to functionalize (somehow) an AFM tip, then lay the AFM tip down and use the TEM to see if the moiety is indeed on the AFM tip. This would obviate the need for an STM for this portion of the experiment.

  31. 31 Chris Phoenix January 12, 2007 at 12:27 am

    Here’s a crazy idea – could you characterize the tip by using the old field emission microscope technology? Not the modern atom probe that strips atoms away, but the one where the tip just emits electrons? You might at least get positional information that way.

    Hm, now that I think of it, you might use an atom probe to shape the tip of the tip into a nice smooth hemisphere. That’s if the tip is conductive, of course.

    Is it possible that one of the electron microscope detector technologies could be adapted for this purpose? Wikipedia has one sentence that indicates this may be the case. An integrated electron microscope / tip characterizer in a single tool might be useful.


