Self-assembly of natural biological nanostructures. Molecular-like self-assembly of colloidal particles

Since the word “nanotechnology” gained worldwide popularity, stories about “nanorobots” taking over the Universe have become widespread. Science fiction writers compete to invent the most terrible scenario of a global catastrophe, filmmakers shoot multi-billion-dollar blockbusters, and terrible rumors periodically leak into the blogosphere that “a three-headed mutant puppy was born in China as a result of a secret nanogenic experiment.” What is fact and what is fiction in futuristic “horror stories”? What do scientists actually do when they create and study nanostructures? How do they do it?

Eric Drexler's Nightmare 1

The idea of “gray dust” (in some versions, “gray goo”) was put forward by Eric Drexler, one of the ideologues of the modern nanotechnology boom. Its roots lie in the entirely benign human desire to shrink devices and improve the properties of materials. And here nanotechnology promises a breakthrough on a scale no smaller than the advent of metallurgy, plastics or composite materials.

An important point: the advantages of nanodevices and nanomaterials will become noticeable on a global economic scale only when nanostructured products reach macro-sizes. For example, if you use nano-sized additives, modifiers and the like during the construction of a building, you can improve the characteristics of the structure by a few percent, at most severalfold. But if the entire building is assembled from nanostructured building blocks, its characteristics can surpass those of existing buildings by tens or hundreds of times.

But the smaller a certain part or device becomes, the more effort must be spent on its manufacture, control and handling. That is, the smaller the part, the more expensive it is. What to do?

An elegant solution to the problem is to “teach” nanoscale devices to assemble themselves without human intervention. Each of us has seen patterns form on frosty glass; this is an example of self-organization at the molecular level. Molecules of water vapor from the air are deposited onto a crystalline seed that appears spontaneously on the glass. Deposition occurs unevenly: the distribution of surface energy over the seed crystal favors the incorporation of new molecules predominantly at certain places and, as a result, growth of the structure in strictly defined directions. In the end we can observe with the naked eye, that is, at the macro level, the appearance of intricate two-dimensional patterns on the glass.
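The frost-pattern growth described above can be mimicked by a classic toy model, diffusion-limited aggregation (DLA): random walkers wander until they touch a growing cluster and stick, and branched, frost-like structures emerge from purely local rules. A minimal sketch follows; the grid size, walker count and sticking rule are illustrative choices, not a physical model of ice.

```python
import random

def dla(grid_size=61, walkers=300, seed_val=0):
    # Diffusion-limited aggregation: random walkers stick to a central seed,
    # producing branched, frost-like patterns from purely local rules.
    random.seed(seed_val)
    c = grid_size // 2
    stuck = {(c, c)}                       # the "seed crystal"
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(walkers):
        # launch each walker from a random cell on the grid edge
        x, y = random.choice(
            [(random.randrange(grid_size), 0),
             (random.randrange(grid_size), grid_size - 1),
             (0, random.randrange(grid_size)),
             (grid_size - 1, random.randrange(grid_size))])
        while True:
            if any((x + dx, y + dy) in stuck for dx, dy in nbrs):
                stuck.add((x, y))          # walker joins the growing cluster
                break
            dx, dy = random.choice(nbrs)   # one step of Brownian motion
            x = min(max(x + dx, 0), grid_size - 1)
            y = min(max(y + dy, 0), grid_size - 1)
    return stuck
```

Plotting the returned set of cells shows the characteristic branched aggregate; the pattern is different for every random seed, just as no two frost patterns coincide.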

Eric Drexler predicted that the main path for the development of nanotechnology would be the creation and improvement of approaches to molecular and atomic self-assembly. The logical development of this direction should be micro- and nanoconveyor production, in which self-assembly technologies will be used by nanoscale machines to recreate themselves and similar nanodevices. It is precisely these (and only such) factories, capable of operating without human intervention non-stop 24 hours a day and 365 days a year, that will be able to create tens, hundreds and thousands of tons of relatively inexpensive, but at the same time, nanostructured materials, parts and devices. And only in this case will it be possible to realize all those fantastic possibilities that the ability to control the structure of materials and the properties of parts with atomic precision promises.

Here lies the nightmare that Drexler called “gray goo.” What happens if, at one of these autonomous nanofactories, something in the control mechanism breaks down, and the nanomachines stop making useful nanoparts and instead begin simply to recreate themselves? An artificial creature will appear, so tiny that it will be very difficult to notice and destroy. If it manages to get into the environment, it will spread easily, and the only thing it will do is consume all the planet's material to produce nanostructured “dust” or “goo” (goo is scarier, which is why this version of the scenario has become more common). Gradually, all living and inanimate nature will be gobbled up and processed into nanogoo.

Molecular self-assembly, living and nonliving

First of all, we need to separate artificial technologies from living nature, because in living nature it is precisely processes of molecular self-assembly that underlie the self-reproduction of macrosystems. The ability of protein molecules to bind specifically and selectively to other molecules is a fundamental feature underlying all processes in a living cell. The human genome encodes tens of thousands of proteins. This is enough to provide the cell with building materials, to let it extract energy from high-energy compounds, to let it exchange a complex system of signals with other cells of the body, and so on.

This means that examples of nanofactories that can exist autonomously and reproduce themselves based on molecular self-assembly are all living beings.

We know enough to say that it is molecular self-assembly that underlies the growth and development of any living organism. But we still know too little to create a similar system from artificial materials and for it to work.

Scientists today know thousands of molecular interaction reactions based on the principle of self-assembly. Many of them have been modeled and studied in detail. But in a living cell, many millions of intermolecular reactions occur, and all of them are carried out in a directional manner.

Today it is impossible to imagine that someone could create an artificial analogue of a living cell, or even of a virus, the simplest system capable of self-reproduction. It is theoretically possible, but it is a prospect many decades of research away.

What can be done using self-assembly of molecules now?

It is possible to create single nanoparts and nanodevices. They will not be able to reproduce themselves and will be very expensive to produce, but their presence in a macro-device can fundamentally improve the technical characteristics and consumer properties. We are talking about MEMS and NEMS technologies (Micro- and NanoElectro-Mechanical Systems). For example, complexes on the NanoFab 100 platform make it possible, under high vacuum conditions, to transfer crystalline silicon wafers from one technological module to another and successively create a variety of nano-sized structures on silicon. In this case, technologies based on self-assembly play an important role, for example, the growth of epitaxial monoatomic layers. They allow the formation of nanostructured workpieces - very regular, with precisely specified properties.

However, to manufacture a finished part or device, an integrated approach turns out to be fundamentally important: having a perfect workpiece, we need to be able to apply precisely targeted, nanolocal treatments to it. And here the question arises: how to see, how to measure?

So, self-assembly of molecules is one way to create nanostructures. But in order for the created structures to be used in real products, you need to have tools that allow you to see nano-sized objects, measure their physical and chemical properties, and generally control the process of their creation and integration into MEMS and NEMS products. What are these tools?

Of course, the most informative and promising method for analyzing nanostructures today is scanning probe microscopy (SPM). The essence of the approach is that a very sharp needle, a probe, is brought up to the surface of the sample; the probe is then moved from point to point (scanning), and the force of interaction between the needle and the sample surface is measured at each point. Probe needles can be very different; accordingly, the nature of the interaction forces differs, which means that different characteristics of a nanoobject can be studied.

For example, if the probe is conductive, it can be used to measure electrical properties at each point on the surface (electrical conductivity, capacitance, charge, etc.). Using a magnetically coated probe, you can determine the magnetization of a sample and build a map of the distribution and orientation of magnetic domains in the surface layer of magnetic materials. A diamond probe can measure the hardness of a material with nanometer resolution. In total, there are more than 40 scanning probe microscopy techniques. The only fundamental limitation of SPM is that all information is collected exclusively from the surface.

The second important tool for studying nanostructures is electron microscopy (EM). Powerful transmission electron microscopes today provide sub-angstrom spatial resolution. The limitation of this approach is that electrons interact strongly with matter and therefore cannot penetrate deeply. The most suitable samples for transmission microscopy are thin, solid structures, such as foils, two-dimensional crystals and the like.

Scanning electron microscopy, like SPM, allows you to obtain a visual image of the sample surface. There are two fundamental differences.

First, the resulting image has only two quantitatively measurable coordinates (X and Y). The height of the observed structures can be estimated indirectly, but it cannot be measured quantitatively (SPM gives an exact height value at each point). Second, electrons, unlike a solid-state probe, do penetrate into matter, so in EM it is possible to obtain information about the near-surface layer. The electron beam used to scan the object has very high energy; colliding with the atoms of the material, electrons are reflected and scattered, and also cause serious changes in the electron shells of the atoms. Analysis of the energies of these electrons, as well as of the X-ray quanta emitted from the region where the beam interacts with the material, provides information about the elemental composition of the near-surface layer of the object.

X-ray radiation can provide very useful information about the internal structure of matter on the nanometer scale. On relatively large inhomogeneities in the structure of an object (nanometers and tens of nanometers) X-rays are deflected, and this phenomenon underlies small-angle X-ray scattering (SAXS). SAXS makes it possible to study the size and distribution of nanoparticles in suspensions and in polymer nanocomposites. The same method helps to detect and study nano-sized cavities, for example in solid foams, and is also very useful in the study of thin films. If the inhomogeneities are comparable to the wavelength of the X-rays (angstroms, the characteristic sizes of atoms and atomic lattices in crystals), then wide-angle X-ray scattering (WAXS) is analyzed. This method provides information about defects in the crystal lattice and allows one to reconstruct the spatial organization of biological or synthetic macromolecules.
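One standard way (not detailed in the text) in which SAXS extracts a particle size is the Guinier approximation: at small scattering vectors, I(q) ≈ I0·exp(−(q·Rg)²/3), so ln I is linear in q² and the radius of gyration Rg falls out of a straight-line fit. A sketch on synthetic data; the 5 nm particle size and the q-range are invented for illustration.

```python
import math

def guinier_radius(q, intensity):
    # Guinier analysis: at small q, I(q) ~ I0 * exp(-(q*Rg)^2 / 3),
    # so ln(I) is linear in q^2 with slope -Rg^2 / 3.
    x = [qi * qi for qi in q]
    y = [math.log(i) for i in intensity]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))   # least-squares slope
    return math.sqrt(-3 * slope)  # Rg, in the same length units as 1/q

# Synthetic small-angle data for particles with Rg = 5 nm (q in 1/nm)
rg_true = 5.0
q = [0.01 * k for k in range(1, 20)]
I = [100.0 * math.exp(-(qi * rg_true) ** 2 / 3) for qi in q]
print(round(guinier_radius(q, I), 3))  # → 5.0
```

Real measurements are noisy and the fit is valid only for q·Rg ≲ 1.3, but the principle is exactly this linearization.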

The best source of X-rays for such studies is the synchrotron, but the modern development of compact X-ray diffractometry systems gives scientists efficient benchtop instruments for many SAXS and WAXS applications.

Tools of Russian leadership

In recent years it has become fashionable to scold the domestic industry and gossip about how bad everything is in our science. However, there are examples of domestic research and production companies creating equipment for the most advanced research, advanced even by the standards of world science.

Thus, the company NT-MDT has been operating in Zelenograd near Moscow for 20 years. Research instruments for nanotechnology are developed and mass-produced here and are readily purchased by leading research centers around the world.

The key to success turned out to be an integrated approach to the study of nanostructures.

“At the end of last year we equipped a unique nanocenter at the Kurchatov Institute,” says Victor Bykov, CEO and founder of NT-MDT. “The basis of the center was a complex on the NanoFab 100 platform, integrated with a synchrotron radiation output channel. NanoFab 100 is a set of technological modules for the formation, processing and analysis of nano-sized structures, assembled into a single automated system.”

Now researchers have the opportunity to grow a certain structure using one of the methods of molecular self-assembly (for example, in a chamber for the growth of epitaxial structures), modify it using nanolocal exposure methods (for example, give it the required shape with a focused ion beam, with simultaneous observation through an electron microscope column), and then study its characteristics in the scanning probe microscopy module.

Together with a source of synchrotron radiation, you get a complete set of everything a scientist might need. It is important that the sample is always under conditions of high or ultra-high vacuum, and special technical solutions ensure its precise repositioning during transportation from module to module - each new tool ends up in exactly the same place on the sample that was worked with in the previous module.

The principle of integrating various methodological approaches into a single system also works well when creating relatively compact research instruments. For example, the joint Belarusian-Japanese scientific enterprise Solar TII operates in Minsk.

Minsk is not Russia, but the scientific school is still the same, Soviet. At one time the Japanese became interested in our technologies and developments in the field of Raman spectroscopy. With their investment, inexpensive Raman spectrometers with excellent characteristics were developed, highly competitive on the world market.

Today, the combination of Minsk spectrometers and Zelenograd scanning probe microscopes has made it possible to create a completely unique research system. This device uses the effects of nonlinear optics and, due to this, bypasses fundamental physical limitations, such as the diffraction limit, which limits the spatial resolution of optical spectroscopy methods. The integration of two approaches - Raman spectroscopy and scanning probe microscopy - made it possible to obtain information about the chemical composition of the surface layer with a resolution of up to 50 nanometers!
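The diffraction limit mentioned above can be put in numbers with the Abbe criterion d = λ/(2·NA), the smallest distance a conventional optical system can resolve. A quick estimate follows; the 532 nm wavelength and NA = 0.9 are assumed typical values, not figures from the text.

```python
def abbe_limit(wavelength_nm, numerical_aperture):
    # Abbe diffraction limit: smallest resolvable distance d = lambda / (2*NA)
    return wavelength_nm / (2 * numerical_aperture)

# A green 532 nm laser through a good air objective (NA = 0.9, assumed values)
d = abbe_limit(532, 0.9)
print(round(d))  # → 296 (nm)
```

Roughly 300 nm for visible light, which makes the ~50 nm chemical mapping achieved by combining Raman spectroscopy with a scanning probe a genuine leap past the classical optical limit.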

Another example. At the Moscow Institute of Physical Optics, using patented technology (the so-called “Kumakhov lens”), they learned to focus X-rays into a very narrow spot - so far no one in the world has been able to do this. This made it possible to perform X-ray fluorescence analysis of microscopic areas on the sample. And as a result of the integration of a compact micro-X-ray fluorescence setup with an SPM, another unique device appeared. It allows you to study the surface topography and at the same time provides information about the elemental composition of the selected microsection of the sample.

It can be stated that domestic equipment for nanotechnology research occupies a strong position among the most advanced in the world.

***

It is clear that hordes of nanorobots destroying everything in their path, or, if you like, clouds of harmful “intelligent” nanodust, are nothing more than plots for pseudo-scientific fiction. However, self-assembly of nanoscale structures does exist, and it is an important and extremely promising direction in the development of nanotechnology.

So far, we are at that level of knowledge and skills when each created nanoobject has to be carefully examined, and at the same time it is necessary to control all external conditions so that the resulting product can be used for practical purposes. This is only the very beginning of the journey, and it is all the more pleasant to realize that domestic science and domestic technologies are at the forefront of this movement. We set a good pace at the start and, hopefully, we will be able to maintain the lead in the future.

1 Kim Eric Drexler, born 1955, is an American engineer. Working for NASA from 1975, he was already applying nanotechnological approaches to improving the efficiency of solar panels. In 1986 he founded the Foresight Institute, whose main goal is to study the prospects of expanding human capabilities with the help of nanotechnology, and the associated risks. Since leaving the organization in 2005, Drexler has worked as chief technical consultant for Nanorex, a company producing software used in the design of nanostructures.

Alexander Chulok, Deputy Director of the Foresight Centre of the Institute for Statistical Studies and Economics of Knowledge at the National Research University Higher School of Economics, delivered a lecture at Gorky Central Park of Culture and Leisure on scientific and technological progress and its impact on humanity. In addition to the topic of technology development, Chulok spoke about the emergence of new markets and the death of old ones, as well as the problems associated with these processes.

To the question “how can one guess the future?” I have to disappoint you: it is almost impossible. However, the future can be shaped the way we want it to be. An economy of expectations has arrived, driven largely by fundamentally new needs and new approaches to working with information. Now I will briefly describe the key changes awaiting us over the next 20 years in the main sectors of the economy.

Medicine and healthcare

Health is the first thing that worries a person. In Russia, there is an increasingly noticeable trend towards taking care of one’s physical condition: everyone wants to be fit, beautiful, athletic and, of course, healthy. There is a clear trend towards personalization in healthcare.

Let me illustrate with an example. Medical advances will make it possible to tailor a treatment regimen to a specific person based on the decoding of his genome (a “basic” kit already costs 100 euros, and what will happen when the cost drops tenfold?), on analysis of his environment, of how he lives and what he breathes. In the future, instead of standard medications, individual treatment regimens will be sold, according to which, say, one person needs to get up at 6 am and another to sleep until 9, to be sure to eat strawberries, and under no circumstances to be in the sun before 10 if in Turkey, whereas if it is the Egyptian sun, there are no restrictions.


Alexander Chulok
Photo: hse.ru

A separate question is whether patients will adhere to the required treatment regimen. Most people take their pills not for the prescribed five days but for three, and then quit: it helped, so why keep taking it? In the case of chronic diseases, almost every second patient ignores doctors' orders. Implantable microchips will allow you to forget about medication schedules and will optimize dosages.

I hope we will see the end of the traditional medical check-up: there will be no need to go to the clinic for tests; a special wrist bracelet will monitor the state of the body. There are already mobile devices that record dozens of different biometric indicators.

Are big pharmaceutical companies ready for such changes? Obviously they will have to adapt, as will pharmacies, which in their current form will no longer be needed, because a person will be able to print any drug on a home 3D printer.

The development of 3D printing is linked to the next trend, organ replacement. Last year an elderly woman in Belgium had her jaw replaced with one printed on a 3D printer. The news quickly spread around the world, although in total the operation cost about a million euros. In 20 years, many people will have some kind of printed organ in their bodies. Lungs, kidneys and eyes are already being printed.

Attempts to “fix” what is already “broken” will become a thing of the past; doctors will no longer say, “come see us once you get sick.” The medicine now developing in the USA, Germany and Israel is preventive medicine. Its basic task is to prevent disease, not to treat its consequences.

Human enhancement is another rapidly developing trend in medicine. There is now a convergence of nano-, bio-, info- and cognitive technologies that make it possible to radically strengthen a person and optimize his intellectual and physical characteristics, to a degree beyond the intuition of the most brilliant designer. A few years ago a congress of futurologists in the Swiss city of Lucerne predicted that by 2045 humans would gain immortality, and thoughts would be transmitted directly from person to person, which could lead to the formation of new communities.

Now picture this: a 120-year-old who passes the GTO fitness tests better than a thirty-year-old, runs cross-country, whose brain works five times better and who has ten times more experience. The employer will hire him, not the young man who still has a lot to learn. What are 30-year-old slackers to do? This is a global challenge, and many countries have already begun thinking seriously about it.

There is now a great deal of analytics based on social network data, and some talk of controlling it. But how will you control thoughts? If earlier, in a number of European countries, when you were caught on a recording made by a city camera you could demand to be cut out of it, what will you cut yourself out of now? A satellite feed? An interface? Facebook, or some future Mindbook?

It is obvious that technology will increasingly influence the geopolitical situation: if a country does not “fit in” with the new technological wave and does not provide its citizens with a high quality of life, it risks losing the most active creative layer, pulsating with ideas.

Information and telecommunication systems

We are witnessing the rapid and total penetration of information and telecommunication technologies (ICT). Who would have imagined 70 years ago that we would talk to each other using small boxes? Now almost everyone carries a mobile phone, some a smartphone in the form of a bracelet. The distance between the device and the human body is 2-3 centimeters, and it is shrinking; in the future, devices will simply go under the skin. A little more, and we will have brain-computer interfaces.


Photo: Jordi Boixareu / Zumapress / Global Look

It is difficult now to imagine how virtual and augmented reality will change our thinking. Society will become dispersed: we will listen to a lecture sitting in virtual-reality glasses at the dacha while being present in a virtual room or school. Already, thanks to services such as Coursera, you can watch excellent courses in almost all areas of knowledge. For now you are just listening to webinars, but in the future technologies will appear that let you be inside that virtual room.

For example, the market for augmented reality technologies in surgery is about $5 billion, and this is just one application. There are already prototypes of helmets that allow you to obtain up-to-date and complete information about the object under construction: who created it, how much it costs and what problems it may have. This is a completely different level of analysis, management and control.

The time of fully digital factories is coming. For example, Amazon.com has hardly a single person in its warehouses; robots handle almost all processes. We have only a few rare examples of attempts to create such production facilities, yet it is obvious that the effect of their spread will be what the telegraph was to pigeon post. The world is moving to platform solutions, a completely different production paradigm. Meanwhile, we are still trying to organize a consolidated national discussion about 3D printers while abroad they have long been sold in specialized stores, or debating solar panels while transparent solar batteries have already appeared. The next step is to replace windows with them and move toward a fully energy-independent home. And if such a home is also connected to a smart grid, an intelligent distributed energy system, it will begin to feed energy back into the network, achieving a positive balance. How much do you pay for electricity? Now imagine that this money is paid to you.

Energy

Most likely, the energy sector of the future will be autonomous, smart, environmentally friendly and adaptive to human needs. Many people carry external batteries to charge their mobile devices, but a film has now been developed that allows a phone to be charged in a few minutes. In the future, its battery will last not 3-4 days but a month, two months, even years.

The next trend in energy is autonomy in everything. In America, autonomous-soldier technology has been developing for several decades: equipment is charged simply by walking. Now imagine being in a kind of “energy cocoon,” connected through a special suit or device to the general power-distribution network and able to exchange energy directly. Tesla's recently introduced home storage battery is just the first step. It is very expensive and not yet particularly effective, but colossal breakthroughs are expected in the energy sector.

In classical foresight studies it is customary to examine not only the most likely trends but also events whose probability is minimal yet whose impact, should they occur, would spare no one; such an event is called a “wild card.” One of these unhappy wild cards was the Fukushima accident: few expected it, but the effect was colossal. Many are now analyzing the effects of accessible technologies for extracting methane from gas hydrates and shale, and for producing oil from unconventional fields. But these are all events within the zone of our managerial foresight. What if someone creates efficient, cheap, “green” and at the same time miniature energy sources, for example mini nuclear reactors? Their impact on existing value chains would be enormous.

Transport

Transport technologies will produce an effect of compressed space. Unfortunately, Russian infrastructure still acts as a strong barrier to this trend in our country, though I would dearly like to spend a weekend in Kamchatka or at Lake Baikal. While we ponder road-construction plans, China's high-speed trains are seriously aiming to break the 1,000-kilometer-per-hour barrier using magnetic levitation.

Modern vehicles will, of course, operate not only on the ground but also in the air, and some may go beyond the atmosphere. Many countries are already developing a “space elevator.” The development of tether systems, including the space elevator, will make it possible to change the orbits of spacecraft, move cargo between orbital stations, launch small spacecraft and deliver payloads into orbit. The key barrier here is the cable itself, which must support not so much the elevator as its own weight. A fiber as thick as a hair must hold a ton (currently 500-600 kilograms). Making such a cable requires nanotechnology, and it will create a real revolution in many industries.
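The cable requirement can be sanity-checked with the tensile stress σ = F/A. Assuming “as thick as a hair” means a diameter of about 80 µm (an assumed typical value, not a figure from the text), a one-tonne load implies a strength of roughly 2000 GPa, orders of magnitude above high-strength steel (on the order of 2 GPa), which is why nanomaterials are invoked at all.

```python
import math

def required_strength_gpa(load_kg, fiber_diameter_um, g=9.81):
    # Tensile stress the fiber must survive: sigma = F / A, returned in GPa
    area_m2 = math.pi * (fiber_diameter_um * 1e-6 / 2) ** 2
    return load_kg * g / area_m2 / 1e9

# Assuming "as thick as a hair" means a ~80 micron diameter (hypothetical value)
print(round(required_strength_gpa(1000, 80)))  # → 1952 (GPa)
print(round(required_strength_gpa(550, 80)))   # the current 500-600 kg level
```

Even the currently achieved 500-600 kg corresponds to a stress around 1000 GPa over such a cross-section, so the numbers in the text indeed point far beyond conventional materials.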

Manufacturing, science and education

Now we are trying to introduce additive technologies, i.e. 3D printing, but they in turn will be replaced by molecular self-assembly, an even more advanced technology. At the molecular level it will be possible to assemble anything. Using nanofactories it will be possible to create things and products; in the future, a cow will not be needed to produce milk. These technologies are the killers of 3D printers.


3D printed jaw implant
Photo: uhasselt.be

The key problem in everything “smart” (smart grids, cities, homes, businesses and so on) is modeling, and here our mathematicians come to the rescue; our country definitely has a chance at a leading position in this market. However, we observe an interesting pattern: as soon as a researcher's citation level rises, his affiliation often changes. If his early works indicate that he is from Russia, then in later works, bang, it is already some American university.

China followed this path. The Chinese recruited professors, families and all, on the basis of their citation indices and paid them American-level salaries, telling them: “work, but the rights to the intellectual property you create will belong to the PRC.” Now there are Chinese cars and Chinese planes; everything is made in China.

We spend about $15 billion a year on science, while the United States spends $450 billion. If you look at the distribution of world science, there are very few of us in it. One more point. There is a method called “research front analysis”: if other scientists suddenly begin actively citing researchers working in certain areas, it means a breakthrough is possible precisely in those areas of science. But whereas foreign publications on, say, medicine are closely linked to biochemistry, chemistry, physics and engineering, the publications of Russian scientists show almost no such links. Our main field of science is astronomy.

Self-assembly is the process of formation of an ordered supramolecular structure or medium in which only the components (elements) of the original structure take part, in almost unchanged form, additively constituting, or “assembling,” the resulting complex structure as parts of a whole.

Description

Self-assembly is one of the typical “bottom-up” methods for producing nanostructures (nanomaterials). The main task in implementing it is to influence the parameters of the system and set the properties of the individual particles in such a way that they organize themselves into the desired structure. Self-assembly lies at the core of many processes in which the “instructions” for assembling large objects are “encoded” in the structural features of the individual molecules. Self-assembly should be distinguished from self-organization, which can serve as a mechanism for creating complex “patterns,” processes and structures at a higher hierarchical level of organization than that observed in the original system (see figure). The differences lie in the numerous and multivariate interactions of components at lower levels, which obey their own local laws of interaction, distinct from the collective laws of behavior of the ordering system itself. Self-organization processes are characterized by interaction energies of different scales and by restrictions on the degrees of freedom of the system at several different levels of its organization. Thus, self-assembly is the simpler phenomenon. One should not go to extremes, however, and regard, for example, the growth of a single crystal as the self-assembly of atoms (even though this formally fits the definition); on the other hand, the assembly of larger objects, such as microspheres of equal size forming a dense spherical packing, a so-called photonic crystal (a three-dimensional diffraction grating of microspheres), is a typical example of self-assembly. Self-assembly also includes the formation of monolayers (for example, of thiol molecules on a smooth gold film), the formation of films, and so on.
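The definition above can be illustrated with a lattice-gas sketch: the only “instruction” each particle carries is a short-range attraction to its neighbors, yet the system orders into compact clusters, much as monodisperse microspheres pack into a colloidal crystal. All parameters (grid size, particle number, interaction strength in units of kT) are illustrative choices.

```python
import math
import random

def assemble(n=20, particles=60, steps=20000, eps=3.0, seed=1):
    # Lattice-gas Monte Carlo: the only "instruction" each particle carries
    # is a nearest-neighbour attraction of eps (in units of kT); clustering
    # emerges from that local rule alone.
    random.seed(seed)
    occ = set()
    while len(occ) < particles:                # random initial configuration
        occ.add((random.randrange(n), random.randrange(n)))
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def bonds(site, conf):
        # number of occupied neighbours of `site` (periodic boundaries)
        x, y = site
        return sum(((x + dx) % n, (y + dy) % n) in conf for dx, dy in nbrs)

    for _ in range(steps):                     # Metropolis moves
        old = random.choice(tuple(occ))
        new = (random.randrange(n), random.randrange(n))
        if new in occ:
            continue
        rest = occ - {old}
        d_e = eps * (bonds(old, rest) - bonds(new, rest))  # energy change / kT
        if d_e <= 0 or random.random() < math.exp(-d_e):
            occ = rest | {new}
    return occ

occ = assemble()
avg = sum(
    sum(((x + dx) % 20, (y + dy) % 20) in occ
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    for x, y in occ) / len(occ)
print(len(occ), round(avg, 2))  # particle number conserved; average bond count
                                # well above the ~0.6 of the initial random gas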

Author

  • Gudilin Evgeniy Alekseevich


which promises the ability to control the structure of materials and the properties of parts with atomic precision.

And here lies the koi imar, which Drexler called “gray slime.” What will happen if at one of these autonomous nanofactories something in the technology control mechanism breaks down, and the i-nomachines stop making useful nanoflies, and instead begin to simply recreate themselves? Some artificial creature will appear, so tiny that it will be very difficult to notice and destroy. It can spread easily if it manages to get into the environment, and the only thing it will do is use all the planet's material to produce nanostructured "dust" or "slime" (slime is scarier, which is why this scenario has become more common). Gradually, living and inanimate nature will be “gobbled up” and processed into slime.

Molecular self-assembly, living and nonliving

First of all, we need to separate artificial technologies from living nature, because in living nature it is precisely the processes of molecular self-assembly that underlie the self-reproduction of macrosystems. The ability of protein molecules to bind specifically and selectively to other molecules is a fundamental feature that underlies all processes occurring in a living cell. The human genome encodes tens of thousands of protein structures. This is enough to provide the cell with building materials, to let it extract energy from high-energy compounds, to exchange a complex system of signals with other cells in the body, etc.

This means that examples of nanofactories that can exist autonomously and reproduce themselves based on molecular self-assembly are all living beings.

We know enough to say that it is molecular self-assembly that underlies the growth and development of any living organism. But we still know too little to create a similar system from artificial materials and make it work.

Examples of the formation of surface nanostructures by self-organization:

a) These islands on a silicon wafer have a height of 0.3-0.6 nm. Image and sample courtesy of E.E. Rodyakina, S.S. Kosolobov, D.V. Shcheglov, A.V. Latyshev, Institute of Semiconductor Physics SB RAS, Russia;

b) An array of ordered pyramidal islands on a germanium-silicon substrate. Image obtained by M.V. Shalev, Institute of Physics of Microstructures RAS, Nizhny Novgorod, Russia. Sample provided by A.V. Novikov, N.Yu. Shuleshov, M.V. Shalaev, Institute of Physics of Microstructures RAS

Scientists today know thousands of molecular interaction reactions based on the principle of self-assembly. Many of them have been modeled and studied in detail. But in a living cell, many millions of intermolecular reactions occur, and all of them proceed in a directed manner. Today it is impossible to imagine that anyone could create an artificial analogue of a living cell, or even of a virus, the simplest system capable of self-reproduction. Theoretically this is possible, but it is a prospect requiring many decades of scientific research.

What can be done using self-assembly of molecules now?

It is possible to create single nanostructures and nanodevices. They will not be able to reproduce themselves and will be very expensive to produce, but their presence in a macro-device can fundamentally improve the technical characteristics and consumer properties.

We are talking about MEMS and NEMS (Micro- and NanoElectro-Mechanical Systems) technologies. For example, complexes on the NanoFab 100 platform make it possible, under high-vacuum conditions, to transfer crystalline silicon wafers from one technological module to another and successively create a variety of nano-sized structures on silicon. Here, technologies based on self-assembly play an important role, for example, the growth of epitaxial monoatomic layers. They allow the formation of highly regular nanostructured workpieces with precisely specified properties.

This is what NEMS elements manufactured today look like

However, for the manufacture of a final part or device, an integrated approach turns out to be fundamentally important: having a perfect workpiece, we need to be able to target nanolocal influences on it. And here the question arises:

How to see and measure?

So, self-assembly of molecules is one way to create nanostructures. But for the created structures to be used in real products, you need tools that allow you to see nano-sized objects, measure their physical and chemical properties, and generally control the process of their creation and embedding.

In recent years, the concept of "self-organization" has been widely used to describe and explain similar phenomena in physical, chemical, biological and even economic and sociological systems. Seemingly contrary to generally accepted thermodynamic laws, order arises in a distributed dynamic system consisting of simple elements: complex structures, complex behavior, or complex spatio-temporal phenomena appear. The properties of the emerging structures are fundamentally different from the properties of the initial elements of the system. And most surprisingly, self-organization in the system appears spontaneously from a homogeneous state.

Self-organization is the phenomenon of spontaneous formation of structure in systems of different physical natures. The spontaneous emergence of structure means the appearance of an ordered state in an initially random distribution of system components without visible external influence. In general, ordered states can be: a spatially nonuniform distribution of the material components of the system that persists over time; sustained oscillations of the concentrations of system components between two or more values; or more complex forms of ordered collective behavior of components. The formation of structure is equally inherent in physical devices such as lasers, in chemical reaction media, in biological tissues and communities of living organisms, in geological and meteorological processes, and in social phenomena of human society. Self-organization mechanisms differ for systems of different natures, but all of them nevertheless share some common structural and dynamic characteristics.

Systems of different natures may correspond to different, often sharply different, levels of complexity of self-organization. This complexity is determined by the nature of the self-organizing system: the complexity of its structure and behavior and the dynamic mechanisms of interaction of its components. Thus, the much more complex behavior of social insects (bees, termites, ants) compared to bacteria and viruses underlies the much more complex processes of self-organization of behavior in their communities. At the same time, specific manifestations of self-organization at relatively simple levels of complexity can act as an integral part of phenomena at a more complex level.

Vivid and consistent examples of self-organization have been discovered among physical systems. The concept of self-organization has also spread to chemical phenomena, where along with it the term “self-assembly” is quite widely used. And in biology, self-organization during the second half of the 20th century became a central concept in describing the dynamics of biological systems, from intracellular processes to the evolution of ecosystems. Thus, self-organization is an interdisciplinary phenomenon and belongs to a field of knowledge that is usually called cybernetics or more narrowly - synergetics.

Any specific process of self-organization is based on some dualism. On the one hand, the self-organization of the system is carried out by specific physical, chemical or some other mechanisms. On the other hand, in order for a system to be self-organizing, it is necessary to fulfill the cybernetic conditions common to all self-organizing systems - the general principles of self-organization.

  • 1. Self-organization processes arise in distributed dynamic systems. A distributed system must be a collection of a large number of individual components, elements that make up the system. These may include individual molecules in chemical reaction-diffusion systems, individuals in a school of fish, individual people in a crowd gathered in a square. These components must interact with each other, i.e. the system must be dynamic, operating on the basis of dynamic mechanisms.
  • 2. An important feature of self-organization processes is that they are carried out in open systems. In a thermodynamically closed system, evolution over time leads to a state of equilibrium with a maximum value of the entropy of the system. And, according to Boltzmann, this is the state with the maximum degree of chaos.
  • 3. The system must exhibit positive and negative feedback. Processes occurring in a dynamic system tend to change the initial relationships between the system components involved in them; this can conditionally be called a change in the output of the system. At the same time, these components are the starting material for the processes occurring in the system, i.e. they are also parameters at the input to the system. If changes at the output of a system affect the input parameters in such a way that the changes at the output are amplified, the feedback is called positive. Negative feedback refers to the situation where dynamic processes in the system maintain a constant output state. In general, dynamic systems with positive and negative feedback are modeled by nonlinear differential equations. This reflects the nonlinear nature of systems capable of self-organization, which is, apparently, the main property determining a system's ability to self-organize.
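As a minimal numerical illustration (not from the original text), the logistic equation dx/dt = r·x·(1 − x/K) combines both kinds of feedback: the growth term r·x is positive feedback, while the saturation factor (1 − x/K) is negative feedback that stabilizes the output near K.

```python
# Toy illustration (assumed parameters): positive feedback (the r*x term)
# and negative feedback (the saturation factor 1 - x/K) in the nonlinear
# logistic equation dx/dt = r*x*(1 - x/K), integrated with simple Euler steps.

def simulate_logistic(x0=0.01, r=1.0, K=1.0, dt=0.01, steps=2000):
    """Return the trajectory of x(t) under dx/dt = r*x*(1 - x/K)."""
    x = x0
    traj = [x]
    for _ in range(steps):
        x += dt * r * x * (1.0 - x / K)
        traj.append(x)
    return traj

traj = simulate_logistic()
# Early on, positive feedback dominates and growth is roughly exponential;
# later, negative feedback stabilizes the output near the fixed point K.
print(f"start={traj[0]:.3f}, middle={traj[len(traj)//2]:.3f}, end={traj[-1]:.4f}")
```

Because the equation is nonlinear, neither feedback alone explains the behavior: their interplay produces the characteristic S-shaped approach to a stable ordered state.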

The concept of "self-assembly" has a chemical origin. It was introduced in 1987 by the French chemist J.-M. Lehn in order to single out, among the numerous phenomena of self-organization, the processes of spontaneous structure formation in systems that are in a state of thermodynamic equilibrium. Indeed, a large number of such structure-formation processes are known under equilibrium, or rather near-equilibrium, conditions. Among them are, for example, "helix-coil" transitions in polymer molecules, the formation of supramolecular structures of amphiphilic molecules (micelles, liposomes, bilayers), etc., up to crystallization. The term "self-assembly" is mainly used in relation to molecular systems. Nevertheless, processes related to self-assembly have also been discovered in other, micrometer-sized formations.

Self-assembly is a process in which a spontaneously ordered whole (aggregate) is formed from individual components or components of a mixture by minimizing their total energy. In nature, the final conformation of a huge number of macromolecules (such as proteins, micelles, liposomes and colloids) is formed through self-assembly through the process of folding. There are many examples of natural self-assembly that occurs spontaneously under the influence of natural forces. Such natural self-assemblies are observed at all levels (from molecular to macromolecular) and in various systems of living matter.
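The idea that self-assembly minimizes total energy can be sketched with a toy Metropolis Monte Carlo model (all parameters here are illustrative assumptions, not from the text): particles on a periodic lattice attract their nearest neighbors, and randomly proposed, thermally accepted moves spontaneously drive the system toward a lower-energy, aggregated state.

```python
# Minimal sketch (assumed parameters): self-assembly as energy minimization.
# Particles on a 2D periodic lattice gain energy -J per occupied neighbor;
# Metropolis moves at temperature kT let clusters form spontaneously.
import math
import random

random.seed(0)
L, N, J, kT = 20, 60, 1.0, 0.3
occ = set()
sites = []
while len(occ) < N:                                 # scatter particles at random
    s = (random.randrange(L), random.randrange(L))
    if s not in occ:
        occ.add(s)
        sites.append(s)

def contacts(site, occupied):
    """Number of occupied nearest-neighbor sites (periodic boundaries)."""
    x, y = site
    return sum(n in occupied for n in
               [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)])

def total_energy(occupied):
    return -J * sum(contacts(s, occupied) for s in occupied) / 2.0  # each bond counted twice

e_start = total_energy(occ)
for _ in range(80000):
    i = random.randrange(N)
    old = sites[i]
    new = (random.randrange(L), random.randrange(L))
    if new in occ:
        continue
    occ.remove(old)
    dE = J * (contacts(old, occ) - contacts(new, occ))  # energy change of the move
    if dE <= 0 or random.random() < math.exp(-dE / kT):
        occ.add(new)                                    # accept: particle relocates
        sites[i] = new
    else:
        occ.add(old)                                    # reject: restore old position
e_end = total_energy(occ)
print(f"energy before: {e_start:.1f}, energy after: {e_end:.1f}")
```

No particle "knows" the target structure; the ordered aggregate emerges only because each reversible local move is biased toward lowering the total energy, which is the essence of the definition above.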

Self-assembly in nanotechnology covers a wide range of concepts and methods of increasing structure complexity, ranging from growing crystals to creating perfect biological organisms. With the help of natural mechanisms during such self-assemblies, it is possible to form and create various nanostructures and then larger systems and materials with the required physicochemical properties. Enlarged heterogeneous aggregates must be suitable for performing various complex functions or creating new forms of materials with unusual properties.

Implementing guided self-assembly of the required artificial nanostructures from molecular “building” blocks is the main task of nanotechnology. Of course, to solve it it is necessary to use information about the intermolecular interaction between molecular “building” blocks, the spatial arrangement of nanostructures, the results of computer molecular modeling, as well as bionics data. By bionics we mean the production, based on the structures and functions of biological substances, of artificial objects that imitate natural systems.

Self-assembly is the fundamental process (or driving force) that led from inanimate matter to the evolution of the biological world. Understanding, inducing and directing self-assembly is the key to a gradual transition to bottom-up nanotechnology. If you know the principles of self-assembly, you can understand the role of the various intermolecular interaction forces that control this self-assembly. To induce and control the required self-assembly process, it is also necessary to be able to model and predict the course of the self-assembly process under various conditions.

The success of self-assembly is determined by five factors:

  • 1. Presence of molecular “building” blocks. The greatest interest for nanotechnology is the self-assembly of molecules of large sizes, in the range from 1 to 100 nm. Moreover, the larger and more well-structured the initial molecular “building” blocks are, the higher the level of technical control over the remaining molecules and their interactions, which greatly facilitates the self-assembly process. Diamondoids, hydrocarbons in which carbon atoms form a tetrahedral spatial lattice, exactly the same as in diamond (adamantanes, diamantanes and triamantanes), can be considered as the most universal and promising categories of molecular “building” blocks.
  • 2. Intermolecular interactions. Typically, the forces that ensure self-assembly are determined by weak non-covalent intermolecular bonds: electrostatic and hydrogen bonds, van der Waals, polar, hydrophobic and hydrophilic interactions. The compatibility of individual parts and the stability of the entire self-assembly complex is ensured by a large number of such weak interactions for the conformation of each molecular region. An example of stable self-assembly built through weak interactions is the structure of proteins.
  • 3. Reversibility of the process. Current and proposed self-assemblies in nanotechnology are controlled but spontaneous processes in which molecular "building" blocks are combined into desired ordered assemblies or complexes. For such a process to be spontaneous, it must be carried out in a reversible way.
  • 4. Ensuring the mobility of molecules. Due to the dynamic nature of the self-assembly process, a fluid medium is required for it to occur. The external environment may include liquids, gases, fluids in a supercritical state, interfaces between crystals and liquids, etc. In all these cases, dynamic exchange processes must occur during self-assembly in the direction of achieving the minimum energy of the system.
  • 5. Process environment. Self-assembly is significantly influenced by the environment. The resulting molecular aggregate is an ordered set of particles, which has the thermodynamically most stable conformation. Self-assembly occurs in liquid and gaseous media (including dense gas-supercritical fluid media), near the interface between a crystal and a fluid, or at the interface between a gas and a liquid.
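A back-of-the-envelope sketch of factor 2 (the 2 kT per-bond energy is an assumed, illustrative value): each weak non-covalent bond is individually reversible, but the effective association constant grows exponentially with the number of cooperating bonds, which is why complexes held together by many weak contacts, like folded proteins, are stable as a whole.

```python
# Illustrative estimate (assumed energies): stability from many weak bonds.
# For n independent weak contacts with free energy e (in units of kT) each,
# the dimensionless equilibrium association constant is K = exp(n * e).
import math

def association_constant(n_bonds, energy_per_bond_kT=2.0):
    """Equilibrium constant for a complex held by n_bonds weak contacts."""
    return math.exp(n_bonds * energy_per_bond_kT)

for n in (1, 5, 10):
    # one ~2 kT bond is easily broken by thermal motion; ten together are not
    print(f"{n:2d} weak bonds -> K = {association_constant(n):.3g}")
```

The exponential scaling is the quantitative reason why each individual contact stays reversible (factor 3) while the assembled complex as a whole is thermodynamically stable.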

At each stage of assembly, at least one component must diffuse freely in the solvent to find its specific, designated binding site after examining all possible positions and orientations. This requires that the component be soluble, have a surface complementary to that of its specific binding site, and that all other surfaces of the workpiece and component be non-complementary to prevent stable binding. These parameters complement the functional requirements: materials and working environments in natural conditions are most suitable for the formation of complex structures using self-assembly. This process has been successfully used in supramolecular chemistry and is also widely used to control molecular crystallization.

Let's consider the self-assembly methodology. There are two types of it, based on processes occurring, first, at the interface between the liquid and solid phases and, second, inside the fluid phase. The fluid phase can be a liquid, a vapor or a dense gas (in a supercritical state).

There are a number of laboratory methods for self-assembly that use a fluid environment as an external environment for the association of molecules and a solid surface as a basis for nucleation and growth.

Fixation of molecules as seeds for assembly on the solid supports used for self-assembly can be achieved by the formation of covalent or non-covalent bonds between the molecule and the surface. The former provide irreversible and therefore stable fixation at all stages of assembly. Fixation by the latter is a reversible process, unstable at first but becoming stable as the self-assembly process develops.

The covalent bond most often used for fixation is the sulfide bond with a noble metal. One such example is the covalent bond between thiol-containing molecules (such as alkanethiol chains or proteins containing cysteine in their structure) and gold. Typical non-covalent bonds used for fixation include the following three types of binding: 1) through antibody affinity; 2) through affinity using the biotin-streptavidin system and its modifications; 3) complexation with fixed metal ions.

Self-assembly of a monolayer is of great practical importance. By definition, a self-assembled monolayer is a one-molecule-thick two-dimensional film that forms covalent bonds with a solid surface. Self-assembly of a monolayer is widely used in nanotechnology, including nanolithography, in modifying adhesive properties and wetting characteristics of surfaces, in the development of chemical and biological sensors, insulating layers in microelectronic circuits and the manufacture of nanodevices, etc.

Various methods for obtaining self-assembled monolayers (SAM) of proteins:

Let's consider various methods of self-assembly of a protein monolayer (Fig. 6.14).

  • 1. Physical adsorption. This technique is based on the adsorption of proteins on solid surfaces such as a carbon electrode, metal oxide or silicon. Adsorbed proteins form a self-assembled monolayer with randomly oriented proteins. Control of orientation characteristics can be improved by modifying the protein and the surface itself, as shown in Fig. 6.14a.
  • 2. Incorporation of polyelectrolytes or conductive polymers, which can serve as a matrix whose surface captures, fixes and adsorbs proteins. This process is shown in Fig. 6.14b.
  • 3. Incorporation of alkanethiol chains into a self-assembled monolayer. This creates a membrane-like monolayer on a noble metal, on which proteins are arranged without any specific orientation. If chains of different lengths are used (creating dents and pits), this will determine a certain topography of the self-assembled monolayer, which, in turn, can orient the proteins (Fig. 6.14c).

Fig. 6.14. Methods of self-assembly of a protein monolayer: physical adsorption of proteins (a); inclusion of proteins in polyelectrolytes or conducting polymers (b); incorporation into an alkanethiol self-assembled monolayer (c); non-oriented attachment to a self-assembled monolayer (d); oriented attachment to a self-assembled monolayer (e); direct site-specific attachment to the gold surface (f).

  • 4. Non-oriented attachment to a self-assembled monolayer. In this case, the chains that form the self-assembled monolayer have functional groups at their ends that react nonspecifically with different regions of the protein. For this reason, the orientation of the proteins is random, as shown in Fig. 6.14d.
  • 5. Oriented attachment to a self-assembled monolayer. The principles of assembly are the same as in the previous case, but here the functional group interacts specifically only with a certain domain or a region of a given domain, and therefore a clearly defined orientation is achieved. For this purpose, the structure of the proteins can be chemically or genetically modified. This self-assembly method is shown in Fig. 6.14e.
  • 6. Direct site-specific attachment to gold. This occurs when cysteine, which has unique properties, binds to the gold surface. In this case, the orientation is completely controlled. This attachment option is shown in Fig. 6.14f.

Strain-guided self-assembly is used in the manufacture and connection of wires and switches. A surface with a lithographically defined relief is coated, under conditions of deformation, with a deposited substance of controlled composition. A functional group associated with the surface functionality can be introduced into the substrate. This self-assembly method can be used, for example, in the creation of semiconductor devices, where the components of the system must be fixed on a solid substrate in order to fully control the progress of the self-assembly process and its completion.


Scheme of DNA-guided assembly

DNA can be used both for site-selective fixation and as a binder, resulting in a lattice framework for the self-assembly of nanostructures. Synthesis of a nucleic acid-protein conjugate using specific interactions between two complementary DNA strands, between antigen and antibody, or between biotin (BIO) and streptavidin (STV) can provide effective mechanisms that determine the direction of attachment of nanostructural modules (Fig. 6.15).

Recent genetic engineering advances in techniques to manipulate DNA sequences fixed to the gold surface, similar to doping, further increase control over the self-assembly process. A similar method can be used in the case of molecules of inorganic substances reaching the size of nanocrystals. DNA can also be used for template-assisted synthesis. An example of such a synthesis is the production of silver nanowires using DNA as a base.

An effective way to discover promising compounds and self-assemblies is to apply advances in dynamic combinatorial chemistry, a bottom-up evolutionary approach to nanotechnology. To develop a dynamic combinatorial chemistry structure, it is necessary to assemble a dynamic combinatorial library of intermediate components that, when a "template" is added, form the desired molecular assembly. In dynamic combinatorial chemistry, the molecular recognition mechanism is an important component, supplemented by knowledge of how "guest-host" complexes are created.

Currently, combinatorial chemistry is used as a method of theoretical research in establishing the structural basis of enzyme function and identifying new enzyme inhibitors. It is believed that with its help it is possible to potentially quickly achieve new self-assemblies in nanotechnology, as well as the discovery of new drugs, supramolecular assemblies and catalysts.

There are two types of combinatorial chemistry: traditional and dynamic (Fig. 6.16). The main difference between the two is that in dynamic chemistry the molecular "building" blocks are held together by weak but reversible non-covalent bonds, while in traditional combinatorial chemistry the interactions are driven primarily by strong and irreversible covalent bonds.


In traditional combinatorial chemistry, a static mixture of aggregates of a fixed composition is formed, and the introduced “template” (ligand) selects the best binder without increasing its content. In dynamic combinatorial chemistry, one starts from a dynamic mixture, in which, after adding a “template,” the composition and distribution of concentrations of blocks changes, and the best binder in relation to the “template” will be the only predominant product.

In combinatorial chemistry, a “template” (or ligand) is considered to be a molecule, ion or macromolecule that reacts with other components and changes the distribution of concentrations of system products during continuously occurring reactions of the formation of the required aggregate, macromolecule or intermediate product. An example of a “template” is a DNA molecule that serves as a model for the synthesis of a macromolecule such as RNA.

Self-assembly in dynamic combinatorial chemistry enables new approaches to molecular assembly. Many interesting improvements have been made in this area in recent years. In particular, the so-called molecular docking has received great development - a procedure for searching for optimal docking sites for small molecules of a ligand (biologically active substance) to a protein macromolecule.

A dynamic combinatorial library (DCL) is a set of intermediates that can be in dynamic equilibrium with the building blocks. To describe the composition of a DCL, the term "chemical set" is usually used; it consists of two or more library components, "building" blocks or reagents. "Building" blocks with properties suitable for the formation of self-assembling objects are selected from the dynamic combinatorial library, and self-assembly is carried out in the presence of a "template".

The components of a DCL interact through the formation of weak non-covalent bonds. In principle, any reversible assemblies can be created from these components. Since all interactions between components are reversible and in equilibrium, a DCL has a dynamic nature and can therefore respond readily to various external factors. In particular, the number of specific DCL aggregates can change with changing thermodynamic conditions and depending on the nature of the "template" added to the system. In the equilibrium state, before the "template" is added, the DCL components have many opportunities to interact with each other through weak non-covalent bonds to form a variety of aggregates. After the "template" is added to the DCL system, the content of intermediate substances is redistributed: only the concentration of those aggregates or assemblies that best match the "template" will increase and become stable.

An increase in the concentration of a certain intermediate product can occur only as a result of a reversible shift of other reactions toward the formation of this product, insofar as equilibrium conditions (achieving minimum energy and maximum entropy) dictate it. Consequently, the system tends to favor the assembly that forms the most stable complex with the "template", while the concentration of unstable assemblies decreases. At the same time, DCL components can interact with each other spontaneously, producing a large number of different aggregates with different shapes and properties.
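This redistribution toward the best binder can be sketched with a toy mass-action model (the binding constants and the fixed-point solver are illustrative assumptions, not from the text): two library members A and B compete reversibly for a "template" T, and at equilibrium almost all of the template ends up in the complex with the stronger binder, amplifying it at the expense of the weaker one.

```python
# Toy mass-action sketch (assumed binding constants) of template-driven
# amplification in a dynamic combinatorial library: reversible equilibria
# TA = K_A*[T][A], TB = K_B*[T][B] with mass conservation for A, B and T.

def dcl_equilibrium(K_A, K_B, A_tot=1.0, B_tot=1.0, T_tot=1.0, iters=5000):
    """Damped fixed-point iteration for the complex concentrations [TA], [TB]."""
    T_free = T_tot
    for _ in range(iters):
        A_free = A_tot / (1.0 + K_A * T_free)
        B_free = B_tot / (1.0 + K_B * T_free)
        # damped update of free template keeps the iteration stable
        T_free = 0.5 * T_free + 0.5 * T_tot / (1.0 + K_A * A_free + K_B * B_free)
    A_free = A_tot / (1.0 + K_A * T_free)
    B_free = B_tot / (1.0 + K_B * T_free)
    return K_A * T_free * A_free, K_B * T_free * B_free

# A binds the template 100x more strongly than B
TA, TB = dcl_equilibrium(K_A=1e4, K_B=1e2)
print(f"[TA] = {TA:.3f}, [TB] = {TB:.3f}")   # the better binder A is amplified
```

Even though both library members start at equal concentration, reversibility lets the system continuously re-equilibrate until the thermodynamically preferred "template" complex dominates, which is exactly the selection mechanism described above.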

There are many factors that influence the effectiveness of a DCL. These include:

1. The nature of the components and "templates" of the DCL. It is necessary that the selected components have suitable functional groups. The greater the diversity of these groups in the components, the greater the variability that can be achieved in developing the system (see Fig. 6.17). In addition, the properties of these groups must be compatible with the properties of the "template".


  • 2. Types of intermolecular interactions in the DCL. In order to use computational chemistry to predict the possibility of the formation of molecular aggregates, the intermolecular interactions between the components and the mechanism of association of a component with the "template" must be known a priori. In a DCL, intermolecular interactions must be non-covalent in nature, which makes the transformations occurring between the DCL components reversible. Such interactions facilitate the rapid establishment of equilibrium, so that all available possibilities for the formation of molecular aggregates can be tested.
  • 3. Thermodynamic conditions. The solubility of the components, "templates" and resulting molecular aggregates in the solvent (the DCL medium) can depend strongly on the equilibrium thermodynamic conditions. To increase the effectiveness of a DCL, the solubility of the components in the medium should not differ significantly from the solubility of the "template". In an aqueous environment, insufficient solubility of the "template" is a problem mainly when a protein is used as the template; a similar problem can also arise with nucleic acids. The formation of an insoluble molecular aggregate shifts the equilibrium toward the formation of this aggregate as a reaction product. The conditions for the reactions occurring in the DCL should be as mild as possible in order to minimize the likelihood of incompatibilities that would otherwise be inevitable in the exchange and recognition processes.
  • 4. Methods of analysis. In a DCL, it must be possible under certain circumstances to stop the ongoing reactions so that the system can be moved from a dynamic to a static state. Terminating the reactions allows the system to be "disconnected" from synthesis after the "template" has been added and the best possible binder has formed. In this case, the system comes to a stationary state and the distribution of molecular aggregates remains constant, permitting analysis.

Sometimes simplification of the self-assembly process can be achieved through analysis at the recognition stage. Molecular recognition is the specific identification through the interaction of one molecule with another.

The peculiarity of recognition in a DCL is the selection of the receptor most suitable for a given "template". This facilitates the development of an evolutionary approach for obtaining and selectively choosing the most suitable receptors, similar to the evolutionary development of nature. Directed evolution of high-affinity ligands for biomolecules, pursued in a newly emerging field of combinatorial chemistry called dynamic variability, can be widely used in self-assembly.

There are two fundamental approaches in the process of molecular recognition: shaping and molding (see Fig. 6.18).

During "shaping," the molecular aggregate created from a library of compounds takes the shape of the cavity bounded by the "template." The free space inside the "template" serves as a cast and as the place where the library components are joined and aggregates are formed. During "molding," the components of a dynamic library are joined directly on the "template."

Illustration of shaping and molding in molecule recognition

A huge number of molecules are used for self-assembly, receptor formation and molecular recognition. Such “recognition” molecules may contain receptors for recognizing acidic carboxyl, peptide, carbohydrate and other groups.

Molecular receptors are conceptually the simplest objects of supramolecular chemistry, although their structure is not always simple. Their function is to “find” the desired substrate among similar ones and selectively, that is, selectively bind it. Selectivity of molecular recognition is achieved if, along with the complementarity of the receptor and the substrate, there is a strong overall binding between them, resulting from multiple interactions of several binding centers. A necessary condition for such interaction is a large contact area between the receptor and the substrate.
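A schematic sketch of such selectivity (the donor/acceptor site patterns below are invented for illustration): if the contact surfaces of a receptor and its substrates are encoded as sequences of hydrogen-bond donor (D) and acceptor (A) centers, the receptor "recognizes" the substrate that is complementary at the largest number of binding centers, i.e. the one with the largest effective contact area.

```python
# Schematic sketch (invented patterns): selective molecular recognition as
# complementarity over multiple binding centers. 'D'/'A' stand for
# hydrogen-bond donor/acceptor sites; a D site pairs only with an A site.

def binding_score(receptor, substrate):
    """Count complementary donor-acceptor contacts between aligned sites."""
    return sum(1 for r, s in zip(receptor, substrate)
               if {r, s} == {"D", "A"})

receptor = "DDAADA"
substrates = {
    "good": "AADDAD",     # complementary at every position
    "partial": "AADDAA",  # one mismatched center
    "poor": "DDAADA",     # identical pattern: no complementary contacts
}
scores = {name: binding_score(receptor, s) for name, s in substrates.items()}
best = max(scores, key=scores.get)
print(scores, "-> receptor selects:", best)
```

Single mismatches only reduce the score slightly, but because overall binding strength grows with the number of simultaneous weak contacts, even a small difference in complementarity is enough for the receptor to bind one substrate selectively over its near-analogues.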

Special methods and reagents are available for constructing cyclic, container, or linear self-assembling structures (or complexes) as receptors and for identifying molecules. For example, a strategy for constructing a cyclic structure is to use triple and complementary hydrogen bonds between the donor-donor-acceptor group of one molecule and the acceptor-acceptor-donor group of another molecule.

Container-based supramolecular chemistry techniques can also be used to design macromolecules susceptible to molecular recognition and the formation of specific bonds. In these methods, the internal surface of the designed molecule (the “host” or receptor) interacts with the surface of the “guest”, or ligand, and the energy of the weak bonds formed between them determines the degree of strength of specific binding and the ability to recognize the molecules.

After the self-assembly of the components is completed, the resulting “host” takes on an individual spatial conformation, often with a void or gap for complete or partial enclosure of the “guest” molecule. Although the control over technology development and recognition specificity in these methods is not as significant as in dynamic combinatorial library systems, in many cases there are fewer restrictions and difficulties in development than in dynamic combinatorial library systems.
