This book provides a collection of methods that until now have been scattered through the literature of the last 25 years. It reviews elements of sampling theory and discusses how modern notions of chaos and nonlinear dynamics explain the workings of molecular dynamics. The book presents the most important concepts of molecular and microsimulation techniques, and enables readers to improve their skills in developing simulation programs by providing physical problems and sample simulation programs for them to use.
Provides tools to develop skills in writing simulation programs; includes sample simulation programs for the reader to use; an appendix explains the Fortran and C languages in simple terms to allow the non-expert to use them.

Molecular Dynamic Simulation for Engineering and Materials explains the fundamentals of MD simulation and explores recent developments in advanced modeling approaches based on the MD method.
The improvements in efficiency and accuracy delivered by this new research are explained to help readers apply them to a wide range of tasks. The rich research experience of the authors in molecular dynamic simulation will ensure that readers are provided with both an in-depth understanding of this method and clear technical guidance. This book details the necessary numerical methods, the theoretical background and foundations, and the techniques involved in creating computer particle models, including the linked-cell method, the SPME method, tree codes, and multipole techniques.
It illustrates modeling, discretization, algorithms and their parallel implementation with MPI on computer systems with distributed memory. The text offers step-by-step explanations of numerical simulation, providing illustrative code examples.
With the description of the algorithms and the presentation of the results of various simulations from fields such as material science, nanotechnology, biochemistry and astrophysics, the reader of this book will learn how to write programs capable of running successful experiments for molecular dynamics.
This work presents modern implementations of relevant molecular dynamics algorithms using ls1 mardyn, a simulation program for engineering applications.
The work describes distributed and shared-memory parallelization on these platforms, including load balancing, with a particular focus on the efficient implementation of the compute kernels. The text also discusses the software architecture of the resulting code.

The book explains, in detail, how to use each of these packages, also providing real-world examples that show when each should be used. The latter two are open-source codes which can be used for modeling at no cost.
Several case studies show how each software package is used to predict various properties of nanocomposites, including metal-matrix, polymer-matrix and ceramic-matrix based nanocomposites. Properties explored include mechanical, thermal, optical and electrical properties. This is the first book to explore methodologies for using Materials Studio, Lammps and Gromacs in a single volume.
It will be beneficial for students, researchers and scientists working in the field of molecular dynamics simulation. Gives a detailed explanation of the basic commands and modules of Materials Studio, Lammps and Gromacs; shows how Materials Studio, Lammps and Gromacs predict mechanical, thermal, electrical and optical properties of nanocomposites; uses case studies to show which software should be used to solve a variety of nanoscale modeling problems.

Addressing the need of chemistry, biology and engineering students to understand and perform their own molecular simulations, the author introduces the fundamentals of molecular modeling for a broad, practice-oriented audience and presents versatile practical applications.
The book presents a thorough overview of the underlying concepts. Understanding Molecular Simulation: From Algorithms to Applications explains the physics behind the "recipes" of molecular simulation for materials science. Computer simulators are continuously confronted with questions concerning the choice of a particular technique for a given application.
A wide variety of tools exist, so the choice of technique requires a good understanding of the basic principles. More importantly, such understanding may greatly improve the efficiency of a simulation program. The implementation of simulation methods is illustrated in pseudocode, and their practical use is demonstrated in the case studies included in the text. Since the first edition only five years ago, the simulation world has changed significantly: current techniques have matured and new ones have appeared.
Several new examples have been added since the first edition to illustrate recent applications, and questions are included in this new edition. No prior knowledge of computer simulation is assumed.

The latest developments in quantum and classical molecular dynamics, related techniques, and their applications to several fields of science and engineering. Molecular simulations include a broad range of methodologies such as Monte Carlo, Brownian dynamics, lattice dynamics, and molecular dynamics (MD).
This book is a unique reference work in the area of atomic-scale simulation of glasses. For the first time, a highly selected panel of about 20 researchers provides, in a single book, their views, methodologies and applications on the use of molecular dynamics as a tool to describe glassy materials. The book covers a wide range of systems covering "traditional" network glasses, such as chalcogenides and oxides, as well as glasses for applications in the area of phase change materials.
Initial state

Preparation of the initial state uses the following three functions, one for the atomic coordinates, the others for the velocities and accelerations. The accelerations are simply initialized to zero. A design decision concerns whether data should be shared through global variables or passed between functions; here we have picked the former. The alternative is to make extensive use of argument lists, perhaps using structures to organize the data transferred between functions; while offering a means of regulating access to variables, it makes the program longer and more tedious to read, so we forgo the practice.
Having settled this issue, what are the global variables used by the program? The variable mol is actually a pointer to a one-dimensional array that is allocated dynamically at the start of the run and sized according to the value of nMol. The vector region contains the edge lengths of the simulation region. The other quantities, as well as a list of those variables supplied as input to the program, will be covered by the remaining functions below. All dynamic array allocations are carried out by the function AllocArrays.
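As a concrete illustration, the following is a minimal sketch of this data layout for a two-dimensional system, together with the velocity and acceleration initialization mentioned above (the coordinate initialization is covered with the lattice generators later). The names Mol, mol, region, nMol and AllocArrays follow the text; the struct members, the PI constant and the simple random-direction velocity assignment are assumptions made for illustration, not the book's own listing.

#include <stdlib.h>
#include <math.h>

#define PI 3.14159265358979

typedef struct { double x, y; } VecR;
typedef struct { VecR r, rv, ra; } Mol;  /* coordinates, velocities, accelerations */

Mol *mol;      /* one-dimensional atom array, allocated at the start of the run */
VecR region;   /* edge lengths of the simulation region */
int nMol;      /* number of atoms, derived from the input data */

void AllocArrays (void)
{
  mol = (Mol *) malloc (nMol * sizeof (Mol));
}

void InitVels (double velMag)   /* random directions, fixed speed, zero net momentum */
{
  VecR vSum = {0., 0.};
  for (int n = 0; n < nMol; n ++) {
    double ang = 2. * PI * rand () / (double) RAND_MAX;
    mol[n].rv.x = velMag * cos (ang);
    mol[n].rv.y = velMag * sin (ang);
    vSum.x += mol[n].rv.x;
    vSum.y += mol[n].rv.y;
  }
  for (int n = 0; n < nMol; n ++) {   /* remove center-of-mass drift */
    mol[n].rv.x -= vSum.x / nMol;
    mol[n].rv.y -= vSum.y / nMol;
  }
}

void InitAccels (void)   /* accelerations are simply zeroed */
{
  for (int n = 0; n < nMol; n ++) mol[n].ra.x = mol[n].ra.y = 0.;
}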
Measurements

In this introductory case study the emphasis is on demonstrating a minimal working program. Measurements of the basic thermodynamic properties of the system are handled by the following functions.
The quantity vSum is used to accumulate the total velocity (or momentum, since all atoms have unit mass) of the system; the fact that this should remain exactly zero serves as a simple (but only partial) check on the correctness of the calculation. The input-processing function also checks that all requested data items have been provided. The function PrintNameList, also called by main, outputs an annotated copy of the input data.
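A hedged sketch of the momentum check, assuming the data layout sketched earlier (the function name and reporting details are chosen for illustration):

#include <stdio.h>

void EvalProps (void)
{
  VecR vSum = {0., 0.};
  double vvSum = 0.;
  for (int n = 0; n < nMol; n ++) {
    vSum.x += mol[n].rv.x;
    vSum.y += mol[n].rv.y;
    vvSum += mol[n].rv.x * mol[n].rv.x + mol[n].rv.y * mol[n].rv.y;
  }
  /* vSum should stay (essentially) zero; vvSum is twice the kinetic energy */
  printf ("total momentum: %.3e %.3e  KE/atom: %.5f\n",
     vSum.x, vSum.y, 0.5 * vvSum / nMol);
}

With unit-mass atoms, vSum doubles as the total momentum, so any drift away from zero signals an error in the force or integration code.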
More detailed results based on more extensive computations will appear later.

Conservation laws

The most obvious test that the computation must pass is that of momentum and energy conservation. One quantity that is not conserved is angular momentum; a conservation law requires the system to be invariant under some change, such as translation, but, because of the periodic boundaries, the rotational invariance needed for angular momentum conservation is not applicable.
Programming errors can sometimes, but not always, be detected by the violation of a conservation law; when this occurs the effect can be gradual, intermittent, or catastrophic, depending on the cause of the error.

(Table: edited output from a short MD run.)

Fortunately, relaxation is generally quite rapid, but one must always beware of those situations where this is not true.
Equilibration can be accelerated by starting the simulation at a higher temperature and later cooling by rescaling the velocities (this is similar, but not identical, to using a larger timestep initially); too high a temperature will, however, lead to numerical instability. The normalized histogram represents a discrete approximation to f(v).
Other kinds of analysis in subsequent case studies will involve functions that operate in a similar manner. In order to use this function, storage for the histogram array must be allocated, and a number of additional variables declared and assigned values.
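A minimal sketch of such a histogram computation, assuming the earlier data layout; the array histVel and its companions are the kind of additional variables referred to above, and all names and sizes here are illustrative assumptions:

#define SIZE_HIST_VEL 50
double histVel[SIZE_HIST_VEL];
double rangeVel = 4.;   /* histogram covers speeds in [0, rangeVel) */

void EvalVelDist (void)
{
  for (int j = 0; j < SIZE_HIST_VEL; j ++) histVel[j] = 0.;
  for (int n = 0; n < nMol; n ++) {
    double v = sqrt (mol[n].rv.x * mol[n].rv.x + mol[n].rv.y * mol[n].rv.y);
    int j = (int) (v / rangeVel * SIZE_HIST_VEL);
    if (j < SIZE_HIST_VEL) ++ histVel[j];   /* out-of-range speeds are ignored */
  }
  for (int j = 0; j < SIZE_HIST_VEL; j ++) histVel[j] /= nMol;   /* normalize */
}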
The initial velocities are based on random numbers generated using a default initial seed; to change this value, introduce a new integer variable randSeed, whose default value is arbitrarily set to 17, and in SetupJob use this value to initialize the random number generator.

(Figure: velocity distribution as a function of time; successively broader graphs correspond to later times.)
The results are shown in the figure above. From results of this kind it is clear that there is no need to assign an initial velocity distribution carefully; the system takes care of this matter on its own (for very small systems there will be deviations from the theoretical distribution [ray91]). Convergence is fastest at high density, while at lower density h(t) does not begin to change until atoms come within interaction range. Finite systems lack the monotonicity suggested by the theorem, but the overall trend is clear and, strictly speaking, the theorem only addresses average quantities.
A computation of this kind was carried out in the early days of MD [ald58]; Boltzmann would presumably have found the results much to his liking.

Thermodynamics

To provide a glimpse of what can be done, we show a few measurements made during some short test runs.

(Figure: time dependence of the Boltzmann H-function (neglecting constants), starting from an ordered state, at several densities.)
The output is summarized in the accompanying table. We will return to these matters in Chapter 3.

(Table: measurements from soft-disk simulations at different densities; total energy, kinetic energy and pressure are shown.)

Clearly, a single trajectory conveys very little information, but if the trajectories of groups of nearby atoms are examined a clear picture emerges of the different behavior in the solid, liquid and gaseous states of matter.
Diffusion is just the mean-square atomic displacement (after allowing for periodic wraparound in the MD case), and is one example of a transport process that MD can examine directly; we will return to this in Chapter 5. Trajectories can be shown on a computer display screen by simply drawing a line between the atomic positions every few timesteps; whenever a periodic boundary is crossed, simply interrupt the trajectory drawing and restart it from the opposite boundary.
Suitable graphics functions are readily added to the program; all that is required, apart from setting up the display functions and arranging for atomic coordinates to be converted to screen coordinates, is the decision as to how frequently the display should be updated.
An example of a simple interactive MD simulation is shown in the accompanying figure. The details involved in writing such programs depend on the computer and software environment; this two-dimensional example is described in [rap97].
(Figure: trajectory plots at different densities.)

(Figure: example of an interactive simulation.)

Visualization plays an essential role in many kinds of problem, and the ability to interact with the simulation while in progress can prove to be of considerable value. Extend the graphics capability of the interactive MD program so that trajectories can be displayed.
Models of this kind are widely used in MD studies of basic many-body behavior, examples of which will be encountered in later chapters. The Lagrangian formulation of classical mechanics provides a general basis for dealing with these more advanced problems, and we begin with a brief summary of the relevant results.
A full treatment of the subject can be found in textbooks on classical mechanics, for example [gol80]. For example, in the case of partially rigid molecules the lengths of interatomic bonds should be kept constant. Such restrictions on the dynamics are called constraints and their effect on the equations of motion is the appearance of extra terms that play the role of internal forces, although these terms have an entirely different origin.
Here we outline the general framework; the details depend on the problem, and examples will be encountered in Chapter 6 and later chapters.

Hamilton equations of motion

An alternative formulation of the equations of motion sometimes appears in the MD literature.
Although such a description must in principle be based on quantum mechanics, MD generally adopts a classical point of view, typically representing atoms or molecules as point masses interacting through forces that depend on the separation of these objects. More complex applications are likely to require extended molecular structures, in which case the forces will also depend on relative orientation.
Obviously the structural models and potential functions used in classical MD simulation should not be taken too literally, and the potentials are often referred to as effective potentials in order to clarify their status. While a systematic approach, in which effective potentials are derived from the underlying quantum picture of interactions, is appealing, it is not generally used in practice. Atoms are modeled as point particles interacting through pair potentials. The purpose of this book is not to discuss the design of molecular models; we will make use of existing models and, from a pedagogical viewpoint, the simpler the model the better.
As far as MD is concerned, the complexity of the model has little effect on the nature of the computation, merely on the amount of work involved.

Example potentials

The most familiar pair interaction is the LJ potential, u(r) = 4 epsilon [(sigma/r)^12 - (sigma/r)^6], introduced in Chapter 2. The LJ interaction is characterized by its strongly repulsive core and weakly attractive tail. The discontinuity at rc affects both the apparent energy conservation and the actual atomic motion, with atoms separated by a distance close to rc sometimes moving repeatedly in and out of interaction range.
A slight change to the LJ interaction leads to a potential that is entirely repulsive in nature and very short ranged. A system subject to the original LJ potential can exist in the solid, liquid, or gaseous states; the attractive part of the potential binds the system in the solid and liquid states, and the repulsive part prevents collapse. When the attractive interaction is eliminated, the behavior is determined primarily by density; at high density the soft-sphere system is packed into a crystalline state, but once melted, unlike the LJ case where there is also a liquid-gas phase transition, the liquid and gas states are thermodynamically indistinguishable.
Other functional forms can be used for interactions between atoms, and between small molecules in cases where spherical symmetry applies [mai81]. Some prove more suitable than others for particular problems.
But since the subject is MD, not the construction of potential functions, we will not pursue this subject any further.

(Figure: the different approaches to computing interactions: all pairs; cell subdivision (the cell size exceeds the interaction range); and neighbor lists (the concentric circles show the interaction range and the extra area covered by the neighbor list for one of the atoms).)
Interactions suitable for describing other kinds of molecule will be introduced in subsequent chapters. Although testing whether atoms are separated by less than rc is only a part of the overall interaction computation, the fact that the amount of computation needed grows as O(Nm^2) rules out the method for all but the smallest values of Nm.
Two techniques for reducing this growth rate to a more acceptable O(Nm) level, often used in tandem, will be discussed here; to within a numerical factor this clearly represents the lower bound for the amount of work required to process all Nm atoms. A schematic summary of the methods appears in the figure above.

Cell subdivision

Cell subdivision [sch73, hoc74] provides a means of organizing the information about atom positions into a form that avoids most of the unnecessary work and reduces the computational effort to the O(Nm) level.
Imagine that the simulation region is divided into a lattice of small cells, and that the cell edges all exceed rc in length. Then, if atoms are assigned to cells on the basis of their current positions, it is obvious that interactions are only possible between atoms that are either in the same cell or in immediately adjacent cells; if neither of these conditions is met, then the atoms must be at least rc apart. The wraparound effect due to periodic boundaries is readily incorporated into this scheme.
Clearly, the region size must be at least 4rc for the method to be useful, but this requirement is usually met. The program for the cell-based force calculation involves a form of data organization known as a linked list [knu68]. Rather than accessing data sequentially, the linked list associates a pointer pn with each data item xn , the purpose of which is to provide a nonsequential path through the data.
This kind of data organization will reappear in other contexts in subsequent chapters. In the cell algorithm, linked lists are used to associate atoms with the cells in which they reside at any given instant; a separate list is required for each cell. The reason for using linked lists is to economize on storage.
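A hedged sketch of this arrangement for the two-dimensional layout used earlier; the single array cellList, with the list heads stored in its final nCell elements, follows the organization described in the text, while the function name and the details are assumptions:

typedef struct { int x, y; } VecI;

int *cellList;   /* length nMol + nCell; allocated in AllocArrays */
VecI cells;      /* number of cells in each direction */

void BuildCellList (void)
{
  int nCell = cells.x * cells.y;
  for (int c = 0; c < nCell; c ++) cellList[nMol + c] = -1;   /* mark cells empty */
  for (int n = 0; n < nMol; n ++) {
    /* cell coordinates from the region-centered position */
    int cx = (int) ((mol[n].r.x + 0.5 * region.x) / region.x * cells.x);
    int cy = (int) ((mol[n].r.y + 0.5 * region.y) / region.y * cells.y);
    if (cx == cells.x) -- cx;           /* guard against edge rounding */
    if (cy == cells.y) -- cy;
    int c = nMol + cy * cells.x + cx;   /* linearized cell index, offset by nMol */
    cellList[n] = cellList[c];          /* prepend atom n to its cell's list */
    cellList[c] = n;
  }
}

Keeping the list heads and the per-atom successor links in one integer array of length nMol + nCell is what limits the storage to roughly two elements per atom when the cell and atom counts are comparable.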
The basic organization involves scanning cell pairs. Part of the code is devoted to the special handling of cells adjacent to one or more of the periodic boundaries, and there is an implicit assumption that there are at least three cells in each direction (otherwise the same neighbor would be accessed on both sides).
If there are roughly as many cells as there are atoms, this array requires close to two elements per atom. If there is any risk that a component of cc might lie outside the cell array (an indication that something is seriously wrong with the computation, since it implies an atom has escaped from the system; this is more likely to happen when using hard walls rather than periodic boundaries), a check of this kind is easily inserted.
Several new constants and vector operations appear in this listing. The macro VLinear(p, s) converts the vector cell coordinates p into the linear index of a cell in an array of size s. In view of the fact that the majority of the work in this function is carried out inside a highly nested set of loops, it hardly comes as a surprise to learn that there are different ways of organizing the computation.
The method used here is to scan over cells, then over offsets, and only then over cell contents; alternatives include scanning over relative cell offsets and then over cells, or scanning the atoms in the outermost loop, with inner loops that scan the neighboring cells of the cell containing the atom together with their contents.
Since the cells are often used as part of the neighbor-list method, this issue is usually not critical. The fact that the list contains atom pairs that lie outside the interaction range ensures that, over this sequence of timesteps, no new interacting pairs can appear that are not already listed. This criterion, which is equivalent to examining atomic displacements, errs slightly on the conservative side, since it combines contributions from different atoms, but it guarantees that no interacting pairs are ever missed, because atoms cannot close the gap between rn and rc during the elapsed time interval; a more precise test could be based on the accumulated motions of individual atoms but, because the refreshing is already infrequent, the saving would be minimal.
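A sketch of this conservative refresh test; rNebrShell denotes the shell thickness rn - rc, matching the variable mentioned later in the text, while the function name and the rest are illustrative assumptions:

double dispHi = 0.;    /* accumulated bound on any single atom's displacement */
double rNebrShell;     /* rn - rc, the thickness of the safety shell */
int nebrNow = 1;       /* forces a list build on the first step */

void CheckNebrRefresh (double deltaT)
{
  double vvMax = 0.;
  for (int n = 0; n < nMol; n ++) {
    double vv = mol[n].rv.x * mol[n].rv.x + mol[n].rv.y * mol[n].rv.y;
    if (vv > vvMax) vvMax = vv;
  }
  dispHi += sqrt (vvMax) * deltaT;
  /* two atoms approaching head-on close the gap at twice this rate */
  if (dispHi > 0.5 * rNebrShell) nebrNow = 1;
}

The factor of one half covers the worst case of two atoms approaching head-on, which is exactly the sense in which the test combines contributions from different atoms.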
In either instance, the cell method is used to build the neighbor list, with the cell size now being determined by the distance rn rather than rc (if the system is too small, relative to rn, for the cell method to work, then the more costly all-pairs approach must be used to build the list). The construction function is very similar to the cell version of ComputeForces.
The difference is that, instead of computing the interactions, potentially interacting pairs are merely recorded in the neighbor list for subsequent processing (each pair is recorded as two consecutive values). If these replica atoms are included in the force computation, the wraparound checks are no longer required, but the cell array will have to be enlarged to include the region that the replica atoms can occupy.
The multiple-timestep method is available for medium-range forces that extend beyond several mean atomic spacings, excluding long-range forces of the Coulomb type, which require special treatment (Chapter 13) [str78]. Pairs of interacting neighbors are divided into groups on the basis of their separation, and the contributions of more distant groups are evaluated at less frequent intervals. While the method has proved useful, it is essential to verify that this approximation does not adversely affect the behavior being studied.
As an alternative to direct evaluation, interactions can be computed using a simple table lookup, possibly accompanied by interpolation for additional accuracy (there are also situations where the potential only exists in tabular form). Which method is faster depends on the complexity of the potential function.
The value of tabulation can depend on the computer hardware in ways that are not obvious. So, for extensive simulations, some empirical investigation of this subject should prove worthwhile. If the potential function also depends on molecular orientation, the lookup table becomes multidimensional, and storage limitations may prevent construction of a table with adequate resolution.
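A minimal sketch of a lookup table with linear interpolation, under the assumption that the potential is tabulated on a grid that is uniform in the squared separation, which avoids a square root per pair in the force loop; all names here are illustrative:

#define N_TAB 1000
double uTab[N_TAB + 1];
double rrMin, rrCut;   /* table covers squared separations in [rrMin, rrCut] */

void BuildTable (double (*u) (double))   /* sample u(r) on a grid uniform in r^2 */
{
  for (int i = 0; i <= N_TAB; i ++) {
    double rr = rrMin + (rrCut - rrMin) * i / N_TAB;
    uTab[i] = u (sqrt (rr));
  }
}

double LookupU (double rr)   /* linear interpolation between table entries */
{
  double s = (rr - rrMin) / (rrCut - rrMin) * N_TAB;
  int i = (int) s;
  if (i >= N_TAB) return uTab[N_TAB];
  double f = s - i;   /* fractional position within the bin */
  return (1. - f) * uTab[i] + f * uTab[i + 1];
}

The lower limit rrMin must be positive for potentials that diverge at zero separation, such as LJ.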
Only two classes of method have achieved widespread use: one is a low-order leapfrog technique, the other involves a predictor-corrector approach; both appear in various different but equivalent forms. Obtaining a high degree of accuracy in the trajectories is neither a realistic nor a practical goal.
As we will see below, the sharply repulsive potentials result in trajectories for which even the most minute numerical errors grow exponentially with time, rapidly overwhelming the power-law type of local error introduced by any of the numerical integrators.
Leapfrog-type methods

Two very simple numerical schemes that are widely used in MD are known as the leapfrog and Verlet methods [bee76, ber86b]; they are completely equivalent algebraically, and their storage requirements are also minimal. The highly intuitive [fey63] leapfrog method is equally simple to derive. The result is the familiar pair of update rules, v(t + h/2) = v(t - h/2) + h a(t) and r(t + h) = r(t) + h v(t + h/2).
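A minimal sketch of a single leapfrog timestep for the two-dimensional layout used earlier; in practice the update is often split into two half-steps around the force evaluation, and the function name is an assumption:

void LeapfrogStep (double deltaT)
{
  for (int n = 0; n < nMol; n ++) {
    /* v(t + h/2) = v(t - h/2) + h a(t) */
    mol[n].rv.x += deltaT * mol[n].ra.x;
    mol[n].rv.y += deltaT * mol[n].ra.y;
    /* r(t + h) = r(t) + h v(t + h/2) */
    mol[n].r.x += deltaT * mol[n].rv.x;
    mol[n].r.y += deltaT * mol[n].rv.y;
  }
  /* forces are then evaluated at the new coordinates to give a(t + h) */
}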
In the two most familiar forms of the method there is a choice between using the acceleration values at a series of previous timesteps (the multistep Adams approach) or using the higher derivatives of the acceleration at the current timestep (the Nordsieck method). For methods accurate to a given power of h, the two forms can be shown to be algebraically equivalent. The methods are of higher order than leapfrog, but entail a certain amount of extra computation and require storage for the additional variables associated with each atom.
We will focus just on multistep methods, because derivatives of the acceleration (quantities that are not natural participants in Newtonian dynamics) are absent. The advantage of using higher derivatives is that h can easily be changed in the course of the calculation, but this is rarely done in MD.
Variations of this method tried in the past include actually performing this second evaluation (at considerable computational cost) and applying the corrector more than once; neither was found to provide a noticeable improvement in accuracy, and they are not used. In choosing the initial coordinates, the usual method is to position the atoms at the sites of a lattice whose unit cell size is chosen to ensure uniform coverage of the simulation region.
Typical lattices used in three dimensions are the face-centered cubic (FCC) and simple cubic, whereas in two dimensions the square and triangular lattices are used; if the goal is the study of the solid state, then this will dictate the lattice selection. The function that generates an FCC arrangement, with the option of unequal edges, follows; there are four atoms per unit cell, and the system is centered at the origin. Examples of other lattices are shown subsequently. Though not used in the MD programs themselves, a random arrangement is useful during analysis of spatial organization, in order to contrast MD results with those of random point arrays.
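A minimal sketch of such a generator (this is not the book's own listing; the function name and argument conventions are assumptions):

typedef struct { double x, y, z; } VecR3;

/* fills r (length 4 * nx * ny * nz) with FCC sites, centered on the origin */
void InitCoordsFcc (VecR3 *r, int nx, int ny, int nz, VecR3 region)
{
  /* offsets of the four atoms in a unit cell, in units of the cell edges */
  double off[4][3] = {{0.25, 0.25, 0.25}, {0.75, 0.75, 0.25},
                      {0.25, 0.75, 0.75}, {0.75, 0.25, 0.75}};
  int n = 0;
  for (int iz = 0; iz < nz; iz ++)
  for (int iy = 0; iy < ny; iy ++)
  for (int ix = 0; ix < nx; ix ++)
    for (int j = 0; j < 4; j ++) {
      r[n].x = (ix + off[j][0]) / nx * region.x - 0.5 * region.x;
      r[n].y = (iy + off[j][1]) / ny * region.y - 0.5 * region.y;
      r[n].z = (iz + off[j][2]) / nz * region.z - 0.5 * region.z;
      ++ n;
    }
}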
Temperature adjustment

Bringing the system to the required average temperature calls for velocity rescaling. If there is a gradual energy drift due to numerical integration error, further velocity adjustments will be required over the course of the run. The temperature adjustment (or velocity rescaling) function below would therefore be called from SingleStep immediately following the call AccumProps(2) used for summarizing the results. In the series of runs, deltaT varies over a 16:1 range.
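A sketch of such a rescaling function for the two-dimensional layout used earlier, assuming unit mass and kB = 1 so that the mean-square velocity per atom equals d T in d dimensions; the function name is an assumption:

void AdjustTemp (double targetT)
{
  double vvSum = 0.;
  for (int n = 0; n < nMol; n ++)
    vvSum += mol[n].rv.x * mol[n].rv.x + mol[n].rv.y * mol[n].rv.y;
  /* in 2D the target is vvSum = 2 * nMol * targetT */
  double vFac = sqrt (2. * nMol * targetT / vvSum);
  for (int n = 0; n < nMol; n ++) {
    mol[n].rv.x *= vFac;
    mol[n].rv.y *= vFac;
  }
}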
A few brief test runs were made with the actual potential function.

(Table: energy conservation for the leapfrog (LF) and predictor-corrector (PC) methods.)

The results are shown in the accompanying figure; to emphasize the accuracy of the method, from the energy point of view, most of these results are repeated in the table.

(Figure: convergence of mean kinetic energy from different initial states.)

In most cases, once the system has equilibrated there will be no memory of the details of the initial state, but problems can arise in cases of very slow convergence, or where there are different metastable states in which the system can become trapped.
As a brief demonstration we show how the kinetic energy varies with time for simulations that differ only in the choice of the initial random velocities.
The runs use different values of randSeed (such as 17, 18 and 19), and the results are shown in the accompanying figure.

(Table: timing comparisons of the all-pairs, cell and neighbor-list methods for a range of Nm values.)

Other equilibrium and steady-state properties are similarly well-behaved. When it comes to the trajectories themselves it is an entirely different story: trajectories display an exponential sensitivity to even the most minute perturbation.
This extreme sensitivity is the microscopic basis for molecular chaos that plays such an important role in statistical mechanics; though the equations of motion are time reversible, this fact turns out to be unobservable in most practical situations [orb67, lev93]. To actually measure this behavior we consider a system of 2Nm atoms in which odd- and even-numbered atoms form independent but identical subsystems that are assigned the same initial coordinates and velocities.
Although the study uses soft atoms and a leapfrog method subject to numerical integration error, this error is not the dominant factor, because similar results can also be obtained in hard-sphere studies that are free from integration error. The atoms are divided into two entirely separate subsystems, and nMol is doubled.
(Figure: trajectory divergence for different initial velocity perturbations; the vertical scale is logarithmic.)

Explore the use of PC methods involving derivatives of the acceleration [bee76, ber86b]. How is the computation speed affected by the way the data are organized?
Investigate the use of multiple-timestep methods. The treatment of properties associated with the motion of atoms — the dynamical behavior — forms the subject of Chapter 5. While basic MD simulation methods — formulating and solving the equations of motion — fall into a comparatively limited number of categories, a wide range of techniques is used to analyze the results. Much of this processing will be carried out while the simulation is in progress, but some kinds of analysis are best done subsequently, using data saved in the course of the simulation run; the choice of approach is determined by the amount of work and data involved, as well as the need for active user participation in the analysis.
Averages corresponding to thermodynamic quantities in homogeneous systems at equilibrium are the easiest measurements to make. Statistical mechanics relates such MD averages to their thermodynamic counterparts, and the ergodic hypothesis can be invoked to justify equating trajectory averages with ensemble-based thermodynamic properties [mcq76].
If the system is spatially inhomogeneous, all quantities, from the simplest thermodynamic values onward, must be based on localized measurements. If the system is also nonstationary over time, long term time averaging is ruled out because it would obliterate the very effects being studied.
In short, the more complex the phenomenon, the more demanding the measurement task. These topics will be encountered in Chapter 7 and later chapters. While Monte Carlo requires less computation per interacting atom pair, because only the potential energy has to be evaluated, the number of Monte Carlo cycles required to obtain uncorrelated samples (more precisely, a series of samples that are only weakly correlated) may exceed the corresponding number of MD timesteps. There is just one minor difference, in that each conserved momentum component removes one degree of freedom, but this is a negligible effect for systems beyond a minimal size.
Error analysis

The measurement process in MD is very similar to experiment, but the experimentalist often has the advantage of knowing that each estimate is independent. Averages of directly measured quantities may not be the main problem, given an adequate run length, but statistical error estimates are particularly sensitive to correlations between samples.
We assume that the problem has been correctly formulated and implemented; errors in the results can then be categorized as follows. There are errors due to inadequate sampling of phase space where, especially near a thermodynamic phase boundary, or in the case of infrequently occurring events, enough of the relevant behavior fails to be sampled; this is symptomatic of poor experimental design. Only for errors of the last kind is the usual statistical analysis applicable.
Whenever the current number of blocks Mb is odd, the last value is simply discarded before doubling the block size. In less than ideal situations, where the measurement period is too short or barely adequate in length, the plateau will either not appear at all or will be very narrow; in such cases the variance estimate is unreliable. When the method works successfully, the block size at the start of the plateau is an indication of the extent to which the samples are correlated.
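A hedged sketch of this block-doubling procedure; it overwrites the sample array with block means, and all names are assumptions:

#include <stdio.h>
#include <math.h>

void BlockAverage (double *x, int nSamp)   /* destructive: x is reused for block means */
{
  int nb = nSamp;
  int blockSize = 1;
  while (nb >= 2) {
    double s = 0., s2 = 0.;
    for (int i = 0; i < nb; i ++) { s += x[i]; s2 += x[i] * x[i]; }
    double mean = s / nb;
    double varMean = (s2 / nb - mean * mean) / (nb - 1);   /* variance of the mean */
    printf ("block size %6d  sigma(mean) %.6f\n", blockSize, sqrt (varMean));
    if (nb % 2 == 1) -- nb;             /* discard the last value if the count is odd */
    for (int i = 0; i < nb / 2; i ++)   /* merge adjacent blocks */
      x[i] = 0.5 * (x[2 * i] + x[2 * i + 1]);
    nb /= 2;
    blockSize *= 2;
  }
}

The plateau is located by scanning the printed sigma estimates: once the blocks are longer than the correlation time, the estimated error of the mean stops growing with block size.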
(Figure: density and mean-temperature dependence of energy for Lennard-Jones (solid curves) and soft-sphere (dashed) systems, at several densities.)

(Figure: density and temperature dependence of pressure for Lennard-Jones (solid curves) and soft-sphere (dashed) systems, at several densities.)

Equation of state

Pressure is obtained from the virial expression, P V = N T + (1 / d) < sum_{i<j} r_ij . f_ij >, in reduced units (unit mass, kB = 1), where d is the dimensionality. Pressure measurements for the runs described above are shown in the accompanying figure.
A more extensive analysis of this kind would lead to the complete equation of state [nic79]. Thus, even this very rough comparison suggests that size dependence will normally only be an issue if high-quality estimates are required. The MD approach can, of course, provide the answer to any question about structure, such as the nature of spatial correlations between atoms taken three at a time; while this kind of information can prove useful in trying to understand the behavior, comparison is impossible when the corresponding experimental data are unobtainable.
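The pair-histogram accumulation underlying a radial distribution function measurement can be sketched as follows, reusing the 3D vector type from the FCC sketch above; the names and the normalization comment are illustrative assumptions:

#define SIZE_HIST_RDF 200
double histRdf[SIZE_HIST_RDF];
double rangeRdf;   /* must not exceed half the smallest region edge */

void EvalRdf (const VecR3 *r, int nAtom, VecR3 region)
{
  for (int j = 0; j < SIZE_HIST_RDF; j ++) histRdf[j] = 0.;
  for (int i = 0; i < nAtom - 1; i ++)
    for (int k = i + 1; k < nAtom; k ++) {
      double dx = r[i].x - r[k].x, dy = r[i].y - r[k].y, dz = r[i].z - r[k].z;
      /* wrap each component into (-region/2, region/2] */
      dx -= region.x * floor (dx / region.x + 0.5);
      dy -= region.y * floor (dy / region.y + 0.5);
      dz -= region.z * floor (dz / region.z + 0.5);
      double rr = dx * dx + dy * dy + dz * dz;
      if (rr < rangeRdf * rangeRdf)
        ++ histRdf[(int) (sqrt (rr) / rangeRdf * SIZE_HIST_RDF)];
    }
  /* dividing bin j by the ideal-gas pair count in the shell (proportional to
     nAtom * density * 4 pi r_j^2 dr / 2) converts the histogram into g(r) */
}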
This is the three-dimensional version of the computation; the changes for two dimensions are minor. Three density values are used, and the results appear in the accompanying figure.

(Figure: radial distribution function for soft spheres at three densities.)

Long-range order corresponds to the presence of lattice structure and is the quantity underlying x-ray scattering measurements from crystalline materials.

(Figure: time dependence of long-range order in soft-sphere systems that start in an ordered state, at several densities.)
This function is called prior to the call to PrintSummary and the output should include the value of latticeCorr. No averaging over separate measurements is included, but this could easily be added. Any mismatch will introduce imperfections of one kind or another into the ordered state, leading to a reduction in the apparent long-range order.
Four density values are shown in the figure. At the larger densities a moderate to high degree of order persists throughout the observation period (although this is not a guarantee of what might happen over much longer times), whereas at the lowest density the long-range order rapidly vanishes.
(Figure: the Voronoi subdivision for a small, random set of points in two dimensions; the region boundaries are periodic.)

How to describe the spatial organization of what sometimes amounts to little more than a random array of atoms is far from obvious.
The most widely used method is based on a Voronoi subdivision [hsu79, cap81, rap83, med90], in which each atom is surrounded by a convex polyhedron constructed using certain prescribed rules. The outcome of this construction process is the partitioning of space into a set of polyhedra, with all points that are closer to a particular atom than to any other belonging to its polyhedron. Displaying an image of such a partitioning in three dimensions is not particularly informative, but in Figure 4.
Construction of Voronoi polyhedra is an exercise in computational geometry and is by far the longest and most complex of the analysis programs used in these case studies. Periodic boundaries are assumed. A concise summary of the method follows. A large tetrahedron is then constructed as a generous overestimate of the eventual polyhedron; portions of this polyhedron will be removed in the course of the computation until what remains at the end is the Voronoi polyhedron for that atom.
Measurements made on the resulting polyhedron include vertex, edge and face counts, as well as the volume and surface area. Because of the rather complex nature of the algorithm these details can be handled in a variety of ways; this is one possible approach. The method for determining which atoms can contribute to a particular polyhedron assumes that the region has been subdivided into cells.
Here we only scan neighbor cells, but the range could be extended. The main program used in the cluster analysis is the following; the values of runId and rClust must be supplied when the program is run. Percolation theory can be used to explain the changing behavior as rd is varied and also to inspire other kinds of cluster analysis [sta92]. Examine the errors in energy and pressure due to truncating the LJ interaction.
Study the soft-sphere equation of state near the melting transition; what kind of transition occurs? The possible existence of a hexatic phase in two-dimensional liquids — in which there is long-range orientational order although no translational order — has been explored using MD [abr86]; look into the subject.
Examine the cluster distributions for the two-dimensional case from the point of view of percolation theory [hey89]. Most of the analysis is incorporated into the simulation program, but it would of course be possible (though extremely storage intensive) to store the required trajectory data for subsequent processing.
Discrete atoms play no role whatsoever in the continuum picture, but this does not seriously limit the enormous range of practical engineering applications of the continuum approach. Theory then leads to an expression analogous to the Einstein diffusion formula, D = lim_{t -> inf} < |r(t) - r(0)|^2 > / (2 d t), where d is the spatial dimension. In order to compute the quantities involved in the autocorrelation functions, certain additions must be made to the interaction calculations. The MD approach provides equivalent information directly from the trajectories, so that comparison with experiment can be made by carrying out a Fourier analysis of the simulation results; in a sense this amounts to performing the experiment on the model system.
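As a sketch of how such a measurement might be organized, the following computes a velocity autocorrelation function from stored velocities and integrates it to estimate D via the Green-Kubo relation D = (1/d) * integral of <v(0).v(t)> dt; a real implementation would use many partially overlapped measurement sets, as the text describes, and all names here are assumptions:

#define N_ACF 64
double vacf[N_ACF];

/* v[t][n] holds the velocity of atom n at step t; requires nStep >= N_ACF */
void EvalVacf (VecR3 **v, int nStep, int nAtom)
{
  for (int k = 0; k < N_ACF; k ++) {
    double s = 0.;
    int nOrig = nStep - k;   /* number of usable time origins at lag k */
    for (int t = 0; t < nOrig; t ++)
      for (int n = 0; n < nAtom; n ++)
        s += v[t][n].x * v[t + k][n].x + v[t][n].y * v[t + k][n].y
           + v[t][n].z * v[t + k][n].z;
    vacf[k] = s / ((double) nOrig * nAtom);
  }
}

double DiffusionCoeff (double deltaT)   /* Green-Kubo integral, d = 3 */
{
  double s = 0.5 * (vacf[0] + vacf[N_ACF - 1]);   /* trapezoidal rule */
  for (int k = 1; k < N_ACF - 1; k ++) s += vacf[k];
  return s * deltaT / 3.;
}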
Such correlation functions span the entire range of length and time scales, from slow long-wavelength modes at the hydrodynamic limit, right down to the atomic level [boo91]. There have been attempts in the past to establish a functional relationship between G_s and G_d, but these have been unsuccessful because of the wealth of dynamical detail that must be discarded.
The ability to use MD to study the behavior across a range of scales provides the bridge between the atomistic and macroscopic worlds. While the local density conveys information about the distribution of atoms, it is equally possible to examine local variations in the motion of the atoms; note that wraparound effects can occur here. This quantity can be expressed either as the Fourier transform of a discretized form of the van Hove correlation function, or directly in terms of Fourier sums over the atomic coordinates; the latter is clearly preferable since it requires a great deal less work.
Some of the detail is lost when grid averages are used, but this affects results at short distances — typically of the order of the grid spacing — while details of the more interesting long-range behavior are preserved. Because of the considerable amount of data that must be collected, we begin with some remarks on how the computations and data are organized.
The calculation starts by evaluating the Fourier sums for the density and the three current components — one longitudinal and two transverse — along each of the three k directions. The number of different k values used is denoted by nFunCorr. The contributions to all the overlapped correlation function measurements in progress at a given instant are evaluated by the following function. A cubic region shape and leapfrog integrator are assumed. The current correlation functions are treated in the same way.
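A hedged sketch of the density Fourier sum rho(k, t) = sum_n exp(i k . r_n) for one k direction (the current components are accumulated in the same loop, weighted by the velocity components); the name nFunCorr follows the text, everything else is an assumption:

#include <complex.h>

#define N_FUN_CORR 8   /* number of k values, nFunCorr in the text */

void EvalDensityModes (const VecR3 *r, int nAtom, double regionX,
   double complex rhoK[N_FUN_CORR])
{
  for (int m = 0; m < N_FUN_CORR; m ++) rhoK[m] = 0.;
  for (int n = 0; n < nAtom; n ++)
    for (int m = 0; m < N_FUN_CORR; m ++) {
      /* wavevectors commensurate with the periodic region */
      double k = 2. * 3.14159265358979 * (m + 1) / regionX;
      rhoK[m] += cexp (I * k * r[n].x);
    }
}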
Cutoffs for limiting the output are built into the program but could also be added to the command line if desired.

(Figure: soft-sphere velocity autocorrelation functions at several densities.)
Leapfrog integration is used. A single set of results, based on sets of partially overlapped measurements, is produced during the run. The negative correlations observed at higher densities, both for LJ and hard-sphere systems, were one of the important early revelations of MD [rah64, ald67]. The system used is the same as above. Estimates for D based on the relations given earlier are shown in the accompanying figure.
(Figure: velocity, pressure-tensor and heat-current autocorrelation functions; the vertical scale has been expanded to show the noise present in the results.)
Odorant binding proteins (OBPs) and olfactory receptors (ORs) are among the main molecular protagonists of the perception of smell. Using molecular dynamics simulations, we provide some clues to identify the residues responsible for the affinity of OBPs and ORs for various odorants. The affinity predictions are compared with biophysical experiments where available. This book introduces recent theoretical developments concerning the dynamic behaviour of fracture.
Readers learn how the recent development of molecular dynamics and other state-of-the-art methods can help to solve the important problem of fracture from the atomic level. Molecular Dynamics is a two-volume compendium of the ever-growing applications of molecular dynamics simulations to solve a wider range of scientific and engineering challenges.
The contents illustrate the rapid progress of molecular dynamics simulations in many fields of science and technology, such as nanotechnology, energy research, and biology, due to advances in new dynamics theories and the extraordinary power of today's computers. This second book begins with an introduction of molecular dynamics simulations to macromolecules and then illustrates the computer experiments using molecular dynamics simulations in the studies of synthetic and biological macromolecules, plasmas, and nanomachines.
Coverage of this book includes: complex formation and dynamics of polymers; dynamics of lipid bilayers, peptides, DNA, RNA, and proteins; complex liquids and plasmas; dynamics of molecules on surfaces; and nanofluidics and nanomachines.
Ab initio molecular dynamics revolutionized the field of realistic computer simulation of complex molecular systems and processes, including chemical reactions, by unifying molecular dynamics and electronic structure theory. This book provides the first coherent presentation of this rapidly growing field, covering a vast range of methods and their applications, from basic theory to advanced methods.
This fascinating text for graduate students and researchers contains systematic derivations of various ab initio molecular dynamics techniques to enable readers to understand and assess the merits and drawbacks of commonly used methods. It also discusses the special features of the widely used Car-Parrinello approach, correcting various misconceptions currently found in research literature. The book contains pseudo-code and program layout for typical plane wave electronic structure codes, allowing newcomers to the field to understand commonly used program packages and enabling developers to improve and add new features in their code.
Computational molecular and materials modeling has emerged to deliver solid technological impacts in the chemical, pharmaceutical, and materials industries.
It is not the all-predictive science fiction that discouraged early adopters in the past. Rather, it is proving a valuable aid to designing and developing new products and processes. People, not computers, create; these tools give them the qualitative relations and quantitative properties they need to make creative decisions.
With detailed analysis and examples from around the world, Applying Molecular and Materials Modeling describes the science, applications, and infrastructures that have proven successful.