
 

EROPHILE

Eigenvalue representation of observables and probabilities in a high-dimensional Euclidean space.

Published in J. Chem. Phys. 130, 044905 (2009); DOI: 10.1063/1.3063118

 Permalink: http://link.aip.org/link/?JCPSA6/130/044905/1

 

See also the "Dynamical Integretion over a Markovian web " 

See also the "Integretion over a Marcovian web"

 

 

 

EROPHILE: A geometric representation for systems evolving through infrequent transitions in a network of states

 

Authors: Georgios C. Boulougouris1 and Doros N. Theodorou2

Affiliation:

School of Chemical Engineering, National Technical University of Athens, Zografou Campus, GR-15780 Athens, Greece. Fax: +30 210 772 3112

1E-mail: gboul@chemeng.ntua.gr

2E-mail: doros@central.ntua.gr.

 

 

The dynamics of many physical, chemical, and biological systems can be reduced to a succession of infrequent transitions in a network of discrete states representing low-energy regions in configuration space.  This enables accessing long-time dynamics and predicting macroscopic properties.  Here we develop a novel, perfectly general statistical mechanical/geometric formulation that expresses both state probabilities and all observables in the same Euclidean space, spanned by the eigenvectors of the symmetrized time evolution operator.  Our formalism leads to simple expressions for nonequilibrium and equilibrium ensemble averages, variances, and time correlation functions of any observable and allows a rigorous decomposition of the dynamics into relaxation modes.  Applying it to sub-glass segmental relaxation in atactic polystyrene up to times on the order of 10 ms, we probe the molecular mechanism of the γ and δ processes and unequivocally identify the δ process with rotation of a single phenyl group around its stem.

 

The dynamics of many physical, chemical, and biological systems proceeds as a succession of transitions between small regions in their configuration space, which we shall call “states”.   The states constitute “basins” of low energy with respect to the generalized coordinates spanning configuration space, or of low free energy with respect to a set of order parameters providing a coarse-grained description of the system.  Each state contains one or more local minima of the (free) energy.  Transitions between states are infrequent events, in the sense that the mean waiting time for a transition out of a state is long in comparison to the time required for the system to establish a restricted equilibrium distribution among configurations in the state. The entire configuration  space can be tessellated into states.  Representing each state by a point in configuration space and connecting all pairs of states between which a transition is possible, one obtains a graph, or network of states.   Examples of  phenomena that can be modelled as occurring through a succession of transitions in a network of states include diffusion of point defects and impurities in metals and semiconductors;1 of gas molecules in amorphous polymers;2 of bulky hydrocarbons in microporous solids, such as zeolites;3 structural relaxation and plastic deformation in glasses;4 phase transitions in molecular and atomic clusters;5 surface diffusion;6 protein folding;7 chemical reactions;8 and the function of gene regulatory networks for protein expression.9  

The possibility of coarse-graining dynamics into a sequence of transitions in a network of states is of strategic importance for understanding and predicting macroscopic properties from atomic-level structure and interactions.  The longest times that can be simulated with “brute force” molecular dynamics are microseconds, i.e., too short by many orders of magnitude in comparison with the experimental time scales of most phenomena of interest.  A more efficient strategy is to construct the network of states i and compute the rate constants $k_{i\to j}$ for transitions between them from atomic-level information.  The theory of infrequent events provides rigorous techniques for computing $k_{i\to j}$, given the potential energy hypersurface of the system.10,11  Once states and interstate rate constants are known, the system evolution at the state level can be tracked by solution of the master equation over many orders of magnitude longer times than are accessible by molecular dynamics.

In this article we devise a simple geometric interpretation for the time-dependent state probabilities and for any observable in a system evolving through a sequence of infrequent transitions in a network of  states.   This interpretation employs a Euclidean space of dimensionality equal to the number of states, spanned by the eigenvectors of the rate constant matrix, appropriately symmetrized.   The time-dependent expectation values and autocorrelation functions of all observables are obtained from dot products in that space.  Furthermore, the dynamics of any observable is decomposed into contributions from different relaxation modes, allowing a direct connection with spectroscopic measurements. As an example application we study the reorientational dynamics of pendant groups during physical ageing of glassy atactic polystyrene.

 

Consider a system capable of adopting N discrete states, among which it undergoes infrequent transitions.  We will denote by $P_i(t)$ the probability of finding the system in state i ($1 \le i \le N$) at time t.  Collecting all state probabilities, we form an N-dimensional vector $\mathbf{P}(t)$.  The evolution of the system is described by the master equation

$$\frac{dP_i(t)}{dt} = \sum_{j \neq i}\left[k_{j\to i}\,P_j(t) - k_{i\to j}\,P_i(t)\right], \quad \text{or} \quad \frac{d\mathbf{P}(t)}{dt} = \mathbf{K}\cdot\mathbf{P}(t) \qquad (1)$$

 

with given initial distribution among states $\mathbf{P}(0)$.  $k_{i\to j}$ is the transition rate constant from i to j.  This is independent of time, thanks to the time scale separation which makes the transition from i to j an infrequent event.10  The evolution of the system in state space is a Poisson process.12  The $N\times N$ rate constant matrix K is defined by $K_{ij} = k_{j\to i}$ for $i \neq j$ and $K_{ii} = -\sum_{j\neq i} k_{i\to j}$.  At very long times, the system adopts its equilibrium probability distribution among states, $\mathbf{P}(\infty)$.  This is a stationary solution of the master equation (1), thanks to the microscopic reversibility condition on the rate constants:

$$k_{i\to j}\,P_i(\infty) = k_{j\to i}\,P_j(\infty) \qquad (2)$$
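To make the construction of K and the role of equation (2) concrete, the following minimal Python/numpy sketch (not part of the original work; the three-state network, its equilibrium probabilities, and the "exchange" factors are hypothetical) builds a rate constant matrix that satisfies detailed balance by construction and verifies that P(∞) is a stationary solution of equation (1):

import numpy as np

# Hypothetical three-state toy network; all numbers are illustrative.
p_eq = np.array([0.6, 0.3, 0.1])          # assumed equilibrium probabilities P_i(inf)
s = np.array([[0.0, 0.2, 0.05],           # symmetric "exchange" factors, s_ij = s_ji
              [0.2, 0.0, 0.1],
              [0.05, 0.1, 0.0]])

# Choosing k_{i->j} = s_ij / P_i(inf) enforces microscopic reversibility, eq. (2):
# k_{i->j} P_i(inf) = s_ij = k_{j->i} P_j(inf)
k = s / p_eq[:, None]                     # k[i, j] = k_{i->j}

# Rate constant matrix of eq. (1): K_ij = k_{j->i} for i != j, K_ii = -sum_{j!=i} k_{i->j}
K = k.T.copy()
np.fill_diagonal(K, -k.sum(axis=1))

print("stationary, K P(inf) = 0 :", np.allclose(K @ p_eq, 0.0))
print("detailed balance, eq. (2):", np.allclose(k * p_eq[:, None], k.T * p_eq[None, :]))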

Two clarifications are in order here.  First, states are not necessarily single basins around individual minima on the system’s energy hypersurface.  Basins communicating through fast transitions, relative to the in-basin residence time, are lumped into the same state, such that the time scale separation criterion holds for exit from the state.  This lumping into “metabasins” has been discussed in the context of glass-forming liquids.13,14  Secondly, for many systems the full set of states is not known a priori.  This is not prohibitive for invoking the formulation to be developed herein.  In our Dynamical Integration of a Markovian Web (DIMW) approach, we have shown15 how one can start off from a small set of states (those contributing nonzero elements to $\mathbf{P}(0)$) and progressively augment it “on the fly”, using a first passage time criterion.  In DIMW a sequence of master equations is solved on an ever increasing network of states.  In the solutions one can discern plateau regions, over which $\mathbf{P}(t)$ remains practically independent of time; these are indicative of restricted equilibration of the system over a subset of states, before it moves on across the high energy barriers which surround the subset.  The formulation we present is applicable for times up to the end of such a plateau region, with N being the number of states in the subset and $\mathbf{P}(\infty)$ the restricted equilibrium values attained by the probabilities of these states at the plateau.

            In many applications, the master equation (1) is solved numerically, generating stochastic trajectories by Kinetic Monte Carlo simulation.2,3,6  This is convenient, but may lead to unacceptably large statistical error in multistate systems with rugged energy hypersurfaces.15  A preferable alternative is to solve the master equation analytically, via spectral decomposition.

Here we introduce an Eigenvalue Representation of Observables and Probabilities in a HIgh-dimensionaL Euclidean space (EROPHILE) for systems evolving according to master equation (1).  Our starting point is to transform16 the state probability vector $\mathbf{P}(t)$ into a reduced vector $\tilde{\mathbf{P}}(t)$ with elements $\tilde{P}_i(t) = P_i(t)/\sqrt{P_i(\infty)}$.  This satisfies the reduced master equation $d\tilde{\mathbf{P}}(t)/dt = \tilde{\mathbf{K}}\cdot\tilde{\mathbf{P}}(t)$ with $\tilde{K}_{ij} = K_{ij}\sqrt{P_j(\infty)/P_i(\infty)}$.  The matrix $\tilde{\mathbf{K}}$ is symmetric by virtue of detailed balance, equation (2).  It has the same eigenvalues8 as K.  We denote these eigenvalues, all of which are nonpositive, by $\lambda_n$.  We symbolize by $\tilde{\mathbf{u}}_n$ the eigenvector of $\tilde{\mathbf{K}}$ corresponding to eigenvalue $\lambda_n$, $0 \le n \le N-1$.  Note that $\lambda_0 = 0$ and $\tilde{u}_{0,i} = \sqrt{P_i(\infty)}$, corresponding to the equilibrium distribution among states.  The Euclidean norm of $\tilde{\mathbf{u}}_0$ is unity by the normalization of $\mathbf{P}(\infty)$.  The solution of the master equation can be written as:

$$\tilde{\mathbf{P}}(t) = \tilde{\mathbf{u}}_0 + \sum_{n=1}^{N-1}\left[\tilde{\mathbf{u}}_n\cdot\tilde{\mathbf{P}}(0)\right]e^{\lambda_n t}\,\tilde{\mathbf{u}}_n \qquad (3)$$

where the normalization condition $\sum_{i=1}^{N} P_i(0) = 1$ has been used in separating out the equilibrium contribution ($\tilde{\mathbf{u}}_0\cdot\tilde{\mathbf{P}}(0) = 1$).  The eigenvectors $\tilde{\mathbf{u}}_n$ form an orthonormal basis set:  $$\tilde{\mathbf{u}}_m\cdot\tilde{\mathbf{u}}_n = \delta_{mn}, \quad 0 \le m, n \le N-1 \qquad (4)$$

with $\delta_{mn}$ being the Kronecker delta.  They also satisfy the completeness relation $\sum_{n=0}^{N-1}\tilde{u}_{n,i}\,\tilde{u}_{n,j} = \delta_{ij}$.
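A short numerical sketch of the spectral machinery of equations (3) and (4) may be helpful; it is not part of the original work and reuses the hypothetical three-state network introduced in the sketch above (all variable names are ours):

import numpy as np

# Hypothetical three-state toy network (same illustrative numbers as above).
p_eq = np.array([0.6, 0.3, 0.1])
s = np.array([[0.0, 0.2, 0.05], [0.2, 0.0, 0.1], [0.05, 0.1, 0.0]])
k = s / p_eq[:, None]                            # k_{i->j}, detailed balance by construction
K = k.T.copy(); np.fill_diagonal(K, -k.sum(axis=1))

# Symmetrized matrix K~_ij = K_ij sqrt(P_j(inf)/P_i(inf)); symmetric by eq. (2).
sqp = np.sqrt(p_eq)
K_sym = K * sqp[None, :] / sqp[:, None]
assert np.allclose(K_sym, K_sym.T)

lam, U = np.linalg.eigh(K_sym)                   # eigenvalues lam_n <= 0, eigenvectors u~_n
order = np.argsort(lam)[::-1]                    # put lam_0 = 0 (equilibrium mode) first
lam, U = lam[order], U[:, order]
U[:, 0] *= np.sign(U[:, 0] @ sqp)                # fix sign so that u~_0 = sqrt(P(inf))

assert np.allclose(U.T @ U, np.eye(len(p_eq)))   # orthonormality, eq. (4)
assert np.allclose(U[:, 0], sqp)                 # u~_0 has elements sqrt(P_i(inf))

def P_of_t(P0, t):
    """Spectral solution, eq. (3), mapped back to ordinary probabilities."""
    amps = U[:, 1:].T @ (P0 / sqp)               # projections of P~(0) on the relaxation modes
    return sqp * (sqp + U[:, 1:] @ (amps * np.exp(lam[1:] * t)))

P0 = np.array([1.0, 0.0, 0.0])                   # system prepared entirely in state 1
print(P_of_t(P0, 0.0))                           # recovers P(0)
print(P_of_t(P0, 1e3))                           # approaches P(inf)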

We now consider the N-dimensional linear space from which the vector of reduced probabilities $\tilde{\mathbf{P}}(t)$ takes values.  From equations (3) and (4) it is clear that the eigenvectors $\tilde{\mathbf{u}}_n$ of the symmetrized rate constant matrix constitute an orthonormal basis set for this space.  Furthermore, by equation (4), at any time $t \ge 0$ the tip of the reduced probability vector $\tilde{\mathbf{P}}(t)$ lies on an (N-1)-dimensional hyperplane which passes through point $\tilde{\mathbf{u}}_0$ and is spanned by $\tilde{\mathbf{u}}_1, \ldots, \tilde{\mathbf{u}}_{N-1}$.  This hyperplane is normal to the equilibrium eigenvector $\tilde{\mathbf{u}}_0 = \left(\sqrt{P_1(\infty)}, \ldots, \sqrt{P_N(\infty)}\right)$.  It is described by the equation $\tilde{\mathbf{u}}_0\cdot\tilde{\mathbf{P}} = 1$ and intersects each $\tilde{P}_i$-axis at a distance $1/\sqrt{P_i(\infty)}$ from the origin.  Note that this hyperplane, on which $\tilde{\mathbf{P}}(t)$ is bound to evolve, is dictated solely by the equilibrium state probabilities, therefore by the evolution operator K, and is independent of the initial probability distribution among states.  A pictorial representation of the hyperplane for the simple case of a system with three states is given in figure 1a.

            From the solution, equation (3), it is clear that one can prepare the initial conditions in such a way that the probability distribution among states decays to equilibrium along a single exponential.  If one sets

$$\tilde{\mathbf{P}}(0) = \tilde{\mathbf{u}}_0 + a_m\,\tilde{\mathbf{u}}_m, \quad \text{or} \quad P_i(0) = P_i(\infty) + a_m\sqrt{P_i(\infty)}\,\tilde{u}_{m,i} \quad (1 \le i \le N,\ 1 \le m \le N-1) \qquad (5)$$

then, by virtue of the orthonormality of eigenvectors, equation (4), the time-dependent distribution among states will evolve as $\tilde{\mathbf{P}}(t) = \tilde{\mathbf{u}}_0 + a_m e^{\lambda_m t}\,\tilde{\mathbf{u}}_m$, or $P_i(t) = P_i(\infty) + a_m\sqrt{P_i(\infty)}\,\tilde{u}_{m,i}\,e^{\lambda_m t}$.

In this case we will say that the system has been initially perturbed, and decays to equilibrium, along mode m.  Each mode corresponds to a cascade of interstate transitions grouped together in such a way as to bring about a redistribution of the occupancy probabilities of all states along a single eigenvector.
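As a numerical illustration of equation (5) (again a sketch with hypothetical numbers, reusing the toy network of the previous sketches), one can prepare P(0) along a single mode and confirm that the exact solution of the master equation then decays as a single exponential:

import numpy as np
from scipy.linalg import expm

p_eq = np.array([0.6, 0.3, 0.1])
s = np.array([[0.0, 0.2, 0.05], [0.2, 0.0, 0.1], [0.05, 0.1, 0.0]])
k = s / p_eq[:, None]
K = k.T.copy(); np.fill_diagonal(K, -k.sum(axis=1))
sqp = np.sqrt(p_eq)
lam, U = np.linalg.eigh(K * sqp[None, :] / sqp[:, None])
order = np.argsort(lam)[::-1]; lam, U = lam[order], U[:, order]

m, a = 1, 0.05                                   # perturb along mode m with a small amplitude a
P0 = p_eq + a * sqp * U[:, m]                    # eq. (5): P_i(0) = P_i(inf) + a sqrt(P_i(inf)) u~_{m,i}

for t in (0.0, 1.0, 5.0):
    P_exact = expm(K * t) @ P0                                  # direct solution of eq. (1)
    P_mode  = p_eq + a * sqp * U[:, m] * np.exp(lam[m] * t)     # single-exponential decay along mode m
    print(t, np.allclose(P_exact, P_mode))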

In the more general case, any initial probability distribution can be expressed as a linear combination of N-1 perturbation vectors from the equilibrium distribution:  $$\tilde{\mathbf{P}}(0) = \tilde{\mathbf{u}}_0 + \sum_{m=1}^{N-1} a_m\,\tilde{\mathbf{u}}_m, \quad \text{with} \quad a_m = \tilde{\mathbf{u}}_m\cdot\tilde{\mathbf{P}}(0) \qquad (6)$$

The resulting solution will be:  $$\tilde{\mathbf{P}}(t) = \tilde{\mathbf{u}}_0 + \sum_{m=1}^{N-1} a_m\,e^{\lambda_m t}\,\tilde{\mathbf{u}}_m \qquad (7)$$

In $\tilde{\mathbf{P}}$-space, the time-dependent probability distribution of the system will move along a curved line lying on the hyperplane spanned by eigenvectors $\tilde{\mathbf{u}}_1, \ldots, \tilde{\mathbf{u}}_{N-1}$, until it collapses onto the point $\tilde{\mathbf{u}}_0$ (see figure 1a).

Another interesting special set of initial conditions is that wherein the system is initiated in a single state i:  $$P_j(0) = \delta_{ij}, \quad \text{i.e.,} \quad \tilde{P}_j(0) = \frac{\delta_{ij}}{\sqrt{P_i(\infty)}} \qquad (8)$$

This is readily cast in the form of equation (6) with  $$a_m = \frac{\tilde{u}_{m,i}}{\sqrt{P_i(\infty)}} \qquad (9)$$

The time-dependent evolution of state probabilities follows from equation (7).
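In the same spirit, a brief sketch (hypothetical numbers, same toy network as above) of equations (6)-(9) for a system prepared entirely in one state:

import numpy as np

p_eq = np.array([0.6, 0.3, 0.1])
s = np.array([[0.0, 0.2, 0.05], [0.2, 0.0, 0.1], [0.05, 0.1, 0.0]])
k = s / p_eq[:, None]
K = k.T.copy(); np.fill_diagonal(K, -k.sum(axis=1))
sqp = np.sqrt(p_eq)
lam, U = np.linalg.eigh(K * sqp[None, :] / sqp[:, None])
order = np.argsort(lam)[::-1]; lam, U = lam[order], U[:, order]

i = 2                                            # system prepared entirely in state i, eq. (8)
a = U[i, 1:] / sqp[i]                            # mode amplitudes, eq. (9)

P0 = np.zeros(3); P0[i] = 1.0
assert np.allclose(a, U[:, 1:].T @ (P0 / sqp))   # consistent with the general definition, eq. (6)

def P_of_t(t):
    """Eq. (7), mapped back to ordinary probabilities."""
    return p_eq + sqp * (U[:, 1:] @ (a * np.exp(lam[1:] * t)))

print(P_of_t(0.0))                               # all probability in state i
print(P_of_t(50.0))                              # -> P(inf)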

 

We now take a significant step forward, which will enable us to describe any observable as a vector “living” in the same Euclidean space of reduced state probabilities depicted in figure 1. 

Let A be any observable depending on the configuration of the system.  We symbolize by $A_i$ the value of A in state i.  $A_i$ can be computed atomistically as a restricted in-state equilibrium average.  Collecting the values of $A_i$ for all states, we form an N-dimensional observable vector $\tilde{\mathbf{A}}$ with elements

$$\tilde{A}_i = A_i\sqrt{P_i(\infty)} \quad (1 \le i \le N) \qquad (10)$$

The Euclidean norm of $\tilde{\mathbf{A}}$ has the physical meaning of the root mean square value of observable A at equilibrium:

$$\left|\tilde{\mathbf{A}}\right| = \left(\sum_{i=1}^{N} A_i^2\,P_i(\infty)\right)^{1/2} = \sqrt{\langle A^2\rangle_{\mathrm{eq}}} \qquad (11)$$

The time-dependent ensemble average of A, starting from an arbitrary initial distribution among states, is readily obtained using equation (7) as

$$\langle A\rangle(t) = \sum_{i=1}^{N} A_i\,P_i(t) = \tilde{\mathbf{A}}\cdot\tilde{\mathbf{P}}(t) = \tilde{\mathbf{A}}\cdot\tilde{\mathbf{u}}_0 + \sum_{m=1}^{N-1} a_m\,\beta_m\,e^{\lambda_m t} \qquad (12)$$

where we have set $\beta_m = \tilde{\mathbf{A}}\cdot\tilde{\mathbf{u}}_m$.

The equilibrium expectation value of the observable is simply

$$\langle A\rangle_{\mathrm{eq}} = \tilde{\mathbf{A}}\cdot\tilde{\mathbf{u}}_0 = \beta_0 \qquad (13)$$

The equilibrium variance of A is obtained as

$$\langle\delta A^2\rangle_{\mathrm{eq}} = \langle A^2\rangle_{\mathrm{eq}} - \langle A\rangle_{\mathrm{eq}}^2 = \left|\tilde{\mathbf{A}}\right|^2 - \left(\tilde{\mathbf{A}}\cdot\tilde{\mathbf{u}}_0\right)^2 \qquad (14)$$

Expressing $\tilde{\mathbf{A}}$ in the basis of eigenvectors $\tilde{\mathbf{u}}_n$, $\tilde{\mathbf{A}} = \sum_{n=0}^{N-1}\beta_n\,\tilde{\mathbf{u}}_n$, we can further write  $$\langle\delta A^2\rangle_{\mathrm{eq}} = \sum_{n=1}^{N-1}\beta_n^2 \qquad (15)$$

The above properties have an immediate geometric interpretation in our Euclidean $\tilde{\mathbf{P}}$-space (see figure 1b).  We draw $\tilde{\mathbf{A}}$ as a vector in that space, using any consistent units to measure the state values of the observable, $A_i$.  By equations (12)-(15), $\langle A\rangle(t)$, $\langle A\rangle_{\mathrm{eq}}$, and $\langle\delta A^2\rangle_{\mathrm{eq}}$ admit the interpretation shown in figure 1b.
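The following sketch (hypothetical in-state values A_i on the same toy network as above; not from the original work) collects equations (10)-(15) in a few lines of numpy and checks the projections β_n against direct state-space averages:

import numpy as np

p_eq = np.array([0.6, 0.3, 0.1])
s = np.array([[0.0, 0.2, 0.05], [0.2, 0.0, 0.1], [0.05, 0.1, 0.0]])
k = s / p_eq[:, None]
K = k.T.copy(); np.fill_diagonal(K, -k.sum(axis=1))
sqp = np.sqrt(p_eq)
lam, U = np.linalg.eigh(K * sqp[None, :] / sqp[:, None])
order = np.argsort(lam)[::-1]; lam, U = lam[order], U[:, order]
U[:, 0] *= np.sign(U[:, 0] @ sqp)                # u~_0 = sqrt(P(inf))

A_state = np.array([1.0, -0.5, 2.0])             # hypothetical in-state values A_i
A_red = A_state * sqp                            # eq. (10): A~_i = A_i sqrt(P_i(inf))
beta = U.T @ A_red                               # projections beta_n = A~ . u~_n

A_eq  = beta[0]                                  # eq. (13): <A>_eq = A~ . u~_0
var_A = np.sum(beta[1:] ** 2)                    # eq. (15): <dA^2>_eq = sum_n beta_n^2
print(np.isclose(A_eq,  A_state @ p_eq))                          # check against eq. (13)
print(np.isclose(var_A, A_state**2 @ p_eq - A_eq**2))             # check against eqs. (11), (14)

P0 = np.array([1.0, 0.0, 0.0])                   # nonequilibrium start, entirely in state 1
a = U[:, 1:].T @ (P0 / sqp)                      # mode amplitudes, eq. (6)
for t in (0.0, 2.0, 50.0):
    print(t, beta[0] + np.sum(a * beta[1:] * np.exp(lam[1:] * t)))  # <A>(t), eq. (12)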

 

Of particular importance in understanding the dynamics and making contact with spectroscopic measurements are time autocorrelation functions of the form $\langle A(t_0)\,A(t_0+t)\rangle$.  In general, these are averages over the nonequilibrium ensemble representing the system under given $\mathbf{P}(0)$, and therefore depend on both the time origin $t_0$ and the time interval t.  Specializing to the case $t_0 = 0$, we write

$$\langle A(0)\,A(t)\rangle = \sum_{i=1}^{N}\sum_{j=1}^{N} A_i\,P_i(0)\,P(j,t\,|\,i,0)\,A_j \qquad (16)$$

where $P(j,t\,|\,i,0)$ stands for the conditional probability of finding the system in state j provided it was in state i at t = 0, and $\sum_{j=1}^{N} P(j,t\,|\,i,0) = 1$.  From equation (7), $P(j,t\,|\,i,0)$ can be written as  $$P(j,t\,|\,i,0) = P_j(\infty) + \sum_{m=1}^{N-1} a_m^{(i)}\,\sqrt{P_j(\infty)}\,\tilde{u}_{m,j}\,e^{\lambda_m t} \qquad (17)$$

with $a_m^{(i)} = \tilde{u}_{m,i}/\sqrt{P_i(\infty)}$ given by Eq. (9).

Substituting Eq. (17) into Eq. (16) one obtains:

$$\langle A(0)\,A(t)\rangle = \langle A\rangle(0)\,\langle A\rangle_{\mathrm{eq}} + \sum_{m=1}^{N-1}\left[\sum_{i=1}^{N} A_i\,P_i(0)\,\frac{\tilde{u}_{m,i}}{\sqrt{P_i(\infty)}}\right]\beta_m\,e^{\lambda_m t} \qquad (18)$$

If the system starts off at equilibrium, $P_i(0) = P_i(\infty)$, equation (18) simplifies further to

$$\langle A(0)\,A(t)\rangle_{\mathrm{eq}} = \langle A\rangle_{\mathrm{eq}}^2 + \sum_{m=1}^{N-1}\beta_m^2\,e^{\lambda_m t}, \quad \text{or} \quad \langle\delta A(0)\,\delta A(t)\rangle_{\mathrm{eq}} = \sum_{m=1}^{N-1}\beta_m^2\,e^{\lambda_m t} \qquad (19)$$

By equation (19), the equilibrium time autocorrelation function of excursions from the mean admits a simple decomposition into the modes, the amplitude of each mode being the squared projection of the reduced observable vector onto the mode eigenvector.  The dominance of a mode (or a set of modes) in a relaxation measurement can be assessed through the changes in the observable, $\beta_n$, brought about along the mode(s).
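A compact numerical check of equation (19) (hypothetical numbers on the same toy network as in the previous sketches) compares the mode decomposition against a brute-force evaluation of equation (16) with an equilibrium initial distribution:

import numpy as np
from scipy.linalg import expm

p_eq = np.array([0.6, 0.3, 0.1])
s = np.array([[0.0, 0.2, 0.05], [0.2, 0.0, 0.1], [0.05, 0.1, 0.0]])
k = s / p_eq[:, None]
K = k.T.copy(); np.fill_diagonal(K, -k.sum(axis=1))
sqp = np.sqrt(p_eq)
lam, U = np.linalg.eigh(K * sqp[None, :] / sqp[:, None])
order = np.argsort(lam)[::-1]; lam, U = lam[order], U[:, order]
U[:, 0] *= np.sign(U[:, 0] @ sqp)

A_state = np.array([1.0, -0.5, 2.0])             # hypothetical in-state values A_i
beta = U.T @ (A_state * sqp)                     # beta_n = A~ . u~_n

for t in (0.0, 1.0, 5.0):
    acf_modes = np.sum(beta[1:] ** 2 * np.exp(lam[1:] * t))          # eq. (19), excursions from the mean
    cond = expm(K * t)                                               # cond[j, i] = P(j, t | i, 0)
    acf_direct = (A_state * p_eq) @ cond.T @ A_state - (A_state @ p_eq) ** 2   # eq. (16) at equilibrium
    print(t, np.isclose(acf_modes, acf_direct))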

For the special case where the observable is a component of a three-dimensional vector $\mathbf{v}$ characterizing the configuration of the system, equation (18) gives:

$$\langle v_{i_{\mathrm{dim}}}(0)\,v_{i_{\mathrm{dim}}}(t)\rangle = \langle v_{i_{\mathrm{dim}}}\rangle(0)\,\langle v_{i_{\mathrm{dim}}}\rangle_{\mathrm{eq}} + \sum_{m=1}^{N-1}\left[\sum_{i=1}^{N} v_{i,i_{\mathrm{dim}}}\,P_i(0)\,\frac{\tilde{u}_{m,i}}{\sqrt{P_i(\infty)}}\right]\beta_{m,i_{\mathrm{dim}}}\,e^{\lambda_m t} \qquad (20)$$

where $v_{i,i_{\mathrm{dim}}}$ symbolizes component $i_{\mathrm{dim}} \in \{1, 2, 3\}$ of vector $\mathbf{v}$ in state i, $\tilde{v}_{i_{\mathrm{dim}},i} = v_{i,i_{\mathrm{dim}}}\sqrt{P_i(\infty)}$, and $\beta_{m,i_{\mathrm{dim}}} = \tilde{\mathbf{v}}_{i_{\mathrm{dim}}}\cdot\tilde{\mathbf{u}}_m$.

For the autocorrelation function of the entire vector $\mathbf{v}$ at equilibrium, equation (20) gives:

$$\langle\mathbf{v}(0)\cdot\mathbf{v}(t)\rangle_{\mathrm{eq}} = \left|\langle\mathbf{v}\rangle_{\mathrm{eq}}\right|^2 + \sum_{m=1}^{N-1}\left|\boldsymbol{\beta}_m\right|^2 e^{\lambda_m t} \qquad (21)$$

where $\boldsymbol{\beta}_m = \left(\beta_{m,1}, \beta_{m,2}, \beta_{m,3}\right)$.

Subtracting equation (21) evaluated at $t \to \infty$ from equation (21), one obtains:

$$\langle\mathbf{v}(0)\cdot\mathbf{v}(t)\rangle_{\mathrm{eq}} - \left|\langle\mathbf{v}\rangle_{\mathrm{eq}}\right|^2 = \sum_{m=1}^{N-1}\left|\boldsymbol{\beta}_m\right|^2 e^{\lambda_m t} \qquad (22)$$
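A numerical sketch of equations (21)-(22) (hypothetical per-state vectors on the same toy network; not from the original work) sums the scalar mode decomposition over the three Cartesian components:

import numpy as np
from scipy.linalg import expm

p_eq = np.array([0.6, 0.3, 0.1])
s = np.array([[0.0, 0.2, 0.05], [0.2, 0.0, 0.1], [0.05, 0.1, 0.0]])
k = s / p_eq[:, None]
K = k.T.copy(); np.fill_diagonal(K, -k.sum(axis=1))
sqp = np.sqrt(p_eq)
lam, U = np.linalg.eigh(K * sqp[None, :] / sqp[:, None])
order = np.argsort(lam)[::-1]; lam, U = lam[order], U[:, order]
U[:, 0] *= np.sign(U[:, 0] @ sqp)

v = np.array([[1.0, 0.0, 0.0],                  # v[i] = hypothetical vector observable in state i
              [0.0, 1.0, 0.0],
              [0.6, 0.0, 0.8]])
beta = U.T @ (v * sqp[:, None])                 # beta[n, idim] = v~_idim . u~_n

for t in (0.0, 1.0, 5.0):
    acf_modes = np.sum(np.sum(beta[1:] ** 2, axis=1) * np.exp(lam[1:] * t))    # eq. (22)
    cond = expm(K * t)                                                         # cond[j, i] = P(j, t | i, 0)
    acf_direct = np.einsum('id,i,ji,jd->', v, p_eq, cond, v) - np.sum((p_eq @ v) ** 2)
    print(t, np.isclose(acf_modes, acf_direct))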

Equation (18) informs us how the autocorrelation function behaves for a system perturbed from equilibrium.  We express the initial probability distribution in terms of mode contributions via equation (6).  We also express the observable value in each state i as $A_i = \sum_{n=0}^{N-1}\beta_n\,\tilde{u}_{n,i}/\sqrt{P_i(\infty)}$, with $\beta_m$ defined in equation (12).  It is then easy to recast equation (18) as

$$\langle A(0)\,A(t)\rangle = \beta_0^2 + \beta_0\sum_{n=1}^{N-1} a_n\,\beta_n + \sum_{m=1}^{N-1}\left[\beta_m + \sum_{n=1}^{N-1} a_n\sum_{i=1}^{N} A_i\,\tilde{u}_{m,i}\,\tilde{u}_{n,i}\right]\beta_m\,e^{\lambda_m t} \qquad (23)$$

In the limit of an equilibrium initial distribution ($a_n = 0$ for all n), one recovers equation (19) from equation (23), as expected from linear response theory.  In general, the autocorrelation function, equation (23), contains all relaxation modes on which the projection $\beta_m$ of the observable is nonzero, even in cases where the system is initially perturbed away from equilibrium along a single mode.
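Finally, a sketch of the nonequilibrium autocorrelation function (hypothetical numbers on the same toy network; the forms of equations (18) and (23) used here are the ones written above), checked against a brute-force evaluation of equation (16) for an arbitrary initial distribution:

import numpy as np
from scipy.linalg import expm

p_eq = np.array([0.6, 0.3, 0.1])
s = np.array([[0.0, 0.2, 0.05], [0.2, 0.0, 0.1], [0.05, 0.1, 0.0]])
k = s / p_eq[:, None]
K = k.T.copy(); np.fill_diagonal(K, -k.sum(axis=1))
sqp = np.sqrt(p_eq)
lam, U = np.linalg.eigh(K * sqp[None, :] / sqp[:, None])
order = np.argsort(lam)[::-1]; lam, U = lam[order], U[:, order]
U[:, 0] *= np.sign(U[:, 0] @ sqp)

A_state = np.array([1.0, -0.5, 2.0])            # hypothetical in-state values A_i
beta = U.T @ (A_state * sqp)                    # beta_n = A~ . u~_n
P0 = np.array([0.2, 0.1, 0.7])                  # arbitrary nonequilibrium initial distribution
a = U[:, 1:].T @ (P0 / sqp)                     # mode amplitudes, eq. (6)
coeff = U[:, 1:].T @ (A_state * P0 / sqp)       # bracketed sums in eq. (18)
C = U[:, 1:].T @ (A_state[:, None] * U[:, 1:])  # C[m, n] = sum_i A_i u~_{m,i} u~_{n,i}

for t in (0.0, 1.0, 5.0):
    acf_18 = (A_state @ P0) * beta[0] + np.sum(coeff * beta[1:] * np.exp(lam[1:] * t))
    acf_23 = beta[0] ** 2 + beta[0] * (a @ beta[1:]) \
             + np.sum((beta[1:] + C @ a) * beta[1:] * np.exp(lam[1:] * t))
    cond = expm(K * t)                                             # cond[j, i] = P(j, t | i, 0)
    acf_16 = (A_state * P0) @ cond.T @ A_state                     # brute force, eq. (16)
    print(t, np.isclose(acf_18, acf_16), np.isclose(acf_23, acf_16))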

EROPHILE is applicable to any stochastic system evolving according to a master equation of the form (1) and satisfying detailed balance, from economics to medicine.  Although developed here in a statistical mechanical context, the formulation is as general as the underlying Markovian description.

 

We apply EROPHILE to analyze the sub-glass dynamics of a united-atom model of atactic polystyrene (a-PS) that we have recently simulated at 250 K and 1 bar by the DIMW method.15  DIMW enabled us to obtain an analytical representation of the dynamics of the glassy polymer in its multidimensional configuration space, based on solving the master equation recursively in an ever-expanding network of states.  Each state is a basin constructed around an inherent structure.  Rate constants for transitions from one state to another were computed via multidimensional transition state theory on the rugged potential energy landscape of the polymer glass, while saddle points around a given minimum were located by the “dimer” method.17  The evolution of the probability distribution has been tracked15 over more than 10 orders of magnitude on the time scale, reaching $10^{-5}$ s.

Acknowledgements

This work has been supported by the Senate Committee on Basic Research of NTU Athens in the form of a 2006 “Karathéodory” programme.  Simulations have been performed in part on the Mare Nostrum supercomputer at the Barcelona Supercomputing Center - Centro Nacional de Supercomputación, Spain.

 

References

1  Vineyard, G.H. Frequency factors and isotope effects in solid state rate processes. J. Phys. Chem. Solids 3, 121 (1957).

2  Gusev, A.A., Müller-Plathe, F., van Gunsteren, W.F., Suter, U.W. Dynamics of small molecules in bulk polymers. Advan. Polym. Sci. 116, 207-247 (1994).

3 Theodorou, D.N., Snurr, R.Q.,  Bell, A.T. Molecular dynamics and diffusion in microporous materials.  In G. Alberti and T. Bein (Eds.),  Comprehensive Supramolecular Chemistry (Elsevier Science,  Oxford, 1996), p. 507-548.

4  Stillinger, F.H. A topographic view of supercooled liquids and glass formation. Science 267, 1935-1939 (1995).

5  Wales, D.J.,  Doye, J.P.K., Miller, M.A., Mortenson, P.N., Walsh, T.R. Energy landscapes: from clusters to biomolecules.  Advan. Chem. Phys. 115, 1-111 (2000).

6  Voter, A.F.& Doll, J.D. Dynamical corrections to transition-state theory for multistate systems – Surface self-diffusion in the rare-event regime. J. Chem. Phys. 82, 80-92 (1985).

7  Wolynes, P.G. Energy landscapes and solved protein-folding problems, Phil. Trans. Roy. Soc. A 363, 453-464 (2005).

8 Wei, J. &  Prater, C.D. The structure and analysis of complex reaction systems.  In Advances in Catalysis, Vol. 13 (New York, Academic, 1962).  p. 204-390. 

9  Kaznessis, Y.N. Multi-scale models for gene network engineering, Chem. Eng. Sci. 61, 940-953 (2006).

10 Chandler, D. Statistical-mechanics of isomerization dynamics in liquids and transition-state approximation. J. Chem. Phys. 68, 2959 (1978).

11 Chandler, D. Throwing ropes over rough mountain passes, in the dark.  In Berne, B.J., Ciccotti, G., Coker, D.F., Eds. Classical and Quantum Dynamics  in Condensed Phase Simulations (World Scientific, Singapore, 1998).  p. 51-66.

12  van Kampen, N.G. Stochastic Processes in Physics and Chemistry, North-Holland, Amsterdam, 1981.

13  Doliwa, B. & Heuer, A. What does the potential energy landscape tell us about the dynamics of supercooled liquids and glasses?  Phys. Rev. Lett. 91, 235501  (2003).

14  Tsalikis, D., Lempesis, N., Boulougouris, G.C., Theodorou, D.N. On the role of inherent structures in glass-forming materials II. Reconstruction of the mean square displacement by rigorous lifting of the inherent structure dynamics.  J. Phys. Chem. B, in press.

15 Boulougouris, G.C. & Theodorou, D.N. Dynamical Integration of a Markovian Web:  A first passage time approach. J. Chem. Phys. 127, 084903 (2007).

16 Reichl, L.E. A Modern Course in Statistical Physics, 2nd ed. Ch. 5 (Wiley, New York, 1998).

17 Henkelman, G. &  Jónsson, H. A dimer method for finding saddle points on high dimensional potential energy surfaces using only first derivatives. J. Chem. Phys. 111, 7010-7022 (1999).

18  Yano, O. & Wada, Y.  Dynamic mechanical and dielectric relaxations of polystyrene below glass temperature.  J. Polym. Sci., Part A-2: Polym. Phys. 9, 669-686 (1971).

19 Crissman, J.M. & McCammon, R.D. Apparatus for Measuring the Dynamic Mechanical Properties of Polymeric Materials between 4° and 300°K  J. Acoust. Soc. Am. 34, 1703- 1706   (1962)

20 Hunt, B.I., Powles, J.G., Woodward, A.E. Proton spin-lattice relaxation measurements on some high polymers of differing structure and morphology Polyethylene, poly-4-methyl-pentene-1, poly-l-leucine, poly-phenyl-l-alanine, polystyrene and poly-α-methyl-styrene. Polymer 5, 323- 342  (1964)

21 Illers, K.H. &  Jenckel, E. Dynamic mechanical behavior of polystyrene, poly(p-chlorostyrene), and poly(p-bromostyrene) at low temperatures. J. Polym. Sci., Part B: Polym. Lett. 41, 528-531  (1959).

Figure 1: Schematic of the EROPHILE representation for a simple three-state system.  a: State probabilities.  Any possible distribution of the system among its states lies within a hyperplane normal to the equilibrium unit vector $\tilde{\mathbf{u}}_0$ in the Euclidean space of reduced state probabilities $\tilde{P}_i = P_i/\sqrt{P_i(\infty)}$.  The hyperplane contains point $\tilde{\mathbf{u}}_0$ and intersects each of the $\tilde{P}_i$ axes at $1/\sqrt{P_i(\infty)}$.  The hyperplane is spanned by the N-1 eigenvectors $\tilde{\mathbf{u}}_n$ (n = 1, …, N-1) of the symmetrized rate constant matrix.  Any perturbation of the probability distribution from equilibrium is written as a linear combination of the latter N-1 eigenvectors.  The curved line on the hyperplane traces a trajectory of the system starting at initial reduced probability distribution $\tilde{\mathbf{P}}(0)$ and ending at equilibrium.  The quantities $a_n$ are projections of $\tilde{\mathbf{P}}(0)$ on the eigenvectors $\tilde{\mathbf{u}}_n$.  b: Observables.  For any observable A, a Euclidean vector $\tilde{\mathbf{A}}$ is created with components $\tilde{A}_i = A_i\sqrt{P_i(\infty)}$, where $A_i$ is the value of the observable in state i.  The equilibrium average of A corresponds to the projection of this vector onto the equilibrium eigenvector $\tilde{\mathbf{u}}_0$.  The equilibrium fluctuation of A equals the length of the projection of $\tilde{\mathbf{A}}$ on the hyperplane spanned by eigenvectors $\tilde{\mathbf{u}}_1, \ldots, \tilde{\mathbf{u}}_{N-1}$.  The time-dependent nonequilibrium ensemble average $\langle A\rangle(t)$ equals the projection of $\tilde{\mathbf{A}}$ onto the reduced probability vector $\tilde{\mathbf{P}}(t)$.  As the tip of $\tilde{\mathbf{P}}(t)$ moves from $\tilde{\mathbf{P}}(0)$ towards the equilibrium point $\tilde{\mathbf{u}}_0$, $\langle A\rangle(t)$ moves to $\langle A\rangle_{\mathrm{eq}}$.  The exponentially decaying components of $\langle A\rangle(t)$ along the modes are proportional to the projections $\beta_n$ of $\tilde{\mathbf{A}}$ on the eigenvectors.  The components of the equilibrium autocorrelation function of A along the modes are proportional to $\beta_n^2$.

 

 
