Parallel Implementation of Dissipative Particle Dynamics: Application to Energetic Materials
TRANSCRIPT
8/12/2019 Parallel Implementation of Dissipative Particle Dynamics and Application to Energetic Materials
1 Introduction
The simulation of systems of particles provides an alternative to conducting costly or risky experiments in the field. However, these simulations often require a trade-off between accuracy, or level of detail, and the size of the simulation. For example, a method that retains electronic detail, such as Quantum Monte Carlo, can only simulate systems of 30 or 40 atoms for about a picosecond on current computer systems. Molecular Dynamics (MD) can be used to simulate molecules on an atomistic scale. Atomistic MD follows the classical laws of physics, with atomic movement governed by Newton's second law [1]. In atomistic MD simulations, only atoms are modeled, as opposed to quantum mechanics simulations, which model atoms and electrons. Millions of atoms can be modeled on nanosecond timescales in atomistic MD.
Another way to run larger simulations with an acceptable trade-off in detail is coarse graining. Methods of coarse graining include representing groups of atoms as beads or describing the system in terms of fields instead of individual forces. Coarse-grained (CG) MD is calculated in much the same way as MD, by considering the conservative forces on each bead via Newton's second law. Dissipative Particle Dynamics (DPD) extends CG MD by adding dissipative and random forces [2]. DPD has been prevalent in the simulation of polymers, surfactants, and colloids since its inception, but has recently been used in other applications, including energetic materials [3, 4, 5].
DPD, as originally formulated, samples the canonical ensemble (i.e., constant temperature), but variants of DPD have also been developed that sample the isothermal-isobaric ensemble (constant-pressure DPD), the microcanonical ensemble (constant-energy DPD), and isoenthalpic conditions (constant-enthalpy DPD). The purpose of this research project was to develop parallel
versions of these DPD variants for a previously written parallel MD code called CoreXMD. Implementation of these parallel DPD variants will allow larger simulations of millions or billions of particles on longer length scales (up to microns) and time scales (up to microseconds). With this capability, researchers can model microstructural voids in energetic materials, which are known to be important for the detonation of explosives. Current experiments cannot provide detailed mechanistic information about these materials, so this project will assist in that respect.
2 Dissipative Particle Dynamics
In DPD, particles are defined by mass (m), position (\mathbf{r}), and momentum (\mathbf{p}). They interact through a pairwise force (\mathbf{F}_{ij}):

    \mathbf{F}_{ij} = \mathbf{F}^{C}_{ij} + \mathbf{F}^{D}_{ij} + \mathbf{F}^{R}_{ij}    (1)

In Eq. (1), \mathbf{F}^{C}_{ij} is the conservative force, \mathbf{F}^{D}_{ij} is the dissipative force, and \mathbf{F}^{R}_{ij} is the random force, which are given by Eqs. (2)-(4):

    \mathbf{F}^{C}_{ij} = -\frac{d u^{CG}_{ij}}{d r_{ij}} \frac{\mathbf{r}_{ij}}{r_{ij}}    (2)

    \mathbf{F}^{D}_{ij} = -\gamma_{ij}\, \omega^{D}(r_{ij}) \left( \frac{\mathbf{r}_{ij}}{r_{ij}} \cdot \mathbf{v}_{ij} \right) \frac{\mathbf{r}_{ij}}{r_{ij}}    (3)

    \mathbf{F}^{R}_{ij} = \sigma_{ij}\, \omega^{R}(r_{ij})\, W_{ij}\, \frac{\mathbf{r}_{ij}}{r_{ij}}    (4)

In Eqs. (2)-(4), \mathbf{r}_{ij} is the separation vector between particles i and j, r_{ij} = |\mathbf{r}_{ij}|, \gamma is the friction coefficient, \sigma is the noise amplitude, \mathbf{v}_{ij} = \mathbf{p}_i/m_i - \mathbf{p}_j/m_j, W_{ij} is an independent Wiener process such that W_{ij} = W_{ji}, and \omega^{D} and \omega^{R} are weighting functions that vanish for r \geq r_c, where r_c is the cut-off radius [6].
DPD variants exist that conserve temperature (DPD-T), energy (DPD-E), pressure (DPD-P),
and enthalpy (DPD-H). The equations of motion for each variant are briefly discussed below.
2.1 Constant Temperature DPD, DPD-T
In constant-temperature DPD, temperature and momentum are conserved. The following are the equations of motion that describe DPD-T:

    d\mathbf{r}_i = \frac{\mathbf{p}_i}{m_i}\, dt    (5)

    d\mathbf{p}_i = \sum_{j \neq i} \left( \mathbf{F}^{C}_{ij} + \mathbf{F}^{D}_{ij} + \mathbf{F}^{R}_{ij} \right) dt    (6)

In order for the system to sample the canonical ensemble (i.e., constant number of particles, volume, and temperature), DPD-T must obey the following fluctuation-dissipation theorem [7]:

    \sigma^{2}_{ij} = 2\, \gamma_{ij}\, k_B T    (7)

and

    \omega^{D}(r) = [\omega^{R}(r)]^{2}    (8)

Typically, the weighting functions \omega^{D}(r) and \omega^{R}(r) are chosen to be:

    \omega^{D}(r) = [\omega^{R}(r)]^{2} = \left( 1 - \frac{r}{r_c} \right)^{2}    (9)
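The pairwise force of Eqs. (1)-(4), with the weighting functions of Eq. (9) and the noise amplitude fixed by Eq. (7), can be sketched as follows. This is a minimal illustration, not the CoreXMD implementation; the Groot-Warren soft repulsion used for the conservative part and the parameter values (a = 25, gamma = 4.5, kBT = 1, rc = 1, reduced units) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dpd_pair_force(r_ij, v_ij, a=25.0, gamma=4.5, kBT=1.0, rc=1.0, dt=0.01):
    """Pairwise DPD force, Eqs. (1)-(4), for a single pair (i, j).

    r_ij : separation vector r_i - r_j;  v_ij : velocity difference v_i - v_j.
    sigma is fixed by the fluctuation-dissipation theorem, Eq. (7).
    """
    r = np.linalg.norm(r_ij)
    if r >= rc:                          # weighting functions vanish beyond r_c
        return np.zeros(3)
    e = r_ij / r                         # unit separation vector
    wR = 1.0 - r / rc                    # omega^R(r); omega^D = (omega^R)^2, Eq. (9)
    wD = wR * wR
    sigma = np.sqrt(2.0 * gamma * kBT)   # Eq. (7): sigma^2 = 2 gamma kB T
    FC = a * wR * e                      # Groot-Warren soft repulsion (assumed form)
    FD = -gamma * wD * np.dot(e, v_ij) * e            # dissipative force, Eq. (3)
    FR = sigma * wR * rng.normal() / np.sqrt(dt) * e  # random force, Eq. (4)
    return FC + FD + FR
```

Beyond the cut-off the force is identically zero, and with gamma = kBT = 0 only the soft repulsion survives.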
2.2 Constant Energy DPD, DPD-E
In constant-energy DPD (DPD-E), the total energy of the system is conserved. To accomplish this, the equations of motion for DPD-T are coupled with an internal mesoparticle energy, u_i, whose evolution is taken to be a sum of two terms that account for internal energy transfer via mechanical and conductive means: du_i = du^{mech}_i + du^{cond}_i [8]. The equations of motion for
DPD-E then become [2]:

    d\mathbf{r}_i = \frac{\mathbf{p}_i}{m_i}\, dt    (10)

    d\mathbf{p}_i = \sum_{j \neq i} \left[ \mathbf{F}^{C}_{ij}\, dt - \gamma_{ij}\, \omega^{D}(r_{ij}) \left( \frac{\mathbf{r}_{ij}}{r_{ij}} \cdot \mathbf{v}_{ij} \right) \frac{\mathbf{r}_{ij}}{r_{ij}}\, dt + \sigma_{ij}\, \omega^{R}(r_{ij})\, \frac{\mathbf{r}_{ij}}{r_{ij}}\, dW_{ij} \right]    (11)

    du^{mech}_i = \frac{1}{2} \sum_{j \neq i} \left[ \gamma_{ij}\, \omega^{D} \left( \frac{\mathbf{r}_{ij}}{r_{ij}} \cdot \mathbf{v}_{ij} \right)^{2} dt - \frac{\sigma^{2}_{ij}}{2} \left( \frac{1}{m_i} + \frac{1}{m_j} \right) (\omega^{R})^{2}\, dt - \sigma_{ij}\, \omega^{R} \left( \frac{\mathbf{r}_{ij}}{r_{ij}} \cdot \mathbf{v}_{ij} \right) dW_{ij} \right]    (12)

    du^{cond}_i = \sum_{j \neq i} \left[ \kappa_{ij} \left( \frac{1}{\theta_i} - \frac{1}{\theta_j} \right) \omega^{D}_{q}\, dt + \alpha_{ij}\, \omega^{R}_{q}\, dW^{q}_{ij} \right]    (13)

In Eqs. (10)-(13), \kappa and \alpha are the mesoscopic thermal conductivity and noise amplitude, respectively, \theta is the internal temperature, and W^{q}_{ij} is an independent Wiener process, antisymmetric in the particle indices.
The fluctuation-dissipation theorem becomes:

    \sigma^{2}_{ij} = 2\, \gamma_{ij}\, k_B\, \theta_{ij}    (14)

where \theta_{ij} is defined by \frac{1}{\theta_{ij}} = \frac{1}{2} \left( \frac{1}{\theta_i} + \frac{1}{\theta_j} \right), and with

    \omega^{D}(r) = [\omega^{R}(r)]^{2}    (15)

The mesoscopic thermal conductivity and noise amplitude are also related through a fluctuation-dissipation theorem [9]:

    \alpha^{2}_{ij} = 2\, k_B\, \kappa_{ij}    (16)

    \omega^{D}_{q}(r) = [\omega^{R}_{q}(r)]^{2}    (17)
Similar to DPD-T, the weighting functions are generally assumed to be [2, 8]:

    \omega^{D}_{q}(r) = \omega^{D}(r) = [\omega^{R}(r)]^{2} = [\omega^{R}_{q}(r)]^{2} = \left( 1 - \frac{r}{r_c} \right)^{2}    (18)
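The conductive term, Eq. (13), only redistributes internal energy: each pair contribution enters u_i and u_j with opposite signs, so conduction by itself conserves the total internal energy exactly. A minimal single-pair sketch (hypothetical helper; k_B = 1 reduced units and a fixed weighting value are assumed for brevity):

```python
import numpy as np

def conductive_exchange(theta, kappa=1.0, alpha=None, wDq=1.0, dt=0.01, rng=None):
    """One pairwise conductive update, Eq. (13), for two particles with
    internal temperatures theta = (theta_i, theta_j).

    Returns (du_i, du_j) with du_j = -du_i: conduction only moves internal
    energy between the pair, never creates or destroys it.
    """
    if rng is None:
        rng = np.random.default_rng(1)
    if alpha is None:
        alpha = np.sqrt(2.0 * kappa)        # Eq. (16) with kB = 1 (assumed units)
    dW = rng.normal() * np.sqrt(dt)          # Wiener increment, antisymmetric in (i, j)
    du_i = kappa * (1.0 / theta[0] - 1.0 / theta[1]) * wDq * dt \
           + alpha * np.sqrt(wDq) * dW       # omega^R_q = sqrt(omega^D_q), Eq. (17)
    return du_i, -du_i
```

With the noise switched off (alpha = 0), energy flows deterministically from the hotter particle to the colder one.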
2.3 Constant Pressure DPD, DPD-P
A constant-pressure variant of DPD (DPD-P) can be formulated by coupling the equations of motion for DPD-T with a barostat. The barostat drives the system toward an imposed pressure and allows the volume to fluctuate. Thus, DPD-P conserves pressure, temperature, and momentum. For uniform dilation using a Langevin barostat, the equations of motion for DPD-P [10] are:

    d\mathbf{r}_i = \frac{\mathbf{p}_i}{m_i}\, dt + \frac{p_\epsilon}{W} \mathbf{r}_i\, dt    (19)

    d\mathbf{p}_i = \sum_{j \neq i} \left( \mathbf{F}^{C}_{ij} + \mathbf{F}^{D}_{ij} + \mathbf{F}^{R}_{ij} \right) dt - \left( 1 + \frac{d}{N_f} \right) \frac{p_\epsilon}{W} \mathbf{p}_i\, dt    (20)

    d \ln V = d\, \frac{p_\epsilon}{W}\, dt    (21)

    d p_\epsilon = F_\epsilon\, dt + \sigma_p\, dW_p    (22)

In Eqs. (19)-(22), \epsilon = \frac{1}{d} \ln(V/V_0), V is the volume, W is a mass parameter, p_\epsilon is its conjugate momentum, and F_\epsilon = dV(P - P_0) + \frac{d}{N_f} \sum_i \frac{\mathbf{p}_i \cdot \mathbf{p}_i}{m_i} - \gamma_p\, p_\epsilon. The pressure, P, is calculated from the virial formula [11]:

    P = \frac{1}{dV} \left( \sum_i \frac{\mathbf{p}_i \cdot \mathbf{p}_i}{m_i} + \sum_i \sum_{j>i} \mathbf{F}^{C}_{ij} \cdot \mathbf{r}_{ij} \right)    (23)

In the above, P_0 is the imposed pressure, d is the dimensionality, N_f = dN - d, \gamma_p and \sigma_p are Langevin barostat parameters, and W_p is the Wiener process associated with the piston.
The associated fluctuation-dissipation theorem relationships are those from DPD-T, Eqs. (7)-(8), along with:

    \sigma^{2}_{p} = 2\, \gamma_p\, W\, k_B T    (24)

As \gamma_{ij} and \gamma_p go to zero, the quantity H' = K + U + P_0 V + p_\epsilon^2 / 2W should be conserved, where K is the kinetic energy and U is the potential energy.
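The instantaneous pressure of Eq. (23) is straightforward to evaluate once the conservative pair forces are known. A minimal sketch (an illustrative function, not the CoreXMD routine):

```python
import numpy as np

def virial_pressure(p, m, pairs, FC, r_ij, d=3, V=1.0):
    """Instantaneous pressure from the virial formula, Eq. (23).

    p     : (N, d) array of momenta;  m : (N,) array of masses.
    pairs : list of pair indices (i, j) with j > i.
    FC[k], r_ij[k] : conservative force and separation vector for pair k.
    """
    kinetic = np.sum(p * p / m[:, None])                      # sum_i p_i . p_i / m_i
    virial = sum(np.dot(FC[k], r_ij[k]) for k in range(len(pairs)))
    return (kinetic + virial) / (d * V)
```

In the force-free limit the virial term vanishes and only the kinetic (ideal-gas) contribution remains.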
2.4 Constant Enthalpy DPD, DPD-H
Constant-enthalpy DPD is a new DPD variant proposed by M. Lísal et al. [2]. It combines the equations of motion for DPD-E with the barostat of DPD-P. As it conserves both energy and pressure, DPD-H conserves enthalpy (i.e., a system at constant energy and pressure is at constant enthalpy). The resulting equations of motion become:

    d\mathbf{r}_i = \frac{\mathbf{p}_i}{m_i}\, dt + \frac{p_\epsilon}{W} \mathbf{r}_i\, dt    (25)

    d\mathbf{p}_i = \sum_{j \neq i} \left( \mathbf{F}^{C}_{ij} + \mathbf{F}^{D}_{ij} + \mathbf{F}^{R}_{ij} \right) dt - \left( 1 + \frac{d}{N_f} \right) \frac{p_\epsilon}{W} \mathbf{p}_i\, dt    (26)

    d \ln V = d\, \frac{p_\epsilon}{W}\, dt    (27)

    d p_\epsilon = F_\epsilon\, dt + \sigma_p\, dW_p    (28)

    du^{mech}_i = \frac{1}{2} \sum_{j \neq i} \left[ \gamma_{ij}\, \omega^{D} \left( \frac{\mathbf{r}_{ij}}{r_{ij}} \cdot \mathbf{v}_{ij} \right)^{2} dt - \frac{\sigma^{2}_{ij}}{2} \left( \frac{1}{m_i} + \frac{1}{m_j} \right) (\omega^{R})^{2}\, dt - \sigma_{ij}\, \omega^{R} \left( \frac{\mathbf{r}_{ij}}{r_{ij}} \cdot \mathbf{v}_{ij} \right) dW_{ij} \right]    (29)

    du^{cond}_i = \sum_{j \neq i} \left[ \kappa_{ij} \left( \frac{1}{\theta_i} - \frac{1}{\theta_j} \right) \omega^{D}_{q}\, dt + \alpha_{ij}\, \omega^{R}_{q}\, dW^{q}_{ij} \right]    (30)

The fluctuation-dissipation theorem relationships from DPD-P, Eq. (24), and DPD-E, Eqs. (14)-(17), define the necessary relations for DPD-H.

In DPD-H, the total enthalpy is conserved [12]; thus H = K + U + \sum_i u_i + P_0 V + p_\epsilon^2 / 2W is conserved.
2.5 DPD Model
The DPD variants were tested with two types of potential models. The first was the DPD fluid model, which is commonly used as a potential in DPD. The model defines the conservative potential as [13]:

    u^{CG}_{ij} = a_{ij}\, r_c\, \omega^{D}(r_{ij})    (31)

where a_{ij} defines the magnitude of the repulsion between two particles.

DPD does not require that Eq. (31) be used as the potential. Thus, tabulated potentials have also been implemented into the code, which allow any conservative potential and force to be used within the DPD framework. As an example, simulations have been performed using a density-dependent tabulated potential for RDX, which was fitted through a technique called force matching by Izvekov et al. [14].
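A tabulated potential is typically evaluated by interpolating into a precomputed force table. A minimal sketch of one common approach, linear interpolation with zero force beyond the table's last point (the cut-off); the interpolation scheme used in the actual code is an assumption here:

```python
import numpy as np

def tabulated_force(r, r_grid, f_grid):
    """Evaluate a tabulated conservative force at separation r by linear
    interpolation between grid points; zero beyond the last tabulated point,
    which acts as the cut-off radius."""
    if r >= r_grid[-1]:
        return 0.0
    return float(np.interp(r, r_grid, f_grid))
```

Because any (r_grid, f_grid) pair can be supplied, the same machinery serves the DPD fluid model, the force-matched RDX model, or any other conservative interaction.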
3 FORTRAN90 and CoreXMD
FORTRAN90 is a general-purpose, procedural programming language that is especially suited to numeric computation and scientific computing. Because of these capabilities, it was used to program the DPD variants in CoreXMD.

CoreXMD is a software package designed for performing particle simulations over multiple processors [15]. In CoreXMD, simulations follow a typical procedure involving initialization, iteration, and destruction. During the iteration step, CoreXMD splits pairs of particles onto different domains on different processors so that the forces and energies of those pairs can be calculated simultaneously, yielding parallel speed-up. The goal of this project was to implement variants of DPD such that the parallel abilities of CoreXMD were used efficiently.
4 Specifics of Implementation of DPD Variants into CoreXMD
The fundamental steps required when performing simulations in CoreXMD are the creation and initialization of the system variables; the iteration over time steps, in which forces are calculated; and finally the destruction of the simulation variables.

The creation and initialization of the simulation variables define the simulation. Various parameters are read from input, which define the unit cell, the periodic boundaries, and the initial positions and velocities, as well as other parameters that select the variant of DPD to be used (i.e., DPD-T, DPD-E, DPD-P, DPD-H).

The iterative step is the most expensive step of the simulation, as it requires calculating the forces for each pair of particles as well as updating particle velocities and positions through iteration over the pair lists. The iterative step for DPD as implemented uses a Shardlow Splitting Algorithm (SSA) [16] and a two-step velocity Verlet algorithm to update the positions and velocities [11]. Between the two velocity Verlet steps, the conservative forces for that time step are calculated.
The Shardlow Splitting Algorithm updates the velocities due to the dissipative and random forces via integration of stochastic differential equations, while the velocity Verlet algorithm updates the velocities via integration of ordinary differential equations. In DPD-E and DPD-H, the Shardlow Splitting Algorithm also calculates the mechanical and conductive energies and the internal temperature. The Shardlow Splitting Algorithm was implemented in the serial code because it allows much larger time steps compared to velocity Verlet alone [16]. However, as shown in the results section below, it may be incompatible with domain decomposition, which is fundamental to the parallel processing capabilities of CoreXMD. The final step, the destruction of the simulation variables, is used for memory optimization, releasing the memory of the various variables and arrays.
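The ordering of the updates described above can be sketched for a toy pair of 1-D particles. The helper structure and the linear conservative force are hypothetical, and the Shardlow part is reduced to its deterministic (dissipative-only) limit for clarity; the full SSA also integrates the random force:

```python
import numpy as np

def dpd_step(r, v, m, dt, gamma=4.5, k=1.0):
    """One schematic DPD-T time step for two 1-D particles: a Shardlow-style
    update of the dissipative part, then two-step velocity Verlet for the
    conservative part (here a toy linear attraction, F = k * (r_j - r_i))."""
    # --- Shardlow part: exact exponential relaxation of the relative velocity
    # under the dissipative pair force (noise omitted in this sketch) ---
    mu = 1.0 / (1.0 / m[0] + 1.0 / m[1])              # reduced mass of the pair
    dv = (v[0] - v[1]) * (np.exp(-gamma * dt / mu) - 1.0)
    v[0] += mu / m[0] * dv                             # momentum-conserving split
    v[1] -= mu / m[1] * dv
    # --- two-step velocity Verlet for the conservative force ---
    F = k * (r[1] - r[0])
    v += 0.5 * dt * np.array([F, -F]) / m              # first half kick
    r += dt * v                                        # drift
    F = k * (r[1] - r[0])                              # conservative forces are
    v += 0.5 * dt * np.array([F, -F]) / m              # recomputed between the kicks
    return r, v
```

The step conserves total momentum exactly while the dissipative part damps the relative velocity, mirroring the split between the SSA and velocity Verlet described above.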
DPD-P was implemented into CoreXMD by adding a stochastic Langevin barostat to the DPD-T code. The CoreXMD DPD-E was implemented by adding the internal particle energy, comprising the conductive and mechanical contributions, which accounts for internal degrees of freedom lost due to coarse graining. DPD-H was implemented by adding the Langevin barostat from DPD-P to the DPD-E code. DPD-T was not implemented as part of this project, as it was previously coded into CoreXMD.
5 Results
The success of the implementation of the DPD variants into the parallel code was measured by two criteria: how well the results conserved the quantity stated in the variant (i.e., constant temperature, pressure, energy, or enthalpy), and how well the parallel code scaled. This paper first discusses the former by comparing results from the CoreXMD code to those from serial versions of the codes.
5.1 DPD-E Results
In Figure 1, the total energy calculated from 1-million-step DPD-E simulations is shown for the DPD fluid potential. For the serial code, very little energy drift occurs, as evidenced by the black, horizontal line. All of the CoreXMD runs show energy drift, with the magnitude of the drift increasing with the number of processors used. For the serial version of the code, the percentage drift over 1 million steps is nearly 0, while in the 1-processor CoreXMD runs it is > 0.01 %, increasing to > 0.05 % for 16 processors. In Appendix A.1 (Table B1-1), the conductive, mechanical, configurational, and kinetic contributions to the total energy are shown. When compared to the serial code, all have similar standard deviations in their fluctuations except the total energy in CoreXMD. This suggests that there is an incompatibility between the splitting algorithm employed and domain decomposition. Since the percentage drift increases with the number of processors, the error could be due to the use of the Shardlow Splitting Algorithm. The Shardlow Splitting Algorithm relies on a
sequential update over the pairs of particles, which is fundamentally incompatible with parallelization via domain decomposition, because domain decomposition loops over pairs on different processors non-sequentially. Despite the drift, it is noted that the percentage drift is small over 1 million steps, with the total energy varying only in the fourth decimal place in the CoreXMD results in Figure 1. Generally, for velocity Verlet and other integration algorithms for microcanonical molecular dynamics simulations, the energy should be constant to within 0.01 % over the course of the simulation [1], and the results for DPD-E nearly meet this requirement. Thus, even with the current energy drift, the results may be suitable for many applications.
Figure 1. A DPD-E simulation of 10,125 molecules over 1,000,000 timesteps using the DPD fluid potential from Eq. (31). Note that the serial run displays constant energy, while the CoreXMD parallel runs show increasing drift as the number of processors increases. Although the drift looks significant, it is on the order of 10^-4, which is quite minute.
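The percentage-drift criterion used above (constancy to within 0.01 % being the usual microcanonical MD target [1]) amounts to a one-line calculation; a trivial sketch:

```python
def drift_percent(e_start, e_end):
    """Percentage drift of a conserved quantity over a run, the criterion
    used to judge the DPD-E and DPD-H results."""
    return abs(e_end - e_start) / abs(e_start) * 100.0
```

For instance, a total energy that moves from 1.0 to 1.0005 over the run corresponds to a 0.05 % drift, the level seen here on 16 processors.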
5.2 DPD-P Results
Figure 2 displays the pressure from a DPD-P simulation of the DPD fluid potential. The fluctuations of the CoreXMD results on 1, 8, and 16 processors compare well with those of the serial code.
5.3 DPD-H Results

The temperature and pressure fluctuations were found to be in agreement between the serial and CoreXMD implementations of DPD-H, with the enthalpy fluctuations of the parallel runs being one to two orders of magnitude larger than those of the serial code.
Figure 3. DPD-H simulation of 10,125 molecules over 1,000,000 timesteps using the DPD fluid potential from Eq. (31). Note that because DPD-H was built off DPD-E, there is still a drift between the enthalpy of the CoreXMD parallel runs and the constant serial runs. These drifts remain close in magnitude to the serial run because DPD-H employs the same stochastic barostat used in DPD-P.
5.4 Error Suppression
A technique to suppress the energy and enthalpy drift in the DPD-E and DPD-H methods, respectively, was proposed by M. Lísal et al. [2] and was implemented into the CoreXMD DPD-E and DPD-H variants. Using error suppression, the mechanical energy is adjusted such that any energy or enthalpy drift is corrected, ensuring that the total energy or enthalpy remains exactly constant. It is advised not to use this option until the cause of the DPD-E and DPD-H drift is further investigated.
6 Scalability
To measure the success of the parallel implementation of the DPD variants, the scalability of the codes was investigated. The DPD-P simulation runs are used to demonstrate the scalability and the speed-up over the serial counterpart. In Tables 1-3, the run times and parallel speedups are shown for 9,216, 124,416, and 1,152,000 RDX particles for DPD-P as implemented into CoreXMD with density-dependent tabulated potentials. Parallel speedups are shown relative to 8 processors, the number of cores on one node of the supercomputer employed. As the number of particles increases, the parallel speedup increases (Figure 4). This is the expected behavior, as particle dynamics codes generally scale better for larger systems due to domain decomposition. For effective utilization of processors, the correct choice of the number of processors and nodes based on the number of particles is important, and a scalability graph (Figure 4) is needed to fully quantify the efficiency of one's parallel code. Generally, the aim is to distribute 1,000 to 2,000 particles per processor. Thus, for 1.1 million particles at 2,000 particles per processor, the scalability should remain efficient up to 550 processors. However, the scalability in Figure 4 drops off significantly, with a parallel efficiency of 75 % on 32 processors for 1.1 million particles, decreasing to 37 % on 128 processors. Further optimization of the code should improve the scalability, allowing larger numbers of processors and nodes to be utilized. Nevertheless, the parallelization has dramatically increased the speed at which results can be obtained. For example, for 124,416 particles using 8 processors, CoreXMD was 293 % faster than the serial DPD-P code.
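The speedups and efficiencies quoted above follow directly from the run times in Tables 1-3. A small sketch of the bookkeeping (speedup normalized so the one-node, 8-processor run scores 8, as in the tables):

```python
def seconds(hms):
    """Convert an H:MM:SS run time, as listed in Tables 1-3, to seconds."""
    h, m, s = (int(x) for x in hms.split(":"))
    return 3600 * h + 60 * m + s

def speedup_vs_8(t8, tN):
    """Parallel speedup relative to the 8-processor (one-node) run,
    normalized so that the 8-processor run itself scores 8."""
    return 8 * seconds(t8) / seconds(tN)
```

For the 1,152,000-bead runs in Table 3 this gives a speedup of about 23.8 on 32 processors (roughly 74 % efficiency) and about 47.8 on 128 processors (roughly 37 %), matching the tabulated values.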
Table 1. Parallel speedup of the DPD-P CoreXMD run for 9,216 beads using the RDX potential. All speedup values are relative to the run time on 8 processors, the number of processors per node.
CoreXMD DPD-P Speedup of 9,216 RDX beads
# of Processors Run Time Parallel Speedup
1 0:21:27 N/A
8 0:03:29 8
16 0:02:19 12.02
Table 2. Parallel speedup of the DPD-P CoreXMD run for 124,416 RDX beads. All speedup values are relative to the run time on 8 processors, the number of processors per node. As the number of processors increases, the speedup increases, but by decreasing amounts, because the additional processors are not used to their fullest potential.
CoreXMD DPD-P Speedup of 124,416 RDX beads
# of Processors Run Time Parallel Speedup
1 4:11:37 N/A
8 0:36:53 8
16 0:21:49 13.52
32 0:14:13 20.75
64 0:11:05 26.62
128 0:09:26 32.48
Table 3. Parallel speedup of the DPD-P CoreXMD run for 1,152,000 RDX beads. All speedup values are relative to the run time on 8 processors, the number of processors per node. As the number of processors increases, the speedup increases, but by decreasing amounts, because the additional processors are not used to their fullest potential.
CoreXMD DPD-P Speedup of 1,152,000 RDX beads
# of Processors Run Time Parallel Speedup
1 N/A N/A
8 1:16:02 8
16 0:43:55 13.85
32 0:25:32 23.82
64 0:16:48 36.20
128 0:12:43 47.83
Fig 4. Scalability of DPD-P as implemented into CoreXMD for the Izvekov et al. [17] density-dependent model of RDX. Note that as the number of molecules increases, the speedup improves, due to more efficient use of each processor's full potential. The 45-degree line represents perfect scalability: for a given increase in the number of processors, the program speeds up by the same factor. This line is the objective, so the white space between the measured scalability curves and the 45-degree line demonstrates room for improvement.
7 Summary
Constant-temperature (DPD-T), constant-pressure (DPD-P), constant-energy (DPD-E), and constant-enthalpy (DPD-H) variants of dissipative particle dynamics have been implemented into ARL's CoreXMD code. DPD-T and DPD-P conserved temperature and pressure, respectively, in CoreXMD as well as in their serial counterparts. However, the DPD-E and DPD-H implementations in CoreXMD showed larger energy and enthalpy drift, respectively, compared to their serial counterparts. This was attributed to the known problems of using the Shardlow Splitting Algorithm [16] with domain decomposition. Even so, the energy and enthalpy drifts were relatively small, at most < 0.06 % over 1 million steps on 16 processors. An error suppression algorithm was implemented that can eliminate this error, but its use is not advised until the source of the error is investigated further.

Density-dependent tabulated potentials were implemented, which allow any potential to be used for the conservative force, not just the DPD fluid potential. This allowed 1.1 million RDX molecules to be simulated within the DPD-P code for the first time.

The scalability of the DPD variant implementations shows that further improvement to the codes is needed. This can be accomplished by removing numerous redundant variables and transforming them into particle data or type variables; removing this overhead from the initial coding would provide a significant improvement in scalability and speedup. For 1.1 million RDX molecules, the DPD-P code was shown to be 75 % efficient on 32 processors and was 293 % faster on 8 processors compared to the serial code. The effort presented here provides a suitable starting point for future improvements and optimization of the code.
A DPD Variant Results
A.1 Constant Energy DPD, DPD-E
Table B1-1. Components of the total energy for a DPD-E simulation using the DPD fluid potential from Eq. (31). Format: average ± standard deviation.

Component   | Serial             | CoreXMD, Serial    | CoreXMD, 8 Proc.   | CoreXMD, 16 Proc.
Umech/eV    | 0.776 ± 0.000374   | 0.776 ± 0.000372   | 0.776 ± 0.000441   | 0.776 ± 0.000457
Ucond/eV    | 0.776 ± 0.0        | 0.775 ± 0.0        | 0.775 ± 0.0        | 0.775 ± 0.0
Uconf/eV    | 0.117 ± 0.000202   | 0.117 ± 0.000195   | 0.117 ± 0.000206   | 0.117 ± 0.000203
Ukin/eV     | 0.0382 ± 0.000308  | 0.0382 ± 0.00031   | 0.0382 ± 0.000316  | 0.0382 ± 0.00029
Utot/eV     | 1.707 ± 3.52E-9    | 1.707 ± 0.0000638  | 1.707 ± 0.000175   | 1.707 ± 0.000266
Tkin/K      | 295.484 ± 2.381    | 295.292 ± 2.399    | 295.226 ± 2.442    | 295.229 ± 2.241
Press/bar   | 1537.571 ± 2.302   | 1537.427 ± 2.284   | 1537.41 ± 2.241    | 1537.418 ± 2.27
A.2 Constant Pressure DPD, DPD-P
Table B2-1. Components for a DPD-P simulation using the DPD fluid potential from Eq. (31). Format: average ± standard deviation.

Component   | Serial              | CoreXMD, Serial     | CoreXMD, 8 Proc.    | CoreXMD, 16 Proc.
Tkin/K      | 299.987 ± 2.403     | 299.649 ± 2.323     | 299.487 ± 2.411     | 299.536 ± 2.42
Press/bar   | 1536.241 ± 8.108    | 1537.073 ± 7.972    | 1537.363 ± 7.745    | 1536.801 ± 8.37
Uconf/eV    | 0.117 ± 0.000486    | 0.117 ± 0.000493    | 0.117 ± 0.000473    | 0.117 ± 0.000514
Dens        | 0.00471 ± 1.19E-05  | 0.00471 ± 1.14E-05  | 0.00471 ± 1.11E-05  | 0.00471 ± 1.22E-05
A.3 Constant Enthalpy DPD, DPD-H
Table B3-1. Components for a DPD-H simulation using the DPD fluid potential from Eq. (31). Format: average ± standard deviation.

Component   | Serial             | CoreXMD, Serial    | CoreXMD, 8 Proc.   | CoreXMD, 16 Proc.
Umech/eV    | 0.776 ± 0.000364   | 0.776 ± 0.00042    | 0.776 ± 0.00048    | 0.776 ± 0.000461
Ucond/eV    | 0.776 ± 0.0        | 0.776 ± 0.0        | 0.776 ± 0.0        | 0.777 ± 0.0
Uconf/eV    | 0.117 ± 0.000506   | 0.117 ± 0.000472   | 0.117 ± 0.000508   | 0.117 ± 0.000489
Ukin/eV     | 0.0382 ± 0.000325  | 0.0382 ± 0.000333  | 0.0382 ± 0.000345  | 0.0381 ± 0.000306
Utot/eV     | 1.707 ± 0.00052    | 1.707 ± 0.000498   | 1.707 ± 0.000545   | 1.707 ± 0.000559
Tkin/K      | 295.579 ± 2.325    | 295.289 ± 2.574    | 295.172 ± 2.671    | 295.945 ± 2.366
Press/bar   | 1536.3995 ± 8.2    | 1536.948 ± 7.935   | 1537.667 ± 8.318   | 1537.396 ± 8.205
Enthalpy/eV | 1.911 ± 6.60E-6    | 1.91 ± 0.000217    | 1.911 ± 0.000158   | 1.911 ± 0.000221
References
[1] K.E. Gubbins and J.D. Moore. Molecular modeling of matter: Impact and prospects in engineering. Ind. Eng. Chem. Res., 49:3026-3046, 2010.
[2] M. Lísal, J.K. Brennan, and J. Bonet Avalos. Dissipative particle dynamics at isothermal, isobaric, isoenergetic and isoenthalpic conditions using Shardlow-like splitting algorithms. Submitted, 2011.
[3] G. Stoltz. A reduced model for shock and detonation waves. I. The inert case. Europhys. Lett., 76(849), 2006.
[4] A. Strachan and B.L. Holian. Energy exchange between mesoparticles and their internal degrees of freedom. Phys. Rev. Lett., 94(014301), 2005.
[5] B.L. Holian. Formulating mesodynamics for polycrystalline materials. Europhys. Lett., 64(330), 2003.
[6] P.J. Hoogerbrugge and J.M.V.A. Koelman. Simulating microscopic hydrodynamic phenomena with dissipative particle dynamics. Europhys. Lett., 19(3):155-160, 1992.
[7] P. Espanol and P. Warren. Statistical mechanics of dissipative particle dynamics. Europhys. Lett., 30(4):191-196, 1995.
[8] P. Espanol. Dissipative particle dynamics with energy conservation. Europhys. Lett., 40(6):631-636, 1997.
[9] J.B. Avalos and A.D. Mackie. Dissipative particle dynamics with energy conservation. Europhys. Lett., 40(2):141-146, 1997.
[10] A.F. Jakobsen. J. Chem. Phys., 122(124901), 2005.
[11] D. Frenkel and B. Smit. Understanding Molecular Simulation (Second ed.). Academic Press, 2002.
[12] M.P. Allen and D.J. Tildesley. Computer Simulation of Liquids. Clarendon Press, 1987.
[13] R.D. Groot and P.B. Warren. J. Chem. Phys., 107(4423), 1997.
[14] S. Izvekov, P.W. Chung, and B.M. Rice. Particle-based multiscale coarse graining with density-dependent potentials: Application to molecular crystals (hexahydro-1,3,5-trinitro-s-triazine). J. Chem. Phys., 135(044112), 2011.
[15] ARL. CoreXMD: New Component Tutorial.
[16] T. Shardlow. SIAM J. Sci. Comput., 24(1267), 2003.