Yale Bicentennial Publications



With the approval of the President and Fellows of Yale University, a series of volumes has been prepared by a number of the Professors and Instructors, to be issued in connection with the Bicentennial Anniversary, as a partial indication of the character of the studies in which the University teachers are engaged.

This series of volumes is respectfully dedicated

to the Graduates of the University








Professor of Mathematical Physics in Yale University









Published, March, 1902.



THE usual point of view in the study of mechanics is that where the attention is mainly directed to the changes which take place in the course of time in a given system. The principal problem is the determination of the condition of the system with respect to configuration and velocities at any required time, when its condition in these respects has been given for some one time, and the fundamental equations are those which express the changes continually taking place in the system. Inquiries of this kind are often simplified by taking into consideration conditions of the system other than those through which it actually passes or is supposed to pass, but our attention is not usually carried beyond conditions differing infinitesimally from those which are regarded as actual.

For some purposes, however, it is desirable to take a broader view of the subject. We may imagine a great number of systems of the same nature, but differing in the configurations and velocities which they have at a given instant, and differing not merely infinitesimally, but it may be so as to embrace every conceivable combination of configuration and velocities. And here we may set the problem, not to follow a particular system through its succession of configurations, but to determine how the whole number of systems will be distributed among the various conceivable configurations and velocities at any required time, when the distribution has been given for some one time. The fundamental equation for this inquiry is that which gives the rate of change of the number of systems which fall within any infinitesimal limits of configuration and velocity.



Such inquiries have been called by Maxwell statistical. They belong to a branch of mechanics which owes its origin to the desire to explain the laws of thermodynamics on mechanical principles, and of which Clausius, Maxwell, and Boltzmann are to be regarded as the principal founders. The first inquiries in this field were indeed somewhat narrower in their scope than that which has been mentioned, being applied to the particles of a system, rather than to independent systems. Statistical inquiries were next directed to the phases (or conditions with respect to configuration and velocity) which succeed one another in a given system in the course of time. The explicit consideration of a great number of systems and their distribution in phase, and of the permanence or alteration of this distribution in the course of time is perhaps first found in Boltzmann's paper on the "Zusammenhang zwischen den Sätzen über das Verhalten mehratomiger Gasmoleküle mit Jacobi's Princip des letzten Multiplicators" (1871).

But although, as a matter of history, statistical mechanics owes its origin to investigations in thermodynamics, it seems eminently worthy of an independent development, both on account of the elegance and simplicity of its principles, and because it yields new results and places old truths in a new light in departments quite outside of thermodynamics. Moreover, the separate study of this branch of mechanics seems to afford the best foundation for the study of rational thermodynamics and molecular mechanics.

The laws of thermodynamics, as empirically determined, express the approximate and probable behavior of systems of a great number of particles, or, more precisely, they express the laws of mechanics for such systems as they appear to beings who have not the fineness of perception to enable them to appreciate quantities of the order of magnitude of those which relate to single particles, and who cannot repeat their experiments often enough to obtain any but the most probable results. The laws of statistical mechanics apply to conservative systems of any number of degrees of freedom, and are exact. This does not make them more difficult to establish than the approximate laws for systems of a great many degrees of freedom, or for limited classes of such systems. The reverse is rather the case, for our attention is not diverted from what is essential by the peculiarities of the system considered, and we are not obliged to satisfy ourselves that the effect of the quantities and circumstances neglected will be negligible in the result. The laws of thermodynamics may be easily obtained from the principles of statistical mechanics, of which they are the incomplete expression, but they make a somewhat blind guide in our search for those laws. This is perhaps the principal cause of the slow progress of rational thermodynamics, as contrasted with the rapid deduction of the consequences of its laws as empirically established. To this must be added that the rational foundation of thermodynamics lay in a branch of mechanics of which the fundamental notions and principles, and the characteristic operations, were alike unfamiliar to students of mechanics.

We may therefore confidently believe that nothing will more conduce to the clear apprehension of the relation of thermodynamics to rational mechanics, and to the interpretation of observed phenomena with reference to their evidence respecting the molecular constitution of bodies, than the study of the fundamental notions and principles of that department of mechanics to which thermodynamics is especially related.

Moreover, we avoid the gravest difficulties when, giving up the attempt to frame hypotheses concerning the constitution of material bodies, we pursue statistical inquiries as a branch of rational mechanics. In the present state of science, it seems hardly possible to frame a dynamic theory of molecular action which shall embrace the phenomena of thermodynamics, of radiation, and of the electrical manifestations which accompany the union of atoms. Yet any theory is obviously inadequate which does not take account of all these phenomena. Even if we confine our attention to the phenomena distinctively thermodynamic, we do not escape difficulties in as simple a matter as the number of degrees of freedom of a diatomic gas. It is well known that while theory would assign to the gas six degrees of freedom per molecule, in our experiments on specific heat we cannot account for more than five. Certainly, one is building on an insecure foundation, who rests his work on hypotheses concerning the constitution of matter.

Difficulties of this kind have deterred the author from attempting to explain the mysteries of nature, and have forced him to be contented with the more modest aim of deducing some of the more obvious propositions relating to the statistical branch of mechanics. Here, there can be no mistake in regard to the agreement of the hypotheses with the facts of nature, for nothing is assumed in that respect. The only error into which one can fall, is the want of agreement between the premises and the conclusions, and this, with care, one may hope, in the main, to avoid.

The matter of the present volume consists in large measure of results which have been obtained by the investigators mentioned above, although the point of view and the arrangement may be different. These results, given to the public one by one in the order of their discovery, have necessarily, in their original presentation, not been arranged in the most logical manner.

In the first chapter we consider the general problem which has been mentioned, and find what may be called the fundamental equation of statistical mechanics. A particular case of this equation will give the condition of statistical equilibrium, i. e., the condition which the distribution of the systems in phase must satisfy in order that the distribution shall be permanent. In the general case, the fundamental equation admits an integration, which gives a principle which may be variously expressed, according to the point of view from which it is regarded, as the conservation of density-in-phase, or of extension-in-phase, or of probability of phase.


In the second chapter, we apply this principle of conservation of probability of phase to the theory of errors in the calculated phases of a system, when the determination of the arbitrary constants of the integral equations is subject to error. In this application, we do not go beyond the usual approximations. In other words, we combine the principle of conservation of probability of phase, which is exact, with those approximate relations which it is customary to assume in the "theory of errors."

In the third chapter we apply the principle of conservation of extension-in-phase to the integration of the differential equations of motion. This gives Jacobi's "last multiplier," as has been shown by Boltzmann.

In the fourth and following chapters we return to the consideration of statistical equilibrium, and confine our attention to conservative systems. We consider especially ensembles of systems in which the index (or logarithm) of probability of phase is a linear function of the energy. This distribution, on account of its unique importance in the theory of statistical equilibrium, I have ventured to call canonical, and the divisor of the energy, the modulus of distribution. The moduli of ensembles have properties analogous to temperature, in that equality of the moduli is a condition of equilibrium with respect to exchange of energy, when such exchange is made possible.
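The distribution so described can be written out explicitly. As a sketch anticipating the notation fixed in the body of the volume (the symbols $P$, $\eta$, $\psi$, and $\Theta$ are introduced there, not in this preface):

```latex
% Canonical distribution: the index of probability eta is linear in the energy.
% Theta is the modulus; psi is a constant determined by the condition that the
% coefficient of probability P integrate to unity over all phases.
\eta \;=\; \log P \;=\; \frac{\psi - \epsilon}{\Theta},
\qquad\text{i.e.}\qquad
P \;=\; e^{\frac{\psi - \epsilon}{\Theta}}.
```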

We find a differential equation relating to average values in the ensemble which is identical in form with the fundamental differential equation of thermodynamics, the average index of probability of phase, with change of sign, corresponding to entropy, and the modulus to temperature.

For the average square of the anomalies of the energy, we find an expression which vanishes in comparison with the square of the average energy, when the number of degrees of freedom is indefinitely increased. An ensemble of systems in which the number of degrees of freedom is of the same order of magnitude as the number of molecules in the bodies with which we experiment, if distributed canonically, would therefore appear to human observation as an ensemble of systems in which all have the same energy.
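The vanishing of the relative anomaly can be illustrated numerically. The sketch below is not part of the original text: it assumes a purely kinetic energy that is a sum of $n$ squared Gaussian momenta, for which the canonical theory gives a relative squared anomaly of $2/n$, shrinking as $n$ grows.

```python
import random
import statistics

random.seed(0)

def relative_fluctuation(n, samples=2000):
    """Relative squared anomaly of the energy for n quadratic degrees of freedom."""
    energies = []
    for _ in range(samples):
        # kinetic energy = sum of n squared Gaussian momenta (unit modulus assumed)
        e = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n)) / 2.0
        energies.append(e)
    m = statistics.mean(energies)
    v = statistics.pvariance(energies)
    return v / m ** 2

small = relative_fluctuation(5)
large = relative_fluctuation(500)
print(small, large)  # the second value is far smaller, roughly 2/n in each case
```

With many degrees of freedom the ensemble is, to coarse observation, an ensemble of systems all of the same energy, as the preface asserts.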

We meet with other quantities, in the development of the subject, which, when the number of degrees of freedom is very great, coincide sensibly with the modulus, and with the average index of probability, taken negatively, in a canonical ensemble, and which, therefore, may also be regarded as corresponding to temperature and entropy. The correspondence is however imperfect, when the number of degrees of freedom is not very great, and there is nothing to recommend these quantities except that in definition they may be regarded as more simple than those which have been mentioned. In Chapter XIV, this subject of thermodynamic analogies is discussed somewhat at length.

Finally, in Chapter XV, we consider the modification of the preceding results which is necessary when we consider systems composed of a number of entirely similar particles, or, it may be, of a number of particles of several kinds, all of each kind being entirely similar to each other, and when one of the variations to be considered is that of the numbers of the particles of the various kinds which are contained in a system. This supposition would naturally have been introduced earlier, if our object had been simply the expression of the laws of nature. It seemed desirable, however, to separate sharply the purely thermodynamic laws from those special modifications which belong rather to the theory of the properties of matter.

J. W. G.

NEW HAVEN, December, 1901.






Hamilton's equations of motion

Ensemble of systems distributed in phase

Extension-in-phase, density-in-phase

Fundamental equation of statistical mechanics

Condition of statistical equilibrium

Principle of conservation of density-in-phase

Principle of conservation of extension-in-phase

Analogy in hydrodynamics

Extension-in-phase is an invariant

Dimensions of extension-in-phase

Various analytical expressions of the principle

Coefficient and index of probability of phase

Principle of conservation of probability of phase

Dimensions of coefficient of probability of phase



Approximate expression for the index of probability of phase

Application of the principle of conservation of probability of phase to the constants of this expression



Case in which the forces are functions of the coordinates alone

Case in which the forces are functions of the coordinates with the time






Condition of statistical equilibrium

Other conditions which the coefficient of probability must satisfy

Canonical distribution. Modulus of distribution

$\psi$ must be finite

The modulus of the canonical distribution has properties analogous to temperature

Other distributions have similar properties

Distribution in which the index of probability is a linear function of the energy and of the moments of momentum about three axes

Case in which the forces are linear functions of the displacements, and the index is a linear function of the separate energies relating to the normal types of motion

Differential equation relating to average values in a canonical ensemble

This is identical in form with the fundamental differential equation of thermodynamics


AVERAGE VALUES IN A CANONICAL ENSEMBLE OF SYSTEMS

Case of $\nu$ material points. Average value of kinetic energy of a single point for a given configuration or for the whole ensemble $= \tfrac{3}{2}\Theta$

Average value of total kinetic energy for any given configuration or for the whole ensemble $= \tfrac{3}{2}\nu\Theta$

System of $n$ degrees of freedom. Average value of kinetic energy, for any given configuration or for the whole ensemble $= \tfrac{1}{2}n\Theta$

Second proof of the same proposition

Distribution of canonical ensemble in configuration

Ensembles canonically distributed in configuration

Ensembles canonically distributed in velocity



Extension-in-configuration and extension-in-velocity are invariants

Dimensions of these quantities

Index and coefficient of probability of configuration

Index and coefficient of probability of velocity

Dimensions of these coefficients

Relation between extension-in-configuration and extension-in-velocity

Definitions of extension-in-phase, extension-in-configuration, and extension-in-velocity, without explicit mention of coordinates



Second and third differential equations relating to average values in a canonical ensemble

These are identical in form with thermodynamic equations enunciated by Clausius

Average square of the anomaly of the energy, of the kinetic energy, of the potential energy

These anomalies are insensible to human observation and experience when the number of degrees of freedom of the system is very great

Average values of powers of the energies

Average values of powers of the anomalies of the energies

Average values relating to forces exerted on external bodies

General formulae relating to averages in a canonical ensemble



Definitions. $V$ = extension-in-phase below a limiting energy ($\epsilon$). $\phi = \log dV/d\epsilon$

$V_q$ = extension-in-configuration below a limiting value of the potential energy ($\epsilon_q$). $\phi_q = \log dV_q/d\epsilon_q$

$V_p$ = extension-in-velocity below a limiting value of the kinetic energy ($\epsilon_p$). $\phi_p = \log dV_p/d\epsilon_p$

Evaluation of $V_p$ and $\phi_p$

Average values of functions of the kinetic energy

Calculation of $V$ from $V_q$

Approximate formulae for large values of $n$

Calculation of $V$ or $\phi$ for whole system when given for parts

Geometrical illustration




When $n > 2$, the most probable value of the energy in a canonical ensemble is determined by $d\phi/d\epsilon = 1/\Theta$

When $n > 2$, the average value of $d\phi/d\epsilon$ in a canonical ensemble is $1/\Theta$

When $n$ is large, the value of $\phi$ corresponding to $d\phi/d\epsilon = 1/\Theta$ ($\phi_0$) is nearly equivalent (except for an additive constant) to the average index of probability taken negatively ($-\bar{\eta}$)

Approximate formulae for $\phi_0 + \bar{\eta}$ when $n$ is large

When $n$ is large, the distribution of a canonical ensemble in energy follows approximately the law of errors

This is not peculiar to the canonical distribution

Averages in a canonical ensemble



The microcanonical distribution defined as the limiting distribution obtained by various processes

Average values in the microcanonical ensemble of functions of the kinetic and potential energies

If two quantities have the same average values in every microcanonical ensemble, they have the same average value in every canonical ensemble

Average values in the microcanonical ensemble of functions of the energies of parts of the system

Average values of functions of the kinetic energy of a part of the system

Average values of the external forces in a microcanonical ensemble. Differential equation relating to these averages, having the form of the fundamental differential equation of thermodynamics



Theorems I-VI. Minimum properties of certain distributions

Theorem VII. The average index of the whole system compared with the sum of the average indices of the parts

Theorem VIII. The average index of the whole ensemble compared with the average indices of parts of the ensemble

Theorem IX. Effect on the average index of making the distribution-in-phase uniform within any limits



Under what conditions, and with what limitations, may we assume that a system will return in the course of time to its original phase, at least to any required degree of approximation?

Tendency in an ensemble of isolated systems toward a state of statistical equilibrium



Variation of the external coordinates can only cause a decrease in the average index of probability

This decrease may in general be diminished by diminishing the rapidity of the change in the external coordinates

The mutual action of two ensembles can only diminish the sum of their average indices of probability

In the mutual action of two ensembles which are canonically distributed, that which has the greater modulus will lose energy

Repeated action between any ensemble and others which are canonically distributed with the same modulus will tend to distribute the first-mentioned ensemble canonically with the same modulus

Process analogous to a Carnot's cycle

Analogous processes in thermodynamics



The finding in rational mechanics an a priori foundation for thermodynamics requires mechanical definitions of temperature and entropy. Conditions which the quantities thus defined must satisfy

The modulus of a canonical ensemble ($\Theta$), and the average index of probability taken negatively ($\bar{\eta}$), as analogues of temperature and entropy

The functions of the energy $d\epsilon/d \log V$ and $\log V$ as analogues of temperature and entropy

The functions of the energy $d\epsilon/d\phi$ and $\phi$ as analogues of temperature and entropy

Merits of the different systems

If a system of a great number of degrees of freedom is microcanonically distributed in phase, any very small part of it may be regarded as canonically distributed

Units of $\Theta$ and $\eta$ compared with those of temperature and entropy



Generic and specific definitions of a phase

Statistical equilibrium with respect to phases generically defined and with respect to phases specifically defined

Grand ensembles, petit ensembles

Grand ensemble canonically distributed

$\Omega$ must be finite

Equilibrium with respect to gain or loss of molecules

Average value of any quantity in grand ensemble canonically distributed

Differential equation identical in form with fundamental differential equation in thermodynamics

Average value of number of any kind of molecules ($\nu$)

Average value of $(\nu - \bar{\nu})^2$

Comparison of indices

When the number of particles in a system is to be treated as variable, the average index of probability for phases generically defined corresponds to entropy






WE shall use Hamilton's form of the equations of motion for a system of $n$ degrees of freedom, writing $q_1, \ldots q_n$ for the (generalized) coordinates, $\dot{q}_1, \ldots \dot{q}_n$ for the (generalized) velocities, and

$$F_1\, dq_1 + F_2\, dq_2 \ldots + F_n\, dq_n \tag{1}$$

for the moment of the forces. We shall call the quantities $F_1, \ldots F_n$ the (generalized) forces, and the quantities $p_1, \ldots p_n$, defined by the equations

$$p_1 = \frac{d\epsilon_p}{d\dot{q}_1}, \qquad p_2 = \frac{d\epsilon_p}{d\dot{q}_2}, \qquad \text{etc.,} \tag{2}$$

where $\epsilon_p$ denotes the kinetic energy of the system, the (generalized) momenta. The kinetic energy is here regarded as a function of the velocities and coordinates. We shall usually regard it as a function of the momenta and coordinates,* and on this account we denote it by $\epsilon_p$. This will not prevent us from occasionally using formulae like (2), where it is sufficiently evident the kinetic energy is regarded as function of the $q$'s and $\dot{q}$'s. But in expressions like $d\epsilon_p/dq_1$, where the denominator does not determine the question, the kinetic

* The use of the momenta instead of the velocities as independent variables is the characteristic of Hamilton's method which gives his equations of motion their remarkable degree of simplicity. We shall find that the fundamental notions of statistical mechanics are most easily defined, and are expressed in the most simple form, when the momenta with the coordinates are used to describe the state of a system.


energy is always to be treated in the differentiation as function of the $p$'s and $q$'s. We have then

$$\dot{q}_1 = \frac{d\epsilon_p}{dp_1}, \qquad \dot{p}_1 = -\frac{d\epsilon_p}{dq_1} + F_1, \qquad \text{etc.} \tag{3}$$

These equations will hold for any forces whatever. If the forces are conservative, in other words, if the expression (1) is an exact differential, we may set

$$F_1 = -\frac{d\epsilon_q}{dq_1}, \qquad F_2 = -\frac{d\epsilon_q}{dq_2}, \qquad \text{etc.,} \tag{4}$$

where $\epsilon_q$ is a function of the coordinates which we shall call the potential energy of the system. If we write $\epsilon$ for the total energy, we shall have

$$\epsilon = \epsilon_p + \epsilon_q, \tag{5}$$

and equations (3) may be written

$$\dot{q}_1 = \frac{d\epsilon}{dp_1}, \qquad \dot{p}_1 = -\frac{d\epsilon}{dq_1}, \qquad \text{etc.} \tag{6}$$

The potential energy ($\epsilon_q$) may depend on other variables beside the coordinates $q_1 \ldots q_n$. We shall often suppose it to depend in part on coordinates of external bodies, which we shall denote by $a_1$, $a_2$, etc. We shall then have for the complete value of the differential of the potential energy*

$$d\epsilon_q = -F_1\, dq_1 \ldots - F_n\, dq_n - A_1\, da_1 - A_2\, da_2 - \text{etc.,} \tag{7}$$

where $A_1$, $A_2$, etc., represent forces (in the generalized sense) exerted by the system on external bodies. For the total energy ($\epsilon$) we shall have

$$d\epsilon = \dot{q}_1\, dp_1 \ldots + \dot{q}_n\, dp_n - \dot{p}_1\, dq_1 \ldots - \dot{p}_n\, dq_n - A_1\, da_1 - A_2\, da_2 - \text{etc.} \tag{8}$$

It will be observed that the kinetic energy ($\epsilon_p$) in the most general case is a quadratic function of the $p$'s (or $\dot{q}$'s) involving also the $q$'s but not the $a$'s; that the potential energy, when it exists, is function of the $q$'s and $a$'s; and that the total energy, when it exists, is function of the $p$'s (or $\dot{q}$'s), the $q$'s, and the $a$'s. In expressions like $d\epsilon/dq_1$, the $p$'s, and not the $\dot{q}$'s, are to be taken as independent variables, as has already been stated with respect to the kinetic energy.

* It will be observed, that although we call $\epsilon_q$ the potential energy of the system which we are considering, it is really so defined as to include that energy which might be described as mutual to that system and external bodies.
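The Hamiltonian equations above lend themselves directly to numerical integration. The following sketch is an illustration supplied here, not from the text: it assumes a single degree of freedom with unit mass and unit force-constant, so that $\epsilon = p^2/2 + q^2/2$, and advances the motion by a semi-implicit Euler step, under which the energy remains very nearly constant.

```python
# A minimal numerical sketch of Hamilton's equations for one degree of freedom.
# Assumed toy system (not from the text): epsilon = p**2/2 + q**2/2,
# giving qdot = d(epsilon)/dp = p and pdot = -d(epsilon)/dq = -q.

def hamilton_step(p, q, dt):
    """One semi-implicit Euler step of qdot = de/dp, pdot = -de/dq."""
    p = p - q * dt   # pdot = -q
    q = q + p * dt   # qdot =  p (using the updated momentum)
    return p, q

p, q = 0.0, 1.0       # start at rest, displaced one unit
dt, steps = 0.001, 10000
for _ in range(steps):
    p, q = hamilton_step(p, q, dt)

energy = p * p / 2 + q * q / 2
print(q, p, energy)   # the energy stays close to its initial value 0.5
```

The semi-implicit (symplectic) step is chosen deliberately: it respects the phase-space structure of the equations, which is the subject of the following sections.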

Let us imagine a great number of independent systems, identical in nature, but differing in phase, that is, in their condition with respect to configuration and velocity. The forces are supposed to be determined for every system by the same law, being functions of the coordinates of the system $q_1, \ldots q_n$, either alone or with the coordinates $a_1$, $a_2$, etc. of certain external bodies. It is not necessary that they should be derivable from a force-function. The external coordinates $a_1$, $a_2$, etc. may vary with the time, but at any given time have fixed values. In this they differ from the internal coordinates $q_1, \ldots q_n$, which at the same time have different values in the different systems considered.

Let us especially consider the number of systems which at a given instant fall within specified limits of phase, viz., those for which

$$p_1' < p_1 < p_1'', \qquad q_1' < q_1 < q_1'',$$
$$\ldots \qquad \ldots$$
$$p_n' < p_n < p_n'', \qquad q_n' < q_n < q_n'', \tag{9}$$

the accented letters denoting constants. We shall suppose the differences $p_1'' - p_1'$, $q_1'' - q_1'$, etc. to be infinitesimal, and that the systems are distributed in phase in some continuous manner,* so that the number having phases within the limits specified may be represented by

$$D\,(p_1'' - p_1') \ldots (p_n'' - p_n')\,(q_1'' - q_1') \ldots (q_n'' - q_n'), \tag{10}$$

* In strictness, a finite number of systems cannot be distributed continuously in phase. But by increasing indefinitely the number of systems, we may approximate to a continuous law of distribution, such as is here described. To avoid tedious circumlocution, language like the above may be allowed, although wanting in precision of expression, when the sense in which it is to be taken appears sufficiently clear.


or more briefly by

$$D\; dp_1 \ldots dp_n\, dq_1 \ldots dq_n, \tag{11}$$

where $D$ is a function of the $p$'s and $q$'s and in general of $t$ also, for as time goes on, and the individual systems change their phases, the distribution of the ensemble in phase will in general vary. In special cases, the distribution in phase will remain unchanged. These are cases of statistical equilibrium.

If we regard all possible phases as forming a sort of extension of $2n$ dimensions, we may regard the product of differentials in (11) as expressing an element of this extension, and $D$ as expressing the density of the systems in that element. We shall call the product

$$dp_1 \ldots dp_n\, dq_1 \ldots dq_n \tag{12}$$

an element of extension-in-phase, and $D$ the density-in-phase of the systems.

It is evident that the changes which take place in the density of the systems in any given element of extension-in-phase will depend on the dynamical nature of the systems and their distribution in phase at the time considered.

In the case of conservative systems, with which we shall be principally concerned, their dynamical nature is completely determined by the function which expresses the energy ($\epsilon$) in terms of the $p$'s, $q$'s, and $a$'s (a function supposed identical for all the systems); in the more general case which we are considering, the dynamical nature of the systems is determined by the functions which express the kinetic energy ($\epsilon_p$) in terms of the $p$'s and $q$'s, and the forces in terms of the $q$'s and $a$'s. The distribution in phase is expressed for the time considered by $D$ as function of the $p$'s and $q$'s. To find the value of $dD/dt$ for the specified element of extension-in-phase, we observe that the number of systems within the limits can only be varied by systems passing the limits, which may take place in $4n$ different ways, viz., by the $p_1$ of a system passing the limit $p_1'$, or the limit $p_1''$, or by the $q_1$ of a system passing the limit $q_1'$ or the limit $q_1''$, etc. Let us consider these cases separately.


In the first place, let us consider the number of systems which in the time $dt$ pass into or out of the specified element by $p_1$ passing the limit $p_1'$. It will be convenient, and it is evidently allowable, to suppose $dt$ so small that the quantities $\dot{p}_1\,dt$, $\dot{q}_1\,dt$, etc., which represent the increments of $p_1$, $q_1$, etc., in the time $dt$ shall be infinitely small in comparison with the infinitesimal differences $p_1'' - p_1'$, $q_1'' - q_1'$, etc., which determine the magnitude of the element of extension-in-phase. The systems for which $p_1$ passes the limit $p_1'$ in the interval $dt$ are those for which at the commencement of this interval the value of $p_1$ lies between $p_1'$ and $p_1' - \dot{p}_1\,dt$, as is evident if we consider separately the cases in which $\dot{p}_1$ is positive and negative. Those systems for which $p_1$ lies between these limits, and the other $p$'s and $q$'s between the limits specified in (9), will therefore pass into or out of the element considered according as $\dot{p}_1$ is positive or negative, unless indeed they also pass some other limit specified in (9) during the same interval of time. But the number which pass any two of these limits will be represented by an expression containing the square of $dt$ as a factor, and is evidently negligible, when $dt$ is sufficiently small, compared with the number which we are seeking to evaluate, and which (with neglect of terms containing $dt^2$) may be found by substituting $\dot{p}_1\,dt$ for $p_1'' - p_1'$ in (10) or for $dp_1$ in (11). The expression

$$D\,\dot{p}_1\, dt\; dp_2 \ldots dp_n\, dq_1 \ldots dq_n \tag{13}$$

will therefore represent, according as it is positive or negative, the increase or decrease of the number of systems within the given limits which is due to systems passing the limit $p_1'$. A similar expression, in which however $D$ and $\dot{p}_1$ will have slightly different values (being determined for $p_1''$ instead of $p_1'$), will represent the decrease or increase of the number of systems due to the passing of the limit $p_1''$. The difference of the two expressions, or

$$\frac{d\,(D\,\dot{p}_1)}{dp_1}\; dp_1 \ldots dp_n\, dq_1 \ldots dq_n\, dt, \tag{14}$$


will represent algebraically the decrease of the number of systems within the limits due to systems passing the limits $p_1'$ and $p_1''$.

The decrease in the number of systems within the limits due to systems passing the limits $q_1'$ and $q_1''$ may be found in the same way. This will give

$$\left( \frac{d\,(D\,\dot{p}_1)}{dp_1} + \frac{d\,(D\,\dot{q}_1)}{dq_1} \right) dp_1 \ldots dp_n\, dq_1 \ldots dq_n\, dt \tag{15}$$

for the decrease due to passing the four limits $p_1'$, $p_1''$, $q_1'$, $q_1''$. But since the equations of motion (3) give

$$\frac{d\dot{p}_1}{dp_1} + \frac{d\dot{q}_1}{dq_1} = 0, \tag{16}$$

the expression reduces to

$$\left( \frac{dD}{dp_1}\,\dot{p}_1 + \frac{dD}{dq_1}\,\dot{q}_1 \right) dp_1 \ldots dp_n\, dq_1 \ldots dq_n\, dt. \tag{17}$$

If we prefix $\Sigma$ to denote summation relative to the suffixes $1 \ldots n$, we get the total decrease in the number of systems within the limits in the time $dt$. That is,

$$\sum \left( \frac{dD}{dp_1}\,\dot{p}_1 + \frac{dD}{dq_1}\,\dot{q}_1 \right) dp_1 \ldots dp_n\, dq_1 \ldots dq_n\, dt = -\,dD\; dp_1 \ldots dp_n\, dq_1 \ldots dq_n, \tag{18}$$

or

$$\left( \frac{dD}{dt} \right)_{p,\,q} = -\sum \left( \frac{dD}{dp_1}\,\dot{p}_1 + \frac{dD}{dq_1}\,\dot{q}_1 \right), \tag{19}$$

where the suffix applied to the differential coefficient indicates that the $p$'s and $q$'s are to be regarded as constant in the differentiation. The condition of statistical equilibrium is therefore

$$\sum \left( \frac{dD}{dp_1}\,\dot{p}_1 + \frac{dD}{dq_1}\,\dot{q}_1 \right) = 0. \tag{20}$$
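As an example of a distribution satisfying this condition of statistical equilibrium (a check supplied here, though it anticipates the canonical distribution of later chapters rather than following the text at this point): let the forces be conservative and let $D$ depend on the phase only through the energy, $D = f(\epsilon)$. Then by Hamilton's equations

```latex
% Any density that is a function of the energy alone is in statistical
% equilibrium: using qdot_i = d(epsilon)/dp_i and pdot_i = -d(epsilon)/dq_i,
\sum_i \left( \frac{dD}{dp_i}\,\dot{p}_i + \frac{dD}{dq_i}\,\dot{q}_i \right)
 = f'(\epsilon) \sum_i \left( \frac{d\epsilon}{dp_i}\,\dot{p}_i
   + \frac{d\epsilon}{dq_i}\,\dot{q}_i \right)
 = f'(\epsilon) \sum_i \left( \dot{q}_i\,\dot{p}_i - \dot{p}_i\,\dot{q}_i \right)
 = 0.
```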

If at any instant this condition is fulfilled for all values of the $p$'s and $q$'s, $(dD/dt)_{p,q}$ vanishes, and therefore the condition will continue to hold, and the distribution in phase will be permanent, so long as the external coordinates remain constant. But the statistical equilibrium would in general be disturbed by a change in the values of the external coordinates, which would alter the values of the $\dot{p}$'s as determined by equations (3), and thus disturb the relation expressed in the last equation. If we write equation (19) in the form

$$\left( \frac{dD}{dt} \right)_{p,\,q} + \sum \left( \frac{dD}{dp_1}\,\dot{p}_1 + \frac{dD}{dq_1}\,\dot{q}_1 \right) = 0, \tag{21}$$

it will be seen to express a theorem of remarkable simplicity. Since $D$ is a function of $t$, $p_1, \ldots p_n$, $q_1, \ldots q_n$, its complete differential will consist of parts due to the variations of all these quantities. Now the first term of the equation represents the increment of $D$ due to an increment of $t$ (with constant values of the $p$'s and $q$'s), and the rest of the first member represents the increments of $D$ due to increments of the $p$'s and $q$'s, expressed by $\dot{p}_1\,dt$, $\dot{q}_1\,dt$, etc. But these are precisely the increments which the $p$'s and $q$'s receive in the movement of a system in the time $dt$. The whole expression represents the total increment of $D$ for the varying phase of a moving system. We have therefore the theorem:

In an ensemble of mechanical systems identical in nature and subject to forces determined by identical laws, but distributed in phase in any continuous manner, the density-in-phase is constant in time for the varying phases of a moving system; provided, that the forces of a system are functions of its coordinates, either alone or with the time.*

This may be called the principle of conservation of density-in-phase. It may also be written

\[
\left( \frac{dD}{dt} \right)_{a, \ldots, h} = 0, \tag{22}
\]

where a, ... h represent the arbitrary constants of the integral equations of motion, and are suffixed to the differential coefficient to indicate that they are to be regarded as constant in the differentiation.

* The condition that the forces F_1, ... F_n are functions of q_1, ... q_n and a_1, a_2, etc., which last are functions of the time, is analytically equivalent to the condition that F_1, ... F_n are functions of q_1, ... q_n and the time. Explicit mention of the external coordinates, a_1, a_2, etc., has been made in the preceding pages, because our purpose will require us hereafter to consider these coordinates and the connected forces, A_1, A_2, etc., which represent the action of the systems on external bodies.
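The condition of statistical equilibrium stated above admits a quick numerical check. The following is a minimal modern sketch, not part of the original text, using a hypothetical one-degree-of-freedom harmonic oscillator (q̇ = p, ṗ = -q): any density-in-phase depending on the phase only through the energy makes the sum of the two differential coefficients vanish.

```python
import math

# Modern sketch (not in the original text): for q_dot = p, p_dot = -q,
# a density that is a function of the energy (p^2 + q^2)/2 alone
# satisfies the condition of statistical equilibrium
#     d(D p_dot)/dp + d(D q_dot)/dq = 0.

def D(p, q):
    """An equilibrium density: a function of the energy alone."""
    return math.exp(-(p * p + q * q) / 2.0)

def equilibrium_residual(p, q, h=1e-5):
    """Central-difference estimate of d(D p_dot)/dp + d(D q_dot)/dq."""
    d_dp = (D(p + h, q) * (-q) - D(p - h, q) * (-q)) / (2 * h)
    d_dq = (D(p, q + h) * p - D(p, q - h) * p) / (2 * h)
    return d_dp + d_dq

# The residual vanishes (to discretization error) at arbitrary phases.
print(abs(equilibrium_residual(0.7, -1.3)) < 1e-8)
```

Analytically the two terms are p q D and -p q D, so the cancellation is exact; the finite-difference residual is limited only by rounding.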

We may give to this principle a slightly different expression. Let us call the value of the integral

\[
\int \cdots \int dp_1 \ldots dp_n \, dq_1 \ldots dq_n \tag{23}
\]

taken within any limits the extension-in-phase within those limits.

When the phases bounding an extension-in-phase vary in the course of time according to the dynamical laws of a system subject to forces which are functions of the coordinates either alone or with the time, the value of the extension-in-phase thus bounded remains constant. In this form the principle may be called the principle of conservation of extension-in-phase. In some respects this may be regarded as the most simple statement of the principle, since it contains no explicit reference to an ensemble of systems.

Since any extension-in-phase may be divided into infinitesimal portions, it is only necessary to prove the principle for an infinitely small extension. The number of systems of an ensemble which fall within the extension will be represented by the integral

\[
\int \cdots \int D \, dp_1 \ldots dp_n \, dq_1 \ldots dq_n.
\]

If the extension is infinitely small, we may regard D as constant in the extension and write

\[
D \int \cdots \int dp_1 \ldots dp_n \, dq_1 \ldots dq_n
\]

for the number of systems. The value of this expression must be constant in time, since no systems are supposed to be created or destroyed, and none can pass the limits, because the motion of the limits is identical with that of the systems. But we have seen that D is constant in time, and therefore the integral

\[
\int \cdots \int dp_1 \ldots dp_n \, dq_1 \ldots dq_n,
\]


which we have called the extension-in-phase, is also constant in time.*
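The conservation of extension-in-phase can also be seen numerically. In the sketch below (a modern aside; the pendulum system, step size, and boundary circle are hypothetical choices, not from the text), a closed curve of phases is carried forward by an area-preserving map, and the enclosed area, which is the extension-in-phase for one degree of freedom, is compared before and after.

```python
import math

# Modern sketch (not in the original text): the extension-in-phase (area
# in the (q, p) plane) enclosed by a moving boundary of phases is
# conserved.  We evolve a circle of phases under a pendulum-like system
# using the symplectic Euler map, a composition of two area-preserving
# shears, and compare enclosed areas by the shoelace formula.

def step(q, p, h=0.05):
    """One symplectic-Euler step for q_dot = p, p_dot = -sin(q)."""
    p = p - h * math.sin(q)
    q = q + h * p
    return q, p

def shoelace(points):
    """Area enclosed by a closed polygon of (q, p) vertices."""
    area = 0.0
    for (q1, p1), (q2, p2) in zip(points, points[1:] + points[:1]):
        area += q1 * p2 - q2 * p1
    return abs(area) / 2.0

n = 2000
circle = [(1.0 + 0.5 * math.cos(2 * math.pi * k / n),
           0.5 + 0.5 * math.sin(2 * math.pi * k / n)) for k in range(n)]

before = shoelace(circle)
for _ in range(40):                      # forty steps of the flow
    circle = [step(q, p) for q, p in circle]
after = shoelace(circle)

# The boundary deforms, but the enclosed extension-in-phase does not.
print(abs(after - before) / before < 1e-4)
```

The map is exactly area-preserving, so the only drift comes from approximating the curved boundary by a polygon of 2000 vertices; a tighter tolerance would need more boundary points.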

Since the system of coordinates employed in the foregoing discussion is entirely arbitrary, the values of the coordinates relating to any configuration and its immediate vicinity do not impose any restriction upon the values relating to other configurations. The fact that the quantity which we have called density-in-phase is constant in time for any given system, implies therefore that its value is independent of the coordinates which are used in its evaluation. For let the density-in-phase as evaluated for the same time and phase by one system of coordinates be D_1', and by another system D_2'. A system which at that time has that phase will at another time have another phase. Let the density as calculated for this second time and phase by a third system of coordinates be D_3''. Now we may imagine a system of coordinates which at and near the first configuration will coincide with the first system of coordinates, and at and near the second configuration will coincide with the third system of coordinates. This will give D_1' = D_3''. Again we may imagine a system of coordinates which at and near the first configuration will coincide with the second system of coordinates, and at and near the

* If we regard a phase as represented by a point in space of 2n dimensions, the changes which take place in the course of time in our ensemble of systems will be represented by a current in such space. This current will be steady so long as the external coordinates are not varied. In any case the current will satisfy a law which in its various expressions is analogous to the hydrodynamic law which may be expressed by the phrases conservation of volumes or conservation of density about a moving point, or by the equation

\[
\frac{d\dot{x}}{dx} + \frac{d\dot{y}}{dy} + \frac{d\dot{z}}{dz} = 0.
\]

The analogue in statistical mechanics of this equation, viz.,

\[
\sum \left( \frac{d\dot{p}_1}{dp_1} + \frac{d\dot{q}_1}{dq_1} \right) = 0,
\]

may be derived directly from equations (3) or (6), and may suggest such theorems as have been enunciated, if indeed it is not regarded as making them intuitively evident. The somewhat lengthy demonstrations given above will at least serve to give precision to the notions involved, and familiarity with their use.


second configuration will coincide with the third system of coordinates. This will give D_2' = D_3''. We have therefore D_1' = D_2'.

It follows, or it may be proved in the same way, that the value of an extension-in-phase is independent of the system of coordinates which is used in its evaluation. This may easily be verified directly. If q_1, ... q_n, Q_1, ... Q_n are two systems of coordinates, and p_1, ... p_n, P_1, ... P_n the corresponding momenta, we have to prove that

\[
\int \cdots \int dp_1 \ldots dp_n \, dq_1 \ldots dq_n
= \int \cdots \int dP_1 \ldots dP_n \, dQ_1 \ldots dQ_n, \tag{24}
\]
when the multiple integrals are taken within limits consisting of the same phases. And this will be evident from the principle on which we change the variables in a multiple integral, if we prove that

\[
\frac{d(Q_1, \ldots, Q_n, P_1, \ldots, P_n)}{d(q_1, \ldots, q_n, p_1, \ldots, p_n)} = 1, \tag{25}
\]

where the first member of the equation represents a Jacobian or functional determinant. Since all its elements of the form dQ/dp are equal to zero, the determinant reduces to a product of two, and we have to prove that

\[
\frac{d(P_1, \ldots, P_n)}{d(p_1, \ldots, p_n)} \times \frac{d(Q_1, \ldots, Q_n)}{d(q_1, \ldots, q_n)} = 1. \tag{26}
\]
We may transform any element of the first of these determinants as follows. By equations (2) and (3), and in view of the fact that the Q̇'s are linear functions of the q̇'s, and therefore of the p's, with coefficients involving the q's, so that a differential coefficient of the form dQ̇_r/dp_y is a function of the q's alone, we get*

\[
\frac{dP_x}{dp_y} = \frac{d}{dp_y} \frac{d\epsilon_p}{d\dot{Q}_x}
= \frac{d}{d\dot{Q}_x} \frac{d\epsilon_p}{dp_y}
= \frac{d\dot{q}_y}{d\dot{Q}_x}. \tag{27}
\]

* The form of the equation

\[
\frac{d}{dp_y} \frac{d\epsilon_p}{d\dot{Q}_x} = \frac{d}{d\dot{Q}_x} \frac{d\epsilon_p}{dp_y}
\]

in (27) reminds us of the fundamental identity in the differential calculus relating to the order of differentiation with respect to independent variables. But it will be observed that here the variables Q̇_x and p_y are not independent, and that the proof depends on the linear relation between the Q̇'s and the p's.

But since

\[
\dot{q}_y = \sum_{r=1}^{n} \frac{dq_y}{dQ_r} \, \dot{Q}_r,
\]

we have

\[
\frac{d\dot{q}_y}{d\dot{Q}_x} = \frac{dq_y}{dQ_x}. \tag{28}
\]
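For a single degree of freedom the Jacobian argument can be verified directly. In the sketch below (a modern aside with a hypothetical change of coordinate f, not taken from the text), a point transformation Q = f(q) carries the momentum to P = p/f'(q), and the functional determinant d(Q, P)/d(q, p) is estimated by central differences and found equal to 1.

```python
import math

# Modern sketch (not in the original text): for a point transformation
# of one coordinate, Q = f(q), the conjugate momentum transforms as
# P = p * dq/dQ = p / f'(q).  The Jacobian d(Q, P)/d(q, p) should be 1,
# so that dP dQ = dp dq: the extension-in-phase is the same in both
# systems of coordinates.  The function f is a hypothetical example.

def f(q):
    return q ** 3 + q          # a monotonic change of coordinate

def fprime(q):
    return 3 * q ** 2 + 1

def transform(q, p):
    return f(q), p / fprime(q)

def jacobian(q, p, h=1e-6):
    """Central-difference determinant of d(Q, P)/d(q, p)."""
    Qq, Pq = [(a - b) / (2 * h) for a, b in
              zip(transform(q + h, p), transform(q - h, p))]
    Qp, Pp = [(a - b) / (2 * h) for a, b in
              zip(transform(q, p + h), transform(q, p - h))]
    return Qq * Pp - Qp * Pq

# dQ/dp vanishes and dP/dp = 1/f'(q) = dq/dQ, so the determinant is 1.
print(abs(jacobian(0.8, -1.7) - 1.0) < 1e-6)
```

Here dQ/dq = f'(q) and dP/dp = 1/f'(q) cancel exactly, which is the one-dimensional case of the relation dP_x/dp_y = dq_y/dQ_x established above.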