
Introduction

Two of the most persistent `non-problems' in statistical mechanics center around the possible time evolution of entropy, and the so-called `paradox of irreversibility'. This characterization will be expanded upon below, and we simply define the problems here, beginning with the first. Because quantum-mechanical descriptions are usually simpler to formulate mathematically, we shall for the most part employ that formalism -- for example, by writing the entropy in terms of the density matrix, or statistical operator ρ, as

$$ S \;=\; -k\,\mathrm{Tr}\bigl(\rho\ln\rho\bigr)\,, \eqno(1) $$

where k is Boltzmann's constant, and N the number of particles in the system. The trace is invariant under unitary transformation, hence consternation is often expressed at the observation that S cannot evolve temporally by means of the dynamical equations of motion. We shall see that, in fact, this is precisely the behavior one should expect for the theoretical entropy of thermodynamics. As for the `paradox', it can be loosely stated as the apparent contradiction between time-reversal invariance of the microscopic dynamical laws on the one hand, and the observed asymmetrical time development of nonequilibrium systems on the other. If this macroscopic behavior could be derived rigorously from the time-symmetric microscopic equations there would indeed be a paradox, and for this reason it has long been thought that the microscopic theory must be supplemented by some kind of additional postulate [1,2]. The nature and uniqueness of such a postulate have always been uncertain, and below we shall argue that no such appendage to the physical laws is required.
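To make the first of these points explicit: for an isolated system the statistical operator evolves unitarily, ρ(t) = U(t) ρ(0) U†(t) with U(t) = exp(-iHt/ħ), and since ln(U ρ U†) = U (ln ρ) U†, the invariance of the trace under unitary transformation gives

$$ S(t) \;=\; -k\,\mathrm{Tr}\bigl[\rho(t)\ln\rho(t)\bigr] \;=\; -k\,\mathrm{Tr}\bigl[\rho(0)\ln\rho(0)\bigr] \;=\; S(0)\,, $$

so the microscopic dynamics alone cannot change this entropy.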

If one is to talk of equilibrium and nonequilibrium states, and the relationships between them, then there is an obvious need to define them with some degree of precision. Historically this has been neither a simple chore, nor one about which there is essential consensus. There seems to be no way to define an equilibrium state other than through measurement -- that is, operationally. In many cases this means preparation, and in that event we see immediately an element of subjectivity in the process, for the experimenter must choose which macroscopic properties of the system to control and measure. When those quantities are chosen as manifestations of constants of the motion for the internal dynamics, and repeated measurement yields essentially the same values, then we are inclined to assert that the system is in an equilibrium state characterized by these quantities. How much repetition is necessary? This is equivalent to asking theoretically how much time is required for extraneous correlations in the system to decay -- and there is no theory for either. There is one further constraint needed to complete the specification of an equilibrium state, and that is the assertion that it has no memory of anything done to the system in the past. Without this, clever experimenters can produce all kinds of magic.

To the extent that an equilibrium thermodynamic state depends on the defining measurements or observations, real or imagined, it is clearly not unique. Even when those same quantities are employed to re-define the state after perturbation away from equilibrium, the measured values need not be the same upon return to equilibrium. Thus, the notion of uniqueness in this regard must be coupled with considerable qualification. One can, however, introduce the notion of experimentally reproducible processes (ERP), so that if some nonequilibrium initial conditions are reproducible the system should relax, if at all, to the same equilibrium state as defined above. Indeed, we can take this as a definition of ERP.

As for nonequilibrium states, there is almost a continuum of qualitative possibilities, and the question of uniqueness is effectively beyond reach in the general case. But, if we narrow the scope appropriately, it is possible to proceed in somewhat the same way as above. One can define ERP to the extent that the initial conditions defining a nonequilibrium state are reproducible, and the principal processes studied in this regard are usually stationary. In this sense, the beautiful structure of high-Reynolds-number turbulence downstream from a sphere, say, is defined almost exactly. The essential feature, however, is once again operational, and the state is defined by the particular physical quantities monitored, whether or not they are constants of the motion.

Let us now rephrase the last three paragraphs so as to define both the system itself and its macroscopic states. To any particular physical system there correspond numerous possible thermodynamic systems, defined by the macroscopic physical quantities monitored in its observation. The sets of values taken by these quantities, as discussed above, constitute the possible thermodynamic states of the system. Any other set of measured physical parameters defines a different thermodynamic system for the same physical system, along with its possible thermodynamic states. Whether the latter are equilibrium or nonequilibrium, as well as reproducible, depends on further considerations, as noted.

Macroscopic physical systems can be further classified as isolated, closed, or open, according to their degree of interaction with the outside world. A truly isolated system would not be observable, so we shall define it as one whose properties can be observed, but which exchanges nothing, not even energy, with the external environment. These strictures are relaxed for a closed system, which can exchange energy with its surroundings, a process sometimes referred to as thermal contact. Finally, a truly open system can exchange particles and other physical attributes with its environment, and the conservation laws generally contain source terms.

Thermodynamic systems and states are characterized by macroscopic parameters, yet their evolution is undoubtedly driven by the underlying microscopic dynamics. The connection between the two was uncovered by Clausius, who recognized the need for a new thermodynamic state function for this purpose. If a closed system undergoes a change from one thermodynamic state A to another, B, then the change in entropy for that process is defined as

$$ \Delta S \;\equiv\; S_B - S_A \;=\; \int_A^B \frac{dQ}{T}\,. \eqno(2) $$

The integral is to be taken over a reversible path connecting A and B, thought of as a locus of equilibrium states. This is necessary because the temperature T is only defined for equilibrium states, and therefore the experimental entropy is defined only for those states. The point cannot be emphasized strongly enough, for Eq.(2) describes only a net result of processes that begin and end in states of thermal equilibrium -- it says nothing about the nonequilibrium intermediate states passed through by real processes. These latter can often be approximated by quasi-static processes in the laboratory, of course. Note, also, that S is a function only of macroscopic quantities describing the system and makes no reference to microstates. It is thus defined in the same quasi-subjective way as the thermodynamic system and its states, through specification of which macroscopic parameters are to be monitored.
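For example, if a body with constant heat capacity C is carried reversibly from equilibrium at temperature T_A to equilibrium at temperature T_B, Eq.(2) gives

$$ \Delta S \;=\; \int_{T_A}^{T_B} \frac{C\,dT}{T} \;=\; C\,\ln\frac{T_B}{T_A}\,, $$

a number fixed entirely by the two equilibrium end states, however irreversibly the actual laboratory process between them may have proceeded.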

Equation (2) effectively describes how entropy changes are actually measured in the laboratory. The heat, dQ, is just a euphemism for energy transferred in an unorganized form, a place to lump our ignorance of microscopic details that cannot be followed as orderly, coherent processes. In this sense, it plays a role similar to that of generalized forces in classical mechanics. The physical entropy provides a macroscopic description of this transfer, and thereby the connection between microscopic and macroscopic behavior. It is in this sense that Clausius' definition represents an ingenious leap of insight.

In view of the preceding comments, the obvious question at this point concerns the time evolution of the system. That is, if the system is prepared, or found to be, in some nonequilibrium state, how do the physical quantities describing the system, including S, evolve from that initial state? Under what conditions can one predict that evolution will be to an equilibrium state? Numerous attempts have been made, and continue to be made, to define time-dependent thermodynamic functions, including the entropy, so as to follow their evolution in detail by means of some kind of macroscopic equations of motion. Because there are no definite criteria for such constructions, they have always been plagued by the specter of non-uniqueness. This is particularly true with respect to the entropy, for experimentally it remains defined only for equilibrium states, and thus a definitive theory of an S(t) for nonequilibrium states remains an open question.

A related problem has to do with the stability of the equilibrium state and the notion of irreversibility. Why are thermodynamic systems not seen to evolve away from an equilibrium state? The standard response is, ``because of the second law of thermodynamics'', which provides a `selection rule' of sorts. Thus, we never see an automobile spontaneously cool itself and jump to the top of the nearest building, despite the fact that the process conserves energy. But, then, it is necessary to understand in some detail what exactly is meant by irreversibility and the second law. With regard to the former we shall follow Planck [3] and call a macroscopic process A → B thermodynamically reversible if, by any means, the original macroscopic state A can be recovered without corresponding changes in the external environment.

As Clausius formulated it, the second law simply amounts to the statement that ΔS ≥ 0, in terms of the total entropies of all systems involved in the process that begins and ends in a state of thermal equilibrium. We re-state this in what has come to be known as the weak form of the second law: in a reproducible adiabatic process that starts from a state of complete thermal equilibrium, the experimental entropy cannot decrease. The strong form, which is what is at issue in most questions of the type we have been asking, is that not only will the entropy not decrease in an irreversible process, but it will in fact increase to the maximum allowed by the constraints on the system. While it is relatively easy to derive the weak form, as we shall do below, the strong form is usually ignored. Again, the question of the exact role of particle dynamics arises -- as mentioned earlier, for example, in connection with the invariance of the theoretical expression for S under unitary transformations. A hint as to where the resolution lies was already provided by Gibbs [4]: ``The impossibility of an uncompensated decrease of entropy seems to be reduced to an improbability'', an observation that Boltzmann adopted as the theme for his own book [5].
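The adiabatic free expansion of n moles of an ideal gas from volume V_1 into a total volume V_2 > V_1 illustrates the distinction: the weak form guarantees only that the experimental entropy of the final equilibrium state is not less than that of the initial one, whereas the strong form asserts that the gas actually attains the maximum entropy compatible with the new constraint, for which Eq.(2), evaluated along any reversible path joining the two equilibrium states, gives

$$ \Delta S \;=\; nR\,\ln\frac{V_2}{V_1} \;>\; 0\,. $$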


