Whatever be the function [phi], the mean value tends toward zero as _t_ increases, and as the minor planets have certainly accomplished a very great number of revolutions, I may assert that this mean value is very small.

I may choose [phi] as I wish, save always one restriction: this function must be continuous; and, in fact, from the point of view of subjective probability, the choice of a discontinuous function would have been unreasonable. For instance, what reason could I have for supposing that the initial longitude might be exactly 0, but that it could not lie between 0 and 1?

But the difficulty reappears if we take the point of view of objective probability, if we pass from our imaginary distribution in which the fictitious matter was supposed continuous to the real distribution in which our representative points form, as it were, discrete atoms.

The mean value of sin (_at_ + _b_) will be represented quite simply by

(1/_n_){[Sigma] sin (_at_ + _b_)},

_n_ being the number of minor planets. In lieu of a double integral referring to a continuous function, we shall have a sum of discrete terms. And yet no one will seriously doubt that this mean value is practically very small.

Our representative points being very close together, our discrete sum will in general differ very little from an integral.

An integral is the limit toward which a sum of terms tends when the number of these terms is indefinitely increased. If the terms are very numerous, the sum will differ very little from its limit, that is to say from the integral, and what I said of this latter will still be true of the sum itself.
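The argument can be sketched numerically (all values here are hypothetical, chosen only for illustration): even when every planet starts at the same longitude, the slight differences among their mean motions spread the phases out, and the discrete mean dies away.

```python
import math
import random

# Hypothetical setup: n minor planets whose mean motions a differ slightly,
# all started at the same longitude b = pi/2 -- the least favorable case.
# The mean of sin(a*t + b) still becomes very small once many revolutions
# have accumulated.
random.seed(0)
n = 2000
motions = [random.uniform(0.9, 1.1) for _ in range(n)]  # the a's
b = math.pi / 2

def mean_sin(t):
    return sum(math.sin(a * t + b) for a in motions) / n

print(mean_sin(0.0))          # 1.0: the planets start bunched together
print(abs(mean_sin(1000.0)))  # very small after many revolutions
```

The discrete sum behaves like the integral: at _t_ = 0 the mean is 1, but after many revolutions it is of the order of a few hundredths.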

Nevertheless, there are exceptions. If, for instance, for all the minor planets,

_b_ = [pi]/2 - _at_,

the longitude of all the planets at the time _t_ would be [pi]/2, and the mean value would evidently be equal to unity. For this to be the case, it would be necessary that at the epoch 0 the minor planets should all have been lying on a spiral of peculiar form, with its spires very close together. Every one will admit that such an initial distribution is extremely improbable (and, even supposing it realized, the distribution would not be uniform at the present time, for example, on January 1, 1913, but it would become so a few years later).

Why then do we think this initial distribution improbable? This must be explained, because if we had no reason for rejecting as improbable this absurd hypothesis everything would break down, and we could no longer make any affirmation about the probability of this or that present distribution.

Once more we shall invoke the principle of sufficient reason to which we must always recur. We might admit that at the beginning the planets were distributed almost in a straight line. We might admit that they were irregularly distributed. But it seems to us that there is no sufficient reason for the unknown cause that gave them birth to have acted along a curve so regular and yet so complicated, which would appear to have been expressly chosen so that the present distribution would not be uniform.

IV. ROUGE ET NOIR.--The questions raised by games of chance, such as roulette, are, fundamentally, entirely analogous to those we have just treated. For example, a wheel is partitioned into a great number of equal subdivisions, alternately red and black. A needle is whirled with force, and after having made a great number of revolutions, it stops before one of these subdivisions. The probability that this division is red is evidently 1/2. The needle describes an angle [theta], including several complete revolutions. I do not know what is the probability that the needle may be whirled with a force such that this angle should lie between [theta] and [theta]+_d_[theta]; but I can make a convention. I can suppose that this probability is [phi]([theta])_d_[theta]. As for the function [phi]([theta]), I can choose it in an entirely arbitrary manner. There is nothing that can guide me in my choice, but I am naturally led to suppose this function continuous.

Let [epsilon] be the length (measured on the circumference of radius 1) of each red and black subdivision. We have to calculate the integral of [phi]([theta])_d_[theta], extending it, on the one hand, to all the red divisions and, on the other hand, to all the black divisions, and to compare the results.

Consider an interval 2[epsilon], comprising a red division and a black division which follows it. Let M and _m_ be the greatest and least values of the function [phi]([theta]) in this interval. The integral extended to the red divisions will be smaller than [Sigma]M[epsilon]; the integral extended to the black divisions will be greater than [Sigma]_m_[epsilon]; the difference will therefore be less than [Sigma](M - _m_)[epsilon]. But, if the function [phi] is supposed continuous, and if, besides, the interval [epsilon] is very small with respect to the total angle described by the needle, the difference M - _m_ will be very small. The difference of the two integrals will therefore be very small, and the probability will be very nearly 1/2.
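A quick simulation confirms this: whatever smooth law we choose for the angle (here a Gaussian, an arbitrary assumption), red comes up about half the time so long as the divisions are small compared with the spread of the law.

```python
import math
import random

# Sketch: the needle's total angle theta is drawn from an arbitrary smooth
# law (a Gaussian here, purely for illustration); the wheel has many equal
# red/black divisions of width eps.
random.seed(1)
eps = 2 * math.pi / 100          # width of one division

def spin():
    theta = random.gauss(200 * math.pi, 30.0)  # many revolutions, smooth law
    return int(theta / eps) % 2 == 0           # True = red

trials = 100_000
reds = sum(spin() for _ in range(trials))
print(reds / trials)             # close to 0.5
```

Changing the mean or spread of the hypothetical law leaves the result near 1/2, exactly as the integral argument predicts.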

We see that without knowing anything of the function [phi], I must act as if the probability were 1/2. We understand, on the other hand, why, if, placing myself at the objective point of view, I observe a certain number of coups, observation will give me about as many black coups as red.

All players know this objective law; but it leads them into a remarkable error, which has been often exposed, but into which they always fall again. When the red has won, for instance, six times running, they bet on the black, thinking they are playing a safe game; because, say they, it is very rare that red wins seven times running.

In reality their probability of winning remains 1/2. Observation shows, it is true, that series of seven consecutive reds are very rare, but series of six reds followed by a black are just as rare.
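The claim is easy to check by simulation (the sequence of fair coups is, of course, invented): in a long record, runs of seven reds and runs of six reds followed by a black occur about equally often.

```python
import random

# Sketch: count, in a long sequence of fair coups, the windows showing
# seven reds in a row versus six reds followed by a black.
random.seed(2)
coups = [random.random() < 0.5 for _ in range(1_000_000)]  # True = red

seven_reds = sum(all(coups[i:i + 7]) for i in range(len(coups) - 6))
six_then_black = sum(all(coups[i:i + 6]) and not coups[i + 6]
                     for i in range(len(coups) - 6))
print(seven_reds, six_then_black)  # both near 1_000_000 / 128
```

Each pattern has probability (1/2)^7 per window, so the two counts agree to within sampling fluctuation; betting on black after six reds confers no advantage.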

They have noticed the rarity of the series of seven reds; if they have not remarked the rarity of six reds and a black, it is only because such series strike the attention less.

V. THE PROBABILITY OF CAUSES.--We now come to the problems of the probability of causes, the most important from the point of view of scientific applications. Two stars, for instance, are very close together on the celestial sphere. Is this apparent contiguity a mere effect of chance? Are these stars, although on almost the same visual ray, situated at very different distances from the earth, and consequently very far from one another? Or, perhaps, does the apparent correspond to a real contiguity? This is a problem on the probability of causes.

I recall first that at the outset of all problems of the probability of effects that have hitherto occupied us, we have always had to make a convention, more or less justified. And if in most cases the result was, in a certain measure, independent of this convention, this was only because of certain hypotheses which permitted us to reject _a priori_ discontinuous functions, for example, or certain absurd conventions.

We shall find something analogous when we deal with the probability of causes. An effect may be produced by the cause _A_ or by the cause _B_.

The effect has just been observed. We ask the probability that it is due to the cause _A_. This is an _a posteriori_ probability of cause. But I could not calculate it, if a convention more or less justified did not tell me _in advance_ what is the _a priori_ probability for the cause _A_ to come into play; I mean the probability of this event for some one who had not observed the effect.

The better to explain myself I go back to the example of the game of écarté mentioned above. My adversary deals for the first time and he turns up a king. What is the probability that he is a sharper? The formulas ordinarily taught give 8/9, a result evidently rather surprising. If we look at it closer, we see that the calculation is made as if, _before sitting down at the table_, I had considered that there was one chance in two that my adversary was not honest. An absurd hypothesis, because in that case I should certainly not have played with him, and this explains the absurdity of the conclusion.
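The 8/9 can be checked by Bayes' rule. In this sketch I assume, as the classical calculation does, that an honest dealer turns up a king with probability 4/32 = 1/8 (four kings in the thirty-two-card pack) and that a sharper turns one up every time.

```python
from fractions import Fraction

# Bayes' rule for P(cheat | king turned up), as a function of the
# a priori probability of cheating.
p_king_honest = Fraction(1, 8)   # four kings among thirty-two cards
p_king_cheat = Fraction(1, 1)    # a sharper always turns a king

def posterior_cheat(prior):
    return (prior * p_king_cheat) / (
        prior * p_king_cheat + (1 - prior) * p_king_honest)

print(posterior_cheat(Fraction(1, 2)))       # 8/9 -- the absurd prior of 1/2
print(posterior_cheat(Fraction(1, 10_000)))  # 8/10007 -- a sane prior, a tiny result
```

The posterior depends entirely on the prior convention: with a reasonable a priori probability of cheating, turning up one king leaves the suspicion negligible.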

The convention about the _a priori_ probability was unjustified, and that is why the calculation of the _a posteriori_ probability led me to an inadmissible result. We see the importance of this preliminary convention. I shall even add that if none were made, the problem of the _a posteriori_ probability would have no meaning. It must always be made either explicitly or tacitly.

Pass to an example of a more scientific character. I wish to determine an experimental law. This law, when I know it, can be represented by a curve. I make a certain number of isolated observations; each of these will be represented by a point. When I have obtained these different points, I draw a curve between them, striving to pass as near to them as possible and yet preserve for my curve a regular form, without angular points, or inflections too accentuated, or brusque variation of the radius of curvature. This curve will represent for me the probable law, and I assume not only that it will tell me the values of the function intermediate between those which have been observed, but also that it will give me the observed values themselves more exactly than direct observation. This is why I make it pass near the points, and not through the points themselves.

Here is a problem in the probability of causes. The effects are the measurements I have recorded; they depend on a combination of two causes: the true law of the phenomenon and the errors of observation.

Knowing the effects, we have to seek the probability that the phenomenon obeys this law or that, and that the observations have been affected by this or that error. The most probable law then corresponds to the curve traced, and the most probable error of an observation is represented by the distance of the corresponding point from this curve.
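As a minimal sketch of this (the linear law, noise level, and point count are all invented for illustration), an ordinary least-squares line drawn near noisy points recovers the true values at the observed abscissae more exactly than the raw measurements do:

```python
import random

# Hypothetical experiment: noisy observations of a true law y = 2x + 1.
# The least-squares line passes near (not through) the points, and at the
# observed x's it lies closer to the true law than the measurements do.
random.seed(3)
xs = [i / 10 for i in range(50)]
true = [2 * x + 1 for x in xs]
obs = [y + random.gauss(0, 0.5) for y in true]

n = len(xs)
mx, my = sum(xs) / n, sum(obs) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, obs))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
fit = [slope * x + intercept for x in xs]

err_obs = sum((o - t) ** 2 for o, t in zip(obs, true)) / n
err_fit = sum((f - t) ** 2 for f, t in zip(fit, true)) / n
print(err_fit < err_obs)  # the smooth curve beats the raw points
```

The regular curve averages away the accidental errors, which is why it gives the observed values "more exactly than direct observation."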

But the problem would have no meaning if, before any observation, I had not fashioned an _a priori_ idea of the probability of this or that law, and of the chances of error to which I am exposed.

If my instruments are good (and that I knew before making the observations), I shall not permit my curve to depart much from the points which represent the rough measurements. If they are bad, I may go a little further away from them in order to obtain a less sinuous curve; I shall sacrifice more to regularity.

Why then is it that I seek to trace a curve without sinuosities? It is because I consider _a priori_ a law represented by a continuous function (or by a function whose derivatives of high order are small), as more probable than a law not satisfying these conditions. Without this belief, the problem of which we speak would have no meaning; interpolation would be impossible; no law could be deduced from a finite number of observations; science would not exist.

Fifty years ago physicists considered, other things being equal, a simple law as more probable than a complicated law. They even invoked this principle in favor of Mariotte's law as against the experiments of Regnault. To-day they have repudiated this belief; and yet, how many times are they compelled to act as though they still held it! However that may be, what remains of this tendency is the belief in continuity, and we have just seen that if this belief were to disappear in its turn, experimental science would become impossible.

VI. THE THEORY OF ERRORS.--We are thus led to speak of the theory of errors, which is directly connected with the problem of the probability of causes. Here again we find _effects_, to wit, a certain number of discordant observations, and we seek to divine the _causes_, which are, on the one hand, the real value of the quantity to be measured; on the other hand, the error made in each isolated observation. It is necessary to calculate what is _a posteriori_ the probable magnitude of each error, and consequently the probable value of the quantity to be measured.

But as I have just explained, we should not know how to undertake this calculation if we did not admit _a priori_, that is to say, before all observation, a law of probability of errors. Is there a law of errors?

The law of errors admitted by all calculators is Gauss's law, which is represented by a certain transcendental curve known under the name of "the bell."

But first it is proper to recall the classic distinction between systematic and accidental errors. If we measure a length with too long a meter, we shall always find too small a number, and it will be of no use to measure several times; this is a systematic error. If we measure with an accurate meter, we may, however, make a mistake; but we go wrong, now too much, now too little, and when we take the mean of a great number of measurements, the error will tend to grow small. These are accidental errors.
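The distinction shows itself at once in a small simulation (the bias and spread are hypothetical): averaging many measurements shrinks the accidental error but leaves the systematic one untouched.

```python
import random

# Hypothetical measurement: a meter too long by 2 mm makes every reading
# 2 mm short (systematic), on top of a random spread (accidental).
random.seed(5)
true_length = 1.000
bias = -0.002                    # systematic: the too-long meter reads short

def measure():
    return true_length + bias + random.gauss(0, 0.010)  # accidental spread

n = 10_000
mean = sum(measure() for _ in range(n)) / n
print(round(mean - true_length, 3))  # about -0.002: the bias remains
```

The accidental part of the error falls off as 1/sqrt(_n_), but no number of repetitions removes the 2 mm bias.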

It is evident from the first that systematic errors can not satisfy Gauss's law; but do the accidental errors satisfy it? A great number of demonstrations have been attempted; almost all are crude paralogisms.

Nevertheless, we may demonstrate Gauss"s law by starting from the following hypotheses: the error committed is the result of a great number of partial and independent errors; each of the partial errors is very little and besides, obeys any law of probability, provided that the probability of a positive error is the same as that of an equal negative error. It is evident that these conditions will be often but not always fulfilled, and we may reserve the name of accidental for errors which satisfy them.
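These hypotheses can be checked numerically (the uniform law for the partial errors and all the counts are arbitrary choices; any symmetric law would do): the sum of many small independent symmetric errors is distributed along the bell.

```python
import math
import random

# Sketch of the hypothesis behind Gauss's law: each total error is the sum
# of many small independent partial errors, each symmetric about zero
# (uniform on [-0.01, 0.01] here, an arbitrary choice).
random.seed(4)
m, trials = 200, 20_000
totals = [sum(random.uniform(-0.01, 0.01) for _ in range(m))
          for _ in range(trials)]

mean = sum(totals) / trials
std = math.sqrt(sum((x - mean) ** 2 for x in totals) / trials)
within_one_sigma = sum(abs(x - mean) <= std for x in totals) / trials
print(within_one_sigma)  # near 0.683, as the bell curve predicts
```

About 68.3 per cent. of the totals fall within one standard deviation of the mean, the proportion characteristic of Gauss's law, even though each partial error is uniform, not Gaussian.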

We see that the method of least squares is not legitimate in every case; in general the physicists are more distrustful of it than the astronomers. This is, no doubt, because the latter, besides the systematic errors to which they and the physicists are subject alike, have to contend with an extremely important source of error which is wholly accidental; I mean atmospheric undulations. So it is very curious to hear a physicist and an astronomer discuss a method of observation. The physicist, persuaded that one good measurement is worth more than many bad ones, is before all concerned with eliminating by dint of precautions the least systematic errors, and the astronomer says to him: "But thus you can observe only a small number of stars; the accidental errors will not disappear."

What should we conclude? Must we continue to use the method of least squares? We must distinguish. We have eliminated all the systematic errors we could suspect; we know well there are still others, but we can not detect them; yet it is necessary to make up our mind and adopt a definitive value which will be regarded as the probable value; and for that it is evident the best thing to do is to apply Gauss's method. We have only applied a practical rule referring to subjective probability.

There is nothing more to be said.

But we wish to go farther and affirm that not only is the probable value so much, but that the probable error in the result is so much. _This is absolutely illegitimate_; it would be true only if we were sure that all the systematic errors were eliminated, and of that we know absolutely nothing. We have two series of observations; by applying the rule of least squares, we find that the probable error in the first series is twice as small as in the second. The second series may, however, be better than the first, because the first perhaps is affected by a large systematic error. All we can say is that the first series is _probably_ better than the second, since its accidental error is smaller, and we have no reason to affirm that the systematic error is greater for one of the series than for the other, our ignorance on this point being absolute.

VII. CONCLUSIONS.--In the lines which precede, I have set many problems without solving any of them. Yet I do not regret having written them, because they will perhaps invite the reader to reflect on these delicate questions.

However that may be, there are certain points which seem well established. To undertake any calculation of probability, and even for that calculation to have any meaning, it is necessary to admit, as point of departure, a hypothesis or convention which has always something arbitrary about it. In the choice of this convention, we can be guided only by the principle of sufficient reason. Unfortunately this principle is very vague and very elastic, and in the cursory examination we have just made, we have seen it take many different forms. The form under which we have met it most often is the belief in continuity, a belief which it would be difficult to justify by apodeictic reasoning, but without which all science would be impossible. Finally the problems to which the calculus of probabilities may be applied with profit are those in which the result is independent of the hypothesis made at the outset, provided only that this hypothesis satisfies the condition of continuity.

CHAPTER XII

OPTICS AND ELECTRICITY

FRESNEL'S THEORY.--The best example[5] that can be chosen of physics in the making is the theory of light and its relations to the theory of electricity. Thanks to Fresnel, optics is the best developed part of physics; the so-called wave-theory forms a whole truly satisfying to the mind. We must not, however, ask of it what it can not give us.

[5] This chapter is a partial reproduction of the prefaces of two of my works: _Théorie mathématique de la lumière_ (Paris, Naud, 1889), and _Électricité et optique_ (Paris, Naud, 1901).

The object of mathematical theories is not to reveal to us the true nature of things; this would be an unreasonable pretension. Their sole aim is to coordinate the physical laws which experiment reveals to us, but which, without the help of mathematics, we should not be able even to state.
