But, as I have said above, it would not be only these partial sacrifices that would be in question; it would be the legitimacy of the whole of science that would be challenged.

I quite see that it might be said: "We are ignorant, and yet we must act. For action, we have not time to devote ourselves to an inquiry sufficient to dispel our ignorance. Besides, such an inquiry would demand an infinite time. We must therefore decide without knowing; we are obliged to do so, hit or miss, and we must follow rules without quite believing them. What I know is not that such and such a thing is true, but that the best course for me is to act as if it were true." The calculus of probabilities, and consequently science itself, would thenceforth have merely a practical value.

Unfortunately the difficulty does not thus disappear. A gambler wants to try a _coup_; he asks my advice. If I give it to him, I shall use the calculus of probabilities, but I shall not guarantee success. This is what I shall call _subjective probability_. In this case, we might be content with the explanation of which I have just given a sketch. But suppose that an observer is present at the game, that he notes all its _coups_, and that the game goes on a long time. When he makes a summary of his book, he will find that events have taken place in conformity with the laws of the calculus of probabilities. This is what I shall call _objective probability_, and it is this phenomenon which has to be explained.

There are numerous insurance companies which apply the rules of the calculus of probabilities, and they distribute to their shareholders dividends whose objective reality can not be contested. To invoke our ignorance and the necessity to act does not suffice to explain them.

Thus absolute skepticism is not admissible. We may distrust, but we can not condemn _en bloc_. Discussion is necessary.

I. CLASSIFICATION OF THE PROBLEMS OF PROBABILITY.--In order to classify the problems which present themselves _à propos_ of probabilities, we may look at them from many different points of view, and, first, from the _point of view of generality_. I have said above that probability is the ratio of the number of favorable cases to the number of possible cases. What for want of a better term I call the generality will increase with the number of possible cases. This number may be finite, as, for instance, if we take a throw of the dice in which the number of possible cases is 36. That is the first degree of generality.
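A minimal sketch in Python of this first degree of generality, counting favorable among possible cases for a throw of two dice; the event chosen, a total of seven, is picked only for the illustration:

```python
from itertools import product

# All equally possible throws of two dice: 6 * 6 = 36 cases.
possible = list(product(range(1, 7), repeat=2))

# An illustrative event, chosen only for this sketch: "the total is seven".
favorable = [throw for throw in possible if sum(throw) == 7]

# Probability = number of favorable cases / number of possible cases.
print(len(favorable), "/", len(possible), "=", len(favorable) / len(possible))
# 6 / 36 = 0.1666...
```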

But if we ask, for example, what is the probability that a point within a circle is within the inscribed square, there are as many possible cases as there are points in the circle, that is to say, an infinity.

This is the second degree of generality. Generality can be pushed further still. We may ask the probability that a function will satisfy a given condition. There are then as many possible cases as one can imagine different functions. This is the third degree of generality, to which we rise, for instance, when we seek to find the most probable law in conformity with a finite number of observations.
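For the second degree, if "at random" is understood to mean a uniform density over the area of the circle (itself a convention of the kind examined below), the probability is the ratio of the two areas, namely 2/[pi]; a rough sampling sketch in Python, offered only as an illustration, bears this out:

```python
import math
import random

def point_in_unit_circle():
    # Rejection sampling: a point uniform over the disk x^2 + y^2 <= 1.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1.0:
            return x, y

n = 200_000
# Inscribed square with vertices (1,0), (0,1), (-1,0), (0,-1): the set |x| + |y| <= 1.
hits = sum(1 for _ in range(n)
           if sum(map(abs, point_in_unit_circle())) <= 1.0)

print(hits / n)        # empirical frequency
print(2 / math.pi)     # exact ratio of the areas, about 0.6366
```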

We may place ourselves at a point of view wholly different. If we were not ignorant, there would be no probability, there would be room for nothing but certainty. But our ignorance can not be absolute, for then there would no longer be any probability at all, since a little light is necessary to attain even this uncertain science. Thus the problems of probability may be classed according to the greater or less depth of this ignorance.

In mathematics even we may set ourselves problems of probability. What is the probability that the fifth decimal of a logarithm taken at random from a table is a "9"? There is no hesitation in answering that this probability is 1/10; here we possess all the data of the problem. We can calculate our logarithm without recourse to the table, but we do not wish to give ourselves the trouble. This is the first degree of ignorance.
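A short count in Python bears this out; I assume, merely for the sketch, that the table lists the common logarithms of the integers from 1 to 10,000, and floating-point values stand in for the printed figures:

```python
import math

# Assumption made only for this sketch: the "table" is the common logarithms of 1..10000.
count_nine = 0
N = 10_000
for n in range(1, N + 1):
    value = math.log10(n)
    fifth_decimal = int(value * 10**5) % 10   # digit in the fifth decimal place
    if fifth_decimal == 9:
        count_nine += 1

print(count_nine / N)   # should come out close to 1/10
```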

In the physical sciences our ignorance becomes greater. The state of a system at a given instant depends on two things: Its initial state, and the law according to which that state varies. If we know both this law and this initial state, we shall have then only a mathematical problem to solve, and we fall back upon the first degree of ignorance.

But it often happens that we know the law, and do not know the initial state. It may be asked, for instance, what is the present distribution of the minor planets? We know that from all time they have obeyed the laws of Kepler, but we do not know what was their initial distribution.

In the kinetic theory of gases, we assume that the gaseous molecules follow rectilinear trajectories, and obey the laws of impact of elastic bodies. But, as we know nothing of their initial velocities, we know nothing of their present velocities.

The calculus of probabilities only enables us to predict the mean phenomena which will result from the combination of these velocities.

This is the second degree of ignorance.

Finally it is possible that not only the initial conditions but the laws themselves are unknown. We then reach the third degree of ignorance and in general we can no longer affirm anything at all as to the probability of a phenomenon.

It often happens that instead of trying to guess an event, by means of a more or less imperfect knowledge of the law, the events may be known and we want to find the law; or that instead of deducing effects from causes, we wish to deduce the causes from the effects. These are the problems called _probability of causes_, the most interesting from the point of view of their scientific applications.

I play écarté with a gentleman I know to be perfectly honest. He is about to deal. What is the probability of his turning up the king? It is 1/8, since the écarté pack has thirty-two cards, of which four are kings. This is a problem of the probability of effects.

I play with a gentleman whom I do not know. He has dealt ten times, and he has turned up the king six times. What is the probability that he is a sharper? This is a problem in the probability of causes.
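A hedged sketch of how such a problem is approached: the 1/8 of the honest deal comes from the example above, but the prior probability of facing a sharper, and the frequency with which a sharper turns the king, are numbers assumed purely for the illustration:

```python
from math import comb

def binomial(k, n, p):
    # Probability of k successes in n independent trials, each of probability p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Observed: the king turned up 6 times in 10 deals.
k, n = 6, 10

p_honest = 1 / 8        # from the rules of ecarte, as in the example above
p_sharper_deal = 1 / 2  # ASSUMED skill of a sharper -- purely illustrative
prior_sharper = 1e-3    # ASSUMED prior probability of facing a sharper

likelihood_honest = binomial(k, n, p_honest)
likelihood_sharper = binomial(k, n, p_sharper_deal)

# Bayes' rule: probability of the cause "he is a sharper", given the observed deals.
posterior_sharper = (prior_sharper * likelihood_sharper) / (
    prior_sharper * likelihood_sharper + (1 - prior_sharper) * likelihood_honest
)
print(posterior_sharper)
```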

It may be said that this is the essential problem of the experimental method. I have observed _n_ values of _x_ and the corresponding values of _y_. I have found that the ratio of the latter to the former is practically constant. There is the event, what is the cause?

Is it probable that there is a general law according to which _y_ would be proportional to _x_, and that the small divergencies are due to errors of observation? This is a type of question that one is ever asking, and which we unconsciously solve whenever we are engaged in scientific work.
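A minimal sketch of that unconscious solution: fitting the proportional law _y_ = _kx_ by least squares to invented observations and comparing the small divergencies with it (the data are generated inside the sketch, not taken from any experiment):

```python
import random

# Invented observations: y is nearly proportional to x, with small "errors of observation".
random.seed(0)
xs = [float(i) for i in range(1, 21)]
ys = [2.0 * x + random.gauss(0, 0.05) for x in xs]

# Least-squares estimate of the constant ratio k in the law y = k * x.
k = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

residuals = [y - k * x for x, y in zip(xs, ys)]
print("estimated k:", k)
print("largest divergence:", max(abs(r) for r in residuals))
```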

I am now going to pass in review these different categories of problems, discussing in succession what I have called above subjective and objective probability.

II. PROBABILITY IN MATHEMATICS.--The impossibility of squaring the circle was proved in 1882; but even before that date all geometers considered that impossibility so "probable" that the Academy of Sciences rejected without examination the, alas, too numerous memoirs on this subject that some unhappy madmen sent in every year.

Was the Academy wrong? Evidently not, and it knew well that in acting thus it did not run the least risk of stifling a discovery of moment.

The Academy could not have proved that it was right; but it knew quite well that its instinct was not mistaken. If you had asked the Academicians, they would have answered: "We have compared the probability that an unknown savant should have found out what has been vainly sought for so long, with the probability that there is one madman the more on the earth; the second appears to us the greater." These are very good reasons, but there is nothing mathematical about them; they are purely psychological.

And if you had pressed them further they would have added: "Why do you suppose a particular value of a transcendental function to be an algebraic number; and if [pi] were a root of an algebraic equation, why do you suppose this root to be a period of the function sin 2_x_, and not the same about the other roots of this same equation?" To sum up, they would have invoked the principle of sufficient reason in its vaguest form.

But what could they deduce from it? At most a rule of conduct for the employment of their time, more usefully spent at their ordinary work than in reading a lucubration that inspired in them a legitimate distrust. But what I call above objective probability has nothing in common with this first problem.

It is otherwise with the second problem.

Consider the first 10,000 logarithms that we find in a table. Among these 10,000 logarithms I take one at random. What is the probability that its third decimal is an even number? You will not hesitate to answer 1/2; and in fact if you pick out in a table the third decimals of these 10,000 numbers, you will find nearly as many even digits as odd.

Or if you prefer, let us write 10,000 numbers corresponding to our 10,000 logarithms, each of these numbers being +1 if the third decimal of the corresponding logarithm is even, and -1 if odd. Then take the mean of these 10,000 numbers.

I do not hesitate to say that the mean of these 10,000 numbers is probably 0, and if I were actually to calculate it I should verify that it is extremely small.
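As an empirical check (not the rigorous calculation mentioned below), assume the table to be that of the common logarithms of the integers from 1 to 10,000, computed in floating point rather than read from the printed page:

```python
import math

# +1 if the third decimal of the logarithm is even, -1 if it is odd.
signs = []
for n in range(1, 10_001):
    third_decimal = int(math.log10(n) * 10**3) % 10
    signs.append(1 if third_decimal % 2 == 0 else -1)

mean = sum(signs) / len(signs)
print(mean)   # should come out very small; the rigorous bound of 0.003 cited in
              # the text belongs to Poincare's own calculation, not to this check
```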

But even this verification is needless. I might have rigorously proved that this mean is less than 0.003. To prove this result, I should have had to make a rather long calculation for which there is no room here, and for which I confine myself to citing an article I published in the _Revue generale des Sciences_, April 15, 1899. The only point to which I wish to call attention is the following: in this calculation, I should have needed only to rest my case on two facts, to wit, that the first and second derivatives of the logarithm remain, in the interval considered, between certain limits.

Hence this important consequence that the property is true not only of the logarithm, but of any continuous function whatever, since the derivatives of every continuous function are limited.

If I was certain beforehand of the result, it is first, because I had often observed analogous facts for other continuous functions; and next, because I made in my mind, in a more or less unconscious and imperfect manner, the reasoning which led me to the preceding inequalities, just as a skilled calculator before finishing his multiplication takes into account what it should come to approximately.

And besides, since what I call my intuition was only an incomplete summary of a piece of true reasoning, it is clear why observation has confirmed my predictions, and why the objective probability has been in agreement with the subjective probability.

As a third example I shall choose the following problem: A number _u_ is taken at random, and _n_ is a given very large integer. What is the probable value of sin _nu_? This problem has no meaning by itself.

To give it one a convention is needed. We _shall agree_ that the probability for the number _u_ to lie between _a_ and _a_ + _da_ is equal to [phi](_a_)_da_; that it is therefore proportional to the infinitely small interval _da_, and equal to this multiplied by a function [phi](_a_) depending only on _a_. As for this function, I choose it arbitrarily, but I must assume it to be continuous. The value of sin _nu_ remaining the same when _u_ increases by 2[pi], I may without loss of generality assume that _u_ lies between 0 and 2[pi], and I shall thus be led to suppose that [phi](_a_) is a periodic function whose period is 2[pi].

The probable value sought is readily expressed by a simple integral, and it is easy to show that this integral is less than

2[pi]M_{_k_}/_n_^{_k_},

M_{_k_} being the maximum value of the _k_th derivative of [phi](_u_).
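For completeness, here is the calculation in outline, writing E for the probable value and assuming [phi] to have _k_ continuous derivatives and integral 1 over a period:

```latex
% Sketch: \varphi is assumed to have k continuous derivatives, to be 2\pi-periodic,
% and to be normalized so that \int_0^{2\pi} \varphi(u)\,du = 1.
E[\sin nu] \;=\; \int_0^{2\pi} \varphi(u)\,\sin(nu)\,du .

% One integration by parts; the boundary terms vanish because \varphi, \sin(nu), \cos(nu)
% are all 2\pi-periodic, n being an integer:
\int_0^{2\pi} \varphi(u)\,\sin(nu)\,du \;=\; \frac{1}{n}\int_0^{2\pi} \varphi'(u)\,\cos(nu)\,du .

% Repeating the step k times leaves a factor 1/n^k and the k-th derivative of \varphi, whence
\bigl|E[\sin nu]\bigr| \;\le\; \frac{1}{n^{k}}\int_0^{2\pi}\bigl|\varphi^{(k)}(u)\bigr|\,du
\;\le\; \frac{2\pi M_{k}}{n^{k}} .
```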

We see then that if the _k_th derivative is finite, our probable value will tend toward 0 when _n_ increases indefinitely, and that more rapidly than 1/_n_^{_k_ - 1}.

The probable value of sin _nu_ when _n_ is very large is therefore naught. To define this value I required a convention; but the result remains the same _whatever that convention may be_. I have imposed upon myself only slight restrictions in a.s.suming that the function [phi](_a_) is continuous and periodic, and these hypotheses are so natural that we may ask ourselves how they can be escaped.
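A numerical sketch, with a [phi] chosen arbitrarily for the illustration (continuous, periodic and normalized), shows how quickly the probable value falls off as _n_ grows:

```python
import numpy as np

def integrate(values, du):
    # Simple trapezoidal rule on an evenly spaced grid.
    return du * (values[0] / 2 + values[1:-1].sum() + values[-1] / 2)

# An arbitrary choice of phi: continuous and 2*pi-periodic (phi(0) = phi(2*pi) = 0).
u = np.linspace(0.0, 2 * np.pi, 400_001)
du = u[1] - u[0]
phi = u * (2 * np.pi - u) ** 2
phi /= integrate(phi, du)          # normalize so the total probability is 1

for n in (1, 2, 4, 8, 16, 32):
    probable_value = integrate(phi * np.sin(n * u), du)
    print(n, probable_value)
# The printed values fall off rapidly toward zero as n grows
# (for this particular phi, roughly like 1/n**3).
```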

Examination of the three preceding examples, so different in all respects, has already given us a glimpse, on the one hand, of the role of what philosophers call the principle of sufficient reason, and, on the other hand, of the importance of the fact that certain properties are common to all continuous functions. The study of probability in the physical sciences will lead us to the same result.

III. PROBABILITY IN THE PHYSICAL SCIENCES.--We come now to the problems connected with what I have called the second degree of ignorance, those, namely, in which we know the law, but do not know the initial state of the system. I could multiply examples, but will take only one. What is the probable present distribution of the minor planets on the zodiac?

We know they obey the laws of Kepler. We may even, without at all changing the nature of the problem, suppose that their orbits are all circular, and situated in the same plane, and that we know this plane.

On the other hand, we are in absolute ignorance as to what was their initial distribution. However, we do not hesitate to affirm that their distribution is now nearly uniform. Why?

Let _b_ be the longitude of a minor planet in the initial epoch, that is to say, the epoch zero. Let _a_ be its mean motion. Its longitude at the present epoch, that is to say at the epoch _t_, will be _at_ + _b_. To say that the present distribution is uniform is to say that the mean value of the sines and cosines of multiples of _at_ + _b_ is zero. Why do we a.s.sert this?

Let us represent each minor planet by a point in a plane, to wit, by a point whose coordinates are precisely _a_ and _b_. All these representative points will be contained in a certain region of the plane, but as they are very numerous this region will appear dotted with points. We know nothing else about the distribution of these points.

What do we do when we wish to apply the calculus of probabilities to such a question? What is the probability that one or more representative points may be found in a certain portion of the plane? In our ignorance, we are reduced to making an arbitrary hypothesis. To explain the nature of this hypothesis, allow me to use, in lieu of a mathematical formula, a crude but concrete image. Let us suppose that over the surface of our plane has been spread an imaginary substance, whose density is variable, but varies continuously. We shall then agree to say that the probable number of representative points to be found on a portion of the plane is proportional to the quantity of fictitious matter found there. If we have then two regions of the plane of the same extent, the probabilities that a representative point of one of our minor planets is found in one or the other of these regions will be to one another as the mean densities of the fictitious matter in the one and the other region.

Here then are two distributions, one real, in which the representative points are very numerous, very close together, but discrete like the molecules of matter in the atomic hypothesis; the other remote from reality, in which our representative points are replaced by continuous fictitious matter. We know that the latter can not be real, but our ignorance forces us to adopt it.

If again we had some idea of the real distribution of the representative points, we could arrange it so that in a region of some extent the density of this imaginary continuous matter would be nearly proportional to the number of the representative points, or, if you wish, to the number of atoms which are contained in that region. Even that is impossible, and our ignorance is so great that we are forced to choose arbitrarily the function which defines the density of our imaginary matter. We shall only be bound by one hypothesis, from which we can hardly escape: we shall suppose that this function is continuous. That is sufficient, as we shall see, to enable us to reach a conclusion.

What is at the instant _t_ the probable distribution of the minor planets? Or rather what is the probable value of the sine of the longitude at the instant _t_, that is to say of sin (_at_ + _b_)? We made at the outset an arbitrary convention, but if we adopt it, this probable value is entirely defined. Divide the plane into elements of surface. Consider the value of sin (_at_ + _b_) at the center of each of these elements; multiply this value by the surface of the element, and by the corresponding density of the imaginary matter. Take then the sum for all the elements of the plane. This sum, by definition, will be the probable mean value we seek, which will thus be expressed by a double integral. It may be thought at first that this mean value depends on the choice of the function which defines the density of the imaginary matter, and that, as this function [phi] is arbitrary, we can, according to the arbitrary choice which we make, obtain any mean value. This is not so.

A simple calculation shows that our double integral decreases very rapidly when _t_ increases. Thus I could not quite tell what hypothesis to make as to the probability of this or that initial distribution; but whatever the hypothesis made, the result will be the same, and this gets me out of my difficulty.
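The same conclusion can be made tangible by a small simulation; the initial distribution of the representative points (_a_, _b_) chosen below is continuous but otherwise arbitrary, as the argument itself permits:

```python
import math
import random

# An arbitrary, but continuous, initial distribution of my own choosing:
# mean motions a clustered around 1, initial longitudes b far from uniform.
random.seed(1)
N = 100_000
planets = [(random.gauss(1.0, 0.1),                       # mean motion a
            2 * math.pi * random.betavariate(2, 5))       # initial longitude b, bunched together
           for _ in range(N)]

for t in (0, 10, 100, 1000, 10000):
    mean_sin = sum(math.sin(a * t + b) for a, b in planets) / N
    print(t, mean_sin)
# The mean sine is clearly non-zero at t = 0, but approaches zero as t grows
# (up to sampling noise of order 1/sqrt(N)), just as the double integral does
# for any continuous choice of the initial density.
```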
