Probability axioms

The probability P(E) of an event E is defined with respect to a "universe" or sample space S of all possible elementary events, in such a way that P must satisfy these axioms (a small sketch checking them on a concrete example follows the list):

  1. For any event E, 0 <= P(E) <= 1. That is, the probability of an event is a real number between 0 and 1.
  2. P(S) = 1. That is, the probability that some elementary event in the entire sample space will occur is 1, or certainty. Put another way, there are no possible outcomes outside the sample space. This point is overlooked in some mistaken probability calculations: if you cannot precisely define the whole sample space, then the probability of any subset of it cannot be defined either.
  3. Any sequence of mutually disjoint events E1, E2, ... satisfies P(E1 ∪ E2 ∪ ...) = P(E1) + P(E2) + ... . That is, the probability of an event which is the union of disjoint subsets is the sum of the probabilities of those subsets. This is called σ-additivity (countable additivity). If there is any overlap among the subsets--that is, if they are not pairwise disjoint--this relation does not hold in general.
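
To make the axioms concrete, here is a minimal Python sketch that checks each of them numerically. The fair six-sided die, the uniform definition of P as |E|/|S|, and the particular disjoint events E1, E2, E3 are assumptions chosen purely for illustration; they are not part of the axioms themselves.

  # Checking the three axioms on a small finite example: a fair six-sided die.
  from fractions import Fraction
  from itertools import combinations

  S = frozenset(range(1, 7))                  # sample space {1, ..., 6}

  def P(E):
      """Probability of an event E, taken here as |E| / |S| (uniform case)."""
      return Fraction(len(E & S), len(S))

  # Axiom 1: 0 <= P(E) <= 1 for every event E (every subset of S).
  events = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]
  assert all(0 <= P(E) <= 1 for E in events)

  # Axiom 2: the whole sample space has probability 1.
  assert P(S) == 1

  # Axiom 3: for mutually disjoint events, the probability of the union
  # is the sum of the individual probabilities.
  E1, E2, E3 = frozenset({1, 2}), frozenset({3}), frozenset({5, 6})
  assert P(E1 | E2 | E3) == P(E1) + P(E2) + P(E3)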

If the sample space is finite or countably infinite, a probability function can also be defined by its values P({e1}), P({e2}), ... on the elementary events {e1}, {e2}, ..., where S = {e1, e2, ...}. These values must be nonnegative and sum to 1, and the probability of any event is then the sum of the values of its elements.
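
As a sketch of that construction, the following Python snippet assigns assumed values to three elementary events (chosen only so that they are nonnegative and sum to 1) and defines P(E) as the sum of the values of the members of E.

  # Defining P from its values on the elementary events of a small sample space.
  from fractions import Fraction

  elementary = {                     # assumed values P({e}) for each elementary event e
      "heads": Fraction(1, 2),
      "tails": Fraction(1, 3),
      "edge":  Fraction(1, 6),
  }
  assert sum(elementary.values()) == 1    # required so that P(S) = 1

  def P(E):
      """P(E) is the sum of the elementary values over the members of E."""
      return sum(elementary[e] for e in E)

  print(P({"heads", "edge"}))        # 1/2 + 1/6 = 2/3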


From these axioms one can deduce other useful rules for calculating probabilities. For example:

  1. P(S \ E) = 1 - P(E). That is, the probability that an event does not occur is 1 minus the probability that it does.
  2. P(A ∪ B) = P(A) + P(B) - P(A ∩ B). That is, for two events that may overlap, the probability that at least one of them occurs is the sum of their probabilities minus the probability that both occur.
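
Both rules can be confirmed on the same fair-die example used above (again an assumption made purely for illustration):

  # Checking the two derived rules on a fair six-sided die.
  from fractions import Fraction

  S = frozenset(range(1, 7))

  def P(E):
      return Fraction(len(E & S), len(S))

  A, B = frozenset({1, 2, 3}), frozenset({3, 4})

  # Complement rule: P(S \ A) = 1 - P(A).
  assert P(S - A) == 1 - P(A)

  # Addition rule for possibly overlapping events:
  # P(A ∪ B) = P(A) + P(B) - P(A ∩ B).
  assert P(A | B) == P(A) + P(B) - P(A & B)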

See also frequency probability -- personal probability -- eclectic probability -- statistical regularity

