I have a question about relative frequency for the formal epistemologists out there. I would like to know whether the following claim is true, given either von Mises’ or Kolmogorov’s definitions of randomness:

(RF) The limiting frequency of positive even integers in a randomly selected sequence of positive integers is 1/2.

Richard von Mises (Probability, Statistics and Truth, 1957, 24-25) defined a random sequence to be one where the limiting value toward which the relative frequency of the attribute in question converges “remain[s] the same in all partial sequences which may be selected from the original one in an arbitrary way.” Examples of the sort of subsequences he had in mind include those formed by all odd members of the original sequence or by all members for which the place number in the sequence is the square of an integer or a prime number. The Kolmogorov-inspired definition of randomness I had in mind was the following: a sequence is random if the shortest program that can generate it is no shorter than the one that simply lists the elements of the sequence one by one.

Note that the ordinary sequence of integers (1, 2, 3, 4,…) does not qualify as random on either conception of randomness. Nor do the other sequences commonly used to illustrate the importance of ordering when defining limiting frequencies in infinite sets (e.g., those in which the evens appear at every 3rd place or every 4th place, etc.). I have no clear intuitions about whether there would be a limiting frequency of positive even integers in the case I describe or, if there were, whether it would of necessity be the commonsensical limit of 1/2. Any formal intuitions or theoretical insights?

“a sequence is random if the shortest program that can generate it is no shorter than the one that simply lists the elements of the sequence one by one.”

If the sequence is infinite then the programme that lists the elements one by one is infinite. I’m ignorant of these matters, but is there any sense in comparing programmes of infinite length? For example, would a programme that listed prime numbers one by one be shorter than one that listed positive integers one by one? The dilemma I’m thinking of here is that if there is sense to be made of these comparisons, then there will always be a shorter programme than a simple listing, since there will always be some localised summary to be made of a sample of the sequence. The other horn is to say there is no sense to be made of comparisons of length in infinite programmes, in which case perhaps the definition given in terms of programme length is itself senseless.

If on the other hand you are talking about finite sequences, then only the first horn is available, and the simple listing programme has a finite length. So the sequence isn’t random if there is a shorter finite programme, and the definition makes perfect sense. But in this case, given the sequence in the first place, it would again seem unlikely that there was no way to summarise sections of the sequence which, as a matter of chance, fell into some arbitrary pattern. This is just an intuition: that there exists a proof that, for any sequence of integers with more than a certain threshold number of elements, there will be a programme shorter than a simple listing that will generate the sequence. Even if this intuition is false, the exceptions, although certainly random, will not capture all random sequences.

Suppose there are C characters in an alphabet used to characterize strings of characters of that same alphabet. There are C^n (C to the nth power) strings of length n, but only C + C^2 + … + C^(n-1) = (C^n − C)/(C − 1) nonempty strings of length less than n. So some strings of length n cannot be characterized by any shorter string; and since at most (C^(n−k+1) − C)/(C − 1) strings can be characterized by strings of length at most n − k, the fraction of length-n strings compressible by k or more characters shrinks exponentially in k. Almost all strings of length n are therefore close to incompressible.
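If it helps to see the count concretely, here is a quick sanity check of the counting argument for small alphabets (a sketch of mine; the function names are not from the thread):

```python
def strings_of_length(c: int, n: int) -> int:
    """Number of strings of length exactly n over a c-character alphabet."""
    return c ** n

def shorter_strings(c: int, n: int) -> int:
    """Number of nonempty strings of length less than n over the same
    alphabet: c + c^2 + ... + c^(n-1)."""
    return sum(c ** k for k in range(1, n))

# There are always fewer candidate characterizations of length < n than
# strings of length n, so some length-n strings have no shorter description.
for c in (2, 3, 10):
    for n in range(2, 10):
        assert shorter_strings(c, n) < strings_of_length(c, n)
```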

Jonny and Gilbert, thanks for your comments on my question.

Here’s a different way of getting at what I am most interested in: Suppose there is a cosmic lottery machine that contains an infinite number of lottery balls. On each ball is inscribed one positive integer. For every positive integer, there is a ball with that number inscribed on it. (Even though such a device is physically impossible, we can imagine it being simulated in the mind of God.)

If the balls were randomly distributed in the drum of the lottery machine, would the limiting frequency of even integers converge to 1/2 as the number of drawn balls approached infinity? (Here I am operating with an intuitive notion of randomness rather than any particular account of randomness. We can think of this notion as what various mathematical accounts of randomness are trying to capture.)

Even if there are technical problems with all the available accounts of randomness, there still seems to be (from my perspective) a strong intuitive pull toward saying that the relative frequency in the situation I have described would be 1/2. I was wondering if anyone else shared that intuition.
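For what it’s worth, the finite analogue of that intuition is easy to simulate. Here is a small sketch of mine (not part of any account discussed here): draw balls whose parity is decided by a fair coin and watch the running relative frequency of evens.

```python
import random

def running_even_frequency(n_draws: int, seed: int = 0) -> float:
    """Simulate n_draws independent draws in which each ball is even with
    probability 1/2, and return the relative frequency of even balls."""
    rng = random.Random(seed)
    evens = sum(rng.random() < 0.5 for _ in range(n_draws))
    return evens / n_draws

# The frequency hovers near 1/2 and tightens as the number of draws grows.
for n in (100, 10_000, 1_000_000):
    print(n, running_even_frequency(n))
```

Of course, the simulation assumes independent 1/2-chance draws, which is exactly the point at issue, so it illustrates the intuition rather than vindicating it.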

My suspicion is that talk of relative frequencies makes sense in application to infinite sets only relative to an appropriate ordering relation. Thus, it’s only because every other natural number is even (when taken in the order defined by the successor relation) that it makes sense to say that the “relative frequency” of even numbers among the naturals is 1/2. But in application to infinite sets as to which no obvious ordering relation is to hand, talk of relative frequencies is either undefined or defined only relative to an essentially arbitrary choice of measure. This is what makes it so difficult to analyze, e.g., nomic probabilities in terms of proportions among sets of possible worlds. There’s no obvious order in which to run through them. Likewise for logical probabilities. (Doubt I’m saying anything new, though.)

Here’s support for your intuition from a more Bayesian angle.

If the draws are random in any intuitive sense, then each draw must have a .5 chance of being an even ball, and the draws are independent. The relative frequency of even balls, then, approaches 1/2 with probability 1.

Here’s some relevant information:

Kolmogorov is defining randomness for *finite* sequences, and defining a finitely random sequence as one with a high measure of complexity, where the complexity of a finite sequence is the length of the shortest description of it using a universal method of description. Chaitin puts this all in terms of Turing machines, which is very elegant.

In mathematics sequences are by definition infinite ordered sets, and an arbitrary sequence is usually denoted by ‘{xn}’, where the index n ranges over the natural numbers N.

A limiting relative frequency is the limit of the sequence of relative frequencies Xs, if it exists e.g.

Sequence of Xs {0,1,1,0,1,0,0,….. }

Sequence of relative frequencies { 0, ½, 2/3, 2/4,3/5, 3/6, 3/7,…. }

l is the limit of a sequence {xn} iff for all eta > 0 there exists m in N such that for all n > m, |xn − l| < eta.
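The worked example above can be checked mechanically; here is a short sketch of mine that turns a 0/1 sequence into its sequence of running relative frequencies:

```python
from fractions import Fraction

def relative_frequencies(xs):
    """Return the sequence of running relative frequencies of 1s in a
    0/1 sequence: the nth term is (number of 1s among the first n) / n."""
    ones, out = 0, []
    for n, x in enumerate(xs, start=1):
        ones += x
        out.append(Fraction(ones, n))
    return out

# The example sequence from above: {0, 1, 1, 0, 1, 0, 0, ...}
print(relative_frequencies([0, 1, 1, 0, 1, 0, 0]))
# matching 0, 1/2, 2/3, 2/4, 3/5, 3/6, 3/7 (Fraction reduces 2/4 to 1/2, etc.)
```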

Von Mises’s definition is part of his frequentist programme for probability, where the probability of X is defined as the limiting relative frequency of X. Von Mises says X is a random event iff sequences of trials for X have limiting frequencies and you can’t independently choose a subsequence such that the subsequence has a different limiting frequency. The second conjunct is supposed to capture the intuition that randomness requires that there be no gambling systems for X. I’ve used the term ‘independently choose’ to refer to the restrictions on choosing subsequences, which are specified in technically complex ways. It’s bound to end up that an admissible way of choosing a subsequence is a recursively enumerable sequence of natural numbers satisfying some restriction on the functions used in the recursive enumeration.

The problems in specifying how to rule out gambling systems may well be precisely the problems you are interested in. One thing that makes the difficulty evident is that it is provable that for any r with 0 < r < 1, there is a binary sequence (a sequence of zeros and ones) with r as its limiting frequency.
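That provable fact is easy to witness constructively. One standard trick (a sketch of mine, not von Mises’s own construction) puts a 1 at place k exactly when floor(k·r) exceeds floor((k−1)·r), so the first n places contain floor(n·r) ones and the relative frequency tends to r:

```python
from math import floor

def binary_sequence_with_limit(r: float, n: int) -> list:
    """First n terms of a 0/1 sequence whose limiting relative frequency
    of 1s is r: place k holds a 1 iff floor(k*r) > floor((k-1)*r)."""
    return [int(floor(k * r) > floor((k - 1) * r)) for k in range(1, n + 1)]

# The running frequency of 1s approaches r for any 0 < r < 1.
seq = binary_sequence_with_limit(1 / 3, 30)
print(seq, sum(seq) / len(seq))
```

Note that such a sequence is highly non-random (it is completely described by r), which is exactly why the frequentist needs the no-gambling-system clause on top of the existence of a limiting frequency.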

James, to play devil’s advocate against your intuition: suppose there were this lottery drum in the mind of God, and the only condition you put on the order in which the balls come out is that the sequence is random. I suppose there are infinitely many infinitely long random sequences of odd numbers. So it seems that the chance that ALL the balls drawn are odd (‘all’ here meaning every member of an infinite set) is infinity/infinity. I don’t know what the conventions are for such a ratio, but there seems to be equal justification for saying it is 1 and for saying it is 0. So although it is nearly certain that the balls won’t all be odd, there are infinitely many ways they could all be odd. The same is true of even balls. And although there are infinitely many ways the frequency of even balls could tend to 0.5, there are infinitely many ways it could tend to any other proportion, including 1 and 0. The only way around this would be to disallow purely odd, or disproportionately odd or even, sequences on the grounds that they are non-random. But I fear this would beg the question and assume your original intuition. Of course, none of this applies to finite sequences.

Stephen wrote: “But in application to infinite sets as to which no obvious ordering relation is to hand, talk of relative frequencies is either undefined or defined only relative to an essentially arbitrary choice of measure.” I was hoping (perhaps in vain) that satisfaction of the randomness requirement would function as a substitute for a particular ordering relation. Instead of having a particular sequence with a particular ordering relation, we would have a set of admissible sequences of positive integers, each of which satisfied the randomness requirement.

In this light, I think my query can be understood as the question whether the relative frequency of evens in each of the admissible sequences would be 1/2. I still feel there is some intuitive pull to say “Yes,” even though I believe there must surely be some technical reason for saying “No.”

It seems that Jonny’s infinitely long sequences of odd numbers would not be random. We can distinguish between the randomness of a sequence and the randomness of the process that generates a sequence. I agree that it is possible for a random process to generate an infinitely long sequence of odds. But the sequence itself would not be random. I was conceiving of my cosmic lottery as involving sequences that were themselves random.

James, I was wondering if this might work to pinpoint the problem. Let’s back off from von Mises and Kolmogorov for the moment, and instead think about random sampling from an infinite population.

If a population contains N elements and we draw n

James asks “If the balls were randomly distributed in the drum of the lottery machine, would the limiting frequency of even integers converge to 1/2 as the number of drawn balls approached infinity?” Here is the technical reason for the answer being: not necessarily. Since you are interested in the limiting frequency of evens, we can reduce this question to that of considering the limiting frequencies of the corresponding parity sequences (sequences of zeroes and ones given by the oddness or evenness of the drawn numbers). As I said in the last post, it is provable that for any r between 0 and 1, there is a binary sequence with r as its limiting frequency. However, I suspect that the set of such sequences has measure 0 whilst the measure of the set of binary sequences with limiting frequency ½ is 1, so it might well be that Pr(the limiting frequency of even integers converges to 1/2 as the number of drawn balls approaches infinity) = 1.

Secondly, James then says “I was conceiving of my cosmic lottery as involving sequences that were themselves random.” I’m pretty sure that some of the binary sequences whose limiting frequency is not ½ will satisfy von Mises’s definition of a random sequence, though the easy ones to construct do not. Certainly there is one which satisfies Chaitin’s definition of randomness, namely the binary sequence that has Chaitin’s Omega number as its limiting frequency. We know this number is not ½ because it is irrational, and the sequence is random because (I hope I’m remembering this right) each binary digit tells you whether a certain Turing machine halts, and if it weren’t random according to Chaitin’s definition (algorithmic incompressibility) then we’d have a solution to the halting problem, which we don’t.

Actually, the standard definition of Kolmogorov randomness considers the limit of the length of the smallest program to output the first n elements of the set. If the limit of the ratio of the smallest program length to the length of just listing the first n approaches 1, then the sequence is said to be random.

I suspect that if you take some random sequence, and then replace each term k by 2k+1, then the resulting sequence will still be random, but all values will be odd. (Basically, we haven’t increased the length of the program much, but we also have only increased the length of listing the numbers by 1 bit per number. Presumably this will be swamped by the complexity of writing down a lot of numbers, though I’m not completely certain.)

Note that this definition of randomness works quite differently if you just ignore the number and pay attention only to the parity, because a sequence of just odd numbers can still be quite complex, even though a sequence of just 1’s is very simple.
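Kolmogorov complexity itself is uncomputable, but an ordinary compressor gives a computable upper bound, which is enough to illustrate the ratio in the definition Kenny gives above (shortest-program length over listing length). This sketch is mine, using zlib as a crude stand-in for “shortest program”: a patterned byte sequence compresses far below ratio 1, while random bytes do not.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed length over raw length: a rough, computable upper-bound
    proxy for the Kolmogorov ratio (true randomness keeps it near 1)."""
    return len(zlib.compress(data, 9)) / len(data)

patterned = bytes(range(256)) * 100   # highly regular: a short program suffices
noise = os.urandom(25_600)            # incompressible with overwhelming probability
print(compression_ratio(patterned))   # far below 1
print(compression_ratio(noise))       # near 1
```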

Following on from Kenny: even if you renamed odd numbers 0 and even numbers 1, and thereby claimed that a sequence of only odd numbers was non-random, how about an infinite sequence of odd numbers with just one even number? Perhaps you could also call this sequence non-random. But whether or not it is random, we know the frequency of even numbers: it is 1/infinity, which conventionally equals 0. Now consider a sequence with all but two numbers odd. The relative frequency of even numbers would still be 0: 2/infinity = 0. We can extend this argument to any number of even numbers, so however many even numbers there are in the infinite sequence, the relative frequency will still be 0. We can reverse the argument for odd numbers: for any n, if n is the number of odd numbers in the sequence, the relative frequency of odd numbers is 0. There is no upper limit on n, so there are infinitely many sequences where the frequency of odd numbers is 0 and infinitely many sequences where the frequency of even numbers is 0. The only case where this fails is when both the odd and the even numbers are infinite in number, in which case the proportion is infinity/infinity, which is undefined.

This argument demonstrates that the frequency of even numbers is either 0, 1, or undefined in any infinite set. The intuition that it should be 1/2 comes from trying to picture infinity by picturing a finite sequence and extending it. I refer to Wittgenstein, Philosophical Investigations §352. Also to the St Petersburg game, which is a very close analogy: you get £2 if the first coin is heads, £4 if the first two coins are heads, £8 if the first three are heads, and so on infinitely. The expected utility of this game is paradoxically infinite, since the payoff doubles as the probability halves, creating £1 + £1 + £1 + …. But the real value of entering such a game cannot be infinite, since it is clear that there is a 1/2 chance of losing on the first throw.
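The St Petersburg expectation can also be checked by simulation. A sketch of mine, reading the rules above as paying nothing when the very first flip is tails (the comment leaves that case open): the infinite expectation shows up as a sample mean that keeps creeping upward however long you play.

```python
import random

def st_petersburg_payout(rng: random.Random) -> int:
    """One play of the game above: count the run of heads at the start;
    the payout is 2^h pounds for h initial heads (nothing if tails first)."""
    h = 0
    while rng.random() < 0.5:  # each flip is heads with probability 1/2
        h += 1
    return 2 ** h if h > 0 else 0

def mean_payout(plays: int, seed: int = 0) -> float:
    """Average payout over many plays; it grows (slowly) without bound
    as plays increase, reflecting the infinite expectation."""
    rng = random.Random(seed)
    return sum(st_petersburg_payout(rng) for _ in range(plays)) / plays

print(mean_payout(100_000))
```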

Thanks for the post, by the way. I share your intuition, and it turns out to be much more difficult than it seems.

Ut oh. Most of my post was lost after ‘<‘. (Damn!)

The general idea was to look at sampling from infinite populations that do work out the way James wants his experiment to work, and then work backwards from there to see if some sampling model that works makes sense for sampling positive integers for the categories ‘Even’ and ‘Odd’. The intuition was that you could work on constructing various sampling mechanisms to describe sampling of positive integers that behave exactly like sampling on infinite populations. None of these things might do what you’d like to do, but you might get a nice view of where the catch is.

Hi Kenny. I think that the 2k+1 alteration to the sample that you suggest would not remain random from a sampling point of view. Rather, the renaming scheme would amount to a biased, second sampling of the population. (Reason: we are using two categories for classification, Odd and Even, and this renaming scheme is also a reclassification scheme on these two categories that we’re using to sort observations.)

The idea behind the algorithmic approach to defining random processes is to give a precise meaning to one process “being more random than” another process. Until Kolmogorov, being a little bit random was like being a little bit pregnant. Nevertheless, von Mises’s conception of randomness and Kolmogorov’s do come apart; I think von Mises randomness fails to preserve log scaling…or something like this.

Returning to 2k+1: this is a nice example. It highlights Steve’s point earlier on about the categories ‘Odd’ and ‘Even’ being a function of the ordering relation in the population. Typically we use the positive integers to order things and count them: black and white balls, let’s say. Then we sample on those things, the balls, using their fixed properties (color, uniform shape) in the process of categorizing them from observations, and the mathematical properties of the integers to manipulate the counts and calculate probabilities.

James’s experiment asks us to bypass the balls and go straight to the positive integers themselves; so one has to be very careful about designing a sampling mechanism for the categories you’re interested in, so that the categories you use when you build your count vectors remain fixed and do not collapse when you do the calculations.

Jonny – when the sets are both infinite, the traditional way to consider frequencies is by taking the limit of the frequencies in the first n numbers, as n goes to infinity. For very large n, the frequency of even numbers in the first n is very close to 1/2, so this particular infinity/infinity becomes 1/2. However, for the squares, for very large n the ratio becomes close to 0.
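Kenny’s two limits are easy to compute exactly for finite n. A sketch of mine:

```python
from math import isqrt

def freq_evens(n: int) -> float:
    """Relative frequency of even numbers among 1..n: floor(n/2)/n -> 1/2."""
    return (n // 2) / n

def freq_squares(n: int) -> float:
    """Relative frequency of perfect squares among 1..n: isqrt(n)/n -> 0."""
    return isqrt(n) / n

# The evens settle at 1/2 while the squares drain away to 0.
for n in (10, 1_000, 1_000_000):
    print(n, freq_evens(n), freq_squares(n))
```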

Also, I think I’ve changed my opinion since last night after thinking about things a bit further. The proper way to think about a set of natural numbers does seem to be as a sequence of 0’s and 1’s, where a 1 occurs in position n iff n is in the set in question.

For a set S that only has odd numbers, we can just consider the set of k such that 2k+1 is in S, and then add the instruction to insert 0’s between members of the sequence. This instruction has finite size, so this results in a compression of 1/2 in the limit, so the sequence is no longer Kolmogorov random.

It seems plausible that if the limiting frequency of odds in the set is substantially more than 1/2, then one could compress the information in the odds by this much, and then just list the evens, and still achieve substantial compression. (Similarly for ones substantially less than 1/2.) Thus, contrary to what I said last night, I think it will turn out true that in any Kolmogorov random set, the limiting frequency of odd numbers will be 1/2.

Regarding Kenny’s remark that “Note that this definition of randomness works quite differently if you just ignore the number and pay attention only to the parity, because a sequence of just odd numbers can still be quite complex, even though a sequence of just 1’s is very simple.” That’s quite true, but my thought was that all that is required to refute ‘all random sequences of natural numbers have limiting frequency of evens = 1/2’ is a single sequence of natural numbers which is random and has a limiting frequency that is not 1/2. If such a parity sequence exists (and infinitely many do) then that sequence itself refutes the claim (since it is a sequence of natural numbers) and also shows that there will be a whole class of such sequences each of which contains each natural number exactly once (which are perhaps the sort of sequence James is particularly interested in).

Regarding Kenny’s last remark, here is what seems to me a way of constructing a Kolmogorov random sequence of odd numbers whose limiting frequency of odds is 1 (he said ‘Kolmogorov random set’, but sets of natural numbers do not have limiting frequencies, so I presume he means the sequence got from a set of natural numbers taken under the usual ordering). Take any infinite binary sequence, BKR (for Binary Kolmogorov Random), that is random by the definition given by Kenny: a sequence is Kolmogorov random iff the limit of the ratio of the smallest program length that outputs the first n of the sequence to the length of just listing the first n is 1. Then construct the sequence ONKR as follows. For all n, m in N, the nth member of ONKR = 2m+1 iff the nth occurrence of 1 in BKR is at the mth place in BKR. Then ONKR will be Kolmogorov random. Suppose ONKR is not Kolmogorov random and so its limiting ratio is less than 1. In that case we could use the programs of smallest program length that output the first n of ONKR, plus 1 step, to output the first n of BKR. Adding one step to each program won’t change the limiting ratio, so for BKR that ratio would be less than 1, which would contradict BKR being Kolmogorov random.
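The ONKR construction can be written out directly. A sketch under my reading of the indexing (places counted from 1):

```python
def onkr_prefix(bkr_prefix):
    """Map an initial segment of a binary sequence BKR to the corresponding
    initial segment of ONKR: if the nth 1 of BKR sits at place m, then the
    nth member of ONKR is the odd number 2m + 1."""
    return [2 * m + 1 for m, bit in enumerate(bkr_prefix, start=1) if bit == 1]

# If BKR begins 1, 0, 1, 1, 0, its 1s sit at places 1, 3, 4,
# so ONKR begins 3, 7, 9: every member odd, as required.
print(onkr_prefix([1, 0, 1, 1, 0]))
```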

Thanks, Kenny. But surely one could (and should) apply inductive skepticism to such a “traditional” sampling technique, which is why I put in the Wittgenstein reference. If you took the limiting frequency of the first n numbers, and the sequence was truly random, you would still have infinitely many numbers left, and the remaining draws from n+1 to infinity could still all be odd. So even if you establish that the frequency of evens in the first n of the sequence is 1/2, however large n is, the actual limiting frequency could still end up being 0. The sampling has established nothing.

How about this: each ball in the sequence gives some information about the proportion of odds and evens in the whole sequence. Suppose the whole sequence is 1 ball long; then that ball gives the total information about the proportion of odds to evens. If the sequence is 2 balls long, each ball gives 1/2 of the information about how many balls are odd or even. If the sequence is n balls long, each ball gives 1/n of the information about the proportion of odd to even. For very large n, it does not matter that each ball gives such a small amount of information, because the large number of balls multiplies this small number back up. But in an infinite sequence each ball gives zero information about how many balls are odd, so however many balls we look at, we will get no information about the relative frequency of odd to even balls.

I suppose this is what you’ve been getting at all along: the only place to look is the process by which the sequence was generated. But if the process is truly random, it seems wrong that this should mean that there is therefore exact parity.

what is relative frequency??!