Buridan’s Ass Paradox

One response to the paradox (the starving ass is caught between two equally attractive haybales, and starves from indecision, needing a reason to prefer one to the other in order to eat) is to allow for the possibility, and rationality, of arbitrary actions. One can, for example, decide to choose between the two equally attractive options by flipping a coin. After the flip, the haybales remain equally attractive, but there is an explanation why the ass chooses one bale over the other. This response has implications for the theory of rational action, but I won’t go into that here. What I wonder is whether anyone knows of literature proposing other responses to the paradox, or reasons for being dissatisfied with this one?


Buridan’s Ass Paradox — 47 Comments

  1. Jon,

    Aren’t the worries about tossing a coin based on (some version of) the principle of sufficient reason? So someone might maintain that the problem can be resolved rationally (say, for instance, by hyper-rational agents who only act on reasons that are relevant to the preferability of an option) only if a strictly rational resolution must invoke the principle of sufficient reason. The notion of hyper-rational agents is discussed in J. Howard Sobel’s _Taking Chances_ (CUP, 1994).

  2. Here’s my way out.

    The ass has no reason to choose one bale before the other.

    But does the ass have a reason to toss the coin before proceeding to eat? (Is tossing the coin “closer” to the ass than just standing there, starving?)

    If the ass does have something like a reason to flip the coin, then another, even more rational, option is also available.

    So long as the ass is equidistant from the two bales, it has a reason to move rather than remain standing. This decision to move rather than not is rational given that the situation currently offers no basis to decide what to eat.

    That is, move = conduct inquiry = change the conditions under which to make the decision.

    The coin toss represents such an “arbitrary” move (which is only arbitrary with respect to the bales, but is grounded in the basis for its comparison with the alternative: do nothing further.)

    It has no reason to take a step to the right over a step to the left; but it does have a good reason to take a step either to the right or the left rather than remain standing.

    So it experiments until one of the bales actually is closer, and proceeds (is proceeding) rationally.

  3. My previous solution (it obviously isn’t mine, but I don’t know who came up with it) can, I think, be put in epistemic terms.

    The ass believes the bales to be equidistant but may be mistaken.

    The only reason the ass has for its belief will be something like the evidence provided by triangulation.

    It therefore only has a reason for any hypothesis about the relative position of the bales if it has been (and, ideally, now is) in motion.

    If the ass has knowledge of the position of the bales it is in motion (or is capable of moving aimlessly — where the “aim” is defined by the two bales). If the ass is already moving (or can initiate aimless motion), there is no paradox.

    I am not sure that this (bale-relative) idea of “aimless” is the same as the one used in the “arbitrary action” response. At bottom, the choice between move/don’t move is contingent (not arbitrary) on the ass’s need to know more (or something else) about the bales. Its decision to move is not a decision to engage in arbitrary action, but is, indeed, defensible as a necessary act of inquiry.

  4. Suppose that the ass finds himself equidistant between the coin and a magic 8 ball and has no reason to prefer one over the other…

  5. There is a quantum physics solution to Buridan’s Ass Paradox: the ass takes both options. After a time, the system would enter a state that is a superposition of the state in which the ass takes the hay on the left and the state in which the ass takes the hay on the right.

  6. Hi Jon,

    I am not familiar with the specific literature on Buridan’s ass. The curious thing about this example is that we’re told straight off that there is no reason to prefer one hay bale to the other because both are equally attractive. This last bit of information is important.

    The reason why it is important is that, typically, puzzles about preference are generated from considering cases in which we’re told that there is no reason to prefer x to y simpliciter, i.e., without mentioning the positive claim that both are equally attractive (on whatever measure). The root of the problem here is to specify under what conditions a judgment of indifference between two alternatives (i.e., no reason to prefer one to the other) is to be treated as a judgment of equal merit (i.e., both are equally attractive). In other words, the question raised is whether all of the outcomes in the scenario you wish to model are comparable under the measure you’ve adopted.

    It seems that the Buridan’s ass case doesn’t run afoul of this particular objection, since we’re told that they are equally attractive. Under these conditions, the coin-flip decision procedure appears to be a reasonable one to adopt.

    Or did you have another sort of worry in mind? Is the thought that there is a compelling argument in the other direction, namely that there are cases in which two outcomes are equally attractive but nevertheless there is a reason to prefer one to the other?

  7. Asses can’t flip coins. I suspect that actually matters.

    Here’s why. Once the coin flip takes place, there is a reason to prefer one alternative to the other—it was the winner of the coin flip. But if the coin flip cannot take place, for whatever reason, we have a case in which not only is there no reason to prefer one alternative to the other, and not only are both equally attractive, but there CAN BE no reason to prefer one to the other.

    It looks as though the ass faces the dilemma of either doing what is clearly worst—standing still—or acting arbitrarily.

    I’m worried about a more morbid version of the problem. Call it Hagar’s predicament, because Hagar the Horrible so often finds himself on the edge of a cliff with an enormous army bearing down on him. He faces a forced choice between two equally unattractive options.

    The trouble is, there is a conclusive-seeming argument against whichever option he takes: If he takes that option, he will die.

  8. Chase,
    The result of the coin toss does not entail a difference in preference between hay bales, under the constraints of the problem.

    Arbitrary action, if by “arbitrary” we mean eating one or the other of the two hay bales, still appears rational under the constraints of the problem, along with the assumption that there is a preference for eating hay over starving.

    The choice to prefer eating the hay bale corresponding to the outcome of the coin toss is conditioned on all this, which amounts to respecting the preference to eat over starving, not a preference for the winning hay bale over the losing hay bale.

  9. I think Chase is on to something. It might be put this way: an ass wouldn’t flip a coin. It would have no reason to do so.

    Suppose the ass is situated at an equal distance between two bales that are equally attractive. Picture it like this:

    Bale 1        Bale 2

         Ass

    So far it would be rational for the ass to move forward (up in the picture) because it would get closer to both bales. When the situation looks like this,

    Bale 1    Ass    Bale 2

    however, it’s stuck. It would be irrational to proceed or to back up (because this would increase the distance) but there is no reason to prefer going left to going right.

    There is, interestingly, less reason to stay put (hunger). Moreover, as soon as the ass moves right or left the dilemma is solved because, whatever the similarities between the two bales, one of them is now more accessible (i.e., closer).

    The only question is how to take that first step. Seen as a decision between going left and right, no reason can be given, meaning the decision is irrational. But seen as a decision between moving or staying put, there is a reason (gotta eat) and the decision is rational.

    Perhaps the absurdity of the problem can be seen through the following. Suppose one of the bales was clearly more attractive but there was no reason to prefer moving any one of its four legs first. Would that constitute a paradox?

  10. I’m failing to grasp the problem with Jon’s sketched solution, so long as the ass prefers eating to starving. His preference for eating over starving is his reason to move, even if it is not a specific reason to move left or a specific reason to move right.

    One source of trouble is articulating the conditions under which the utility of a pair of actions are (or should be) judged equivalent. But we’re told that they are equivalent. Another source of trouble is that the dynamics of a case might change the utilities or change the preferences. But this doesn’t seem to be the case, either: choosing one bale does not make the other less nourishing, and the ass prefers eating to starving even after he is sated. So, I don’t see the problem with the rationality of making an “arbitrary” choice between the bales under these conditions. Otherwise, are we to assume that rational action demands that there is always a strict ordering on possible actions? This seems highly implausible.

  11. I agree that it is important to accept the premise that the two bales are equally attractive. But then there is no solution; or the coin toss, at least, isn’t one.

    It is a paradox precisely because once we put the ass in this situation, there is no reasonable way out. The coin toss does not make the choice of bales more rational.

    It represents a solution at a more basic level, which reveals the paradox to be (like all paradoxes if they have a solution) a pseudo-paradox.

    How did the ass get itself into a situation in which there was no way to decide how to proceed? Why did the ass stop the investigation of the bales at the precise moment when it could not select one of them to eat?

    And if we grant that it can gather more information, we grant it a more dynamic situation. An ass that could not move arbitrarily (a move that would solve its problem before it began) cannot (this may be Chase’s point) flip a coin.

  12. It seems a more interesting problem if it concerns whether a perfectly rational being can have a reason to choose one bale over the other on the assumption that *all* of the information relevant to the preferability of each bale has been exhausted, the agent knows this, and the bales are equi-preferable on that information. Or, can a perfectly rational being make a decision on the basis of information that is not relevant to the preferability of each bale? No doubt a less-than-perfectly rational being can do so. But that doesn’t show much.
    The problem is perfectly analogous to Leibniz’s problem of two unsurpassably good possible worlds. According to Leibniz, a perfectly rational being (in this case God) could not choose to actualize either one of them. Or, rather, were he to actualize one of them, he’d be less than perfectly rational. In the case of the perfectly rational ass, he must choose between two unsurpassably good bale-worlds.

  13. Mike,

    Is the problem analogous to the Leibniz example? What is the negative cost to God of not actualizing one of the two best possible worlds?

    The Leibniz example appears to involve an action (i.e., actualizing a possible world) that affects the initial preference ordering between worlds. Since God has perfect knowledge and perfect rationality, he should have known all this beforehand, so the problem is picked up epistemically. The action of his picking one of the two worlds seems to run afoul of God’s perfect rationality or his perfect knowledge.

    This doesn’t seem to be what is going on in the Buridan’s ass example. Here we may assume the agent is perfectly rational, but it does not seem necessary to assume that he has perfect knowledge. This suggests, then, that the costs are different for each agent: the cost associated with the ass’s not acting is starving, which is presumably worse than acting. The cost of acting for God is to give up being god-like (i.e., to give up perfect rationality or perfect knowledge), which, presumably, is worse than not acting.

    I still feel like I’m missing something in this problem…. Why the unease over arbitrary choice between equi-preferable actions? Particularly when there is preference for an arbitrary choice (eating) over indecision (starving)?

  14. Thomas,

    I don’t understand your point. Is it correct for me to infer that you think rational decision may occur only if the possible outcomes obey a strict order? If so then, I agree, this would entail that there is no solution in cases where outcomes obey only a partial ordering. But this is just to deny the possibility of rational action when there are two or more equi-preferable actions. But why?

    Here I’m restating the question above: what’s so bad about arbitrary choice over equi-preferable actions?

    _Why the unease over arbitrary choice between equi-preferable actions?_

    The “unease” is due to there being no reason available for making one choice rather than the other. The reasons you adduce are for choosing some bale or other rather than starving. And of course there are such reasons. But there is nonetheless no rational basis for choosing this bale rather than that one. This is a problem for a perfectly rational being.

  16. We’re in agreement, I think, on the description of the problem. And it is a consequence of allowing for a partial order of possible actions.

    Given that we want to work with a partial order, I fail to see what is wrong with Jon’s solution.

  17. I think you have to place special constraints on the notion of a perfectly rational agent in order to get a paradox here. If we think of a PRA as an agent who performs all and only rational actions, then there is no problem performing arbitrary actions when there is more than one rational option. To get a problem, one has to think of the notion of perfect rationality in terms of noticing something like intrinsic preferability-making features of actions, and only acting on the basis of such recognized features. I don’t see why that amounts to a perfection of a rational capacity, however.

    Perhaps we could put it this way. In order for the paradox to disturb, perfect rationality in the second sense would somehow have to be better or more perfect than perfect rationality in the first sense. What argument could be given for that claim?

  18. Jon,

    Could you clarify ‘intrinsic preferability-making’? Do you mean an action that, when actualized, also changes the preference order? That is, there may be some feature of an action that forces a revision of preference. This is not something decision theory handles very well: you have to have preferences over possible preference orderings beforehand, if I’m not mistaken, which is wildly implausible…(No, that’s probably not what you mean.)

    I’m not following you. I like the coin toss idea!

  19. There is an interesting analogue in the failure of contrastive explanations. Under the assumption of indeterminism we can have a statistically improbable event e occur for which there is no contrastive explanation: i.e. there is no explanation why e occurred at t given the laws L and history H prior to t *rather than* e’, which was more (or equi-)probable on L and H. Reason seems to urge that there must be some reason why e occurred rather than e’. But there seems to be no such reason.
    In the case of perfectly rational agents there is no rational explanation of the choice of bale b *rather than* b’ if they are equi-preferable. The choice must have been based on reasons R that are not relevant to the preferability of the option. But the question then re-arises. Why would a perfectly rational agent base his choice on reasons R when reasons R’ are equi-relevant to the decision between b and b’ and R’ would have the agent choose b’? Now facing R & R’ he is back in the original problem. Knowing that there is nothing intrinsic to R or R’ that determines his choice, suppose he bases his choice of R on reasons E. Why would a perfectly rational agent base his choice of R on E when reasons E’ are as relevant as E to his choice between R and R’ and E’ would have the agent choose R’? And so on upward. And so a perfectly rational agent can’t make the initial choice between b and b’.

  20. Greg, I can’t say much about the point about intrinsicality. The idea is that the toss of the coin is not an intrinsic feature regarding which action is preferable, but the quality of the hay is. So the actions are tied on intrinsic preferability grounds, but not on all the grounds (since that includes the coin toss).

    There’s a further problem with the coin toss, by the way. Before tossing, you have to decide whether heads means go to the left, or right. And there is no reason to prefer one option to the other…

  21. Jon- I think I got it now. It seems that this collapses to excluding partially ordered outcomes, then. For if you rely solely on features of actions and it is possible in the algebra for actions to be identical, then you’ll get stuck in one of these saddle points. I wonder, then, if an argument against this ‘intrinsic’ version of a PRA cannot be grounded in this observation. Sticking to an intrinsic PRA view severely restricts the problem domains under which the theory works. And if you allow external reasons, the set-up for the coin toss (or lottery draw) should go through.

    The next question is: why not prefer the external PRA to the internal PRA? (I still like the coin toss idea!)

    Hi Mike- I still don’t see the problem w.r.t. R, R’ and the regress: in the limit he starves!

  22. Greg, I agree with you: the best approach endorses the external PRA account rather than the internal. The argument for it is that it keeps the poor ass from starving, which is obviously the proper conclusion for a theory of rational action to generate. The only reason one would have to abandon this view is if there was an argument to think it is a perfection to respond only to internal preferability features, but I can’t see what that argument would be.

  23. It is true that the ass starves, but how does that show it is not a rational choice–much less obviously irrational? Rationality doesn’t always pay and perfect rationality pays even less. The obvious examples are the two-boxers in Newcomb problems. The probability of their losing is quite high, but it is (in my view) the rational choice to make.

  24. Mike, you’re right that rationality doesn’t always pay. Even worse, I sometimes suspect that it rarely pays…! The harder cases are where your theory says to do A and you know in advance that doing something else will get better results. If you know in advance that being a one-boxer will get you better results, I bet not even you would be a two-boxer–right?

  25. Jon-

    Hell of a good question. I know that Nozick was asked whether two-boxing is rational in cases where the predictor is really perfect–100% accurate. In these cases you lose–guaranteed–if you two-box and you win–guaranteed–if you one-box. He said he didn’t know. That took guts. It’s hard not to conclude that one-boxing is rational in the sure bet cases.

  26. This is Plantinga’s response to the problem, in a way. He thinks the case is underdescribed, but that one way of making the problem precise has the predictor prescient. In that case, he says, there’s a knock-down argument for being a one-boxer.

  27. Jon,

    You say, “The harder cases are where your theory says to do A and you know in advance that doing something else will get better results. If you know in advance that being a one-boxer will get you better results, I bet not even you would be a two-boxer–right?”

    Welcome to Subject-Sensitive Invariantism!

  28. Jeremy, I do have somewhat of a backup position. I could have said: if you can deduce in advance… or if you can be certain in advance… But now I’m straining at gnats, aren’t I? 🙂

  29. I think my misunderstanding of the case, if that’s what it is, has to do with assuming that a Perfectly Rational Ass (who only behaves rationally) cannot perform an arbitrary action, i.e., must always have a reason to do something. The paradox of the ass, I assume, shows us that a PRA would get stuck when faced with these two bales.

    The trick to survival as a perfectly rational being, then, is never to make proceeding depend on a choice between equi-preferable options. (The case is defined precisely to create this situation; so this “solution” is really a rejection of the paradox at a fundamental level.)

    “If we think of a PRA as an agent who performs all and only rational actions, then there is no problem performing arbitrary actions when there is more than one rational option,” says Jon. But an arbitrary action is one for which no reason can be provided: it is irrational. Thus we have a being who can do only rational things who must do something irrational. That’s the paradox, I thought.

    Like I say, I think the best thing for a rational agent is never to assess two outcomes as equal. It must always work with a hypothetical ordering of outcomes. If there are two bales that are, in fact, equally preferable, they must have appeared on the ass’s list of preferences as a result of a set of inquiries to determine their desirability. That list must have no room for a “tie”.

    It spots one bale first and notes that it is pretty good. Since the ass is hungry, it appears near the top of the list. But it may still favour further study over getting down to eating.

    Then it sees the other and puts it hypothetically right underneath it on the list. I would argue that it can’t proceed from here without moving back and forth between the two bales. And as it assures itself that there is no difference, the only relevant difference will be which is closest.

    As it moves back and forth between the two bales, learning that all else is equal, the bale that is closest will appear higher on the list than the other. Meanwhile, the inquiry to determine which bale is best moves steadily down as BOTH bales move up (with increasing hunger).

    Thus a rational agent can only survive if it is irrational to believe that two outcomes are equally preferable. I think it is.

  30. _But an arbitrary action is one for which no reason can be provided: it is irrational_


    A perfectly rational being can give a reason for selecting the left bale L. Say, for instance, he flipped a coin. So it is not that he has no reason. Rather he has no better reason for selecting the left bale than he does for selecting the right bale R. What would make the choice of the left bale better–as the case is now being construed–is that there would be more preference-satisfaction in choosing left. But by hypothesis there wouldn’t be. Since L is as good as R (and vice versa), and he is perfectly rational, he remains indifferent between L and R.
    You could run the problem in a slightly different way: Suppose you offer him the left bale. Before he eats it, you offer an exchange of L for R. He is indifferent, and perfectly rational, so he makes the exchange. Before he eats R, you offer another exchange, R for L. He is indifferent between R and L and so makes the exchange. Same problem. A perfectly rational being would keep exchanging to his own detriment. The fact that he is getting hungrier doesn’t make the left or right bale more preferable to the other.
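
    Mike’s exchange loop can be simulated in a few lines of Python. This is only a toy sketch: the hunger counter is an illustrative addition of mine, not part of the original setup.

```python
# A toy version of the exchange loop: an agent indifferent between bales
# "L" and "R" accepts every offered swap, and its hunger grows regardless.
# The hunger counter is an illustrative assumption, not part of the setup.

def run_exchanges(offers: int) -> tuple:
    holding = "L"  # start by accepting the left bale
    hunger = 0
    for _ in range(offers):
        # Indifference: each bale has the same value, so every swap is accepted.
        holding = "R" if holding == "L" else "L"
        hunger += 1  # each round of deliberation costs time
    return holding, hunger

print(run_exchanges(100))  # ('L', 100): still swapping, and hungrier than ever
```

    However many offers are made, the agent ends up holding one bale or the other, uneaten, with hunger strictly increasing; that is the detriment the argument points to.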

  31. Mike, I don’t think I understand your point about how the coin toss provides a reason. Since it is arbitrary, he could also say, “There are no clouds in the sky today, so I’ll choose the left bale.” Heads or tails provides no reason to go left or right.

    But I’m with you on the alternative way of running the problem.

    Maybe there’s a way out of that too, which takes into account the cost (trouble) of computation. At some point, having decided on one bale, he’d prefer just to eat it rather than *considering* the alternatives you offer him. Even where the alternative is *better* than what he has decided, the operation of making the decision is not preferable to acting. Put another way: eating now is better than eating even something better later. (I imagine that’s how we get through “what will you be having” at the restaurant: we just start rank ordering and stop moving things around on the list–things which are never tied–when the waiter shows up again.)

    In the end, my point is that the ass should not be certain that one bale is exactly as good as the other. That would be irrational. And that’s what the paradox is trying to show us (by leading us into contradictions when we assume otherwise).

  32. _Put another way: eating now is better than eating even something better later_.

    Those aren’t the choices in this problem. Your choices are “do not exchange” ~E or “exchange” E. There is no assumption in the case that there is any cost to exchanging. If there were the slightest cost, then a rational being would never exchange: by hypothesis each option has the same value V, and the exchange would involve giving up V for (V-e), for some small e.
    There is nothing in these cases that is “contradictory” and there is nothing in the consistent choice to exchange that is “irrational”. You seem to be assuming that a rational choice is one that has a better payoff than its alternatives. But, as noted above, that is not in general true.

  33. Yes, my way out of the problem is to suggest that a perfectly rational being would never get into the situation where it arises.

    I take it there would be no problem if one of the bales constituted a better payoff. And the contradiction is that the ass ends up with a poorer bale (none) than common sense tells us he could have had (either).

    Jon mentioned Plantinga’s critique that the boxer case is “underdescribed”. What I am doing here is just to say that once the case is described to include the ass’s certainty about his situation (that is, his self-awareness about the possibility of being rational but mistaken), then the problem does not arise.

    I’ve been saying throughout that I know that this is not quite a solution to the problem because, as you say, I’m giving the ass choices that the problem doesn’t give it.

    Certainly, however, I think the “paradox” is intended at the point where we suggest the coin toss. The only rational thing for the ass to do is something arbitrary, i.e., groundless, without reason, irrational. And that’s what should fail to satisfy us.

  34. ‘Perfect rationality’ is a term of art, and an odd one at that. It is clear that if an agent can act on one of two outcomes only if there is a clear preference among the pair, and he’s placed in a situation in which he judges the pair of outcomes equi-preferable, then he won’t act.

    It is worth stressing that the variations Mike is suggesting in this thread are variations on this particular set of constraints and not variations on the Buridan’s ass example. There is a very good reason to resist viewing the Buridan’s ass example as one that must exhibit these constraints: namely, we are told that it doesn’t. I’ve been glibly making this point by remarking that the ass starves in the long-run. However, Mike’s reply in 37 denies that this is a legitimate concern. Let’s see if I can make the case a bit clearer for why it is.

    The value that an ass places on a bale of hay is not unlike the value an investor places on a unit of currency, the euro let’s say. Consider the choice between two identically minted 1 euro coins. Since there is no monetary difference between the two, our investor judges them of equal value precisely like our poor ass has done in valuing the hay bales.

    However, judging two outcomes equi-preferable at time t does not entail that one will judge them equi-preferable at t+1. It is not the case that, after selecting one of the coins, our investor is willing, rationally, to trade it for the other coin. The reason is that a euro in hand is better than the promise of one in the future, a principle that bankers live by called discounting. A discount rate is like interest in reverse: it is the rate at which future money is reduced to determine present value.

    For example, at 5% (simple) interest, 1.05 euro in one year is equivalent to a euro today; or, conversely, one euro is the present value of 1.05 a year hence. The expression 1/(1+0.05) is the discount factor. It is a rudimentary way to express the time value of money.
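
    The arithmetic can be checked in a few lines of Python; the 5% rate and the one-period horizon are just the figures from the example above.

```python
# A quick check of the discounting arithmetic (5% simple interest,
# one-period horizon, as in the example).

def discount_factor(rate: float) -> float:
    """One-period discount factor, 1 / (1 + rate)."""
    return 1.0 / (1.0 + rate)

def present_value(future_amount: float, rate: float) -> float:
    """Present value of an amount due one period from now."""
    return future_amount * discount_factor(rate)

print(discount_factor(0.05))      # ≈ 0.9524
print(present_value(1.05, 0.05))  # ≈ 1.0: 1.05 euro a year hence is worth 1 euro today
```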

    Returning to hay bales and asses, food is also time valued. Heaven knows what the discount rate is for bales of hay, although some broker in Chicago could probably help come up with a reasonable estimate. The point is, a future hay bale is worth less to the ass than one at present. Moreover, and this is crucial, we are being told precisely this when we are told that in the long run the ass starves. Yet it is this information that is ignored when generating regress arguments, since it is to assume that the discount rate on the trades is 0.

    The consequences of this point are not trivial, since what hangs on it is whether one views the ass as acting as he should, rationally, which in turn determines whether we are to view the example as a genuine paradox. In sum, the primary reason for resisting the view that there is a paradox here is that there is information given in the problem that is, that must be, ignored in order to generate the paradox.

    It is this information that explains the asymmetry between arbitrary choice and indecision, which is what makes the coin toss method reasonable, and which makes this regress argument a red herring.

  35. _There is a very good reason to resist viewing the Buridan’s ass example as one that must exhibit these constraints_.

    I’m not worried so much about discovering what the “real” conditions of the Buridan’s ass example are. That looks like an exercise in scholastic exegesis that I have no (or, nearly no) interest in. I’m worried about the conditions under which we do get a paradox, even if these are not explicitly stated in the example. The interesting fact is that we do get one. The less interesting fact (to me in any case) is that there is a way of interpreting the example where there is no paradox. I would have granted the latter (almost) a priori.

  36. But, to press, I hope not too hard: the conditions under which the paradox you are describing is generated, and why, are well known. Moreover, these conditions are contrary to what we are given in the setup for the example.

    My objection is that this is a change in subject. The objection is not met by suggesting that it is an unanswerable scholastic exercise.

  37. Here are a few excerpts of a slew that might have been listed.

    _One source of trouble is articulating the conditions under which the utility of a pair of actions are (or should be) judged equivalent. But we’re told that they are equivalent_
    _But this is just to deny the possibility of rational action when there are two or more equi-preferable actions. But why?_

    I was trying to specify such conditions–the conditions I think are hinted at in the Buridan’s ass example. These are the conditions sufficient to generate the puzzle for perfectly rational agents. But, then, more recently you offer this,

    _the conditions under which the paradox you are describing is generated, and why, are well known_.

    So, quite obviously, these are conditions that you knew all along. I mistakenly read the comments above as expressing some uncertainty over, among other things, what these conditions might be. So let’s say we were at cross-purposes.

  38. A slew of what? On the one hand you have conditions under which a PRA will get stuck; on the other hand, you have the problem of figuring out when certain modeling assumptions are reasonable to make, i.e., when it is suitable to model a problem with a PRA satisfying conditions x, y, z. In this case, we know how the PRA you’re working with will get stuck and why. Whether the assumptions underlying the PRA you imagine are reasonable to assume to hold in the Buridan case is another issue. An interesting one, I think. And material.

    Does this explain our cross-purposes?

  39. Thomas suggests that the paradox of Buridan’s Ass points out that we should never value two goods exactly equally. But it also suggests that we should value two goods comparably in every case. Traditionally, when goods are partially-ordered, there are two problems – the more pressing case is when neither is seen as better than the other and they are not seen as equal. People try to rule this out by assigning everything a utility, but Alan Hájek and Harris Nover’s “Vexing Expectations” shows that such a case may arise by considering a “Pasadena game” whose expectation is a non-convergent infinite series.

    Buridan’s Ass is in a situation that’s not so bad. She just has two goods that are equally preferred, but if this is a problem, then that would count even against people that assign utilities (and not just expected utilities!) to individual actions.

    As for the time-discounting in post 39 that is supposed to make things work out: consider the paradox (I don’t remember who it’s due to) of the bottle of wine that gets twice as good every year as it ages. Even if you discount future wine by 25%, you should constantly hold off on drinking it, and therefore never get it. And if you discount wine at some higher rate, then an even more wondrous bottle would still never get drunk.
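
    Kenny’s wine arithmetic is easy to verify; here is a minimal Python sketch, assuming quality doubles each year and a flat 25% per-year discount (the figures from the comment).

```python
# The wine paradox arithmetic: quality doubles each year, future enjoyment
# is discounted 25% per year. The per-year factor is 2 * 0.75 = 1.5 > 1,
# so the discounted value of waiting grows without bound.

def discounted_value(n_years: int, growth: float = 2.0, discount: float = 0.25) -> float:
    """Present value of drinking the wine n_years from now."""
    return (growth * (1.0 - discount)) ** n_years

print([discounted_value(n) for n in range(5)])
# [1.0, 1.5, 2.25, 3.375, 5.0625]: each extra year of waiting looks better
```

    Since the discounted value is strictly increasing in n, the rational drinker postpones forever, which is the point of the example: discounting alone does not force consumption when the good appreciates faster than the discount rate erodes it.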

  40. Kenny,

    I like the wine example. Straight discounting wouldn’t work if hay aged. And it does, in a sense: waiting out piles of green hay for several days in the sun could be a mortal problem for the ass. Hmm. Nice example: equally appealing piles at each t, t+1,…, without the pairs either keeping constant or declining in appeal through time.

  42. Just stumbled upon this:

    I see a simpler solution than any suggested above:

    The stalemate may be broken provided the ass doesn’t always consider BOTH options (left bale vs. right bale) *at the same time*. This seems like a fair restriction on ass psychology.

    Suppose the ass is positioned at equal distance from the eternally equally desirable bales. Forget decay etc.

    Now the ass always prefers food to starvation, hence it prefers getting closer to *any* bale. Hence any other option than moving directly towards either bale is irrational.

    Now suppose that at one point the ass considers, whether it should move towards the left bale. It will choose to move towards that bale, since taking a step in that direction brings it *closer to a bale* (here: the left bale), which is the goal it desires. It doesn’t seem to matter that the corresponding move to the right would have happened had it considered the right bale in isolation first.

    But isn’t this course of events ruled out, since the ass faces the disjunctive question: “Should I move to the right or the left?” and has no reason to consider one option before the other? Hence it is left in a meta-stalemate.

    Well, here the paradox shows the utter irrationality of such a decision procedure, because of its very impotence in the Buridan situation. A rational agent should hence consider whichever horn of a disjunctive question happens to enter its mind first (no need for coin-tossing hooves). Hence the deadlock is broken along the lines above.
