Sunday, May 31, 2009

Coin Balancing Problems

Nine Coins - difficulty: medium
The Setup: You have nine coins that are identical in weight save for one, which is lighter than the others—a counterfeit. The difference is only perceptible by using a balance, but only the coins themselves can be weighed, and it can only be used twice in total.

The Problem: Is it possible to isolate the counterfeit coin with only two weighings?

The Solution: Divide the nine coins into three groups of three. Weigh one group against another. If the two groups balance, the counterfeit is in the third group: put one of its coins on each side of the balance. If they balance, the coin left in your hand is the counterfeit; if they don't, the lighter coin is. If the original groups of three do not balance, put two coins from the lighter group on the balance: whichever side is lighter holds the counterfeit, or the third coin is the counterfeit if the two balance, by process of elimination.
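The procedure above is mechanical enough to express in code. Below is a minimal Python sketch; the weigh() helper, the coin indexing, and the 0.9-weight counterfeit in the usage example are illustrative assumptions, not part of the original puzzle.

```python
def weigh(weights, left, right):
    """Balance scale: -1 if the left pan is lighter, 0 if equal, 1 if heavier."""
    l = sum(weights[i] for i in left)
    r = sum(weights[i] for i in right)
    return (l > r) - (l < r)

def find_light_coin(weights):
    """Identify the single lighter coin among nine in two weighings."""
    # First weighing: group {0,1,2} against group {3,4,5}.
    first = weigh(weights, [0, 1, 2], [3, 4, 5])
    if first == 0:
        suspects = [6, 7, 8]      # counterfeit is in the unweighed group
    elif first < 0:
        suspects = [0, 1, 2]      # left pan was lighter
    else:
        suspects = [3, 4, 5]      # right pan was lighter
    # Second weighing: one suspect against another, third stays in hand.
    second = weigh(weights, [suspects[0]], [suspects[1]])
    if second == 0:
        return suspects[2]        # the coin left off the scale
    return suspects[0] if second < 0 else suspects[1]
```

For example, find_light_coin([1, 1, 1, 1, 0.9, 1, 1, 1, 1]) returns 4, and the same holds for any position of the lighter coin.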

Twelve Coins - difficulty: hard

The Setup: You have twelve coins and a balance scale. One of the coins does not weigh the same as the other eleven, but you don't know if the odd coin is heavier or lighter than the others. The difference is only perceptible by using a balance, but only the coins themselves can be weighed, and it can only be used three times in total.

The Problem: How can you determine, in three weighings, which coin is the counterfeit?

The Solution:
First weigh four coins against four coins. If they balance, the counterfeit is among the remaining four. Compare three of the suspect coins with three you know to be legitimate (from the first weighing): if they balance, the untouched suspect is the counterfeit; if not, you learn whether the counterfeit is heavier or lighter and can repeat the final step from the nine coin problem to isolate it.

Alternatively, if the first weighing produces uneven sides the solution is a little trickier. We now have eight suspects instead of merely four and will have to investigate more carefully. We essentially have two different tactics at our disposal:

Tactic One - Weigh the Lighter Group against a Control Group
The trick is to use the information we already know: that one group is lighter than the other. If we swap the heavy group out for coins we know to be legitimate (the unweighed group from the first weighing) and "the lighter group" is still lighter, we know it holds the counterfeit and that the counterfeit is lighter. Alternatively, if "the lighter group" and the legitimate coins balance, the heavy group must have held the counterfeit (which would have to be heavier). But this method only narrows the candidates down to four coins...still one too many to solve with the single weighing we have left, so this tactic alone is insufficient.

Tactic Two - Swap Coins from Group X and Group Y
If we swap one coin from the heavier group with one from the lighter, the scales will either tip the other way or stay the same (they can't balance, because the counterfeit is still on the scale). If the scales change, we know one of the two swapped coins is the counterfeit. If the scales stay the same, we know the two we swapped are legitimate (but that leaves six suspects and only one weighing).

To get maximum information out of the second weighing and solve this puzzle, we need to combine both tactics. Before the second weighing, we swap one of the lighter side's coins with one of the heavier side's, then replace the three remaining heavy coins with legitimate ones. This can only produce three results:
  • The lighter group is now heavier - this could only happen if the counterfeit was one of the two coins we swapped. Weigh one of them against a known legitimate coin: if the scales balance, the other one is the counterfeit; if they don't, you've found the counterfeit.
  • The lighter group is still lighter - this could only happen if the counterfeit is lighter than a legitimate coin and was amongst the three coins originally on the lighter side. You now have three suspect coins and know the counterfeit is lighter. Repeat the final step from the nine coins problem.
  • The sides balance - this could only happen if the counterfeit is heavier than a legitimate coin and was amongst the three coins removed from the heavier side. You now have three suspect coins and know the counterfeit is heavier. Repeat the final step from the nine coins problem.
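All twenty-four possibilities (twelve coins, each possibly heavy or light) can be checked mechanically. The Python sketch below implements the procedure described above; the weigh() helper and the coin indexing are assumptions for illustration.

```python
def weigh(weights, left, right):
    """Balance scale: -1 if the left pan is lighter, 0 if equal, 1 if heavier."""
    l = sum(weights[i] for i in left)
    r = sum(weights[i] for i in right)
    return (l > r) - (l < r)

def find_counterfeit(weights):
    """Identify the odd coin among twelve in three weighings."""
    good = [8, 9, 10, 11]                      # off the scale in weighing 1
    first = weigh(weights, [0, 1, 2, 3], [4, 5, 6, 7])
    if first == 0:
        # Counterfeit is among 8-11: weigh three suspects against three
        # known-good coins, then finish as in the nine coin problem.
        second = weigh(weights, [8, 9, 10], [0, 1, 2])
        if second == 0:
            return 11                          # the untouched suspect
        is_heavy = second > 0                  # suspects heavier or lighter?
        third = weigh(weights, [8], [9])
        if third == 0:
            return 10
        if is_heavy:
            return 8 if third > 0 else 9
        return 8 if third < 0 else 9
    # Unbalanced: label the light and heavy groups.
    light = [0, 1, 2, 3] if first < 0 else [4, 5, 6, 7]
    heavy = [4, 5, 6, 7] if first < 0 else [0, 1, 2, 3]
    # Second weighing: swap light[0] with heavy[0], and replace the other
    # three heavy coins with known-good ones.
    second = weigh(weights, light[1:] + [heavy[0]], [light[0]] + good[:3])
    if second > 0:
        # The formerly lighter pan is now heavier: the counterfeit is one
        # of the two swapped coins.
        return heavy[0] if weigh(weights, [heavy[0]], [good[0]]) != 0 else light[0]
    if second < 0:
        # Still lighter: a light counterfeit among light[1:].
        third = weigh(weights, [light[1]], [light[2]])
        if third == 0:
            return light[3]
        return light[1] if third < 0 else light[2]
    # Balanced: a heavy counterfeit among the removed coins heavy[1:].
    third = weigh(weights, [heavy[1]], [heavy[2]])
    if third == 0:
        return heavy[3]
    return heavy[1] if third > 0 else heavy[2]
```

Looping over every coin index and both possible defects (heavier or lighter) confirms that the procedure always names the counterfeit in exactly three weighings.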

Thursday, May 28, 2009

Plessy v. Ferguson Dissent: a Historical Curiosity

Plessy v. Ferguson is a landmark U.S. Supreme Court decision, upholding the constitutionality of racial segregation even in public accommodations under the doctrine of "separate but equal". It is one of America's most infamous and unpopular decisions.

Justice John Marshall Harlan, a former slave owner who decried the excesses of the Ku Klux Klan, wrote a scathing dissent in which he predicted the court's decision would become as infamous as that in Dred Scott. Harlan, decades ahead of his time, wrote: "In view of the Constitution, in the eye of the law, there is in this country no superior, dominant, ruling class of citizens. There is no caste here. Our Constitution is color-blind, and neither knows nor tolerates classes among citizens. In respect of civil rights, all citizens are equal before the law."

For a century, the vision of racial equality expressed in John Marshall Harlan's dissent in Plessy v. Ferguson has captured the imagination in a way matched by few other texts. Even today, the symbolic power of Harlan's rejection of segregation of African Americans and whites in New Orleans streetcars is rivaled only by the Reverend Martin Luther King, Jr.'s I Have a Dream speech and Brown v. Board of Education.

There is a curious point in Harlan's Plessy dissent however. After arguing that the government should guarantee equality without regard to race, the next paragraph begins like this: "There is a race so different from our own that we do not permit those belonging to it to become citizens of the United States. Persons belonging to it are, with few exceptions, absolutely excluded from our country. I allude to the Chinese race. But by the statute in question, a Chinaman can ride in the same passenger coach with white citizens of the United States, while citizens of the black race [cannot]...."

Of course, the Chinese had nothing to do with the Plessy decision, but that did not stop Harlan from including a tirade against them in his opinions. In United States v. Wong Kim Ark, Harlan objected to persons of Chinese descent born in the United States becoming citizens, writing of "the presence within our territory of large numbers of Chinese laborers, of a distinct race and religion, remaining strangers in the land, residing apart by themselves, tenaciously adhering to the customs and usage of their own country, unfamiliar with our institutions and religion, and apparently incapable of assimilating with our people."

Some modern commentators have explained Harlan's position as, "the law is irrational because it burdens one despised minority but not another." However, this misstates Harlan's philosophy: he held no ill will toward blacks, or even toward other Asians; he singled out the Chinese as the sole nationality he deemed worthy of segregation.

God's Algorithm

God's algorithm is a way to solve puzzles and games using the least possible number of moves, the idea being that an omniscient being would know the optimal move from any given configuration. God's algorithm is essentially the most efficient strategy for a given game, one that cannot be improved upon in any way. The solutions to the recently posted river crossing puzzles are an example.

Tic-tac-toe and the Tower of Hanoi are both examples of games with simple God's algorithms. Most children figure out the ideal strategy on their own pretty quickly. One of the reasons children like these games is that once they figure out the optimal strategy, no one has an edge on them--not adults, not their parents, not their teachers. They are 'experts' at the game. In fact, someone who knows the God's algorithm for a game could hold his own against God himself in a fair match. Even the eyes of God see no more in a game of tic-tac-toe than a capable player does. This has led to a wealth of fiction where supernatural beings are bested by humans in games of strategy and chance.

God's algorithms have been suggested for the Rubik's Cube, chess, Irensei, and Go. Theoretically, any game with perfect information should have a God's algorithm. A game is said to have perfect information if all players know all moves that have taken place. For instance, on a chessboard there are no secrets: you and your opponent can see all of the pieces on the board at all times. By contrast, poker is a game of imperfect information, since you can neither see your opponent's cards nor know which cards will come out of the deck next.
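For a concrete taste of what a God's algorithm looks like, here is a Python sketch of minimax for tic-tac-toe: from any position it computes the outcome of perfect play by both sides (+1 if X wins, 0 for a draw, -1 if O wins). The 9-character board encoding is an assumption for illustration.

```python
from functools import lru_cache

# The eight winning lines: rows, columns, diagonals.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value of `board` (a 9-char string) with `player` to move."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, sq in enumerate(board) if sq == ' ']
    if not moves:
        return 0                               # full board: draw
    results = [value(board[:i] + player + board[i + 1:],
                     'O' if player == 'X' else 'X') for i in moves]
    return max(results) if player == 'X' else min(results)
```

value(' ' * 9, 'X') evaluates to 0: perfect play from the empty board is a draw, which is exactly why a player who knows the algorithm can never be beaten at tic-tac-toe.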

Wednesday, May 27, 2009

The Double Slit Experiment

Voted by Physics World readers as “the most beautiful experiment in physics”, the classic two slit experiment dates back to Thomas Young's work with light in the early 1800s; Sir Geoffrey Ingram Taylor performed a single-photon version in 1909. It involves firing tiny individual particles at a thin plate with two parallel slits and watching the pattern they make on the wall behind it.

The outcome of this simple experiment is quite shocking because it suggests light behaves differently when being observed. But to understand the science of the Double Slit Experiment, first take a look at the way particles and waves behave in different circumstances.

Firing light at a plate with a single slit in the middle always leads to the same result: a vertical line on the back wall. This is because some light reflects off the plate, while some goes directly through the slit and lands on the wall predictably. Adding a second slit should therefore create the same kind of result: two vertical lines on the back wall.

Waves are a little more complex, so think of them as ripples in a pond. When a ripple collides with the plate, only the part that meets the slit passes through and radiates out. It strikes the back wall with the most intensity in the middle, directly in line with the single slit. This is similar to the single vertical line created by firing particles.

But sending waves through two slits has a completely different effect. Each slit becomes the source of a new ripple, and where the top of one wave meets the bottom of another, they cancel each other out. This creates an interference pattern on the back wall: lots of vertical lines where the waves reinforce each other and hit with the highest intensity.
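For the mathematically inclined, the bright and dark fringes follow a simple rule: in the small-angle, far-field approximation, the brightness at position x on the back wall is proportional to cos²(π·d·x / (λ·L)), where d is the slit spacing, λ the wavelength, and L the distance to the wall. A quick sketch, with illustrative numbers that are assumptions rather than anything from the text:

```python
import math

d = 1e-4        # slit spacing: 0.1 mm (illustrative)
lam = 500e-9    # wavelength: 500 nm, green light (illustrative)
L = 1.0         # distance to the back wall: 1 m (illustrative)

def intensity(x):
    """Relative brightness at position x (meters) on the back wall."""
    return math.cos(math.pi * d * x / (lam * L)) ** 2

center = intensity(0)                    # brightest fringe, dead center
dark = intensity(lam * L / (2 * d))      # first dark fringe: waves cancel
```

Here center comes out as 1.0 (full brightness) while dark is essentially zero, reproducing the alternating bands of the interference pattern.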

In the Double Slit Experiment, electrons are fired at the slit in much the same way. A single slit causes a single vertical line of electrons along the back wall. But when the electrons are fired at two slits – and here comes the kicker – the result is an interference pattern. Why are these tiny bits of matter, fired individually at the wall, suddenly behaving like waves? What are they interacting with?

For some physicists, the conclusion was inescapable. The tiny electron arrives at the plate as a single particle, becomes a wave of potentials, goes through both slits, and interferes with itself. Mathematically, this theory is even more bizarre. The electron goes through both slits – and neither. It goes through just one – and just the other. By this reckoning, the Double Slit Experiment suggests that every possibility actually occurs in parallel worlds.

But this was just a theory, so physicists put up a measuring device – an observer – next to the plate to see which one the electron really went through. Amazingly, the electrons returned to behaving like particles again, creating two vertical lines on the back wall. The act of observing the quantum world actually changed the outcome! In short, the electron "decided" to act differently, as if it were "aware" it was being watched. The observer collapsed the wave function simply by observing.

Thursday, May 21, 2009

River Crossing Puzzles

Fox, Goose and Beans - difficulty: easy
The Setup: Once upon a time a farmer went to market and purchased a fox, a goose, and a bag of beans. On his way home, the farmer came to the bank of a river and hired a boat. But in crossing the river by boat, the farmer could carry only himself and a single one of his purchases - the fox, the goose, or the bag of beans. If left alone together, the fox would eat the goose, and the goose would eat the beans. The farmer's challenge was to carry himself and his purchases to the far bank of the river, leaving each purchase intact. How many trips will this take?
The Solution:
1. Bring goose over
2. Return
3. Bring fox or beans over
4. Bring goose back
5. Bring beans or fox over
6. Return
7. Bring goose over
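The seven-trip answer can be verified with a breadth-first search over the possible bank configurations. The Python sketch below is illustrative; the state encoding (0 for the near bank, 1 for the far bank) is an assumption.

```python
from collections import deque

ITEMS = ('farmer', 'fox', 'goose', 'beans')

def safe(state):
    """state maps item -> 0 (near bank) or 1 (far bank)."""
    if state['fox'] == state['goose'] != state['farmer']:
        return False                     # fox eats goose
    if state['goose'] == state['beans'] != state['farmer']:
        return False                     # goose eats beans
    return True

def solve():
    """Return the shortest list of crossings, as cargo names."""
    start = (0, 0, 0, 0)                 # everyone on the near bank
    goal = (1, 1, 1, 1)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        side = state[0]
        # The farmer crosses alone or with one item from his bank.
        for cargo in (None, 1, 2, 3):
            if cargo is not None and state[cargo] != side:
                continue
            nxt = list(state)
            nxt[0] = 1 - side
            if cargo is not None:
                nxt[cargo] = 1 - side
            nxt = tuple(nxt)
            if nxt not in seen and safe(dict(zip(ITEMS, nxt))):
                seen.add(nxt)
                queue.append((nxt, path + [ITEMS[cargo] if cargo else 'alone']))
    return None
```

Running solve() produces a seven-crossing plan that begins, as it must, with the goose.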
Missionaries and Cannibals - difficulty: medium
The Setup: Three missionaries and three cannibals must cross a river using a boat which can carry at most two people, under the constraint that, for both banks, if there are missionaries present on the bank, they cannot be outnumbered by cannibals (if they were, the cannibals would eat the missionaries.) The boat cannot cross the river by itself with no people on board. How many trips will this take?
The Solution: There are four separate solutions to the missionaries and cannibals problem, but all result in eleven trips across the river.
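The eleven-trip claim can also be checked mechanically with a breadth-first search; a state records how many missionaries and cannibals are on the near bank and where the boat is. The encoding is an assumption for illustration.

```python
from collections import deque

def valid(m, c):
    """Legal state: on neither bank are missionaries outnumbered."""
    if not (0 <= m <= 3 and 0 <= c <= 3):
        return False
    near_ok = m == 0 or m >= c
    far_ok = (3 - m) == 0 or (3 - m) >= (3 - c)
    return near_ok and far_ok

def min_crossings():
    """Fewest one-way boat trips to move everyone across."""
    start, goal = (3, 3, 1), (0, 0, 0)   # (missionaries, cannibals, boat) near
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (m, c, boat), trips = queue.popleft()
        if (m, c, boat) == goal:
            return trips
        sign = -1 if boat else 1         # boat leaves the near bank or returns
        for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
            nxt = (m + sign * dm, c + sign * dc, 1 - boat)
            if valid(nxt[0], nxt[1]) and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trips + 1))
    return None
```

min_crossings() returns 11, confirming that no solution does better than eleven trips.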

Tuesday, May 19, 2009

Contingent a priori and necessary a posteriori

It was once the consensus that all a priori knowledge was necessarily true and all a posteriori knowledge was contingently true. In fact, this consensus is seemingly itself a priori knowledge. However, the philosopher Saul Kripke has shown that this could be wrong. He proposes examples that successfully demonstrate how a priori knowledge could be contingently true, and how a posteriori knowledge could be necessarily true.

For the uninitiated, I'll give some quick and simple background information. The idea was that a priori knowledge was knowledge that could be confirmed in the absence of experience. For example, the proposition "all bachelors are unmarried men" would be a priori knowledge. A person who understands that proposition can verify it without appealing to any experience, as the very essence of the term "bachelor" is that it refers to unmarried men. That is, we need not find all of the bachelors in the world to see that all bachelors are unmarried; we know this to be true if we understand the meaning of the proposition's constituents. Also, the proposition is necessarily true, as it is impossible to present a case in which it is false.

The proposition "some unmarried men are not bachelors" would be considered a posteriori knowledge. In this proposition, we must appeal to our experience to make sure that there are cases in which an unmarried man is not a bachelor in order to verify the proposition. Since we know that the Pope is an unmarried man and not a bachelor, we know that this proposition is true. However, since this proposition could be false (say, a deadly virus wipes out all unmarried men that are not bachelors), we say that this proposition is contingent. To see if a proposition is true contingently or necessarily, you can perform this test: negate the proposition (in the contingent case, "no unmarried men are not bachelors") and check to see if there are any inherent contradictions. No contradictions means that the original proposition is contingently true, while an inherent contradiction means that it is necessarily true.

If this is something new to you, take a moment to reflect on the idea of a priori/a posteriori and necessity/contingency. I think you will find the relationship between the two very hard to break, though, as you should be able to tell from the purpose of this post, not impossible.

Kripke offers two examples: one that demonstrates necessary a posteriori knowledge and another that demonstrates contingent a priori knowledge.

Necessary A Posteriori
The Greeks often gave celestial entities names by which to identify them. The name Hesperus was given to the evening star, and Phosphorus to the morning star. As you might imagine, Hesperus appears only in the evening, while Phosphorus appears only in the morning. Their positions relative to other celestial entities (that is, their positions against the background stars) were dramatically different. The two had little in common other than perhaps their luminosity.

But it turns out that, upon further investigation, Hesperus and Phosphorus are in fact a single entity: both are Venus. Since Hesperus=Venus and Phosphorus=Venus, we can say that Hesperus=Phosphorus is necessarily true. After all, they ARE one and the same, and to negate the identity would lead to a contradiction. However, we could not know that Hesperus=Phosphorus without knowing what Hesperus and Phosphorus are and that they both refer to Venus. To know this, we must appeal to empirical evidence, making this a posteriori knowledge.

While this demonstrates a case in which a posteriori knowledge is necessarily true, many people might be a bit uneasy about the whole idea of identity and the process of naming. This is certainly pertinent to the case presented here (as well as any other necessary a posteriori cases I can think of), but it leads us down a very complex path. That said, please feel free to contribute on this point if you wish; I just don't have the balls to get into it on my own.

Contingent A Priori
Take the proposition "a meter stick is a meter long." This statement can be verified without appealing to experience: we don't need to go check how long a meter is and compare it to the length of a meter stick, for however long a meter happens to be, we know that a meter stick will also be that long (after all, it is the essence of a meter stick to be a meter long).

Or do we need to check? There is a bar in Paris that is supposed to be the defining measurement of a meter; the standard meter bar, if you will. We also know that a bar, like any other physical object, could potentially fluctuate in length when exposed to different temperatures. You can, then, imagine a world in which the average temperature is significantly higher, such that the standard bar is actually a bit longer than in our world. While this might be a little confusing, you can say that the proposition "a meter stick is a meter long" is true a priori but contingent: we know that a meter stick is a meter long (duh) but, at the same time, it's possible that a meter stick (say, a meter stick in our world) is not a meter long (as measured by the standard bar in the hotter, alternate world).


Much progress has been made on this, particularly in the domain of language, since it is there that this peculiarity seemingly resides. Instead of going into difficult and painstaking details here (which I lack the capacity to do in any respectable manner, anyway), I'll save myself (and you) the pain and instead encourage you to leave your thoughts in the comments.

Thursday, May 14, 2009

The Son of Man

Artist René Magritte's The Son of Man is one of western culture's most famous paintings but also one of its most misunderstood. What the heck is going on in this painting? Who is this well groomed businessman? Why is there an apple floating in midair in front of his face?

The Son of Man poses more questions than answers--and that's precisely the point. The true subject of the work is neither the man nor the apple, but human curiosity. The apple intentionally blocks the man's face, which would otherwise be the focal point of the painting. Human curiosity kicks in, and people want to know what the man's face looks like all the more because it is obscured. Magritte is playing with your senses. At showings of the painting, it's not uncommon to see people shifting their heads and standing on tiptoes to try to sneak a glance at the man's face behind the apple.

As Magritte explained, "You have the apparent face, the apple, hiding the visible but hidden, the face of the person. It's something that happens constantly. Everything we see hides another thing, we always want to see what is hidden by what we see. There is an interest in that which is hidden and which the visible does not show us. This interest can take the form of a quite intense feeling, a sort of conflict, one might say, between the visible that is hidden and the visible that is present."

The apple and bowler hat are tangential to the painting's message. They could just as easily have been anything else (Magritte also produced derivative works featuring a woman's face obscured by a flower and a businessman's face obscured by a bird). However, the apple and bowler hat have become iconic in their own right--see the climax of The Thomas Crown Affair, where Pierce Brosnan breaks into a museum dressed as the Son of Man. Insofar as the apple floating in midair deepens the sense of mystery, it contributes to the painting's evocation of human curiosity.

"Everything we see hides another thing; we always want to see what is hidden by what we see, that it is impossible. Humans hide their secrets too well." -René Magritte

Wednesday, May 13, 2009

The Peter Principle

The Peter Principle says, "In a Hierarchy Every Employee Tends to Rise to His Level of Incompetence." Members are promoted so long as they work competently; sooner or later they are promoted to a position at which they are no longer competent (their "level of incompetence"), and there they remain, being unable to earn further promotions. Peter's Corollary states that "in time, every post tends to be occupied by an employee who is incompetent to carry out his duties" and adds that "work is accomplished by those employees who have not yet reached their level of incompetence".

An example:

If you're a proficient and effective accountant, you're most likely demonstrating peak competence in your job right now. Your valuable contributions earn you a promotion to a management position, where you now do few of the original tasks that gained you acclaim. If managing turns out not to be your strength, promotions stop, and there you stay until you retire.

A dramatic example of the Peter Principle at work is Michael Scott from The Office. Michael is frequently shown to be an impressive salesman but an utterly inept manager. He has been promoted to his level of incompetence, where he will now remain.

Monday, May 11, 2009

Going to the Movies

Here's a situation we've all seen before: suppose Alice and Bob have to decide whether to go to the movies to see a chick flick, and each is free to decide whether to go. Alice's first priority is to be with Bob, and her second is that she thinks it is a good film; Bob's first priority is for Alice to see it, but he would rather not go himself. The personal preference orders might then be:
  • Alice wants: both to go > neither to go > Alice to go > Bob to go
  • Bob wants: Alice to go > both to go > neither to go > Bob to go
What should they choose? One thing that they shouldn’t choose is Bob to go alone--this is everybody's least favored outcome. There are good arguments to make for both going or just Alice going, but they shouldn't choose neither going. Both prefer going together to not going. Any option where there are other possibilities that all parties prefer is ‘Pareto dominated’. It seems obvious that whatever system we want making our choices for us shouldn’t be choosing options that are Pareto dominated.

How will these preferences play out? Bob will not go on his own: he would never set off alone, and if for some reason he did, Alice would follow, because she prefers both to go > Bob to go. Alternatively, if Alice decided to go alone, Bob would not join her: Alice going alone is his most desired outcome, since he prefers Alice to go > both to go. And if Bob chooses not to go, Alice will want to stay home too, because she prefers neither to go > Alice to go.

Herein lies the rub: even though Alice and Bob both prefer both to go > neither to go, if they choose individually, neither will end up going, because neither prefers going alone to both staying home. Bob might try to convince Alice to go, since both scenarios he prefers over neither going have Alice going; but Alice can't convince Bob to go, because as soon as she's going, Bob can achieve his optimal outcome by staying home.
  • Probable outcomes: neither go > Alice goes > both go > Bob goes.
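The reasoning above can be checked by brute force: encode each preference order as a payoff (4 = most preferred) and test every pair of choices for profitable unilateral deviations. The encoding below is an assumption for illustration.

```python
from itertools import product

# Outcome of (Alice's choice, Bob's choice):
OUTCOME = {('go', 'go'): 'both go', ('go', 'stay'): 'Alice goes',
           ('stay', 'go'): 'Bob goes', ('stay', 'stay'): 'neither goes'}

# Preference orders from the bullets above, as payoffs (higher = better).
ALICE = {'both go': 4, 'neither goes': 3, 'Alice goes': 2, 'Bob goes': 1}
BOB = {'Alice goes': 4, 'both go': 3, 'neither goes': 2, 'Bob goes': 1}

def equilibria():
    """Outcomes where neither player can do better by switching alone."""
    stable = []
    for a, b in product(('go', 'stay'), repeat=2):
        other_a = 'stay' if a == 'go' else 'go'
        other_b = 'stay' if b == 'go' else 'go'
        a_ok = ALICE[OUTCOME[(a, b)]] >= ALICE[OUTCOME[(other_a, b)]]
        b_ok = BOB[OUTCOME[(a, b)]] >= BOB[OUTCOME[(a, other_b)]]
        if a_ok and b_ok:
            stable.append(OUTCOME[(a, b)])
    return stable
```

equilibria() returns ['neither goes']: the only outcome stable against individual choices is both staying home, just as the analysis predicts.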

Friday, May 8, 2009

Code of Silence: Two Perspectives

Whether it's the Mafia or the Underground Railroad, codes of silence are vital to some organizations' continued existence. This post focuses on one aspect of the code of silence: no snitching. Are tight-knit groups of individuals who trust one another able to work more productively, or is the code of silence just a clever ploy to get kingpins off the hook while their subordinates take the fall?

Take One: Manipulated Minions
Codes of silence are philosophically inconsistent. The implications of a code of silence are all about putting the group over the individual, but at the same time they limit the group so narrowly that they do not serve the greater good in any general sense.

If you believe in a way of conduct, you shouldn't have to conceal it--whenever people rely on secrecy, it's a warning they might be up to no good. Granted, the Underground Railroad might be an exception to this rule, but it was underpinned by a broader position: that slavery was wrong. Criminal organizations don't have that kind of broad message underneath: they're not trying to encourage everyone to become a criminal the way the Underground Railroad was trying to encourage everyone to become an abolitionist.

For criminal organizations, the only underlying interest is personal gain--which is why a code of silence is inconsistent: it advocates the good of the group over protecting the self. The only way to salvage it is to narrowly tailor it to mean 'protection of this group's interests is more important than the individual's but less important than everyone else's,' but since the group's interest in question is self-interest, it doesn't add up to a cohesive position.

Codes of silence are only good for the kingpin, not the individual and certainly not society. Kingpins promulgate the code because it favors them, and they often artificially increase the incentives to cooperate by threatening to retaliate against people who break it. Silence is not a philosophically sound system, which is why it can't be sustained without killing people or otherwise redistributing the costs. From a game theory standpoint, it's a preferable position to advocate if you RUN a criminal organization, because it protects your self-interest while masking itself as a broader principle.

Take Two: Real Life Prisoner's Dilemma
For anybody on earth not familiar with the prisoner's dilemma, here's a little recap:

Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal. If one testifies (defects from the other) for the prosecution against the other and the other remains silent (cooperates with the other), the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?

OUTCOMES                  | Prisoner B stays silent    | Prisoner B betrays
Prisoner A stays silent   | Each serves 6 months       | A: 10 years; B: goes free
Prisoner A betrays        | A: goes free; B: 10 years  | Each serves 5 years

What's remarkable about the prisoner's dilemma is the way it exploits our self-interest. No matter what the other player does, a player always gains a greater payoff by defecting. Since defecting is more beneficial than cooperating in every situation, all rational players will defect--even though the sentence would be only one-tenth as long if both cooperated.
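The dominance argument can be written out directly. The sketch below encodes the sentences from the setup (lower numbers are better) and checks that betraying is the better move for Prisoner A whatever Prisoner B does; the dict encoding is an assumption for illustration.

```python
# (A's move, B's move) -> (years served by A, years served by B)
SENTENCE = {
    ('silent', 'silent'): (0.5, 0.5),   # both cooperate: 6 months each
    ('silent', 'betray'): (10, 0),      # A takes the fall
    ('betray', 'silent'): (0, 10),      # B takes the fall
    ('betray', 'betray'): (5, 5),       # mutual betrayal
}

# Whatever B does, A serves less time by betraying:
for b in ('silent', 'betray'):
    assert SENTENCE[('betray', b)][0] < SENTENCE[('silent', b)][0]
```

Betraying is a dominant strategy: the assertion holds in both columns of the table, which is exactly why two rational prisoners end up with five years each instead of six months.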

A code of silence is one possible solution to the prisoner's dilemma. Kingpins may be able to see the best outcome because they have no personal stake in the pot (experience helps too; a good way to induce cooperation is to have people play the game over and over again).

Tuesday, May 5, 2009

Knowledge and the Gettier Problem P2

This is part 2 of Knowledge and the Gettier Problem. This addresses some of the things left in the comments section, so read that as well.

I have to agree with Graham that the problem lies with the justification for the propositions. It is, in a sense, logically spurious, but I think that statement alone doesn't address whether "a man with 10 cents in his pocket will get the job" and "Jones owns a Ford or Brown is in Barcelona" should be considered knowledge IF Jones actually got the job or if Jones actually owned a Ford (and, in my opinion, I think they should be considered knowledge--to say otherwise would be to neglect the fact that logical operations are truth-preserving). I started writing this before the comments came in, so it might seem to backtrack a little bit, but here it is. Oh, and I apologize that this is somewhat incomplete... I've just lost the will to go on.

My first intuition upon learning of this problem was that the justification condition of the JTB conditions was the culprit. One option is to say that Smith was not actually justified in believing P1 (in both cases). In case 1, Smith was justified only to the extent that one can be justified by someone else's claim. But how is this different from believing that pi has infinitely many decimal places without a pattern when told so by a mathematician? Or from being taught that Australia is a continent southeast of Asia by our 7th grade geography teacher? We would certainly ignore claims made by a compulsive liar or a delusional schizophrenic, but we consider both of these cases to involve reliable, and thereby justifiable, sources of knowledge. Even if we ascribe such reliability to the hiring manager (besides, the hiring manager really was telling the truth!), the problem remains. If you want to say that we in fact don't gain knowledge when told facts by others, and that we gain knowledge only when we ourselves verify that a belief is true, then I will have to say that I don't know that the world is round, that the sun is the center of our solar system, and so on. But wait, aren't all of these knowledge? I think the contradictions we arrive at by considering other people unviable sources of justification let us safely rule this one out, since we are appealing to the common-sense notion of knowledge.

But what about the justification of facts that could change? I think this might answer an aspect of John's concerns with this problem. Imagine that I am outside and it is raining. I hope you would at least grant that my belief "it is raining" is a JTB (if you don't, you have issues that can't be resolved here, or anywhere else for that matter). Say that I find shelter in a sound-proof shed with no windows or other means of seeing or hearing the outside. When I go inside, you might say that my means of justification have been removed, but does my knowledge that it is raining cease to be knowledge? Common sense tells us that belief in the proposition "it is raining" is still knowledge even when immediate justifications have been removed, as evidenced by the fact that we often ask people "is it still raining?" and expect their knowledge of the weather outside to be sound. Even if you claim that knowledge stripped of immediate justification is somehow lesser than knowledge with immediate justification, I don't think you can reasonably argue that it is no longer knowledge (though you could say that someone inside who claims to know that "it is raining" is actually claiming to know only that "I believe it is raining"... and so on and so on... and I don't know if I want to explore this here, at least not in this post). What you can say, though, is that the knowledge is valid only within a reasonable range: I don't expect "it is raining" to count as knowledge if the last justification for it was removed 2-3 days ago. Ultimately, the predictive powers of knowledge are limited, but I think we all have to agree, by common-sense knowledge, that some predictive propositions are knowledge (whatever the extent of their predictive powers may be).

Anyway, I'd like to focus on the logical intricacies of this problem rather than the metaphysical, sense-datum, intentionality, etc., issues--I think any investigation into the nature of belief or truth will ultimately summon questions in the aforementioned fields, which ultimately don't have definitive answers (we really just need to take the 'leap of faith' in believing that our experiences are causally related to an objective world--or at least consistent with something like it).

Let's first consider the logical steps we take in going from P1 to Pn. In case 1, we take the instantiation of someone and make an existential claim. Suppose we have the following:

Q1-Bessy is a cow.

Given this, we can make the existential claim that:

Q2-There exists a cow.

If Q1 is true and we are justified in believing that it is true, then Q2 should be equally true and justified. There's really no way around this, as the inference is logically valid. So here we can see that entailment preserves the truth and justification conditions by virtue of Q1's entailing Q2, yet Gettier's first case leaves us feeling uneasy. Even when we add a third proposition that has, as its subject, the same subject as Q1, the justification and truth should not change.

But what I think makes this peculiar is the generalization of Q1 to Q2. Q2, like existential propositions generally, claims that a certain set is non-empty (namely, the set of things that have bovinity, if bovinity is the essential property of bovines). If at least one member of this set exists, then Q2 is true. Since Q1 entails the existence of a member of the set defined by Q2, Q2 becomes true by virtue of its extension from Q1. In addition, the existence of any other member of the set defined in Q2 will also make Q2 true (e.g., "Bo is a cow" will also make Q2 true, even if Bessy undergoes a transpecies operation and turns herself into a goat).
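The step from Q1 to Q2 is just the rule of existential introduction, which can be stated precisely in a proof assistant. Here is a minimal sketch in Lean, where `Animal`, `Cow`, `bessy`, and `bo` are illustrative names of my own choosing:

```lean
-- Existential introduction: from Q1 ("Bessy is a cow") we derive
-- Q2 ("there exists a cow"). Any other witness, such as Bo, works too.
variable (Animal : Type) (Cow : Animal → Prop) (bessy bo : Animal)

example (h : Cow bessy) : ∃ x, Cow x := ⟨bessy, h⟩
example (h : Cow bo)    : ∃ x, Cow x := ⟨bo, h⟩
```

The second example is the formal version of the point above: Q2 stays true so long as *any* member of the set exists, regardless of what happens to the original witness.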

It is here, I think, that the uneasiness in the first case of the Gettier problem can be explained. Smith knew that P2-Jones was a member of a set (i.e., men with 10 cents in their pocket) and that P1-Jones would get the job. So when the instantiation (Jones) is generalized (a man with 10 cents in his pocket), the generalization is justified and true only because the instantiation is justified and true. In other words, P1 and P2 together prescribe the justification and truth of P3. I would, then, like to say that P3 is justified and true only insofar as P1 and/or P2 is justified and true. P3 is knowledge only when P1 and/or P2 are also knowledge, and then ceases to be knowledge if P1/P2 cease to be knowledge (the idea of P1/P2 no longer being knowledge raises the question of what qualifies as justification, as mentioned in the first paragraphs of this post).

That is not to say that the likes of P3 could never by themselves satisfy the JTB conditions. If Smith knew that he had 10 cents in his pocket as Jones did, and knew that Smith and Jones were the only ones being considered for the job, P3 would be by itself justified and true. However, in Gettier's first case, Smith was only justified in believing that Jones would get the job and that Jones had 10 cents in his pocket. In Gettier's case, then, P3 is knowledge only if P1/P2 are knowledge. When P1/P2 fails (or if you want to say that it never was knowledge, then you can say that too), so does P3.

I think the constraints are similar for the second case. The only difference here is that instead of making an existential claim that includes a certain category of things as members of the set, we are adding an arbitrary proposition to the set. I think we can say that both of Gettier's cases involve moving from a proposition about an instantiation to a proposition about a set of possible instantiations.

Anyway, my conclusion is that P3 is knowledge if and only if P1 and P2 are knowledge. I say this because P3 is an extension of P1 and P2, and little else. If you want to say that P1 and P2 are knowledge because they would have been true, then so is P3, but only up to the point when the hiring manager changed his mind (unless you want to say P1 is still knowledge even after that fact). If you want to say that P1 and P2 are not knowledge because they are predictive by nature and thus subject to falsity, then P3 is also not knowledge. You can discuss which of these schools of thought you think is best, but I prefer to stay away from these tough questions.

This solution is akin to Alvin Goldman's solution (or rather, his epistemological theory that tries to buff up the justification condition to avoid the Gettier problem), but the scope of his theory is much greater. That is, he tries to explain the connection between the world and the contents of our experience (I think), and then includes a little tidbit about how something in the experience must cause a justified belief. It is my understanding, anyway, that he runs into a bit of trouble when defining the necessary/sufficient causal relationship between experience and belief. I tried to avoid this burden by compartmentalizing the peculiarity to the logical transformations of knowledge, regardless of whether the JTB conditions could ever be satisfied (of course, I take for granted that they can).

This is long, confusing, and unclear, and I apologize. I'll try to write more concisely and in a more timely manner in the future.

Saturday, May 2, 2009

The Dollar Auction: Irrational Escalation of Commitment

The dollar auction is a game designed by economist Martin Shubik to illustrate a paradox brought about by traditional rational choice theory in which players with perfect information are compelled to make irrational decisions.

The setup involves an auction for a one dollar bill with the following rule: the dollar goes to the highest bidder, who pays the amount he bids. The second-highest bidder also must pay the highest amount that he bid, but gets nothing in return. The second highest bidder might not have to pay on eBay, but in many real contests, both sides end up paying but only one gets the prize--like lawsuits, sports competitions, gambling and political campaigns.

Bidding when the price is below fifty cents or so seems harmless because it's an obvious deal to buy a dollar for any amount less. The twist becomes clear around the time the high bid reaches 80 cents. People start to think about how the second rule, the one requiring the loser to pay, would affect incentives. What might the second-highest bidder think at this stage? He is offering 70 cents but being outbid. There are two choices he could make:

  • do nothing and lose 70 cents if the auction ends
  • bid up to 90 cents, and if the auction ends, win the dollar, and profit 10 cents

But this action has an effect on the person bidding 80 cents, who is now the second-highest bidder. This person will now make a similar calculation. He can either do nothing and lose 80 cents if the auction ends, or he can raise the bid to a dollar and have a chance of breaking even. Again, bidding higher makes sense. Thinking more generally, it always makes sense for the second-highest bidder to increase the bid.

Soon people will bid more than one dollar and fight over who will lose less money. It is the incentives that dictate this weird outcome. Consider an example when the highest bid is $1.50. Since the high bid is above the prize of $1, it is clear no new bidder will enter. Hence, the second bidder faces the two choices of doing nothing and losing $1.40, or raising the bid to $1.60 to lose only 60 cents if the auction ends.

In this case, it makes just as much sense to limit loss as it does to seek profit. The second-highest bidder will raise the bid. In turn, the other bidder will perform a similar calculation and again raise the top bid. This bidding war can theoretically continue indefinitely. In practical situations, it ends when someone chooses to fold. This game is played at Stanford in economics classes, and it's not uncommon to see the game end anywhere between five and ten dollars.
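The incentive comparison driving the escalation can be sketched in a few lines of code. This is a minimal illustration of the logic described above, not anything from Shubik; the $1 prize matches the game, but the function name and the 10-cent increment are my own assumptions:

```python
# Minimal sketch of the dollar auction's escalation incentive.
# Assumptions (mine, not from the post): bids rise in 10-cent increments.
PRIZE = 1.00
STEP = 0.10

def should_raise(my_bid, rival_bid):
    """True if outbidding the rival loses less than folding does.

    Folding costs the second-highest bidder their standing bid; raising
    costs (new bid - prize) if the auction then ends (negative = profit).
    """
    loss_if_fold = my_bid
    loss_if_raise = (rival_bid + STEP) - PRIZE
    return loss_if_raise < loss_if_fold

# At 70 cents facing 80 cents: raising to 90 cents yields a 10-cent profit.
print(should_raise(0.70, 0.80))  # True
# Even at $1.40 facing $1.50: losing 60 cents beats losing $1.40 outright.
print(should_raise(1.40, 1.50))  # True
```

Because the comparison comes out the same way at every step, the second-highest bidder always prefers to raise, which is exactly the runaway escalation the game is designed to demonstrate.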

Here are some other real life examples of the irrational escalation of commitment:

  • After a heated and aggressive bidding war, Robert Campeau ended up buying Bloomingdale's for an estimated 600 million dollars more than it was worth. The Wall Street Journal noted that "we're not dealing in price anymore but egos." Campeau was forced to declare bankruptcy soon afterwards.
  • Supporters of the Iraq War have used the casualties of the conflict in Iraq since 2003 to justify years of further military commitment. This rationale was also used during the sixteen-year Vietnam War, another military example of the logical fallacy.
  • Two competing brands often end up spending money on advertising wars without either increasing market share in a significant manner. Though the most commonly cited examples of this are Maxwell House and Folgers in the early 1990s, this has also been seen between Coke and Pepsi, and Kodak and Polaroid.
  • Shakespeare's Macbeth comments, "I am in blood stepped in so far that, should I wade no more, returning were as tedious as go o'er." The metaphor represents Macbeth's crimes and rather than stop committing crimes (presumably, for fear of damnation) Macbeth says that he has "passed the point of no return" and might as well continue, even though it will inevitably lead to his downfall.

Friday, May 1, 2009

Three Brief Logic Riddles

The Problem: How can you throw a ball as hard as you can and have it come back to you, even if it doesn't bounce off anything? There is nothing attached to it, and no one else catches or throws it back to you.

Answer: Throw the ball straight up in the air.

The Problem: Imagine there are 3 coins on the table: gold, silver, and copper. If you make a truthful statement, you will get one coin. If you make a false statement, you will get nothing.
What sentence can guarantee you getting the gold coin?

Answer: "You will give me neither the copper nor the silver coin." If the statement is true, then you must receive a coin, and it can only be the gold one. If it were false, its negation ("you will give me either the copper or the silver coin") would have to be true, but handing over either of those coins would violate the rule that a false statement earns nothing. Therefore the statement must be true, and you must get the gold coin.
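The case analysis in the answer can be checked mechanically by enumerating every possible outcome and keeping only those consistent with the rules. A small sketch (the outcome labels are my own):

```python
# Enumerate outcomes of the coin riddle given the statement
# "You will give me neither the copper nor the silver coin."
# Rules: a true statement earns exactly one coin; a false one earns nothing.
outcomes = ["nothing", "gold", "silver", "copper"]

def consistent(outcome):
    statement_true = outcome not in ("silver", "copper")
    got_a_coin = outcome != "nothing"
    # The rules demand: a coin is handed over iff the statement was true.
    return statement_true == got_a_coin

print([o for o in outcomes if consistent(o)])  # ['gold']
```

Only the gold-coin outcome survives: giving nothing would make a true statement go unrewarded, and giving copper or silver would reward a false one.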

The Problem: Hotel Infinity has infinite rooms and infinite guests. If a traveler came by looking for a vacant room how could the hotel provide him with one without kicking out any guests?

Answer: Tell each guest to move to the next room, so that the guest in room n moves to room n+1. Every guest still has a room, and room #1 is now open.
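The reassignment can be written down explicitly; here is a tiny sketch of the shift on the first few rooms (the function name is mine):

```python
# Hotel Infinity: the guest in room n moves to room n + 1, so every guest
# keeps a room while room 1 becomes vacant.
def new_room(n):
    return n + 1

first_rooms = [new_room(n) for n in range(1, 6)]
print(first_rooms)  # [2, 3, 4, 5, 6] -- room 1 is now free
```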