Justified true belief
S knows that P, iff:
1. P is true,
2. S believes that P is true, and
3. S is justified in believing that P.
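For concreteness, the three conditions can be written out as a schema. Here is a minimal sketch in Lean; Subject, Believes, and Justified are placeholder names standing in for whatever account of belief and justification you prefer, not part of the original formulation.

-- The tripartite (JTB) analysis as a propositional schema.
-- Subject, Believes, and Justified are abstract placeholders.
axiom Subject : Type
axiom Believes : Subject → Prop → Prop
axiom Justified : Subject → Prop → Prop

-- "S knows that P" iff P is true, S believes that P,
-- and S is justified in believing that P.
def JTB (S : Subject) (P : Prop) : Prop :=
  P ∧ Believes S P ∧ Justified S P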
Winner of the thread: whoever gives the clearest exposition of what’s wrong with this and how to improve it.
Labels: philosophy of knowledge

where’d you get yer example from?
I don’t think the conditions you describe are enough to constitute real knowledge, even though it is one of the textbook examples when it comes to epistemology.
I don’t remember where I read it, but I do recall reading an additional point that matters for this chain of “if and only if”s: the third condition, “S is justified in believing P”, must not depend on any external evidence that justifies S’s belief in P.
For example: my friends and acquaintances know that I read a great many books. If I were to bring somebody who has never been to my place along to the home of one of my friends, they would comment that I sure have quite a lot of books. Even if that is true, they reached that conclusion by witnessing the books at somebody else’s home.
The Gettier problems/cases.
The money for exposition certainly goes to Gettier. I don’t think you are going to find a solution, or at least not one that isn’t controversial.
http://www.geocities.com/black_tim/Lewis.pdf
This paper by David Lewis is the best treatment of the subject I’ve ever seen.
Sorry, that’s a commentary. This is the paper:
phil1reading.pdf
http://philosophy.ucsd.edu/faculty/rarneson/Courses/lewiselusive
Sorry, “know” is an ordinary English word, and is not defined as you suggest. Everyone knows (groan!) of cases where he “knew” something and it turned out to be untrue. Rather than construct a false definition of “know”, shouldn’t philosophers invent a new word, or adapt an old one, for the activity they wish to refer to? I suggest “Razza witted S”.
The statement is wrong because assigning binary values to “believes,” “knows,” and “justified” is not a terribly useful abstraction. The poor fit becomes more obvious the more we know about the human brain, but the given counterexamples don’t really require recent observations.
>>Sorry, “know” is an ordinary English word, and is not defined as you suggest. Everyone knows (groan!) of cases where he “knew” something and it turned out to be untrue.
That definition is closer to “believe” than it is to “know”. Usually, people don’t say someone knew S when in fact S wasn’t true. They say that he (falsely) believed S.
Bob_R – nice call on the Bayesian analysis, but how far does that get you?
http://plato.stanford.edu/entries/epistemology-bayesian/
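For anyone wondering what a Bayesian analysis looks like in miniature: instead of a binary “believes”/“knows”, a degree of belief between 0 and 1 gets updated as evidence comes in. A minimal sketch in Python, where the starting credence and the likelihoods are made-up numbers, purely for illustration:

def bayes_update(credence, p_evidence_if_true, p_evidence_if_false):
    # One application of Bayes' rule: the new degree of belief in a
    # proposition P after observing one piece of evidence.
    numerator = credence * p_evidence_if_true
    return numerator / (numerator + (1 - credence) * p_evidence_if_false)

# Start agnostic about P, then see three pieces of evidence, each four
# times likelier if P is true than if P is false.
credence = 0.5
for _ in range(3):
    credence = bayes_update(credence, p_evidence_if_true=0.8, p_evidence_if_false=0.2)

print(round(credence, 3))  # 0.985 -- strong, but never the flat 1 of a binary "knows"

The only point of the sketch is the contrast with assigning binary values to “believes” and “knows”.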
One example of a justified true belief that is not knowledge is someone who is dreaming of an event that is actually occurring. Edward Lear gave a specific illustration:
There was an old man from Peru
Who dreamt he was eating his shoe
He awoke in the night
In a terrible fright
And found it was perfectly true
JuJuby, they only say “believe” after the event. Before falsity is established, it’s quite common to say “know”. If they only suspect that the thing is true, they often say “I know it for a fact”.
>>JuJuby, they only say “believe” after the event. Before falsity is established, it’s quite common to say “know”. If they only suspect that the thing is true, they often say “I know it for a fact”.
They may “say” that, but we usually consider that a case of word misuse. We don’t say that the people in the Flat Earth Society know that the earth is flat, though they may say they know. We say that they think they know but don’t really, or that they falsely believe that’s the case.
I wouldn’t necessarily suppose his point is Bayesian. Its genesis could just as easily follow from the insights of Quine, Lakatos, Putnam, whoever: that knowledge can’t be defined by a formal decision procedure; that there is a strange obsession in 20th-century philosophy with collapsing reasoning and human faculties into truth-functional logic; that uncertainty and fallibility can’t and won’t be ruled out, because scientific judgements float on an ocean of unscientific assumptions born of the normal limits on rational beings; and that every pretension to fact presupposes a web of knowledge, some of it more likely, solid, reasonable, or persuasive according to good criteria that nevertheless cannot be derived from first principles, however worthy it is to pursue a more precise epistemological perspective.
Any undergrad major in philosophy (Anglo-American analytic philo, not continental “theory” BS) would be able to explain what’s wrong with this.
Justified true belief (JTB) basically started with Plato and was accepted by epistemologists until ’63, when Edmund Gettier posed his famous Gettier problem. He provided a few very basic and easily understood counterexamples in which all three conditions of JTB are fulfilled, yet one would have trouble calling them cases of knowledge.
anyway, just look it up. no prize for winning the thread anyhoo.
http://en.wikipedia.org/wiki/Justified_true_belief#The_Gettier_problem
Also, what’s up with Jurgen Habermas’s face? It’s funny that he’s all about reason as social, communicative discussion and action, but it’s hard to see how anyone could stand looking at his face and talking with him.
@JuJuby: who is this “we” of whom you speak?
The common linguistic community in which these types of semantic issues are adjudicated. Just because someone uses a word in a certain way does not make its meaning that way. Just because someone is using “know” in roughly the same way as “believe” does not make it so. Most members of the English-speaking linguistic community would not consider beliefs that are wrong to be knowledge. Though people who have wrong beliefs may claim they know, that don’t make it so.
I would say that the first condition — P is true — is not a necessary condition. That one believes on justifiable grounds is good enough. For instance, I know the earth is round because I once travelled around it. Or because I have seen pictures of earth taken from the moon. Etc.
Luke: I would say that the first condition — P is true — is not a necessary condition. That one believes on justifiable grounds is good enough. For instance, I know the earth is round because I once travelled around it. Or because I have seen pictures of earth taken from the moon. Etc.
Say that someone has observed the repeated flipping of a coin 10,000 consecutive times. On each of those occasions the coin landed heads. He believes, with very good justification, that the coin is rigged and that the probability of it landing heads on the next flip is close to 1. Unbeknownst to him, the coin is actually fair and the 10,000 straight heads has simply been a crazy coincidence.
Does this person know that the probability the coin will land on heads is close to 1?
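For a sense of the numbers in this example, here is a quick Python sketch. It assumes “rigged” means the coin always lands heads and gives the observer a one-in-a-million prior that the coin is rigged; both assumptions are illustrative, not the commenter’s.

from fractions import Fraction
import math

# Exact chance of 10,000 straight heads on a genuinely fair coin.
p_run_fair = Fraction(1, 2) ** 10000
print("log10 P(10,000 heads | fair) =", round(10000 * math.log10(0.5), 1))  # about -3010.3

# A Bayesian observer with a one-in-a-million prior that the coin is rigged
# to always land heads (both the prior and this reading of "rigged" are
# assumptions made for illustration).
prior_rigged = Fraction(1, 10**6)
posterior_rigged = prior_rigged / (prior_rigged + (1 - prior_rigged) * p_run_fair)

# How far the posterior falls short of certainty: roughly 10**-3004.
shortfall = 1 - posterior_rigged
print("log10(1 - P(rigged | 10,000 heads)) =",
      round(math.log10(shortfall.numerator) - math.log10(shortfall.denominator), 1))

So the belief that the next flip is all but certain to land heads is about as well justified as an empirical belief gets, and it is still false; whether that amounts to knowledge is exactly the question.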