OLD Methodology/Theory Debates
Hello bloggers! As requested, I have turned off comment moderation. But, if it becomes a personal free-for-all I will ask someone else to manage the blog because I do not have time to devote to continuously deleting comments.
704 Comments:
Not in and of themselves, obviously, but as an integral part of causal analysis.
I do not see integral in Beer's formulation.
structural explanations that, explained with more sophisticated mathematical and statistical techniques
Seems integral to me. By the way, why does he start the sentence with "Yet"? So he rejects scientism and behaviorism. YET, a more realistic approach, with sophisticated techniques, spelled progress.
The use of the word "yet" clearly shows that he doesn't equate sophisticated techniques with behaviorism.
You fail, once again. It's so funny how the text you cite doesn't even support your position, and in many respects contradicts it.
Aw, the joys of arguing with a qually...
I know you really, REALLY, hate it that Beer added the part about sophisticated techniques. I understand. It completely undermines your position, and this has been proven in this discussion. It's really funny watching you spinning and slipping and trying your hardest to attach your biased view to Beer's words.
At the end of the day, he lauds sophisticated mathematical and statistical techniques, which helped spell progress for causal analysis. It's very clear prose. Funny how Beer doesn't think those techniques obfuscate anything. Then again, he is a social scientist.
Man, you must really hate the fact that he used those words...:))
What's amazing is that the qually still has yet to find a single source that supports his position. The best he can do is lie.
So how do we describe him? What's the word I'm looking for? Oh yeah.
Fail.
structural explanations that, explained with more sophisticated mathematical and statistical techniques
Seems integral to me. By the way, why does he start the sentence with "Yet"? So he rejects scientism and behaviorism. YET, a more realistic approach, with sophisticated techniques, spelled progress.
You fail, once again.
Why do you insist on acting like a jerk? (I am certain this is not how you normally act.)
In any event, Beer uses "yet" to indicate that the move away from scientism (read: laws of social/political behavior) undermined the viability (read: theoretical justification) of behavioralism (read: quantitative analytical techniques), YET this move led to more justifiable and analytically useful uses of quantitative analytical methods. This sounds very much like the ILP.
behavioralism (read: quantitative analytical techniques)
Get it through your thick head, ILP: behavioralism does not mean quantitative methods.
Fail.
Get it through your thick head, ILP: behavioralism does not mean quantitative methods.
For Beer, in the specific context he was writing, it does.
For Beer, in the specific context he was writing, it does.
What?
No, no, no.
Behavioralism is a subset of quantitative methods. One that many other quants have thoroughly criticized. To say that they are equal is absurd and demonstrates a deep lack of knowledge.
No, no, no.
Behavioralism is not even a subset of quant methods. It is a general theoretical perspective.
Saying that behavioralism is a subset of quant methods is like saying that rat-choice is a subset of quant methods.
No, no, no.
Behavioralism is not even a subset of quant methods. It is a general theoretical perspective.
Saying that behavioralism is a subset of quant methods is like saying that rat-choice is a subset of quant methods.
Right.
I should have been more careful. I hope I have not confused ILP ;)
In any event, Beer uses "yet" to indicate that the move away from scientism (read: laws of social/political behavior) undermined the viability (read: theoretical justification) of behavioralism (read: quantitative analytical techniques), YET this move led to more justifiable and analytically useful uses of quantitative analytical methods.
Are we reading the same chapter?
Is this really how you conduct qualitative research? How you interpret policy papers, biographies, press releases, etc?
I'm speechless. Again, I asked you earlier to interpret Beer's discussion of Lowell's work and his own. You say that Beer equates behavioralism with quant methods, so I ask: does he consider Lowell and himself to be behaviorists? Yes or no?
Also, I see that you haven't refuted the claim that Beer's discussion of sophisticated methods contradicts your view. I interpret this silence as your admission that Beer's view differs from yours. Just as a reminder, you claimed that sophisticated methods obfuscate, while Beer embraces those methods. You two couldn't be more in opposition.
No, no, no.
Behavioralism is not even a subset of quant methods. It is a general theoretical perspective.
You say that Beer equates behavioralism with quant methods, so I ask: does he consider Lowell and himself to be behaviorists? Yes or no?
I think what you are failing to understand is that in critiquing behavioralism Beer is arguing for what he views as the appropriate use of quantitative analytical methods.
But where does he say that without laws, quant methods are of limited utility? That's the claim that you've been asked to sustain.
It seems to me that Beer is arguing that regardless of whether laws exist or not, sophisticated methods can be of great utility, as long as they are coupled with more realistic analyses.
sophisticated methods can be of great utility
I do not see great in Beer's thinking about quantitative analytical methods. Yes, they have utility, but I do not see his opinion going further.
But where does he say that without laws, quant methods are of limited utility?
Scientism: "analyzing the laws of . . ."
Beer wrote: "The shift with regard to behavioralism had negative and positive aspects. The sharp rejection of scientism was the most prominent feature. It was accompanied by a clearer recognition of the inadequacy of behaviorism."
spelled progress. I see that as a great feature. Maybe you're against progress?
Beer wrote: "The shift with regard to behavioralism had negative and positive aspects. The sharp rejection of scientism was the most prominent feature. It was accompanied by a clearer recognition of the inadequacy of behaviorism."
I don't see a discussion of quant methods in that passage.
Yes, they have utility, but I do not see his opinion going further.
So you clearly don't think that he's against them. So why did you cite him?
It's almost as if you had never read the passage before.
So you clearly don't think that he's against them.
Who is against quantitative methods?
I don't see a discussion of quant methods in that passage.
Among all subfields (including Congress), behavioralism is the most quantified and seemingly predicated on the assumption of the existence of laws of social/political behavior.
Among all subfields (including Congress), behavioralism is the most quantified and seemingly predicated on the assumption of the existence of laws of social/political behavior.
As a Congress scholar I am surprised to hear this. Oh wait...it is just another instance of ILP having no idea what he is talking about.
If I had to guess, I would say that of all quantitative work (theory and empirics), behavioralism makes up less than 1/3 of the total, probably less than 1/4 if you include pure methods work.
Why do you constantly put forth conjecture as fact that you have absolutely no evidence for?
Further behavioral work does not need to be quantitative. Why are you trying to conflate the two?
Behaviorism does not equal behavioralism.
Behaviorism does not equal behavioralism.
Please explain the difference?
Qualitative methods have largely failed in political science. Perhaps their only utility for causal inferential purposes in our discipline is in probing and illustrating the findings of quantitative political research. This is certainly important, but qualitative methods must be at the service of quantitative methods.
The qually's very posts have demonstrated this. As he has repeatedly shown, qualitative work is less scientific, less cumulative, and less precise than quantitative research. Qualitative scholars such as the qually willfully misrepresent facts and statements in order to defend their points. Errors, biases, and misstatements abound. This makes it impossible for political scientists to rely on qualitative research to demonstrate causal claims.
By contrast, quantitative methods (when done properly ) are attuned to the fundamental problem of causal inference. They are grounded in a philosophy of science and causal inference that extends from Mill to Neyman to Rubin and beyond. When properly done, quantitative research is replicable, something that most qualitative work is not. Variable codings are rigorous and clear, not amorphous and ad hoc. Quantitative methods are often difficult, but that is because reality is complex and science is hard. They are clear to anyone who bothers to learn about them.
We need waste no more time with the qually. He has failed.
Behaviorism was a broad movement in the 20th century. The main aspect relevant for political psychologists is the work on stimulus-response.
You might research a little of this yourself. If you even bothered to type "behavioralism" into Wikipedia you would find the first line cautions that it is different than behaviorism.
So Beer was not talking about behavioralists at all. He was taking on, in part, the way in which behaviorism did not allow things like the context or environment to matter for behavior.
More from Beer: "In behavioralism [emphasis in original] I find . . . 'scientism', . . . the belief that the study of politics can produce laws of political process that have a validity comparable to that of the laws of natural science. . . . From time to time, I would ask colleagues and graduate students who believed in scientism whether, after all this labor, anyone had yet discovered a universal law of political behavior -- with nil results. . . . The hope for [scientism] goes back as far as Francis Bacon, who . . . displayed greater expectation for an empirical science of society than for an empirical science of nature. The nonappearance of the former in comparison with the immense progress of the latter in the course of three centuries must surely suggest that there is something about the substance of social studies that makes it unsuitable to study by the methods of the hard sciences."
Quoted from pp. 54-55
Boy! Do the quants have egg on their face!!!
What does the Beer quote prove? Nothing!
My mistake about Beer referring to behavioralism, not behaviorism. Both terms were being thrown around loosely, and, in contrast to ILP, I like to be precise with terms and concepts.
So Beer was referring to behavioralism - ILP, you proved me wrong there. But in that passage Beer is offering pure conjecture without any evidence. So there is only egg on my face - not the faces of quants.
What does the Beer quote prove?
The Beer quote establishes three points. First, it shows the lie of the quant post that alleged that the book Perestroika! made no repudiation of the notion of laws of political/social phenomena.
Second, the quote demonstrates the lie of those quants who tried to use the Beer chapter as support for quant analytical methods.
Third, it puts to rest the lie that the ILP’s arguments have no pedigree, nor support.
Of course, I expect quants to lie, and to baselessly smear others. Those who benefit from corruption, or hope to, will aggressively deceive and falsely assail those seeking to expose their corruption.
But in that passage Beer is offering pure conjecture without any evidence.
Of course, what Beer is arguing is a patent point of logic -- namely, without laws of political/social phenomena, quant analytical methods are of limited utility.
Along this line I will pose the same query I posed a while back. What is the goal of quant analytical methods in political science? (In the natural sciences the goal of employing such methods is to analyze the laws of physics.) If you cannot answer this question, then you agree with Beer.
What is the goal of quant analytical methods in political science?
I don't understand: how can a method have a goal?
We wouldn't ask "what is the goal of a cross-tab?", "what is the goal of a regression?" etc.
Can you rephrase this?
I don't understand: how can a method have a goal?
We wouldn't ask "what is the goal of a cross-tab?", "what is the goal of a regression?" etc.
Can you rephrase this?
In the physical sciences methodologies are designed and employed to analyze the operation of the laws of physics. Why are quant analytical methods employed in political science?
In the physical sciences methodologies are designed and employed to analyze the operation of the laws of physics.
Again, I'm confused: what is this qualifier about "the laws of physics"?
I mean--- why is it inherently 'right' to employ quant methods to ask "what is the speed of light" but inherently 'wrong' to use them to answer "does democratization lead to economic growth?"
I mean--- why is it inherently 'right' to employ quant methods to ask "what is the speed of light" but inherently 'wrong' to use them to answer "does democratization lead to economic growth?"
In the former case, quant analytical methods produce reliable answers/knowledge; in the latter, they do not.
The Beer quote establishes three points. First, it shows the lie of the quant post that alleged that the book Perestroika! made no repudiation of the notion of laws of political/social phenomena.
From 6/09/2008 9:44 PM:
Beer rejects the rise of "scientism," which he refers to as "the belief that the study of politics can produce laws of political process that have a validity comparable to that of the laws of natural science" (54). So far, so good for ILP.
Could you please get off that high horse of yours? The discussion of Beer started with an interpretation of the chapter whose first point was that he rejected the rise of scientism, i.e., general laws akin to those of the natural sciences.
I guess that was lost on you.
So let me get this straight. If I pull out a chapter from, say, Riker, on the scientific goals of political science, would you say "quals: 1, quants: 1" and continue?
In other words, is finding one (or two or ten) citations in favor of one's position evidence that one's position is 'better' than some other position?
RE: 6:20
I was referring to this post:
Monroe, Kristen Renwick . 2005. Perestroika!: The Raucous Rebellion in Political Science.
New Haven: Yale University Press.
Ahahaha! You clearly haven't even read this book.
is finding one (or two or ten) citations in favor of one's position evidence that one's position is 'better' than some other position?
Beer's chapter is an argument against the dominance of quant analytical methods in political science. Hence, he asserts: "there is something about the substance of social studies that makes it unsuitable to study by the methods of the hard sciences." That he later allows that quant analytical methods do have some utility in political science does not diminish the fact that Beer rejects the idea that we should study political phenomena in the same way that we study matter and energy.
I'm completely blown away by the thought of a tenured prof. claiming that quantitative methods are useless in a non-deterministic world and that any type of behavior with a non-zero variance cannot be explored with quantitative methods.
I've been reading this blog for a while now, and I think I've had it. How can anyone take this seriously?
any type of behavior with a non-zero variance cannot be explored with quantitative methods.
That has never been asserted. You do not understand the arguments at hand.
The idea that quantitative methods are inappropriate for the analysis of any type of behavior that cannot be predicted perfectly is crazy.
There's no way the idea should be taken seriously.
I'm completely convinced that anyone who argues that is just using it as an excuse to justify not learning quantitative methods.
The idea that quantitative methods are inappropriate for the analysis of any type of behavior that cannot be predicted perfectly
Again, you are creating a strawman, and one that is not reflective of any argument.
I'm completely convinced that anyone who argues that is just using it as an excuse to justify not learning quantitative methods.
Quants use their training in statistics as an excuse to not learn about politics and history.
Quants use their training in statistics as an excuse to not learn about politics and history.
So you're not denying it, then. Glad that we are in agreement on that one as well.
I guarantee you that Gary Cox knows more about the intricacies of British politics than any qually does.
I guarantee you that Adam Przeworski knows more about Polish labor politics, and South American political economy, than any qually does.
I guarantee you that John Huber knows more about French politics than any qually does.
Quants study research methods in order to make good causal inferences. Quallies refuse to do so because they are either (1) not smart enough or (2) afraid that real scientific findings will negate their biases and prejudices.
6/14/08 11:01 PM here...
I never said anything about qualitative scholars in general -- just those that use that particular argument.
And just so we're clear, I'm all for methodological pluralism. A good set of case studies can be as informative as any regression.
It's nice to see that you (8:44 AM and 11:14 AM) feel so free to generalize about half the people in the discipline as you do.
Quants study research methods in order to make good causal inferences. Quallies refuse to do so because they are either (1) not smart enough or (2) afraid that real scientific findings will negate their biases and prejudices.
These are just meaningless platitudes. Unfortunately, this kind of rhetoric is too characteristic of quant posts.
It's nice to see that you (8:44 AM and 11:14 AM) feel so free to generalize about half the people in the discipline as you do.
It is true that those earlier posts were too broadly construed. Nonetheless, the astounding lack of knowledge of political philosophy conveyed by quants on this blog can only lead one to the conclusion that the quants here know little about politics and history.
These are just meaningless platitudes. Unfortunately, this kind of rhetoric is too characteristic of quant posts.
The qually doesn't have a response, so once again he refuses to answer. Typical. The qually fails once again.
Quantitative political science (when done correctly) is rigorous, replicable, clear, and scientific. Qualitative research is story-telling--nice and illustrative, but not helpful for scientific progress.
Quantitative political science (when done correctly) is rigorous, replicable, clear, and scientific. Qualitative research is story-telling--nice and illustrative, but not helpful for scientific progress.
The question I would ask is why do you feel this way about quant and qual methods?
What's funny is ILP once claimed to be open-minded and pluralistic. Clearly that was just posturing.
We have been seeing his true colors emerge. And it ain't pretty...
.....I guarantee you that Gary Cox knows more about the intricacies of British politics than any qually does.
I guarantee you that Adam Przeworski knows more about Polish labor politics, and South American political economy, than any qually does.
I guarantee you that John Huber knows more about French politics than any qually does.......
Jesus H. Christ, have some humility, man. Surely there are some historians, or maybe even some journalists (living in, say, England or France), who know more about politics in these countries. Unless, that is, your definition of knowledge of politics = a certain kind of political science.
Your self-assuredness is staggering, and suggestive of a larger ignorance.
Sincerely,
first time lurker
What's funny is ILP once claimed to be open-minded and pluralistic. Clearly that was just posturing.
We have been seeing his true colors emerge. And it ain't pretty...
The following does not strike me as very open-minded or pluralistic. Why do you only demand this of quals?:
Quantitative political science (when done correctly) is rigorous, replicable, clear, and scientific. Qualitative research is story-telling--nice and illustrative, but not helpful for scientific progress.
The idea that "everyone is right and we can all do a magical happy dance of feeling good about ourselves" is not appropriate for scientific inquiry. Some methods are better for causal inferences than others.
The vast literature on the philosophy of causality in the social sciences (cited previously on this blog and ignored by the qually) stresses the superiority of quantitative methods for uncovering causal relationships. Qualitative methods have no such literature to defend them.
This is why I make the strong case that qualitative research amounts to "story-telling--nice and illustrative." I do not apologize for it.
I will amend a previous statement, however. I wrote that qualitative research is "not helpful for scientific progress." This, I believe, is too harsh an indictment. So long as causal claims are not made based on qualitative evidence alone, qualitative research may complement quantitative methods. See John Gerring's recent book for an explanation of why.
I have no idea whether the person who wrote this claimed to be open-minded and pluralistic.
The ILP, however, once claimed exactly that. Clearly he has shown himself to be close-minded.
It's okay, most people are close-minded, you know. You're no different than most. At least now we're past the posturing stage.
The vast literature on the philosophy of causality in the social sciences (cited previously on this blog and ignored by the qually) stresses the superiority of quantitative methods for uncovering causal relationships.
In the past you have falsely asserted that some specific literature has been cited to support some argument, when in fact no one has cited any literature to support this argument. Again, this is a tactic that has been employed in the past. Being wrong is acceptable, but deception is not.
So long as causal claims are not made based on qualitative evidence alone, qualitative research may complement quantitative methods. See John Gerring's recent book for an explanation of why.
What Gerring actually argues in Case Study Research is that quantitative methods can be applied to case studies, but he does not hold that quantitative methods are necessary, nor necessarily desirable.
Not only have I cited the literature, which you have failed to read, but I have also reminded you of the literature that I cited when you pleaded ignorance.
A lying qually once again demonstrates his intellectual dishonesty.
On page 16 of the draft copy of Gerring's book that I have in my possession, he writes "If the within-case evidence drawn from a case study can be profitably addressed with quantitative techniques, these techniques must be assimilated into the case study method."
Note that this is a strict injunction. They MUST be addressed quantitatively.
"If the within-case evidence drawn from a case study can be profitably addressed with quantitative techniques, these techniques must be assimilated into the case study method."
This is radically different from your original claim that "so long as causal claims are not made based on qualitative evidence alone, qualitative research may complement quantitative methods." Nor does it suggest that quantitative methods are necessary, but only that, if they can be deployed, then they must be.
There is nothing inconsistent here. Gerring has not claimed that qualitative methods can address causality.
You, on the other hand, write "but he does not hold that quantitative methods are necessary, nor necessarily desirable."
This is precisely false. He holds that quantitative methods MUST be used when they can be.
Gerring has not claimed that qualitative methods can address causality.
That is untrue.
He holds that quantitative methods MUST be used when they can be.
This does not mean they are necessary, but only that, when available, they must be employed.
The ILP, however, once claimed exactly that. Clearly he has shown himself to be close-minded.
You are mistaking open-mindedness for agreeing with your position.
Quantitative political science (when done correctly) is rigorous, replicable, clear, and scientific.
A scientist does not mistake what he wants for the truth.
I just viewed the movie Kinsey. Kinsey pioneered the field of sexology -- the study of human sexuality. A character in the movie asks Kinsey "what about love?" To which Kinsey responds, "Love cannot be measured; what cannot be measured cannot be studied scientifically." A hard scientist's view of quant social scientists seeking to measure what is beyond measure -- qualitative concepts.
You are mistaking open-mindedness for agreeing with your position.
No, I'm just calling you out. It's an empty statement to say that you're open-minded but then you think quant methods are of limited utility.
Come on, there's no point denying it. You've shown your true colors. I'm assigning your book in the fall to my grad class on research design. We'll tear it apart :)
A scientist does not mistake what he wants for the truth.
Real scientists debate using evidence and citations, not platitudes.
It's an empty statement to say that you're open-minded but then you think quant methods are of limited utility.
The truth is never an empty statement.
I'm assigning your book in the fall to my grad class on research design. We'll tear it apart :)
If you have a specific work you would like to discuss, please invoke it.
Real scientists debate using evidence and citations, not platitudes.
Interesting, because all many quants offer on this blog is platitudes.
Well, since we don't want to name authors on the blog, I'll refrain from telling you which book I'm assigning. Gee, it's getting real hot these days, isn't it?
Oh the fun I'll have showing my students how NOT to conduct research. Better yet, they'll figure it out themselves once they read it.
Earlier there was a discussion over the use of ordinal numbers by quant social scientists. One appropriate use of such numbers can be seen in evolutionary biology. A researcher assigns numbers in ascending order to the accrual of each evolutionary "breakthrough." So a species's development of oxygen assimilation is labeled 1, lungs 2, fur 3, sight 4, etc. These different developments took place over different time intervals, but the researcher uses ordinal numbers because he is interested solely in the factors, outside of time, that drove the evolutionary process. With such examples, we can see the inappropriateness of utilizing ordinal numbers to label non-accruing concepts, e.g., ideology, justice, democracy, etc.
It is more than a little ridiculous that we are prohibited from critically analyzing specific works. Earlier nothing was said about a particular person's research that should be deemed out of bounds. (I am referring to when someone invoked a specific review of a book. I thought that discussion was interesting.)
A researcher assigns numbers...
Really? Which researcher? When and in what journal?
I see no reason to believe that this is actual research and not just something you made up.
The ludicrousness of the example -- as if a given species first acquired oxygen assimilation, and then lungs, and then fur, and then sight -- is more consistent with your well-established profound ignorance of physical sciences than it is with actual research conducted by people with more than two neurons to rub together.
Typical of quants: focus on the secondary (i.e., the specifics of the example), and ignore the fact that it is an example of the appropriate use of ordinal numbers. This is opposed to the misguided utilization of ordinal numbers by quants in political science.
How on earth can it be an appropriate example if it does not exist?
I strongly suspect that there is either no such research, or that you are vastly misunderstanding what the researcher is doing. A typically Stalinist tactic; if there is no real evidence, just make some up and assert that it is true. The Big Lie in action.
utilization of ordinal numbers
What do you mean by ordinal numbers? Why not just "numbers"?
It is more than a little ridiculous that we are prohibited from critically analyzing specific works. Earlier nothing was said about a particular person's research that should be deemed out of bounds. (I am referring to when someone invoked a specific review of a book. I thought that discussion was interesting.)
I'd rather not bring up names. But I can tell I'll have a field day using it as a prime example of how NOT to conduct social scientific research.
Typical of quants: focus on the secondary (i.e., the specifics of the example), and ignore the fact that it is an example of the appropriate use of ordinal numbers. This is opposed to the misguided utilization of ordinal numbers by quants in political science.
Nice try, but you've been proven wrong time and again. Every day, you hear quallies use ordinal language to describe findings: "corporate interests have a more influential role than citizen activists;" "going to war against Iraq has made the US more vulnerable than before," etc.
If you can't attach a number, then you're saying that these things can't be ordered. In which case, we should eliminate all such language in the social sciences.
If you think these things can be ordered (e.g., influence of one interest versus another), then there's no reason why you can't attach a number to it. A number is just a convention. It can be 1, 5, 10000000. We've been over that, and it's clear you haven't learned anything.
This is simply a basic point. You don't understand how, for instance, ordered probit/logit works. That's the ONLY way you can still cling to your silly argument that "2 is twice as democratic as 1."
If you can't attach a number, then you're saying that these things can't be ordered. In which case, we should eliminate all such language in the social sciences.
While things may be ordered, they cannot necessarily be quantified. Someone may say "I love my partner more today than I did yesterday." But does he love him 1 unit more? That is an absurd supposition.
At least this example has the virtue of being on a continuum. But there are many concepts that quants try to quantify that in fact cannot be ordered, or that do not lie on a continuum, e.g., democracy. To say a society is 1 unit more democratic than another makes little sense. (If the ordering is dependent on one's theory, then by definition it cannot be ordered.)
I lurk occasionally here, but haven't posted in a long time. I felt that 6:54's statement was so incredibly stupid that I just had to indicate my horror that he teaches in a higher institution.
To say a society is 1 unit more democratic than another makes little sense. (If the ordering is dependent on one's theory, then by definition it cannot be ordered.)
Again, if that is your dependent variable, and you use the appropriate method, it doesn't matter.
So, to take your example: if you love your partner more today than yesterday, then you code today as higher than yesterday. If tomorrow is the same as today, then you code today and tomorrow as the same. Etc.
If all you care about is the order (more, less, the same), then the size of the intervals doesn't matter.
It doesn't matter. I'm not arguing semantics here.
I'll tell you what. Let's do an experiment. I'll give you some data. Then you can recode the dependent variable (Y). Then we both run an ordered probit. I'll guarantee you that our results will be the same.
X Y
0 1
0 3
1 2
1 1
2 3
2 2
Now, take all the "3" you see in the Y column, and change them to 532. Run the ordered probit on the original data, and on the transformed data. Then report back to us.
Let's see if, at the very least, you'll try this and see what happens.
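For anyone who wants to check that claim without firing up a stats package, here is a minimal Python sketch of the logic behind the experiment. It does not fit an actual ordered probit; it just verifies that the recoding 3 → 532 preserves every pairwise ordering of Y, which is all the information an ordered probit/logit uses (the estimated cutpoints simply shift):

```python
# Check that an order-preserving recoding of Y (3 -> 532) leaves the
# ordering of the data -- all an ordered probit/logit uses -- unchanged.
# Data are taken from the comment above.
X = [0, 0, 1, 1, 2, 2]
Y = [1, 3, 2, 1, 3, 2]
Y_recoded = [532 if y == 3 else y for y in Y]

def pair_orderings(y):
    """Sign of the comparison (-1, 0, or +1) for every pair (i, j), i < j."""
    n = len(y)
    return [(y[i] > y[j]) - (y[i] < y[j])
            for i in range(n) for j in range(i + 1, n)]

# Identical pairwise orderings => identical fit for any order-based model.
print(pair_orderings(Y) == pair_orderings(Y_recoded))  # True
```

Since the two codings produce identical pairwise orderings, any estimator that depends only on the ordering of Y returns the same results for both.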
I felt that 6:54's statement was so incredibly stupid that I just had to indicate my horror that he teaches in a higher institution.
If I were you, I would be more concerned by the fact that someone in higher education makes baseless and insulting statements.
so to take your example, if you love your partner more today than yesterday, then you code today as higher than yesterday. If tomorrow is the same as today, then you code today and tomorrow as the same. Etc.
If all you care about is the order (more, less, the same), then the size of the intervals doesn't matter.
It doesn't matter. I'm not using semantics, here.
You are so caught up in the math that you do not see the illogic of what you are proposing. Can someone really love someone more and more with each passing day? Probably not. Nonetheless, the problem is we do not have an objective measure of love, and that is the reality that quants do not want to face. Namely, that there is no objective measure of the qualitative concepts that are at the core of political science: e.g., democracy, liberty, freedom, justice, etc. This is what makes much of the quant project futile.
Ok, I'm thoroughly confused. Can you, or can you not, order things like political influence? If you think that the answer is no, that's fine. But then, please explain to me how one can say that corporate interests had a bigger influence on legislation than some other entity.
objective measure of love
What do you mean by "objective"?
In an earlier discussion, you said that the way to get at cognition is simply to ask people directly (or figure it out through speeches, biographies, etc.). Are you saying that asking someone about their emotions cannot get at their emotions?
By the way, what did you mean by "ordinal numbers"?
Namely, that there is no objective measure of the qualitative concepts that are at the core of political science: e.g., democracy
Have you read the literature on the measurement of democracy? I'm just curious whether you've read it, or if you're just stating that such a concept cannot be measured (thus rejecting any literature that purports to measure the concept).
Have you read that literature before coming to that conclusion, or not?
Ok, I'm thoroughly confused. Can you, or can you not, order things like political influence? If you think that the answer is no, that's fine. But then, please explain to me how one can say that corporate interests had a bigger influence on legislation than some other entity.
Again, just because you can order something does not mean one can quantify it.
In an earlier discussion, you said that the way to get at cognition is simply to ask people directly (or figure it out through speeches, biographies, etc.). Are you saying that asking someone about their emotions cannot get at their emotions?
We can gain reliable answers from people on their emotions. It does not mean we can meaningfully quantify their responses.
Have you read the literature on the measurement of democracy? I'm just curious whether you've read it, or if you're just stating that such a concept cannot be measured (thus rejecting any literature that purports to measure the concept).
Have you read that literature before coming to that conclusion, or not?
I do not need to read the literature that seeks to quantify democracy to know that it cannot objectively be done. Just like I do not need to read the Bible to arrive at atheism. Nonetheless, if you want to recommend some specific literature on the subject of quantification of democracy I would be willing to review it.
What do you mean by "objective"?
Objective measurement means that we can measure something independent of any theory. So, for example, without making any claim about the effects of CO2, we can objectively count (i.e., measure) the number of CO2 molecules in the atmosphere.
Again, just because you can order something does not mean one can quantify it.
Never read this blog anymore, but wanted to make a comment:
If you are ordering things, you must be ordering them with respect to something --- presumably "magnitude". It makes no difference what labels (e.g. 'most powerful', 'medium', '2', 'very large' etc) you assign to the categories of objects.
Actually, your concern seems to be about something called measurement 'validity' ("are we really measuring love?"), but that's actually conceptually different from whether measures are on a continuous scale, or whether they are ordered categories or whatever.
Is this the part that you don't understand?
I know it is tricky to grasp at first, and sounds a bit strange, but it is actually fundamental to much social science inquiry, and you can find it in most undergrad text books.
Talk to a professor in your department who does work on surveys or someone who teaches your data analysis classes; your TA will probably be able to point you in the right direction on this too.
I wanna know what love is. I want you to show me.
Again, just because you can order something does not mean one can quantify it.
Actually, if you can order it, you can attach an ordinal value to it, whether it's a number, or a label. So you're wrong on that one.
We can gain reliable answers from people on their emotions. It does not mean we can meaningfully quantify their responses.
What do you mean by "meaningfully quantify"?
I do not need to read the literature that seeks to quantify democracy to know that it cannot objectively be done. Just like I do not need to read the Bible to arrive at atheism. Nonetheless, if you want to recommend some specific literature on the subject of quantification of democracy I would be willing to review it.
Here you go again with your grandstanding rhetoric.
But here is some literature for you. Start with Lijphart's book "Democracies: Patterns of Majoritarian and Consensus Government in Twenty-One Countries".
Then take a look at the Polity website (and the literature associated with it):
http://www.systemicpeace.org/polity/polity4.htm
Here are a couple more articles for you:
-Treier and Jackman, 2008 AJPS
-Elkins, 2000 AJPS
And there's lots more...
Objective measurement means that we can measure something independent of any theory
Well, I think that many would disagree with that statement. What do you think of the psychometrics literature? Do you really think that theory plays no role in measurement? I don't know any measure (quantitative or not) that isn't based on theory. Do you really think that you can measure things absent any theory? That's a revolutionary way of thinking about measurement--do you realize that?
Do you really think that theory plays no role in measurement? I don't know any measure (quantitative or not) that isn't based on theory. Do you really think that you can measure things absent any theory? That's a revolutionary way of thinking about measurement--do you realize that?
It is incredible that I have to defend the rather rudimentary and obvious claim that scientific measurement requires no theory. Do I really have to point out that you can measure the amount of water in a cup without a theory?! Your position challenges the bifurcation of theory and empirical reality – i.e., there is no reality outside of theory.
If you are ordering things, you must be ordering them with respect to something --- presumably "magnitude". It makes no difference what labels (e.g. 'most powerful', 'medium', '2', 'very large' etc) you assign to the categories of objects.
What you seem to be saying is that if "x" is more politically influential than "y", we can assign the number 2 to x and the number 1 to y. If you are acknowledging that the numbers 1 and 2 have no defined relationship, then why use numbers at all? Why not simply state that x is more politically influential than y? In other words, this is not quantifying the relationship between x and y; it is simply substituting numbers for language, which is not analytically helpful.
Just think of a crosstab where income is the column variable and ideology (or whatever) is the row variable. You have 3 categories of income (low, med., high -- arranged from left to right) and 3 categories of ideology (liberal, moderate, conservative -- arranged from top to bottom). We don't have to assign any numbers to anything, and we can study how pairs of observations are ordered in the crosstab (that is, we can see if it's actually the case that lower income people are, on average, more liberal). Without assigning any numbers to anything (except counting cases as they fall in each category) we can assess the strength and direction of the relationship between ideology and income.
It's analytically helpful.
Make sense?
PS -- This is called a gamma coefficient. People use it, undergrads get it. Check it out.
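For illustration, here is a small Python sketch of the gamma calculation on a made-up 3x3 crosstab (the cell counts are invented for the example, not real survey data). Gamma is (C − D) / (C + D), where C and D count concordant and discordant pairs of observations:

```python
# Goodman-Kruskal gamma from a 3x3 crosstab: income (low, med, high)
# as columns, ideology (liberal, moderate, conservative) as rows.
# Cell counts are invented purely for illustration.
table = [
    [30, 20, 10],   # liberal
    [20, 25, 20],   # moderate
    [10, 20, 30],   # conservative
]

def gamma(table):
    rows, cols = len(table), len(table[0])
    concordant = discordant = 0
    for i in range(rows):
        for j in range(cols):
            for k in range(rows):
                for l in range(cols):
                    if k > i and l > j:      # both variables move the same way
                        concordant += table[i][j] * table[k][l]
                    elif k > i and l < j:    # they move in opposite directions
                        discordant += table[i][j] * table[k][l]
    return (concordant - discordant) / (concordant + discordant)

# A positive gamma: higher income goes with more conservative ideology
# in this invented table.
print(round(gamma(table), 3))
```

No numeric codes were ever attached to "liberal" or "high income"; only the ordering of the categories and the cell counts matter.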
Doesn't the poster have a point? You can build operational definition of ideologies on previous work, but it's still as much an art as a science. Work in the "hard" sciences can be just as iffy, and applied research involving so-called hard sciences can be especially slippery. The definition of ideology varies from nation to nation and sometimes region to region or state to state anyway. A liberal in south Georgia is a conservative in New England, etc. To a certain extent, trying to pin these things down will always be at least some part art, and I would think that most political and social science people would admit as much. This is an annoyingly, needlessly polarized debate, and I certainly hope it doesn't reflect much in the way of mainstream PoliSci opinion.
Just think of a crosstab where income is the column variable and ideology (or whatever) is the row variable. You have 3 categories of income (low, med., high -- arranged from left to right) and 3 categories of ideology (liberal, moderate, conservative -- arranged from top to bottom). We don't have to assign any numbers to anything, and we can study how pairs of observations are ordered in the crosstab (that is, we can see if it's actually the case that lower income people are, on average, more liberal). Without assigning any numbers to anything (except counting cases as they fall in each category) we can assess the strength and direction of the relationship between ideology and income.
It's analytically helpful.
Why not just report what percentage of each quintile self-identifies as liberal, moderate, or conservative? It is seemingly more useful and transparent than the procedure you propose.
If you are acknowledging that the numbers 1 and 2 have no defined relationship, then why use numbers at all. Why not simply state that x is more politically influential than y.
Because you might be interested in learning whether Z tends to be associated with greater influence, holding other theoretically relevant factors constant. A simple ordered logit is a convenient and reasonable way to do this.
Why not just report what percentage of each quintile self-identifies as liberal, moderate, or conservative?
(1) Because it might be difficult to determine the effect by eye.
(2) Because just reporting the values doesn't include any statement of uncertainty about the population values in question.
This is hopelessly elementary.
Because you might be interested in learning whether Z tends to be associated with greater influence, holding other theoretically relevant factors constant. A simple ordered logit is a convenient and reasonable way to do this.
You are taking numbers that have no defined relationship with one another and using them to determine the influence of Z (which you have not defined). Again, I would submit that the entire exercise you propose is analytically meaningless. A qualitative description is much more useful.
Because it might be difficult to determine the effect by eye.
I would submit that readers themselves must determine the "significance" of the relationship between income and self-proclaimed ideology. In this way, the statistical technique you posit obfuscates readers’ ability to determine for themselves the relevance of the relationship between income levels and self-proclaimed ideology.
Because just reporting the values doesn't include any statement of uncertainty about the population values in question.
I did not see that addressed in the procedure you initially proposed. Moreover, is not a sampling issue different from the matter we are discussing?
A few things here:
1. Why write a paper at all if you think it's the reader's job to determine the significance of the relationship? Why not just post your dataset on your web page and say that you eyeballed it and it looks like there's some relationship between whichever two variables you're concerned with?
2. There's a test associated with the gamma coefficient that takes into account uncertainty. Basically, using the test, you can tell if your gamma coefficient is significantly different from 0.
3. You can do all of this with a pencil, a piece of paper, and the calculator on your cell phone. This doesn't obfuscate anything if you put 20 minutes into learning it.
Why write a paper at all if you think it's the reader's job to determine the significance of the relationship? Why not just post your dataset on your web page and say that you eyeballed it and it looks like there's some relationship between whichever two variables you're concerned with?
I would not write a paper identifying the relationship between self-proclaimed ideology and income. Instead, I would answer why the relationship exists.
I would answer why the relationship exists.
But first, don't you need to establish that relationship if you're going to write about its causes?
Oh and by the way, your reply to the other poster's suggestion about crosstabs was really telling. He suggested using the gamma statistics and you didn't see how it incorporates uncertainty?!? Why not simply say "I'm unfamiliar with the gamma and crosstabs" rather than making things up?
But first, don't you need to establish that relationship if you're going to write about its causes?
In terms of what we were discussing, it strikes me as more appropriate to let the reader determine the significance (the "strength") of the relationship between self-proclaimed ideology and income levels. In my view, we should only use statistical analysis when necessary -- otherwise, the analysis and data presentation should be as transparent/accessible as possible.
Why not simply say "I'm unfamiliar with the gamma and crosstabs" rather than making things up?
I am not "making things up." The comment appears directed at sampling issues.
it strikes me as more appropriate to let the reader determine the significance (the "strength") of the relationship between self-proclaimed ideology and income levels.
What would you present in this case (for "the reader to determine..." etc)?
I'm asking this in the literal sense of what would appear in the paper: a table? raw counts? a cross-tab? the entire survey?
In terms of what we were discussing, it strikes me as more appropriate to let the reader determine the significance (the "strength") of the relationship between self-proclaimed ideology and income levels. In my view, we should only use statistical analysis when necessary -- otherwise, the analysis and data presentation should be as transparent/accessible as possible.
OK, now you're really being silly. What you're suggesting is "report the numbers and let readers decide for themselves." What we're suggesting is "report the numbers, and a statistic such as the gamma, and let the readers decide for themselves." Now, you're saying that the latter is less transparent/accessible than the former? We've presented everything you would want to present, but we've also added one more number (perhaps with an asterisk), and now all of a sudden everyone is confused? You're killing me!
I am not "making things up." The comment appears directed at sampling issues.
This is what was written:
"(2) Because just reporting the values doesn't include any statement of uncertainty about the population values in question."
I don't know where you see discussion of sampling. Are you suggesting that the poster was saying that we need to measure uncertainty because there might be a problem with this sample? What exactly do you think the sampling issues are in this example?
OK, now you're really being silly. What you're suggesting is "report the numbers and let readers decide for themselves." What we're suggesting is "report the numbers, and a statistic such as the gamma, and let the readers decide for themselves." Now, you're saying that the latter is less transparent/accessible than the former? We've presented everything you would want to present, but we've also added one more number (perhaps with an asterisk), and now all of a sudden everyone is confused? You're killing me!
The question is: how much value is added through the statistical technique? I do not see much. Of course, quantoids will demand it, for no other reason than to show that it can be done.
The question is: how much value is added through the statistical technique? I do not see much.
So the fact that you do not personally understand simple quantitative methods (which, btw, my undergrads can handle) is the reason they should not be used?
Bit weak as arguments go, isn't it?
So the fact that you do not personally understand simple quantitative methods (which, btw, my undergrads can handle) is the reason they should not be used?
Again, the question is how much value is added? I submit little to none.
Because just reporting the values doesn't include any statement of uncertainty about the population values in question.
I assume what is meant here is that each income quintile may not be adequately represented in the sample.
So the fact that you do not personally understand simple quantitative methods (which, btw, my undergrads can handle) is the reason they should not be used?
I think this post conveys the near obsession that many quants have with statistical devices. An obsession that usually comes at the expense of substantive analysis.
The question is: how much value is added through the statistical technique? I do not see much. Of course, quantoids will demand it, for no other reason than to show that it can be done.
No, there's lots of value added in knowing whether the differences you find across quintiles allow you to make statements about the population of interest. Well, ok, they always allow you to make some statement--the question is, which?
Let me give you an example. If I told you that 52% of women support Obama while 52% of men support McCain, can you tell me whether a "gender gap" exists or not? Let's say that there were 500 men and 500 women in the sample. Is there a "gap" in the population of interest (e.g., likely voters)? The problem if you just leave it as that, is that some people might say "yep, there's a gap" while others will say "nope, no gap there." Who's right, who's wrong? can they both be right?
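To make the example concrete, here is a hedged sketch of the standard two-proportion z-test applied to those hypothetical numbers (assuming simple random samples and treating "support Obama" as the outcome for both groups):

```python
import math

# Two-proportion z-test for the hypothetical gender gap above:
# 52% of 500 women support Obama vs 48% of 500 men (52% McCain).
p1, n1 = 0.52, 500   # women supporting Obama
p2, n2 = 0.48, 500   # men supporting Obama
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
print(round(z, 2), round(p_value, 3))
```

With a p-value around 0.2, the 4-point sample gap is quite compatible with no gap at all in the population of likely voters, which is precisely the kind of statement that eyeballing the two percentages cannot deliver.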
I assume what is meant here is that each income quintile may not be adequately represented in the sample.
No, you're completely off base here. Why do you keep making claims that are simply off the mark?
Let me give you an example. If I told you that 52% of women support Obama while 52% of men support McCain, can you tell me whether a "gender gap" exists or not? Let's say that there were 500 men and 500 women in the sample. Is there a "gap" in the population of interest (e.g., likely voters)? The problem if you just leave it as that, is that some people might say "yep, there's a gap" while others will say "nope, no gap there." Who's right, who's wrong? can they both be right?
Certainly, there is a margin of error in these findings. So we would need to see more samples to determine whether the 4 percent difference between genders is actual or just indicative of this particular sampling of public opinion.
No, you're completely off base here. Why do you keep making claims that are simply off the mark?
Instead of playing gotcha, please tell me what was meant earlier.
Instead of playing gotcha, please tell me what was meant earlier.
I'm not playing gotcha, I'm just surprised that you (or anyone) would think that this was the issue with respect to the crosstab.
Look, the problem is not that I don't want to explain it to you, it's that it starts with a basic knowledge of stats that you either don't have or are not willing to use.
It's pretty simple, really. You have a sample. In that sample, you have a distribution on a variable (X). You also have a distribution on another variable (Y). Then you have a joint distribution on X and Y. The question is: does that joint distribution differ from what you would expect if there was no relationship between X and Y?
Now, because we have a sample, it's not the case that ANY difference means that there is a difference in the population. The goal of statistical analysis is to be able to attach probabilities to statements such as "there is a gender gap."
Let's approach this question with another example. Let's say that I want to figure out whether a coin is fair or not. I toss it twice, and it lands on tails each time. What you're suggesting is "tell the reader that it landed on tails each time, and let the reader decide whether, based on that outcome, the coin is fair."
What we suggest is "tell the reader that the coin landed on tails each time, but also tell the reader that if the coin were fair, then 25% of the time tossing it twice would result in an outcome of tails-tails. Let the reader decide whether that's enough to conclude that the coin is fair or not."
See, we both included the same information (i.e., it landed on tails twice), but I've added the additional information that there is a 25% chance that a fair coin will land on tails twice. Why do you say that my statement obfuscates while yours 'lets the reader decide'?
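The 25% figure is easy to verify by brute-force enumeration; a minimal sketch:

```python
from itertools import product

# Enumerate all outcomes of two tosses of a fair coin and check the
# probability of tails-tails claimed above.
outcomes = list(product(["H", "T"], repeat=2))   # HH, HT, TH, TT
p_tt = sum(1 for o in outcomes if o == ("T", "T")) / len(outcomes)
print(p_tt)  # 0.25
```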
Certainly, there is a margin of error in these findings. So we would need to see more samples to determine whether the 4 percent difference between genders is actual or just indicative of this particular sampling of public opinion.
That's one way to go about it. But don't you see that you don't need to see more samples to determine anything? Of course the 4 percent difference is indicative of this particular sample.
But this is true also if the sample produced a 20-point difference. Would you treat such a big difference as definitive, or would you suggest taking another sample? In other words, would you do anything differently after you found out that you had a 4-point gap as opposed to a 20-point gap? If so, what would you do differently and why?
It's pretty simple, really. You have a sample. In that sample, you have a distribution on a variable (X). You also have a distribution on another variable (Y). Then you have a joint distribution on X and Y. The question is: does that joint distribution differ from what you would expect if there was no relationship between X and Y?
Now, because we have a sample, it's not the case that ANY difference means that there is a difference in the population. The goal of statistical analysis is to be able to attach probabilities to statements such as "there is a gender gap."
Thank you very much for the explanation. I greatly appreciate it.
Nonetheless, I will note that this procedure in my view does not inspire more confidence in the "gender gap" you hypothesize. Only more samples over time will convincingly document the existence/persistence of the gap.
Nonetheless, I will note that this procedure in my view does not inspire more confidence in the "gender gap" you hypothesize. Only more samples over time will convincingly document the existence/persistence of the gap.
Or, translated from qually-speak, "I don't understand this method so I am going to continue to make unsubstantiated and unscientific claims."
Or, translated from qually-speak, "I don't understand this method so I am going to continue to make unsubstantiated and unscientific claims."
Quant speak: "I like unnecessarily complex statistical techniques, because while they do not prove anything they make me look smart (i.e., like a physical scientist) and allow me to arrogantly insult the intelligence of others."
Nonetheless, I will note that this procedure in my view does not inspire more confidence in the "gender gap" you hypothesize. Only more samples over time will convincingly document the existence/persistence of the gap.
OK, I guess we have a disagreement there. But why do you think that more samples over time is the answer? Again, what if you found that there was a 20-point gap? Would you be satisfied, or would you ask for more samples? What if the sample had 1000 men and 1000 women and showed a 4-point gap? Would that satisfy you? why or why not?
But why do you think that more samples over time is the answer?
Basic sampling logic tells you that 1 out of every 20 samples will incorrectly sample the universe (i.e., population) being studied. Thus, additional samples can inspire further confidence that a "gender gap" of 4 percent actually exists. Moreover, if the gap persists over time this would suggest that the gap is not solely the result of a temporary fluctuation.
allow me to arrogantly insult the intelligence of others
ILP, insulting your intelligence doesn't require arrogance.
Only more samples over time will convincingly document the existence/persistence of the gap.
Aaaaand scene.
The ILP has just insisted on the collection of a series of large quantitative datasets.
The ILP has just insisted on the collection of a series of large quantitative datasets.
Sampling is not a quantitative analytical method.
Basic sampling logic tells you that 1 out of every 20 samples will incorrectly sample the universe (i.e., population) being studied.
Well, first, that's just not true. It's only true if you construct 95% confidence intervals (which isn't a given, it's a modeling decision), and it's not the case that the universe is being improperly sampled (whatever that means). Rather, we know that 95% of such confidence intervals will include the true population parameter.
So we don't know whether 4% is the "true" gap, but we know that 95% of the time, our 95% confidence intervals will include the true parameter (in this case, gender gap).
But again, the ONLY reason why we're even having this discussion is that we both recognize that uncertainty is inherent in social science. The question is: do we want to model that uncertainty and take it into account, or disregard it? By constructing 95% confidence intervals around our statistic, we allow for uncertainty in our inferences about the population parameter. This is inherent in quant work; qual work doesn't take that into account.
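For the skeptical reader, the coverage claim can be checked by simulation. This sketch uses an invented true proportion of 0.52 and the standard Wald interval (a modeling choice, as noted above):

```python
import math
import random

# Simulate the coverage claim: roughly 95% of 95% confidence intervals
# for a sample proportion contain the true population value.
# The true proportion (0.52) is invented purely for illustration.
random.seed(0)
true_p, n, trials = 0.52, 500, 2000
covered = 0
for _ in range(trials):
    sample_p = sum(random.random() < true_p for _ in range(n)) / n
    se = math.sqrt(sample_p * (1 - sample_p) / n)
    lo, hi = sample_p - 1.96 * se, sample_p + 1.96 * se
    covered += lo <= true_p <= hi
print(covered / trials)  # close to 0.95
```

Across repeated samples, roughly 95% of the intervals contain the true value, which is exactly what the confidence-interval guarantee says (and why "1 out of 20 samples is wrong" is not quite the right reading).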
But again, the ONLY reason why we're even having this discussion is that we both recognize that uncertainty is inherent in social science.
In actuality, the discussion has virtually nothing to do with uncertainty. The issue is what is the utility of quant analytical methods?
So I see you're skirting the issue at hand. But at least you've recognized that uncertainty is inherent and that quant methods can address that uncertainty (since you spoke of 1 out of 20 samples, you implicitly recognize the importance of uncertainty). So that's good, we're in agreement yet again.
Sampling is not a quantitative analytical method.
I like when the qually makes pronouncements that are totally divorced from reality.
So I see you're skirting the issue at hand. But at least you've recognized that uncertainty is inherent and that quant methods can address that uncertainty (since you spoke of 1 out of 20 samples, you implicitly recognize the importance of uncertainty). So that's good, we're in agreement yet again.
I like when the qually makes pronouncements that are totally divorced from reality.
Is it your position that pollsters are political scientists? I am sure that would be a shock to them!
"Is it your position that pollsters are political scientists? I am sure that would be a shock to them!"
Absolutely no one has argued that utilizing quantitative analytical methods means you are a political scientist. Quite on the contrary, quant methods are used by a wide range of academic disciplines.
Honestly, do you really think everyone who uses quantitative methods is a political scientist? Please tell me you are not that ignorant.
Absolutely no one has argued that utilizing quantitative analytical methods means you are a political scientist.
If your position is that polling is a quantitative analytical method, then, yes, pollsters who gauge the public's political opinions are political scientists.
Pollsters, and polling, are not normally taken up with the issue of why public opinion is what it is, but only with documenting what that opinion is. Of course, science is predominantly about the causal factors (the why) underlying phenomena.
First time poster here.
Interesting debate, but 10:50's repeated references to "quallys" remind me of Fox News' pejorative use of the word "liberal". Perhaps we should make similar assumptions about the veracity of his/her claims as we do when Fox claims "We report, you decide!"
Also, I enjoy the bracketed "(when done correctly)" in his/her posts on the superiority of quant. methods. Entirely correct to include it of course, and also worth noting that it entirely undermines the ideal type he/she is fatuously presenting as an accurate representation of the mean standard of work being done in pol. sci. in this tradition. Perhaps he/she needs to amend this to "(when done correctly, and when there is data available, or able to be constructed, that makes the research question amenable to the application of econometric methods)".
Not addressing formal methods in this post of course, which are a whole different ballgame.
"We report, you decide!"
Actually, this has been the motto of the qually on this board. He argues that qual methods are superior because they lay out the evidence and "let the reader decide."
Fox News indeed...
Putting brackets around "when done correctly" makes perfect sense. I'm not prepared to defend bad research, qualitative or quantitative. Quantitative scholars may be unclear and deceptive just like qualitative researchers are.
It would be ludicrous to claim that all quantitative research is perfect. Since it is not, I make the proper caveat.
RE: 7:11
I am not 9:35, but I believe what he is arguing is that the situations when quant analytical methods can be appropriately employed are relatively limited. Hence, he wrote:
Perhaps he/she needs to amend this to "(. . . when there is data available, or able to be constructed, that makes the research question amenable to the application of econometric methods)".
What I would add is that because there is an unwarranted emphasis on the application of quant analytical methods in the discipline those instances when these methods can be applied receive excessive attention. In other words, in too many cases the discipline is being driven by the applicability of methods, and not by the importance or appropriateness of research questions.
Is this a debate, or an episode of "methods, she wrote?" HAHAHAHA.
Seriously, though, why do we seek resolution to the question of which method (qual, quant, rhythm, crystal, etc.) *is* superior?
Haven't we learned from the voluminous literature on voting systems that no aggregation method is a priori superior to all others?
Perhaps I'm muddled, but the analogy between the two debates certainly appears more than superficial.
For example, phrases such as "when done correctly" and "for appropriate topics" can be directly related to domain restrictions in social choice theory.
I will submit that we are having this debate because quants have repeatedly shown their ignorance of basic philosophy of science issues.
I will submit that we are having this debate because quants have repeatedly shown their ignorance of basic philosophy of science issues.
If you had shown yourself to be even passingly familiar with the entire philosophy of causal inference from J.S. Mill on, people would listen to this claim. As it is...
If you had shown yourself to be even passingly familiar with the entire philosophy of causal inference from J.S. Mill on
I find this to be a fascinating claim, since not one quant has even made a passing argument with regard to theories of causality. (All they can seemingly talk about is quant methods, but little having to do with actual science, and certainly nothing to do with political philosophy.) Indeed, one quant even held that quant methods could be credibly used to analyze the relationship between the number of clouds that look like Hillary Clinton, and her electoral performance total in the primaries.
re: 7:11
I am not the original "qually" poster, as a "quanty" poster so endearingly, and pejoratively, puts it, and will not stand behind all the arguments he/she is making. (Not because I have read them and found them to be poorly constructed, but because there are 134 comments + 600 odd on the thread.)
My points were simply that: 1) the poster is indeed to be lauded for including an important, albeit vague, caveat about the utility of econometrics in political science research (on this we apparently agree); 2) there are others; 3) these important caveats are not being used as the basis on which the argument is being prosecuted, other than in passing.
Arguing from the premise that a perfectly executed research design using statistical methods, with perfect data, is inherently superior to a qual. research design is fatuous. The correct comparison would be an ideal type quant. study, against a qual. research design in which all primary documents, decisionmakers etc. were available for use. For many, many questions (including about democracy), neither exists.
So how about comparing a relatively poor data set about a concept that is contested as the standard for assessing the relative utility of quant./qual. methods? No-one (in their right mind) is going to say quant. research is a waste of time; far from it. But with significant problems related to measurement error, concepts, as well as technical constraints, being pervasive, statistical analysis can surely also be characterized as "story-telling." If the claim is that it is more *precise* story-telling, well, best to judge that on a case-by-case basis, but story-telling nevertheless.
After all, building models that do a good job of representing the real world is, we are told again and again, an art as much as a science.
one quant even held that quant methods could be credibly used to analyze the relationship between the number of clouds that look like Hillary Clinton, and her electoral performance total in the primaries.
Why couldn't they? Could a qualitative study be used?
So how about comparing a relatively poor data set about a concept that is contested as the standard for assessing the relative utility of quant./qual. methods?
I absolutely agree that poor data and measurement problems are very common. But that, of itself, cannot be an indictment of the method.
Get better data, get better measures (yes, I know this is not easy), explain the shortcomings of both to the reader, but the (de)merits of the method remain.
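The measurement-error point cuts both ways, and it can be illustrated. Here is a minimal simulation sketch (Python; the coefficients and error magnitudes are invented) of how noise in a measured predictor attenuates an estimated slope toward zero, which is one concrete way poor data degrades quant findings even when the method is applied correctly:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# True relationship: y = 1.0 * x + noise.
x = rng.normal(size=n)
y = x + rng.normal(0, 0.5, size=n)

# Observe x with growing measurement error: the fitted slope shrinks
# toward zero by roughly var(x) / (var(x) + var(error)).
slopes = [
    np.polyfit(x + rng.normal(0, err_sd, size=n), y, 1)[0]
    for err_sd in (0.0, 0.5, 1.0)
]
print([round(s, 2) for s in slopes])   # roughly [1.0, 0.8, 0.5]
```

Better measures shrink the error term, which is exactly the "get better data, get better measures" prescription above.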
one quant even held that quant methods could be credibly used to analyze the relationship between the number of clouds that look like Hillary Clinton, and her electoral performance total in the primaries.
Why couldn't they? Could a qualitative study be used?
No, qual analysis could not be used to "analyze" the relationship between certain clouds and Hillary Clinton's electoral performance in the primaries. Qual analysis is about studying human cognition as it relates to politics. Since there is no theory that clouds affect the cognition of primary voters, quals would not (could not) pursue such a study.
Precisely because quant analytical methods have no overriding subject of study (i.e., the laws of social/political phenomena), nor any theory of causation, quant scholarship lends itself to the study of the relationship of clouds to electoral behavior. Thus, if a statistical correlation was found between clouds and electoral behavior there is nothing in the theory of quant analytical methods that would lead one to a priori reject such findings.
Thus, if a statistical correlation was found between clouds and electoral behavior there is nothing in the theory of quant analytical methods that would lead one to a priori reject such findings.
Says who? A silly and unhelpful strawman.
There seems to be a lot of confusion on this board: "quantitative methods" is not synonymous with
"atheoretical data-mining". Perhaps you dislike the latter, but at least try and produce a coherent argument against the former: if I have a theory, why not test it with quant methods?
BTW, if the two terms were synonymous, medical researchers would suggest short people cut their hair to gain height (since there is a negative correlation between height and hair-length)
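The hair/height example is worth sketching. Here is a minimal simulation (Python; the binary "group" confounder and all magnitudes are invented for illustration) of how a common cause manufactures a pooled correlation that vanishes once the confounder is held fixed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder: a binary group variable (think: sex).
group = rng.integers(0, 2, size=n)

# The group drives BOTH height and hair length; neither causes the other.
height = 165 + 12 * group + rng.normal(0, 6, size=n)   # cm
hair = 30 - 20 * group + rng.normal(0, 5, size=n)      # cm

# Pooled correlation is strongly negative...
r_pooled = np.corrcoef(hair, height)[0, 1]

# ...but within each group the "relationship" vanishes.
r_within = [np.corrcoef(hair[group == g], height[group == g])[0, 1] for g in (0, 1)]

print(f"pooled r = {r_pooled:.2f}, within-group r = {[round(v, 2) for v in r_within]}")
```

The correlation is real; it is the causal reading of it that theory (here, knowing about the confounder) rules out.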
There seems to be a lot of confusion on this board: "quantitative methods" is not synonymous with
"atheoretical data-mining". Perhaps you dislike the latter, but at least try and produce a coherent argument against the former: if I have a theory, why not test it with quant methods?
In the first instance, it is worth noting that one of your quant cohorts asserted that a cloud/HRC study is valid (see 9:31 a.m.). What he is correctly pointing to is the fact that nothing in the theory of quant analytical methods precludes findings from such a study. You may not like it, but it is the case.
BTW, if the two terms were synonymous, medical researchers would suggest short people cut their hair to gain height (since there is a negative correlation between height and hair-length)
What you are missing is that medical scientists' understanding of the laws of physics tells them that hair length has no relationship to human height. So such a study would be (could be) legitimately rejected on its face.
In the first instance, it is worth noting that one of your quant cohorts asserted that a cloud/HRC study is valid (see 9:31 a.m.). What he is correctly pointing to is the fact that nothing in the theory of quant analytical methods precludes findings from such a study.
Very sloppy response.
What does it mean to say "precludes findings"?
If you mean that we can find variables to be correlated and that correlation between two things does not necessarily imply a causal relationship then this is
(a) well known (it is taught to high school kids)
(b) of no bearing on whether or not quantitative methods are appropriate for testing a theory.
medical scientists' understanding of the laws of physics tells them that hair length has no relationship to human height.
Quite right: there is no theory that links the two variables.
So we need theory: atheoretical data-mining could be very misleading.
See 1.11pm which made that point.
Other than pointing out something that high school kids know, what are you saying of value?
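The "atheoretical data-mining could be very misleading" point can be made concrete. Here is a minimal sketch (Python with numpy/scipy; all data are simulated noise, and the "precincts" framing is invented) of why testing many theory-free predictors is guaranteed to turn up some "significant" correlations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_units, n_noise = 200, 200

# Outcome: a candidate's simulated vote share in 200 precincts (pure noise).
vote_share = rng.normal(0.5, 0.1, size=n_units)

# 200 predictors of pure noise (stand-ins for "Clinton-shaped clouds", etc.).
false_positives = 0
for _ in range(n_noise):
    r, p = stats.pearsonr(rng.normal(size=n_units), vote_share)
    false_positives += p < 0.05

print(f"{false_positives} of {n_noise} pure-noise variables are 'significant' at p < .05")
```

At the 5% level, roughly 5% of pure-noise predictors will clear the bar by construction; theory is what tells the researcher which correlations were worth testing in the first place.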
What does it mean to say "precludes findings"?
If you mean that we can find variables to be correlated and that correlation between two things does not necessarily imply a causal relationship then this is
What you have to ask yourself is why would a quant political scientist suggest/support a HRC/cloud study? That is because nothing in quant analytical methods theory precludes it. In contrast, the theory underlying qual methods would preclude such a study, as would the theory upon which quant methods in physics are predicated.
"quantitative methods" is not synonymous with "atheoretical data-mining".
Maybe not, but this is exactly how many of your quant colleagues conduct so-called research.
What you have to ask yourself is why would a quant political scientist suggest/support a HRC/cloud study? That is because nothing in quant analytical methods theory precludes it.
Who said it would be "supported" as a worthy academic endeavor? (No one)
Please help me out: do you mean one could come up with a theory that relates clouds to political support?
Surely, one could (and surely it would be pretty silly). This has *nothing* to do with whether one *tests* this theory using qual or quant methods.
There is *no* "theory" inherent in quantitative or qualitative methods. They are *methods*. They exist (and have their merits) independent of the theories you wish to test.
You could have a Marxist theory, a rational choice theory, a Clouds theory, whatever: there is nothing about the method that makes the theory a "good one".
You understand this, yes?
Surely, one could (and surely it would be pretty silly). This has *nothing* to do with whether one *tests* this theory using qual or quant methods.
There is *no* "theory" inherent in quantitative or qualitative methods. They are *methods*. They exist (and have their merits) independent of the theories you wish to test.
You are way, way off!! All scientific empirical methods are predicated on theories of epistemology -- i.e., theories of knowledge. This is a prime problem of quant analytical methods: they are not based on theories of knowledge. Thus, other than common sense, it is not clear why a quant analytical scientist would reject a strong statistical correlation between clouds and HRC's primary vote.
There is *no* "theory" inherent in quantitative or qualitative methods. They are *methods*. They exist (and have their merits) independent of the theories you wish to test.
You are way, way off!! All scientific empirical methods are predicated on theories of epistemology -- i.e., theories of knowledge.
This exchange demonstrates the validity of an earlier post:
I will submit that we are having this debate because quants have repeatedly shown their ignorance of basic philosophy of science issues.
6/22/2008 9:57 PM
Thus, other than common sense, it is not clear why a quant analytical scientist would reject a strong statistical correlation between clouds and HRC's primary vote.
That's exactly why--though I wouldn't call it 'common sense' I would call it 'lack of a theory'.
You're so ludicrous. You brought up the clouds theory. Someone said 'nothing precludes a statistical analysis' (very true). Someone should also point out that nothing precludes a qualitative analysis (how about in-depth interviews with 5 voters asking them whether cloud patterns affected their vote choice)?
Neither method precludes an analysis. What everyone on this board agrees with is that on its face the relationship (if there is one) would be spurious. Why? Why do both quants (e.g., me) and quals (e.g., you) think that such a relationship is ludicrous? It can't be the method, since it varies and we both agree. Hmmmm... the... theo... theory?
This exchange demonstrates the validity of an earlier post:
I will submit that we are having this debate because quants have repeatedly shown their ignorance of basic philosophy of science issues.
No, it does nothing more than demonstrate that you constantly conflate the *theory* of a political phenomenon that is being analyzed with the *theory* that underlies the method of analysis.
They are distinct things.
Why? Why do both quants (e.g., me) and quals (e.g., you) think that such a relationship is ludicrous? It can't be the method, since it varies and we both agree. Hmmmm... the... theo... theory?
If a quant analytical researcher found a positive statistical relationship between clouds and HRC's electoral performance, on what basis would quants reject that finding? Nothing in quant methods or epistemological theory would lead quants to reject such a finding.
If you reject it, it is based on your intuitive sense. Of course, the reality is that this is how much quant work is conducted -- intuitive sense (not methods) determines what are valid findings.
Quals would reject such a HRC/clouds hypothesis because their methods would lead them to reject it. People are not going to say that clouds influence their voting behavior.
If a quant analytical researcher found a positive statistical relationship between clouds and HRC's electoral performance, on what basis would quants reject that finding? Nothing in quant methods or epistemological theory would lead quants to reject such a finding.
You seem very confused about some key terminology issues.
Suppose we consider those medical researchers we mentioned earlier: the ones who notice a correlation between hair length and height. That is a finding obtained from quantitative methods. It isn't very helpful, and is likely to be spurious.
Now, the *finding* (which not even you dispute) comes via the method, but that is entirely separate to any notion of causation and the theory that accompanies it.
As you rightly note, no one believes short hair causes "tallness" because there is no plausible/meaningful theory that connects the two (from the "laws of physics" or whatever).
There is no "theory" inherent in the *test* for correlation (that is, the quant method) between hair length and tallness. These are *separate* things.
Quals would reject such a HRC/clouds hypothesis because their methods would lead them to reject it.
No, their theories of human behavior would lead them to reject it. Methods are methods, theories are theories.
Some qual researchers are interested in testing Marxist theories, some in testing "Elite" theories, some in testing rational choice theories etc etc.
I'm very surprised you are not aware of this: the entire body of qual research does not conform to some uniform theory of human behavior.
Suppose we consider those medical researchers we mentioned earlier: the ones who notice a correlation between hair length and height. That is a finding obtained from quantitative methods. It isn't very helpful, and is likely to be spurious.
Now, the *finding* (which not even you dispute) comes via the method, but that is entirely separate to any notion of causation and the theory that accompanies it.
As you rightly note, no one believes short hair causes "tallness" because there is no plausible/meaningful theory that connects the two (from the "laws of physics" or whatever).
I do not intend any disrespect, but you are approaching this matter with the superficiality inherent in quant analytical methods in political science. Medical researchers would reject the hair/height hypothesis because they already know enough about human growth and hair length to know that the two are unrelated. Thus, their studies of the laws of physics and the human body would lead to the rejection of the hair/height hypothesis.
Quals would reject such a HRC/clouds hypothesis because their methods would lead them to reject it.
No, their theories of human behavior would lead them to reject it. Methods are methods, theories are theories.
It is true that we already know enough about human cognition to conclude that, at least currently, voting behavior is not affected by clouds. (Perhaps in the future humans will take their voting cues from clouds.) But you are wrong to suggest that qual methods would not lead us to reject the HRC/clouds hypothesis. Because once we examine human cognition (à la qual methods), people will tell us their voting behavior is not influenced by clouds. This is in sharp contrast to quant analytical methods, which could find a correlation between clouds and HRC's electoral performance.
You seem to have missed 6/24/2008 5:38 AM.
The theory that under girds a method of analysis is a separate thing from the theory of politics that is being tested. You keep talking about them as if they are the same thing.
It further reveals your ignorance of what you are critiquing.
For example, the *theory* and mathematics that underlie linear regression would not prevent someone from studying clouds and HRC poll numbers. Of course, to get "good" results from the regression, linear regression requires that you have some underlying *theory* of the phenomena in order to build your specification.
You have no idea what the difference is I think because you refuse to try to educate yourself.
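The specification point can be sketched. Here is a minimal simulation (Python; the variables, coefficients, and the "clouds" framing are invented for illustration) of how theory-guided inclusion of the true cause deflates a merely correlated regressor:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000

# True model: y is caused by x1 alone; x2 merely co-varies with x1
# (x2 plays the role of the "clouds" variable).
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(0, 0.6, size=n)
y = 2.0 * x1 + rng.normal(size=n)

def ols(y, *cols):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones_like(y), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

slope_misspecified = ols(y, x2)[1]      # y regressed on x2 alone
slope_with_cause = ols(y, x2, x1)[1]    # theory says: include the true cause x1

print(f"x2 alone: {slope_misspecified:.2f}; with x1 included: {slope_with_cause:.2f}")
```

The regression machinery happily fits either specification; only the researcher's theory of the phenomenon distinguishes the good one from the bad one.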
Medical researchers would reject the hair/height hypothesis because they already know enough about human growth and hair length to know that the two are unrelated.
So, just to be clear, your response amounts to: "methods and theories are the same, because no one is actively researching the link between hair and height". Is that right?
The example was hypothetical (much like the clouds example) -- you understand this, yes?
The point is that METHODS are different to THEORIES. Again, could someone interested in Marxist theories use a case-study approach? Could someone studying rational choice theories use a case-study approach? Yes and yes. You agree with this, so what the hell are you arguing?
à la qual methods, people will tell us their voting behavior is not influenced by clouds.
To move away from this example, they also claim they are not racist or homophobic.
Do you take that at face value too?
For example, the *theory* and mathematics that underlie linear regression would not prevent someone from studying clouds and HRC poll numbers. Of course, to get "good" results from the regression, linear regression requires that you have some underlying *theory* of the phenomena in order to build your specification.
The theory is that clouds that look like HRC have a positive effect on her electoral performance. When I controlled for all other relevant variables I found I could reject the null hypothesis. There is nothing in quant theory, methods, or knowledge that would lead you to refute this theory and findings.
So, just to be clear, your response amounts to: "methods and theories are the same, because no one is actively researching the link between hair and height".
I do not know what you are saying here.
The point is that METHODS are different to THEORIES.
You are absolutely incorrect. Methods are predicated on theories of knowledge. If you theorize that knowledge is derived from divine sources, then your methods are going to be very different than if you theorize that knowledge is derived from understanding the laws of physics.
To move away from this example, they also claim they are not racist or homophobic.
You are incorrect. People will tell you whether or not they are racist or homophobic. Not in so many words, but they will indicate it through other means: e.g., "would you have a 'best friend' who is gay/lesbian?" "Should gay/lesbian couples be able to adopt children?"
This should be fun. What does,
"When I controlled for all other relevant variables" mean?
How did the researcher pick these variables? And why?
Your answer will maybe help us explain to you what you are conflating.
People will tell you whether or not they are racist or homophobic. Not in so many words, but they will indicate it through other means:
And what are you? A blade runner?
So the point is they *won't* tell you. And you admit it. Fair enough.
This should be fun. What does,
"When I controlled for all other relevant variables" mean?
How did the researcher pick these variables? And why?
Your answer will maybe help us explain to you what you are conflating.
This is not pertinent, so I do not know why you are posing these questions, except to avoid acknowledging that I am correct. In any event, I would control for income, gender, ethnicity, ideology, religious identification, etc.
So the point is they *won't* tell you. And you admit it. Fair enough.
People will not label themselves with charged (i.e., insulting) words. Thus, you would not ask someone if they are "racist" or "homophobic."
Moreover, you can skew any interview question to gain biased results. So would you ask "do you plan on voting for the fascist Republican?" Alternatively, would you pose "do you intend on voting for the communist Democrat"?
The point is that, with appropriately worded questions, people will tell you a great deal about themselves and their attitudes/beliefs. To suggest otherwise is to make the scientific study of human behavior an impossibility.
Methods are predicated on theories of knowledge. If you theorize that knowledge is derived from divine sources, then your methods are going to be very different than if you theorize that knowledge is derived from understanding the laws of physics.
This begs the question: what theory of knowledge are quant analytical methods based on?
"How did the researcher pick these variables? And why?
Your answer will maybe help us explain to you what you are conflating."
This is not pertinent, so I do not know why you are posing these questions, except to avoid acknowledging that I am correct. In any event, I would control for income, gender, ethnicity, ideology, religious identification, etc.
I did not ask you *what* you would control for; I asked you *how* the researcher picked them and why... what were the criteria for selecting them? This is a general question and one not specifically related to this example. Further, it is very pertinent to helping you understand that you are conflating two different things here. Even further, it will illustrate another gaping hole in your understanding that renders your criticisms impotent.
I did not ask you *what* you would control for; I asked you *how* the researcher picked them and why... what were the criteria for selecting them? This is a general question and one not specifically related to this example. Further, it is very pertinent to helping you understand that you are conflating two different things here. Even further, it will illustrate another gaping hole in your understanding that renders your criticisms impotent.
I do not understand what you are aiming for here. In any event, you are evading the central question before you: what theory of knowledge are quant analytical methods based on?
what theory of knowledge are quant analytical methods based on?
C'mon, help me out: what theory of knowledge are qual techniques based on?
what theory of knowledge are qual techniques based on?
Quals hold that political/social phenomena are determined by human cognition. Thus, the quals epistemological theory (or conception of knowledge) is geared toward understanding/analyzing human cognition.
Thus, the quals epistemological theory (or conception of knowledge) is geared toward understanding/analyzing human cognition.
I understand that quals are interested in cognition, and want to focus on it in their methods, but what's the theory of knowledge behind all this?
That's what you've been asking for, so I want to know what you think it is for qual work.
Is it "externalism"? "reliabilism"? "skepticism"? what?
BTW, if you don't know what a "theory of knowledge" actually is, please stop acting like you do.
Cognition isn't a theory of knowledge. Try again.
What if you conducted in-depth interviews and 2/3 of your subjects told you that cloud patterns were a factor in their voting decision? Don't say 'that would never happen.' What if it did? How would qual methods inherently allow you to discard that relationship? Again, you've interviewed people in-depth and that's what they told you. Is there something inherent about qual methods that allows you to discard such a result?
My guess is the answer is 'there's no theory behind this result so that's why I discard it.' I would agree with that. Of course, the same rationale can be applied with quant methods.
Not all 'quals' care about cognition as the focus of our "theory." We sure don't.
Signed,
Barrington Moore
Theda Skocpol
Gregory Luebbert
[and many others...]
I do [not] understand what you are aiming for here. In any event, you are evading the central question before you: what theory of knowledge are quant analytical methods based on?
I am asking you how a researcher on this hypothetical project decided what "controls" to include. I am asking in general.
The answer lies in both the theory behind the analysis and the theory of politics being tested.
What if conducted in-depth interviews and 2/3 of your subjects told you that cloud patterns were a factor in their voting decision? Don't say 'that would never happen.'
No. I wrote earlier: It is true that we already know enough about human cognition to conclude that, at least currently, voting behavior is not affected by clouds. (Perhaps in the future humans will take their voting cues from clouds.)
Not all 'quals' care about cognition as the focus of our "theory." We sure don't.
Signed,
Barrington Moore
Theda Skocpol
Gregory Luebbert
I have never heard the name Gregory Luebbert, but I strongly suggest you reread, or perhaps actually read, Moore and Skocpol. Skocpol (in state autonomy theory) explicitly holds that there is a cognition of the state that sets the thinking and actions of public officials apart from other political actors.
I understand that quals are interested in cognition, and want to focus on it in their methods, but what's the theory of knowledge behind all this?
That's what you've been asking for, so I want to know what you think it is for qual work.
Is it "externalism"? "reliabilism"? "skepticism"? what?
I do not know if we need to explicitly label the theory of knowledge that qual methods are predicated upon. Beer called the study of the laws of physics "scientism." I imagine the qual epistemology could be referred to as "subjectivism." This is because quals are seeking to analyze/understand the subjective state of mind of political actors.
Cognition isn't a theory of knowledge. Try again.
I do not understand your point here.
Honestly though, sometimes when I run really computationally intensive models, I hear this really soft whispering... usually something about clouds, presidential candidates, or human cognition. At first I refused to believe that R could actually talk to me about my theories, but after reading this discussion I think I've changed my mind. This must be what ILP is talking about when he says that the methods guide the theory.
Right?
Please address the question: what theory of knowledge are quant analytical methods based on?
The central limit theorem for most parametric estimations.
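For what it's worth, the CLT quip is easy to demonstrate. Here is a minimal sketch (Python; the exponential parent distribution and sample sizes are arbitrary choices for illustration) of sample means from a heavily skewed distribution behaving approximately normally:

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw 10,000 samples of size 50 from a heavily skewed distribution
# (exponential with mean 1), and look at the sample means.
means = rng.exponential(1.0, size=(10_000, 50)).mean(axis=1)

# CLT: the means concentrate around 1 with spread ~ 1/sqrt(50) ≈ 0.141,
# and are close to normally distributed despite the skewed parent.
within_2sd = np.mean(np.abs(means - 1) < 2 / np.sqrt(50))
print(f"mean = {means.mean():.3f}, sd = {means.std():.3f}, "
      f"share within 2 sd = {within_2sd:.3f}")
```

This is the sense in which parametric estimators can lean on normal approximations even when the underlying data are not normal.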
Please address the question: what theory of knowledge are quant analytical methods based on?
You have been told this over and over again. J.S. Mill, Neyman-Rubin, etc. Quantitative methods are based on what philosophers of causality refer to as "the fundamental problem of causal inference."
http://tinyurl.com/6mx29l
Read up, qually.
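The "fundamental problem of causal inference" mentioned here can be sketched. Here is a minimal simulation (Python; the constant treatment effect of 2 and the outcome scale are invented) of the Neyman-Rubin idea: each unit's counterfactual outcome is never observed, yet randomization lets a group comparison recover the average effect:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

# Each unit has TWO potential outcomes; its treatment effect is y1 - y0.
y0 = rng.normal(0, 1, size=n)
y1 = y0 + 2.0                      # true effect = 2 for every unit

# The fundamental problem: we only ever observe ONE of the two per unit.
treated = rng.integers(0, 2, size=n).astype(bool)
observed = np.where(treated, y1, y0)

# Randomized assignment makes the simple group difference an unbiased
# estimate of the average of the unobservable individual effects.
ate_hat = observed[treated].mean() - observed[~treated].mean()
print(f"estimated average treatment effect = {ate_hat:.2f}")
```

Nothing in this sketch is about politics per se; it is the epistemological scaffolding that the linked literature argues quantitative causal inference rests on.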
This article does not speak to topics of concern for political scientists (e.g., democracy, justice, fairness). Thus I do not see the relevance for the discussion at hand.
You asked for the theory that quantitative analytical methods are based on. Not for a discussion of politics. I gave you what you asked for.
Seriously, that's just a lame attempt by the ILP to avoid having to read something he doesn't understand.
I do not know if we need to explicitly label the theory of knowledge that qual methods are predicated upon.
Oh, but we do.
See, you want the theory of knowledge that quant methods are based on, and I want it for qual methods. I care about the philosophy of science (in particular, I care about how that philosophy relates to epistemology).
If you don't care about philosophy of science issues, or you don't have sufficient knowledge to comment on them, perhaps you should admit as much?
In addition, if you don't know what the theory of knowledge is for qual methods, it is hypocritical in the extreme to berate quants for failing to define theirs.
Poor show, my man, poor show.
http://tinyurl.com/6mx29l
This article has nothing to do with epistemology. It deals with statistics. This is tantamount to my saying that the epistemology of qualitative methods is the case study. Statistics and case studies are methods. They are not theories of knowledge. In other words, they are not theories of how/why the world works (i.e., metaphysics), and what to gauge/measure in order to gain knowledge (i.e., epistemology) of the world.
you want the theory of knowledge that quant methods are based on
Then describe this theory, and the discussion can proceed accordingly.
if you don't know what the theory of knowledge is for qual methods
This theory of knowledge has been provided by quals on this blog:
Quals hold that political/social phenomena are determined by human cognition. Thus, the quals epistemological theory (or conception of knowledge) is geared toward understanding/analyzing human cognition.
Why do you take issue with it?
you want the theory of knowledge that quant methods are based on
"Then describe this theory, and the discussion can proceed accordingly."
Good, finally.
You now admit it: you actually don't know what the "theory of knowledge" behind qual work is (if you did, you would give it).
Fair enough, but it makes all the posts about quants not knowing the philosophy of science look hollow.
As an aside (just my 2 cents), I'd argue that most quant work is basically "positivism" while most qual work is "phenomenology". Neither are theories of knowledge per se, but they are philosophical positions with long supporting histories and literatures---with varying connections to different theories of knowledge.
I take no stance on which is "better" (this is a very old debate), but please stop the intellectual dishonesty of requesting "philosophy of science" information when you yourself are ignorant of it.
You now admit it: you actually don't know what the "theory of knowledge" behind qual work is (if you did, you would give it).
I do not understand this post at all. Where was anything like this admitted?
The following is what you quoted:
you want the theory of knowledge that quant methods are based on
Then describe this theory, and the discussion can proceed accordingly.
As you can see, I am asking for the quant epistemology. Nothing is written here about qualitative analysis.
As you can see, I am asking for the quant epistemology. Nothing is written here about qualitative analysis.
Well, out of interest then, could you tell me what the qualitative "theory of knowledge" is?
Seriously, stop stalling. You don't have an answer, and you don't care/are ignorant of the philosophy of science.
PS. I note that your comment to the effect of "they will indicate it through other means" is apt here: you don't know the answers, and you don't explicitly say so, but everyone reading this thread can work it out from what you _don't_ say.
I'd argue that most quant work is basically "positivism" while most qual work is "phenomenology".
In actuality, both quant and qual work is positivist -- in so far as they both emphasize sensory perception as the basis of knowledge.
Phenomenology is not entirely pertinent to the discussion at hand.
Well, out of interest then, could you tell me what the qualitative "theory of knowledge" is?
This is the third time this has recently been posted, so please do not write that a qual theory of knowledge has not been offered. If you find fault with this epistemology, then say so, but, again, please do not claim a qual epistemology has not been provided.
Quals hold that political/social phenomena are determined by human cognition. Thus, the quals epistemological theory (or conception of knowledge) is geared toward understanding/analyzing human cognition.
PS. I note that your comment to the effect of "they will indicate it through other means" is apt here: you don't know the answers, and you don't explicitly say so, but everyone reading this thread can work it out from what you _don't_ say.
What are you saying here?
Thus, the quals epistemological theory (or conception of knowledge) is geared toward understanding/analyzing human cognition.
'understanding human cognition' is not an epistemological theory.
Such theories (you can find this in any textbook) have three components
1. what is the thing called "knowledge"---what is its nature?
2. how do we obtain this knowledge?
3. what can we "know"?
At best, you seem to be hinting at an answer to 2 ("by asking people"), but not 1 or 3. So your answer is incomplete.
Both 1 and 3 have vast literatures behind them, involving "internalism", "externalism", "skepticism" and so on.
So, let's try again: what is the theory of knowledge behind qualitative work?
both quant and qual work is positivist -- in so far as they both emphasize sensory perception as the basis of knowledge.
Yes, but positivists tend to also believe in general/universal laws that can be 'uncovered'. Do quals believe this too?
Yes, but positivists tend to also believe in general/universal laws that can be 'uncovered'.
You are way off! Positivism is simply the claim that knowledge is derived from sensory perception. BTW, your definition of positivism would exclude quant analytical social science researchers, since you have acknowledged that there are no laws of political/social phenomena.
1. what is the thing called "knowledge"---what is its nature?
2. how do we obtain this knowledge?
3. what can we "know"?
Answers to these questions can readily be derived from qual epistemology.
1. what is the thing called "knowledge"---what is its nature?
It is human cognition. Its nature is highly creative and autonomous.
2. how do we obtain this knowledge?
That is self-evident.
3. what can we "know"?
We can know what people are thinking, or have thought.
1. what is the thing called "knowledge"---what is its nature?
2. how do we obtain this knowledge?
3. what can we "know"?
What are the quant answers to these questions?
You are way off! Positivism is simply the claim that knowledge is derived from sensory perception.
...positivists tend to believe in universal laws and the search for them. Read Comte, read Mill, read Spencer.
BTW, your definition of positivism would exclude quant analytical social science researchers, since you have acknowledged that there are no laws of political/social phenomena.
Do you have comprehension difficulties? I didn't define positivism, I noted a feature of it.
Now, stop stalling: let's have the qual theory of knowledge in accordance with the (fundamental) requirements of 8.01am.
Read Comte, read Mill, read Spencer.
So three people make a tendency? Not really. In any event, both quals and quants are positivists.
let's have the qual theory of knowledge in accordance with the (fundamental) requirements of 8.01am.
Do you have comprehension difficulties?!!! =-)
Read 8:16 a.m.
Read 8:16 a.m.
You are having a joke with me, no?
This is a half-baked undergrad answer. Take just the first question: "what is knowledge?".
You wrote "It is human cognition." From an epistemological perspective, that is meaningless. I was expecting a few comments on the fundamentals of theories of knowledge, like "belief" and "truth" etc.
How does the statement 'human cognition' answer the "Gettier problem", for example?
You are not making any sense; I'm beginning to wonder if you know anything about epistemology at all.
From an epistemological perspective, that is meaningless.
Why?
How does the statement 'human cognition' answer the "Gettier problem", for example?
This is not relevant to the discussion at hand.
At this stage, you are employing sophistry and distraction to avoid the obvious conclusion: quants in political sciences have no epistemological theory, while quals do. This would make qualitative methods in political science more scientific than quant methods.
At this stage, you are employing sophistry and distraction to avoid the obvious conclusion: quants in political sciences have no epistemological theory, while quals do. This would make qualitative methods in political science more scientific than quant methods.
Don't have a dog in this fight: quant, qual, whatever -- just trying to understand the philosophical underpinnings of your position.
You don't know what they are, and are very unfamiliar with the philosophy of science literature that you should peruse.
"From an epistemological perspective, that is meaningless."
Why?
I was going to start writing here, but then I realized your knowledge of epistemology is actually lower than that of my phil freshmen at the end of their first semester of classes. So, my advice is: take a freshman philosophy class. Seriously.
I was going to start writing here, but then I realized your knowledge of epistemology is actually lower than that of my phil freshmen at the end of their first semester of classes. So, my advice is: take a freshman philosophy class. Seriously.
So you do not have an answer. Fine. At least you are taking a reasonable position.