Wednesday, August 08, 2007

Old Methodology/Theory Debates

508 Comments:

Anonymous Anonymous said...

I have a wooden patio set in need of some rejuvenation. Would I be better off with the more time-consuming staining followed by water sealing, or simply applying one coat of a tinted polyurethane?

8/13/2007 7:18 PM  
Anonymous Anonymous said...

Oops! Sorry, wrong blog. My apologies.

8/13/2007 7:57 PM  
Anonymous Anonymous said...

Don't apologize - this would be a more interesting discussion than some of the methodology debates we've witnessed on this blog...

8/14/2007 6:35 AM  
Anonymous Anonymous said...

"Would I be better off with the more time-consuming staining followed by water sealing, or simply applying one coat of a tinted polyurethane?"

Depends. Do you have seasonality problems where you live?

8/14/2007 11:52 AM  
Anonymous Anonymous said...

Actually, yes, now that you mention it. High humidity as well.

8/14/2007 1:17 PM  
Anonymous Anonymous said...

I, for one, found the discussion on this blog during the "dog days of summer" to be very stimulating and rather enlightening. I strongly suspect those who deride it do so because they do not like their thinking/assumptions challenged. This is the exact opposite of the attitude that an intellectual (or would-be intellectual) should adopt.

8/18/2007 2:25 PM  
Anonymous Anonymous said...

Go elsewhere. Please.

8/19/2007 3:20 AM  
Anonymous Anonymous said...

2:25 must be GAG

8/19/2007 9:52 AM  
Anonymous Anonymous said...

wow, the provost and dean must be on vacation if GAG is blogging again.

8/19/2007 12:46 PM  
Anonymous Anonymous said...

Who is GAG?

8/19/2007 7:04 PM  
Anonymous Anonymous said...

Not very familiar with set theory. But for patio set, the interaction term (cuprinol*paintbrush) should solve the equation. Save the residuals for later bootstrapping on fence repair.

8/20/2007 4:34 AM  
Anonymous Anonymous said...

I think GAG is that guy from miami that used to post all the time

8/20/2007 9:58 AM  
Anonymous Anonymous said...

"I strongly suspect those that deride it do so because they do not like their thinking/assumptions challenged."

----------------

While there are people on all sides of every argument who are simply there to defend their arguments, I assure you that any derision that came from me was because ILP revealed that he/she didn't understand some of the basics of quantitative analysis, in particular the basics of experiments. And I mean the absolute basics.

8/20/2007 1:54 PM  
Anonymous Anonymous said...

The ILP did apparently know very little about the operation of statistical methodologies. The debate, however, was not about how statistics work, but about what they measure. On this issue, in my estimation, the ILP's arguments held up well.

8/21/2007 6:41 AM  
Anonymous Anonymous said...

Here we go again. Can you answer questions about what statistics measure (or don't measure) without understanding how they work?

8/21/2007 8:07 AM  
Anonymous Anonymous said...

Most people do not know how their speedometer works, but they know what it is measuring.

8/21/2007 8:27 AM  
Anonymous Anonymous said...

What is your speedometer measuring?

8/21/2007 8:37 AM  
Anonymous Anonymous said...

preferences, as opposed to party, effects.

8/21/2007 9:53 AM  
Anonymous Anonymous said...

The last four posts are not relevant. 6:41 a.m. feels that the ILP's arguments "held up well". 8:07 a.m., do you disagree? If so, why? Please answer substantively. In other words, please do not respond merely by asserting that the ILP does not know "x". The issue at hand is not what the ILP does or does not know, but whether his/her arguments have validity.

8/21/2007 10:01 AM  
Anonymous Anonymous said...

"The issue at hand is not what the ILP does or does not know, but whether his/her arguments have validity."

Was it ever clear what ILP was arguing? S/he said one thing, then took it back. Said something else, then contradicted it the next day. And on and on.

8/21/2007 12:15 PM  
Anonymous Anonymous said...

ILP's argument was that quantitative methods in the social sciences have very limited utility because such methods are appropriately used to establish and analyze the operation of the immutable laws of physics.

The other component of his/her argument was that the focus of political science should be on studying the human cognition underlying social/political phenomena.

8/21/2007 12:37 PM  
Anonymous Anonymous said...

Not again. Please.

8/21/2007 12:39 PM  
Anonymous Anonymous said...

"ILP's argument was that quantitative methods in the social sciences have very limited utility because such methods are appropriately used to establish and analyze the operation of the immutable laws of physics."

And when ILP was asked to explain WHY this should be the case, s/he could not do so convincingly. ILP: No need to try again. We can reference earlier posts. I agree with 12:39: there is no need to start this again. Unless you have done some more studying and come up with something new and useful, just stop.

8/21/2007 12:57 PM  
Anonymous Anonymous said...

"And when ILP was asked to explain WHY this should be the case, s/he could not do so convincingly."

One study conducted in 2004 found that get-out-the-vote telephone calls had a positive effect, while a study in 2006 found that they did not. In terms of scientific knowledge it is not clear what is gained here.

8/21/2007 1:05 PM  
Anonymous Anonymous said...

"And when ILP was asked to explain WHY this should be the case, s/he could not do so convincingly."

One study conducted in 2004 found that get-out-the-vote telephone calls had a positive effect, while a study in 2006 found that they did not. In terms of scientific knowledge it is not clear what is gained here.

8/21/2007 1:05 PM
---------------------------------
Well my Ouija board says asidfoasijf, my Magic 8 ball says "ask again later", the guy next door says no, and my gut says yes. I am not sure what to make of all this.

8/21/2007 1:32 PM  
Anonymous Anonymous said...

RE: 1:05 p.m.

What if I could show that get-out-the-vote telephone campaigns had a positive effect 60 percent of the time? Would that not be a significant finding?

8/21/2007 1:59 PM  
Anonymous Anonymous said...

RE: 1:59 p.m.

The difficulty is we have no confidence that your finding will persist into the future. In other words, beginning today get-out-the-vote campaigns could have a positive effect in 20 percent of the cases, and in five years that could change to 35 percent. Then again, beginning today such campaigns may have a negative effect.

8/21/2007 2:11 PM  
Anonymous Anonymous said...

blogmistress, shut this down!

8/21/2007 2:55 PM  
Anonymous Anonymous said...

"The difficulty is we have no confidence that your finding will persist into the future."

Is this specific to quant research? If not, what's your point? More important, there are tons of examples of findings that persist across time for reasons that make theoretical sense. And, when we observe changes in the effect of X on Y, they often make sense too.

"One study conducted in 2004 found that get-out-the-vote telephone calls had a positive effect, while a study in 2006 found that they did not. In terms of scientific knowledge it is not clear what is gained here."

See any book on the philosophy of scientific inquiry. These sorts of conflicting findings happen in "hard" sciences too. We do a third study and a fourth study that will probably shed light on why studies one and two had conflicting findings.

8/21/2007 3:25 PM  
Anonymous Anonymous said...

RE: 3:25 p.m.

"More important, there are tons of examples of findings that persist across time for reasons that make theoretical sense."

That is patently false.

"These sorts of conflicting findings happen in 'hard' sciences too. We do a third study and a fourth study that will probably shed light on why studies one and two had conflicting findings."

In the physical sciences conflicting findings are the result of the need for the refinement of theory, data collection, and/or data analysis. In the social sciences inconsistent findings are the result of the autonomous and highly variable quality of human cognition – the ultimate cause of social/political phenomena.

8/22/2007 8:38 AM  
Anonymous Anonymous said...

wrong and wrong. do some reading.

8/22/2007 11:47 AM  
Anonymous Anonymous said...

"We [in political science] do a third study and a fourth study that will probably shed light on why studies one and two had conflicting findings."

I wish I could accept that. I now have to consider the possibility that the different quantitative findings of studies focusing on the same issues may be the natural state of things, and not the result of imperfect research.

8/22/2007 12:01 PM  
Anonymous Anonymous said...

For educational purposes, let's completely exclude outside/future applications (i.e., assume no external validity, beyond the population under study, for the time under study).

subject: why North Dakotans voted in 2006.

Fearless scholar Adam got a list of ND voters prior to the 2006 election and randomly assigned half to some treatment (X). After the election, Adam checked to see who turned out. He noted that 55% of the treatment group voted, while 50% of the control group voted. He is able to say that the probability of getting a treatment effect of that size, if X actually played no role in determining whether or not North Dakotans voted, is less than, say, 2% (depends on his sample size). So he is able to make a statement, WITH KNOWN ASSOCIATED UNCERTAINTY, about one of the things that makes present-day North Dakotans vote.

A qualitative scholar (Beth) can interview people asking them about what makes them vote. The voters may or may not know what actually makes them vote. They may think they know and be dead wrong. They may be dead right. We can't know. Beth can present a very plausible story on what leads citizens in ND to vote. The story may be largely right, it may be largely wrong. (It is likely largely right, if only because most of our intuitions are largely right.) BUT THE LEVEL OF UNCERTAINTY ASSOCIATED WITH EACH OF BETH'S CLAIMS IS UNKNOWN.

Carl looks at the same subject using observational data (income, education, age, etc.). Try as Carl might, we can never know if Carl has included all of the relevant covariates, AND THUS WE CANNOT KNOW THE LEVEL OF UNCERTAINTY ASSOCIATED WITH EACH OF CARL'S PARAMETER ESTIMATES.

Adam wins.
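Adam's "known associated uncertainty" can be made concrete with a one-sided two-proportion z-test. A minimal sketch in Python, with hypothetical group sizes of 1,000 voters per arm (the original comment does not give sample sizes):

```python
import math

def two_prop_test(x1, n1, x2, n2):
    """One-sided z-test for a difference in proportions (treatment vs. control)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z) under the null
    return z, p_value

# Hypothetical sample sizes: 1,000 voters per group, 55% vs. 50% turnout.
z, p = two_prop_test(550, 1000, 500, 1000)
print(z, p)
```

With these made-up group sizes the p-value lands around 0.013, i.e., "less than, say, 2%" as described; halve the samples and the uncertainty is correspondingly larger.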

8/22/2007 4:45 PM  
Anonymous Anonymous said...

4:45 is right about experimental designs vs. haphazardly drawn case studies and anyone with a first-semester grad education should know so. ILP must not have that education. However:

"Try as Carl might, we can never know if Carl has included all of the relevant covariates, AND THUS WE CANNOT KNOW THE LEVEL OF UNCERTAINTY ASSOCIATED WITH EACH OF CARL'S PARAMETER ESTIMATES."

While this may be technically true, we can often be pretty darn sure what covariates (Z) are needed for controls. By "relevant", you must mean that Z is correlated with both X and Y in ways that would make X potentially spurious AND Z is theoretically likely to influence (not just correlate with) Y AND Z is temporally prior to X. Or are you really arguing that non-experimental designs do not allow for estimates of certainty? What about time-series designs?

8/22/2007 6:16 PM  
Anonymous Anonymous said...

"we can often be pretty darn sure what covariates (Z) are needed for controls."

I guess this is where the debate lies. Is this really true? In medical studies they can often have absurd amounts of information about their subjects (often nurses, followed for decades). Nevertheless, observational studies continue to be countered by experimental findings. And there we are typically just talking about the health/mortality of individuals. Much of political science involves the interaction of individuals, which ratchets up the complexity a bit.

I'm open to your claim. I just don't see the reason for your optimism.

8/22/2007 7:12 PM  
Anonymous Anonymous said...

i want to know where you people finished your phds, if at all

8/22/2007 8:12 PM  
Anonymous Anonymous said...

who?

8/23/2007 11:50 AM  
Anonymous Anonymous said...

I did.

A thesis on Epistemic communities and the ontological self.

8/23/2007 12:12 PM  
Anonymous Anonymous said...

i finished my phd at University of Phoenix. so take that!

8/23/2007 8:08 PM  
Anonymous Anonymous said...

You Hack! I'm a DeVry man.

8/23/2007 8:37 PM  
Anonymous Anonymous said...

ooh - mine's on ontological communities and the Epistemic self!!!

8/24/2007 8:56 AM  
Anonymous Anonymous said...

Hey, I don't know where else I might ask this question: does anyone know if there's a protocol for calculating Laakso-Taagepera effective #s of parties where there are large numbers of independent candidates? I'm currently treating 'independent' as a party, but this can misstate the degree of party fragmentation, since independents in a couple of places I'm looking at regularly top 20% of the vote.

8/26/2007 5:04 AM  
Anonymous Anonymous said...

Hey, I don't know where else I might ask this question: does anyone know if there's a protocol for calculating Laakso-Taagepera effective #s of parties where there are large numbers of independent candidates? I'm currently treating 'independent' as a party, but this can misstate the degree of party fragmentation, since independents in a couple of places I'm looking at regularly top 20% of the vote.

Sounds like ENPP would work perfectly there. What's the problem?

8/26/2007 8:40 PM  
Anonymous Anonymous said...

8/26/2007 8:40 PM
I don't see how one could not see the problem, although it's admittedly a small one. So in case anyone's interested, my earlier post actually solved the problem incorrectly. Independent vote / seat share should be left out of the calculation, which effectively treats each independent as a separate party. You then get the proper high ENPPs.

It'd be great if someone could point me to a precedent in the literature, but I can't think of anyplace where non-party candidates attract enough votes to substantially change the ENPP (Brazil? Russia?).
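For anyone following along, the difference between the two treatments is easy to see in code. The Laakso-Taagepera index is 1 over the sum of squared vote shares; the shares below are hypothetical, with independents collectively taking 20%:

```python
def enp(shares):
    """Laakso-Taagepera effective number of parties: 1 / sum of squared vote shares."""
    return 1.0 / sum(s * s for s in shares)

# Hypothetical shares: three parties plus independents who jointly take 20%.
parties = [0.40, 0.25, 0.15]
independents = 0.20

# Treatment 1: lump all independents together as a single "party"
# (understates fragmentation when the independents are many).
lumped = enp(parties + [independents])

# Treatment 2: leave the independent share out of the sum entirely
# (party shares stay as fractions of the *total* vote), which in the
# limit treats each independent as a separate tiny party.
left_out = enp(parties)

print(lumped, left_out)  # the second figure is the higher ENP
```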

8/27/2007 4:00 AM  
Anonymous Anonymous said...

Re: 8/27/2007 4:00 AM

It happens a lot in some of the post-communist countries, or at least it did in some of their earlier elections. See Taagepera's 1997 article in Electoral Studies where he presents a method for calculating ENEP under these circumstances. I've seen a number of papers where this 'new' method is used. If you are interested in the difference between the effective number of parties figures that you get with the different methods, I think the data put together by M. Golder (also from an Electoral Studies article, a few years ago) has both for a wide array of countries. Hmm. Sounds like you might usefully spend a half an hour perusing the Electoral Studies webpage! The articles themselves are online and you'd likely find some other cites of use. I think you'd find a lot of Taagepera stuff, anyway.

8/27/2007 5:25 AM  
Anonymous Anonymous said...

8/27/2007 5:25 AM

Thanks! That's not a literature that I typically move in, so that's good advice.

8/27/2007 6:44 AM  
Anonymous Anonymous said...

A thesis on Epistemic communities and the ontological self. [and more in that vein]

Good God, where did all this come from? To play devil's advocate: what exactly is heterodox about a claim that, [whatever] conditions obtaining, transnational scientific networks can facilitate policy coordination? It's a pretty straightforward claim, no?

Unless this all is an allergic reaction to neologism. It is that time of year, I guess.

Hint: it ain't going to get any better as you get closer to exams. Get treated now :)

8/29/2007 2:17 PM  
Anonymous Anonymous said...

Look who's talking!

8/30/2007 12:00 AM  
Anonymous Anonymous said...

8/22/2007 4:45 PM has the virtue of describing his/her application of quantitative methods as the effort to gauge human cognition -- as opposed to holding that such methods are the means to discovering the "immutable laws of politics."

9/10/2007 7:31 AM  
Anonymous Anonymous said...

I have a virtue!!

Thank you.

9/10/2007 8:17 PM  
Anonymous Anonymous said...

8/22/2007 6:16 PM, what is the basis for you writing "we can often be pretty darn sure what covariates (Z) are needed for controls" ?

9/11/2007 9:45 PM  
Anonymous Anonymous said...

"8/22/2007 6:16 PM, what is the basis for you writing "we can often be pretty darn sure what covariates (Z) are needed for controls" ?"

What is the basis for your question?

9/12/2007 5:25 PM  
Anonymous Anonymous said...

I guess the complexity of institutions and human motivations. How can we be sure we know all the relevant covariates?

9/12/2007 6:26 PM  
Anonymous Anonymous said...

We can't. But we can think really hard about all the covariates that should matter and are causally prior to X, and control for them. The possibility that there is some other covariate out there that I missed, despite my knowledge of what I am studying, is not going to keep me from using quantitative methods. But do what you like.

9/13/2007 5:20 PM  
Anonymous Anonymous said...

I asked a question long ago that still remains unanswered... ILP, feel free to respond...

if human cognition is what we should be trying to get a better grasp of, doesn't that lead you to use the tools of political psychology? Just out of curiosity, are you a political psychologist? If not, shouldn't you try to learn that literature? MRI studies, experiments, etc.?

I can't understand how else we would get at human cognition. And I certainly cannot see how qualitative approaches are better suited than political psychology if the goal is to better understand human cognition. Given ILP's disdain for experiments, his/her position is really mind-boggling...

9/14/2007 10:15 AM  
Anonymous Anonymous said...

Qualitative methods provide a superior route to understand the fine points of human cognition. Anyone who does not see this is living a lie.

9/14/2007 10:25 PM  
Anonymous Anonymous said...

What does Adam win? A job with Donald Rumsfeld. Golly gee whilikers, Adam knows one of the unknowns. Of course, he doesn't know the other unknowns (at least not with any certainty), nor does he not know the other knowns (at least not with any certainty), but he's got a statistically significant test of one known known that may, or may not, have any substantive political importance at all.

Because if he introduced a dichotomous variable, "weather," to his experiment, he'd also find that weather is one of the things that causes people to vote.

And sunrises. In fact, though I haven't run this regression yet, I'm fairly sure (because our intuitions are almost always correct) that the correlation coefficient between sunrise and decision to vote is highly significant. Every time the sun rises on Election Day, people vote. And murder, too; whenever there's a sunrise, there's also a murder. So maybe there's confounding. Maybe it's murder that causes North Dakotans to vote.

But he's got a good t-statistic, so by the lickspittle devotion of most "methodologists" to "findings," he wins.

A most impressive example.

9/15/2007 9:12 AM  
Anonymous Anonymous said...

And sunrises. In fact, though I haven't run this regression yet, I'm fairly sure (because our intuitions are almost always correct) that the correlation coefficient between sunrise and decision to vote is highly significant. Every time the sun rises on Election Day, people vote.

Actually, you're wrong. The sun rises every day. Yet there are only a few days on which people vote. So first, you don't get variation on your independent variable. Which means that it is not a significant predictor of why people vote.

Let's try again, shall we? I'm sure you'll have another very funny example to prove your point. But what you've clearly shown here is that you don't understand even the slightest bit about statistical inference. Pathetic...
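The no-variation point reduces to two lines of arithmetic: the OLS slope is cov(x, y) / var(x), and a regressor that never varies has var(x) = 0, so the slope is undefined. A toy sketch (the data are invented):

```python
def ols_slope(x, y):
    """Slope of y on x: cov(x, y) / var(x). Undefined when x never varies."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    var_x = sum((xi - mx) ** 2 for xi in x)
    if var_x == 0:
        return None  # no variation on the independent variable
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return cov / var_x

sunrise = [1, 1, 1, 1, 1]        # the sun rises every day
voted = [0, 1, 0, 1, 1]          # turnout varies
print(ols_slope(sunrise, voted))  # None: the slope cannot be estimated
```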

9/15/2007 11:14 AM  
Anonymous Anonymous said...

Qualitative methods provide a superior route to understand the fine points of human cognition. Anyone who does not see this is living a lie.

I wholeheartedly agree. And I will respectfully submit that this has been proven beyond doubt in previous discussions.

Let's move to the next topic.

9/15/2007 11:20 AM  
Anonymous Anonymous said...

"I wholeheartedly agree. And I will respectfully submit that this has been proven beyond doubt in previous discussions.

Let's move to the next topic."

Proven? You must be a qualitative scholar! One unfamiliar with a great deal of work looking at human cognition.

9/16/2007 10:08 AM  
Anonymous Anonymous said...

Hmmm... must've missed that proof.

9/16/2007 1:41 PM  
Anonymous Anonymous said...

I recall a "methodologist" job talk at a Top 10 R-1 in Southern California a couple years ago that was a model of what "methodologists" produce.

In it, the ABD Ivy League "methodologist" was purporting to explain success in war.

The "methodologist" had a very interesting, well-specified model with exciting variables taken from POLITY, CoW, and the like. The "methodologist's" vita was full of papers from the Midwest, and the transcript was heavy with classes in the mathematics and computing departments. The letters from other "methodologists" were spectacular. Basically, the candidate was a lock.

The job talk was fully up to expectations.

The graphs in the "methodologist's" presentation moved back and forth in very exciting ways, the underlying formulae were elegant, and the correlation coefficients on the most important independent variables were significant and robust.

The "methodologist" did the math on the white board and showed how past literature on the subject failed some basic tests of formal logic.

Best of all, the "methodologist" then "tested" the formal model with a large-n statistical analysis to show why qualitative scholars had completely missed the boat on the question. They had the inference wrong, you see.

Oh, it was wonderful.

Sadly, the "methodologist" didn't get the job. It might have had something to do with the fact that one of those pathetic qualitative scholars pointed out in the Q&A that the "methodologist's" model predicted that Germany won World War II.

That pathetic qualitative type. Didn't understand a damn thing about statistical inference.

9/17/2007 6:20 AM  
Anonymous Anonymous said...

re: Adam, Beth, and Carl.

The point is that Adam was able to say something with a known level of uncertainty. The others couldn't. What they said may or may not have been true, and we can't know how likely it is that they got it right. If their analysis agreed with our intuitions, knowledge, etc, we're likely to agree. If not, we likely won't.

9/17/2007 6:54 AM  
Anonymous Anonymous said...

"Sadly, the "methodologist" didn't get the job. It might have had something to do with the fact that one of those pathetic qualitative scholars pointed out in the Q&A that the "methodologist's" model predicted that Germany won World War II."

Wow! How sad that such ignorance can keep someone from getting a job. Or maybe the job candidate was saved in this case.

I get comments like this from undergrad methods students. "I know someone who is Black and votes Republican!" Or "That can't be the case because that isn't how it works in my home town!"

Just because we can't predict every outcome doesn't mean we haven't learned something.

9/17/2007 11:39 AM  
Anonymous Anonymous said...

Exactly! Just because our model doesn't successfully predict one outcome doesn't mean that we should dump it. Uncertainty is a necessary element of statistical inference. The question is how well the model predicts outcomes relative to other models or some baseline.

9/17/2007 11:53 AM  
Anonymous Anonymous said...

this might be a good example of the differences between qual and quant scholars. qual scholars aren't interested in research that doesn't explain the one case that they care so much about. quant scholars are more interested in explaining a phenomenon generally and are less worried about whether their model predicts every single case successfully.

9/17/2007 12:06 PM  
Anonymous Anonymous said...

Sadly, the "methodologist" didn't get the job. It might have had something to do with the fact that one of those pathetic qualitative scholars pointed out in the Q&A that the "methodologist's" model predicted that Germany won World War II.

And who gets to decide which 'critical' case a model ought to be able to predict correctly? What if the model predicts 99% of outcomes correctly, but misses that one? What if my model predicts that Argentina wins the Falklands War? Is that okay? Why or why not?

9/17/2007 2:01 PM  
Anonymous Anonymous said...

So if a model can predict the outcome of most situations (e.g., wars), to the quant. researchers here it is defensible even if it cannot predict the outcome of the most important war (perhaps event) in modern human history.

This is confirming the worst stereotype of quant. attitudes. Namely, they lack historical and theoretical depth and understanding.

9/17/2007 3:15 PM  
Anonymous Anonymous said...

9/17/2007 3:15 PM, are you an undergrad? I can't imagine that you are a grad student, let alone a professor.

9/17/2007 3:24 PM  
Anonymous Anonymous said...

"So if a model can predict the outcome of most situations (e.g., wars), to the quant. researchers here it is defensible even if it cannot predict the outcome of the most important war (perhaps event) in modern human history.

This is confirming the worst stereotype of quant. attitudes. Namely, they lack historical and theoretical depth and understanding."

If the worst stereotype of quant attitudes is that we care to understand a broader phenomenon and are less interested in YOUR impression of what single case was most important, then yes, you have confirmed YOUR worst stereotype of us.

WWII may be the most important war of the 20th Cent and it is important to understand its origins. However, it is also important to understand the origins of wars, generally. If said quant scholar was able to do this, while still not explaining WWII, that is of great value. If you can't see this, then you are blind!

How sad for those poor U of Miami students who must study under you!!!

9/17/2007 5:52 PM  
Anonymous Anonymous said...

Quant here.

If you can't explain WWII, you need a new model.

9/17/2007 5:55 PM  
Anonymous Anonymous said...

"Quant here.

If you can't explain WWII, you need a new model."

I don't believe you are a quant scholar. We may need a new model for WWII. But research suggests that the starts of wars, GENERALLY, have something in common. As long as wars, generally, have something in common, then we can study them using quantitative methods.

9/17/2007 6:06 PM  
Anonymous Anonymous said...

The issue was the outcome of wars.

The continuing defense of the above described study is just digging the quants deeper into the "lacking theoretical and historical depth" hole. World War II was of such a grand scale -- i.e., engaging such a massive number of people and amount of materiel -- that its significance is a thousand (perhaps a million) times the significance of the invasion of Grenada. Thus, the fact that a model can predict the outcome of 50 invasions of Grenada pales in comparison to the inability to correctly predict the outcome of World War II.

9/17/2007 7:28 PM  
Anonymous Anonymous said...

There is something seemingly wrong with a system of thought that would give more analytical weight to a series of small events (e.g., small-scale military engagements), than to a major world history shaping event (e.g., global war).

9/17/2007 7:44 PM  
Anonymous Anonymous said...

This issue comes down to questions of "immutable laws of social behavior" versus understanding human cognition. If your goal is to measure the "immutable laws of social behavior", then more analytical importance is assigned to explaining the largest number of events -- regardless of their relative scale. If, however, your goal as a researcher is to grasp human cognition, then your focus will be on events that ostensibly have the greatest effect on humans broadly, and human thinking in particular.

9/18/2007 4:44 AM  
Anonymous Anonymous said...

[yawn]...


troll.


[shrug]

9/18/2007 4:51 AM  
Anonymous Anonymous said...

"It is essential not to confuse the statistical usage of 'significant' with the everyday usage. In everyday usage, 'significant' means 'of practical importance,' or simply 'important.' In statistical usage, 'significant' means 'signifying a characteristic of the population from which the sample is drawn,' regardless of whether that characteristic is important."

--Wallis and Roberts, _Statistics: A New Approach_, 1956.

"This chapter...explains the limitations of significance tests. The first one is that 'significance' is a technical word. A test can only deal with the question of whether a difference is real, or just a chance variation. It is not designed to see whether the difference is important."

--Freedman, Pisani, and Purves, _Statistics_, 1978.

"There is a problem with the term, 'statistical significance.' It is a technical phrase that simply means that enough data has been collected to establish that a difference does exist. It does not mean that the difference is necessarily important."

-- Wonnacott and Wonnacott, _Statistics: Discovering its Power_, 1982.

"The scientist often seems rather to be struggling with facts, trying to force them into conformity with a theory [whose veracity] he does not doubt."

--Kuhn, _The Essential Tension_, 1977.

"In the Department of Mathematics and the Department of Statistics, the outputs are not scientific findings. They are theorems about mathematical objects and statistical tests... Nothing scientific comes from the theorems from the departments of Mathematics or Statistics or Economics, for the good reasons that (1) the set of theorems is practically unbounded and (2) statistical significance has practically nothing to do with scientific significance."

--McCloskey, _The Rhetoric of Economics_, 1985.
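The textbook point in these quotes is easy to demonstrate: hold a trivially small difference fixed, and "significance" arrives as soon as the sample is big enough. A sketch with made-up numbers (difference in means, equal known SDs):

```python
import math

def two_sample_z(mean1, mean2, sd, n):
    """z-statistic for a difference in means, equal known SD, n per group."""
    se = sd * math.sqrt(2.0 / n)
    return (mean1 - mean2) / se

# A substantively trivial gap: 0.1 point on a 100-point scale (SD = 10).
for n in (100, 10_000, 1_000_000):
    z = two_sample_z(50.1, 50.0, sd=10.0, n=n)
    print(n, "significant" if z > 1.96 else "not significant")
```

The difference never changes; only the sample does. That is the sense in which statistical significance "signifies a characteristic of the population" without saying anything about importance.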

9/18/2007 11:11 AM  
Anonymous Anonymous said...

Great quotes, but your selection seems somewhat biased (like your qualitative research, no doubt).

I'm sure someone has done a thoughtful qualitative case study of the outbreak of WWII. I can also understand why some would think that it is important to understand the occurrence of minor conflicts - what do they share in common? If you can't understand this, then I don't know what else to say. And, since you have gone back to your Immutable Laws of Science crap, I am leaving you to yourself. Some other sucker will have to take your bait.

9/18/2007 12:11 PM  
Anonymous Anonymous said...

"I can also understand why some would think that it is important to understand the occurrence of minor conflicts - what do they share in common? If you can't understand this, then I don't know what else to say."

If someone posits a model that explains the outcomes of "small wars", and that model incorrectly predicts the outcome of World War II, then an explanation should be (needs to be) offered to justify (explain) this discrepancy. Again, people can legitimately reject the model on the basis of its incorrect prediction of World War II -- questioning its overall utility. As you put it, "if you can't understand this, then I don't know what else to say."

9/18/2007 2:08 PM  
Anonymous Anonymous said...

"Great quotes, but your selection seems somewhat biased (like your qualitative research, no doubt)."

Instead of wantonly accusing someone of bias in their treatment of evidence, why don't you invoke evidence to support your critique? In addition to actually being persuasive, it would put you in a better light as a person.

9/18/2007 2:50 PM  
Anonymous Anonymous said...

These quotes are great, thanks anon! Once again, it proves beyond doubt that quantitative analysis is incapable of discovering 'significant' relationships, such as those that undergird the dynamics of human cognition.

I would respectfully submit that quantitative scholars would be best advised to read a book or two about major wars of the 20th century.

9/18/2007 4:02 PM  
Anonymous Anonymous said...

Does anyone else think that GAG is posting multiple times to make it look like more people support his position?

9/18/2007 4:52 PM  
Anonymous Anonymous said...

Of course, GAG/ILP is stupid enough to believe that most quantitative scholars are unfamiliar with the fact that there are limitations to quantitative approaches. And stupid enough, to boot, not to understand the points of each of the quotes he has cherry-picked, which do not indict quantitative analysis but merely its naive application.

Next ILP will tell us (as most of the Perestroikan persuasion believe) that Green, Palmquist, and Schickler is a book indicting all quantitative analysis. Mark my words.

9/18/2007 6:25 PM  
Anonymous Anonymous said...

RE: 6:25 p.m.

If you are going to throw around the word "stupid", please try and say something substantive.

9/18/2007 6:40 PM  
Anonymous Anonymous said...

My selection was indeed biased.

Biased by selecting only from statistics textbooks, the Almighty Kuhn, and the work of an economist.

The point is one that few, if any, "methodologists" seem to understand -- mathematical significance is not the be-all, end-all of political science.

Substantive significance is the goal -- saying something meaningful about social phenomena.

But "methodologists" -- particularly third-year graduate students much like yourself -- delude themselves into believing that, by virtue of having a significant correlation coefficient, they have "found" something.

But you're not alone in this. As McCloskey and Ziliak (1996) found in a study of every paper published in the American Economic Review from 1980 to 1990, 70 percent conflated statistical and substantive significance; 72 percent did not attempt to assess whether "large" findings were empirically large (that is, meaningful) or merely mathematically large (assuming, perhaps, that the two are synonymous); and 32 percent admitted to eliminating variables from subsequent iterations of their equations simply on the basis of statistical significance, without any empirical warrant for doing so.
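The statistical/substantive distinction is easy to see with a toy calculation (a sketch of my own; the numbers are invented and come from none of the studies cited here): given a large enough sample, a correlation of 0.01 -- which explains one hundredth of one percent of the variance -- still clears the conventional p < .05 bar.

```python
import math

def p_value_for_r(r, n):
    """Two-sided p-value for a Pearson correlation r at sample size n
    (normal approximation to the t-test; fine for the large n used here)."""
    t = r * math.sqrt((n - 2) / (1 - r * r))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

r = 0.01  # r^2 = 0.0001: one hundredth of one percent of variance explained
for n in (100, 10_000, 1_000_000):
    print(n, "significant" if p_value_for_r(r, n) < 0.05 else "not significant")
```

The effect is identical in every row; only the sample size changes. Statistical significance tracks n, not substantive importance.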

Few, if any, political scientists so infatuated with statistical machinery are willing (or able) to acknowledge the inherent bias in quantitative research, where decisions are taken and conclusions reached on the basis of mathematical properties that bear little relationship to the substantive issues under investigation.

For example, the success-in-war job talk was neither about war initiation nor about small wars, despite the attempt by one or more commenters above to defend it as such -- it was a straight, IR, systemic Big War job talk. One that predicted Germany winning WWII (but, hey, they got Italy correct, though they couldn't reject, with confidence, the possibility that the Soviets lost the war).

McCloskey illustrates the point with a paper by the economist J.D. Richardson (1978): "He regresses Canadian prices on American prices multiplied by the exchange rate for a number of industries and concludes, 'It is notable that the law of one price fails uniformly. The hypothesis of perfect commodity arbitrage is rejected with 95 percent confidence.'"

McCloskey then asks a pathetic, qualitative-like question: "Why in an imperfect world would it matter that perfect arbitrage is rejected?" (McCloskey 1985, 114). What Richardson "shows" (his "finding") is that something that doesn't actually exist isn't important.

(For further biased analysis on this point by a qual-, er, quantitative scholar, see Edward Leamer, "Let's Take the Con out of Econometrics," American Economic Review 73 (March 1983), 31-43.)

I'll leave that line of argument with another biased quote from a notorious qualitative scholar, Ronald Coase, who observed that the beauty of methods was that "one could cover the blackboard with diagrams and fill the hour in one's lectures without the need to find out anything about what happened in the real world."

Though several commenters above would have it seem so, I do not reject quantitative studies out of hand. Done well, they are important and valuable -- David Laitin's work on terrorism, for example.

Regrettably, most of them are done poorly.

In part, I think this is due to the elevation of "methods" as a sub-field graduate students can engage.

Methods are not fields. Statistics is not a method. These are tools -- means, not ends. Yet by allowing a student to "major" in methods, technique is reified.

And in part it is because the field -- and graduate school -- doesn't punish the ad hoc elimination of outliers. But as Bryan Jones and Frank Baumgartner (2002) note, by eliminating outliers simply on the basis of their mathematical properties (cf. McCloskey and Ziliak), "it is easy to predict [because] one will be right most of the time, and wrong only when it matters most" (i.e., giving Germany a "win" in WWII).
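Jones and Baumgartner's point can be sketched with invented numbers (a toy illustration of my own, not their data): trim the one extreme case on purely statistical grounds and your average error shrinks, but your error on the case that matters most explodes.

```python
# Hypothetical "war magnitudes": six small wars and one world war.
outcomes = [3, 4, 5, 4, 6, 5, 100]

full_mean = sum(outcomes) / len(outcomes)    # keeps the outlier
trimmed = [o for o in outcomes if o < 50]    # ad hoc outlier cut
trimmed_mean = sum(trimmed) / len(trimmed)

typical = 4.5   # a representative small-war value
extreme = 100   # the WWII-sized case

# Right most of the time: the trimmed model fits the small wars better...
print(abs(trimmed_mean - typical) < abs(full_mean - typical))   # True
# ...and wrong when it matters most: it misses the big one by more.
print(abs(trimmed_mean - extreme) > abs(full_mean - extreme))   # True
```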

Unlike the voice of Qualitative Triumphalism, however, I am a more ecumenical sort and believe we need both in political science.

Qualitative research can -- doesn't always, but can -- tell us which of the things we are looking at are, in fact, substantively meaningful. Quantitative work can then parse out just how meaningful they are.

The ideal job candidate would be able to do both, but very few departments reward fluency in both (or even recognize it). That is also regrettable.

As Max Black (one of those pathetic philosophers of mathematics) put it, "perhaps every science must begin with metaphor and end with algebra."

By the way, I'm not GAG, whoever that is.

9/18/2007 6:54 PM  
Anonymous Anonymous said...

As to the "stupid" commenter, show me a quantitative analysis that doesn't "cherry-pick" variables but survives accusations by fellow quantitativists of being merely a kitchen-sink model.

9/18/2007 6:58 PM  
Anonymous Anonymous said...

By the way, I'm not GAG, whoever that is

I actually believe that you aren't GAG--he couldn't have written this post to save his life.

But please, do you really expect us to believe you do not know who GAG is?

9/18/2007 7:18 PM  
Anonymous Anonymous said...

Well, for starters, I'm not a third-year grad student. And, if you met me in person, you'd probably find out that I broadly agree with you that there is an overemphasis on "sexy" methods over substance in the discipline.

That said, since you are not GAG (and, since you are not--maybe I should have gotten the hint because you'd actually cracked a stats book, and I apologize for suggesting you are him and calling you stupid as a result), you should also be aware that you're defending the position of someone who goes well beyond our common position, to the point of denigrating the value of any and all scientific approaches to the study of human behavior. Your Boy GAG isn't worth defending in this, or any other, discussion.

9/18/2007 8:50 PM  
Anonymous Anonymous said...

You have piqued my curiosity. How is it that GAG “denigrat(es) the value of any and all scientific approaches to the study of human behavior”?

9/19/2007 4:32 AM  
Anonymous Anonymous said...

Well I don't know who GAG is -- though obviously I can't prove it (but perhaps I can establish a strong correlation? -- JOKE!).

That's because I only heard of this blog (and the IR Rumor Mill blog) over drinks at the Sheraton at APSA. I assume GAG is the nom de plume of someone who is against quantitative methods?

9/19/2007 7:49 AM  
Anonymous Anonymous said...

GAG is George A. Gonzalez. He has argued for the superiority of qualitative methods over quantitative methods. In fairness to him, GAG has not argued against the scientific study of human behavior, but, more specifically, that qualitative methods are more scientific than quantitative methods.

Regardless of whether or not I agree with him, I can nonetheless see that calling him (or anybody else) "stupid" is completely inappropriate. From what I can gather, GAG, and most everyone else on this blog, has up until this point remained within the bounds of decorum.

9/19/2007 8:15 AM  
Anonymous Anonymous said...

Okay, then, thanks for the info re: GAG.

I wouldn't be inclined to agree with someone who argued that one "method" (as you can tell, I hate that term) is superior to another. They serve different functions in different sub-fields.

One thing I believe -- and I do do the math, by the way -- is that different sub-fields and different questions within the PS tent are differentially (for lack of a better term) suited for quantitative analysis. I don't see how, for example, anyone could argue with longitudinal studies of public opinion or economic growth.

On the other hand, not every war and peace question is readily amenable to quantitative analysis.

Likewise, I'd be suspicious of a quantitative analysis of Hobbes or Locke -- though not suspicious of a formalization of their arguments.

My real beef with quantitative scholarship is with those who naively (and the naivete runs from grad students to tenureds) believe that merely by finding something mathematically they've found something "real." There are lots of mathematical artifacts we can dig up in the archeology of PS, but they're not all meaningful or even important.

Even when such scholars do unearth a mathematical artifact that is important, comparatively few of them can transmit that importance in a meaningful way. Their articles all seem to say, "Look at my correlation coefficients; aren't they swell?"

In Comm Studies, for example, they like to tell the story of an earnest young Assistant who gave a talk at their version of APSA on corporate influences on campaign advertising. Based upon an extensive time-series analysis of televised political conventions, she "discovered" that the liquor industry never advertised. Her conclusion was that the liquor interests did not believe they could exercise influence in politics and so rationally allocated their advertising dollars to print media.

Of course, during the period under investigation, there was a law against liquor advertising on television....


If I had to have a rule-of-thumb, then, it would be this: As soon as one starts relying upon the technical apparatus to do the analytical work, I have to call into question the validity of the enterprise. This is political science, not engineering.

I admire the works of Russell Hardin and William Riker, for example: not "pathetic" qualitative scholars by any stretch of the imagination, but scholars able to explain their formal results in clear and accessible language. The Chicago school economists were like that, too, but that tradition seems to be dying out a bit.

But of course there's a lot of bad qualitative analysis too, and I'm sure that I've contributed my fair share to that tradition. But I hope to get better, which I think is the goal we all share, no?

Well, for the little that it's worth, that's my $.05 of contribution to the discussion.

Tastes great; less filling. It's all beer.

9/19/2007 9:51 AM  
Anonymous Anonymous said...

"the correlation coefficients on the most important independent variables were significant and robust"

"Their articles all seem to say, "Look at my correlation coefficients; aren't they swell?"

Why does it seem to be the case that the anti-quant people keep talking about 'correlation coefficients' over and over again as if correlation is held out by quantitative scholars as some kind of golden shovel?

If someone is criticizing quantitative methods and the best weapon deployed is 'correlation coefficients,' that really doesn't make the critic seem like she knows much.

Pardon the awkward phrasing, dinner's cooking.

9/19/2007 4:02 PM  
Anonymous Anonymous said...

4:02 p.m.:

My first response is that you are the standard bearer for quantitative scholarship since you obviously can't read.

At no point in the postings does the person claim to be "anti-quant."

In fact, s/he clearly said s/he was not anti-quant -- just anti-bad-quant, of which I suspect you are likely to be one of the field's foremost practitioners.

My second response is "ummm, because so many quantitative scholars trot them out as if they were golden shovels?"

Here's a bit snipped from an abstract (the ABSTRACT, fer gads sake!) at APSA07 (and NOT from a paper on regression, parameter estimation, etc.):

In comprehensive regression analyses controlling for many demographic and attitudinal factors, most coefficients for a foreign policy Information Index remained substantial and highly significant. We found no support for the media spuriousness hypothesis. On the other hand, the SES spuriousness hypothesis does have considerable merit. In many cases, indicators of socioeconomic status (probably related to differences in values and material interests) have substantial impacts on policy preferences independently of information levels, and the information coefficients drop in their presence.

As s/he put it, aren't those coefficients swell?

9/19/2007 5:37 PM  
Anonymous Anonymous said...

First, it doesn't sound like those are correlation coeffs. Too bad you don't know the substantive difference between correlation coeffs and regression coeffs. Second, that is ONE APSA paper. But I guess one selective case is all the evidence you need?

Quant researchers know the difference between statistical and substantive significance. Go back to your ILP argument...it was stronger.

I enjoy reading qualitative research and I think that it has a great deal to offer the study of politics, even in areas like public opinion (see Walsh's Talking about Politics). But arguing that there is no place for quant research is just silly. Arguing that most quant scholars are out of touch with what they are studying is just silly.

9/19/2007 6:12 PM  
Anonymous Anonymous said...

4:02 here.

I'm sorry I didn't read that 1 APSA paper. I was working. You write, in part,

"...information levels, and the information coefficients drop in their presence."

Please explain the extent to which information coefficients are the same as correlation coefficients.

Cite all the APSA papers you wish.

9/20/2007 3:46 AM  
Anonymous Anonymous said...

RE: 6:12 p.m.

"But arguing that there is no place for quant research is just silly."

Going back to the Spring, when this discussion began, no one has argued that "there is no place for quant research" -- not GAG, not anyone!

9/20/2007 6:35 AM  
Anonymous Anonymous said...

Sorry, but the overuse of "correlation coefficient" in these and previous posts clearly signals a lack of understanding of quantitative methods generally and regression specifically.

This isn't the first time I've heard qualitative folks refer to results based on "correlation coefficients" and such.

This isn't just a matter of semantics. A correlation coefficient refers to a statistic that is substantively and statistically different from, say, an unstandardized OLS parameter estimate or a generalized least squares parameter estimate. Not being able to recognize this shows one's ignorance in the realm of statistics.
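The distinction is concrete. With made-up data (a minimal sketch, not drawn from any paper discussed here): the Pearson correlation is unitless and bounded by 1, while the unstandardized OLS slope is expressed in units of y per unit of x, so the two can differ by an order of magnitude on the same data.

```python
def mean(xs):
    return sum(xs) / len(xs)

def pearson_r(x, y):
    """Unitless correlation coefficient, always in [-1, 1]."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def ols_slope(x, y):
    """Unstandardized OLS estimate: change in y per one-unit change in x."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    return cov / vx

x = [1, 2, 3, 4, 5]
y = [10, 22, 28, 41, 50]           # invented data, roughly y = 10x
print(round(pearson_r(x, y), 3))   # about 0.996 -- near-perfect association
print(round(ols_slope(x, y), 3))   # 9.9 -- tenfold larger, and in y-units
```

Rescale x (say, measure it in thousands) and the slope changes by a factor of 1,000 while r is untouched -- which is why the two terms are not interchangeable.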

9/20/2007 8:12 AM  
Anonymous Anonymous said...

9:19 didn't write:

"...information levels, and the information coefficients drop in their presence."

So when 4:02 wrote:

"Please explain the extent to which information coefficients are the same as correlation coefficients."

and when 6:12 wrote:

"First, it doesn't sound like those are correlation coeffs. Too bad you don't know the substantive difference between correlation coeffs and regression coeffs"

they both validated 9:19's sentence:

"...you obviously can't read"

because 9:19 clearly said that the phrase

"...information levels, and the information coefficients drop in their presence"

came from the abstract of a quantitative paper at APSA (by a Known Figure in the study of public opinion, by the way, since it was easy enough to find online)

a point that was made in response to 4:02's disingenuous rhetorical question about qualitative scholars' use of the term, "correlation coefficients."

But since 6:12 dismissed the point because it was

"ONE APSA paper"

cleverly adding

"But I guess one selective case is all the evidence you need",

here's a bit more data for use when considering 4:02's question

"Why does it seem to be the case that the anti-quant people keep talking about 'correlation coefficients' over and over again as if correlation is held out by quantitative scholars as some kind of golden shovel?"

JSTOR--> Advanced Search-->

Search Terms: "correlation coefficient*" AND {show OR suggest OR demonstrate} in ABSTRACT -->

Search Articles--> Date range 1970-2006--> Journal Database: Political Science.

Search Results: 2032 for « ((ab:(correlation coefficient) AND ab:(suggest)) OR ab:(show) OR ab:(demonstrate)) AND ty:FLA AND (year:[1970 TO 2006])^0 AND la:(eng) in multiple journals » (0.49 seconds)

Seems that quantitative scholars trot out "correlation coefficient" as if it were some kind of "golden shovel" quite often.
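One possible reason the count came out so high, visible in the parenthesization JSTOR echoed back above: the engine appears to have parsed the query as (correlation coefficient AND suggest) OR show OR demonstrate, so any abstract containing "show" or "demonstrate" alone would match. A toy Python sketch of the two parses (the "abstracts" are invented):

```python
# Hypothetical mini-corpus of abstract snippets.
abstracts = [
    "we show that turnout rises with income",
    "the results demonstrate a strong partisan effect",
    "the correlation coefficient suggests a weak link",
    "a qualitative case study of coalition formation",
]

def as_parsed(a):
    # (correlation coefficient AND suggest) OR show OR demonstrate
    return ("correlation coefficient" in a and "suggest" in a) \
        or "show" in a or "demonstrate" in a

def as_intended(a):
    # correlation coefficient AND (suggest OR show OR demonstrate)
    return "correlation coefficient" in a and \
        any(w in a for w in ("suggest", "show", "demonstrate"))

print(sum(map(as_parsed, abstracts)), "hits as parsed")      # 3
print(sum(map(as_intended, abstracts)), "hit as intended")   # 1
```

If that is what happened, it would also help reconcile the very different hit counts reported later in the thread without either searcher having misrepresented anything.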

9/20/2007 9:44 AM  
Anonymous Anonymous said...

9:44 -- if you really wanted to make that systematic, you'd have to go through a sample of those abstracts to tease out the context, but the point is taken.

9/20/2007 9:52 AM  
Anonymous Anonymous said...

Funny.

When I do a very general search (notice the few constraints I put) in JSTOR for "correlation coefficient" in the "abstract" of articles published in the field of "political science", I get...

...5 hits.

And the only hit after 1981 is from renowned quantitative guru Jane Mansbridge (I'm not disparaging her, by the way; I think she's a top contemporary political scientist)...

Now, 9:44, could it be the case that some (many?) of the 2,032 articles you come up with are in... um... econometrics or statistics journals? How weird would that be, huh?

Next, you will tell us that because there is no such thing as the immutable laws of statistics, the use of quantitative analysis has little value in the field of statistics...

9/20/2007 11:20 AM  
Anonymous Anonymous said...

Talk about the selective use of 'evidence'...

Hey, 9:44, is that how you conduct research?

9/20/2007 11:53 AM  
Anonymous Anonymous said...

I can't see why people are getting so heated. The issue is not over the utility of statistical methodologies in the social sciences -- which GAG did seek to call into question -- but over their proper application. Certainly some individuals misapply such methodologies. That is not a big surprise. Now, the more germane issue is do most? I do not believe that is a question that can be appropriately answered via a blog.

9/20/2007 12:13 PM  
Anonymous Anonymous said...

How did GAG call the utility of statistical methodologies into question in the social sciences?

9/20/2007 12:56 PM  
Anonymous Anonymous said...

I think the issue that some have with GAG is that he criticizes the use of quantitative methods, while it is clear that he doesn't understand much about the methods he criticizes.

That's the issue. The remedy is simple. Enroll in a stats class, or perhaps read a stats book or two, then make your informed judgment about the use of statistics.

I recently got interested in a state politics research question I didn't know much about. I had a hunch that our understanding was wrong, but since I didn't know the literature, I first read several articles and books on state politics. Then I wrote my critique. I felt much more comfortable criticizing that literature once I had read it (and it also allowed me to see that my first intuition wasn't entirely on the mark).

9/20/2007 1:23 PM  
Anonymous Anonymous said...

"How did GAG call the utility of statistical methodologies into question in the social sciences?"

Very poorly. See the old American Jobs posts.

9/20/2007 1:25 PM  
Anonymous Anonymous said...

Very poorly. See the old American Jobs posts.

I can't... it's very painful and quite sad to read...

9/20/2007 2:07 PM  
Anonymous Anonymous said...

11:20: No, if you look closely, the search parameter was for Political Science journals only.

And what it shows, of course, is that in the abstracts of articles -- which are presumably the items one wishes to call most attention to in order to hook the reader -- "correlation coefficient" appears over 2000 times.

But be a good quantitative methodologist. Replicate the study -- don't do it a different way, get a different finding, and say "see".

9/20/2007 2:08 PM  
Anonymous Anonymous said...

But I guess like your fellow-travelers, you don't really read all that closely.

9/20/2007 2:09 PM  
Anonymous Anonymous said...

But be a good quantitative methodologist. Replicate the study -- don't do it a different way, get a different finding, and say "see".

You were right, the search was for political science journals only. So let's see what happens when I replicate the search.

I found the same number of articles: 2,032.

Here's the first abstract:
"The distribution of population used in drawing electoral boundaries for the Canadian House of Commons from 1903 to 1976 is examined. Using the Gini index, the results show that independent Electoral Boundaries Commissions following specific rules have reduced significantly the inequalities in size of Canadian federal constituencies and have moved them closer to political equality. Cross-national comparisons demonstrate also that Canada has moved closer than Britain to political equality and not as close as the United States."

Please point to me where the term "correlation coefficient" appears. Anyone?

Again, as I wrote earlier, if you search abstracts in political science journals for "correlation coefficient", you get 5 hits, not 2,032.

I'll let others judge which one of us is right. 2,032 vs. 5. We can't both be right...

9/20/2007 2:28 PM  
Anonymous Anonymous said...

re: 2:09

And I guess like your fellow travelers, you didn't bother to actually read any of the abstracts. So you *thought* that each of them contained the term "correlation coefficient" when in fact they didn't.

The previous poster admitted being wrong about which journals were being searched. Can you stand up and admit that you were wrong as well? I think I know the answer, but let's wait and see...

9/20/2007 2:33 PM  
Anonymous Anonymous said...

This "debate" is making my head hurt.

How's this for a set of propositions:

1) Quantitative methods have substantial merit when done well and applied to appropriate contexts, yet much quantitative work is indeed shoddy.

2) Ditto for qualitative methods.

9/20/2007 3:10 PM  
Anonymous Anonymous said...

Wow, I just looked at JSTOR and there are indeed 5 articles that have the words correlation coefficient in the abstract.

Someone wrote earlier: "Seems that quantitative scholars trot out "correlation coefficient" as if it were some kind of "golden shovel" quite often."

That's just flat wrong! And it completely undermines that poster's point. George, how could you miss that? LOL

9/20/2007 4:14 PM  
Anonymous Anonymous said...

I believe that was noted in 9:52.

But I can't replicate your replication. JSTOR returns the results to me in a different order.

The first result I get is some article on coalitions in Europe, not on Canada, and it does mention them.

So while I'm happy to accept the validity of your correction, I am now worried about how JSTOR works....

(Searching by "coefficient" instead of "c.c." produced 77 abstracts; searching by "correlation" instead of "c.c." produced 173 results.)


The basic point remains that quantitative scholars also highlight their "golden shovels" (whatever that means).

I found 22 dissertation abstracts in pol sci over the past 10 years that specifically highlighted their "coefficients", and 51 that specifically mentioned their "correlations", and these include dissertations from Chicago, Harvard, Cal -- so I find it difficult to believe that they are being written by "unsophisticated" individuals who don't know the literature.

(Six, by the way, referred to "parameter estimates," for the benefit of 8:12 a.m.)

But I believe the point 3:10 p.m. is making is the same one that the original poster, qualitative 9/18 and 9/19, concluded with in both of her/his substantive posts.

That good research is good research, whether qual- or quantitative.

I've read them both carefully, and neither seems to declare that quantitative methods are unscientific, that quantitative scholarship is bad, or that qualitative scholarship is better.

In fact, s/he says specifically s/he wouldn't agree with anyone who promoted one over the other.

At which point some thin-skinned Jacksonian quantitative scholars (yer either fer us or yer agin us) decided that being, in 9/18-19's words, 'ecumenical' was synonymous with being 'anti-quant.'

Others, meanwhile, focused their rage on the fact that the writer -- in a blog posting, no less -- only referred to "correlation coefficients" and not "parameter estimation," "curve-fitting," "t-statistic," or any number of other terms-of-art that might have been used but which, like "correlation coefficient," would have been used MERELY AS EXAMPLES.

Those supposedly "scientific" scholars then latched onto a simple rhetorical device and proclaimed it proof that the user didn't know anything about statistics, etc.

Which seems rather qualitative of them.

But if it makes them feel any better, "correlation coefficient" is not indexed at all in the usually much-maligned Green and Shapiro book, so I'm sure this will raise that book in their -- if you'll pardon the term -- estimation.

9/20/2007 4:17 PM  
Anonymous Anonymous said...

I miss p-tests...

9/20/2007 8:49 PM  
Anonymous Anonymous said...

The animosity directed at GAG suggests that his critiques of quantitative methods were anything but “stupid” or “poor”. As someone once said, “if your opponents are cursing your name, you must be doing something right.”

After following this discussion for months, I think I grasp the thrust of GAG’s position(s). His first point, and perhaps his most convincing one, is that the value placed on quantitative methods in the social sciences is misplaced. More specifically, quantitative methods are most aptly applied to studying the laws of physics. With such laws determining the behavior of matter, they can be detected/measured utilizing quantitative methods. In the social sciences there are no “immutable laws” determining human behavior. Thus, as a general scientific point, it is not clear what quantitative methods are measuring in the social sciences.

GAG’s arguments in favor of qualitative methods were in my view more tenuous. His position on qualitative methods was predicated on his assertion that the causation underlying human behavior is cognition. (Unlike the physical sciences, where the behavior of matter is solely driven by the immutable laws of physics.) Thus, social scientists should focus their efforts on analyzing the cognition (i.e., thinking) propelling human behavior, and this, he held, could be most readily done through qualitative methods. I do not know if this is entirely feasible or workable for many of the questions taken up by social scientists, but, for whatever it is worth, this position does have the virtue of being a coherent theory of knowledge.

9/21/2007 6:59 AM  
Anonymous Anonymous said...

Hello, this is Not-GAG from 9/18 and 9/19.

I want to apologize for having been the source of so much discord on this board. I was led to believe that there was a debate over political science methods here, but it seems mostly to be position-taking and name-calling.

I would just like to go on the record as saying for myself the following:

No, I do not know what ILP means, so I don't know if I am or am not.

Yes, I prefer qualitative methods in most instances.

No, I do not believe qualitative methods are superior to quantitative methods.

No, I am not "anti-quant." Two of the books I most enjoyed in graduate school were Gates and Humes, _Games, Information, and Politics_ (Michigan 1997) and Brams and Taylor, _Fair Division_ (Cambridge 1996). My copies are heavily annotated; I have used insights from them in my own work.

It is merely the case that the questions that interest me do not seem (to me) to be amenable to quantitative analysis (yet -- cf Max Black, first metaphor then algebra).

I have an interest in formalizing some of my intuitions at some point in the future, but as yet they are not mature enough, systematic enough, well-developed enough, or rigorous enough to do so.

I can "see," however, that some of my intuitions could lend themselves to formalization (probably in set theory) at some point in the future -- assuming my research program goes well. At which point, I will add that mathematical formalization to my qualitative analysis. Why would I do that? Because formalizing the argument (i.e., in set theory) would make the logic clearer than I could express in whole language.

Yes, I reject the promotion of one approach to political science as more "scientific" than another.

(Although in my case, I also think the scientism idolatry in Pol Sci is overblown by an order of magnitude. Chemistry is a science.)

I am agnostic on the question of whether or not their are general covering laws that apply to politics. No one has convinced me one way or the other.

Yes, I think that many qualitative scholars criticize quantitative scholarship unfairly.

Yes, I think that many quantitative scholars criticize qualitative scholarship fairly.

If that makes me a "Perestroikan," then so be it. But all I know about that is the first piece that was written a few years ago. I haven't followed the issue and don't even know if it still exists.

But if I had to use a term from the end of the Soviet era, I would characterize myself as a Glasnostian -- one who believes in openness.

No, I do not believe the field is a zero-sum game.

If qual scholars were depriving quant scholars of jobs, or vice-versa, I could understand the animosity on the basis of competition for scarce goods. But I do not think that is happening. I cannot imagine, say, the Univ of Minnesota suddenly deciding to be a quantitative department.

I would also like to go on the record as saying I'm a bit surprised by how defensive some quantitative types seem to be. I thought that was principally limited to critical scholars of I.R.

So again, I apologize for creating strife and hope for the benefit of those who have attacked and attempted to defend me that I have made my own position clear.

9/21/2007 9:45 AM  
Anonymous Anonymous said...

And I apologize for misspelling "there" in the previous post. I initially had a noun there but rewrote most of the sentence.

9/21/2007 9:48 AM  
Anonymous Anonymous said...

"After following this discussion for months, I think I grasp the thrust of GAG’s position(s)."

GAG learning about himself!

9/21/2007 1:23 PM  
Anonymous Anonymous said...

1:23 p.m. you are oddly fixated on GAG. I think it is time to let it go. We really don't know if GAG has ever blogged on any of these threads -- or, for that matter, ever blogged.

9/21/2007 2:03 PM  
Anonymous Anonymous said...

9:45 sounds like a perfectly reasonable chap. Agree with most of what he says.

6:59 is ILP, trying to hide that he is ILP, as he evaluates ILP's arguments. He thinks his second paragraph makes sense.

9/21/2007 7:22 PM  
Anonymous Anonymous said...

6:59 a.m. is not ILP (or GAG).

9/21/2007 8:14 PM  
Anonymous Anonymous said...

"1:23 p.m. you are oddly fixated on GAG. I think it is time to let it go. We really don't know if GAG has ever blogged on any of these threads -- or, for that matter, ever blogged."

We know GAG has blogged here. He used to post with his name. That said, I will let it go. Must stop my fixation!!

9/22/2007 1:29 PM  
Anonymous Anonymous said...

"We know GAG has blogged here. He used to post with his name."

It could have been someone impersonating GAG.

9/22/2007 7:07 PM  
Anonymous Anonymous said...

6:59am is definitely ILP

9/23/2007 11:18 PM  
Anonymous Anonymous said...

It is quite possible that someone just randomly decided to impersonate GAG and GAG chose not to dispute it.....sure....

For those of us who have been exposed to him on the Perestroika list these arguments are consistent with what he has said there too (in emails that far precede the blogs).

9/24/2007 10:05 AM  
Anonymous Anonymous said...

Not-GAG again. What IS the perestroika list? If I'm going to be accused of being one, I'd like to see what it's all about. And didn't that tempest blow over when APSA started PoP?

9/26/2007 1:27 PM  
Anonymous Anonymous said...

Perhaps appropriately, the Perestroika list is a Yahoo newsgroup that you must request admission to join. If accepted, you will get the secret handshake and GAG will inundate your email account with his musings.... really.

9/26/2007 4:11 PM  
Anonymous Anonymous said...

4:11: Okay, thanks, I'll avoid that.

I'm all for methodological and epistemological pluralism, but not for more e-mail.

I did read a kind of review essay of some Perestroika books from a past number of Political Theory, and I have to say that if the reviewer accurately characterized the group -- which I have some reason to doubt -- I wouldn't number myself among its adherents.

As I have noted a couple times in this thread, I don't reject objectivist or positivist research at all. Nor do I reject, in general, the notion of PS as a science, though (as I pointed out above), I think many of our colleagues carry their scientism to the level of fetish.

I'll use an example but will try to keep it vague for what will be obvious reasons. I did my graduate work at what would be called a "rational choice department." Pretty well-known as such, etc.

One of the most mathematically inclined faculty members once expressed to me (I was working as an assistant to her at that time) her belief that the Model Political Scientist was Professor X. Prof X had a Ph.D. in math and perhaps Econ, as well. Prof X taught the advanced stats sequence in the department and an econometrics class -- both very interesting, by the way.

Prof X's problem, though, was that he didn't actually know anything about politics, even those of his home country. So he never published anything.

What a surprise -- he didn't advance up the ladder, resigned his position, and is now a staff economist or something or other at some corporation. Richer, to be sure; happier, perhaps. Obviously, most political scientists are not going to be like Prof X.

But what struck me was how blithely my then-boss disregarded Prof X's obvious lack of qualifications for Pol Sci simply on the basis of his knowing a lot of math.

To her (and she's still in the business, by the way), math knowledge trumped political knowledge -- you could just get the "facts" from a book. (Begging the question, "a book written by whom?")

So my principal objection (perhaps that's too strong a word -- concern, hesitation, reservation) to what the Perestroikans claim is the "hegemony" of rational choice or quantitative methods is the assumption that "mathematizing" research inevitably produces better answers.

For me, that's an open question, one which needs to be demonstrated, rather than assumed. In some cases, I think it has been shown -- public opinion research, for example.

But there, too, one needs to draw what are inherently qualitative conclusions about survey work, for the reasons we all learned in PS 101. Snapshot, retrospective evaluation, etc. etc. So to make the marginals in survey research meaningful, you have to bring in (often without acknowledging you're doing so) all kinds of contextual matter -- precisely the sort of unsystematic thing qualitative scholars are often (fairly) accused of doing.

Indeed, what I find so curious about the debate is that it proceeds by assumption. We assume that quantitative research is "better," so we publish more of it; once more of it is published, we assume it was published because it was better.

So I find both the proof-by-assumption and the smuggling-in of normative definitions of good and bad to be rather curious.

I also worry somewhat about the fashionableness problem. OLS was in fashion. Then probit/logit. Now neural networks. As Coco Chanel -- someone who knew a lot about fashion if not a lot about political science -- once said, fashion exists to become unfashionable.

Where I do agree with the Perestroika position -- again, assuming it was accurately summarized by the author -- is that research ought to be problem-centered, rather than method-centered. If a quantitative method gets at the problem better, then use it; if it doesn't, however, we ought not abandon the problem just because we can't use the tools we like.

Okay, so that's it. Not-GAG

I have to confess, I feel bad about signing "Not-GAG." That person either has very thick skin or must have been using a pen-name.

9/27/2007 6:04 AM  
Anonymous Anonymous said...

RE: 6:04 a.m.

For whatever it is worth, you sound like someone who would be very at home in the Perestroika movement. (You put forth a position that is at the core of Perestroika.) As for GAG, I don't think he minds or minded the insults. From what I observed, he probably viewed it as reflective of the veracity of his argument(s). I think the reason he does not post under his name anymore is because he does not want to invoke Univ. of Miami (his home institution) haters.

9/27/2007 1:47 PM  
Anonymous Anonymous said...

As someone who subscribes to the Perestroika list-serve, I can say that GAG never posted any discussion of "immutable laws" via that list-serve.

9/27/2007 2:49 PM  
Anonymous Anonymous said...

I heard his provost and chair made him stop posting under his own name.

9/27/2007 3:53 PM  
Anonymous Anonymous said...

RE: 3:53 p.m.

That's highly improbable. In the first instance, how would the Provost accomplish this? Secondly, GAG was aggressively defending his university/department. Why would the Provost want to shut him down?

I think the discussion on University of Miami ended because both sides "ran out of steam." (Too bad, because I found the whole thing entertaining and hilarious.)

9/28/2007 7:25 AM  
Anonymous Anonymous said...

With friends/defenders like that...

9/28/2007 8:18 AM  
Anonymous Anonymous said...

8:18 AM,
I am unsure what you mean?

9/28/2007 9:19 AM  
Anonymous Anonymous said...

Are you unsure of whether you are unsure? If not, why end with a "?"?

My hunch: 8:18 is GAG...who else would care?

9/28/2007 5:39 PM  
Anonymous Anonymous said...

9:19 is GAG, not 8:18

9/28/2007 5:40 PM  
Anonymous Anonymous said...

Here we are again with the GAG fixation. What's curious is that I am not even sure how it's relevant, or even tangentially addresses the question at hand.

9/28/2007 8:00 PM  
Anonymous Anonymous said...

What is the question at hand???

9/30/2007 4:26 AM  
Anonymous Anonymous said...

8:18 AM,
I am unsure what you mean?

9/30/2007 9:39 AM  
Anonymous Anonymous said...

well, if that's what you think...

9/30/2007 11:47 AM  
Anonymous Anonymous said...

I am unsure what you mean?!?

Respectfully,

Anon

9/30/2007 4:38 PM  
Anonymous Anonymous said...

I have a question for both sides of this debate. Which approach provides more analytical leverage over the decision to invade Iraq? One that focuses on human cognition (i.e., decision-making), or one that seeks to detect patterns of behavior through statistical methods? I am very interested in hearing from both sides on this.

10/12/2007 11:37 AM  
Anonymous Anonymous said...

So ILP/GAG is now going to pose a question for himself? Lonely are you?

10/12/2007 1:34 PM  
Anonymous Anonymous said...

1:34 p.m.,
Does your response mean that - as usual - you have no answers?

10/12/2007 5:23 PM  
Anonymous Anonymous said...

Listen 5:23 pm, if you spent your entire professional career studying statistical methods, you, too, would have no understanding of politics. So back off!!!

10/12/2007 8:09 PM  
Anonymous Anonymous said...

Re: 11:37 -- You must be asking a trick question, because the answer is "both." It depends upon what question you are asking.

Do you want to know why Great Powers embark upon preventive wars when it appears that the balance in some region of interest is changing and whether the 2003 invasion of Iraq is an example?

There having been many, many dyad-years of Great Power behavior, and many dyad-years of Great Power military intervention against smaller powers, I have every confidence that a large-n statistical analysis will reveal interesting patterns in such behavior. One could then compare those interesting findings to the case at hand and see if it bears out the pattern or is in some important way an anomaly.
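To make that logic concrete, here is a toy sketch of the pattern-then-anomaly reasoning I have in mind -- all the dyad-year numbers below are invented for illustration, not drawn from any real dataset:

```python
# Minimal sketch of the large-n logic: pool hypothetical dyad-year
# observations, estimate the rate of Great Power intervention conditional
# on whether the regional balance is shifting, then ask whether a focal
# case fits the dominant pattern or looks anomalous. All figures invented.

# Each record: (balance_shifting, intervened) -- hypothetical dyad-years.
dyad_years = (
    [(True, True)] * 30 + [(True, False)] * 70 +   # balance shifting
    [(False, True)] * 5 + [(False, False)] * 195   # balance stable
)

def intervention_rate(records, balance_shifting):
    """Share of dyad-years with intervention, given the balance condition."""
    subset = [r for r in records if r[0] == balance_shifting]
    return sum(1 for _, intervened in subset if intervened) / len(subset)

def is_anomalous(balance_shifting, intervened, threshold=0.1):
    """Flag a focal case that lands in a low-probability cell of the pattern."""
    return intervened and intervention_rate(dyad_years, balance_shifting) < threshold

print(intervention_rate(dyad_years, True))   # 0.3
print(intervention_rate(dyad_years, False))  # 0.025
print(is_anomalous(False, True))             # True: an anomaly worth a case study
```

An anomalous case (intervention where the pattern says it is rare) is exactly the kind that would then be handed off to qualitative comparison.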

If anomalous, qualitative scholarship could then compare representative cases of preventive war with the (hypothetically) dissimilar case of Iraq and attempt to understand why.

Or are you asking what were the psychological determinants of key decision-makers within the administration of George W. Bush who advised/decided to invade Iraq in the wake of 9/11?

I have every confidence that in-depth case study analysis that relies upon social psychology's models of human cognition will offer interesting and exciting findings about the psychology of that administration's decision-making.

Then one could cross-compare those findings with like findings from other case studies and develop a meta-theory about the psychology of decision.

Then one could formalize it mathematically (i.e., poli-heuristic theory) and test interesting and exciting hypotheses to tease out the essential elements of the model.

Then one could take those findings and apply them to other decision-making frameworks, such as Congressional log-rolling or corporate innovation.

And then you'd have a field. And a social science. And an academy.

And everyone would be happy.

Except for those who will then post on a blog, complaining that quantoids have captured the field or that qualtoids keep waving their hands.

And then you'll ask that question again, only 5 years from now.

Your friend in qualitative scholarship,

Not-GAG

10/13/2007 5:22 PM  
Anonymous Anonymous said...

RE: 5:22 p.m.

Thank you for your thoughtful response. I have to wonder, however, whether a meaningful comparison can be made between the Bush Administration’s decision to invade Iraq and Hitler’s to absorb Czechoslovakia. (I chose these two cases because there was a high degree of complicity among the existing “Great Powers” in both of these actions.) The context of each of these cases is just so radically different that I would suspect that any so-called similarities in causal mechanisms would be an artifact of the methodology employed.

10/14/2007 7:46 AM  
Anonymous Anonymous said...

7:46 -- let me be the first to challenge the validity of the comparison. Even the most dedicated qualitative scholar has to have some standards, and even if Czech/Iraq were both found in the same regression, that doesn't mean that one is arguing they are "similar." One would want to know IF they proved to be similar, and in what ways.

Qualitative or quantitative, research can't proceed via assumption.

On the merits of the cases, I think the comparison is ridiculous. Why not Iraq and the colonization of the moon?

The case study method isn't license to do whatever you want to do. What you're proposing there is the case study version of a kitchen-sink regression.

What is the Sudetenland analog in Iraq? What treaty relationship with American enemies allegedly protected Iraq? What pro-America agents provocateurs entered Iraq before March 2003 to instigate riots that the US could then claim "proved" the "Sudeten" Iraqis wanted to be part of the USA? Who is the Iraq Chamberlain? With whom did he sign a treaty in Iraq?

There has to be a logic to case selection that is transparent to the reader, so that the case can -- so to speak -- be replicated. Perhaps the writer was unaware of recent historiography on the case, perhaps s/he did not use primary material that offered new insights. In the same way that historians "test" each other's work, so too can qualitative scholars follow suit.

Now if your research agenda in that example was to paint Bush as the New Hitler, I guess I can understand your proposal -- See, Bush's cognition was just the same as Hitler's!

But if you're seriously interested in whether and how the Iraq invasion is like/unlike past GP interventions in middle- or small-powers, then you're barking up the wrong tree.

Not-GAG

10/14/2007 2:28 PM  
Anonymous Anonymous said...

Let me add that I do not believe that qualitative scholarship is synonymous with "non-positivist" scholarship.

Perhaps that's how people mistakenly view the quant/qual divide -- they assume that qualitative scholarship is synonymous with interpretivist scholarship. That's not the case.

Alex George and his students and successors, among others, deliberately sought a positivist qualitative scholarship.

Not all qualitative scholars deny the existence of objective knowledge, etc. I certainly do not.

10/14/2007 2:33 PM  
Anonymous Anonymous said...

Not-GAG: Finally a thoughtful qualitative scholar is posting here!

10/14/2007 5:15 PM  
Anonymous Anonymous said...

RE: 2:28 p.m.

I am uncertain as to whom your argument is directed against. The substance of your post would seemingly be directed against large-n approaches – à la statistical methods. But much of your rhetoric appears cast as critical of small-n (comparative case study) research. I would greatly appreciate it if you could clarify your position.

10/15/2007 6:38 AM  
Anonymous Anonymous said...

6:38 -- my argument was directed against 10/12 11:37 and her/his subsequent 10/14 7:46 (I'm assuming it is the same poster).

The question posed on 10/12 was: would large-n/small-n statistical analysis do better in explaining Iraq 2003 than studies based on "human cognition."

On 10/13 5:22 p.m., I wrote "both," depending upon the specific question being asked. Won't repeat it here, it's just up-thread.

The reply on 10/14 7:46 proposed a comparison between Iraq 2003 and Czechoslovakia 1938, suggesting that a statistical correlation would be misleading.

My subsequent reply, to which you are querying, was not directed at statistical analysis but at the validity of the case selection and the promiscuity with the case-study method that the question implied.

As I have posted earlier in this discussion, though I am a qualitative methodologist (hate that term) by choice, I am neither hostile to quantitative analysis nor to the positivist ontology generally.

My belief, again a longer discussion is up-thread, is that we need both in I.R. because not every question is amenable to formalization and because there are important anomalies in the empirical record that bear closer explanation such as that provided by solid qualitative research (for example, in the pattern of the demo peace, why does France/Germany in WWI not fit, when each was about as democratic as the other if we rely upon standardized measures of democracy (i.e., POLITY, etc.)?).
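To make that anomaly-hunting concrete, here is a toy sketch -- the scores and the democracy cutoff below are assumptions for illustration only, not actual POLITY data:

```python
# Toy illustration of flagging democratic-peace anomalies: warring dyads
# in which both members clear a standard democracy threshold -- exactly
# the cell where the pattern says wars should not appear. Scores invented.

DEM_THRESHOLD = 6  # a common POLITY2-style cutoff for "democracy" (assumption)

# (state_a, state_b, score_a, score_b, war): hypothetical dyad records.
dyads = [
    ("France", "Germany", 8, 7, True),    # the WWI-style anomaly candidate
    ("UK", "France", 8, 8, False),
    ("Germany", "Russia", 7, -6, True),
]

def democratic_peace_anomalies(records, threshold=DEM_THRESHOLD):
    """Return dyads that fought despite both members scoring as democracies."""
    return [
        (a, b) for a, b, score_a, score_b, war in records
        if war and score_a >= threshold and score_b >= threshold
    ]

print(democratic_peace_anomalies(dyads))  # [('France', 'Germany')]
```

The flagged dyad is precisely the case that would then warrant close qualitative examination.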

So I was not, as a qualitative scholar, attacking quantitative research; I was attacking bad qualitative research (at least a bad qualitative example).

10/15/2007 9:03 AM  
Anonymous Anonymous said...

RE: 9:03 a.m.

Thank you again for your thoughtful response. In thinking about your argumentation here, I am somewhat confused. This is because in my estimation quantitative researchers presumably assume that all cases of “x” (e.g., Great Power invasions of small/medium powers) can be grouped together and effectively analyzed to induce causal mechanisms across infinite cases.

Again presumably, it is the qualitative/case study researcher that admonishes against large-n study approaches precisely because they obscure/ignore/suppress variables that are rather unique (nevertheless important) to each case. Therefore, the Czechoslovakia/Iraq examples were ostensibly offered to suggest precisely the point that the grouping together of “like” cases can lead to very misleading research outcomes. Thus, I do not see how the invocation of these examples serves as a case of “bad qualitative research.”

10/15/2007 10:25 AM  
Anonymous Anonymous said...

"Therefore, the Czechoslovakia/Iraq examples were ostensibly offered to suggest precisely the point that the grouping together of “like” cases can lead to very misleading research outcomes. Thus, I do not see how the invocation of these examples serves as a case of “bad qualitative research.”"

And your choice of Czechoslovakia/Iraq were picked because they supported your argument. Others could be selected to oppose your argument. I believe this was one of the main points of the previous posts.

Not GAG or Not-GAG

10/15/2007 11:23 AM  
Anonymous Anonymous said...

Yes, it is your method of selection of cases that makes it bad qualitative research.

10/15/2007 11:24 AM  
Anonymous Anonymous said...

Iraq and 1938 Czechoslovakia are unalike on so many dimensions I'm not sure how a "large n" scholar would classify them as even close to identical cases. They might have the same value on one variable (type of invasion?), but there would be other variables distinguishing the cases (most notably offhand, Germany and Czechoslovakia shared a border; Iraq and the United States don't).

10/15/2007 11:31 AM  
Anonymous Anonymous said...

“And your choice of Czechoslovakia/Iraq were picked because they supported your argument. Others could be selected to oppose your argument. I believe this was one of the main points of the previous posts.”

“Yes, it is your method of selection of cases that makes it bad qualitative research.”

In the first instance, no one on this blog has proposed a comparative case study of the Iraq/Czechoslovakia invasions. So no one has suggested any qualitative research, much less “bad qualitative research.”

Second, if someone could posit suggestions that counter my position, I would welcome those suggestions.

Third, the initial claim that started this line of dispute was the assertion that large-n studies can provide insight into the causal factors prompting Great Power invasions of small/medium powers. There were no qualifiers/parameters on this assertion. Thus, the invocation of the Iraq/Czechoslovakia cases is appropriate insofar as these cases indicate that any large-n study of Great Power invasions must have significant qualifiers/parameters stipulated. I will further aver that even with qualifiers/parameters I cannot see the analytical utility of large-n studies of Great Power invasions of small/medium powers.

10/15/2007 12:44 PM  
Anonymous Anonymous said...

"Third, the initial claim that started this line of dispute was the assertion that large-n studies can provide insight into the causal factors prompting Great Power invasions of small/medium powers. There were no qualifiers/parameters on this assertion."

Anyone using large-n studies knows that the qualification is that there is uncertainty in our estimates. Thus, your choosing a case that doesn't fit a general pattern found is not so interesting in itself.

10/16/2007 10:43 AM  
Anonymous Anonymous said...

RE: 10:43 a.m.

Your post verifies the prime criticism of statistical methods. Namely, their frequent unscientific/irrational usage in the social sciences is obscured by/hidden in what amounts to statistical mumbo jumbo.

10/16/2007 11:20 AM  
Anonymous Anonymous said...

"Your post verifies the prime criticism of statistical methods. Namely, their frequent unscientific/irrational usage in the social sciences is obscured by/hidden in what amounts to statistical mumbo jumbo."

Uncertainty in estimates is statistical mumbo jumbo?? If you had taken my undergrad methods class, you would know exactly what I mean. If you have read any good methods book, whether more quantitative or qualitative (e.g., Gerring's book), you know exactly what this means. Please do yourself a favor and read one.
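For concreteness, this is the kind of "uncertainty in our estimates" I mean -- a simple Wald (normal-approximation) confidence interval around an estimated rate, with invented numbers:

```python
# Sketch of uncertainty in an estimate: a 95% normal-approximation (Wald)
# confidence interval around an estimated proportion. A single case falling
# outside the general pattern is expected; the interval describes how much
# the estimate itself could move. Numbers below are invented.

import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate and 95% Wald confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical: 30 interventions observed in 100 qualifying dyad-years.
p, lo, hi = proportion_ci(30, 100)
print(f"estimate {p:.2f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The point is that the estimate comes packaged with a stated range of error, not a claim of determinism.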

10/16/2007 5:27 PM  
Anonymous Anonymous said...

Your assertion that theoretical/empirical difficulties in comparing Great Power invasions of small/medium powers can be overcome with statistical methods sleight of hand confirms the broad suspicion that statistical methodologies often mask/obscure theoretical/empirical difficulties. It is important to stress that what you are suggesting is the masking/obscuring of theoretical/empirical difficulties, and not resolving them. Following from your thinking, statistical analysis, far from elucidating empirical phenomena, serves as a source of disinformation and ignorance.

10/16/2007 5:48 PM  
Anonymous Anonymous said...

Why do I get the feeling that at least one of the participants in this debate is Eliza?

e.g.:

8:30: "{Detailed undergraduate research methods discussion mentioning confidence intervals.}"
8:32/GAG: "Confidence intervals are mumbo jumbo."
8:34: "No they aren't."
8:36/GAG: "Confidence intervals are a source of disinformation and ignorance."
8:38: "GAG is a douchebag."
8:40/GAG: "How does "GAG is a douchebag" make you feel?"

GAG is either a blithering idiot or the best troll poster in the history of the Internet. Perhaps both.

10/16/2007 7:41 PM  
Anonymous Anonymous said...

RE 5:48: I really don't know where to start in responding to this, so I won't. I will simply, one last time, encourage you to actually read something. This is my final post.

"GAG is either a blithering idiot or the best troll poster in the history of the Internet. Perhaps both."

I think you're right...both. Even when you have some really thoughtful discussion by some (e.g., Not-GAG), GAG (i.e., 5:48) is incapable of understanding basic arguments. So, yes, he is a total idiot. I hate to call names, but I am now convinced of this.

10/16/2007 8:45 PM  
Anonymous Anonymous said...

RE: 7:41 p.m. and 8:45 p.m.

How do confidence intervals erase/overcome the real substantive hurdles in meaningfully comparing the decisions to invade Iraq in 2003 and Czech. in 1938/1939? Put differently, why should anyone accept the validity of confidence intervals with such comparisons?

P.S.: I have no problem that people seem to think that I am GAG, except that he is taking all the credit for my work.

10/17/2007 3:32 AM  
Anonymous Anonymous said...

The only way to truly get at the bottom of the invasion of Czechoslovakia is to interview Hitler. Hitler is dead. Ergo, you cannot explain that invasion. Only mediums can.

10/17/2007 9:59 AM  
Anonymous Anonymous said...

RE: 9:59 a.m.

That is an excellent and thoughtful point (although I would respectfully disagree with the claim about mediums).

Since it has been established that the only way to explain social/political phenomena is to gauge human cognition, it must follow that a case study of the invasion of Czechoslovakia by Germany is the relevant methodological approach. No amount of statistical mumbo jumbo can alleviate the real concern that quantitative tools are unable to grasp human cognition and its deleterious consequences for innocent Czechs during the 1930s.

10/17/2007 11:17 AM  
Anonymous Anonymous said...

This is totally hilarious! Keep it up.

10/17/2007 4:57 PM  
Anonymous Anonymous said...

9:59: I have some correspondents in the deepest, darkest jungles of Argentina who disagree with one of your premises.

10/17/2007 5:44 PM  
Anonymous Anonymous said...

RE: 9:59 a.m. and 11:17 a.m.

Is this your way of conceding that quantitative methods cannot offer any analytical leverage over the issues at hand?

10/18/2007 6:37 AM  
Anonymous Anonymous said...

10/15 11:23 was not Not GAG, at least not me, who is Not GAG (though perhaps I have been promiscuous in my punctuation, viz, Not-GAG).

As to the debate: Here is where we part company -- "Third, the initial claim that started this line of dispute was the assertion that large-n studies can provide insight into the causal factors prompting Great Power invasions of small/medium powers."

I take issue with your use of the word, "can," in that sentence.

As I have said repeatedly in this thread, I believe BOTH methodological families are necessary and useful, depending upon HOW the research question is framed and upon WHERE the research field is in terms of maturity, etc.

Quant scholars will be more adept at framing what I'm about to say, but my position would be: "large-n studies MIGHT (or COULD, depending on which you prefer) provide SOME insightS into SOME OF the causal factors prompting Great Power invasions of small/medium powers."

We would then go on to assess whether those did, in fact, explain Czech/Iraq, quant and qual alike almost certainly via a case study.

You are creating a bit of a strawman -- you're suggesting that quantitative research is determinative by design -- "will show that..."

Now granted, many of them write in those terms, but (again, quant folks, please make this more accurate) my two-years-of-grad-school-stats-a-decade-ago understanding of the quant method is that it helps us LIMIT the number of things we think about by establishing relationships of varying strengths.

I can easily imagine a large-n study that tells us very little. But at least in the telling us very little about Cases 1 and 2, it would tell us rather a lot about what things DON'T SEEM to matter.

And frankly, I don't see how a "human cognition" approach does any better.

At best, it is arguing by inference -- some of it quite good -- but by its very nature inference implies uncertainty in associations, which is also the backbone of (at least) the statistical method. "It would seem that Hitler believed X, Y, and Z based upon the evidence" sounds rather like "Variables X, Y, and Z cannot be rejected because...."

I have to go teach class right now, so let me end with this: If we've learned anything from survey research, it's that survey research can be unreliable because humans are unreliable. We index our answers, we guess, we say what we think others want to hear. I'm not sure I would credit a study based on "human cognition" -- particularly one based upon interviews -- with any greater reliability.

Doesn't mean it wouldn't be interesting or useful, but like any good statistical analysis would need to be taken with several grains of salt.

10/18/2007 11:47 AM  
Anonymous Anonymous said...

RE: 2:04 p.m.

There are at least two posters on this blog that are taking up the banner of qualitative methods. Not that anything justifies your insulting tone/language.

10/18/2007 7:09 PM  
Anonymous Anonymous said...

Why is there so much methods hostility from both camps? Are both camps so arrogant in their self-righteousness that any earnest intellectual discussion or disagreement is impossible? I guess the century-long search for the methodological holy grail continues...

10/18/2007 7:23 PM  
Anonymous Anonymous said...

RE: 11:47 a.m.

"I can easily imagine a large-n study that tells us very little. But at least in the telling us very little about Cases 1 and 2, it would tell us rather a lot about what things DON'T SEEM to matter."

I do not see how that necessarily follows.

"And frankly, I don't see how a 'human cognition' approach does any better."

If the goal of science is to determine causal factors, then analyzing the human cognition underlying social/political behavior is the appropriate scientific methodology for most key social science issues. It goes without saying that there are important obstacles to analyzing human cognition, and we will never achieve absolute certainty when doing so. Nonetheless, human cognition is the ultimate cause of all human behavior, and it is the correct scientific focus of the social sciences.

10/18/2007 7:52 PM  
Anonymous Anonymous said...

RE: 7:23 p.m.

I only note hostility, arrogance, and self-righteousness from the quant. side of the discussion.

10/18/2007 7:56 PM  
Anonymous Anonymous said...

It seems to me to be the very height of arrogance to say that quantitative approaches can give us no leverage on the social sciences, especially when giving no evidence in support of that proposition besides asserting that margins of error are "mumbo jumbo."

Not to mention that I don't believe any advocate of quantitative approaches in this discussion has suggested that there is no place for qualitative or interpretivist approaches in the discipline.

10/18/2007 8:12 PM  
Anonymous Anonymous said...

RE: 8:12 p.m.

To argue a position vigorously, or to hold that the other side is mistaken, is neither hostile, arrogant, nor self-righteous. In contrast, quants. have been explicitly insulting. (I, for example, was recently called an idiot -- not by one, but by two quant. posters.)

Finally, no pro-qualitative poster has denied the utility of quantitative methods in the social sciences. Even if they did, this would not necessarily be illegitimate/inappropriate. They must, however, thoroughly explain and justify their position in a good-faith manner.

10/18/2007 8:39 PM  
Anonymous Anonymous said...

RE: 8:12 p.m.

It should also be stressed that to invoke "margin of error" without a substantive explanation does amount to mumbo jumbo. If I said that qual. methods are superior to quant. methods because of "squanto", and that you need to read about squanto to understand what I am saying, you, too, would accuse me of offering mumbo jumbo to advance my position.

10/18/2007 8:47 PM  
Anonymous Anonymous said...

10/18 7:52 -- I will concede your point that human cognition is at the root of all explanation.

But then the human brain is at the root of all human cognition.

And evolution (sorry ID'ers) is at the root of the human brain.

So if I take your point, we need to refocus political science methods on evolution, since that is the true root of all of our models.

But, then, there is the Big Bang.

So perhaps we should be studying physics?

You seem to be engaging in a somewhat slippery exercise, trying to use human cognition as both a variable (something to study, something to "use" in a methodological sense) and as a constant.

Not GAG

10/18/2007 8:57 PM  
Anonymous Anonymous said...

RE: 8:57 p.m.

Dude,
I hope you're joking!

10/18/2007 9:07 PM  
Anonymous Anonymous said...

discussion of margin of error can be found in so many places including newspaper articles on public opinion polls.

i think the hostility from quants is not directed at qual methods, but at idiots (yes, i too will use that name) who make those studying qual methods look bad by association.

10/18/2007 9:27 PM  
Anonymous Anonymous said...

RE: 9:27 p.m.

Nonetheless, in the context in which you invoked margin of error, it had no obvious meaning, and you offered no explanation of its utility with regard to the issue at hand.

10/18/2007 9:44 PM  
Anonymous Anonymous said...

Re: 9:07 -- of course I was being facetious, right up to the last point, namely that the human cognition argument seems to want human cognition to be the dependent variable, an independent variable, and a constant.

But there's a kernel of a serious question there, my silliness notwithstanding: If the brief in favor of qualitative, human cognition-based models is that quantitative models cannot exist independently of human cognition, then why stop there? Cognition obviously can't exist independently of other things.

Not GAG

10/19/2007 2:59 PM  
Anonymous Anonymous said...

RE: 2:59 p.m.

"namely that the human cognition argument seems to want human cognition to be the dependent variable, an independent variable, and a constant."

I am not certain what you mean by "constant". Nonetheless, you are correct that human cognition reacts (i.e., dependent variable), but it also acts (i.e., independent variable). As Marx noted over a century ago, humans are unique insofar as they are shaped by their environment AND they consciously shape their environment.

10/19/2007 8:21 PM  
Anonymous Anonymous said...

8:21: Please don't tell me you're invoking dialectics as methodological justification. My god -- even the Frankfurt School has given up on that.

That's enough to turn me into a quant!

But to the point: Why did the US invade Iraq? Because of human cognition. How did human cognition affect the invasion of Iraq? Because humans with cognition made the plans. Which aspects of human cognition affected the decision to invade Iraq? The cognitive ones. What accounts for the lack of occupation planning? Bad human cognition. What affects other Great Power interventions in small powers? Human cognition. When does human cognition not affect politics? Never. What affects human cognition? Cognition.

Cogito ergo sum.

Dependent, independent, constant.

Not GAG

P.S.: To what extent would you allow that there can be non-human cognition-based qualitative methodology? Or are you all-cognition/all-the-time?

Because as I recall, there's a non-trivial amount of psychological research in IR (Hermann and Hermann, for example) that is heavily quantitative.

10/20/2007 7:38 PM  
Anonymous Anonymous said...

RE: 7:38 p.m.

"Please don't tell you're invoking dialectic as methodological justification. My god -- even the Frankfurt school has given up on that."

Because of such moves, it is arguable that Frankfurt School thinkers are not Marxists.

"What affects human cognition?"

Yes, obviously humans (with their cognitive reasoning) affect the cognitive decisions of other humans. Nonetheless, there are lots of other factors that shape human thinking/actions -- for example, the environment (i.e., global warming, resource depletion [e.g., oil]), technological development (e.g., economies of scale, the advent of the railroad), geography, etc.

10/21/2007 9:01 AM  
Anonymous Anonymous said...

But 9:01, if these other things (which strike me as eminently measurable and, therefore, amenable to statistical analysis) affect human cognition, doesn't this undermine your claim that explanations not based on human cognition are essentially doomed to failure? What happens when the human cognition independent variable proves less meaningful than you have alleged it is?

10/24/2007 2:42 PM  
Anonymous Anonymous said...

RE: 2:42 p.m.

The issue at hand is whether we are going to focus on these external factors (à la statistical analysis), or whether the focus of study is going to be human cognition itself. Again, the latter is the cause of human behavior.

Moreover, different people/groups/countries will respond differently to the same external factors. This includes temporally, which means that the same people/groups/countries will respond differently to the same external factors over time.

10/24/2007 5:37 PM  
Anonymous Anonymous said...

Therefore, 5:37PM, do you suggest using the tools of political psychology to study political phenomena? I assume you do, since these are best suited to get at human cognition.

Are you a trained political psychologist?

10/26/2007 7:42 AM  
Anonymous Anonymous said...

GAG is not a trained political psychologist. He prefers to talk to people and take their word for it. He asks them what they were thinking when they made X decision and assumes they tell the truth.

10/27/2007 9:02 AM  
Anonymous Anonymous said...

OK, but doesn't that weaken his position with respect to human cognition?

Since my question didn't get a direct answer from him, I'll assume that he knows he cannot argue this one successfully (or to death--his preferred method).

I mean, if I were concerned with the ins and outs of human cognition, I would think I'd try to learn and use the tools from cognitive psychology, cognitive sciences, etc. It's hard to understand how the search for human cognition leads one to adopt qualitative case studies as the preferred methodology.

Perhaps GAG (or one of his supporters) can point us to some literature that makes the case that uncovering the processes underlying human cognition is best achieved with qualitative case studies; and that cognitive psychologists are incapable of doing so (or that they are not as well equipped as the case-study researcher to do so).

10/27/2007 10:21 AM  
Anonymous Anonymous said...

10:21 a.m. wrote: "It's hard to understand how the search for human cognition leads one to adopt qualitative case studies as the preferred methodology."

If the prime goal of political science is to analyze why people make "political" decisions, the qualitative case study method is seemingly the appropriate one to conduct such analysis. Such a method allows for process tracing, the examination of policy documents and policy networks, thorough attention to the context of decisions, and personal interviews.

10/28/2007 7:22 PM  
Anonymous Anonymous said...

So is it your position that cognitive psychologists are barking up the wrong tree?

10/28/2007 7:33 PM  
Anonymous Anonymous said...

7:33 p.m., please explain what cognitive psychologists have to offer the social scientist in the study of political phenomena.

10/29/2007 5:15 AM  
Anonymous Anonymous said...

Character classifications were offered by James Barber in his famous study of presidential politics. More germane to psychologists is a political leader who suffers from apparent mental disorders (e.g., Hitler). So psychology can be of utility in analyzing certain political phenomena.

I would, nonetheless, submit that most political decision-making can be analyzed without reference to the specific psychological characteristics of actors. Even in the case of Hitler, we cannot rely on his seeming dementia to understand the outbreak of World War II in Europe. (Hitler's mental state ostensibly did, however, have a significant impact on Germany's conduct of the War [e.g., the decision to pursue so many military objectives in the Soviet Union, and the refusal to redeploy German forces even after it was apparent that the Western invasion of the continent would take place through Normandy].)

10/29/2007 6:43 AM  
Anonymous Anonymous said...

Aren't cognitive psychologists social scientists themselves? So you're asking what social scientists have to offer social scientists? I really don't understand the question.

10/29/2007 7:30 AM  
Anonymous Anonymous said...

RE: 7:30 a.m.

The specific question was what do "cognitive psychologists have to offer the social scientist in the study of political phenomena?"

10/29/2007 7:49 AM  
Anonymous Anonymous said...

OK, let me try this one: choice theory can inform "the social scientist" about the process that leads people to make choices. Political decisions = *choosing* among alternatives that pertain to politics.

BTW, you talk about Hitler suffering from "mental disorders," "dementia," etc. How are you able to claim this? Based on qualitative tools? In your answer, you cannot, of course, reference the works of psychology, lest you wish to make my case for me...

10/29/2007 8:31 AM  
Anonymous Anonymous said...

RE: 8:31 a.m.

You wrote: "lest you wish to make my case for me..."

I didn't realize you were making a case. If your point is that cognitive psychology has utility for political science, then that is a rather uncontroversial position. Moreover, this point has already been conceded.

As for choice theory (as you put it), game theory was actually developed by economists (not psychologists). They were trying to model the economic behavior of consumers. BTW these economists concluded that consumers could not be successfully modeled. Thus, their conclusion was that no specific set of variables guide human decision-making when it comes to consumption. Instead, we need to analyze each consumer on an individual basis.

10/29/2007 8:54 AM  
Anonymous Anonymous said...

"Instead, we need to analyze each consumer on an individual basis."

Each one of the US's 300 million potential consumers? Really? Whose conclusion is this?

10/29/2007 11:39 AM  

<< Home