Wednesday, February 13, 2008

Old Methodology/Theory Debates

636 Comments:

Anonymous Anonymous said...

If we limit ourselves to the public data referred to above by a pro-quantitative poster, the discipline would no longer be about politics. Instead, it would be a variant of sociology. In other words, it would solely be analyzing behavior. The discipline would no longer be about "who gets what, when, and how."

Perhaps someone would argue that the behavior analyzed by many quantitative scholars (e.g., voting) has political implications. Virtually everything, however, has political implications.

It is because qualitative work is in large part explicitly about "who gets what, when, and how" that I am attracted to it. This is in spite of its limitations.

12/15/2007 8:03 AM  
Anonymous Anonymous said...

Yes, quant scholars and formal theorists have never considered, for example, budgetary politics. Or, international trade patterns. Or, wealth distribution by electoral system. Is this post a joke?

12/15/2007 10:22 AM  
Anonymous Anonymous said...

The question that most quantitative work elides is who (what groups) is (are) driving these factors? The human dynamics tend to be assumed away in too much quantitative work. Also, the initial post referred to publicly available data, such as from the census.

12/15/2007 2:05 PM  
Anonymous Anonymous said...

You are so tiresome.

12/15/2007 2:35 PM  
Anonymous Anonymous said...

# The human dynamics tend to be assumed away in too much quantitative work#

I've heard riffs on this logic before---e.g. quant work is 'too reductive' etc---but I simply cannot understand what it means.

I'm not carping, just curious...can you expand?

12/15/2007 3:47 PM  
Anonymous Anonymous said...

Let us take environmental legislation as an example. A quantitative scholar may look at public opinion to determine its relationship to the passage of environmental legislation. If a statistically significant positive relationship is found, then public opinion is deemed the causal factor behind the passage of the legislation (or of a set of laws). What is ignored is the interest group activity surrounding the legislation and the ideation (i.e., human cognition) that informs it.

This is not a trivial matter, because after examining the intent underpinning this legislation (as well as its content), one could find that the legislation is designed to be symbolic -- intended not to protect the environment but to assuage/pacify public opinion.

12/15/2007 3:58 PM  
Anonymous Anonymous said...

Yes, none of the things which I mentioned are publicly available. You know, budgets, international trade flow data, data on wealth distribution and electoral systems. Wanker. This stuff is infinitely more available and transparent than "the interest group activity surrounding the legislation and the ideation."

Find another horse. This one's dead.

12/15/2007 4:06 PM  
Anonymous Anonymous said...

@ 3:58 PM,

3:47PM here.

Thanks for the explanation.

But isn't this just "correlation does not equal causation"?

Otherwise put, isn't the 'behind-the-scenes' IG activity just an omitted variable?

Quite right that we should understand the problem, have a good theory/model, and include all the relevant players/factors. But quant can do this, no?
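
To make the omitted-variable point concrete, here is a minimal simulation (Python; the variable names and effect sizes are invented for illustration, not drawn from any actual study). Interest-group activity drives both opinion and legislation, so a regression that omits it makes opinion look causal:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: interest-group activity (the
# omitted variable) drives both public opinion and legislative output.
ig_activity = rng.normal(size=n)
opinion = 0.8 * ig_activity + rng.normal(size=n)
legislation = 2.0 * ig_activity + rng.normal(size=n)  # opinion has NO true effect

def ols_slopes(y, *xs):
    # Least-squares coefficients; index 0 is the intercept.
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

print(ols_slopes(legislation, opinion)[1])               # ~0.98: spurious "effect"
print(ols_slopes(legislation, opinion, ig_activity)[1])  # ~0.00: truth recovered

Whether one can actually measure the interest-group variable well enough to include it is, of course, exactly the objection from the qual side.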

12/15/2007 4:27 PM  
Anonymous Anonymous said...

"Quite right that we should understand the problem, have a good theory/model, and include all the relevant players/factors. But quant can do this, no?"

How so?

12/15/2007 6:33 PM  
Anonymous Anonymous said...

What's funny is that ILP never addressed how one chooses to categorize a civil rights bill as a civil rights issue in qualitative research. ILP claimed that quant work makes opaque distinctions based on issue types, yet he cannot answer the claim that the exact same decisions need to be made in qualitative work.

What's funny also is that he keeps giving examples from environment politics. Hmmm, I wonder who s/he is? Could this be the author of a book ranked 2,352,506th in Amazon? Ha!

12/15/2007 11:30 PM  
Anonymous Anonymous said...

"What's funny is that ILP never addressed how one chooses to categorize a civil rights bill as a civil rights issue in qualitative research. ILP claimed that quant work makes opaque distinctions based on issue types, yet he cannot answer the claim that the exact same decisions need to be made in qualitative work."

This issue has been addressed before, and it speaks to an advantage that in my view qual. work has over quant. work. Namely, that those subjective decisions inherent in virtually all political science research are explicitly explained in the text of qual. work, while they tend to be concealed/suppressed in quant. work -- via coding.

12/16/2007 6:26 AM  
Anonymous Anonymous said...

"What's funny also is that he keeps giving examples from environment politics. Hmmm, I wonder who s/he is? Could this be the author of a book ranked 2,352,506th in Amazon? Ha!"

What are you saying?

12/16/2007 6:34 AM  
Anonymous Anonymous said...

re: 12/16/2007 6:26 AM

Uh, no. Listing your coding rules makes your measurement decisions transparent, unlike the measurement of concepts you keep pushing.

12/16/2007 8:08 AM  
Anonymous Anonymous said...

It does not matter that one lists one's coding rules. I can see this is a myth among quantitative researchers.

There are virtually always judgment calls in political science research. The difference between qual. and quant. work is that in qual. work the judgment calls are abundantly apparent, whereas they are concealed/suppressed in quant. work.

12/16/2007 9:05 AM  
Anonymous Anonymous said...

I know this is heresy, but part of the attraction of Political Science to me as a young graduate student was that the discipline seemed capable of appreciating both quantitative and qualitative approaches, as long as the rigor of the logic and the profundity of the argument(s) were appreciable.

In the words of Rodney King, "can't we all just get along?"

Hmmm... no... probably not. Good move to get into law, on second thought....

12/16/2007 9:37 AM  
Anonymous Anonymous said...

# in qual. work the judgment calls are abundantly apparent, whereas they are concealed/suppressed in quant. work.#

I'm not sure that 'concealed/suppressed' is the right term. There is no attempt to deceive, surely!?

If someone tells me a democracy (for their project) has features a, b and c and I see Turkey has those features between date 1 and date 2, then it is a democracy between date 1 and date 2.

Now, I can quibble about whether a, b and c are 'good' indicators, or whether Turkey possessed a, b and c, but IMO that is a _strength_ of quant work, not a weakness.

In qual work, can this be done? Or do I have to take the researcher's word for it? This is not intended to be flippant; I'm genuinely interested.

12/16/2007 9:53 AM  
Anonymous Anonymous said...

"I'm not sure that 'concealed/suppressed' is the right term. There is no attempt to deceive, surely!?"

Just like in politics and finance, if there is a lack of transparency in science I believe it is prudent to assume fraud/corruption is transpiring.

As for your hypothetical, I am unsure what you are arguing.

12/16/2007 10:02 AM  
Anonymous Anonymous said...

#lack of transparency in science I believe it is prudent to assume fraud/corruption is transpiring.#

But is there generally a lack of transparency?

Take, for example, Przeworski et al's "Democracy and Development".

I'm not in that field, but I remember that book having copious detail about coding. You could disagree with it, but there it was.

BTW: I don't want to get into a fight on that particular book (since I don't know it very well), but it is the general point about coding rules being clearly laid out that matters to me...

12/16/2007 10:29 AM  
Anonymous Anonymous said...

I do not know that book either, so there is no point in discussing it. Nevertheless, that is one book. How many studies (especially articles) lay out their coding schemes and types of cases, and then just describe their results? Lots! I find it difficult to believe that so many people on this blog would fight this point so doggedly, since it is entirely patent.

I imagine many quants. truly believe they are carrying out "normal" science and think of themselves as akin to physical scientists. Unfortunately, it is a false belief.

12/16/2007 10:42 AM  
Anonymous Anonymous said...

#How many studies (especially articles) lay out their coding schemes and types of cases, and then just describe their results? Lots! #

You mean they don't provide their data?

If you are arguing that more authors _should_ provide their data (in which coding has taken place) along with their coding rules, more power to you.

But I sense you mean something else(?)

12/16/2007 10:53 AM  
Anonymous Anonymous said...

"You mean they don't provide their data?"

"If you are arguing that more authors _should_ provide their data (in which coding has taken place) along with their coding rules, more power to you."

That is a key aspect of my argument.

The other component is that two researchers can look at the same data, use the same coding scheme, and end up coding the data differently. Alternatively (similarly), coding schemes can contain subjective biases, such as defining democracies narrowly or expansively.

Again, there are subjective aspects of political science that cannot be obviated. My problem is that too many quantitative scholars act as if this reality does not exist, whereas qualitative research, in my view, is more open about it.

12/16/2007 11:03 AM  
Anonymous Anonymous said...

#My problem is that too many quantitative scholars act as if this reality does not exist, whereas qualitative research, in my view, is more open about it.#

Fair enough.

My take would be slightly different: quant researchers want to _minimize_ (perhaps not eliminate) this subjectivism, and many are making good faith attempts to do so.

An overly optimistic spin perhaps!

12/16/2007 1:47 PM  
Anonymous Anonymous said...

Sorry, but most quant scholars don't just wave this away. Rather, they report things like intercoder reliability, or they rely on public datasets.
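
For anyone unfamiliar with the term, intercoder reliability is a computable quantity, not hand-waving. A minimal sketch of Cohen's kappa for two coders (Python; the bill labels below are made up for illustration):

from collections import Counter

def cohens_kappa(coder_a, coder_b):
    # Chance-corrected agreement between two coders.
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two coders independently classify ten hypothetical bills by issue type.
a = ["env", "env", "civil", "env", "labor", "civil", "env", "labor", "env", "civil"]
b = ["env", "civil", "civil", "env", "labor", "civil", "env", "labor", "env", "env"]
print(round(cohens_kappa(a, b), 2))  # 0.68 -- 1.0 is perfect agreement, 0 is chance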

I think it's abundantly clear that whoever it is that is making these arguments really knows very little about how quantitative work is actually undertaken. I would actually recommend that you ask a quant colleague to let you work on a project with them. You'd learn a lot. At this point you just sound like a religious boob.

12/16/2007 2:01 PM  
Anonymous Anonymous said...

#making these arguments really knows very little about how quantitative work is actually undertaken.#

Maybe so: I know a little about quantitative methods, but not much about coding and related issues.

Actually, I'd hoped this may be the place to find out a little more from people who have a deeper understanding of the topic than myself. Kinda thought that was (in part) the point of the blog.

#At this point you just sound like a religious boob.#

Very helpful, thanks!

You win!

12/16/2007 2:19 PM  
Anonymous Anonymous said...

The work out there suggests that qualitative researchers are actually worse at coding than quantitative researchers. Qual researchers wave their hands at "deep knowledge," yet we cannot separate this from "deep bias."

12/16/2007 2:20 PM  
Anonymous Anonymous said...

Nothing about the way you write on this site indicates you are interested in learning. You appear much more interested in teaching.

12/16/2007 2:46 PM  
Anonymous Anonymous said...

@sod verba

#Nothing about the way you write on this site indicates you are interested in learning. You appear much more interested in teaching.#

Sorry bro: I assumed your 2:01 was a response to my 1:47. I now see it wasn't...

My bad: I basically agree with you!

12/16/2007 2:58 PM  
Anonymous Anonymous said...

I presume you were: 12/15/2007 8:03 AM

12/16/2007 3:09 PM  
Anonymous Anonymous said...

#I presume you were: 12/15/2007 8:03 AM#

No: not me chief!

You will know me by my quotes (I use # #).

Peace.

12/16/2007 3:16 PM  
Anonymous Anonymous said...

"Just to clarify, when I write that many quantitative studies cannot be replicated, I mean that when they are replicated researchers come to different findings. Thus, it is the outcomes that cannot be replicated."

If the findings are somewhat close to the original findings, that is just a matter of sampling variability. It's not a problem, just a reality when using samples to study populations (and your 'population' is something like American voters or rats, not carbon atoms). For example, when the ten legitimate polls the week before an election give slightly different results, this is not a problem with replication, this is just sampling variability--they are helping to build what is called a sampling distribution.

There is a related problem of publication bias that, in contrast, is a real problem: Scholar A presents finding X. Scholars B through R study the same topic and also find X, but none can get published because their findings are old news. Scholar S finds not X and gets published. Problem.
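
To make the sampling-variability point concrete, here is a toy simulation (Python; the 52% "true" support figure is invented). Ten polls of the same population disagree slightly even though nothing has been done wrong:

import random

random.seed(1)
TRUE_SUPPORT = 0.52   # hypothetical population parameter
SAMPLE_SIZE = 1000

def one_poll():
    # One simple random sample; returns the estimated support share.
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    return hits / SAMPLE_SIZE

# The spread across these estimates IS the sampling distribution at work.
print(sorted(one_poll() for _ in range(10)))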

12/16/2007 3:41 PM  
Anonymous Anonymous said...

##: I wd just make up a name. Might I suggest Cary King?

12/16/2007 4:34 PM  
Anonymous Anonymous said...

"Sorry, but most quant scholars don't just wave this away. Rather, they report things like intercoder reliability, or they rely on public datasets."

You write "most," but you really do not know how many. Moreover, even if we agree it is a minority of scholars, it presents a problem. Finally, if scholars have to rely on public data sets to pursue research, this dramatically reduces the number of political science questions that can be pursued.

12/16/2007 4:48 PM  
Anonymous Anonymous said...

"There is a related problem of publication bias that, in contrast, is a real problem: Scholar A presents finding X. Scholars B through R study the same topic and also find X, but none can get published because their findings are old news. Scholar S finds not X and gets published. Problem."

The real problem is that scholars are in too many instances not able to replicate the findings of earlier studies -- again even if we agree it is a minority of studies. The number of non-replicated studies in political science would be shocking in virtually all of the physical sciences.

12/16/2007 4:53 PM  
Anonymous Anonymous said...

"The work out there suggests that qualitative researchers are actually worse at coding than quantitative researchers. Qual researchers wave their hands at 'deep knowledge,' yet we cannot separate this from 'deep bias.'"

I am uncertain what you mean by "deep knowledge." As to "deep bias," it is true that qualitative scholars normally "lay out an argument." Is this what you mean by "deep bias"?

I would posit that qualitative analysis advances through a dialectical method. In other words, competing theses (arguments) sharpen and deepen our overall understanding of political/social phenomena (i.e., synthesis). Thus, pluralist, state-autonomy, and other interpretations of the same events, or series of events, lead to a better comprehension of these phenomena.

12/16/2007 5:04 PM  
Anonymous Anonymous said...


I am uncertain what you mean by "deep knowledge." As to "deep bias," it is true that qualitative scholars normally "lay out an argument." Is this what you mean by "deep bias"?


Durr, yes, only qualitative researchers lay out arguments.

I mean that qualitative scholars often don't define terms and concepts and variables in any rigorous way. They appeal to expertise rather than replicable knowledge. APSA's Qualitative Methods section has had some good papers on this in the past couple of years.

(Of course, there certainly are some exceptions. Good qualitative research does exist and it is powerful.)

12/16/2007 5:39 PM  
Anonymous Anonymous said...

"I mean that qualitative scholars often don't define terms and concepts and variables in any rigorous way. They appeal to expertise rather than replicable knowledge."

When this happens it is evident to the reader. In other words, such flaws are readily evident to the reader in qual. work. Such flaws can be concealed/suppressed in quant. work (i.e., it is less transparent).

12/16/2007 5:53 PM  
Anonymous Anonymous said...


When this happens it is evident to the reader. In other words, such flaws are readily evident to the reader in qual. work. Such flaws can be concealed/suppressed in quant. work (i.e., it is less transparent).


Bwa ha ha ha!

I urge you to read some of the papers presented in the Qualitative Methods section. Even qualitative scholars think this is a problem. Really, it's only unreflective qualitative scholars who don't recognize the weaknesses of their work.

12/16/2007 6:03 PM  
Anonymous Anonymous said...

"I urge you to read some of the papers presented in the Qualitative Methods section. Even qualitative scholars think this is a problem."

I am not questioning that it is a problem. My point is that in qualitative work it is an evident problem, whereas such problems can be hidden in quant. work.

12/16/2007 6:16 PM  
Anonymous Anonymous said...

My point is that in qualitative work it is an evident problem, whereas such problems can be hidden in quant. work.

Can be hidden in quant work? Sure.

Can be hidden in qual work? Sure.

In other words, qual isn't better here. Good work is good work.

12/16/2007 6:49 PM  
Anonymous Anonymous said...

"Can be hidden in qual work? Sure."

I do not see how that is possible. I have seen some of the analysis (criticisms) you are referring to, and this analysis is drawn directly from the text of the qual. work critiqued.

12/16/2007 6:55 PM  
Anonymous Anonymous said...

"The real problem is that scholars are in too many instances not able to replicate the findings of earlier studies -- again even if we agree it is a minority of studies. The number of non-replicated studies in political science would be shocking in virtually all of the physical sciences."

I think we are in agreement. Any individual quantitative study should be thought of as providing a single data point in a sampling distribution for the parameter of interest. Others should then seek to estimate the same parameter using different data. Absolutely should be done.

12/16/2007 9:00 PM  
Anonymous Anonymous said...

I do not see how that is possible.

Right. This is the problem. You are so unreflective that you can't imagine a field researcher biasing her field report. Where in the text would this problem show up?

12/17/2007 5:51 AM  
Anonymous Anonymous said...

Let us say someone writes in qual. work that "x" happened in Congress. The key difference between qual. and quant. work is that in quant. work that same claim would not be made in the text, where it would be apparent to the reader, but would instead be concealed in a coding decision -- where this observation is neither explicitly brought to the reader's attention nor explained.

12/17/2007 6:10 AM  
Anonymous Anonymous said...

#writes that "x" happened in Congress#

What type of 'x' do you have in mind?

If it is something like "From my over-hearing the pages talk, I got the general impression that Senator Z thinks 'x' about this bill", then I would agree. That is, quant researchers don't get the right to code it up and call it a 'harder' data point than the qual researcher's.

OTOH, I'm not sure there is a lot of this type of thing going on in APSR. I mean, I wouldn't let undergrads get away with this!

Just trying to understand the (perceived) _size_ of the problem here...

12/17/2007 6:31 AM  
Anonymous Anonymous said...

Let us say someone writes in qual. work that "x" happened in Congress.

That's not political science, that's journalism. Surely qualitative researchers that do excellent work on Congress aren't simply summarizing what happened in Congress.

12/17/2007 7:06 AM  
Anonymous Anonymous said...

In "The Politics of Homelessness: Shelter Now and Political Protests" (PRQ, 58(3): 497-509), the author examines the role played by one group--Shelter Now--in the politics of city homelessness. The data come from anonymous interviews with policymakers, group leaders, etc.

This work is obviously qualitative, and the conclusions reached seem to make sense, given the data presented.

How can I know whether such data are representative of the entire set of interviews conducted?

In your response, you cannot say "conduct your own interviews" as that would be akin to "gather your own data," which has been shown to be too demanding a task for the replication-worried person out there.

How do I replicate this analysis (or such an analysis if you don't wish to discuss an individual article)?

12/17/2007 9:22 AM  
Anonymous Anonymous said...

"That's not political science, that's journalism. Surely qualitative researchers that do excellent work on Congress aren't simply summarizing what happened in Congress."

As my graduate professor in normative theory aptly noted, all empirical work is summarizing (describing) what happened. Of course, the empirical theorist is also supremely concerned with why something happened -- or at least should be.

12/17/2007 10:02 AM  
Anonymous Anonymous said...

If I reviewed work based on anonymous interviews, I would probably reject it on the basis that there is no way to determine if the interviews were real. I think it would be a setback for the discipline if work were regularly published that was based on anonymous sources.

12/17/2007 10:09 AM  
Anonymous Anonymous said...

"That's not political science, that's journalism. Surely qualitative researchers that do excellent work on Congress aren't simply summarizing what happened in Congress."

The hypothetical example was in response to the issue of transparency. It was not meant as an example of an analytical method.

To take the example forward, suppose someone wants to analyze the influences on labor legislation, and they focus on three key pieces of such legislation. If the researcher was able to show that legislative ideas developed by pro-labor-union think tanks made it into final legislation, and that think tank members had regular meetings with key members of Congress, this would seemingly be an important analytical finding established through qual. methods.

The more germane point, because this is ostensibly a discussion on transparency, is that the reader can clearly see (read) the data/evidence that the researcher uses to draw his/her conclusions. Again, such transparency is virtually impossible for large-n quantitative work.

12/17/2007 10:17 AM  
Anonymous Anonymous said...

10:09-- should we then reject all work that is based on survey data (since there is "no way to determine if the interviews were real?"). There are strong arguments against the use of anonymous sources in scholarship and journalism. Yours ain't one of em.

12/17/2007 10:19 AM  
Anonymous Anonymous said...

"should we then reject all work that is based on survey data (since there is "no way to determine if the interviews were real?"). There are strong arguments against the use of anonymous sources in scholarship and journalism."

Your point about survey data raises a key issue for survey, or questionnaire, data that is privately collected. As pointed out earlier, this data collection, nonetheless, can be replicated. But there is no way I can replicate interviews if I do not know who the researcher interviewed. Perhaps worse, I cannot even contact the interviewee to confirm that the researcher in question actually interviewed him/her.

12/17/2007 10:35 AM  
Anonymous Anonymous said...

The more germane point, because this is ostensibly a discussion on transparency, is that the reader can clearly see (read) the data/evidence that the researcher uses to draw his/her conclusions.

False. The reader can only read the data that you provide. There is no way to tell if the data have been truncated (i.e., ignored or suppressed) to fit your point. See: Margaret Mead.

I find your blind willingness to trust qualitative researchers to present all of the relevant data rather astounding, given what we know about how most academics work.

12/17/2007 10:41 AM  
Anonymous Anonymous said...

Dear ILP/GAG,

what is your answer to 9:22?

12/17/2007 10:53 AM  
Anonymous Anonymous said...

10:53 a.m. -- 10:09 a.m. is the response to 9:22 a.m.

12/17/2007 12:45 PM  
Anonymous Anonymous said...

RE: 10:41 a.m.

It is not a question of blind trust, nor is a researcher required to incorporate every piece of information that may be tangential to the matter at hand. The issue is whether the researcher marshals enough evidence to sustain the thesis. With qualitative work, the reader can make this judgment.

12/17/2007 12:51 PM  
Anonymous Anonymous said...

#The issue is whether the researcher marshals enough evidence to sustain the thesis. With qualitative work, the reader can make this judgment.#

Wait: isn't this (trivially) the case with *all* work, quant and qual?

Or are you simply pointing out that this is true of qual work *in addition* to quant work?

12/17/2007 1:15 PM  
Anonymous Anonymous said...

The point is that with qual. work the reader is in a much better position to assess the researcher's argument/conclusions than with quant. research.

12/17/2007 1:21 PM  
Anonymous Anonymous said...

10:35-- 10:19 here. I see what you are saying, but replication tests are still possible with anonymous sources. The best qualitative work will not simply say, "according to my anonymous source," but will rather say, "according to a local official of the Graustarkian ministry of the interior..." I can replicate this by doing my own interviews of similarly-placed Graustarkian officials, just as I can replicate a survey of Graustarkians by conducting one of my own. Right? There's no reason survey work is more replicable than elite interviewing, as long as I know the questions asked and the general profile of the respondents.

Still, I appreciate your response and suspect we could have a fruitful conversation about this in person.

12/17/2007 1:41 PM  
Anonymous Anonymous said...

Most human subjects committees will allow you to use the name of an elected official (at least within the US--for comparative research, this may be a different story).

Otherwise, an outside researcher cannot call up respondents in any other researcher's project just to confirm whether interviews took place, surveys were actually administered, etc. Federal law prohibits disclosure of information about private citizens. (You may ask a qualitative or mixed methods researcher for transcripts, but you cannot ask for identifying data. Well, you can ask, but you will be denied unless the researcher is a scofflaw.)

12/17/2007 1:53 PM  
Anonymous Anonymous said...

Not to wade too deeply into this debate, but I believe Political Analysis requires everyone who publishes in the journal to provide replication data at a public repository like ICPSR. Certainly other journals ought to follow suit, particularly given the new fad toward "embargoed" data sets and away from open data like the ANES (I'm looking at you, CCES, although at least the CCES data is promised to become public by the time people start publishing with it). And I certainly agree that all government-funded research should result in public data, subject of course to subject confidentiality etc.

12/17/2007 1:56 PM  
Anonymous Anonymous said...

The point is that with qual. work the reader is in a much better position to assess the researcher's argument/conclusions than with quant. research.

Now you're just being obtuse. Just stating it doesn't make it so. I just showed you why this isn't true, so you ignored it.

This is why I don't trust qualitative researchers to be forthright about their evidence.

12/17/2007 1:57 PM  
Anonymous Anonymous said...

RE: 1:53 p.m.

If I interview lobbyists, NGO officials, corporate officials, I do not assume they have an expectation of anonymity. It is similar when journalists interview these kinds of individuals.

12/17/2007 2:48 PM  
Anonymous Anonymous said...

"I just showed you why this isn't true, so you ignored it."

Where did you "show" this?

12/17/2007 2:49 PM  
Anonymous Anonymous said...

False. The reader can only read the data that you provide. There is no way to tell if the data have been truncated (i.e., ignored or suppressed) to fit your point. See: Margaret Mead.

12/17/2007 3:12 PM  
Anonymous Anonymous said...

I take your point 3:12. But aren't you just saying that some qualitative researchers can be dishonest? Isn't this true of some quantitative researchers as well? At some point we have to trust that the scholar isn't twisting, suppressing, or making up data. If they are, they will be found out eventually (even if it takes 50 years, as with Freeman's interrogation of Mead's work).

By the way, it's worth noting that the jury is still out on Mead's case, so it's not quite right to cite her as an open and shut case of scholarly dishonesty.

12/17/2007 3:37 PM  
Anonymous Anonymous said...

If I interview lobbyists, NGO officials, corporate officials, I do not assume they have an expectation of anonymity.

It doesn't matter in most cases whether your interview subjects/respondents have an expectation. Institutional review boards will typically not allow you to name private citizens or even higher-level administrators (if not elected). The decision is out of your hands.

12/17/2007 3:37 PM  
Anonymous Anonymous said...

I take your point 3:12. But aren't you just saying that some qualitative researchers can be dishonest? Isn't this true of some quantitative researchers as well? At some point we have to trust that the scholar isn't twisting, suppressing, or making up data. If they are, they will be found out eventually (even if it takes 50 years, as with Freeman's interrogation of Mead's work).

Right. Same indictment of qualitative and quantitative research. Qualitative research doesn't earn a special dispensation because it uses words and discourse rather than numbers.

By the way, it's worth noting that the jury is still out on Mead's case, so it's not quite right to cite her as an open and shut case of scholarly dishonesty.

It's not a question of scholarly dishonesty. But it is prima facie evidence that you are wrong that "the reader is in a much better position to assess the researcher's argument/conclusions than with quant. research." Having written down evidence in words does not make it less subject to bias from unclear coding practices.

The irony in all of this is that I am a qualitative scholar. I'm just willing to accept that my work risks dangerous biases. I need not pretend that we have the magical key because we use words.

12/17/2007 4:36 PM  
Anonymous Anonymous said...

I have a question. Did they just decide not to do a 2006 ANES time-series study with comparable questions to the previous time-series studies? It seems like the 2006 pilot study, which is fine but puts an end to a whole slew of long-running questions, is the only one coming out. WTF?

12/17/2007 4:46 PM  
Anonymous Anonymous said...

ask the NSF

12/17/2007 4:53 PM  
Anonymous Anonymous said...

3:37 here. I'm not the one who said, "the reader is in a much better position to assess the researcher's argument/conclusions than with quant. research." I don't think that's a defensible position at all!

12/17/2007 6:53 PM  
Anonymous Anonymous said...

"The reader can only read the data that you provide. There is no way to tell if the data have been truncated (i.e., ignored or suppressed) to fit your point."

If I wanted to omit key evidence or skew interpretations of it, I would use a quantitative large-n approach. I would omit cases, or code cases in a biased manner -- matters that are not reported in the text. If caught later, I would plead a difference of opinion, or honest error. It is simply too easy to detect such tendentious decisions with qualitative work -- especially at the peer review stage.

12/17/2007 7:23 PM  
Anonymous Anonymous said...

It is simply too easy to detect such tendentious decisions with qualitative work

Again, the blind assertions, the naive willingness to believe qualitative work, the almost stunning inability to understand basic methodological criticism -- this is what makes me suspicious of so much qualitative work.

12/17/2007 7:53 PM  
Anonymous Anonymous said...

people used to argue about things like communism, now we are reduced to this.

12/17/2007 8:42 PM  
Anonymous Anonymous said...

7:23's lack of methodological reflectiveness should not be used to indict all qualitative researchers.

12/17/2007 11:50 PM  
Anonymous Anonymous said...

RE: 7:53 p.m.

Take the above hypothetical example of labor legislation. If a researcher argues, by looking at three pieces of labor legislation, that labor unions, particularly through think tanks, were the predominant influence on labor legislation during a certain period of time, this exercise is much more transparent to a potential reviewer than a study that codes and analyzes 300+ cases of labor legislation across countries.

In the first instance the reviewer can immediately determine whether the researcher is excluding key pieces of labor legislation from their study. Also, the reviewer is likely to know whether the researcher is considering the key aspects of the legislation in question. Additionally, the reviewer may know of the positions of the other competing interest groups involved in those pieces of legislation, and conclude that the researcher is underplaying them.

In large-n quantitative studies, reviewers are given a description of the cases and the coding scheme, but whether the researcher included all relevant cases and coded them reasonably is outside the purview of the reviewer. Thus, at least in the peer review process, qualitative research is more transparent and offers fewer opportunities for fraud/deception, whereas large-n quantitative studies offer ample opportunity for researchers to skew their research to generate the desired outcome.

12/18/2007 5:57 AM  
Anonymous Anonymous said...

It is also true that reviewers for qualitative research are almost exclusively experts in the substantive subject of the manuscript. In quantitative work reviewers tend to be experts in the methods used in the paper, as opposed to experts in the substantive field.

12/18/2007 6:22 AM  
Anonymous Anonymous said...

#it's worth noting that the jury is still out on Mead's case#

Didn't Kinsey (the sexual behavior guy from the 1950s) get some criticism for sample selection bias _at the time_?

Strikes me as an interesting example since, as I understand it, he used interviews as cases for a 'large-n' type design.

Obviously, some of his findings were erroneous and over-inflated e.g. married people have sex on average 2.8 times a week?

Yeah, right. Not in my life they don't...
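
For what it's worth, that kind of volunteer/selection bias is easy to demonstrate in a toy simulation (Python; every number below is invented, this is not Kinsey's actual design):

import random

random.seed(2)

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical population: some behavior varies around a true mean of 1.5.
population = [random.gauss(1.5, 1.0) for _ in range(100_000)]

# Suppose people with higher values are more willing to be interviewed.
def volunteers(pop):
    return [x for x in pop if random.random() < min(1.0, 0.2 + 0.3 * max(x, 0.0))]

print(mean(population))              # ~1.5, the true average
print(mean(volunteers(population)))  # noticeably higher: the sample overstates it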

12/18/2007 6:59 AM  
Anonymous Anonymous said...

In the first instance the reviewer can immediately determine whether the researcher is excluding key pieces of labor legislation from their study.

How?

12/18/2007 7:34 AM  
Anonymous Anonymous said...

5:57 and 6:22 raise interesting points, but they are operating with a bit of a straw man model of how quantitative work is assessed. Generally, quantitative work is evaluated by scholars who have *both* substantive *and* methodological expertise. You can't black box your coding schemes. Reviewers want to know how you coded things and why you made the choices you made. My point is that there is no reason to think that *either* type of work--quant or qual--is more or less subject to problems of bias, poor research design, and lack of scholarly honesty.

12/18/2007 8:41 AM  
Anonymous Anonymous said...

#quant or qual--is more or less subject to problems of bias, poor research design, and lack of scholarly honesty.#

Quick query for those who know more than me: do qual researchers ever use 'coders' (other than themselves) for case description/categorizing?

The whole point of doing this in quant work is to establish some reliability (i.e. the incentive mechanism of the coder is presumably different from that of the researcher).

Is there (could there be?) a qual analogy?

12/18/2007 8:50 AM  
Anonymous Anonymous said...

RE: 7:34 a.m.

Through their own knowledge of the subject area.

12/18/2007 9:58 AM  
Anonymous Anonymous said...

RE: 8:50 a.m.

What is the protocol for having someone code a researcher's cases? Unless this individual is chosen by an editor, or someone who is truly neutral, and someone else can verify that they indeed did the coding for a project, I think this notion of an external coder is meaningless.

12/18/2007 10:02 AM  
Anonymous Anonymous said...

RE: 7:34 a.m.

Through their own knowledge of the subject area.


Oh good. Qualitative researchers are unbiased because they "know."

I hope that the graduate students that we train to do qualitative research are encouraged to read this thread to learn what not to do.

12/18/2007 10:16 AM  
Anonymous Anonymous said...

The difference between qualitative studies and quantitative studies is the degree to which subjectiveness is apparent to the reader, and the degree it is concealed.

12/18/2007 10:26 AM  
Anonymous Anonymous said...

10:16 a.m. -- Do reviewers always agree with cases included/excluded in quantitative analysis?

12/18/2007 10:34 AM  
Anonymous Anonymous said...

10:16 a.m. -- Do reviewers always agree with cases included/excluded in quantitative analysis?

Certainly not, and this is why people get dinged all the time for this.

You seem to be the type of qualitative researcher who thinks that quantitative people go to sleep when they see numbers and regression coefficients.

12/18/2007 10:41 AM  
Anonymous Anonymous said...

On what basis do reviewers of quantitative work disagree with a researcher's case selection? Is it their judgment that important cases are being omitted, or trivial cases are being included? If so, how is this different from the judgment that reviewers of qualitative work use?

12/18/2007 10:51 AM  
Anonymous Anonymous said...

#I think this notion of an external coder is meaningless#

Thanks for this: I see your point.

12/18/2007 1:00 PM  
Anonymous Anonymous said...

I just realized that buddy is probably banging on about model specification more generally rather than coding. It should be no surprise that he wouldn't know the proper terminology. Quantitative findings are obviously susceptible to modelling choices. But so are qualitative arguments. Indeed, they are likely more so as they more rarely put their arguments in formal terms which can be evaluated logically. But, for the purposes of collegiality, I'll agree they are equally vulnerable. Now, can we stop listening to people who don't even really know how to use the word coding?

12/18/2007 2:43 PM  
Anonymous Anonymous said...

Please respond to 10:51 a.m.

12/18/2007 3:00 PM  
Anonymous Anonymous said...

RE: 2:43 p.m.

Even when you are conceding you are wrong, you are insulting. Amazing!

12/18/2007 3:23 PM  
Anonymous Anonymous said...

"Quantitative findings are obviously susceptible to modelling choices. But so are qualitative arguments. Indeed, they are likely more so as they more rarely put their arguments in formal terms which can be evaluated logically."

I think ILP's point is that formal terms do not lead to more logical evaluations, but serve to obscure the fact that subjective decisions are being made. Model specification is a subjective decision, no different than the qualitative researcher's case selection.

So his/her argument is that a particular strength of qualitative research is that the subjective decisions are out in the open, and readers can decide for themselves how appropriate these decisions are. In the case of large-n quantitative studies, model specification masks the reality that certain cases are included while others are excluded. Because the cases excluded/included in large-n quantitative work are not explicitly laid out in the manuscript, the reader is not in the optimal position to assess the validity of those choices.

12/18/2007 5:22 PM  
Anonymous Anonymous said...

formal terms do not lead to more logical evaluations, but serve to obscure the fact that subjective decisions are being made.

Sit and read that sentence again.

If formal terms "obscure," it's because you don't understand them.

12/18/2007 7:50 PM  
Anonymous Anonymous said...

This discussion is going nowhere. Both types of work can be fatally flawed, and both can produce lasting and profound insights about political life. The latter kind of work generally comes from honest, careful scholars who are transparent about their coding schemes, modeling choices, case selection, sources, interviews, or whatever.

Besides, the best scholars today combine quant and qual approaches. It's only those who want to relive the tired methodological food fights of the 90s who traffic in these aimless "qual vs. quant" arguments.

In any case, the real methodological divide is not between quant and qual, but between experimentalists and those who conduct observational studies (large or small N). I'm not an experimenter myself, but they make a strong case for the superior ability of their method to produce durable knowledge about important questions.

12/18/2007 8:32 PM  
Anonymous Anonymous said...

RE: 7:50 p.m.

Your comment suggests that you do not understand 5:22 p.m. I would kindly suggest you read it again.

12/19/2007 5:22 AM  
Anonymous Anonymous said...

After a couple of days, we can now infer that GAG does not have an answer to the question posed regarding a qualitative article whose evidence comes mainly from anonymous interviews.

He has clearly given up and conceded defeat. How else would we explain his silence?

I don't blame him, mind you. His position was untenable from the start; it was only a matter of time.

I move that we discuss some other topic, now that the main protagonist has waved the white flag.

12/19/2007 11:47 AM  
Anonymous Anonymous said...

I do not know if this is GAG, but see 10:09 a.m.

12/19/2007 12:28 PM  
Anonymous Anonymous said...

What does ILP mean?

12/19/2007 5:05 PM  
Anonymous Anonymous said...

RE: 12/17 4:46

Several years ago, NSF decided not to fund ANES Time Series studies in mid-term elections. Running the Pilot Study in November of 2006 was ANES' attempt to provide at least some data to people with midterm interests.

12/19/2007 8:06 PM  
Blogger American and Comparative Politics Job Blog said...

can you guys avoid discussing particular scholars. i have not been monitoring this thread very closely because there have never been any complaints. thanks!

12/19/2007 8:32 PM  
Anonymous Anonymous said...

For the holidays, I thought I'd give us all a nice laugh by taking a look back at 2007, GAG-style. I'll add the newer posts later...

8/22/2007 8:38 AM:
"In the physical sciences conflicting findings are the result of the need for the refinement of theory, data collection, and/or date analysis. In the social sciences inconsistent findings are the result of the autonomous and highly variable quality of human cognition – the ultimate cause of social/political phenomena."


9/15/2007 9:12 AM:
"Because if he introduced a dichotomous variable, "weather," to his experiment, he'd also find that weather is one of the things that causes people to vote.

And sunrises. In fact, though I haven't run this regression yet, I'm fairly sure (because our intuitions are almost aways correct) that the correlation coefficient between sunrise and decision to vote is highly significant. Every time the sun rises on Election Day, people vote. And murder, too; whenever there's a sunrise, there's also a murder. So maybe there's confounding. Maybe it's murder that causes North Dakotans to vote."


Then on 9/20/2007 9:44 AM, GAG reported doing a search in JSTOR for "correlation coefficient" and claimed to have found 2,032 articles:
"JSTOR--> Advanced Search-->

Search Terms: "correlation coefficient*" AND {show OR suggest OR demonstrate} in ABSTRACT -->

Search Articles--> Date range 1970-2006--> Journal Database: Political Science.

Search Results: 2032 for « ((ab:(correlation coefficient) AND ab:(suggest)) OR ab:(show) OR ab:(demonstrate)) AND ty:FLA AND (year:[1970 TO 2006])^0 AND la:(eng) in multiple journals » (0.49 seconds)

Seems that quantitative scholars trot out "correlation coefficient" as if it were some kind of "golden shovel" quite often."


Someone else replicated his search on 9/20/2007 11:20 AM and found only 5 hits (as opposed to the 2,032 GAG claimed he found)!


Then on 9/20/2007 2:08 PM, GAG claimed that the replication was done erroneously:
"But be a good quantitative methodologist. Replicate the study -- don't do it a different way, get a different finding, and say "see"."


This was done on 9/20/2007 2:28 PM:
"Again, as I wrote earlier, if you search abstracts in political science journals for "correlation coefficient", you get 5 hits, not 2,032."


To which GAG replied on 9/20/2007 4:17 PM:
"But I can't replicate your replication. JSTOR returns the results to me in a different order. (...)
So while I'm happy to accept the validity of your correction, I am now worried about how JSTOR works....(...)
The basic point remains that quantitative scholars also highlight their "golden shovels" (whatever that means)."


[Note how GAG changes the subject once he sees his mistakes...]


Then, someone spoke of uncertainty in parameter estimates, to which GAG responded on 10/16/2007 11:20 AM:
"Your post verifies the prime criticism of statistical methods. Namely, their frequent unscientific/irrational usage in the social sciences is obscured by/hidden in what amounts to statistical mumbo jumbo."


Building upon his previous post, GAG on 10/18/2007 8:47 PM says:
"It should also be stressed that to invoke "margin of error" without a substantive explanation does amount to mumbo jumbo. If I said that qual. methods are superior to quant. methods because of "squanto", and that you need to read about squanto to understand what I am saying, you, too, would accuse me of offering mumbo jumbo to advance my position."


On 11/09/2007 6:09 PM, GAG asks:
"How would quants. explain the unwillingness of the Congressional Democrats to push for the impeachment of Bush/Cheney, given the support for impeachment amongst the public, and especially among registered Democrats?"


Then the following question is asked on 11/10/2007 3:37 PM:
"Hmmm, how are you able to gauge what "public opinion" is? Through qualitative methods/cognition? Or are you making this inference based on large-N surveys? If so, doesn't this validate the use of large-N surveys?"


A question that still remains unanswered to this day...


That doesn't stop GAG from claiming, on 11/11/2007 8:02 AM:
"I know enough, and read enough, about quant. approaches to Congress to know that a maxim in quant. Congress work is that the Political Parties are vote maximizers."


Of course, this statement was challenged on 11/11/2007 11:11 PM:
"Can you provide a citation for the claim that "parties are vote maximizers" in congressional elections?"


Still no answer... [note: of course, no one ever made such a claim, so it is no surprise that GAG cannot find a single citation in support of his argument]


On 11/12/2007 8:55 AM, GAG writes:
"Fenno explicitly assumes that people are in Congress to serve the public good."

A claim that is obviously false to anyone who has read Congressmen in Committees.

12/20/2007 3:24 PM  
Anonymous Anonymous said...

The above posts were written by more than one person. I know because some of those posts are mine, and I am not GAG.

Also, I believe it is unfair and inappropriate to take posts out of context in a game of gotcha. From what I can tell, both sides of the debate have made errors, and adjusted their arguments/positions accordingly. Is that not a distinct advantage/purpose of blogs -- i.e., that they allow divergent interlocutors to engage in a dialectical discussion from which all can learn?

12/20/2007 4:46 PM  
Anonymous Anonymous said...

You mean there are more of you guys out there?!? It's a scary world, I'll tell ya.

12/20/2007 5:26 PM  
Anonymous Anonymous said...

Earlier someone wrote that GAG's work is mediocre. I thought this was excessively harsh. While I do not think anyone is required to carry water for anyone else, I still felt the obligation to determine if this is true, and, if not, to correct the record.

I spent part of yesterday finding reviews of GAG’s books, and while they are not universally positive, many of the reviews do make flattering statements about his work. In light of the earlier statement about GAG’s scholarship, I thought it appropriate to post some of these flattering comments on this blog. The reviewer for the _American Political Science Review_, for instance, enthuses that in _Corporate Power and the Environment_ (2001) GAG "authored a lucid and well-written book with a sharp thesis." He adds that GAG has "carved out a unique perspective on environmental policy that also has implications for the state theory debate." In writing about _Corporate Power and the Environment_, the reviewer for _Organization & Environment_ contends that "the empirical work is excellent, with strong case studies of forestry, national parks, and the Clean Air Act of 1990. There is also an effective and concise outline of the big debates in policy formation." This reviewer goes on to declare that "Gonzalez's conclusions seem to be both correct and essential if we are to understand U.S. environmental policy debates."

GAG also published a second book, _The Politics of Air Pollution_ (2005). The reviewer for _Environmental Politics_ argues that it is "a must-read for any political scientist who studies environmental politics." The reviewer for _Perspectives on Political Science_ avers that _The Politics of Air Pollution_ "is a useful foray into the contested concepts of legitimacy, co-option, and strategies of social change." He adds that he hopes "to see more engaging scholarship from this provocative author in the future."

I definitely think any author is fair game, but to make insulting and caustic statements about someone’s work is ethically wrong. My coming to GAG’s defense in this instance should not be interpreted as an endorsement of his or anyone else’s views.


Reviews Cited:

_American Political Science Review_ (June 2002) 420-1.

_Organization & Environment_ (December 2003) 518-20.

_Environmental Politics_ (November 2005) 703-7.

_Perspectives on Political Science_ (Fall 2005) 222-3.

12/21/2007 6:22 AM  
Anonymous Anonymous said...

Man, you're inviting criticism of the guy... which is also you... who else uses "avers" other than GAG...

Of course, the minute someone posts a critique of his work, you will complain on the "Alert" section that there is an attack in this thread...

12/21/2007 7:27 AM  
Anonymous Anonymous said...

Criticisms of someone's work are completely appropriate, and those should not be blocked out. What is ethically wrong and counter-productive are wanton attacks intended to insult, demean, or degrade. No one should be subject to such abuse. BTW, "aver" is standard English.

12/21/2007 7:51 AM  
Anonymous Anonymous said...

I can't believe anyone other than one of my flunkie grad students would rely on book reviews to judge someone's work. On second thought maybe you are one of them....

Book reviews tend to be superficial (and overly polite; biased to say something positive even if there is nothing of value) and are often written by people who have too much time on their hands owing to their lack of research productivity.

12/21/2007 8:21 AM  
Anonymous Anonymous said...

If you read the above post carefully, you will see that no judgment is offered by me of GAG's books. Not having read these books, I am in no position to assess them.

Also, I think you are too cynical with regard to book review writers. I have read too many negative reviews (and written a few) to think that academic reviewers give the books they review a free pass.

Moreover, the quotes I chose were not bland comments, such as "x" is interesting. In my view, some of the comments that GAG's books evoked from reviewers are noteworthy and significant.

12/21/2007 8:46 AM  
Anonymous Anonymous said...

Yes, "aver" is standard English indeed. Thanks for the vocabulary lesson.

"Respectfully" "putative" "submit" "ostensibly" "patently" "obfscate" "opaque" and "mumbo jumbo" are also standard English words.

The fact that you are so defensive leads me to believe that you were caught with your hand in the cookie jar... Nice try GAG :)

12/21/2007 9:02 AM  
Anonymous Anonymous said...

"If you read the above post carefully, you will see that no judgment is offered by me of GAG's books. Not having read these books, I am in no position to assess them."

Really?

how else is one expected to interpret your preamble:

"Earlier someone wrote that GAG's work is mediocre. I thought this was excessively harsh. While I do not think anyone is required to carry water for anyone else, I still felt the obligation to determine if this is true, and, if not, to correct the record."


Having actually read some of his work, I can unequivocally state that it aspires to mediocrity.

12/21/2007 9:06 AM  
Anonymous Anonymous said...

"Having actually read some of his work, I can unequivocally state that it aspires to mediocrity."

That statement is shoddy, and reflects poorly on whoever wrote it. If you are going to say that someone's work is worse than mediocre, it is incumbent upon you to justify this judgment -- otherwise it is a cheap shot, unworthy of a true academician.

12/21/2007 9:24 AM  
Anonymous Anonymous said...

RE: 9:02 a.m.

It is interesting that you claim to be able to identify GAG by his varied and diverse vocabulary. If he does read this blog, he should be flattered. On his behalf I will say "Thanks!"

12/21/2007 9:30 AM  
Anonymous Anonymous said...

You're welcome.

12/21/2007 10:40 AM  
Anonymous Anonymous said...

#able to identify GAG by his varied and diverse vocabulary#

How about an application of neural networks/linguistic computing, here? Like they did for the Shakespeare plays...

Train it on the books, then test it on the blog-posts!
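
It needn't even be neural. A bare-bones function-word comparison would do -- here's a sketch (Python; the two texts are stand-ins, not anyone's actual prose):

from collections import Counter
import math

FUNCTION_WORDS = ["the", "of", "and", "to", "that", "is", "in", "it", "not", "as"]

def style_profile(text):
    # Relative frequencies of common function words: a crude style fingerprint.
    words = text.lower().split()
    counts = Counter(w for w in words if w in FUNCTION_WORDS)
    return [counts[w] / len(words) for w in FUNCTION_WORDS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

book_excerpt = "it is incumbent upon the researcher to justify the claims in the text"
blog_comment = "it is incumbent upon you to justify this judgment in the thread"
print(cosine(style_profile(book_excerpt), style_profile(blog_comment)))  # near 1 = similar style

A real test would need many texts per author and a held-out comparison set, of course.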

But I guess this might lead to another quant/qual debate.

12/21/2007 1:24 PM  
Anonymous Anonymous said...

RE: 10:40 a.m.

That should be "he's welcome." I am not GAG. Nonetheless, the sentiment is appreciated.

12/21/2007 1:39 PM  
Anonymous Anonymous said...

For Christmas, I'm hoping for a T shirt that says:

"Squanto"!

12/21/2007 3:36 PM  
Anonymous Anonymous said...

If I have read the work and you haven't, what basis do you have for saying the statement that GAG aspires to mediocrity is unfair?

12/21/2007 3:38 PM  
Anonymous Anonymous said...

In the first instance, the phrase "aspires to mediocrity" is unprofessional. No book review editor would publish such a description. Second, it is not enough for you to say that "I read something," and hence "my negative judgment is valid." If you are going to criticize a piece, it is incumbent upon you to explain/justify your criticism.

12/21/2007 5:43 PM  
Anonymous Anonymous said...

The height of arrogance--my view requires validation despite my reading the book, while your sunshine and pixie dust perspective based solely on the friends that GAG was able to get to shill for him does not?

The speculation that book review editors wouldn't publish such a statement speaks more to their dishonesty and the corruption of the editorial process than it does the professionalism of the review.

12/21/2007 8:18 PM  
Anonymous Anonymous said...

First of all, let us acknowledge that the phrase "aspires to mediocrity" is nothing more than an insult. It does not even amount to an opinion. Second, you say you have read GAG's work, and I have no reason to doubt your veracity. Nonetheless, perhaps you have not, in which case you are hiding behind your claim of having read his work to wantonly demean it. Finally, there is the issue of collegiality and basic kindness. Even if your very poor view of GAG's work is fully justified (and it may very well be), it is still mean-spirited and highly unprofessional to refer to someone's work as "aspiring to mediocrity." I am certain you would not want your scholarship to be degraded in that way, and I hope it never is.

12/22/2007 5:34 AM  
Anonymous Anonymous said...

Of course it's an insult. That's 6 hours of my life I won't get back--someone should pay. Maybe GAG will give me a refund--that'd be a start. But, of course, it also serves as an opinion; if it did not, you wouldn't question it.

12/22/2007 5:51 PM  
Anonymous Anonymous said...

We are discussing this solely because you put forward an insult. If you had put forward a well-reasoned opinion, it would have ended there. I would have had no interest in engaging it, nor any basis to do so.

12/22/2007 5:58 PM  
Anonymous Anonymous said...

5:51 p.m. -- I'm curious. Since you hate GAG's book so much, what brought you to read it?

12/22/2007 6:30 PM  
Anonymous Anonymous said...

Does an approach seeking to gauge human cognition, or one quantitatively measuring causality, offer greater insight into the recent decision by the Bush Administration to deny California, as well as other states, permission to set their own fuel efficiency standards? It seems patent to me that this decision can be fully understood by examining the automobile industry's opposition to the state fuel efficiency standards and the strongly pro-corporate disposition of the Bush Administration. By way of gaining analytical leverage on this issue, what do quantitative methods have to offer?

12/26/2007 6:19 AM  
Anonymous Anonymous said...

Nothing, obviously.

12/27/2007 12:36 AM  
Anonymous Anonymous said...

GAG, are you a Violent Femmes fan?

next verse
same as the first
a little bit louder
a little bit worse...

Just last night I was reminded
Of just how bad it had gotten...

12/27/2007 7:23 AM  
Anonymous Anonymous said...

While 6:19 a.m. plays up the fact that quantitative methods cannot lend insight into any one case, it is also true that quants have already acknowledged this shortcoming of their preferred methodology. Nonetheless, the case invoked at 6:19 a.m. can help elucidate an issue discussed earlier on this blog -- namely, the analytical difficulties surrounding coding in the discipline of political science.

If we want to understand power with regard to policy formation, I would submit it is too elusive a variable to analyze across large-n cases through coding. Take, for instance, the political authority of the corporate community -- germane given the subject of 6:19 a.m. The difficulty is that business can exercise its influence (i.e., power) through too many means, and under too many guises, for that influence to be captured by a rigid coding scheme. Large corporations can shape public policy through various organizations (e.g., the Business Roundtable, the Business Council, the Automobile Manufacturers Association, the Edison Institute, etc.), through private meetings (e.g., the Cheney Energy Commission), or even through social interaction (as in a social club). Thus, I would posit that the only intellectually honest way to understand the influence/power of business over policy formation is through detailed case studies of different policy areas. Similar points can be made about the influence/power of other interest groups.

12/27/2007 10:10 AM  
Anonymous Anonymous said...

I love how GAG posts a question and then pretends to be someone else in answering it. Priceless.

Of course, I can already see the next post: "I'm not GAG, I don't know who GAG is".

Pathetic.

Quants did not say that their methods cannot give insight into any one case. They said that the goal of quant methods isn't to explain any one case. There's a difference, and if you understood quant methods, you would acknowledge that.

12/27/2007 10:26 AM  
Anonymous Anonymous said...

"Quants did not say that their methods cannot give insights in any one case. They said that the goal of quant methods isn't to explain in any one case."

Aren't you actually acknowledging that quant. methods cannot (do not) provide insight into any one case?

I also believe you should acknowledge that you have no idea who is posting what on this blog.

12/27/2007 10:50 AM  
Anonymous Anonymous said...

It's not too surprising that the people who don't understand the relative benefits (costs) of quant vs. qualitative methods don't even understand what a case is.

In fact, quant methods can provide terrific insight into a particular case.

12/27/2007 11:47 AM  
Anonymous Anonymous said...

"in fact, quant methods can provide terrific insight to a particular case."

Please substantiate your assertion.

12/27/2007 12:15 PM  
Anonymous Anonymous said...

Tons of cases/examples.

The Clinton/Meirowitz AJPS article from a few years back on the Jefferson Dinner Party of 1790, for example.

12/27/2007 12:28 PM  
Anonymous Anonymous said...

[Scene of GAG doing some frantic reading.]

12/27/2007 1:25 PM  
Anonymous Anonymous said...

Cut to: [GAG thinks that Josh and Adam's piece is full of mumbo jumbo]

12/27/2007 3:25 PM  
Anonymous Anonymous said...

Tons of cases/examples.
----------------------------

Indeed. Krehbiel's waffling from A to Z is another example.

The thing is, there are plenty of examples. Quant studies aren't interested so much in why Senator so-and-so decided to filibuster. However, most studies are interested in the conditions under which certain senators are more likely than others to filibuster (individual-level), and the conditions under which filibusters are more likely to occur/succeed (aggregate-level).

So what if we can't explain every senator's vote on every filibuster? The point is, we can make generalizable inferences about those conditions.

12/27/2007 3:28 PM  
Anonymous Anonymous said...

Cut to: [GAG thinks Josh and Adam's piece lacks transparency.]

12/27/2007 3:40 PM  
Anonymous Anonymous said...

Cut to: [GAG thinks that Josh and Adam should have used process tracing.]

12/27/2007 3:55 PM  
Anonymous Anonymous said...

I haven't followed this from the beginning, but does GAG really deserve all this? Poor guy.

12/27/2007 4:15 PM  
Anonymous Anonymous said...

Yes, he does deserve this. He set himself up to be a patsy with his feverish, soapbox diatribes.

12/27/2007 4:16 PM  
Anonymous Anonymous said...

Cut to: [GAG wonders whether Josh and Adam are truly getting at human cognition.]

12/27/2007 10:04 PM  
Anonymous Anonymous said...

Much of the scholarship on Congress is close to being formal theory. The work on filibustering is a good example of this, and the Clinton/Meirowitz article seemingly falls into this category. Much of this scholarship studies how congresspeople operate within congressional rules: scholars postulate how congresspeople respond to the specific incentives created by those rules, and explain how these incentives play out across cases.

Outside of formal theory, quantitative methods are ostensibly about uncovering patterns of behavior. In that context, analyzing specific cases seemingly falls outside the purview of quantitative methods.

12/28/2007 8:04 PM  
Anonymous Anonymous said...

That response sounds like mumbo jumbo to me.

12/28/2007 11:11 PM  
Anonymous Anonymous said...

Interesting response, 8:04. And totally ill-informed.

How do you explain the HUGE literature on roll-call voting in Congress, where individual votes on individual roll calls are explained/predicted using statistical methods (yes, logits and probits)?
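
For the unconvinced, here is a toy version of what such a model looks like in practice -- a sketch only, with invented legislators and ideology scores (statsmodels is one standard tool for this):

    # Logit model of individual yea/nay votes on a single roll call,
    # with a (made-up) ideology score as the sole predictor.
    import numpy as np
    import statsmodels.api as sm

    ideology = np.array([-0.8, -0.6, -0.5, -0.2, -0.1, 0.1, 0.3, 0.5, 0.7, 0.9])
    vote     = np.array([   0,    0,    1,    0,    0,   1,   0,   1,   1,   1])

    X = sm.add_constant(ideology)
    result = sm.Logit(vote, X).fit(disp=0)

    # Predicted probability that each individual legislator votes yea:
    print(result.predict(X))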

12/29/2007 5:17 AM  
Anonymous Anonymous said...

"I haven't followed this from the beginning, but does GAG really deserve all this? Poor guy."

I have no sympathy for the poster people call GAG because he is totally dogmatic about things he doesn’t seem to understand. It isn’t clear that he has read any of the methods texts that are generally assigned in graduate courses. Here are some examples:

He argues that quantitative methods are severely limited because they can’t get at human cognition. First, why talk about “human cognition?” Afraid that if you just use “cognition” we might think you’re referring to cow cognition? More important, what he seems to be arguing is that we need to get at the motivation behind behavior. If this is the case, why no interest in emotion? Surely there is a bit of research in social and political psychology demonstrating that emotion matters too. He also ignores the fact that most studies of cognition use quantitative methods.

He also argues that quantitative studies can’t tell us anything about specific cases, again demonstrating that he is not familiar with a huge literature.

He has also argued that quantitative methods cannot be trusted because they can’t be replicated, again demonstrating that he is unfamiliar with a good deal of work that replicates previous work.

Finally, he used to argue that quantitative methods are limited, or even worthless, because we have no immutable laws of social science. At one point he was even arguing that human behavior is all but completely random. Yet, it was clear he had not given any thought to WHY lack of immutable laws should mean quantitative methods are limited.

All this smacks of pure ignorance.

12/29/2007 6:33 AM  
Anonymous Anonymous said...

I assume what you mean by case-study analysis here is a focus on a specific congressperson. Nonetheless, the roll-call voting analysis you are referring to is about deducing patterns in congresspeople's voting behavior. So yes, one can focus a study on the voting behavior of a particular congressperson, but for quantitative methods to yield meaningful results, a significant number of votes by that congressperson has to be analyzed.

Thus, if I were interested in analyzing only one congressional vote, then how congresspeople voted across a large number of like issues may not be of much utility to me. Hence, the case here is the specific vote, not the congressperson.

12/29/2007 6:37 AM  
Anonymous Anonymous said...

6:37 a.m. is in response to 5:17 a.m.

12/29/2007 6:38 AM  
Anonymous Anonymous said...

6:33 a.m. is full of distortions and half-truths.

12/29/2007 6:53 AM  
Anonymous Anonymous said...

Thus, if I were interested in analyzing only one congressional vote, then how congresspeople voted across a large number of like issues may not be of much utility to me. Hence, the case here is the specific vote, not the congressperson.

12/29/2007 6:37 AM

----------

Is that so? So you wouldn't be interested in assessing whether MCs voting on a given roll call followed "central tendencies" (i.e., how they have voted on similar roll calls in the past), or whether they deviated in predictable ways (were they "bought off" via side payments, did they buckle to party pressure, and so on)?

Seriously, dude, you have no conception of the roll-call voting literature. Why do you make comments and assertions about literatures that you know nothing about? You just come off as dogmatic and ignorant, and you welcome pokes and jabs about your true goals.

12/29/2007 8:20 AM  
Anonymous Anonymous said...

The issues you raise may or may not have a bearing on someone studying a specific congressional vote. You can believe ignoring such issues is misguided, but neglecting such matters is not inherently illegitimate. Moreover, the ILP's broader point is valid: quantitative methods center on discerning patterns of behavior across large-n cases, and they have substantially less utility in analyzing specific cases.

You seem to believe that qualitative methods offer no advantages over quantitative methods, and that quantitative methods are the be-all and end-all of political science. This is simply a false and untenable position. The end result is that in your crusade to paint GAG/ILP as ignorant, you are casting yourself as dogmatic, obstinate, and stridently uncompromising.

12/29/2007 9:01 AM  
Anonymous Anonymous said...

It's like talking to a wall.

I'm through with this nonsense. GAG/ILP: good luck to you, and may your anti-quant crusade leave you all warm and fuzzy.

12/29/2007 9:13 AM  
Anonymous Anonymous said...

It is wrong to describe the ILP (GAG?) as anti-quant. In my view this poster has sought to offer a corrective to the seemingly popular thinking with regard to quant. methods -- namely, their unquestioned superior utility in all circumstances. 9:13 a.m. confirms the perception that too many quant. scholars are unable and/or unwilling to see the shortcomings of quant. methods in political science, or to acknowledge that qual. methods offer greater utility in various contexts. In my view it is the quant. proponents on this blog who prevent a reasoned and constructive conversation, because too many of them become hysterical when one suggests that, as a rule, political scientists should be thoughtful and circumspect about choosing quantitative methods over qualitative methods.

12/29/2007 9:52 AM  
Anonymous Anonymous said...

"You seem to believe that qualitative methods offer no advantages over quantitative methods, and that quantitative methods are the end all and be all of political science. This is simply a false and untenable position. The end result is that in your crusade to paint GAG/ILP as ignorant, you are casting yourself as dogmatic, obstinate, and stridently uncompromising."

No one here has argued that qual methods offer no advantages. This is just wrong. GAG is the only dogmatic one here.

Inside GAG's head:
I don't understand quant methods so I don't trust them. How can I discredit them?

Um, no immutable laws of social science, thus quant methods can't be helpful. Oh, that's not true? Okay...

...Um, quant methods can't be replicated. Oh, what? They can and frequently are? Okay...

...Er, they can't tell us anything about specific cases. What? They do? Oh.

Um, then they just use mumbo jumbo and don't really mean anything. Oh, what? I just don't know that language? Er, um, what can I pull out of my ars next?

Let's see. You guys say qual methods are useless, thus you're dogmatic, thus your opinion can be discredited. What? No one has actually argued that qual methods are useless? Hmmmm

Maybe I'll post as if I'm someone else and say that GAG isn't really anti-quant despite what the record shows.

12/29/2007 12:08 PM  
Anonymous Anonymous said...

"Um, no immutable laws of social science, thus quant methods can't be helpful."

This is at least the second time you attribute this argument to the qual. side of the debate. First, it is false. Second, it conveys a deep misunderstanding on your part.

The specific position proffered is that, when compared to the physical sciences, quantitative methods have limited utility in the social sciences. Whereas in the physical sciences quantitative methods are used to gain knowledge of the laws of nature, it is uncertain what knowledge is being garnered with quantitative methods in the social sciences.

To hold that quantitative methods have limited utility in the social sciences is not to assert that they are unhelpful in this realm. I would hope that this explanation stops you from repeating what amounts to a gross distortion and slander of the qual. side of the discussion.

12/29/2007 1:08 PM  
Anonymous Anonymous said...

12/29/2007 1:08 PM: Distortion and slander, huh? Stop changing your story. That you do so so often is evidence that you really don't have any knowledge backing up your arguments. Here are examples of where you have argued that quant methods are unhelpful given no immutable laws.

On 6/03/2007 10:44 AM you wrote:

"As I have argued over the past month, seeking to model political science, or any social science, on the physical sciences is misguided, since first and foremost the physical sciences have something social scientists do not have – immutable laws to work with."

Misguided doesn't equal unhelpful?

On 5/27/2007 7:03 AM you wrote:

"In this regard, I found the earlier conversation on this blog dealing with philosophy of science to be extremely helpful. What it made clear for me is that many of the quantitative methods applied in political science are predicated on the false assumption that through these methods researchers are measuring immutable laws. Once this assumption is accepted as false, then the utility of quantitative methods greatly diminishes. Much of this quantitative work then takes on the characteristics of pseudo-science."

So you think pseudo-science is helpful?

On 5/27/2007 8:21 PM you wrote:

"In the social sciences, especially in political science, there is no such thing as a study being "replicable, providing unbiased estimates, providing known levels of uncertainty." Such a finding would necessitate "immutable laws" of social/political behavior. I will quickly add that this issue has been thoroughly discussed earlier on this blog."

Surely, if you believe we can't replicate, provide unbiased estimates [of whatever], or estimate levels of uncertainty, then you can't think it is useful. I could go on, but I won't.

I agree, this is like talking to a wall.

12/29/2007 2:36 PM  
Anonymous Anonymous said...

Most of this is taken out of context. In any event, none of what you quote expressly says that quantitative methods are unhelpful. What the early qual. side of the debate sought to do was deflate the overly important role that too many people assign to quantitative methods in political science. In my estimation, this is a necessary and legitimate corrective.

12/29/2007 3:33 PM  
Anonymous Anonymous said...

One more to add to the list:

from 6/12/2007 11:06 AM

"RE: 9:24 a.m.

The truth is that none of the tools of quantitative social science I am aware of rely on there being underlying immutable laws.

**********************************************************

That is exactly the problem! If these methods are not being employed to gauge immutable laws driving social/political phenomena, then what are you studying with these quantitative models? Some would say nothing.

It should be noted that, while you are seemingly not aware of it, economists do claim to be studying the immutable laws of the market with their quantitative methods. Hence their claim that their discipline is the only true social science."

12/29/2007 3:46 PM  
Anonymous Anonymous said...

I would say that "studying nothing" would be considered unhelpful.

12/29/2007 3:47 PM  
Anonymous Anonymous said...

"That is exactly the problem! If these methods are not being employed to gauge immutable laws driving social/political phenomena, then what are you studying with these quantatitive models? Some would say nothing."

It is true that "some" would say that. It would be a hardline view, and not without some merit. Nonetheless, as far as I can tell, no one on this blog has adopted this hardline position.

12/29/2007 4:11 PM  
Anonymous Anonymous said...

Oh, so you were just noting that SOME would say that, but not you. Okay, sure.

12/29/2007 4:30 PM  
Anonymous Anonymous said...

If I held the hardline position, why would I not acknowledge it? What would be the disincentive for doing so?

12/29/2007 4:41 PM  
Anonymous Anonymous said...

The position of the quant. poster is as follows: quant. research is not as transparent as qual. Why? That has yet to be explained.

In any event, researchers can replicate studies based on publicly available data sets. As to the fact that many coding decisions involve subjective judgments, that, also, has yet to be addressed.

In the case of privately garnered data, one has to repeat the entire exercise in order to reverify the findings. That is not an unreasonable position.

In the end, in many instances quantitative findings are not replicated, which undermines the overall reliability of quantitative work. This is not something he/she wants to deal with -- for seemingly obvious reasons.

12/31/2007 1:12 PM  
Anonymous Anonymous said...

"The position of the quant. poster is as follows: quant. research is not as transparent as qual."

A quant poster said this? No.

"In any event, researchers can replicate studies based on publicly available data sets. As to the fact that many coding decisions involve subjective judgments, that, also, has yet to be addressed."

Subjective, but transparent - just like qual research - so they can be replicated.

"In the case of privately garnered data, one has to repeat the entire exercise in order to reverify the findings."

Exactly the same for qual research.

"In the end, in many instances quantitative findings are not replicated, which undermines the overall reliability of quantitative work."

Again, exactly the same for qual research. Moreover, many studies are replicated; many questions are asked over and over using either the same data or new data. The data, coding, and methods used are usually very clear.

Better come up with a new "problem" with quant work. You've got to have one more up your sleeve.

1/01/2008 1:30 AM  
Anonymous Anonymous said...

I'm not going through this again. New Year's resolution #1: stop wasting time arguing with GAG.

I bid you farewell!

1/01/2008 1:32 AM  
Anonymous Anonymous said...

To 12/21/07 8:21 am
I disagree with your comments about book reviews. Some are superficial, but many are not. Writing a competent book review takes time and effort, and is not simply something done by lazy people who "have too much time on their hands due to lack of research productivity." Book reviews serve a useful purpose since it's virtually impossible to read everything that's being published, even in one's own field.

1/02/2008 8:10 AM  
Anonymous Anonymous said...

I believe you misunderstand the science and logic underlying qualitative work. In qualitative analysis, there are certainly informal hypotheses and suppositions, but these are not "tested." Instead, what qualitative scholars engage in are competitive interpretations of the human cognition informing empirical phenomena (i.e., human behavior). So through interpretative iterations we presumably gain different (if not greater) insight into the motives and thinking driving human behavior (i.e., empirical phenomena). This is why "not verifying" earlier qualitative work does not induce a "crisis in confidence" in qualitative scholarship, but instead competing interpretations of the same phenomenon are part of the scientific process and the accumulation of knowledge in qualitative research.

1/02/2008 9:57 AM  
Anonymous Anonymous said...

#but instead competing interpretations of the same phenomenon are part of the scientific process and the accumulation of knowledge in qualitative research.#

hmm...not sure I understand the logic here.

To empirically-minded scholars (which I have always assumed includes qual guys), theories of behavior are only useful to the extent we can actually *test* them (eventually).

If the goal is simply to have lots of theories (interpretations), and *never* to test them, how do we decide if one interpretation is 'better' than another?

In the hard sciences, they seem to like

(a) more parsimonious theories

and/or

(b) theories that predict/explain the data better (including data which may not have existed at the time the theory was written down).

Are you proposing that (a) and (b) are irrelevant to the accumulation of knowledge in political science?

News to me!

1/02/2008 1:41 PM  
Anonymous Anonymous said...

"To empirically-minded scholars (which I have always assumed includes qual guys), theories of behavior are only useful to the extent we can actually *test* them (eventually)."

"If the goal is simply to have lots of theories (interpretations), and *never* to test them, how do we decide if one interpretation is 'better' than another?"

The qualitative scholar does not assume that one interpretation can be definitely determined to be "better" than another. Instead, it is up to the reader to decide for him or herself which interpretation better explains the phenomenon in question.

"In the hard sciences, they seem to
like

(a) more parsiminous theories

and/or

(b) theories that predict/explain the data better (including data which may not have existed at the time the theory was written down).

Are you proposing that (a) and (b) are irrelevant to the accumulation of knowledge in political science?"

The qualitative scholar holds that knowledge accumulation in political science is completely different from that in the physical sciences. In the physical sciences the goal is to gain greater and greater understanding of the laws of physics, whereas in political science the goal is to gain greater understanding of human cognition. Thus, the epistemology is completely different, and the methods are also different.

1/02/2008 2:35 PM  
Anonymous Anonymous said...

The qualitative scholar does not assume that one interpretation can be definitely determined to be "better" than another. Instead, it is up to the reader to decide for him or herself which interpretation better explains the phenomenon in question.

Same in quant. studies. I don't see the difference here. Sorry.

Please cite one quant. study that *definitely determines* that one interpretation is better than another. Just one citation will do. Not a formal theoretical framework, mind you, but a statistical analysis. Can you think of any?

I'll help you out. No statistical analysis can determine anything. That's what the whole 'stochastic' thing is about. You know, the error term? You know, sampling distributions of random variables?

You've just definitely determined that you know little about quantitative analysis, if you think that this is a point of contention between quant. and qual.

1/02/2008 5:32 PM  
Anonymous Anonymous said...

I believe you misunderstand the science and logic underlying qualitative work. In qualitative analysis, there are certainly informal hypotheses and suppositions, but these are not "tested." Instead, what qualitative scholars engage in are competitive interpretations of the human cognition informing empirical phenomena (i.e., human behavior). So through interpretative iterations we presumably gain different (if not greater) insight into the motives and thinking driving human behavior (i.e., empirical phenomena). This is why "not verifying" earlier qualitative work does not induce a "crisis in confidence" in qualitative scholarship, but instead competing interpretations of the same phenomenon are part of the scientific process and the accumulation of knowledge in qualitative research.

As a qualitative scholar, this is the biggest load of horse shit I have ever seen.

There is rigorous qualitative work and then there is what you are describing.

The kind of work you describe here is the problem: interpretivist and/or constructivist bullshit that does not engage in the scientific enterprise.

1/02/2008 5:52 PM  
Anonymous Anonymous said...

5:52 -- It is easy to cast stones, but it would be meaningful and potentially convincing to put forward your own substantive understanding of qual. analysis. Otherwise, like 5:32, you are just putting forward hot air.

1/02/2008 10:13 PM  
Anonymous Anonymous said...

@5:32, I'm generally in agreement that qual work is not in any way automatically superior to quant work.

But when you wrote

#Please cite one quant. study that *definitely determines* that one interpretation is better than another. #

it confused me. Just to clarify: are you saying that quant work doesn't allow us to see the 'human cognition' bit directly in terms of the causal mechanism?

I mean, there are lots of ways we can assess 'interpretations' in terms of their *empirical consequences*--with model fit tests, etc.

For example: I suspect a _statistical_ model that 'explains' voting behavior as a product of education will do better than one that 'explains' it as a product of the voters' star signs.
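
To make 'do better' concrete, here is a sketch with simulated data -- everything below is invented, and the point is only that the two 'interpretations' have comparable, quantifiable fit:

    # Compare two logit models of (simulated) turnout by AIC.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    education = rng.normal(size=n)              # standardized schooling
    star_sign = rng.integers(0, 12, size=n)     # 0-11; pure noise here,
                                                # so a crude numeric code is fine
    # Turnout is generated from education alone:
    voted = rng.binomial(1, 1 / (1 + np.exp(-(0.1 + education))))

    m_edu  = sm.Logit(voted, sm.add_constant(education)).fit(disp=0)
    m_sign = sm.Logit(voted, sm.add_constant(star_sign.astype(float))).fit(disp=0)

    # Lower AIC = better fit; the education model should win easily.
    print("education model AIC:", m_edu.aic)
    print("star-sign model AIC:", m_sign.aic)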

@5:52 -- my initial thoughts exactly!

1/03/2008 6:44 AM  
Anonymous Anonymous said...

Human cognition is a multiple and conjuncturally causal phenomenon that occurs at the individual level, prior to behavior.

Please leave cognition to the psychologists and start talking about something that you know something about: critical conjecture.

1/03/2008 12:24 PM  
Anonymous Anonymous said...

5:32 here. You stated in your post that qualitative work, unlike quant., doesn't attempt to *definitely determine* an interpretation.

My point was that quant work doesn't provide any *definitely determinative* answer either. Therefore, the difference that you point to between the two methodologies is moot. It is also up to the reader to determine which evidence to believe. The difference between qual and quant is the type of evidence presented--and the quantification of uncertainty around the evidence.

And look, I'm really sorry that you think that a discussion of stochastic disturbance as a response to a claim that quant. studies tries to *definitely determine* (YOUR words, not mine) anything is *hot air* (YOUR words, not mine).

It's not a very thoughtful response, is it? So if you don't understand/agree with someone, you simply claim that it's hot air?

You brought up deterministic explanations. I think that a discussion of the stochastic aspect of quant. work is not only appropriate, but it's actually substantive and could help clarify some of the misconceptions you have about "the other side." Do you not see it?

If you believe that quant work is deterministic, then we've got to go back to stats 101.

1/03/2008 2:25 PM  
Anonymous Anonymous said...

For example: I suspect a _statistical_ model that 'explains' voting behavior as a product of education will do better than one that 'explains' it as a product of the voters' star signs.

Yes, of course it will. And I suspect that a qualitative study will also arrive at that conclusion. So the methodology isn't constant--what presumably is constant is the THEORY. That's what allows us to claim that education explains voting better than star signs. Theory. No method--quant. or qual.--can determine anything.

1/03/2008 2:59 PM  
Anonymous Anonymous said...

RE: 12:24 p.m.

You fault the notion of analyzing human cognition as a way of gaining analytical leverage over causation. It is not clear why. Moreover, I have to ask: what is "critical conjecture," and why would political science scholars want to engage in it?

1/03/2008 7:17 PM  
Anonymous Anonymous said...

RE: 5:32 and 2:25

The very idea of a "test" implies a definite conclusion. Quantitative research is rife with statistical/quantitative "tests."

1/03/2008 7:22 PM  
Anonymous Anonymous said...

#The very idea of a "test" implies a definite conclusion. #

I think 'definite' is being used in a different way by different posters.

The 'tests' I've seen in quant methods come with p-values: probabilities that refer to the chance we would see a result this 'extreme' (which literally refers to the size of the test statistic) in repeated sampling if the null hypothesis were true.

*Nothing* is 'definitely determined': the null hypothesis can never be 'proved' false or true.

Seriously, what do you think happens when people get a 'blood test' for some disease or other? It is simply a p-value... and then someone (often a machine) interprets that as likely to mean you are ill or not.

#Quantitative research is rife with statistical/quantitative "tests."#

Right: but you can still choose to disagree with how compelling you find the results based on, for example, the p-values on the coefficients (yes, yes, or the size of the HPDs for the Bayesians out there).
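
If it helps, the repeated-sampling logic can be demonstrated by brute force. A sketch, with a made-up 60-heads-in-100-flips example:

    # Simulated p-value: how often would we see a result at least this
    # extreme under the null? Null: a fair coin; observed: 60 heads in 100.
    import numpy as np

    rng = np.random.default_rng(1)
    sims = rng.binomial(n=100, p=0.5, size=100_000)
    p_value = np.mean(sims >= 60)
    print(p_value)   # roughly 0.028 -- small, but nothing is thereby 'proved'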

1/04/2008 9:13 AM  
Anonymous Anonymous said...

Re 1/04/2008 9:13 AM:

Or the way the sample was drawn. Or how the concepts were measured. Or whether all potentially confounding factors were controlled for. Etc, etc.

1/04/2008 9:22 AM  
Anonymous Anonymous said...

While the null cannot be "proved" false, it can be "rejected". This implies a definitive conclusion.

Beyond our specific understanding of quantitative "tests", the broader point is that in qualitative methods formal "tests" (whatever that may mean) are not employed. Obviously, they are in large-n quantitative methods. This makes these methods qualitatively (no pun intended) different.

1/04/2008 9:26 AM  
Anonymous Anonymous said...

#it can be "rejected". This implies a definitive conclusion.#

Not to nitpick, but any 'rejection' happens at a particular 'level' (e.g. the 5% level), and is not 'definite' for any other (more stringent) level.

#the broader point is that in qualitative methods formal "tests" (whatever that may mean) are not employed.#

Fine: but where does that leave us? Can we have any sense of how uncertain we are about the conclusions in qual methods? Should we try to?

I don't know the answer to any of this (don't know enough about qual methods), but if "your guess is as good as mine" in terms of the interpretation, I'm somewhat troubled...

1/04/2008 9:47 AM  
Anonymous Anonymous said...

Just because you choose not to use tests when employing qualitative methods (esp. case studies) doesn't mean they can't be used.

One can construct statistical tests based on a handful of cases to estimate the likelihood of observing a particular result (e.g., 7 of 8 predictions correct).
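
For the 7-of-8 example, the arithmetic can even be done exactly, assuming each prediction had a 50-50 chance of being right under the null:

    # Exact probability of 7 or more correct out of 8 under pure chance:
    from math import comb

    p = sum(comb(8, k) for k in (7, 8)) / 2 ** 8
    print(p)   # 9/256, roughly 0.035 -- unlikely by luck alone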

1/04/2008 10:31 AM  
Anonymous Anonymous said...

"Fine: but where does that leave us? Can we have any sense of how uncertain we are about the conclusions in qual methods? Should we try to?"

"I don't know the answer to any of this (don't know enough about qual methods), but if "your guess is as good as mine" in terms of the interpretation, I'm somewhat troubled..."

The qualitative thinker holds that the indeterminacy you find troubling in qualitative research is in actuality an inherent feature of analyzing social/political phenomena. What the qualitative researcher finds highly problematic are quantitative research methods that indicate definitive findings. The qualitative researcher avers that such findings are in most/many cases chimerical. In this way, the qual. thinker believes that quantitative research in too many instances puts forward a false picture/analysis of social/political phenomena.

1/04/2008 12:13 PM  
Anonymous Anonymous said...

RE: 10:31 a.m.

I think you are laboring under a false belief if you think that the use of 7/8 cases is going to provide "definite" findings. Certainly the more cases one utilizes to bolster an argument, the more convincing that argument becomes, but it is not likely to end debate on the broader issue at hand, nor will it prevent others from legitimately offering different interpretations of your cases.

1/04/2008 12:26 PM  
Anonymous Anonymous said...

The very idea of a "test" implies a definite conclusion. Quantitative research is rife with statistical/quantitative "tests."

Not true. A test doesn't imply anything definitive. Quant. scholars will be the first ones to tell you this (we've been doing so repeatedly). Tell me: is there anything we can say to change your false beliefs? You are simply misguided in your understanding of statistical 'tests'.

While the null cannot be "proved" false, it can be "rejected". This implies a definitive conclusion.

Again, completely, utterly false. I won't repeat what one of the previous posters said, but you should dust off a textbook and read about p-values, hypothesis testing, etc. Again, you are completely misguided.

1/04/2008 12:58 PM  
Anonymous Anonymous said...

#indeterminacy...is in actuality an inherent feature of analyzing social/political phenomena.#

How much 'indeterminacy', though? Some, a little, a lot? Can't say?

Put another way, are all interpretations equally valid for you? Can we judge them? By what criteria?

#finds highly problematic are quantitative research methods that indicate definitive findings. #

I just don't see this. I see people talking about probabilities (albeit implicitly).

Perhaps some poorly trained individuals misuse the probabilistic language associated with quant work, but that's their problem; it is *not* the fault of quant methods per se.

1/04/2008 1:31 PM  
Anonymous Anonymous said...

I find it mind-boggling that someone would try to argue that "rejecting" a hypothesis is not a definitive judgment. As for the issue of probability, in large-n statistical analysis if two variables occur with enough randomness they are deemed to have no causal relationship -- again a definite judgment.

With regard to the questions "How much 'indeterminacy', though? Some, a little, a lot? Can't say?": my answer is that there is no predetermined answer. Unfortunately, indeterminacy varies greatly across cases.

You also ask: "Put another way, are all interpretations equally valid for you? Can we judge them? By what criteria?"

Everyone uses their own criteria. I wish it were different. If so, perhaps there would not be so much political strife in the world.

1/04/2008 3:49 PM  
Anonymous Anonymous said...

#I find it mind-boggling that someone would try to argue that "rejecting" a hypothesis is not a definitive judgment.#

I'm not sure what force 'definitive' carries here, and I am trying to avoid a debate over semantics. As I noted, any hypothesis test involves probabilistic reasoning. I find it mind-boggling that you find it mind-boggling.

#in large-n statistical analysis if two variables occur with enough randomness#

no idea what 'enough randomness' means, but ok...

#they are deemed to have no causal relationship -- again a definite judgment.#

Well, what one scholar judges as a 'high' correlation coefficient, another may judge as 'low'. Note that they can at least debate in the same terms ('dispersion', 'correlation coefficient' etc).


#If so, perhaps there would not be so much political strife in the world.#

Actually, I expect there might be more! Based on quant data work (many, many studies) there is a general consensus, for example, that women tend to be victims of domestic violence, that teenage mothers suffer reduced life opportunities, that people who live in economically depressed areas are more likely to be victims of crime and so on.

You can dispute these findings, but at least there is much agreement on what basis (what criteria) one can do so: sample selection, measurement decisions, size of the p-values and so on.

Social policy can do something about all of these things...and I bet those guys don't rely on qual studies where all interpretations are equally legitimate.

1/07/2008 7:26 AM  
Anonymous Anonymous said...

"Based on quant data work (many, many studies) there is a general consensus, for example, that women tend to be victims of domestic violence, that teenage mothers suffer reduced life opportunities, that people who live in economically depressed areas are more likely to be victims of crime and so on."

The question confronting political scientists is what causes these social ills. Are these social problems the result of "nature" or "nurture"? Unfortunately, this will almost invariably prompt people to disagree on causation. I also feel I need to stress that what differentiates political science from demography or sociology is the putative focus on causation.

1/07/2008 9:14 AM  
Anonymous Anonymous said...

There is nothing definitive about rejecting--or not rejecting--the null hypothesis. Which textbook do you base your view of statistics on?

Since when is finding no correlation between X and Y a definitive judgment?

Wow, I really have to start training my grad students better, so that they do not perpetuate these sorts of misunderstandings. Maybe it's a good thing that we're exposed to so much ignorance of some very basic features of quantitative methods. It should prompt us all to provide more rigorous training.

1/07/2008 11:46 AM  
Anonymous Anonymous said...

I am in agreement that this should not be a discussion about semantics, but at this point I have to ask when you "reject" the null hypothesis (or any hypothesis) what are you saying?

1/07/2008 12:13 PM  
Anonymous Anonymous said...

"I also feel I need to stress that what differentiates political science from demography or sociology is the putative focus on causation."

Sociologists will be surprised to learn that they aren't concerned with the causation of major and lasting social phenomena.

Can you even read, or do you just make this stuff up?

1/07/2008 12:54 PM  
Anonymous Anonymous said...

#have to ask when you "reject" the null hypothesis (or any hypothesis) what are you saying?#

Literally: the *probability* that we would see a test statistic this extreme in repeated sampling *given that the null hypothesis is true* is so small that I (the analyst) do not believe the null hypothesis to be true.

That 'probability' part *really* matters. It could be 0.05 (often is), or it could be 0.1, or 0.5623, or whatever. This is completely up to the researcher and should be made explicit.
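
A small illustration of why the chosen level matters (the p-value here is made up):

    # The same evidence yields different verdicts at different levels.
    p_value = 0.028   # hypothetical
    for alpha in (0.05, 0.01):
        verdict = "reject" if p_value < alpha else "fail to reject"
        print(f"at the {alpha:.0%} level: {verdict} the null")
    # Any 'rejection' is relative to the level the researcher chose,
    # not a definitive fact about the world.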

As I and other posters have pointed out, this does not in any way mean that you can't question how the experiment/survey/data was gathered, how things were measured etc.

Rhetorically: I don't know, but the claim certainly is *not* that a particular hypothesized relationship (between the variables or whatever) has been 'proved' 'definitively' to be true or false.

BTW: if you have a problem with the convoluted language of frequentist hypothesis testing, and its reliance on possible (but theoretical) repeated sampling, take it up with R. A. Fisher... or become a Bayesian.

1/07/2008 1:52 PM  
Anonymous Anonymous said...

#have to ask when you "reject" the null hypothesis (or any hypothesis) what are you saying?#

You wrote:

"Literally: the *probability* that we would see a test statistic this extreme in repeated sampling *given that the null hypothesis is true* is so small that I (the analyst) do not believe the null hypothesis to be true."

This strikes me as a definite judgment.

1/07/2008 5:02 PM  
Anonymous Anonymous said...

I did not mean any disrespect toward sociology. My point is that political science cannot limit itself to identifying social trends or patterns of behavior. If it is going to maintain the mantle of a science (or, some might say, attain it), the focus of the discipline must be the identification of causation.

1/07/2008 5:07 PM  
Anonymous Anonymous said...

Demography is not population studies, strictly speaking. It's the statistical study of populations. And population structure may affect certain behaviors or changes within political communities (see, for example, Robert Putnam's recent study regarding social capital and racial and ethnic diversity).

1/07/2008 9:29 PM  
