"Thickening Thin Concepts and Theories: Combining Large N and Small in Comparative Politics"

"Thickening Thin Concepts and Theories: Combining Large N and Small in Comparative Politics" by Michael Coppedge, 1999, pgs 465-476 (A summary in quotes)* this paper is a concept note, which makes it hard for any summary to do the article justice

Definitions
Concepts are thick for two reasons: "First, they cannot be reduced to a single indicator without losing some important part of their meaning.  Development is more than just average wealth; stability is more than just the absence of coup attempts; and democracy is more than just having elections.  Second, thick concepts are often multidimensional, because no aspect of the concept is reducible to any of the others" (Coppedge, 1999, pg 468)
Theory (Thick) "richly specified, complex models that are sensitive to variations by time and place" (Coppedge, 1999, pg 471)
Summary
“scholars need to answer two key questions that arise at the intersection of small-N and quantitative analysis.  Can thick concepts be translated into the thin format of quantitative data?  And can the nuanced, conditional, complex, and contextualized hypotheses of small-N analysis be translated into quantitative models?” (pg 465)
“Every theoretical model in social science has five parameters.  First, every model pertains to a certain level of analysis – individual, group, national, world-systemic, or some intermediate gradation.  Second, it has one or more dependent variables.  Third, it has one or more explanatory variables.  Fourth, it applies to a certain relevant universe of cases.  And fifth, it applies to events or processes that take place during a certain period of time.  We can refer to the definitions of each of these five parameters as possessing zero order complexity because no relationships among the parameters are involved… [There is] leeway in defining a concept, measuring it and any explanatory factors, selecting a relevant sample of countries for testing a set of explanations, and defining the periods of time to which the explanations apply.  And this example applies just to the national level of analysis.  With smaller or larger units of analysis one would use completely different variables, cases and time frames” (466)
“First order complexity involves any causal relationship within any of these parameters…include(ing) causation bridging levels of analysis, or aggregation and disaggregation, causal relationships among dependent variables, or endogeneity, interactions among independent variables…lagged effects or temporal autocorrelation…or spatial autocorrelation” (466)
“Second order complexity involves interactions between two different parameters.  Hypotheses in which an independent variable X causes a dependent variable Y are second order” (466-7)
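These orders of complexity can be illustrated, loosely, as terms in a regression specification.  This is a hypothetical sketch, not from the article: the variable names (development, mobilization, democracy) and coefficients are invented for illustration.

```python
import numpy as np

# Invented data standing in for country-year measurements.
rng = np.random.default_rng(0)
n = 100
development = rng.normal(size=n)   # explanatory variable X1
mobilization = rng.normal(size=n)  # explanatory variable X2

# Second order complexity: an interaction between two parameters —
# an independent variable X causing the dependent variable Y.
democracy = 0.5 * development + rng.normal(scale=0.1, size=n)

# First order complexity, within the explanatory-variable parameter:
# an interaction among independent variables...
interaction = development * mobilization

# ...and within the time parameter: a lagged effect (temporal
# autocorrelation), where last period's Y enters the model.
democracy_lagged = np.roll(democracy, 1)
democracy_lagged[0] = 0.0  # no observed value before the first period

# A "thick" model would carry all of these terms in its design matrix.
X = np.column_stack([development, interaction, democracy_lagged])
print(X.shape)
```

The point of the sketch is only that each kind of complexity Coppedge names corresponds to an extra column (or structure) a quantitative model must carry explicitly, whereas small-N analysis absorbs them informally.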
“Small-N analysis excels at the kind of conceptual fussiness that is required to develop valid measures of thick concepts” (468)
“Quantitative research has the opposite strengths and weaknesses.  Its variables tend to be defined more narrowly, which makes it more feasible to gather data from a large number of cases and therefore to support generalizations” (468)
“However, the thin concepts implied by the construction of some of the variables… researchers in effect use the bait-and-switch tactic of announcing that they are testing hypotheses about the impact of, for example, economic development and then by sleight of hand substitute an indicator of per capita energy consumption and assert that it measures development well enough.  The problem with such substitutions is not necessarily that they do not measure the concept of interest at all, but that a single narrow indicator cannot capture all the relevant aspects of a thick concept.  It is tempting to hide behind the excuse that we often test only the implications of the [hypothesis] rather than the hypothesis itself.  But this fig leaf offers no real protection, because a valid test of the full hypothesis would still require testing all its manifold implications” (469)
“A basic procedure in measuring any complex concept has four steps.  First, the analyst breaks the mother concept up into as many simple and relatively objective components as possible.  Second, each of these components is measured separately.  Third, the analyst examines the strength of association among the components to discover how many dimensions are represented among them and in the mother concept.  Fourth, components that are very strongly associated with one another are treated as unidimensional, that is, as all measuring the same underlying dimension, and may be combined” (469)
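The four-step procedure can be sketched with a toy computation.  Everything here is invented for illustration — the component names, the simulated data, and the 0.8 association threshold are assumptions, not Coppedge's; the article describes the logic, not any particular implementation.

```python
import numpy as np

# Pretend the mother concept (say, democracy) has one underlying
# dimension driving three of its components; a fourth taps something else.
rng = np.random.default_rng(1)
n = 50
latent = rng.normal(size=n)

# Steps 1-2: break the mother concept into simple components
# and measure each one separately.
components = {
    "free_elections":  latent + rng.normal(scale=0.2, size=n),
    "press_freedom":   latent + rng.normal(scale=0.2, size=n),
    "civil_liberties": latent + rng.normal(scale=0.2, size=n),
    "unrelated_trait": rng.normal(size=n),  # a different dimension
}

# Step 3: examine the strength of association among the components.
names = list(components)
data = np.column_stack([components[k] for k in names])
corr = np.corrcoef(data, rowvar=False)

# Step 4: components very strongly associated with one another are
# treated as unidimensional and may be combined (here, averaged;
# the 0.8 cutoff against the first component is arbitrary).
strongly_related = [k for i, k in enumerate(names) if corr[0, i] > 0.8]
index = data[:, [names.index(k) for k in strongly_related]].mean(axis=1)
print(strongly_related)
```

Run as written, the weakly associated component drops out and the combined index averages only the components measuring the shared dimension — the quantitative counterpart of the "conceptual fussiness" the next quote attributes to small-N work.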
“One basic objection (to quantitative indicators) holds that, when the theoretical concept is categorical, not continuous, then attempts to measure it with a continuous instrument produce either measurement error or nonsense” (469-70)
“The problem with many quantitative indicators is not that they are quantitative, but that they are qualitatively different from the categorical concepts they propose to measure.  The real problem with continuous indicators is that they measure only thin, reductionist versions of the thicker concepts that interest nonquantitative scholars” (471)
“The longitudinal case study is simply the best research design available for testing hypotheses about the causes of specific events.  In addition to maximizing opportunities to disconfirm observable implications, it does the best job of documenting the sequence of events, which is crucial to establishing the direction of the causal influence… This approach has two severe limitations, however.  First, it is extremely difficult to use it to generalize to other cases.  Every additional case requires a repetition of the same meticulous process-tracing and data collection… Second, the case study does not provide the leverage necessary to test counterfactual hypotheses, for which a single case can supply little data” (472)
“The widest gulf that divides large-N studies from small-N comparisons results from the fact that most of the latter are either cast at the subnational level or move easily between all levels, from individual to international… Incorporation of explanatory factors from a lower level requires their building a completely different dataset from scratch.  Their units of analysis are countries and years at best.  For example, to test the hypotheses on regime transitions from the O’Donnell-Schmitter-Whitehead project, one would have to collect data about strategic actors rather than countries and resample at intervals of weeks or months rather than years” (474)
