Publication Bias in Empirical Sociological Research: Do Arbitrary Significance Levels Distort Published Results?

February 7, 2010

Alan S. Gerber and Neil Malhotra, Publication Bias in Empirical Sociological Research: Do Arbitrary Significance Levels Distort Published Results? Sociological Methods & Research 2008 37: 3-30.

Despite great attention to the quality of research methods in individual studies, if publication decisions of journals are a function of the statistical significance of research findings, the published literature as a whole may not produce accurate measures of true effects. This article examines the two most prominent sociology journals (the American Sociological Review and the American Journal of Sociology) …
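The excerpt stops there, but the underlying idea is easy to illustrate. Below is a minimal sketch in Python of a caliper-style check for publication bias: if journals do not select on statistical significance, reported test statistics should fall just below the conventional 5% critical value (|z| = 1.96) roughly as often as just above it. This is one common diagnostic rather than necessarily the authors' exact procedure, and the z-statistics, the caliper width, and the helper name caliper_test are all illustrative assumptions.

    # A caliper-style check for publication bias: with no selection on significance,
    # |z| should fall just below the 5% critical value (1.96) about as often as just above it.
    from scipy.stats import binomtest

    def caliper_test(z_stats, critical=1.96, caliper=0.20):
        """Compare counts of |z| just above vs. just below the critical value."""
        over = sum(1 for z in z_stats if critical < abs(z) <= critical + caliper)
        under = sum(1 for z in z_stats if critical - caliper <= abs(z) <= critical)
        # Under the no-bias null, an observation inside the caliper is equally
        # likely to land on either side of the threshold.
        result = binomtest(over, n=over + under, p=0.5)
        return over, under, result.pvalue

    # Hypothetical z-statistics recovered from published coefficient/standard-error pairs.
    z_reported = [1.99, 2.03, 2.10, 1.97, 2.25, 1.80, 2.01, 2.07, 1.91, 2.15]
    print(caliper_test(z_reported))

A surplus of statistics just over the threshold relative to just under it is the signature such a test looks for; the binomial p-value quantifies how surprising that imbalance would be absent selection.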


Will Kalkhoff and Shane R. Thye, Expectation States Theory and Research: New Observations From Meta-Analysis, Sociological Methods & Research 2006 35: 219-249.

February 7, 2010

Over the past 50 years, the expectation states research program has generated a set of interrelated theories to explain the relation between performance expectations and social influence. While that relationship is now axiomatic, the reported effects differ in magnitude, sometimes widely. The authors present results from the first formal meta-analysis of expectation states research on social influence. Their findings indicate that theoretically unimportant study-level differences alter expectation states and the baseline propensity to accept or reject social influence. Data from 26 separate experiments reveal that protocol variations, including the use of video and computer technology, sample size, and the number of trials, have important but previously unrecognized effects. The authors close by discussing the more general implications of their research for future investigators.

Key Words: status • expectation states • rewards • social influence • meta-analysis
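For readers curious about the mechanics behind a meta-analysis like this, here is a minimal sketch of the standard pooling step: inverse-variance weighting with a DerSimonian-Laird estimate of between-study variance. It is a generic random-effects model, not the authors' specification, and the effect sizes, sampling variances, and the helper name random_effects_pool are illustrative assumptions.

    # A generic DerSimonian-Laird random-effects pooling of study effect sizes,
    # the kind of aggregation a meta-analysis of experiments rests on.
    import numpy as np

    def random_effects_pool(y, v):
        """Pool effect sizes y with sampling variances v across studies."""
        y, v = np.asarray(y, float), np.asarray(v, float)
        w = 1.0 / v                                  # fixed-effect (inverse-variance) weights
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q heterogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance (DL estimator)
        w_star = 1.0 / (v + tau2)                    # random-effects weights
        pooled = np.sum(w_star * y) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return pooled, se, tau2

    # Hypothetical per-experiment effect sizes (e.g., proportion of influence rejections)
    # and their sampling variances.
    y = [0.62, 0.58, 0.70, 0.65, 0.55]
    v = [0.004, 0.006, 0.003, 0.005, 0.007]
    print(random_effects_pool(y, v))

A nonzero between-study variance (tau2) is what signals that study-level differences, such as the protocol variations the authors highlight, are doing real work beyond sampling error.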


Gerty J. L. M. Lensvelt-Mulders, Joop J. Hox, Peter G. M. van der Heijden, and Cora J. M. Maas, Meta-Analysis of Randomized Response Research: Thirty-Five Years of Validation, Sociological Methods & Research 2005 33: 319-348.

February 7, 2010

This article discusses two meta-analyses on randomized response technique (RRT) studies, the first on 6 individual validation studies and the second on 32 comparative studies. The meta-analyses focus on the performance of RRTs compared to conventional question-and-answer methods. The authors use the percentage of incorrect answers as the effect size for the individual validation studies and the standardized difference score (d-probit) as the effect size for the comparative studies. Results indicate that compared to other methods, randomized response designs result in more valid data. For the individual validation studies, the mean proportion of incorrect answers in the RRT condition is .38; for the other conditions, it is .49. The more sensitive the topic under investigation, the higher the validity of RRT results. However, both meta-analyses have unexplained residual variances across studies, which indicates that RRTs are not completely under the control of the researcher.

Key Words: randomized response • meta-analysis • multilevel • sensitive topics
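To make the two quantities in the abstract concrete, here is a minimal sketch in Python of (a) the prevalence estimator behind a forced-response randomized response design and (b) a probit-scale standardized difference between two proportions, in the spirit of the d-probit effect size. The design parameters (p_truth, p_forced_yes), the observed yes rate, and the helper names are illustrative assumptions rather than values from the reviewed studies.

    # Two building blocks behind the comparison in the abstract: an unbiased prevalence
    # estimator for a forced-response RRT design, and a probit-scale standardized
    # difference between two proportions in the spirit of d-probit.
    from scipy.stats import norm

    def rrt_prevalence(yes_rate, p_truth=0.75, p_forced_yes=0.125):
        """Forced-response design: observed yes rate = p_truth * prevalence + p_forced_yes,
        so the prevalence can be backed out from the observed rate."""
        return (yes_rate - p_forced_yes) / p_truth

    def d_probit(p_treatment, p_control):
        """Standardized difference between two proportions on the probit scale."""
        return norm.ppf(p_treatment) - norm.ppf(p_control)

    # Using the proportions of incorrect answers reported in the abstract (.38 vs .49):
    print(d_probit(0.38, 0.49))          # negative: fewer incorrect answers under RRT
    # Hypothetical observed yes rate of .41 under the assumed design parameters:
    print(rrt_prevalence(0.41))          # backs out an estimated prevalence of .38

Because the randomizing device injects known noise, the researcher never learns any individual's true answer, yet the aggregate prevalence remains estimable; the price, as the abstract notes, is residual variance that the researcher does not fully control.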