Thursday, September 20, 2007

Chapter 4 Lauer

The Texas study by Witte, Meyer, Miller, and Faigley seems like an appropriate study to evaluate. Even though its 1981 publication makes it somewhat dated, it offers a good example of the role of basic statistics in writing research. The researchers aimed to collect descriptive data on university writing programs, guided by broad research questions related to program evaluation. In other words, this was no small task! The subjects were originally 550 writing program directors, though only 259 agreed to complete the survey and only 127 actually did. The low response rate is typical of surveys, though I would have expected a commitment to complete one (especially by a writing director) to carry some weight. The questionnaire addressed various aspects of the writing program. Theory guided the context of the work, since it built on previous researchers and their studies. In this study, data collection seemed rather straightforward, though perhaps a bit ambitious, since a writing program has many facets, not all of which can be adequately covered by even the most promising survey.
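To put those numbers in perspective, here is a quick sketch of the response-rate arithmetic (the figures are the ones reported above; the variable names are just mine for illustration):

```python
# Figures as reported in the study: directors contacted, agreed, completed
contacted = 550
agreed = 259
completed = 127

# Two ways to look at the response rate
overall_rate = completed / contacted   # completions against everyone contacted
follow_through = completed / agreed    # completions against those who agreed

print(f"Overall response rate: {overall_rate:.1%}")        # roughly 23%
print(f"Follow-through among agreers: {follow_through:.1%}")  # roughly 49%
```

So even among directors who committed to the survey, only about half followed through, which is the "commitment" puzzle noted above.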

One of the pitfalls was the lack of a central source of writing programs in the United States. The survey, therefore, did not include as representative a sample as the researchers would have liked. Correction factors were not needed in the final data analysis, since the sample size was not close to the population size. The data collection aimed at in-depth responses rather than all multiple choice; that, however, meant the response rate was low, since the survey was taxing. Even though the researchers compensated for this weakness by treating separate strata as populations, compensation is not as good as a strong response rate to begin with.

Learning about writing programs is also probably not best handled by consulting writing directors alone. Certainly they would have a lot to say about the program, but a more balanced approach would yield more (and perhaps more relevant) information. A potential problem, though, would be information overload.

I was surprised the researchers did not have access to a central database of programs, though given the limits of computing in 1981, that fact is perhaps not so surprising. Without easily accessible spreadsheets or data compilations, the pre-computer era surely involved a lot of work hunting down information that today takes little effort to find. Now, the problem is too much information!

1 comment:

Kris said...

You make a good point about the rate of return on surveys, Bethany. I wonder what difference email, discussion groups such as WPA, or even the survey software packages now available free online would have made. Regardless of those possibilities, it's likely there would have been sample size issues if the study were done today. Part of it is the information overload and that "lack of commitment" that surveys often promote. What would have been the role of triangulation in this process, of multiple types of data to help create the richer picture that is necessary to critically investigate writing programs? No right answers on this one, but clearly you're right on with the sense that then and now, a survey needs supplement.