As discussed in the Lauer book, in 1981 Schuessler, Gere, and Abbott tried to construct scales for teacher attitudes related to writing. They used 46 items in five scales, drawn from a Composition Questionnaire from the Commission on Composition of the National Council of Teachers of English. One scale was eliminated because its nine items had low reliability, leaving four scales. The alpha coefficients were: "1. attitudes toward the instruction of the conventions of standard written English, r = .72; 2. attitudes toward the development of students' linguistic maturity, r = .73; 3. attitudes toward defining and evaluating writing tasks, r = .70; and 4. attitudes toward the importance of student self-expression, r = .74" (137).
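Out of curiosity, here is a minimal sketch of how an alpha coefficient like those reported above can be computed from raw item responses. The five-item scale and the teacher responses below are invented for illustration; this is not the researchers' data or code, just the standard Cronbach's alpha formula applied to a toy set of Likert-style answers.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: six teachers answering a five-item attitude scale (1-5).
responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 5, 4, 4],
    [3, 2, 3, 3, 3],
])

# High alpha here only because the made-up items track each other closely.
print(round(cronbach_alpha(responses), 2))
```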
In 1987, a similar topic was studied, with findings that "suggest that certain attitudes, such as concern with individual writers' development, an understanding of the flexibility of language, and a desire to de-emphasize grades, rules, and rigid formats, facilitate better student attitudes. (Four tables are included; 17 references are attached; and samples of the Reigstad and McAndrew 'Writing Attitude Scale' and the Gere, Schuessler and Abbott 'Composition Opinionnaire' used in the study are appended.)" This information comes from ERIC document ED347536, and I found it interesting that this study went back to Schuessler's work. Scales of teacher attitudes seem like an area that needs more research, even today, as a means of getting away from the sheer "number obsessiveness" of standardized tests. While quantitative means are certainly used to assess even attitudes, it makes sense that those teaching writing should have a say about student writing practices. I'm sure different scales, with different alpha coefficients, could have been used, or current studies could adapt these for a specific purpose. A range of .72 to .74 is not large, though the four scales listed above were also quite similar.
It does seem, though, that things like "linguistic maturity" are difficult to define, and despite efforts to be objective, I would think that using such language naturally calls the validity of the research into question. I know that many terms are rather ambiguous and that we work with them in the field. I notice a drastic difference between the language used in composition and in the hard sciences, which of course is not surprising. I admire our field's persistence in doing quantitative research, though.
Friday, October 12, 2007
Lauer Chapter 6
Note: My Lauer chapter 5 post seems to have cut off a sentence or more at the beginning. I just noticed this while creating this post.
The Suddarth and Wirt study seems useful, even for today. Colleges are constantly trying to find ways to place students in appropriate courses. The researchers wanted to predict course placement using pre-college information, so the research question asked what information would lead to appropriate college placement. The subjects were 5,000 freshmen at Purdue University in 1971. The context of this study was an era when appropriate course placement was (perhaps?) a newer field of research. The students were to be placed in composition, mathematics, and chemistry courses. The data selection included ten predictor variables: high-school rank, high-school GPA, SAT verbal score, SAT math score, semesters of foreign language, high-school English grade, semesters of math, semesters of science, math grades, and science grades. The analysis used regression equations, as no other methods across psychology as a whole were found to be as accurate. The researchers also applied the 1971 equation to data from the freshmen of 1972, which helped validate the prediction equation.
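Here is a minimal sketch of the kind of least-squares prediction equation described above, written in NumPy. The predictor values, the "true" coefficients, and the outcome (something like a first-semester composition grade) are all invented for illustration; this is not the researchers' data or their actual equation. The fit-on-1971, check-on-1972 step is only meant to mirror the cross-validation they describe.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

def fake_cohort():
    """Invented stand-ins for four of the ten predictors."""
    return np.column_stack([
        rng.uniform(1, 100, n),      # high-school rank (percentile)
        rng.uniform(2.0, 4.0, n),    # high-school GPA
        rng.integers(400, 800, n),   # SAT verbal
        rng.integers(400, 800, n),   # SAT math
    ])

def fake_outcome(X):
    """Made-up relationship plus noise, just so the regression has something to find."""
    return (0.01 * X[:, 0] + 0.8 * X[:, 1]
            + 0.002 * X[:, 2] + 0.001 * X[:, 3]
            + rng.normal(0, 0.3, n))

# Fit the regression equation on a 1971-style cohort.
X_1971 = fake_cohort()
y_1971 = fake_outcome(X_1971)
A = np.column_stack([np.ones(n), X_1971])        # add an intercept column
coefs, *_ = np.linalg.lstsq(A, y_1971, rcond=None)

# Apply the same equation to a fresh 1972-style cohort, as Suddarth and Wirt did.
X_1972 = fake_cohort()
y_1972 = fake_outcome(X_1972)
pred = np.column_stack([np.ones(n), X_1972]) @ coefs
print("correlation of predicted vs. actual 1972 outcome:",
      round(np.corrcoef(pred, y_1972)[0, 1], 2))
```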
I do agree that the type of analysis used created a highly accurate study, as previous statistics courses have shown me the often amazing accuracy possible. If, however, I consider the context of the study, I do question some of the ten predictor variables. SAT scores do not, I believe, give any good indication of an appropriate math course for college for most students. Generally, using past performance to predict future performance is questionable, though I realize that with a large population this reasoning and analysis can (and does) produce meaningful results. I suppose I like to consider more of the psychology behind the issue of course placement. Will, for example, a student starting college want a fresh start and surpass the old expectations? Or will he or she do much worse than expected, given the psychological demands of college? My feeling is that past scores and grades can be good predictors for some students, though not for others. Of course, every student cannot have a personal college coach who looks out for his or her best interest in courses. Advisers are overworked and not knowledgeable about every single student, so analysis with large amounts of data seems to be the next best solution. With the increased use of computer technology, perhaps course placement will change. Here I am thinking of online essay "tests" for composition courses, such as those at BGSU.
Wednesday, October 3, 2007
Lauer Chapter 5
The Paul Diederich study seems appropriate to analyze given that it took place at my old school in Champaign, IL. For his study, Diederich asked whether the reliability of grades on essays could be improved. Even today this is a pressing issue, mainly with ETS and similar services. He used 300 papers (written by subjects who were all college students in their first month of school), and the papers covered three universities. The context of the study involved a highly specialized research team with clear goals. For instance, sixty readers (53, as it eventually turned out) graded the essays. These readers included professionals such as English teachers, writers, editors, lawyers, etc. Diederich gathered merit ratings from 1 to 9, in addition to comments provided by the readers regarding likes and dislikes about the essays; the 300 papers produced 11,018 comments, which were grouped under 55 headings. The following five factors were considered: ideas, organization, wording, flavor, and conventions like punctuation. The study specifically looked at the factors that EDUCATED readers consider when grading essays. When analyzing the data, Diederich used statistical measures such as the standard deviation, which he compared with percentiles, standard scores, range, and letter grades.
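To make those statistics concrete for myself, here is a minimal sketch in NumPy of the kinds of summary numbers involved: the spread of merit ratings a single essay might receive, expressed as a standard deviation, standard scores, the range, and a rough percentile rank. The ratings are invented, not Diederich's data.

```python
import numpy as np

# Hypothetical merit ratings (1-9 scale) that ten readers might give one essay.
ratings = np.array([3, 5, 4, 7, 4, 6, 5, 3, 8, 5])

mean = ratings.mean()
sd = ratings.std(ddof=1)                  # sample standard deviation of the ratings
z_scores = (ratings - mean) / sd          # standard score for each reader's rating
spread = ratings.max() - ratings.min()    # range of ratings on the same essay

print(f"mean={mean:.2f}, sd={sd:.2f}, range={spread}")
print("rough percentile rank of a rating of 7 within this set:",
      round((ratings < 7).mean() * 100))
print("standard scores:", np.round(z_scores, 2))
```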
The range of readers in this study allows for more representative comments and scores; the readers were not all, for example, English teachers. The readers were not randomly selected, however, though the study has been replicated with similar results. I know that ETS has little variance in the scores of their essays, so they seem to have a reliable system down. Whether or not it is valid is another story. Diederich used various readers, but I wonder if the variety of his readers was almost too great. Would, for example, a lawyer and a social science teacher grade in a similar manner? Should they? I am not sure that context was considered in this study to a great enough extent. That is, the reliability of grades on essays does not really seem to be the pressing question. Each essay for each subject will be written under a certain set of conventions, and what works for one paper might not work for another. Also, I would think using three different universities could produce quite different essays, though I do not know which universities were used. There does need to be a way to evaluate writing in the way ETS does when we deal with massive numbers of essay writers. That does not mean, however, that the BEST method of evaluation is being used, especially when we consider all of the various aspects that go into producing a good essay that is relevant to the context and situation.