Friday, November 16, 2007
Lauer Chapter 9
Fox's 1980 study considered writing apprehension and its effect on composition. His research question asked how writing apprehension, writing quality, and length of writing were influenced by two methods of teaching writing. His subjects were six classes of freshmen at the University of Missouri enrolled in English composition courses. The study falls into the quasi-experiment category presented in chapter 9. Graduate instructors taught the groups, and the analysis was quantitative. There were eight hypotheses (p. 180) that dealt mainly with when (during the study) students would exhibit the highest levels of writing apprehension, as well as whether the experimental group would write better posttest compositions. To measure the criterion variables, Fox used the Daly-Miller Writing Apprehension Test and a two-hour posttest, and he reported which of his results were statistically significant.
The author points out that Fox did not use the pretests and essay ratings in a repeated-measures analysis of variance, so his nonsignificant results might be questionable. With eight hypotheses, I also wonder whether the study tried to do too much. I'd like a definition of writing apprehension for the purposes of the study as well, though I'm fairly certain one was part of the study and simply not included in detail in the book.
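As a side note for my own reference, below is a minimal sketch of what a repeated-measures analysis of variance on apprehension scores might look like with today's tools (statsmodels). Everything in it is hypothetical, including the column names, the students, and the scores; it only illustrates the kind of analysis the author says Fox did not run, not Fox's actual procedure.

# Minimal sketch of a repeated-measures ANOVA on writing-apprehension
# scores measured at three points in a course. All data and column
# names are invented for illustration.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long format: one row per student per measurement occasion.
data = pd.DataFrame({
    "student":      [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "time":         ["pre", "mid", "post"] * 4,
    "apprehension": [78, 70, 62, 85, 80, 71, 66, 64, 60, 90, 83, 79],
})

# Within-subjects ANOVA: does apprehension change across the occasions?
result = AnovaRM(data, depvar="apprehension", subject="student",
                 within=["time"]).fit()
print(result.anova_table)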
Friday, November 9, 2007
Lauer Chapter 8
I'll focus on O'Hare's 1973 experiment on grammar instruction in Urbana, IL. The participants were 83 seventh graders, randomly assigned to two treatment and two control groups. The treatment groups received a shortened version of the regular English curriculum supplemented with sentence-combining exercises, taught without formal grammar instruction, while the control groups followed the regular curriculum. The research questions centered on whether the two groups would write syntactically different sentences and whether the treatment group would write better compositions. O'Hare hypothesized that the experimental group's compositions would be significantly superior to the control group's. The context of the study is the familiar one of a researcher studying writing ability, in this case with a randomized experiment. O'Hare's results included posttest means and standard deviations (p. 163 has the full list) for six criterion variables, and he concluded that sentence combining produced large effect sizes.
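Since the chapter reports posttest means and standard deviations and then talks about effect sizes, it may help to show how a standardized effect size such as Cohen's d is computed from exactly those numbers. The sketch below is only an illustration of the idea; the values are invented, not O'Hare's reported ones.

# Minimal sketch of a standardized effect size (Cohen's d) computed from
# posttest means and standard deviations. The numbers below are invented
# for illustration; they are not O'Hare's reported values.
import math

def cohens_d(mean_exp, sd_exp, n_exp, mean_ctrl, sd_ctrl, n_ctrl):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_var = ((n_exp - 1) * sd_exp**2 + (n_ctrl - 1) * sd_ctrl**2) / (n_exp + n_ctrl - 2)
    return (mean_exp - mean_ctrl) / math.sqrt(pooled_var)

# Hypothetical posttest scores on one criterion variable.
d = cohens_d(mean_exp=9.1, sd_exp=1.4, n_exp=41,
             mean_ctrl=7.8, sd_ctrl=1.5, n_ctrl=42)
print(f"Cohen's d = {d:.2f}")  # values around 0.8 or above are usually called large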
O'Hare was wise to include a composition pretest. Since some students dropped out, the pretest allowed him to determine how the dropouts affected the data.
After analysis, O'Hare concluded that sentence combining caused the changes his hypotheses predicted.
I would have expected more students to drop out of the study, given that 83 participated. Had that happened, the results would have been harder to interpret, even with the pretest. Keeping participants in a study they have signed up for is difficult, as I know from having been a participant in various studies. I would also expect external variables to affect the study, though in this case it's difficult to say how much. It wasn't clear how these seventh graders differed from one another. Sentence combining also seems dated today, at least to the extent that I do not often hear of it being taught.
Wednesday, October 31, 2007
Lauer Chapter 7
As discussed in the Lauer book, in 1981 Schuessler, Gere, and Abbott developed scales for measuring teacher attitudes related to writing. They used 46 items spread across five scales, drawn from a Composition Questionnaire from the Commission on Composition of the National Council of Teachers of English. One scale was eliminated because nine items had low reliability, leaving four scales in use. The alpha coefficients were: "1. Attitudes toward the instruction of the conventions of standard written English, r=.72; 2. attitudes toward the development of students' linguistic maturity, r=.73; 3. attitudes toward defining and evaluating writing tasks, r=.70; and 4. attitudes toward the importance of student self-expression, r=.74" (137).
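Those alpha coefficients are Cronbach's alpha reliability estimates. As a reminder to myself of what such a number summarizes, here is a minimal sketch of the computation on an invented five-item scale; the respondents and responses are made up and are not from the Composition Questionnaire.

# Minimal sketch of Cronbach's alpha for one attitude scale.
# The responses below are invented for illustration.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: rows = respondents (teachers), columns = items on one scale."""
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert responses (1-5) from eight teachers on a five-item scale.
responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 5, 4, 4],
    [3, 4, 3, 3, 4],
    [5, 4, 5, 5, 5],
    [2, 2, 3, 2, 2],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")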
In 1987, a similar topic was studied, with findings that "suggest that certain attitudes, such as concern with individual writers' development, an understanding of the flexibility of language, and a desire to de-emphasize grades, rules, and rigid formats, facilitate better student attitudes. (Four tables are included; 17 references are attached; and samples of the Reigstad and McAndrew "Writing Attitude Scale" and the Gere, Schuessler and Abbott "Composition Opinionnaire" used in the study are appended.)" This information comes from ERIC document ED347536, and I found it interesting that this study went back to Schuessler's work. Scales of teacher attitudes seem like an area that needs more research, even today, as a way of getting away from the sheer "number obsessiveness" of standardized tests. While quantitative means are certainly used to assess even attitudes, it makes sense that those teaching writing should have a say about student writing practices. I'm sure current studies could adapt the scales for specific purposes, and different alpha coefficients would likely result. The range of .72 to .74 is not large, though the four scales listed above were also quite similar.
It does seem, though, that things like "linguistic maturity" are difficult to define, and despite efforts to be objective, I would think that using such language naturally calls the validity of the research into question. I know that many terms are rather ambiguous and that we work with them in the field. I notice a drastic difference between the language used in composition and in the hard sciences, which of course is not surprising. I admire our field's persistence in doing quantitative research, though.
Friday, October 12, 2007
Lauer Chapter 6
Note: My Lauer chapter 5 post seems to have cut off a sentence or more at the beginning. I just noticed this while creating this post.
The Suddarth and Wirt study seems useful even today. Colleges are constantly trying to find ways to place students in appropriate courses. The researchers wanted to predict course placement from pre-college information, so the research question asked what information would lead to appropriate college placement. The subjects were 5,000 freshmen at Purdue University in 1971, to be placed in composition, mathematics, and chemistry courses; the context was an era when appropriate course placement was (perhaps) a newer area of research. Data selection involved ten predictor variables: high-school rank, high-school GPA, SAT verbal score, SAT math score, semesters of foreign language, high-school English grade, semesters of math, semesters of science, math grades, and science grades. The analysis used regression equations, since no other methods had been found to be as accurate in psychological research as a whole. The researchers also applied the 1971 equation to data from the 1972 freshman class, which helped validate the prediction equation.
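To make that fit-then-validate design concrete, here is a minimal sketch of the same pattern with scikit-learn: build a regression equation on one year's class and score it on the next. The cohorts and the two predictors are randomly generated stand-ins, not Purdue's actual records or the study's ten predictors.

# Minimal sketch of the fit-then-validate pattern described above:
# build a prediction equation on one year's class, check it on the next.
# The data are randomly generated stand-ins, not the study's data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def fake_cohort(n):
    """Two illustrative predictors (HS GPA, SAT math) and a course grade."""
    hs_gpa = rng.uniform(2.0, 4.0, n)
    sat_math = rng.uniform(400, 800, n)
    grade = 0.8 * hs_gpa + 0.002 * sat_math + rng.normal(0, 0.3, n)
    return np.column_stack([hs_gpa, sat_math]), grade

X_1971, y_1971 = fake_cohort(5000)   # cohort used to build the equation
X_1972, y_1972 = fake_cohort(5000)   # later cohort used to validate it

model = LinearRegression().fit(X_1971, y_1971)
print("R^2 on 1971 data:", round(model.score(X_1971, y_1971), 3))
print("R^2 on 1972 data:", round(model.score(X_1972, y_1972), 3))  # cross-year check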
I agree that the type of analysis used made for a highly accurate study; previous statistics courses have shown me how accurate such predictions can be. If, however, I consider the context of the study, I do question some of the ten predictor variables. SAT scores do not, I believe, give a good indication of an appropriate college math course for most students. Using past performance to predict future performance is generally questionable, though I realize that with a large population this reasoning and analysis can and does produce meaningful results. I suppose I like to consider more of the psychology behind course placement. Will, for example, a student starting college want a fresh start and surpass the old expectations? Or will he or she do much worse than expected given the psychological demands of college? My feeling is that past scores and grades can be good predictors for some students but not for others. Of course, not every student can have a personal college coach looking out for his or her best interests in courses. Advisers are overworked and not knowledgeable about every single student, so analysis with large amounts of data seems to be the next best solution. With the increased use of computer technology, perhaps course placement will change. Here I am thinking of online essay "tests" for composition courses, such as those at BGSU.
Wednesday, October 3, 2007
Lauer Chapter 5
gathered merit ratings from 1 to 9, in addition to comments provided by the readers regarding likes and dislikes about the essays. The 300 papers produced 11,018 comments, which were grouped under 55 headings. The Paul Diederich study seems appropriate to analyze, given that it took place at my old school in Champaign, IL. For his study, Diederich asked whether the reliability of grades on essays could be improved. Even today this is a pressing issue, mainly with ETS and similar services. He used 300 papers, written by subjects who were all college students in their first month of school, drawn from three universities. The context of the study involved a highly specialized research team with clear goals. For instance, sixty readers (eventually 53) graded the essays; they included professionals such as English teachers, writers, editors, lawyers, and so on. The following five factors were considered: ideas, organization, wording, flavor, and conventions such as punctuation. The study specifically looked at the factors that EDUCATED readers consider when grading essays. In analyzing the data, Diederich used statistics such as the standard deviation, which he compared with percentiles, standard scores, the range, and letter grades.
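For my own reference, here is a minimal sketch of the kinds of summaries mentioned in that last sentence: converting raw merit ratings into a mean, a standard deviation, standard scores, and rough percentile ranks. The ratings are invented, not Diederich's data.

# Minimal sketch of descriptive statistics on raw merit ratings (1-9).
# The ratings below are invented for illustration.
import numpy as np
from scipy import stats

ratings = np.array([3, 5, 6, 4, 7, 5, 8, 2, 6, 5, 4, 9, 5, 6, 3])

mean, sd = ratings.mean(), ratings.std(ddof=1)
z_scores = (ratings - mean) / sd                      # standard scores
# rough percentile ranks based on average ranks
percentiles = stats.rankdata(ratings, method="average") / len(ratings) * 100

print(f"mean = {mean:.2f}, sd = {sd:.2f}, range = {ratings.min()}-{ratings.max()}")
print("first essay: z =", round(z_scores[0], 2), "percentile ~", round(percentiles[0]))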
The range of readers in this study makes for more accurate comments and scores; the readers were not, for example, all English teachers. They were not randomly selected, however, though the study has been replicated with similar results. I know that ETS has little variance in the scores of its essays, so it seems to have a reliable system down. Whether or not it is valid is another story. Diederich used a variety of readers, but I wonder if the variety was almost too great. Would, for example, a lawyer and a social science teacher grade in a similar manner? Should they? I am not sure that context was considered in this study to a great enough extent. That is, the reliability of grades on essays does not really seem to be the pressing question. Each essay for each subject will be written under a certain set of conventions, and what works for one paper might not work for another. I would also think that using three different universities could produce quite different essays, given that I do not know which universities were used. There does need to be a way to evaluate writing the way ETS does when we deal with massive numbers of essay writers. That does not mean, however, that the BEST method of evaluation is being used, especially when we consider all of the aspects that go into producing a good essay that is relevant to its context and situation.
Thursday, September 20, 2007
Chapter 4 Lauer
The Texas study by Witte, Meyer, Miller, and Faigley seems like an appropriate study to evaluate. Although its 1981 date makes it somewhat dated, it offers a good example of the role of basic statistics in writing research. The researchers aimed to collect descriptive data on university writing programs, guided by broad research questions related to program evaluation. In other words, this was no small task! The subjects were originally 550 writing program directors; 259 agreed to complete the survey, and 127 actually did. The small response rate is typical of surveys, though I would have expected a commitment to complete one (especially by a writing director) to carry some weight. The questionnaire addressed various aspects of the writing program. Theory guided the context of the work, since it built on previous researchers and their studies. Data collection seemed rather straightforward, though perhaps a bit ambitious, since a writing program has many facets, not all of which can be adequately covered by even the most promising survey.
One of the pitfalls was the lack of a central listing of writing programs in the United States. The survey, therefore, did not include as representative a sample as the researchers would have liked. Correction factors were not needed in the final data analysis, since the sample size was nowhere near the population size. The data collection aimed at in-depth responses rather than all multiple choice; that, however, meant the response rate was low, since the survey was taxing. Even though the researchers compensated for this weakness by treating separate strata as populations, compensation is not as good as a strong response rate in the first place.
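For context on those correction factors: the usual adjustment when a sample makes up a large fraction of its population is the finite population correction to the standard error. The sketch below uses the study's 127 respondents but a hypothetical standard deviation and hypothetical population sizes, just to show why the correction only matters when the sample approaches the population.

# Minimal sketch of the finite population correction (FPC): the standard
# error of a sample mean shrinks when the sample is a large fraction of
# the population. The standard deviation and population sizes are hypothetical.
import math

def standard_error(sd, n, population=None):
    se = sd / math.sqrt(n)
    if population is not None:
        se *= math.sqrt((population - n) / (population - 1))  # FPC factor
    return se

sd, n = 12.0, 127                                          # illustrative values
print(round(standard_error(sd, n), 3))                     # no correction
print(round(standard_error(sd, n, population=10000), 3))   # negligible change
print(round(standard_error(sd, n, population=300), 3))     # noticeable shrinkage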
Learning about writing programs is also probably not best handled by consulting writing program directors alone. Certainly they would have a lot to say about their programs, but a more balanced approach would yield more (and perhaps more relevant) information. A potential problem, though, would be information overload.
I was surprised the researchers did not have access to a central database of programs, though given the limitations of computers in 1981, that fact is perhaps not so surprising. Without easily accessible spreadsheets or data compilations, researchers in the pre-computer era surely spent a great deal of effort hunting down information that today takes little time to find. Now the problem is too much information!
Wednesday, September 12, 2007
Chapter 3 Lauer
I'll discuss Florio and Clark's ethnography, since I find writing at the elementary school level fascinating. Their research question asked why children's enjoyment and sense of competence in writing declined even though their writing improved, according to the 1981 National Assessment of Educational Progress. The subjects were second- and third-grade students in an open classroom in a Midwestern city, on the grounds of a land-grant university, and the economic status of the children varied. Regarding context, the researchers chose this site because they believed it would be an ideal place to study attitudes toward writing. Student backgrounds, the role of writing in building community, and the options provided by the room setup helped Florio and Clark identify four functions of writing: participating in the community, knowing oneself and others, occupying free time, and demonstrating academic competence. They collected their data ethnographically, and the final report even included samples of students' writing; a lot of "thick description" was used in the data section as well. They determined that the social context of the classroom helps or hinders the writing process, a conclusion that seems completely obvious. Because the open classroom was an atypical setup, I question how we might generalize (if we do so) to a larger population. Also, the classroom seems rather progressive, and being on university grounds might mean these students were exposed to a writing program stronger than many others in the nation. Typically, I have found that schools connected with universities do quite well.
I'm also not convinced that the research question was exactly what needed to be studied. It seems that educators can usually raise scores or "quality" of work, but this would, I expect, often come at the expense of a student's enjoyment of the subject. Surely the massive testing common in today's elementary schools causes students to like certain subjects less, though I'm also sure their test scores go up when they are drilled on information. It seems like this study really could have touched more upon what educators might do (or do already) to help retain or create a student's sense of enjoyment and perceived competence in a subject.