EVALUATING THE EFFECTIVENESS OF CURRENT ELEMENTARY GRADING PRACTICES AND DETERMINING PERCEPTIONS OF A STANDARDS-BASED REPORT CARD IN THE DOVER AREA SCHOOL DISTRICT

A Doctoral Capstone Project Submitted to the School of Graduate Studies and Research, Department of Education, In Partial Fulfillment of the Requirements for the Degree of Doctor of Education

Bobbie M. Strausbaugh
California University of Pennsylvania
June 2022

© Copyright by Bobbie M. Strausbaugh
All Rights Reserved
June 2022

California University of Pennsylvania
School of Graduate Studies and Research
Department of Education

We hereby approve the capstone of Bobbie M. Strausbaugh, Candidate for the Degree of Doctor of Education.

Dedication

I dedicate this work to the students and families of the Dover Area School District. An anomaly in this day and age, my entire educational career has been with a single school district: the Dover Area School District. I have had experiences and built relationships that have enriched my life over my years of work in the district. It has been an honor to complete this work, and I hope the district benefits from the research results.

I also dedicate this work to the memory of my mother, Gloria Mitzel. Although she was not able to witness the majority of my educational career, her desire and passion for me to dedicate myself to a career serving others started me on my educational journey. She taught me that everything else falls into place when you put people first, and my ongoing desire is to make her proud. To this day, she inspires me to stay strong while also being vulnerable and accepting of challenges that allow me to grow and learn. I hope she is proud of this work.

Acknowledgements

This work would not have been possible without the support of others to whom I owe thanks and gratitude. I want to acknowledge the Dover Area School District for consenting to be the setting for this research. Also, thank you to Dr. Todd Keruskin and Dr. Laura McCusker for serving as doctoral capstone committee chairs. Their support and feedback were invaluable throughout the capstone process.

Next, I need to acknowledge and thank my former colleague and forever friend, Laurie Heyer, for her friendship and much-appreciated editing assistance. To Kathy Guyer, I also extend my gratitude. Our collegial discussions and reflections always help us grow as professionals, and I am thankful for your compassion, support, and friendship throughout this journey.

Finally, I must thank my family. To my father, Larry Mitzel, I thank you for modeling a solid work ethic, supporting my many endeavors, and for your unconditional love. To my sister, Terrie Goodling, thank you for the check-ins and the unique support only a sister can provide. To my daughters, Casie and Sammie, thank you for understanding the time commitment for this work. I hope this work serves as an example to set lofty goals, never stop dreaming, and never stop learning. I love you both!

Table of Contents

Dedication
Acknowledgements
Table of Contents
List of Tables
List of Figures
Abstract

CHAPTER I. Introduction
Background
Capstone Focus
Research Questions
Expected Outcomes
Fiscal Implications
Summary
CHAPTER II. Literature Review
History of Grading
Origins of Grading
European and University Influences
British and Prussian Influences
Horace Mann and American Grading Systems
Letter Grades and Percentage-Based Grades
Concerns for Letter Grades and Percentage-Based Grades
Standardized Testing
Perceptions of Traditional Grading Practices
Teachers' Perceptions of Traditional Grading Practices
Parents' Perceptions of Traditional Grading Practices
Effective Grading Practices
Standards-Based Grading
Interviews With Schools Using Standards-Based Grading
Summary

CHAPTER III. Methodology
Purpose
Setting and Participants
Research Plan
Research Design, Methods, & Data Collection
Validity
Summary

CHAPTER IV. Data Analysis and Results
Data Analysis and Results
Research Question 1
Knowledge of Current Grading Practices
Grade-Level Consistency
Mastery of Curriculum and Eligible Content
Research Question 2
Teachers' Perceptions
Parents' Understanding
Research Question 3
Discussion
Summary

CHAPTER V. Conclusions and Recommendations
Conclusions
Effectiveness
Applications
Implications
Limitations
Recommendations for Future Research
Summary

References

Appendix A. Survey #1 Informed Consent
Appendix B. Survey #1
Appendix C. Survey #2 Informed Consent
Appendix D. Survey #2
Appendix E. Survey #3 Informed Consent
Appendix F. Survey #3
Appendix G. Interview Questions Informed Consent
Appendix H. Interview Questions
Appendix I. District Letter of Support
Appendix J. District Consent to Access Data
Appendix K. IRB Approval
Appendix L. Educational Research Course Certificate
Appendix M. Conflicts of Interest Course Certificate
Appendix N. Grade Books Summary Tables
Appendix O. Grade Books Analysis Spreadsheets

List of Tables

Table 1. Number of Teachers' Downloaded Grade Books
Table 2. Participants in Survey #1, Appendix B
Table 3. Participants in Survey #3, Appendix F
Table 4. Breakdown of Questions: Survey #1, Appendix B
Table 5. Breakdown of Questions: Survey #2, Appendix D
Table 6. Breakdown of Questions: Survey #3, Appendix F
Table N1. Summary of Grade 1 Grade Books
Table N2. Summary of Grade 2 Grade Books
Table N3. Summary of Grade 3 Grade Books
Table N4. Summary of Grade 4 Grade Books
Table N5. Summary of Grade 5 Grade Books

List of Figures

Figure 1. Grade Book Sample of Types of Student Work
Figure 2. Grade Book Sample of Assessment Categories and Numbering

Abstract

This research project evaluated the effectiveness of elementary grading practices in the Dover Area School District, focusing on English language arts and math in the first through fifth grades. The study was critical because the district did not have clearly defined elementary grading guidelines, and it was unknown if consistency existed in grading practices. Also, it was not known to what extent grades measured or reflected students' mastery of content. Three research questions guided the project. The first research question examined the effectiveness of grading practices by investigating current knowledge, consistency, and how grading practices measured mastery of content.
The goal of the second research question was to determine teachers' perceptions and parents' understanding of grading practices. The third research question determined administrators' and teachers' understanding of standards-based report cards. Mixed-methods action research gathered data from teachers' grade books and three surveys. The first research question used data from grade books and a survey, with results minimally supporting effective grading practices. The second research question was answered using survey data and indicated most teachers and parents were confident in their knowledge of grading practices. However, teachers and parents were less confident that grades illustrated students' mastery of skills. The third research question used survey data and indicated that administrators and teachers had a solid understanding of standards-based report cards. Overall, the research suggested that teachers had knowledge of grading practices, but those practices were inconsistent in several ways, including how they measured and reflected students' mastery of skills and content.

CHAPTER I
Introduction

Behn (2003) wrote, " 'What gets measured gets done' is, perhaps, the most famous aphorism of performance measurement. If you measure it, people will do it. Unfortunately, what people measure often is not precisely what they want done" (p. 599). In education, performance is measured by what students can demonstrate they have learned, and education communicates learning through grading. Therefore, grading can be considered both a source of communication and a measure for improvement.

This research project will examine the effectiveness of the current Dover Area School District (DASD) elementary grading practices, emphasizing English language arts (ELA) and math in first through fifth grades. DASD elementary grading practices refer to the grading practices in the district's four elementary schools. The researcher defines the effectiveness of current grading practices as knowledge of current grading expectations, consistent grading practices across all elementary buildings, and grading practices that reflect students' mastery of state standards.

The Literature Review chapter will summarize what the literature reveals about the history, perceptions, and effectiveness of grading and provide information on standards-based grading. The Literature Review chapter will also include information from neighboring school districts using standards-based report cards.

The research questions will guide the data analysis portion of the project. The data analysis will determine the effectiveness of current DASD elementary grading practices, gather perceptions and understanding of current elementary grading practices, and determine current knowledge of standards-based grading. The action research will use a mixed-methods approach, using surveys to collect quantitative and qualitative data from teachers, administrators, and parents. The action research will also include the analysis of elementary grade books. The research project will conclude with a summary of findings and recommendations for potential changes to the current DASD elementary grading practices.

Background

The current DASD elementary grading practices are not clearly defined or articulated to teachers, students, and parents. The district does not have a written guide or manual for elementary grading practices. This absence raises several concerns about the current DASD elementary grading practices.
First, it is unknown if there is consistency in current DASD elementary grading practices across the four elementary schools. This potential inconsistency means the district does not have a common way to compare student performance across the four elementary buildings. Also, the district does not have common performance indicators to guide elementary instruction. Additionally, the district does not know to what extent grading practices assess students' mastery of state standards. Finally, the district does not know if current practices provide families with a clear understanding of their child's learning.

There are other reasons why this research is essential to the district. First, the district's current comprehensive plan includes a goal of ensuring consistent implementation of standards-aligned curricula across all schools for all students. This goal includes the development and implementation of common, standards-aligned assessments. Also, with the recent secession of a township from the DASD, there may be redistricting and possible reconfiguration. Therefore, the investigation of current DASD elementary grading practices is timely with other work in the district.

Capstone Focus

This research focuses on analysis, not necessarily recommendations for change. The researcher plans to complete the research project, present the findings, and recommend potential changes. However, after the research, the district will make decisions based on other district work, circumstances, and needs. This research project will coincide with other work in the district, namely a review of curriculum, instruction, assessment, and resources. Therefore, the research project will occur at a time when the district is receptive to evaluating its current elementary grading practices. Finally, the researcher desires this research to be informative for anyone seeking guidance on grading. The researcher is committed to the work being meaningful, regardless of whether or not the DASD decides to act upon the findings.

Research Questions

A reflection on the needs and purpose of the research project resulted in the following research questions:

1. How effective are the current DASD elementary grading practices in ELA and math? The definition of effective is knowing current grading expectations, grade-level consistency across all buildings, and grading practices that reflect students' mastery of curriculum and eligible content of Pennsylvania grade-level state standards.

2. What are DASD teachers' perceptions and parents' understanding of current DASD elementary grading practices?

3. What are DASD administrators' and teachers' perceptions and understanding of standards-based report cards?

Expected Outcomes

The potential impact of the research project will depend on the research results, answers to the research questions, and the status of the district upon research completion. As a result of the project, the researcher anticipates three potential changes to the DASD elementary grading practices. The first option will maintain the current elementary grading practices in ELA and math for grades K–5 and improve them by developing and implementing common assessments in grades K–5. The second option will be to develop and implement common assessments in grades 3–5 and a standards-based report card in grades K–2. The third option will be to develop and implement a standards-based report card for all grades.
Upon completing the research project, the DASD may decide not to act upon any recommendations.

Fiscal Implications

The first option of improving the system by developing and implementing common assessments in grades K–5 will require a three-day summer workshop to develop common assessments, at an estimated cost to the district of roughly $57,000. The second option of developing and implementing common assessments in grades 3–5 and a standards-based report card in grades K–2 will cost roughly $60,000. The third option of developing and implementing a standards-based report card for all grades also has an estimated cost of approximately $60,000. In all cases, the costs are primarily due to teachers' salaries at a per diem rate of compensation.

The second and third options could have additional technology costs if the current school information system cannot generate standards-based report cards. These additional anticipated costs are anywhere from $6,000 to $13,000.

The researcher understands that the research could result in no change. The research results may not justify a need for change, or considering change may not be practical based on the district's circumstances at the time of project completion. The researcher recognizes that the research could result in the district wanting to make changes, but the financial implications may create a barrier to any recommended changes. It is worth noting that the variation in the costs of the three options is minimal. The researcher is confident that if the district decides to proceed with any recommended changes, the district will be able to choose an option based on need, not on costs.

Summary

The research will answer three research questions related to the DASD elementary grading practices. Answering the research questions will provide insight into the effectiveness of current DASD grading practices. Answering the research questions will also provide summaries of teachers' and parents' perceptions of current grading practices and teachers' and administrators' current knowledge of standards-based report cards.

The project will begin with a review of literature related to grading practices. Understanding the history of grading practices, reasons for changes over time, and the philosophies and opinions of critics and advocates provides a foundation for evaluating the effectiveness of current grading practices.

The project will continue with data collection and analysis. Data collection and analysis of information from grade books will provide additional information to evaluate the effectiveness of current grading practices. Further data collection and analysis from surveys will help determine perceptions of current grading practices and current knowledge and understanding of standards-based report cards.

Answering the research questions will provide the DASD with valuable information about current elementary grading practices. The researcher does not necessarily intend for the research to provide recommendations. The intent is to provide helpful information for making decisions about elementary grading. Additionally, the researcher has acknowledged the desire for this research to be informative for anyone seeking guidance on grading. The aspect of the research with the potential to be most applicable to others outside the DASD is the information found in the following Literature Review chapter.
CHAPTER II
Literature Review

This review of literature about school grading practices will include historical research and a comparison of grading practices. The Literature Review chapter begins with a historical perspective on grading practices. This historical research will reveal abundant "history repeats itself" findings. These findings will draw attention to the drawbacks of grading practices and help determine how to make grading practices more effective.

Although not a significant aspect of the research, the historical research will occasionally reference how the emphasis has shifted from assessment to grading over time. Often the terms assessment and grading are used interchangeably, but there are differences between the two. The goal of assessment is to improve learning; the purpose of grading is to evaluate learning (Carnegie Mellon University, 2021). More specifically, Schneider and Hutt (2014) describe assessment compared to grading as "two different processes—that of internal communication oriented towards pedagogical concerns; and that of external communication oriented towards system-building" (p. 203). The research journey into the history of grading practices tracks how grading changed over time and shows how the emphasis moved from assessment to grading.

The review of the literature will also compare different grading practices. The research will look at standards-based reporting as an alternative to traditional grading systems. The literature treats standards-based reporting as a possible system to break the "history repeats itself" cycle and approach grading more effectively. Since standards-based grading is a relatively recent approach, not much research is available. Therefore, the researcher will interview schools presently using standards-based reporting and summarize the findings in the Literature Review chapter. The majority of the literature review thus examines grading from a historical perspective, uses that perspective to help define more effective practices, and reports on interviews with those already trying something new. The review of literature and answers to the research questions will guide a summary of the research.

The Literature Review chapter begins with research on the history of grading. Next, the chapter provides research about the traditional grading practices of letter grades and percentage-based grades and the emergence of standardized testing. The chapter continues by examining perceptions of and concerns about traditional grading and a look at what makes effective grading practices. The Literature Review chapter closes with an investigation of standards-based grading.

Curriculum and instruction are vital if we want students to succeed. Determining if students are learning is an equally essential step in assessing the effectiveness of our schools and systems. Whether referred to as assessment or grading, it follows:

While school leaders often think that pedagogical and curriculum improvements will provide the most leverage for systemic change in school reform efforts, innovative educational leaders are coming to understand the critical role that assessment plays in the teaching and learning process. When failing to close the loop, so to speak, or thinking about improving the grading/assessment piece, the optimism of reform efforts may not come to fruition and schools most likely will continue to spend money, time, and effort searching for the next "silver bullet." (Townsley & Buckmiller, 2020, p. 1)
History of Grading

Assessment is a way of knowing and understanding people and has endured over time (Walsh & Betz, 2001). Therefore, it is no surprise that assessment and grading found their way into schools. It is difficult to imagine schools without assessment and grading, and it is natural to assume they have always been a part of learning. However, Hargis (2003) wrote:

Grades are such an ingrained part of our educational system we assume they have always been with us. This, however, is not the case. Grades are a relatively new phenomenon. There is not much evidence of their use prior to the mid-nineteenth century. (p. 13)

Hargis's perspective that grades are a relatively new phenomenon does not reduce the significance of researching grading history. The grading history will include the origins of grading, European and university influences, British and Prussian influences, and Horace Mann and American grading systems.

Origins of Grading

Searches into early schooling, back to the Greeks, reveal that the key to learning was the relationship between the teacher and student and the feedback the learner received from the teacher. Assessments were typically in the form of oral exams, and most assessments were qualitative. Searches into early United States education reveal that biographies of scholars often mention under whom the learner studied. The act of scholars identifying their teachers illustrates the value placed on the relationship between teacher and student, not on an evaluation system. Hargis (2003) explains:

Until about 1850, most schools were of the rural, one-room variety. Students of all ages were mixed together and most students did not stay in school beyond the most elementary levels. The curriculum content was simple. The students generally demonstrated their competencies by reciting. Progress was indicated descriptively; the teacher would simply write down the skills a student had or had not acquired. (pp. 13–14)

European and University Influences

European styles of schooling and grading influenced early American schools. Grades and grading systems began to arise due to several events at the college level. Early references to school grading practices came from well-known institutions such as Cambridge, Yale, and Harvard. Two historically noted landmarks of grading at the college level were the grading practices of William Farish and Ezra Stiles. Depending on the research viewpoint, either man can be credited or blamed for beginning a transition from assessment to grading. The choice between crediting or blaming the men depends on whether researchers saw the change from assessment to grading as a positive one.

"In 1792, William Farish, Professor of Chemistry and Natural Philosophy at Cambridge, developed the concept of grading students' work quantitatively" (Stray, 2001, as cited in Kellaghan & Greaney, 2019, p. 51). Similarly, in 1785, Ezra Stiles, then President of Yale, used a system similar to the Cambridge Mathematical Tripos examination (Dexter, 1901). The Mathematical Tripos exam has been called "the grandparent of every university examination in the world" (Kellaghan & Greaney, 2019, pp. 51–52).

William Farish realized that evaluating his students' understanding of mathematics was challenging using the accepted grading practices of his time.
He discovered that assessing mathematics required analysis of more written work and a deeper assessment of students' reasoning. Farish was also worried about subjectivity and partiality in prior forms of evaluation. Therefore, he shifted grading to a more quantitative approach. This approach changed the emphasis to grading right-answer questions and grading understanding in specific subject areas, starting a trend of scoring more specialized information at the university level (Madaus & O'Dwyer, 1999).

According to Hargis (2003), "Yale was the first university in America to use a grading system" (pp. 14–15). In 1785, Ezra Stiles, as president of Yale, documented the exam results of 58 students in a diary. Of the 58 students, he recorded 20 as Optimi ("best"), 16 as second Optimi ("second best"), 12 as Inferiores (Boni) ("less good"), and 10 as Pejores ("worse"). According to Durm (1993), "In all probability, this was the origin of the 4.0 system used by so many colleges and universities today" (p. 295).

Following Farish and Stiles, grading practices by instructors at Harvard continued to influence grading in America. At Harvard, between 1877 and 1895, grading systems such as divisions, letter grades, percentages, classes, and pass-or-fail systems developed over time. Drawing on these approaches, Mount Holyoke College, in 1898, designed a system that combined them: a system of letter grades based on percentage ranges. The system assigned "A" to grades between 95% and 100%, "B" for 90%–94%, "C" for 85%–89%, "D" for 80%–84%, "E" for 75%–79%, and "F" for failing. This system became a model for college grading systems (Durm, 1993).

British and Prussian Influences

College-level grading practices trickled down to high schools and younger learners. Also, many American schoolmasters migrated from Britain, so university philosophies and British schooling practices influenced the development of American schools. Schneider and Hutt (2014) reference an 1824 book by British teacher John Shoveller. In the appendix of Shoveller's book is an example of calculating a week's worth of work. There is a table with columns for each class, each student, and each day of the week. Each day of the week had a specific point value, and the teacher totaled the earned points at the end of the week. Based on the weekly total, the teacher leveled the students as Optimé, Bené, Malé, or Pessimé. These descriptions share some similarities with Ezra Stiles' previously referenced grading categories. Despite being printed in 1824, the table bears a striking resemblance to a modern-day teacher's grade book (Shoveller, 1824).

Horace Mann and American Grading Systems

In the 1830s, Horace Mann, a Massachusetts legislator and secretary of the state's board of education, began to advocate for public schools funded by the state. Mann advocated for free public schools available to all children. Mann and other supporters referred to these schools as common schools. They stood on the platform that public investment in education would benefit the nation by teaching children to be literate, moral, and productive citizens (Center on Education Policy, 2020).

As Mann advocated for common schools, he also became intrigued by the Prussian school systems. Napoleon defeated the Prussians in 1806, and the Prussians felt they lost the Battle of Jena because soldiers did not follow orders.
As a result, Prussian school systems changed. The Prussians decided to fully educate only a tiny percentage of citizens. This small percentage, the elite, was educated to a level of independent thinking. The rest of the citizens, the majority, were educated to learn harmony, obedience, and the ability to follow orders. Accordingly, Prussian schools introduced a grading system that emphasized compliance with pedagogical learning and decreased the importance of independent thinking. This schooling system appealed to Horace Mann and others (Sundeen, 2018).

Prussian schools organized curriculum into grade levels, and students could work at their own pace through the grade levels. As a result of Horace Mann's influence on schools, grading became a way to relay pedagogical learning and organize schooling and learning. Prussian school models and Mann's common schools were particularly applicable to rural schools that lacked record-keeping systems. Students in these smaller schools had inconsistent attendance, and students of various ages still attended school together (Schneider & Hutt, 2014).

While Prussian school models and Horace Mann's influence were taking hold, the Lancasterian, or monitorial, model was also gaining attention. Developed by Joseph Lancaster, founder of an elementary school for the poor in London, the monitorial system began as an inexpensive way to teach many students. The teacher taught lessons to students who earned high test scores. These students were called monitors, and the monitors had many responsibilities in the school. Monitors managed classroom materials, taught lessons to students of all ages, administered exams to other students, and made decisions about promotions to new classes. Students rose to the rank of monitor by acing exams, and monitors were given special privileges, earned badges, or received small prizes (Blakemore, 2018). The monitorial model was used more often in urban schools with higher attendance and enrollment. However, concerns developed over the system's emphasis on competition and its de-emphasis on learning (Schneider & Hutt, 2014).

Horace Mann continued to influence education through the 1830s and 1840s and beyond. In his pursuit of designing an education system, he also recognized some downfalls of the Prussian and monitorial models. Mann was concerned about students' motivation, acquisition of knowledge, and ability to think. He worried students would become so focused on the resulting grades that learning and thinking would be compromised. He adjusted his philosophies over time toward graded steps, periodic quizzes and written exams, and the use of monthly report cards. The report cards kept a running record of each student's work (Schneider & Hutt, 2014).

According to some research, Horace Mann's 1845 implementation of grades and report cards was the first recorded use of a grading system in United States public schools. Students took exams with right and wrong answer choices and earned percentage grades on the exams. This grading system worked alongside the ranking system for which Mann had earlier advocated. However, Mann hoped to reduce teacher inconsistencies and partiality by using percentages (Tocci, 2008).

In summary, the early and mid-nineteenth centuries saw changes in American schools that moved the focus from assessment to grading. Early influences on grading came from Europe and the university level.
As Horace Mann introduced common schools, rural schools changed their grading practices to follow Prussian models. Urban schools that followed the monitorial system needed to reduce competition and organize more at the systems level. These events also coincided with the American industrial revolution, a time of increasing population and a national desire to industrialize and seek efficiency. Together, these forces drove continued changes in education. Schneider and Hutt (2014) summarize:

Taken together, these developments show how much American grading systems diverged from their early European origins. As the system of common schools took root in America, reformers recognized the need for grades to act as important internal organizational devices—to maintain student motivation while minimizing competition and emphasizing the accretion of knowledge. But though reformers were coming to a general consensus about the purpose of grades, they had yet to standardize the practices themselves. That was a task that would take on increasing importance as both the public education system and society, as a whole, grew more complex in the last decades of the nineteenth century. (p. 207)

Letter Grades and Percentage-Based Grades

In the late nineteenth century, the needs of school grading continued to change. Because of the increase in public schools, there was a need to educate more children, and grading practices needed to become more standardized so one school could share grades with another. The most common grading systems to arise used letter grades and percentage-based grades. Research credits Harvard in 1883 and Mount Holyoke College, as previously noted, with starting the use of letter grades. The systems assigned "A" to grades between 95% and 100%, "B" for 90%–94%, "C" for 85%–89%, and "D" for 80%–84%. Originally a letter grade of "E" was assigned to 75%–79%, and a letter "F" represented failing. The systems eventually dropped the letter grade "E," with no single agreed-upon explanation for the change. Assigning letter grades to percentage ranges led to versions of the familiar 4.0 grading system (Durm, 1993; Schinske & Tanner, 2014).

Finkelstein (1913) studied a variety of grading systems. He wrote of a two-division system using the divisions "passed" and "not passed." He looked at a three-division system using the divisions "inferior," "mediocre," and "superior." He saw no justification for a four-division system and proposed a variety of five-division systems using letter grades. The five-division systems used five letters, either "A," "B," "C," "D," and "E" or "A," "B," "C," "D," and "F." The letters became associated with descriptors such as excellent, superior, average, inferior, and failure.

However, Finkelstein became more interested in the natural distribution of students' grades over the percentage-based ranges. He found that more students' grades fell in the middle "C" interval, and he documented this observation in his book with a variety of drawings that resembled bell-shaped curves. Finkelstein (1913) recorded drawings and reflections that led him to recommend distributing grades over a five-division distribution. He felt a pre-determined number of students should fall in each distribution interval, similar to what we now refer to as the standard curve.
In other words, there were opinions that the grading system should force more grades into the average "C" letter grade category and fewer into the other grade categories. Finkelstein showed 12% of the students in the lowest category, 19% in the next category, 45% in the middle category, 21% in the next category, and 3% in the highest category. Finkelstein (1913) summarized his work with this recommendation:

In our judgement it would be in every respect desirable for Cornell University, and any other institution of like character, and probably also for the secondary schools as well, to adopt a five-division system of marking with the express provision that, in the long run, the marks given by any instructor must not deviate widely from the distribution just indicated. (p. 33)

Over time, there were many variations in letter grade systems and percentage-based grade systems. Some systems attached a letter grade to particular intervals of percentage-based grades; for example, a grade between 90% and 100% earned the letter grade "A," and so on for the other percentage-based categories. Such systems could produce various distributions depending on how the teacher designed the intervals. Other systems called for the use of the standard curve in assigning letter grades. Using the standard curve meant that only small percentages of students should earn extremely low or extremely high grades. The standard curve, or normal distribution, assumes that more students are average, so the grade distribution should reflect more average grades. This grading approach required distributing letter grades to students based on a pre-determined number of students earning letter grades in each category. One such distribution forced grades to fall into five categories, with approximately 7% of the grades falling in the lowest category, 24% in the next lowest category, 38% in the middle category, and 24% and 7% in the highest two categories (Hargis, 2003).

The use of letter grades has continued over time. Townsley and Buckmiller (2020) refer to the 1925 work of Chapman and Ashbaugh, who collected 436 report cards from various grades across the country. More than half of the report cards reported grades using letter grades or percentages, illustrating the continued use of letter grades and percentage grades on report cards (Chapman & Ashbaugh, 1925; Townsley & Buckmiller, 2020). According to Schneider and Hutt (2014):

Although the A–F grading system was still not standard by the 1940's, it had emerged at that point as the dominating grade scheme, along with two other systems that would eventually be fused together with it: the 4.0 system and the 100 percent system. (p. 215)

Letter grades are still in use today. However, percentage-based grades have recently become more popular due to the impact of technology. With the development of grading software and programs, it has become easier to document and calculate grades using percentages (Guskey, 2013).

Concerns for Letter Grades and Percentage-Based Grades

Concerns about letter grades and percentage-based grades have surfaced for as long as the systems have existed. Leaders, teachers, students, and parents have all expressed concerns. Research reaching back to the monitorial system shows that parents were already expressing concerns then. The concerns were on behalf of the parents of the student monitors.
The parents were concerned that while their sons were spending time being monitors, they lost time as learners (Murray, 2013).

In 200 years, the concerns about letter grades and percentage-based grades have not changed all that much. Some of the most consistent and recurring concerns include misinformation to families, distortion of grades by including zeros, pressure on students, grading to encourage compliance, and grading inconsistencies. There are concerns that the criteria used to determine letter grades and percentage-based grades are not apparent to students and parents. Critics have said that grades can misinform students and parents, and grades can even be deceiving (Jongsma, 1991; Schinske & Tanner, 2014; Spencer, 2012).

The use of a zero for grades and the distortion of resulting averages is also a common concern. Basic knowledge of averaging shows a single zero can significantly lower and possibly distort a student's overall grade. For example, a student who scores 90 on four assignments but receives a zero on a fifth averages (90 + 90 + 90 + 90 + 0) / 5 = 72, a grade that conceals otherwise consistent high performance. The question becomes what the grade is trying to communicate. By allowing one score of zero to lower the overall percentage significantly, the concern is whether that grade appropriately reflects learning (Guskey, 2013; Jongsma, 1991).

Another concern is the pressure letter grades and percentage-based grades place on students to succeed. Students are known to become competitive and overwork themselves to earn better grades. Students may cheat, and they can exchange authentic learning for whatever it takes to get a good grade (Hargis, 2003; Schinske & Tanner, 2014; Schneider & Hutt, 2014). Students often ask, "Is this going to be graded?" or say, "Just tell me what I need to do to get a good grade."

The use of grades to compel compliance or to punish is another concern. Grades can reward or punish students for things unrelated to mastery of content. Students can earn higher scores for participation and lower scores for poor behavior. Homework completion is often a point of contention in deciding how much influence homework grades should have in calculating overall grades (Jongsma, 1991; Spencer, 2012).

Grading inconsistencies are a final concern for letter grades and percentage-based grades. Inconsistencies exist in applied grading scales and in teachers' grading practices (Guskey, 2013; Jongsma, 1991). Most research related to letter grades and percentage-based grades mentions concern for subjectivity in grading. Guskey (2013) cites a 1912 study as significant research that questioned the reliability and accuracy of percentage-based grades. The research began by analyzing the grades of 147 papers in an English class. Scores on the first paper ranged from 64–98, and scores on a second paper ranged from 50–97. The wide range of scores called grading practices into question. Thinking this was an isolated situation, the researchers also analyzed 128 geometry papers, which showed an even greater variation in grades. Some teachers gave only full credit, some gave partial credit, and others considered other aspects of work, such as neatness and spelling (Starch & Elliott, 1913).

Although focused on a single school district, the research of Cox (2011) sheds light on the many inconsistencies of letter grades and percentage-based grades. His study took place in the Lincoln Secondary School District, a pseudonym for a district consisting of five comprehensive high schools, a continuation school, and an adult education program.
The district was primarily Hispanic, challenged by poverty, and in need of reform. District reform resulted in consistent standards-based instruction, common assessments, curricular pacing charts, and data-based collaboration meetings. Cox's study included interviews with focus groups and nine individual teachers, and those interviews revealed a concerning number of inconsistencies in grading. Cox found that some teachers emphasized students' effort in their grading practices, while other teachers placed homework above all else. Some teachers felt students should not earn a grade better than "C" if they did not complete homework, regardless of test grades. At the same time, other teachers overlooked incomplete homework if test grades and end-of-year finals indicated students had learned the content, and some did the same for successful scores on standardized tests (Cox, 2011). Although these inconsistencies resulted from one study in a single district, the results would probably be similar in many more schools across the country. For these reasons, including distortion of grades by zeros, pressure on students, grading to encourage compliance, and grading inconsistencies, the same concerns continue today.

Standardized Testing

In addition to evolving grading practices, the start of the 20th century also saw the initiation of standardized testing practices. Several factors contributed to the development of standardized testing. As the country experienced increased immigration, schools experienced increased enrollment (Grodsky et al., 2008). Increased enrollment and compulsory attendance laws created a need for increased school efficiency. One way to be more efficient was sorting students and assigning them to ability tracks as a way to individualize instruction (Hoff & Coles, 1999). Additionally, advocates called for educational decisions based on merit, not race or social class, and for testing to better match students with specific skills. The use of standardized testing was a way to meet these needs. A final factor in the development of standardized testing was the work of psychologists in the study of cognitive abilities. This work impacted schools, resulting in the development of intelligence testing (Grodsky et al., 2008).

Various events are credited with initiating and sustaining the use of standardized testing. In 1904, in Paris, French psychologist Alfred Binet designed a test to predict how well a child could learn (Hoff & Coles, 1999). Around the same time, Sir Francis Galton, an English polymath with interests in science and psychology, was also recognized as a pioneer in intelligence testing. Galton's work focused on those displaying the top echelons of intelligence (McCreadie, 2017). While Galton focused on the intelligence of the gifted, American psychologist Lewis Terman concentrated on intelligence testing to identify special education students. In other words, standardized testing began to be a way to sort students by cognitive ability (Grodsky et al., 2008). At that time, Terman was a Stanford University professor. In 1916 he modified Binet's test, and it became known as the Stanford-Binet scale, used to measure intelligence (Hoff & Coles, 1999). By 1925, there was evidence that many elementary schools were using standardized testing to group students by ability (Grodsky et al., 2008).
The United States Army also found a need for standardized testing during World War I and began using the Army Alpha test. The Army desired a way to identify potential officers. American psychologists Arthur Otis and Robert Yerkes created a multiple-choice test to measure soldiers' mental abilities. The test needed to be efficient, and it became a model for future standardized tests. Since the Army valued efficiency, Otis and Yerkes also designed efficient techniques for scoring and interpreting the standardized test results (Gallagher, 2003).

In 1919, Terman looked at the potential of the test for school children. These tests became the National Intelligence Tests. The Army Alpha test and National Intelligence Tests opened the door to standardized testing's potential to measure more than just intelligence. In 1923 the Stanford Achievement Tests were published and administered to elementary students. In 1929, the first statewide achievement test, the Iowa Test, was administered voluntarily. The Iowa Test remained in use for years (Gallagher, 2003).

The use of standardized tests continued to evolve and expand. Soon the Scholastic Aptitude Test (SAT) and American College Testing (ACT) were developed, and starting around the 1930s, standardized testing became a profitable business. By the 1960s, millions of students were taking the SAT (Hoff & Coles, 1999).

The most recent use of standardized testing is in high-stakes testing intended for accountability. Standardized testing determines if schools qualify for federal programs, such as the Title I program. Similarly, these tests are an accountability tool for a variety of federal education policies such as the Elementary and Secondary Education Act, No Child Left Behind Act, and Every Student Succeeds Act (William, 2010).

The No Child Left Behind Act was a significant step in solidifying the use of standardized testing as a school accountability measure. In 2002, President George W. Bush signed the No Child Left Behind Act. The act initially mandated that states conduct standardized testing in reading and math in grades three through eight, extending to more subjects and grade levels over time. Under No Child Left Behind, state departments of education were responsible for developing standardized tests and testing procedures. The test scores would establish whether students, and subgroups of students, were making adequate yearly progress towards meeting state standards. Not meeting the requirements for sufficient growth meant particular consequences for the schools. Additionally, the act required standardized test results to be disaggregated by student groups and all test results to be released to the public (Hursh, 2005).

Standardized testing has impacted education, and there are various ways to use the results. The brief research provided shows standardized testing originated as a means of efficiency and is now a form of accountability. Just as grading practices resulted in concerns and criticisms, so does the use of standardized testing. One of those concerns is the way we use standardized test results (Hanson, 1993). Hanson summarizes the concerns as follows:

In a very real sense, tests have invented all of us. They play an important role in determining what opportunities are offered to or withheld from us, they mold the expectations and evaluations that others form of us (and we form of them), and they heavily influence our assessments of our own abilities.
Therefore, although testing is usually considered to be a means of appraising qualities that are already present in a person, in actuality the individual in contemporary society is not so much measured by tests as constructed by them. (p. 40)

Perceptions of Traditional Grading Practices

Kunnath (2017a) writes, "Grades matter, and the future lives of students are in many ways dependent on teacher grading practices" (p. 53). Grading practices can be controversial as more individuals become advocates for grading reform (Kunnath, 2017b). Considering the perceptions of teachers and parents is a way to better understand the controversy and the potential calls for reform of grading practices.

Teachers' Perceptions of Traditional Grading Practices

Research directly gathering information on teachers' perceptions and beliefs about traditional grading is not abundant. Instead, research that has analyzed teachers' grading practices can represent their perceptions and beliefs. Several researchers are referenced frequently on this topic, and they are well known and respected for forming the picture of teachers' perceptions and beliefs about traditional grading practices. The research on school grading practices is relatively consistent in identifying common themes in school grading. For this review of literature, these themes will serve as a summary of teachers' perceptions of traditional grading practices. The research suggests a few common findings (Brookhart et al., 2016).

The first common finding is that teachers use a variety of factors in determining grades. Teachers use both achievement and nonachievement factors in their grading practices, and they see grading as a way to document academic performance and motivate students (Brookhart et al., 2016). The phrases "hodgepodge" and "kitchen sink" have described these varied grading approaches. The descriptions emphasize the unpredictable nature of grades, both in what grades represent and in the wide variety of factors used to calculate them (Chen & Bonner, 2017). For emphasis, Brookhart et al. (2016) stated, "teachers [idiosyncratically] use a multitude of achievement and nonachievement factors in their grading practices" (p. 828). Teachers often base grades on a variety of school and district policies and on personal beliefs and values (Chen & Bonner, 2017). Teachers often feel it necessary to include nonachievement factors in grading such as effort, improvement, and conduct (McMillan et al., 2002). Some research suggests elementary teachers' grading varies from that of secondary teachers. Elementary teachers tend to see grading as a communication tool between schools and families and often individualize and differentiate the assignment of grades (Guskey, 2009).

A second common finding is that teachers consider student effort to be important when assigning grades (Brookhart et al., 2016). Teachers want their grades to be fair, they want their grades to account for both effort and achievement, and they want their grades to motivate. Teachers place value on effort and motivation, and they adjust their expectations based on perceived levels of their students' abilities (McMillan et al., 2002). Similarly, research suggests that teachers factor attitude and conduct into grades to manage student behavior (Cross & Frary, 1996). Therefore, even though effort is a nonachievement factor contributing to inconsistent grading practices, it appears to be essential in teachers' eyes.
A third common finding is that teachers advocate for students by helping them earn better grades than they would earn based on achievement alone. Sometimes teachers value grades based on what the grades can do for individual students. Teachers use their understanding of individual students and their circumstances to make grading judgments. Again, this illustrates the degree of variety in grading practices and suggests that grading can vary even within a single classroom (McMillan et al., 2002). This topic leads to more specific research into grade inflation and grade changing. Research by Taylor (2007) was isolated to one school and was not sufficient to prove or disprove grade inflation or the frequency of grade changing. However, the research indicates that teachers feel pressured to inflate or change grades. The pressures come from various sources: parental pressure, to the point that teachers inflate grades to avoid parent conferences, and administrative pressure stemming from teacher accountability systems (Taylor, 2007).

Specific teacher perceptions of traditional grading practices are hard to identify. Because the research identifies such variety in grading practices, teachers clearly do not share one common perception. It is also clear that teachers feel factors other than academic achievement have an essential role in determining grades. Last, teachers, to some extent, may feel the need to use their role in grading to help some students. Cross and Frary (1996) summarize it best when discussing controversies in grading. They expressed the conflict on the part of teachers when they wrote of "the conflict in roles arising when teachers [have to] serve as [both] advocates and judges" (p. 3). In other words, teachers may feel they experience conflicts of interest regarding grading practices.

Parents' Perceptions of Traditional Grading Practices

There is not much research directly gathering information on parents' perceptions and beliefs about traditional grading. The best way to understand parents' perceptions of traditional grading practices is to research parents' feelings toward changes to traditional grading. This investigation leads to research on schools and districts changing to an alternate form of grading, often standards-based grading. Furthermore, organizing research on perceptions of standards-based grading requires collecting and generalizing investigations into individual schools' and districts' experiences. Therefore, there is no guarantee that these generalizations will transfer to other schools and districts, but the research summary is still valuable to the review of literature.

There are various reasons for changing from traditional grading to standards-based grading. Scriffiny (2008) identifies some reasons for the change: the desire for grades to have a more apparent meaning, to gain better control of grades, to reduce meaningless paperwork, and to define quality work. Guskey et al. (2011) also explain the need to change from traditional grading practices to standards-based practices to establish clear criteria for grading. Additionally, standards-based reporting is a way to separate content grading from other factors that sometimes distort or confuse traditional grades (Guskey et al., 2011). These are all practical reasons to make the change.
However, change is hard, and some parents support change while others do not. Most consistently, parents resist the change from traditional grading practices because of competition, the components of grading, and familiarity.

Often, traditional grading practices are what parents prefer. They want to see traditional grading practices that use letter grades or percentage grades to compare students and schools. Public comparisons have announced some schools as winners and others as losers, and this practice is appealing for the winners. Some think this feeling of competition creates motivation to win (Brookhart, 2013). Research has gone so far as to arrive at equations that connect individual student achievement to family characteristics and purchased inputs. Purchased inputs refer to the ability of some families to provide rewards to students, pay for tutors, or purchase gifts for teachers. They are items of privilege not available to all families. This suggests efforts and rewards are ways to earn grades, so the traditional grading system is appealing for those who have access to and can afford the inputs (Bonesrønning, 2004).

Preference for traditional practices also stems from the value parents place on grades reflecting more than mastery of content. Parents are critical of grading reform such as standards-based grading because the grades do not include factors such as students' responsibility or work ethic. Parents argue that grades teach students accountability and responsibility and go on to prepare students for jobs (Will, 2019). The use of traditional grading can be considered the currency of our educational system. Grades indicate achievement in a language that all can understand, and for that reason, parents are not willing to easily abandon the traditional approaches (Brookhart, 2013).

The most substantial reason parents prefer traditional grading practices is familiarity. Traditional letter and percentage grades are familiar to parents, and they think they know what these grades mean. The confusion comes from parents perceiving familiarity as understanding. Just because parents are familiar with traditional grading practices does not mean they understand what makes up students' grades, nor is there agreement on what students' grades should represent. Teachers and administrators often face pushback and even hostility if they try to introduce grading reform. Words such as "entrenched" and "ingrained" describe traditional grading practices and parents' unwillingness to accept change in those practices (Will, 2019).

Despite the resistance to change, some parents support grading reform. They have concerns about traditional grading practices, and they are usually parents who have experienced the successful implementation of standards-based grading. Hochbein and Pollio (2016) explain that standards-based grading switches tedious debates between teachers and parents about record keeping to meaningful dialogue about student learning. Instead of debating points and percentage grades, discussions can center on students' abilities and specific skills. Finally, instead of inflating grades with extra credit or bonus points, parent conferences can concentrate on additional learning opportunities and reassessments (Hochbein & Pollio, 2016).
Regardless of traditional or reformed grading practices, the research clarifies that schools and parents need to be on the same page for any grading practice to have meaning. Olson (1995) wrote, "when [teachers] they're reporting to a parent, they have to convert what they're doing to a language that parents can understand. Unless you're able to convert from the educator's jargonese, you're going to create a problem" (p. 28).

Effective Grading Practices

Advocates for the reform of grading practices typically start by establishing the inadequacies of traditional grading practices. From there, they provide suggestions for replacing those inadequate practices. This approach results in various opinions and recommendations, but most involve using standards-based grading and report cards. Some recommendations for effective grading are simple, and others are complex. A common thread in most is the idea that grades fail to meet the single goal of communicating students' academic achievement. Therefore, most recommendations for effective grading practices articulate the best ways to display students' academic achievement (Brookhart, 2011; Kunnath, 2017a).

One specific recommendation for effective grading and reporting is to base the grading on specific measurement topics. The reporting system should also provide an opportunity to see students' growth over time. This recommendation replaces the traditional practice of averaging multiple grades over time into what are often referred to as omnibus grades. If it is impossible to eliminate omnibus grades, the advice is to provide information to accompany them; this additional information helps parents better understand students' strengths and needs. Another recommendation is to provide students with a variety of assessment options. Allowing students the opportunity to display their learning in various ways is not only practical but can be motivating for students. A final recommendation for effective grading is to allow students the opportunity to update their scores as the school year goes on. This recommendation does not fit the traditional classroom and would also require a change in instructional practices (Marzano & Heflebower, 2011).

Another recommendation is that the most effective grading practices rely on feedback. The feedback should be specific and timely to improve students' performance; effective grading means students receive feedback in addition to grades. Comparing grading in traditional classes to grading in fine arts, performing arts, and sports reveals better performance in those areas because of the increased feedback provided. Often in these non-traditional areas, students create a product, such as a portfolio. A portfolio allows students to display their best work and enables them to correct and improve their work until the final portfolio is complete. These edits are not failures, and the changes do not get averaged into a final grade (Reeves, 2008).

A further recommendation is that effective grading must be individualized and focused on growth. The best way to accomplish this is, again, to provide plenty of feedback (Tomlinson, 2001). Ironically, providing feedback swings the pendulum back to a system based more on assessment than grading. The final thoughts on effective grading practices are not additional recommendations but steps to change.
It is best to avoid a top-down approach to change and instead establish practical starting points that include educators and families in the discussion. Be prepared to explain why the change is needed; in other words, be ready to explain what is wrong with the current practices. Families will want to know why changing to something unfamiliar to them is beneficial, and leaders must prepare to answer these questions and a myriad of others (Reeves et al., 2017). Reeves et al. (2017) state, "grading remains the wild west of school improvement" (p. 44). They continue, "But the serious problems with practices we describe are not controversial among the scholars of classroom assessment. Without question, [grading reform] is the right work to do" (p. 45).

Standards-Based Grading

As standardized testing continued, the comparison of norm-referenced and criterion-referenced testing gained attention. Norm-referenced measures are a way to find where individuals fall in a distribution of values; they often compare or rank the performance of students, schools, districts, states, and even countries. Criterion-referenced testing measures an individual's level of proficiency or subject knowledge in a particular topic. In other words, criterion-referenced testing measures achievement based on standards rather than on norms (Lok et al., 2016).

Some previously noted concerns with traditional grading are vagueness, the use of grades to pressure and punish students, and inconsistencies in grading practices. Combined with advocacy for more criterion-referenced measures, these concerns created a new type of assessment: standards-based grading (Lok et al., 2016). Spencer (2012) wrote, "Standards-based grading derives from the idea that teachers ought to have clearly defined academic goals for their students, be able to determine if they've met them, and then communicate that to students and parents" (p. 5).

Townsley (2013) opened an Association for Supervision and Curriculum Development article with a reflection. He asked the reader to think about a student's grade in a unit on surface area. The student's homework grade was 50%, quiz grade was 60%, and test grade was 100%. The student began the unit by not understanding surface area, received help until he finally understood the topic, and scored 100% on the final test. Another student scored 100% on each activity: homework, quiz, and test. The two students will end the unit with very different grades but the same final understanding. The first student's grade essentially penalized him for not having an initial understanding of the topic. This scenario is one of the many reasons advocates call for a change in grading practices. Advocates feel grades should report learning, not averaged points, and that learned content should be valued over when it is learned (Townsley, 2013).
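A quick computation makes the contrast concrete. Assuming the three scores are weighted equally (Townsley does not specify weights), the first student's traditional grade is

(50 + 60 + 100) / 3 = 70%

while a standards-based grade would report his final demonstrated mastery of surface area: 100%. The 30-point gap is an artifact of averaging early, pre-mastery work into the final grade, not a difference in what the student ultimately learned.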
Standards-based grading begins with identifying and understanding learning standards that communicate learning expectations. These learning standards are determined at the state level and are called state standards (Munoz & Guskey, 2015). The next step of standards-based grading is determining what the report card will report, often referred to as reporting standards. Clear language that parents can understand outlines the report card's reporting standards (Guskey et al., 2011).

After establishing reporting standards, the next step is to determine grades or marks for each reporting standard and accompanying criteria for each. Often, the development of a rubric can assist in developing the grades (Munoz & Guskey, 2015). Establishing the requirements for grades is a challenging but essential step in the process. The established rubrics must clearly describe the criteria so that expectations for students' performance are well-defined (Guskey, 2020).

Advocates for standards-based grading emphasize that standards-based report cards should separate reporting for product, process, and progress. Product means there should be criteria, measurement, and reporting dedicated only to academic performance. Likewise, there should be different criteria, measurements, and reporting for process, which refers to work habits, behavior, responsibility, and similar topics. Progress has to do with criteria related to learning gains or growth (Guskey et al., 2011). Not all standards-based reporting contains all three aspects, but the emphasis is on creating a reporting system that descriptively communicates learning (Guskey, 2020).

The final step of standards-based reporting is to design a report card format that will communicate the information. Often, personalized report cards include school information, a logo, and student identifying information. The report card contains a list of standards and marks or grades, and there is often an area to record comments. Typically, a management system or technology application assists in generating the final standards-based report cards. Finally, special consideration should be given to students with disabilities and English learners when using standards-based reporting (Guskey et al., 2011).

In closing, traditional grading systems receive criticism for not being clearly defined. Traditional grading systems result in grades that report more than students' achievements and are often derived from and used to report additional information such as effort and motivation. Additionally, traditional grades are often used to sort students and can be used to punish. On the other hand, standards-based grading determines a student's grade using standards as a single measure of learning (Hooper & Cowell, 2014). Guskey (2020) summarizes:

We must find ways to provide a more descriptive profile or "dashboard" of information that meaningfully summarizes the different aspects of student performance. At a minimum, we must provide multiple grades for each subject area or course on students' report cards. This is not only a requirement in standards-based approaches to education, it's an essential first step in implementing any meaningful grading reform. (pp. 40–41)

Interviews With Schools Using Standards-Based Grading

The literature on standards-based grading is oriented more toward advocacy than toward determining the effectiveness of standards-based grading practices. Most likely, this is because standards-based grading is a relatively recent approach to grading, grounded in the use of learning standards. Once accountability measures were in place through acts such as No Child Left Behind, states began developing learning standards and state testing, which in turn produced advocacy for standards-based grading. Therefore, standards-based grading did not gain momentum until the start of the 21st century.
For this reason, the Literature Review chapter includes a summary of interviews with districts presently using non-traditional grading practices. An outline of the interview responses provides insight into standards-based grading that may not yet exist in the research. Interviews occurred with representatives from five Pennsylvania school districts presently using standards-based grading. All five districts are in South Central Pennsylvania; four are public school districts, and the fifth is a charter school. Each representative answered nine interview questions, which appear in Appendix H.

The first two interview questions asked the interviewees to identify what they like about standards-based grading and what challenges they have faced. Two common themes emerged in what they like about standards-based grading: (a) the consistency of standards-based grading, meaning all teachers use the same assessment procedures and a common language for assessment, and (b) providing information to parents that explains what students know, how they are growing, and how parents can help. There were also two common themes among the challenges of standards-based grading: (a) defining rating categories and determining what represents mastery, and (b) helping students and parents understand the standards-based grading system.

The next three interview questions asked the interviewees to discuss perceived changes in instruction, learning, and assessment due to implementing standards-based grading practices. Overall, the interviewees felt the most significant impact on teaching was the need for teachers to unpack and better understand the standards. All interviewees had difficulty articulating how standards-based grading impacted learning. Most agreed that standards-based grading often coincides with other changes, such as changes to curriculum, instruction, and assessment, making it hard to identify cause and effect or correlation. However, most agreed that there was improved alignment between curriculum, instruction, and assessments. This alignment revealed itself in students showing a better understanding of learning expectations and articulating learning as specific skills. Regarding assessment changes for a standards-based report card, the interviewees stated that the new assessments are more meaningful than those previously used with traditional grading. Additionally, two interviewees pointed out that less time is spent administering assessments, leaving more instructional time; in other words, they felt assessment of nonachievement factors is not as prevalent.

The next question asked about parents' perceptions, which garnered the most feedback from the interviewees. The interviewees used words such as "confused" and "overwhelmed" when explaining parents' perceptions of standards-based grading. They feel that parents have the most difficulty understanding the ratings and do not understand the concept of students' learning as growth over time. Parents think that grades must be static and cannot improve over time, and they panic when they see the lowest rating, assuming it corresponds to a failure. Finally, all the individuals interviewed agreed that educating parents in advance and explaining how standards-based grading works is an essential step in the process. They suggested having the report card format prepared in advance and sharing it with parents before implementation.
Next, the interviewees were asked to identify factors necessary to use standards-based grading effectively. The answers to this question were consistent. The interviewees emphasized agreement on what standards to use at each grade level and within each discipline. There are too many standards to include all of them in a standards-based grading system; therefore, all agreed that collaborative, professional dialogue must occur to reach consensus. Next, they mentioned the importance of deciding on a rating system and what each rating represents about student learning and mastery. The report cards list the ratings as performance indicators, with either three or four performance indicators describing progress toward meeting or mastering the standard. One interviewee stressed the value of this step in the process; without it, the process becomes no different from traditional grading practices.

The final questions asked the interviewees to share their reporting format and any additional information they felt was necessary. Most of the reporting formats provided by the interviewees included a list of standards and accompanying performance indicators. The performance indicators varied, but the approaches were similar: most had a system containing ratings for not yet demonstrating understanding, partial understanding, and consistent understanding or mastery (a hypothetical sketch of such a scheme follows this section). One of the interviewees closed out the interview with an insightful reflection. He pointed out that nothing works in isolation, especially a grading system. Many things had to change in his district to develop and implement a standards-based reporting system. Therefore, it is challenging to determine the grading system's direct impact on learning. It is more a matter of embracing the entire process and ensuring each process component aligns with the others.
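To make the common reporting format concrete, the following is a minimal, hypothetical sketch of a three-level performance-indicator scheme like the one the interviewees described. The indicator labels follow their general pattern, but the specific standards, ratings, and structure are illustrative assumptions, not any district's actual report card.

```python
# Hypothetical sketch only: labels and standards are illustrative, not the
# interviewed districts' actual report card formats.
INDICATORS = {
    1: "Not yet demonstrating understanding",
    2: "Partial understanding",
    3: "Consistent understanding / mastery",
}

def report_line(standard: str, rating: int) -> str:
    """Format one report card line: a reporting standard and its indicator."""
    return f"{standard}: {rating} ({INDICATORS[rating]})"

# A few hypothetical grade 3 math reporting standards
marks = [
    ("Represents and solves problems involving multiplication", 3),
    ("Tells and writes time to the nearest minute", 2),
    ("Solves problems involving area and perimeter", 1),
]
for standard, rating in marks:
    print(report_line(standard, rating))
```

The design point the interviewees emphasized lives in the INDICATORS mapping: unless a district agrees on what each rating genuinely represents about mastery, the numbers collapse back into something indistinguishable from traditional grades.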
Summary

Grading practices have not changed much over time. From Horace Mann's vision of public schools in the 1830s, to the accountability of federal programs like No Child Left Behind in 2002, to present advocacy for standards-based reporting, grading practices have been slow to change. Key talking points include the meaning of grades, the primary audience of grades, long-standing beliefs, and long-entrenched practices. The most important topic is whether grades reflect only learning or include nonachievement factors (Brookhart, 2011).

The review of literature suggests that while grading practices have not changed much over time, neither have the discussions. The longer traditional grading practices remain, the more challenging it becomes to consider, advocate for, and implement change if change is the desired goal. The literature suggests that this may be due to nothing more than familiarity, which makes it hard to look at traditional grading through an evaluative lens. However, continuing with something solely based on familiarity can be grounds for criticism, especially if a change has the potential to be more purposeful or meaningful.

While grading practices in general have not significantly changed, the literature suggests that where changes have occurred, they have been changes to standards-based grading. Standards-based grading, or standards-based report cards, is the change advocated for by those seeking grading reform. It is not the purpose of the review of literature or this research project to make recommendations. The objective is to provide a foundational understanding of grading practices in order to evaluate effectiveness in a particular school district. A solid understanding of the history, concerns, and perspectives of traditional grading practices, along with research on grading reform such as standards-based grading, will be valuable in this evaluation.

Chapter III

Methodology

The Literature Review chapter provided information on traditional grading practices, namely the history, concerns, and perspectives of traditional grading practices. Additionally, the Literature Review chapter provided research on grading reform, such as standards-based grading. The literature revealed that grading practices have changed minimally over time. Discussions of grading practices typically focus on the meaning of grades, the grades' primary audiences, long-standing beliefs, and entrenched grading practices. Entrenched practices make it challenging to look at traditional grading through an evaluative lens. However, continuing with traditional grading practices solely based on familiarity can be grounds for criticism, especially if suggested changes have the potential to be more meaningful or purposeful.

As emphasized in the Literature Review chapter, it was not the purpose of this research study to make recommendations regarding grading practices. The objective was to understand grading practices in order to evaluate the effectiveness of current elementary grading practices in the DASD. Additionally, the study aimed to determine the current understanding among DASD teachers and administrators of standards-based report cards as an alternative to traditional grading practices. A solid understanding of the history of traditional grading practices, the concerns and perspectives surrounding grading practices, and research on grading reform such as standards-based grading is valuable in meeting these objectives.

This chapter outlines the purpose, setting, and participants of the research study. It also shifts the context of the research study from a historical review of the literature to the actionable implementation of a mixed methods research study. The Methodology chapter outlines the research design and methods, followed by data collection and analysis. These aspects of the research study changed the focus from reviewing the literature to answering the research questions, which was essential in evaluating the effectiveness of current elementary grading practices in the DASD.

Purpose

This action research study collected data used to answer three research questions. These research questions addressed the effectiveness of the current DASD elementary grading practices, emphasizing ELA and math in grades 1–5. DASD elementary grading practices refer to the grading practices in the four K–5 elementary schools of the district. The researcher defined the effectiveness of current grading practices as knowledge of current grading expectations, consistency per grade level across all buildings, and grading practices that reflect students' mastery of curriculum and the eligible content of grade-level Pennsylvania state standards. The project also researched DASD teachers' perceptions and parents' understanding of current DASD elementary grading practices. Finally, the research determined DASD elementary administrators' and teachers' understanding and perceptions of standards-based report cards.
Curriculum, instruction, and assessment are considered the foundational components of successful educational programming. However, when grading practices do not align with curriculum, instruction, and assessment, the grading practices are not effective and serve little purpose. "Effective grading policies are an essential part of combining rigorous expectations with meaningful feedback" (Feldman & Reeves, 2020, p. 25).

Although not mentioned in the Literature Review chapter, the current context of the COVID-19 pandemic deserves consideration. The pandemic's interruption to education has created authentic and organic reasons to consider changes in education. As Feldman and Reeves (2020) explain:

The pandemic should teach us what we already should have known: Many grading systems are broken. When these systems rely on antiquated, inaccurate, and unfair practices, such as the average and using the 100-point scale, then we systematically put students at a disadvantage—not only during extended school absences caused by this pandemic, but throughout their educational experiences. Now is the time to learn these lessons and make changes. (p. 27)

The desired outcome of this research study was to answer questions about the effectiveness of the current elementary grading practices in the DASD. The action research used a mixed methods approach. The project began by retrieving information from the district's school information system to determine the effectiveness of current DASD elementary grading practices. The DASD technology department assisted the researcher in downloading grade books in grades 1–5 from the 2020–2021 school year for all four elementary buildings. The researcher downloaded the data before submitting the project for approval because once the management system rolled over to the 2021–2022 school year, the data would no longer be accessible. As a result, this data was considered archived data for the research project.

The researcher used surveys to gather information from teachers, administrators, and parents, resulting in quantitative and qualitative data. The researcher collected additional information through discussions and interviews, summarizing this information in the review of literature. The following three research questions guided the research:

1. How effective are the current DASD elementary grading practices (ELA and math)? The IRB proposal defined effective as knowledge of current grading expectations, consistency per grade level across all buildings, and grading practices that reflect students' mastery of curriculum and the eligible content of grade-level Pennsylvania state standards.

2. What are DASD teachers' perceptions and parents' understanding of current DASD elementary grading practices?

3. What are DASD administrators' and teachers' perceptions and understanding of standards-based report cards?

As explained in the Literature Review chapter, history shows a struggle over time to agree on how to measure students' academic progress and performance. The review of the literature subtly revealed an underlying difficulty in defining the difference between assessment and grading, and over time, practice has leaned more toward grading than assessment. The literature noted various reasons for the historical shift to grading, with efficiency being one of the main ones.
With traditional grading practices resulting in generations of familiarity with traditional percentage and letter grades, suggested changes are often not well-received. However, breaking from tradition and familiarity is not a reason to avoid evaluating this vital component of our education system. Add to that the timeliness of education recovering from two years of the COVID-19 pandemic, and now seems a good time for a school district to evaluate grading practices and seek to answer the questions that drive this research.

Setting and Participants

The DASD is located in South Central Pennsylvania and serves Dover Borough and Dover Township. The district covers approximately 42 square miles and serves a community of close to 24,000 residents, with around 3,200 students in grades K–12. The district has one high school, one middle school, and four elementary schools (Dover Area School District, 2021).

It is important to note that the district's footprint changed significantly during this project. Previously, the DASD attendance boundaries included Washington Township. However, after nearly a 10-year legal battle, the Pennsylvania Supreme Court ruled in January of 2021 that Washington Township could secede from the DASD and join the neighboring Northern York County School District. This secession resulted in over 200 students moving from the DASD to the Northern York County School District beginning in the 2021–2022 school year, along with a loss in revenue to the DASD. This loss of revenue has placed a significant financial burden on the DASD and will have a devastating impact on programming for years to come.

The district's new, state-of-the-art high school serves students in grades 9–12 and includes a Career and Technical Education program with nine state-approved programs in agriculture, business, audio and visual communications, and computer and engineering technology. After the new high school was built, the previous high school building underwent renovations and became the new middle school. Previously, the intermediate school included grades seven and eight; after the renovation, the middle school houses students in grades 6–8. This reconfiguration responded to increases in elementary enrollment and crowding at the elementary buildings, and as a result, the four elementary buildings now house students in grades K–5.

The DASD has also experienced significant changes in district administration. Within the last two years, the district has seen personnel changes in the positions of superintendent, assistant superintendent, three of six secondary principals, and three of four elementary principals. The change in elementary principals is essential to note since this research project involved research at the elementary level. As three new elementary principals came into the district, the four elementary principals established a high-priority goal of collaboration and consistency between the four elementary buildings, a spirit of cooperation that had not existed at the elementary level for many years. For this project, assessment and grading are essential topics for that collaboration and pursuit of consistency.

A few additional facts about the district help with the context of the research: the district's staff profile includes 116 elementary teachers, 60 middle school teachers, and 74 high school teachers, as well as 208 support staff and 23 administrators.
Student population demographics report that the district serves a population of about 78% white students, around 6% multi-racial students, and a growing 8% population of Hispanic students. The elementary population is approximately 1,440 students, who are not currently distributed equally among the four buildings. Several community housing developments concentrated in one area resulted in increased enrollment in a single elementary building compared to the other three, and the district recently voted to change the elementary attendance boundary lines, among many other district changes. At the elementary level, approximately 47% of families are eligible for free and reduced lunches. Finally, it is worth noting that 687 students, about 20% of those attending school within the district, receive special education services and have an Individualized Education Program (IEP) (Dover Area School District, 2021).

Research Plan

The collection of information for the study began in October of 2021 when the researcher interviewed participants using the informed consent and interview questions in Appendix G and Appendix H. The information collected from these interviews was not data collected to answer the research questions; rather, it provided insight from those using standards-based report cards, an alternative to traditional grading practices. The researcher interviewed school representatives from five neighboring school districts using standards-based report cards, and the information gathered from these interviews was summarized in the Literature Review chapter.

The study continued with three phases of data collection. The first phase involved teachers' 2020–2021 school year grade books. The researcher downloaded the grade books from the school information system at the end of the 2020–2021 school year; the data would no longer be available for download once the district's school information system rolled over to the 2021–2022 school year. With assistance from the technology department and the Director of Child Accounting, the researcher downloaded the data in July 2021. Because the researcher made the download before the start of the research project, the data was considered archived data. This phase of data collection involved downloading the grade books of 49 teachers. Teachers in grades 3–5 were departmentalized and taught only one subject; therefore, the download included only the grade books of ELA and math teachers in those grades. The researcher did not download kindergarten teachers' grade books because kindergarten students receive a grade of "T" (taught but not graded) in all subject areas. The fourth second-grade teacher at Building 1 taught an extra class added to meet increased enrollment. In total, 49 elementary teachers across the four elementary buildings were indirect participants in this data collection phase. The district superintendent provided consent for this data collection in two signed district approval letters, included in the IRB approval process and in Appendix I and Appendix J. Table 1 illustrates a breakdown of the number of teachers' grade books.

Table 1

Number of Teachers' Downloaded Grade Books

Grade      Building 1   Building 2   Building 3   Building 4   Grade total
Grade 1        3            3            3            3            12
Grade 2        4            3            3            3            13
Grade 3        2            2            2            2             8
Grade 4        2            2            2            2             8
Grade 5        2            2            2            2             8

Downloading the content of the teachers' grade books required the assistance of the district technology department and the Director of Child Accounting.
These individuals assisted in creating the correct reports needed to download the grade books from the district information system. They also assisted in downloading the grade books in a format that allowed the data to be manipulated and organized for analysis.

The second phase of data collection was distributing Survey #1, Appendix B, to elementary teachers. The district superintendent provided consent for this data collection in the signed district approval letter, included in the IRB approval process and in Appendix I. On January 17, 2022, the researcher sent emails to initiate the distribution of Survey #1, Appendix B. An email went to the superintendent to reconfirm approval for distribution of the survey, with a response email received that same day indicating permission. On the same day, an email went to the elementary principals. The email contained a script for the principals to send to the participant teachers in their buildings. The script explained to whom to send the email, instructions on accessing the survey, information on informed consent, and the survey's closing date. Upon initiating the survey, each participant went to the informed consent statement. If the participant agreed with the informed consent, the survey continued to the second part: the survey questions. After reading the informed consent, if the participant no longer wished to participate, they exited the survey with no information collected. The survey closed for participation on January 31, 2022.

The principals distributed the survey to a total of 52 elementary teachers. The breakdown of those teachers is similar to the breakdown of the downloaded 2020–2021 grade books, with the addition of learning support teachers for participation in Survey #1, Appendix B. There was a reduction in the number of teachers in Building 4 due to the noted secession of Washington Township. Survey #1's informed consent and survey questions are in Appendix A and Appendix B. Table 2 illustrates a breakdown of the number of Survey #1 participants.

Table 2

Participants in Survey #1, Appendix B

Teachers            Building 1   Building 2   Building 3   Building 4   Grade total
Grade 1                 3            3            3            2            11
Grade 2                 3            3            3            2            11
Grade 3                 2            2            2            2             8
Grade 4                 2            2            2            2             8
Grade 5                 2            2            2            2             8
Learning Support        2            2            1            1             6

Also included in the second phase of data collection was the distribution of Survey #2, Appendix D, to parents. The district superintendent provided consent for this data collection in the signed district approval letter, included in the IRB approval process and in Appendix I. On January 17, 2022, the researcher sent emails to initiate the distribution of Survey #2, Appendix D. An email went to the superintendent to reconfirm approval for distribution of the survey, with a response email received that same day indicating permission. On the same day, an email went to the elementary principals. The email contained a script for the principals to send to the participant parents from their buildings. The script explained to whom to send the email, instructions on accessing the survey, information on informed consent, and the survey's closing date. Upon initiating the survey, each participant went to the informed consent statement. If the participant agreed with the informed consent, the survey continued to the second part: the survey questions.
After reading the informed consent, if the participant no longer wished to participate, they exited the survey with no information collected. The survey closed for participation on January 31, 2022. The principals distributed the email script to parents using the district message system, with the survey going to parents of approximately 1,050 students in grades 1–5. Survey #2's statement of informed consent and survey questions are in Appendix C and Appendix D.

The last phase of data collection was the distribution of Survey #3, Appendix F, to elementary administrators and teachers. The district superintendent provided consent for this data collection in the signed district approval letter, included in the IRB approval process and in Appendix I. The distribution of Survey #3, Appendix F, was similar to that of Surveys #1 and #2, Appendix B and Appendix D. On February 26, 2022, the researcher sent emails to initiate the distribution of Survey #3, Appendix F. An email was sent to the superintendent to reconfirm approval for distribution of the survey, with a response email received the same day indicating permission. On the same day, an email was sent to each of the elementary principals. The email contained a script for the principals to send to participant teachers in their buildings. The script explained to whom to send the email, instructions on accessing the survey, information on informed consent, and when the survey would close. When opening the survey, each participant was taken immediately to the informed consent statement. If the participant agreed with the informed consent, the survey continued to the second part: the survey questions. After reading the informed consent, if the participant no longer wished to participate, they exited the survey with no information collected. The four elementary principals were also asked to participate in this survey. The survey closed for participation on March 14, 2022.

The principals distributed the survey to a total of 52 elementary teachers. With the survey also requesting participation from the principals, it was distributed to 56 total participants. The breakdown of those participants was similar to the breakdown of Survey #1 participants. Survey #3's statement of informed consent and survey questions are in Appendix E and Appendix F. Table 3 illustrates a breakdown of the number of Survey #3 participants.

Table 3

Participants in Survey #3, Appendix F

Teachers            Building 1   Building 2   Building 3   Building 4   Grade total
Grade 1                 3            3            3            2            11
Grade 2                 3            3            3            2            11
Grade 3                 2            2            2            2             8
Grade 4                 2            2            2            2             8
Grade 5                 2            2            2            2             8
Learning Support        2            2            1            1             6

Principals          Building 1   Building 2   Building 3   Building 4   Total
                        1            1            1            1             4

Research Design, Methods, & Data Collection

The research plan started in July of 2021 with the download of the archived elementary grade book data described in the Setting and Participants section. After downloading the grade books, the next step was preparing the IRB request for approval of the research project. Defining the research questions was the foundational step of preparing the IRB request. The research questions concentrated on evaluating the effectiveness of current elementary ELA and math grading practices in grades 1–5, determining perceptions of current elementary grading practices, and measuring the current knowledge of standards-based report cards.
The research location was the DASD in Dover, Pennsylvania, and the researcher obtained superintendent approval, Appendix I, for the research project in July of 2021. Continued preparation for IRB approval included proposals for the need and method of research, data collection and analysis, informed consent and survey questions, and the research timeline. The IRB request also included the researcher's certificates, Appendix L and Appendix M, documenting completion of required training to conduct educational research. The IRB request was submitted to and approved by the California University of Pennsylvania IRB in August of 2021. The effective date of IRB approval, Appendix K, was August 27, 2021, with an expiration date of August 26, 2022.

The review of literature took place between September and December of 2021. The literature showed that there has been a shift from assessment to grading over time, that traditional grading practices have remained stagnant, and that grading practices have experienced limited scrutiny. This lack of scrutiny has contributed to a long-standing familiarity with traditional grading practices. Research shows there has been recent consideration of changes in grading practices, namely the use of standards-based reporting. Through interviews, the Literature Review chapter contained a limited number of viewpoints on standards-based grading. Although limited in quantity, the interviews revealed common themes among those interviewed who currently use standards-based grading approaches. The interview informed consent and interview questions are in Appendix G and Appendix H.

The researcher distributed the three IRB-approved surveys in January and February of 2022, with participation being voluntary. The surveys were built and distributed using Google Forms, with the Google Form for each survey including three sections. Section one contained the statement of informed consent and a single question asking the participant to consent or not consent to participation. If the participant agreed to participate, the Google Form took the participant to section two; if not, the form moved the participant to section three. Section two contained the survey questions, and section three had a statement confirming the participant elected not to participate. Once the surveys closed, the responses automatically populated a Google Sheets spreadsheet that could be sorted and organized as needed for data analysis.

The first survey contained 12 questions and was distributed to teacher participants as outlined in the Setting and Participants section. Table 4 includes the breakdown of Survey #1, Appendix B, questions.
Table 4

Breakdown of Questions: Survey #1, Appendix B

Questions 1 and 2. Multiple choice: collected the grade level and subject of each participant teacher. Purpose: disaggregate data.
Question 3. Likert-type: collected quantitative data on participant teachers' knowledge of current grading practices. Purpose: data to answer research question 1.
Questions 4–10. Likert-type: collected quantitative data on participant teachers' perceptions of current grading practices. Purpose: data to answer research question 2.
Question 11. Open-ended: collected qualitative data on participant teachers' understanding and/or perception of current grading practices. Purpose: data to answer research questions 1 and/or 2.

The second survey contained 12 questions and was distributed to parent participants as outlined in the Setting and Participants section. Table 5 includes the breakdown of Survey #2, Appendix D, questions.

Table 5

Breakdown of Questions: Survey #2, Appendix D

Question 1. Multiple choice: collected the grade level of each participant parent's student. Purpose: disaggregate data.
Questions 2–11. Likert-type: collected quantitative data on participant parents' perceptions of current grading practices. Purpose: data to answer research question 2.
Question 12. Open-ended: collected qualitative data on participant parents' perceptions of current grading practices. Purpose: data to answer research question 2.

The third survey contained 10 questions and was distributed to administrator and teacher participants as outlined in the Setting and Participants section. Table 6 includes the breakdown of Survey #3, Appendix F, questions.

Table 6

Breakdown of Questions: Survey #3, Appendix F

Questions 1 and 2. Multiple choice: collected the grade level or administrator role of each participant; also collected the subject of participant teachers. Purpose: disaggregate data.
Questions 3–9. Likert-type: provided quantitative data on teachers' and administrators' knowledge and perceptions of standards-based report cards. Purpose: data to answer research question 3.
Question 10. Open-ended: provided qualitative data on teachers' and administrators' knowledge and perceptions of standards-based report cards. Purpose: data to answer research question 3.

In March of 2022, the researcher organized the downloaded grade books' data and the data collected from the three surveys. The organization of the data led to data analysis in April of 2022. The two months of data collection, organization, and analysis allowed the researcher to use the data to answer the three research questions of the project.

Fiscal implications of the research project are minimal. The research plan involved a significant investment of time on the part of the researcher and minimal time for the superintendent, Director of Child Accounting, technology department, interview participants, and survey participants. Organization and analysis of the grade books' data required the most significant investment of time. In terms of financial implications, there are no guaranteed costs for the research project. If the research results motivate changes to current elementary grading practices, the district will likely incur expenses to make the changes. The potential impact of the research project will depend upon the results of the research, the answers to the research questions, and the state of the district at the time the research results are made available.
As a result of the project, the researcher anticipates three potential options for improvements to the present elementary grading system. The first option would have the district maintain the current elementary grading practices in ELA and math for grades K–5 while improving those practices by developing and implementing common assessments in grades K–5. The second option would have the district maintain the current elementary grading practices in ELA and math for grades 3–5, again with common assessments developed for grades 3–5; the difference lies in grades K–2, where the district would change the current elementary grading practices to a standards-based report card. The third option would have the district change the current elementary grading practices in ELA and math for grades K–5 to a standards-based report card.

The first option includes a three-day summer work session during the summer of 2022 to develop common assessments in ELA and math in grades K–5 for the start of the 2022–2023 school year. Participants in the work session will be one classroom teacher per building per grade in grades K–2, resulting in the participation of 12 teachers. Teachers in grades 3–5 are departmentalized and teach only one subject area; therefore, only ELA and math teachers in grades 3–5 will participate in the summer session, adding 24 more teachers to the work session. The participating teachers will be paid per diem rates for three eight-hour workdays and provided lunch and work supplies. Teachers will report to campus to complete the work, with administrators in attendance to supervise. The summer work will result in all K–2 teachers and the ELA and math teachers in grades 3–5 having enough common assessments to start the school year. This group of teachers will continue to meet on professional development days throughout the 2022–2023 school year to continue the work begun in the summer. The summer work has an estimated maximum cost of approximately $57,000 (an illustrative cost sketch follows).
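For illustration only, a back-of-the-envelope model shows how an estimate of this magnitude could be assembled. The per diem rate and the per-teacher supply and lunch costs below are assumptions, not district figures:

```python
# Hypothetical cost sketch for the option 1 summer work session.
# PER_DIEM and SUPPLIES_AND_LUNCH are illustrative assumptions only;
# the study does not report the district's actual rates.
TEACHERS = 12 + 24       # K-2 classroom teachers plus grades 3-5 ELA/math teachers
DAYS = 3                 # three eight-hour workdays
PER_DIEM = 475           # assumed daily rate per teacher, in dollars
SUPPLIES_AND_LUNCH = 50  # assumed daily cost per teacher, in dollars

total = TEACHERS * DAYS * (PER_DIEM + SUPPLIES_AND_LUNCH)
print(f"Estimated cost: ${total:,}")  # Estimated cost: $56,700
```

Under these assumed rates, the model lands near the reported $57,000 maximum; the actual district estimate may have been built differently.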
Like the first option, the second option includes a three-day summer work session during the summer of 2022. As described in the first option, the work session will involve the development of common assessments; however, in the second option only grades 3–5 will develop common assessments. For grades K–2, a similar three-day work session will take place, but this session will be dedicated to designing a standards-based report card for the 2022–2023 school year. Participants in the K–2 work session will be one classroom teacher per building per grade, and the workshop overall will include the same 36 participants as the first option. The participating teachers will be paid per diem rates for three eight-hour workdays and provided lunch and work supplies. Teachers will report to campus to complete the work, with administrators in attendance to supervise. The summer work will result in enough common assessments for ELA and math teachers in grades 3–5 to start the school year, along with a standards-based report card for grades K–2. This option also includes designing and mailing to families an informational flyer about the new standards-based report card. The flyer will explain why the district is changing the K–2 report card, how to interpret the new report card, and information on the new standards-based report card format. As described in the Literature Review chapter, interviews with those using standards-based reporting systems indicated this is an essential step in implementing a standards-based report card. The summer work for this option has an estimated maximum cost of approximately $60,000.

The third option is similar to the previous two. The teachers will participate in a three-day work session to design a standards-based report card for grades K–5 for the 2022–2023 school year. The work session will include the same 36 teacher participants as previously outlined. The participating teachers will be paid per diem rates for three eight-hour workdays and provided lunch and work supplies. Teachers will report to campus to complete the work, with administrators in attendance to supervise. The summer work will result in the teachers designing a standards-based report card for grades K–5. This option will also include developing and distributing an informational flyer to families of students in grades K–5, as described in the second option. The estimated cost of the summer work for this option is approximately $60,000.

Additional anticipated costs might accompany the change to a standards-based report card. At this time, the district's technology department indicates the current school information system can manage a standards-based report card. If the school information system cannot generate a standards-based report card, the district may need a separate management system dedicated to standards-based report cards. These additional anticipated costs range from $6,000 to $13,000. It is worth noting that the overall costs of the three options are not significantly different. Because of the minimal cost differential, the district can consider each option based on needs, and none of the options should be automatically eliminated based on cost alone.

Validity

In writing about action research, Hendricks (2017) noted, "It is important to remember that all educational research—whether conducted by teachers, administrators, evaluators, university faculty, or others interested in studying educational issues—has the potential to enhance knowledge about teaching and learning" (p. 10). Given the potential of research to enhance teaching and learning, ensuring data validity was crucial to the research plan. In this mixed methods research project, the researcher collected data from grade books, interviews, and surveys. The collected data was qualitative and quantitative, with the majority being quantitative.

The grade book data was the most complex and time-consuming to collect, organize, and analyze. Having the Director of Child Accounting and the district technology department perform the download of this data strengthened its validity by removing the researcher from any initial handling of the data and placing the downloading process in the hands of those considered experts in school information system reports and the proper downloading of data. Additionally, the downloading process included all available grade books.
Downloading all grade books strengthened the validity of the data because the data analysis was completed using all available data rather than a generalization from a subset. Indeed, downloading a smaller subset to represent the entire population would have significantly reduced the researcher's time. However, the researcher decided analysis of the whole set was feasible, and the additional time investment was worth the thoroughness and validity gained from using all possible data.

The initial grade book data was downloaded in portable document format (PDF) files. The researcher manually transferred the data from the PDF files to various spreadsheets to organize and analyze it. The researcher took great care and cross-referenced the data in multiple ways to ensure accuracy (a sketch of this kind of cross-check follows this section). Nevertheless, this was the phase of work most susceptible to human error, and the researcher wishes to be transparent about this aspect of the data management. Once transferred, the spreadsheets reduced the chance of data manipulation errors, since sorting, organizing, and calculating within a spreadsheet is far less error-prone than manual data manipulation.

The research project also included qualitative and quantitative data collected from surveys. The participants in the surveys were teachers, parents, and administrators. To increase the validity of the survey data, the researcher focused on the quality of the survey questions, on increasing participation rates, and on encouraging participants to be open and honest in their responses.

To increase the quality of the survey questions, the researcher reviewed survey questions used by other researchers and solicited feedback from the internal and external committee chairs and from colleagues. Quality survey questions reduce misunderstanding and ambiguity for the participants, thus increasing the validity of the participants' responses.

To increase the number of survey participants and encourage open and honest responses, the surveys did not collect personally identifiable information. The survey questions did not go beyond asking for a grade level or subject area; no information that could identify an individual or the school building they represent was requested. The researcher set up the surveys for distribution using Google Forms. The default setting in Google Forms collects participant emails, but the researcher turned this setting off, so the surveys did not collect participant emails. This was another layer of protection reassuring participants that the research would not identify them or create anything perceived as competition between school buildings. The informed consent statements of all three surveys explained how the researcher intentionally eliminated the collection of personally identifiable information.

Data collected from the surveys automatically populated from the Google Forms into a Google Sheets spreadsheet. The use of spreadsheets for organization, manipulation, and analysis significantly reduced the chance of human error and increased the validity of the data.
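As one illustration of the cross-referencing described in this section, a minimal sketch under assumed data structures: after hand-entering assignment points from a PDF grade book into a spreadsheet, the grade book's total can be recomputed and compared against the total noted during the original download. The grade book identifiers and point values below are hypothetical, and the study performed comparable checks within spreadsheets rather than in code.

```python
# Minimal cross-check sketch; identifiers and values are hypothetical.

# Assignment point values manually transferred from the PDF grade books
transferred = {
    "grade1_ela_teacherA": [10, 20, 15, 5, 50],
    "grade1_math_teacherA": [25, 25, 25, 25],
}

# Grade book totals noted at download time, before any manual transfer
recorded_totals = {
    "grade1_ela_teacherA": 100,
    "grade1_math_teacherA": 100,
}

for book, points in transferred.items():
    entered = sum(points)
    expected = recorded_totals[book]
    status = "OK" if entered == expected else f"MISMATCH (entered {entered}, expected {expected})"
    print(f"{book}: {status}")
```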
Finally, throughout the project, the researcher noted multiple times that the purpose of this research is not necessarily to recommend or initiate changes to the current elementary grading practices in the DASD. The researcher has reiterated that the purpose of the data is to analyze and inform. Therefore, this research does not intend to encourage change or fulfill any personal agenda, which further supports the researcher's commitment to producing accurate and valid results.

At the research study's conclusion, all data from the grade books and surveys will be downloaded and printed. The printed data will be filed and stored in a confidential location so that only the researcher will have future access. The researcher will delete the electronic grade book files after the research study. The researcher will also permanently delete the Google Forms and Google Sheets spreadsheets containing the three surveys and the accompanying data unless the district desires to move this data to an alternate electronic location.

Summary

The Literature Review chapter prioritized evaluating current grading practices and gaining familiarity with potential alternatives to traditional grading practices. The Methodology chapter outlined the purpose, setting, participants, and research plan, shifting the context of the research study from a historical review of the literature to the actionable implementation of a mixed methods research study. The Research Plan section reviewed the timeline and process used to collect interview information and data from grade books and surveys; the information gathered from interviews was summarized in the Literature Review chapter. The Research Design, Methods, and Data Collection section explained the techniques used to gather the grade book and survey data and outlined the types of questions used in the surveys and the purpose of each survey question. Quantitative and qualitative data collected from grade books and surveys were organized and analyzed, with the results of that analysis forming the foundation for answering the research questions.

The next chapter reveals the results of the data analysis by explaining the organization and analysis of the data and how the data answers the study's three research questions. The grade book data will help answer the first research question regarding the effectiveness of current DASD elementary grading practices in ELA and math. The data analysis of Survey #1, Appendix B, and Survey #2, Appendix D, will answer the second research question regarding teachers' and parents' perceptions of current DASD elementary grading practices in ELA and math. Finally, the data analysis of Survey #3, Appendix F, will answer the third research question regarding teachers' and administrators' understanding of standards-based report cards.

Chapter IV

Data Analysis and Results

The content of this chapter focuses on the research project's data, with the majority of the chapter dedicated to reviewing both the process and results of the data analysis. The chapter includes a dedicated explanation of the data analysis process and results for each research question. Next, the chapter discusses how the data results answered the project's research questions. Finally, the chapter concludes with further discussion summarizing the data analysis process and the resulting answers to the three research questions.

Data Analysis and Results

The objective of the project's research was to answer three research questions. The researcher organized the presentation of the data analysis process and results around these three questions.

1. How effective are the current DASD elementary grading practices (ELA and math)?
Effective was defined as knowledge of current grading expectations, grade-level consistency across all buildings, and grading practices that reflect students' mastery of the curriculum and the eligible content of Pennsylvania grade-level state standards.

2. What are DASD teachers' perceptions and parents' understanding of current DASD elementary grading practices?

3. What are DASD administrators' and teachers' perceptions and understanding of standards-based report cards?

Research Question 1

The first research question focused on the effectiveness of current elementary grading practices. Effectiveness was defined as knowledge of current grading practices, grade-level consistency, and grading practices that reflect the mastery of curriculum and eligible content.

Knowledge of Current Grading Practices. Data from the first three questions of Survey #1, Appendix B, was organized and analyzed using basic percentages to assess knowledge of current grading practices. The data was collected using surveys, with the survey collection tool automatically calculating the percentages, thus eliminating any calculation errors on the researcher's part.

Survey #1, Appendix B, had 30 participants out of the 52 teachers to whom the researcher distributed the survey, representing a 58% participation rate. The survey asked the teacher participants to what extent they felt the current grading practices for ELA and math report card grades were clearly defined by district policies and procedures. Of the teacher respondents, 43% agreed and 3% strongly agreed that the district clearly defined grading practices. In contrast, 47% disagreed and 7% strongly disagreed. Therefore, the results identified that the respondents had divided feelings about clearly defined grading practices.

Using identifying data from the first two questions of Survey #1, Appendix B, additional analysis revealed that the grade level and subject of the respondent teachers had an impact on their responses. First- and second-grade teachers responded with 60% disagreeing and 7% strongly disagreeing. Teachers in grades three, four, and five responded more in agreement, with 62% agreeing and 8% strongly agreeing. The teachers in grades three, four, and five were departmentalized, meaning they taught only ELA or math. In these grades, the ELA teachers provided divided responses, with 14% strongly agreeing, 29% agreeing, and 57% disagreeing. In contrast, math teachers in grades three, four, and five were in complete agreement, with 100% of them agreeing. The results indicate an overall division in responses and a further discrepancy when disaggregated by grade level and subject area. Therefore, the responses suggest that knowledge of current grading expectations is not well established.

Grade-Level Consistency. The researcher organized the archived grade book data in three ways to analyze grade-level consistency. First, to determine the types of work for which the students received grades, the data in each grade book was analyzed for each grade level and subject. These work type categories (see Figure 1) were specific descriptions assigned by teachers to each assessment entered into the district's school information system. Teachers had complete autonomy in determining these work categories.
Figure 1

Grade Book Sample of Types of Student Work

The researcher reviewed each grade book at each grade level for each subject area. After reviewing the grade books, the researcher determined broader learning categories for each assessment. Some examples of the broader learning categories, though not an exhaustive list, were phonics, fluency, writing, math facts, math problem solving, and nonachievement factors. The researcher included nonachievement factors as a category because the literature drew attention to nonachievement factors in grading. Examples of nonachievement factors included work completion, bell ringers, exit tickets, participation, and extra credit.

The next step was to count the number of grade book points assigned to each of the broader learning categories in each grade book. Finally, a percentage was found by dividing the total points in each learning category by the total points in the grade book. The researcher repeated this process of calculating percentages for each grade book. It is important to note that the resulting learning categories were unique to each grade level and subject area, based on the types of work assigned in each. This analysis resulted in a percentage of points assigned to broader learning categories in 122 individual grade books.

Similarly, the second step was to determine which assessment categories the teachers assigned to each assessment in each grade book. Teachers assigned assessment categories using a dropdown menu of predetermined choices in the school information system. The school information system contained a finite list of assessment categories from which to choose. Although limited in the options for these categories, teachers had complete autonomy in assigning the categories. The most frequently used assessment categories were classwork, quizzes, and tests. For each grade book, a percentage was found by dividing the number of assessment category points by the total number of points in the grade book. This analysis resulted in percentages for the assessment categories in 122 individual grade books.

The final data analysis of the individual grade books was to calculate how many assessment scores there were in each grade book. Like the assessment categories, the numbers of assessment scores (see Figure 2) were retrieved directly from the grade books. The number of assessment scores is not to be confused with the point value in each grade book; it is simply the count of assessments or scores entered into each grade book. Since the school information system numbers each assessment, this analysis only required retrieving these numbers from the grade book and organizing them elsewhere for further analysis. This process resulted in knowing the number of assessments entered in each of the 122 individual grade books.

Figure 2

Grade Book Sample of Assessment Categories and Numbering

These three aspects of data analysis were significantly time-consuming. The researcher used manual and electronic techniques to organize and analyze the data. The researcher exercised extreme caution in analyzing the data by cross-referencing the total number of points in each teacher's grade book, recalculating the total during each data analysis phase. This way, regardless of how the data was organized, a consistent total number of points was a good indication that there were no errors in the data set.
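To make the calculation concrete, here is a minimal sketch of the percentage computation described above, written in Python rather than the spreadsheet formulas the study actually used; the category names and point values are invented for illustration and do not come from the study's grade books.

```python
# A minimal sketch of the per-category percentage calculation: each category's
# share is its total points divided by the grade book's total points.
from collections import defaultdict

def category_percentages(entries):
    """Return each learning category's share of the grade book's total points."""
    totals = defaultdict(float)
    for category, points in entries:
        totals[category] += points
    grand_total = sum(totals.values())  # cross-check value: recomputed in every
                                        # phase, it should never change
    return {cat: 100 * pts / grand_total for cat, pts in totals.items()}

# Invented example grade book: (learning category, points) pairs.
grade_book = [("phonics", 40), ("fluency", 25), ("writing", 60),
              ("nonachievement factors", 35)]
print(category_percentages(grade_book))
# {'phonics': 25.0, 'fluency': 15.625, 'writing': 37.5,
#  'nonachievement factors': 21.875}
```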
Although simple percentages were the outcome of the data analysis, this portion of the work required a significant amount of time and attention to detail.

The data analysis process continued by organizing the learning category percentages, assessment category percentages, and the numbers of assessment scores into a summary spreadsheet for each grade and subject. For example, there were 12 first-grade ELA grade books, so those 12 grade books' learning category percentages were organized in a single spreadsheet. The researcher followed the same process for assessment category percentages and the numbers of assessment scores. This step resulted in five ELA spreadsheets and five math spreadsheets, one for each of grades one through five, for a total of 10 spreadsheets. These spreadsheets are available in Appendix O for further review. They did not provide any additional results but did provide a more organized and condensed format for viewing results.

The next step was to define consistency. Not being aware of any universally accepted mathematical criteria for determining consistency, the researcher established a process unique to this research project. Using the spreadsheets in Appendix O, the researcher calculated the mean and standard deviation for each data set of learning category percentages, assessment category percentages, and numbers of assessment scores. Next, the researcher calculated cut scores at one and two standard deviations above and below the mean. These cut scores determined which values were considered consistent and inconsistent within the data sets.

Learning category percentages, assessment category percentages, and the numbers of assessment scores each fell into one of four consistency categories. Values within one standard deviation above or below the mean were labeled consistent. Values beyond one standard deviation of the mean but still within two standard deviations were labeled inconsistent. Values beyond two standard deviations of the mean were labeled extremely inconsistent. Additionally, some data sets of learning category percentages and assessment category percentages had no data. No data occurred when a subset of the teachers did not use that category in their grade book. For instance, some teachers used the projects and homework assessment categories, while others did not. Data sets missing more than three scores could not be analyzed. These occurrences were entered as 0% in the summary spreadsheets in Appendix O. Finally, the percentages of data that fell into each of these consistency categories were calculated, color-coded, and documented on the summary spreadsheets in Appendix O.

Creating the spreadsheets in Appendix O was another tedious and time-consuming aspect of the data analysis. The researcher took advantage of the sorting and formula features of the spreadsheets. Spreadsheet formulas were used to calculate the mean, standard deviation, and cut scores for the consistency categories, eliminating human error in these calculations. The researcher exercised extreme caution when moving data from one spreadsheet to another and implemented checks and balances to ensure the data was transferred carefully. Copy and paste commands were used as frequently as possible to reduce errors from manually retyping data between spreadsheets.
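The cut-score logic lends itself to a short illustration. The sketch below mirrors the labeling rule described above (within one standard deviation of the mean is consistent, within two is inconsistent, beyond two is extremely inconsistent); the sample percentages are invented, and the study performed these calculations with spreadsheet formulas rather than code. The fourth category, no data, was handled separately before this step.

```python
# A minimal sketch of the consistency labeling built from mean and standard
# deviation cut scores. Sample values are illustrative only.
from statistics import mean, stdev

def label_consistency(values):
    """Label each value by how far it falls from the data set's mean."""
    m, sd = mean(values), stdev(values)
    labels = []
    for v in values:
        deviation = abs(v - m)
        if deviation <= sd:
            labels.append("consistent")
        elif deviation <= 2 * sd:
            labels.append("inconsistent")
        else:
            labels.append("extremely inconsistent")
    return labels

# Invented reading-category shares for six grade books; one teacher is an outlier.
reading_shares = [25, 31, 28, 75, 30, 27]
print(label_consistency(reading_shares))
# Flags 75 as "extremely inconsistent"; the rest are "consistent".
```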
This lengthy data analysis process provided the results used to analyze the consistency of the elementary grading practices. Starting with types of work categories in ELA, the average consistency percentage for all ELA grade books was 60%. The highest percentage of consistency was 75% in fourth grade, and the lowest was 36% in second grade. In second grade, 50% of the data could not be analyzed for consistency because not all teachers used the categories of high-frequency words, spelling, classwork, and writing. There was a variety of ranges of percentages, such as the range of 19%–32% in third-grade grammar and writing, representing a compact range (see Table N3). In comparison, fourth-grade reading, with a range of 25%–75%, is an example of a broader range (see Table N4). Finally, nonachievement categories in third through fifth grades ranged from 1% to 49% of the assigned scores (see Tables N3–N5). A percentage of 49% means that in at least one class, 49% of students' overall ELA grade was calculated based on nonachievement factors.

Looking at grade book assessment categories in ELA, the average percentage of consistency for all ELA grade books was 41%. The highest percentage of consistency was 75% in the fourth and fifth grades (see Tables N4 and N5), and the lowest percentages were 0% in first grade and 19% in second grade (see Tables N1 and N2). As with the learning categories, not all of the second-grade data could be analyzed because some categories (see Appendix O) were not used by all teachers. There was again a variety of ranges of percentages, but the researcher wishes to draw attention to the ranges of 0%–100% in the first-grade categories of tests and quizzes (see Table N1). A 0%–100% range means that some teachers never used an assessment category at all, while for others it was the only assessment category used in their grade books.

The final indicator of consistency was the number of assessments used in the ELA grade books. The average number of ELA assessments was 76. A first-grade grade book had the fewest assessments, with 28, and a fifth-grade grade book had the most, with 226. The highest percentage of consistency was 75% in the first, fourth, and fifth grades (see Tables N1, N4, and N5), and the lowest was 54% in the second grade (see Table N2).

When analyzing the types of work categories in math, the average consistency percentage for all math grade books was 29%. The highest percentage of consistency was 58% in fifth grade (see Table N5), and the lowest was 19% in fourth grade (see Table N4). The percentages of consistency were low because over 50% of the data in grades 1–4 could not be analyzed (see Tables N1–N5). Similar to ELA, there was a variety of ranges of percentages. There was a case of 0%–100% in the second-grade category of projects and activities. Again, this illustrated a category where at least one teacher assigned all scores to the category while at least one other teacher assigned none. Finally, nonachievement categories existed for math at all grade levels, with the highest percentage, 47%, occurring in third grade (see Table N3).
A percentage of 47% means that in at least one math grade book, 47% of the students' overall grade was calculated based on nonachievement factors.

Looking at grade book assessment categories in math, the average consistency percentage for all math grade books was 29%. The highest percentage of consistency was 55% in fifth grade (see Table N5), and the lowest was 0% in first grade (see Table N1). Again, the 0% consistency occurred because none of the data could be analyzed, due to the teachers' inconsistencies in assigning assessment categories.

The final indicator of consistency was the number of assessments used in the math grade books. The average number of math assessments was 69. A second-grade grade book had the fewest assessments, with 10, and a fifth-grade grade book had the most, with 243. The highest percentage of consistency was 55% in fifth grade (see Table N5), and the lowest was 0% in first grade (see Table N1).

Mastery of Curriculum and Eligible Content. The researcher had to assume some information about the grade books to analyze whether grading practices reflected students' mastery of curriculum and the eligible content of state standards. Recall that the researcher reviewed the grade books to determine the types of work categories and, in doing so, assigned the assessments to broader learning categories, one of which was nonachievement factors. It would have been beyond the time constraints of this research project for the researcher to interview each teacher to assess whether every assigned score in the grade books genuinely aligned with the curriculum and eligible content. Therefore, this aspect of the data analysis assumed that all learning categories reflect mastery of curriculum and eligible content, other than work assigned to the nonachievement learning category.

This assumption resulted in 100% of the work in first- and second-grade ELA reflecting students' mastery of curriculum and the eligible content of state standards. In third-, fourth-, and fifth-grade ELA, the learning category of nonachievement factors accounted for anywhere from 1% to 49% of the types of work completed by the students. Therefore, any given grade book in third, fourth, and fifth grades contained between 51% and 99% of work reflecting mastery of curriculum and eligible content. Applying the same reasoning to the math grade books, all grade levels had some grade books with work assigned to the nonachievement category. Nonachievement scores accounted for 0% to 47% of the types of work completed by the students in math. Therefore, in all grades, any given math grade book revealed between 53% and 100% of the students' work reflecting mastery of curriculum and the eligible content of state standards.

Research Question 2

The second research question focused on teachers' perceptions and parents' understanding of current elementary grading practices. Data from all questions but the third question of Survey #1, Appendix B, was organized and analyzed using basic percentages to examine teachers' perceptions of current grading practices. Data from all questions of Survey #2, Appendix D, was organized and analyzed using basic percentages to investigate parents' understanding of current grading practices.
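As an illustration of this "basic percentages" analysis, the sketch below computes per-option response percentages and the combined agreement rate used in the next section. The sample responses are invented, although they mirror the distribution reported for question three of Survey #1.

```python
# A minimal sketch of survey response percentages and the collapsed agreement
# rate (agree + strongly agree). The sample responses are invented.
from collections import Counter

def response_percentages(responses):
    """Percentage of respondents choosing each Likert option."""
    counts = Counter(responses)
    n = len(responses)
    return {option: round(100 * count / n) for option, count in counts.items()}

def agreement_rate(responses):
    """Combined percentage of agree and strongly agree responses."""
    agreeing = sum(r in ("agree", "strongly agree") for r in responses)
    return round(100 * agreeing / len(responses))

sample = (["agree"] * 13 + ["strongly agree"] + ["disagree"] * 14
          + ["strongly disagree"] * 2)
print(response_percentages(sample))
# {'agree': 43, 'strongly agree': 3, 'disagree': 47, 'strongly disagree': 7}
print(agreement_rate(sample))
# 47 (the separately rounded percentages above sum to 46)
```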
Teachers' Perceptions. Survey #1, Appendix B, had 30 participants out of the 52 teachers to whom the researcher distributed the survey, representing a 58% participation rate. To streamline the discussion of the results, responses of agree and strongly agree were combined and are referred to as agreeing. When a survey question required more in-depth analysis, the two categories remained separate.

In Survey #1, Appendix B, question four resulted in 60% of teacher respondents agreeing that their students understood how they earned their ELA and math grades. Question five found that 80% of teacher respondents agreed that they clearly defined grading practices to their students' families. Also related to families, question 10 revealed that only 37% of teacher respondents agreed that parents have a clear picture of their child's mastery of curriculum and eligible content.

The results of Survey #1, Appendix B, questions six through nine, are essential for gaining more insight into the consistency of the district's elementary grading practices. Question six resulted in 93% of the respondent teachers agreeing that they used assessments similar to their grade-level district colleagues. Question seven found that 60% of the respondents agreed that they used assessment categories similar to their grade-level district colleagues. These two questions align with the data analyzed in the grade books; both the surveys and the grade books contain information about categories of students' work and assessment categories. On questions eight and nine, 66% of the respondents agreed that report card grades reflected mastery of the district curriculum, and 53% agreed that report card grades reflected mastery of eligible content.

The survey data was disaggregated by grade and subject area, resulting in data sets with fewer than 10 data points. Small data sets are susceptible to skewing, so the disaggregated data had limited use. One noteworthy trend is the varied results of Survey #1, Appendix B, question four, which asked about students' understanding of how they earn their grades. Grade three teacher respondents agreed 33% of the time that students understand how they earn their grades, while grade four respondents were in 100% agreement. This difference in percentages represents a significant difference in feelings between the two grade levels. Questions six and seven offered the most meaningful comparison between ELA and math. ELA teachers were in 71% agreement that they used similar work categories, while math teachers were in 100% agreement. There was little difference between ELA and math teachers' responses in terms of assessment categories: ELA teachers were in 57% agreement that they used similar assessment categories, while math teachers were in 50% agreement.

Survey #1, Appendix B, included an open-ended question allowing respondents to provide any additional comments, and 13 teachers responded. The researcher searched the comments using keywords to look for similarities in the responses. Several comments referred to needing the chance to collaborate and to regain consistency that the respondents felt had been lost over time. Several comments referenced present grades being inflated and not providing information on skills, growth, or areas of needed improvement. Finally, some respondents referenced standards-based report cards as a possible alternative to the district's present elementary report cards.
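A keyword search of this kind can be sketched briefly. The example below counts how many comments mention each keyword stem; the comments and keywords are invented stand-ins, since the study's actual comment text is not reproduced here.

```python
# A minimal sketch of a keyword search across open-ended survey comments,
# using invented comments and keyword stems.
def keyword_counts(comments, keywords):
    """Count how many comments contain each keyword (case-insensitive)."""
    counts = {kw: 0 for kw in keywords}
    for comment in comments:
        lowered = comment.lower()
        for kw in keywords:
            if kw in lowered:
                counts[kw] += 1
    return counts

comments = [
    "We need time to collaborate so grading is consistent again.",
    "Grades feel inflated and say little about growth.",
    "A standards-based report card might show skills better.",
]
print(keyword_counts(comments, ["collaborat", "inflat", "consisten", "standards-based"]))
# {'collaborat': 1, 'inflat': 1, 'consisten': 1, 'standards-based': 1}
```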
Parents' Understanding. Survey #2, Appendix D, was distributed to approximately 1,050 families of students in grades 1–5. The survey had a 12% response rate, with 125 parents responding. Questions two, three, and four gathered information on how well parents felt they understood the district's grading practices. Question two asked the parents to what extent they felt teachers explained grading practices. Question three asked to what extent they understood district grading practices. Question four asked parents to what extent they understood how their children earned their grades. The parent respondents agreed at approximately 70% on all three of these questions. Therefore, about 70% of the 125 parent respondents felt they understood the district's grading practices.

Questions five and six looked at the extent to which the parent respondents agreed that report card grades reflected their child's performance. Question five asked to what extent the parents felt the report card helped them understand their child's performance, and question six asked the same about their child's mastery of content. When worded as understanding performance, 85% of the parent respondents agreed, but when worded as mastery of content, the agreement rate dropped to 55%. Questions seven and eight asked parents to what extent report card grades helped them understand what their child learned and what their child still needed to learn. Parent respondents agreed at a rate of 54% that the report card helped them understand what their child knows, but agreed at a rate of only 43% that the report card helped them understand what their child still needed to learn. Similarly, questions nine and 10 asked the same questions about students' performance on grade-level work and on state standards. Parent respondents agreed at a rate of 64% that the report card helped them understand how their child performed on grade-level work and at a rate of 39% for state standards. Overall, as the questions became more specific to learning, the percentage of parent respondents in agreement decreased.

Question 11, the final question of Survey #2, Appendix D, asked to what extent parents were satisfied with the current elementary grading practices. In response, 18% of the parents strongly agreed, 46% agreed, 31% disagreed, and 5% strongly disagreed.

The survey also collected data intended to be disaggregated by grade level and subject. When looking at smaller data sets based on individual grade levels, there was no significant difference from the responses of the entire data set. The same was true when analyzing the ELA and math data sets separately; there was no significant difference between the results.

Like Survey #1, Appendix B, Survey #2, Appendix D, ended with an open-ended question. This question allowed respondents to provide any additional comments, and 44 parents provided comments. The researcher searched the comments for similar keywords. The most frequent comment was the desire to return to paper report cards. Some parent respondents stated they dislike checking grades and accessing report cards on the district's school information system.
Another common theme in the responses was that parents would value additional opportunities to speak with the teachers rather than rely on report cards. Several respondents replied that one 20-minute parent conference in the fall of the school year is inadequate and leaves parents frustrated with the lack of feedback. Finally, some respondents made emotional comments about the survey questions that referenced state standards. It seems that the simple mention of standards triggered a feeling on the part of some parents that the district places state testing above all else.

Research Question 3

Survey #3, Appendix F, focused on administrators' and teachers' perceptions and understanding of standards-based report cards. Data from all questions of Survey #3, Appendix F, was organized and analyzed using basic percentages. The percentages summarized administrators' and teachers' perceptions of standards-based report cards.

Survey #3, Appendix F, had 20 participants out of the 56 teachers and administrators to whom the survey was distributed, for a participation rate of 36%. All questions on the survey asked respondents to what extent they agreed with certain aspects of a standards-based report card, and the agreement rate on all questions was very high. Question three asked to what extent the respondents agreed that a standards-based report card provides an appropriate amount of information. Question four asked about the quality of the information on a standards-based report card. Question six asked about a standards-based report card being a good tool to document learned skills. The respondents were in 100% agreement on all three of these questions. The results revealed 90% agreement on question five, which asked to what extent the respondents agreed that a standards-based report card is easy to understand. Finally, on questions seven, eight, and nine, 95% of the respondents agreed that a standards-based report card documents skills that need improvement, documents students' overall performance, and provides families with important learning information.

Similar to Surveys #1 and #2, Appendix B and Appendix D, Survey #3, Appendix F, ended with an open-ended question. This question allowed respondents to provide any additional comments, and nine respondents provided comments. The researcher searched the comments using keywords to look for similarities. Most frequently, the respondents made comments that supported standards-based report cards, noting that standards-based report cards show more details about students' learning. At the same time, the respondents felt that parents might not like standards-based report cards, mainly because they would not know how to interpret them. Several respondents warned that the district should accompany any changes to district report cards with an explanation of the changes to parents.

Discussion

The data used in the research project came from 122 individual archived elementary grade books and three surveys. There was a significant amount of analysis and a wealth of results in the grade books alone, resulting in data that can continue to be analyzed well beyond the scope of this research project. Though less complex to analyze, the data from the surveys served an equally important role. The survey data provided the opportunity to compare the perceptions of the survey participants to the reality of the data in the grade books.
Additionally, the data from the grade books and surveys provided the information needed to answer the project's three research questions. Finally, the data from all sources organically exposed connections between the data results and several topics brought to the forefront in the Literature Review chapter.

The first research question focused on the effectiveness of current DASD elementary grading practices in ELA and math, based on knowledge of current grading expectations, grade-level consistency, and students' mastery of curriculum and eligible content. Concerning knowledge of current grading practices, the teacher participants were split in their agreement that district grading practices are clearly defined. Concerning consistency in grade books, three pieces of data represented the bottom line of the results. First, regarding the types of students' work, the data analysis found 60% consistency in ELA grade books and 29% consistency in math grade books. In terms of assessment categories, the data analysis found 41% consistency in ELA grade books and 29% consistency in math grade books. Finally, in terms of the number of assessment scores in each grade book, the data analysis found an average of 76 in each ELA grade book and 69 in each math grade book. Although these averages were consistent compared to each other, the number of assessments in individual grade books ranged anywhere from 10 to 243. Considering there are 180 instructional days in a school year, the difference between 10 assessments and 243 seems significant and inconsistent. As a comparison, 10 grade book assessments a year equals about 0.06 assessments a day, while 243 grade book assessments equals 1.35 assessments a day.

Concerning mastery of curriculum and standards, the data results provided some indication, based on the criterion used, that the grade books represented mastery of curriculum and standards. In first and second grades, 100% of the grade book assessments indicated the measurement of curriculum and standards-based work. In grades 3–5, anywhere from 51% to 99% of the grade book assessments indicated the measurement of curriculum and standards-based work.

The second research question focused on DASD teachers' perceptions and parents' understanding of current elementary grading practices. Survey #1, Appendix B, provided a variety of data about teachers' perceptions. Knowing the extent to which teachers agreed that they used assessments similar to their grade-level colleagues is valuable because it allows for a comparison between their perceptions and the reality of the grade book data results. Also, the open-ended responses in the survey resulted in comments about grade inflation, standards-based reporting, and the need to re-establish expectations. These topics allow for connections to information presented in the Literature Review chapter. Regarding parents' perceptions, 70% of the parents felt they knew the current grading practices. However, on questions that asked to what extent they felt the report card grades successfully relayed an understanding of their child's performance in school, the percentages decreased to 40%–50% agreement. Finally, when the questions asked to what extent the report card grades represented mastery of standards, the percentage dropped to 39%. The idea of parents believing they understand grading practices, combined with parents not being able to use grades to interpret learning outcomes, also has some connection to topics presented in the Literature Review chapter.
The third research question focused on administrators' and teachers' perceptions and understanding of standards-based report cards. The extent to which respondents agreed with all questions asked about a standards-based report card was 90% or higher. Therefore, the administrators and teachers felt a standards-based report card provides an appropriate amount of information, provides quality information, is a good tool to document learned skills, is easy to understand, and provides information on students' progress, such as what students have mastered and areas needing improvement. Respondents answered the open-ended question with comments about feeling that a standards-based report card is good for reflecting learning, but warned that parents do not understand standards-based report cards.

Summary

The data analysis provided valuable results. Archived grade book data and survey questions from Survey #1, Appendix B, answered the first research question, which looked to determine the effectiveness of current DASD elementary grading practices. The data did not support that knowledge of current grading practices was well established. The data also did not support consistency in work categories, assessment categories, or the number of assessments used in the grade books. Last, the data provided some support for establishing that grading practices reflected curriculum and standards mastery. Overall, the data minimally supported the elements that defined effective grading practices.

The second research question looked to determine teachers' perceptions and parents' understanding of current DASD elementary grading practices. Survey #1, Appendix B, and Survey #2, Appendix D, provided insight into the perceptions and understanding of the teacher and parent respondents. The data analysis of the surveys provided easily interpreted percentages, while the open-ended survey questions provided information that lent itself to making comparisons and connections.

The third research question determined DASD administrators' and teachers' perceptions and understanding of standards-based report cards. Survey #3, Appendix F, provided insight into the perceptions and understanding of the teacher and administrator respondents. The data analysis of the surveys provided easily interpreted percentages. The data answered the third research question by quickly revealing that respondents understand standards-based report cards and find them effective. However, the survey comments revealed a deeper awareness that parents might not share the same perceptions and certainly not the same level of understanding.

The next chapter will draw further conclusions about the research project and discuss the limitations of the research. The data answered the research questions, provided deeper understanding, and provided the groundwork for making connections. However, analyzing data and answering research questions often leads to new questions. The data provided by the archived grade books is an example of data with the potential to go beyond this project and assist the DASD in answering more questions about its grading practices. Therefore, the next chapter will also recommend further work related to the research project.

Chapter V

Conclusions and Recommendations

This research project answered three research questions related to elementary grading practices in the DASD.
A review of literature provided a historical perspective on grading practices, while the data analysis and results provided the current and specific context of grading practices in the district. The data analysis and results allowed the researcher to discuss connections between the literature, the data results, and the answers to the three research questions. This chapter will discuss additional conclusions of the research, including its effectiveness, how the results supported the findings, the application of the results to the particular school district, and fiscal implications. The chapter will conclude by addressing the limitations of the research and discussing recommendations for future research.

Conclusions

The objective of the project's research was to answer three research questions. The research questions focused on the effectiveness of current DASD elementary grading practices, teachers' perceptions and parents' understanding of current grading practices, and teachers' and administrators' perceptions and understanding of standards-based report cards.

The first research question looked to determine the effectiveness of current DASD elementary grading practices. The researcher defined the effectiveness of current grading practices as knowledge of current grading expectations, consistency in grading practices, and students' mastery of curriculum and eligible content. The Data Analysis and Results chapter concluded that the data did not support knowledge of current grading practices. The data also did not support consistency in work categories, assessment categories, or the number of assessments used in the grade books. Finally, to some extent, the data analysis supported that grading practices reflected mastery of curriculum and eligible content. Therefore, the data minimally supported the elements that defined effective grading practices.

Effectiveness

The survey results indicated that knowledge of current grading practices was not well established. The teacher respondents had divided feelings about district grading practices being clearly defined. Parent respondents agreed more than teachers that they were knowledgeable of grading practices. Most parents felt that they were aware of district grading practices, that teachers explained them, and that they knew how their child's grades were determined.

The surveys were effective in determining the feelings of teachers and parents. However, the low participation rates could call the reliability of the results into question. The participation rate for teachers was 58%, and parents had only a 12% participation rate. The researcher included measures in the survey design and informed consent to maintain confidentiality and minimize the identification of any of the respondents. However, even with these measures, teachers still seemed hesitant to participate. Teachers may not have trusted the confidentiality of their responses, or they might have been fearful of comparisons between school buildings. Finally, they could have felt that participating would change current practices, and change is typically unwelcome.

Similar confidentiality and informed consent measures were in place for the parent respondents. Parents also could have lacked trust in confidentiality or felt their children would be penalized if they shared their honest feelings about grading practices. It is also likely that parents are not always sure their opinions are valued.
The district has distributed surveys on other topics in recent years and has considered feedback from parents when making district decisions. However, parents often conflate having their feedback considered with having their feedback determine the final decision. When parents do not see their opinions leading directly to final decisions, they do not feel valued and hesitate to participate. Any of these feelings could have contributed to the low parent participation rate in the survey. Finally, when it comes to participation in surveys, time is always a factor for teachers and parents alike. In our busy and stressful lives, the 10 or 15 minutes it takes to complete a survey is time most people may feel they cannot spare.

Regardless of the participation rates and the factors contributing to them, the researcher feels the results would likely reflect the feelings of a larger population of teachers and parents. This opinion was due to the teacher participants representing all grade levels and both disciplines of ELA and math. Likewise, the parent respondents represented parents of all grade levels, and there were no significant differences in the parents' responses about grading practices in ELA compared to math. Finally, the researcher felt the variety in the answers to the open-ended survey questions provided confidence that the participants responded with enough variety to represent the larger population of potential respondents.

The second part of the first research question was the analysis of consistency in grading practices. The data analysis did not support consistency in grading practices, and this determination resulted from the analysis of the archived grade books. The grade book analysis was an effective way to analyze the consistency of grading practices. Unlike knowledge of grading practices, lack of participation was not in question with the analysis of the grade books. All available grade books were analyzed, reflecting a 100% participation rate. The grade book analysis resulted in the indirect participation of 49 teachers and the analysis of 122 grade books for work categories, assessment categories, and the number of assessments. Therefore, lack of participation did not put the reliability of these results in question.

The reliability of how the researcher sorted the assessment grades into types of work categories could have been a concern. It would have been beyond the project's scope for the researcher to interview each teacher about the types of work used in their grade books. Therefore, the researcher had to rely on the assessment descriptions in the grade books and familiarity with curriculum and content to sort the assessments as accurately and consistently as possible. As a result, some assessments could have been misplaced, producing variations in the ranges provided in the spreadsheets in Appendix O. There were no such concerns about subjectivity or reliability when organizing the data by assessment categories or counting the number of grade book assessments; these values did not involve interpretation because their analysis involved only simple counting and totaling.

The third part of research question one was determining whether grading practices reflected students' mastery of curriculum and eligible content. The conclusion that the data provided some support for grading practices reflecting mastery of curriculum and eligible content resulted from the analysis of the archived grade books.
The analysis of the grade books was also an effective approach for this part of the question. As with consistency, lack of participation was not an issue because the analysis included all available grade books. The reliability of the results could have been questioned based on the process used to sort the assessments into types of work categories. More specifically, the researcher determined that the only assessments that did not reflect mastery of curriculum and eligible content were assessments placed in the nonachievement factors category. Therefore, misinterpretation errors could have led to inaccurate ranges in the summary spreadsheets. The researcher attempted to be conservative when placing assessments in the nonachievement factors category, using only assessments typically considered to be nonachievement factors, such as bell ringers, exit tickets, work completion, participation, and extra credit. Therefore, any interpretation errors would have underrepresented the nonachievement category and overrepresented mastery of curriculum and eligible content.

The second research question looked at teachers' perceptions and parents' understanding of current grading practices. The data analysis revealed that 46% of teachers agreed that grading practices were clearly defined, 60% agreed that students understood grading practices, and 80% agreed that they explained grading practices to families. The data analysis also revealed that 93% of the teachers agreed they used types of work categories consistent with their colleagues', and 60% agreed they used assessment categories consistent with their colleagues'. Finally, the data analysis revealed that 60% of the teachers agreed that grading practices represented mastery of the curriculum, and 53% agreed grading practices represented mastery of eligible content.

The data analysis also revealed information about parents' understanding of grading practices. The data analysis revealed that 70%–85% of parents agreed that they were informed about and understood grading practices. The data analysis also revealed that 43%–64% of the parents agreed that grades assisted them in understanding their child's progress, areas of growth, and areas needing improvement. The analysis also revealed that only 39% of the parents agreed that grades helped them understand how their child was performing on eligible content related to state standards. Finally, 51% of the parents agreed that they were satisfied, in general, with current grading practices.

The conclusions about teachers' perceptions and parents' understanding of current grading practices resulted from the analysis of survey questions. Effectively answering research question two was not necessarily about answering a single question; it was about forming an overall picture of perceptions and understandings. The surveys summarized teachers' perceptions and parents' understanding of current grading practices and effectively formed this overall picture. Similar to research question one, low participation rates could put the reliability of the results into question. The participation rate for teachers was 58%, and parents had only a 12% participation rate. The researcher included measures in the survey design and informed consent to maintain confidentiality, minimize respondents' identification, and allow them to feel comfortable participating in the surveys.
In addition to low participation rates, the reliability of the results could be questioned based on the researcher's wording of the questions and the participants' interpretations of them. To increase the quality of the survey questions, the researcher reviewed survey questions used by other researchers and solicited feedback from the internal and external committee chairs and colleagues. Despite these efforts, misunderstanding or misinterpretation of the questions could have impacted the final results of the surveys.

Regardless of the surveys' participation rates and possible misinterpretation of survey questions, the researcher feels the results likely reflected the feelings of a larger population of teachers and parents, since the teacher participants represented all grade levels and both disciplines of ELA and math. Likewise, the parent respondents represented parents of all grade levels, and there were no significant differences in the parents' responses about grading practices in ELA compared to math. Also, the patterns in the teachers' and parents' responses shared some similarities: the questions about overall grading practices resulted in greater agreement than questions about more specific aspects of grading, such as mastery of curriculum and content. Therefore, the researcher feels the data analysis effectively answered research question two and represented a larger population of potential respondents.

The third research question looked at teachers' and administrators' perceptions and understanding of standards-based report cards. The data analysis revealed that administrators and teachers agreed at a rate of at least 90% that standards-based report cards document an appropriate amount of quality, easy-to-understand information. They also agreed at a rate of at least 95% that a standards-based report card provides information about students' learned skills and skills needing improvement. Finally, administrators and teachers agreed at a rate of at least 95% that standards-based report cards result in grading and reporting consistency. The conclusions about teachers' and administrators' perceptions and understanding of standards-based report cards resulted from the analysis of survey questions. Like research question two, effectively answering the third research question was not as much about answering a single question as it was about forming an overall picture of perceptions and understandings. The surveys effectively summarized teachers' and administrators' perceptions and understanding of standards-based report cards and formed this overall picture.

Again, the reliability of the results could be questioned based on the level of participation. There were 20 administrator and teacher participants out of 56 potential respondents, for a participation rate of 36%. However, all administrators responded to the survey. There were teacher respondents from all grade levels representing both ELA and math, and there was consistency in their responses. These factors provide confidence that the results represented a larger population.

Applications

The researcher has stated that this research project's purpose is not to make specific grading recommendations to the DASD. The researcher plans to complete the research project, present the findings, and recommend potential changes. However, after being provided the research, the district will make decisions based on other district work, circumstances, and needs.
Nevertheless, there are some reasonable recommendations for the district concerning grading practices. The first recommendation is to review and discuss the research results on grading consistency, define a degree of tolerable inconsistency, and make changes in grading practices to maintain consistency within an agreed-upon threshold. One of the most concerning inconsistencies found in the grade books was the allocation of assessment points to types of work categories. Whether these inconsistencies were artifacts of the descriptions used by the teachers or reflect actual practice, the degree of inconsistency deserves reflection, discussion, collaboration, and action. As an administrator in the district, the researcher cannot accept that students are being assessed differently within a building or between buildings. Parents should have some reassurance that their child's educational experience is similar regardless of the classroom or school building they attend within the district.

Similarly, the inconsistencies in assessment categories and the total number of assessments used in each grade book are recommended for review, reflection, discussion, and action. Assessment categories are a quick way for parents to spot grading inconsistencies because the categories are limited. For example, a parent might ask why their child's assessments were all tests when the assessments for a student in a neighboring classroom were all quizzes. There might be little difference between the actual assessments used, but the parents have no way of knowing that. If they rely on past experiences and grading familiarity, a parent might assume tests are more difficult than quizzes.

Another district recommendation is to consider taking immediate action to review and plan for guidelines about the number of grade book assessments. There were grade books with as few as 10 total assessments and some with over 200 assessments. Given that there were 180 school days in a school year, students took more than one assessment on some days. The district will need to decide on the reasonableness of this practice. Finally, the grade books with the most assessments often had higher percentages of points assigned to the learning category of nonachievement factors. This pattern is another recommended area for the district to reflect on when deciding whether assigning so many assessments to this category is an acceptable district practice.

The researcher recommends combining any work dedicated to changes in grading practices with curriculum, instruction, and assessment efforts. Since grades measure and report students' learning, grading discussions should not be isolated from instructional conversations. The surveys indicated that most teachers felt knowledgeable of current grading practices. Likewise, most parents also felt they were knowledgeable of grading practices and how students earn grades. However, parents' percentages of agreement dropped when they were asked if report card grades provided information about their child's learned skills and mastery of curriculum and eligible content. This scenario is where the researcher feels the district needs to make an important decision. The district needs to decide whether percentage grades will continue because teachers and parents feel they understand these grading practices.
If that is the case, the surveys nevertheless revealed that, despite being knowledgeable of the grading practices, the teachers and parents did not feel the current grading practices provided information about students' learned skills, skills needing improvement, or mastery of curriculum and eligible content. In other words, the surveys revealed that teachers and parents are knowledgeable about students' grades, but the grades are not providing helpful information about students' learning. This notion of the assessment of learning taking a backseat to traditional and arbitrary grading practices was an underlying theme of the Literature Review chapter.

Implications

Concerning grading practices in the DASD, the research implies a conflict between perception and reality. Results for research question two revealed that teachers were knowledgeable of district grading practices and were confident that their grading was consistent with that of their colleagues. However, research question one showed multiple examples of inconsistencies in grading practices. This discrepancy is concerning to the researcher and should be concerning to the district upon review of this research. However, this potential discrepancy between perception and reality has implications beyond the district where the research occurred. This type of discrepancy is the reason data analysis is essential. The researcher shares awareness of this discrepancy to encourage any school district to investigate whether it has similar differences between the perceptions and the reality of its grading practices. This implication was not an intended outcome of the research but was obvious enough to include in the Conclusions section.

Another implication of the research concerns the role of nonachievement factors in grading. The literature revealed that teachers use both achievement and nonachievement factors in their grading practices. They see grading as a way to document academic performance and motivate students (Brookhart et al., 2016). The phrases hodgepodge and kitchen sink have been used to describe grading approaches. These descriptions have emphasized the unpredictable nature of grades, both in what grades represent and in the wide variety of factors used to calculate grades (Chen & Bonner, 2017). For emphasis, Brookhart et al. (2016) stated, "teachers [idiosyncratically] use a multitude of achievement and nonachievement factors in their grading practices" (p. 828). Teachers often feel it necessary to include nonachievement factors in grading, such as effort, improvement, and conduct (McMillan et al., 2002).

When answering research question one, the data analysis drew attention to the use of these nonachievement factors. Nonachievement factors were used more frequently in grades three, four, and five ELA, with grade books having as much as 27%, 28%, and 49% of the assessment points placed in the category of nonachievement factors. In math, nonachievement factors were used at all grade levels but less frequently, with 14% being the highest percentage in any math grade book. These percentages might indicate that teachers struggle to gain compliance, engagement, or work completion without using nonachievement grades. If any part of this assumption is correct, the implication is that teachers need support with classroom management, instructional strategies, behavior management, or assessment strategies.
This information might be helpful for other districts since it suggests that the underlying concern might not be grading practices themselves. The issues manifest in grade book results, but the underlying problems could be rooted in classroom instruction.

Consideration also needs to be given to the open-ended survey responses that contributed to answering research questions two and three. The teacher respondents expressed a need for collaboration time to re-establish grading consistency. Several teacher respondents expressed concerns about grade inflation and about report cards not providing information about students' skills, growth, and areas of needed improvement. These comments show that teacher respondents raised concerns about consistency and nonachievement factors when commenting outside the confines of Likert-type survey questions. These concerns, first revealed in the Literature Review chapter and emerging again in the research data, appear to be timeless concerns about grading.

The parent respondents wanted to bring back paper report cards, stating they dislike looking up report cards on a school information system. Parent respondents also expressed that they would welcome additional opportunities to speak with the teachers about their child's learning rather than rely on report cards. Several parent respondents commented that 20 minutes of conversation at parent conferences in the fall of the school year is inadequate and leaves parents frustrated by a lack of feedback. These comments imply that parents are seeking more communication with teachers. As noted in the Literature Review chapter, elementary teachers tended to see grading as a communication tool between schools and families (Guskey, 2009).

Ultimately, the district will have to decide on the purpose of grading. The research suggests that the current grading practices produce grades that most feel they understand but that are inconsistent in how they are calculated and do not paint a clear picture of students' learned skills or mastery of curriculum and standards. One option is to keep percentage grades while implementing more consistent percentage-based grading practices. Another option is to design and implement common assessments and assign consistent assessment categories to grades to illustrate learned skills, skills needing improvement, and mastery of curriculum and eligible content. The district might also consider more significant grading reform, such as designing and implementing a standards-based report card.

It was not in the parameters of this research project to make recommendations for large-scale change. Still, the research indicated that administrators and teachers felt they understood standards-based reporting. If standards-based reporting is a consideration, the district needs to keep in mind the caution raised about parents and standards-based reporting. The literature suggested that standards-based report cards are most successful when significant effort is made to explain the report cards to parents beforehand. The interviews with schools currently using standards-based grading also stressed the importance of explaining standards-based grading to parents. In the open-ended comments, several teacher respondents warned that although administrators and teachers understand standards-based report cards, parents are probably unfamiliar with them.
Finally, some parent respondents made emotional comments in the survey when questions referenced state standards. The mere mention of standards seemed to trigger a feeling among some parents that the district places standardized testing above all else. The implication that standards-based grading requires preparation and communication, especially for parents, is valuable knowledge for any district considering grading reform.

The fiscal implications of the research will depend on the impact the study has on the district's grading practices. The researcher understands that the research could result in no change. The research results may not justify a need for change, or considering change may not be practical based on the district's circumstances at the time of project completion. The researcher also recognizes that the research could result in the district wanting to make changes, but that the financial implications may create a barrier to any recommended changes.

If the district decides to change grading practices, the fiscal implication will be the need for administrators and teachers to reflect, discuss, collaborate, and plan for change. Changes in grading practices could involve reviewing types of work categories and assessment categories and meeting to create more consistency in assigning grades to these categories. Other change efforts could include the design of common ELA and math assessments in some or all grades. Another option would be more extensive grading reform in the form of a standards-based report card in ELA and math in some or all grades. If the district is unable to complete the necessary work during dedicated professional development days or to bring in substitute teachers to cover classes, summer work sessions will be the next option. Summer work would require paying teachers at a per diem rate of compensation, for an estimated cost of as much as $60,000. Additionally, if the current school information system cannot generate standards-based report cards, the anticipated cost to upgrade technology is estimated to be anywhere from $6,000 to $13,000.

Limitations

The limitations of the research were minimal. One previously noted limitation was in the process used to categorize assessments into work categories; these work categories then fell into larger learning categories. The researcher made assumptions when assigning assessments to categories based on the teachers' descriptions. The teachers had flexibility in their use of assessments and complete discretion in describing them. Because of this, the researcher faced challenges when sorting the assessments. Additionally, the research timeline did not allow time to cross-reference with each individual teacher to ask for clarification on how they described each assessment. A similar limitation existed when the researcher sorted assessments into the nonachievement category, which ultimately determined what percentage of assessments represented mastery of curriculum and eligible content.

Another limitation existed in the statistical measures used to analyze the data. There is no universally accepted statistical measure for consistency, so the researcher used the mean and standard deviation to create cut scores for inconsistent and extremely inconsistent values.
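A minimal sketch of this cut-score approach follows, under the assumption that values more than one standard deviation from the mean are flagged inconsistent and values more than two standard deviations extremely inconsistent; both the thresholds and the sample percentages are hypothetical, not the study's actual criteria or data.

    # Minimal sketch of mean/standard-deviation cut scores for consistency.
    # The shares (percentage of a grade book's points in one category across
    # twelve grade books) and the 1-SD/2-SD thresholds are hypothetical.
    import statistics

    shares = [0, 0, 5, 0, 10, 0, 0, 40, 0, 0, 0, 5]

    mean = statistics.mean(shares)   # 5.0
    sd = statistics.stdev(shares)    # ~11.5

    for label, k in [("inconsistent", 1), ("extremely inconsistent", 2)]:
        low, high = mean - k * sd, mean + k * sd
        print(f"{label}: below {low:.1f}% or above {high:.1f}%")

    # Both lower cuts fall below 0%, a value no percentage can reach --
    # the bounded-data limitation discussed next.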
However, a limitation existed because the data sets were limited to percentages between 0% and 100%. This limitation meant that one standard deviation below the mean often resulted in negative cut scores, which could not occur in the finite data sets. Therefore, the data results found very few occurrences of extremely inconsistent values. This limitation could be handled differently by the DASD or any other school district wishing to evaluate the consistency of its grading practices; districts can determine unique criteria for establishing internal cut scores based on the level of consistency they seek.

Fiscal limitations are unknown to the researcher at this time and are difficult to determine because the researcher is not sure how the district will use the results of this research. Additionally, the noted Pennsylvania Supreme Court decision to permit Washington Township to secede from the DASD has resulted in a significant loss of revenue for the district. This revenue loss has placed a substantial financial burden on the DASD and will have a devastating impact on programming for years to come. Therefore, even if the district wants to act upon the research results, there could be financial limitations on how much work can occur.

The final limitation worth noting is the potential, though likely limited, impact of virtual learning during the 2020–2021 school year. The COVID-19 pandemic affected the 2020–2021 school year, which was the school year from which the downloaded and archived grade books were sourced. It is important to note that the DASD was open for in-person learning the entire 2020–2021 school year, but there were some pre-planned virtual days and a few multi-day closures resulting in virtual learning. There is a chance that the periodic change to virtual learning resulted in changes to teachers' grading practices. The biggest concern would be virtual learning generating additional assessments in the nonachievement categories; for example, teachers could have shifted to more frequent use of completion points during virtual learning. However, the researcher found consistent use of nonachievement assessments throughout the grade books, not just on virtual learning days. Therefore, although virtual learning could have limited the accuracy of the results, the researcher is confident that its impact was minimal.

Recommendations for Future Research

Recommendations for future research begin with the DASD. Assuming the DASD acts on the results of this research project by making changes in grading practices, the district should engage in further research to determine whether those changes produce the intended outcomes. Minimally, the district should revisit the research to re-evaluate consistency and the current discrepancy between grading perceptions and reality.

The results of this research raised some questions about underlying aspects of instruction and whether those aspects impact grading. For instance, do teachers who are successful in classroom management and engagement tend to have fewer assessments and fewer assessments involving nonachievement factors? Does more formative and summative assessment knowledge result in a more consistent assignment of assessments to types of work and assessment categories?
Another suggestion for future research could be to determine the effectiveness of grading practices based on the district's role in establishing the direction and expectations of grading. This research also highlighted a discrepancy between teacher respondents' confidence in understanding grading practices and their lower confidence that the grades expressed students' understanding, skills, and mastery of content. The literature characterized this as the shift, over time, from assessment to grading. Therefore, future research could question whether effective grading practices have any connection or correlation to improved learning.

Another recommendation for future research would be to determine which measures can help bridge the grading gap between teachers and parents. More specifically, what do teachers value most concerning grading? What do parents value most? If a gap exists, what can assist in closing it?

Finally, a recommendation for future research would be to investigate the effectiveness of standards-based report cards. Are standards-based report cards better for reporting learning? If so, why? Additionally, what leads to the successful implementation of a standards-based report card?

Summary

Some survey respondents expressed concern that the survey was designed to produce results that would lead to grading reform in the DASD. Other respondents were enthusiastic about research occurring to evaluate the effectiveness of DASD grading practices. The researcher understands how some might feel the study had a specific agenda, but it was reassuring to see comments that also supported the work. Initially, the researcher stated that the purpose of this project was to provide results to the DASD but not to make specific recommendations. Beyond the DASD, the researcher desired this work to be informative for anyone seeking guidance on grading practices.

This project has provided one school district with valuable information about its current grading practices. The study can also encourage other school districts to evaluate their grading practices. Beyond simply informing, the research may or may not prompt changes to grading practices in the DASD or other districts. Change is hard, and familiarity with traditional grading practices makes change even harder. Even when changing grading practices is under consideration, teachers and administrators often face pushback and even hostility if they try to introduce grading reform. The words entrenched, ingrained, and even "toxic" have been used to describe traditional grading practices (Will, 2019).

The review of literature started by establishing that the journey into the history of grading practices would track how the emphasis changed over time from assessment to grading. This project will not change what history has dictated. However, the researcher hopes the results will guide the DASD in reflecting on its current grading practices and possibly result in positive changes to current practices. If the results motivate others outside the DASD who come in contact with this project to evaluate their current practices, the research will stretch well beyond its original intentions.

References

Behn, R. D. (2003). Why measure performance? Different performances require different measures. Public Administration Review, 63(5), 586–606.
https://doi.org/10.1111/1540-6210.00322

Blakemore, E. (2018, August 22). In early 1800s American classrooms, students governed themselves. History. https://www.history.com/news/in-early-1800s-american-classrooms-students-governed-themselves

Bonesrønning, H. (2004). Do the teachers' grading practices affect student achievement? Education Economics, 12(2), 151–167. https://doi.org/10.1080/0964529042000239168

Brookhart, S. M. (2011). Starting the conversation about grading. Educational Leadership, 69(3), 10–14.

Brookhart, S. M. (2013). The public understanding of assessment in educational reform in the United States. Oxford Review of Education, 39(1), 52–71. https://doi.org/10.1080/03054985.2013.764751

Brookhart, S. M., Guskey, T. R., Bowers, A. J., McMillan, J. H., & Smith, J. K. (2016). A century of grading research: Meaning and value in the most common educational measure. Review of Educational Research, 86(4), 803–848. https://doi.org/10.3102/0034654316672069

Carnegie Mellon University. (2021). What is the difference between assessment and grading? Eberly Center. https://www.cmu.edu/teaching/assessment/basics/grading-assessment.html

Center on Education Policy. (2020). History and evolution of public education in the US (ED606970). ERIC. https://files.eric.ed.gov/fulltext/ED606970.pdf

Chapman, H. B., & Ashbaugh, E. J. (1925). Report cards in American cities. Educational Research Bulletin, 4(14), 289–293.

Chen, P. P., & Bonner, S. M. (2017). Teachers' beliefs about grading practices and a constructivist approach to teaching. Educational Assessment, 22(1), 18–34. https://doi.org/10.1080/10627197.2016.1271703

Cox, K. B. (2011). Putting classroom grading on the table: A reform in progress. American Secondary Education, 40(1), 67–87.

Cross, L. H., & Frary, R. B. (1996, April 9–11). Hodgepodge grading: Endorsed by students and teachers alike [Paper presentation]. Annual Meeting of the National Council on Measurement in Education, New York, NY, United States.

Dexter, F. B. (Ed.). (1901). The literary diary of Ezra Stiles. C. Scribner's Sons. https://www.google.com/books/edition/The_Literary_Diary_of_Ezra_Stiles/au9IAAAAMAAJ?hl=en&gbpv=1

Dover Area School District. (2021). Dover Eagles 2021-2022 annual report to the community. https://www.doversd.org/downloads/annual_reports/2021-2022_annual_report_community.pdf

Durm, M. W. (1993). An A is not an A is not an A: A history of grading. The Educational Forum, 57, 294–297. https://doi.org/10.1080/00131729309335429

Feldman, J., & Reeves, D. (2020). Grading during the pandemic: A conversation. Educational Leadership, 78(1), 22–27.

Finkelstein, I. E. (1913). The marking system in theory and practice. Warwick & York. https://www.google.com/books/edition/The_Marking_System_in_Theory_and_Practice/4wsRAAAAMAAJ?hl=en&gbpv=0

Gallagher, C. J. (2003). Reconciling a tradition of testing with a new learning paradigm. Educational Psychology Review, 15(1), 83–99. https://doi.org/10.1023/A:1021323509290

Grodsky, E., Warren, J. R., & Felts, E. (2008). Testing and social stratification in American education. Annual Review of Sociology, 34, 385–404. https://doi.org/10.1146/annurev.soc.34.040507.134711

Guskey, T. R. (2009). Bound by tradition: Teachers' views of crucial grading and reporting issues [Paper presentation]. Annual Meeting of the American Educational Research Association, San Francisco, CA, United States.

Guskey, T. R. (2013). The case against percentage grades.
Educational Leadership, 71(1), 68–72.

Guskey, T. R. (2020). Breaking up the grade: To make grading more meaningful, course grades should reflect a range of distinct criteria that make up student learning. Educational Leadership, 78(1), 40–46.

Guskey, T. R., Swan, G. M., & Jung, L. A. (2011). Grades that mean something: Kentucky develops standards-based report cards. Phi Delta Kappan, 93(2), 52–57.

Hanson, F. A. (1993). The invention of intelligence. Education Week, 13(2), 40–41.

Hargis, C. H. (2003). Grades and grading practices: Obstacles to improving education and to helping at-risk students (2nd ed.). Charles C. Thomas Publisher.

Hendricks, C. (2017). Improving schools through action research: A reflective practice approach (4th ed.). Pearson.

Hochbein, C., & Pollio, M. (2016). Making grades more meaningful. Phi Delta Kappan, 98(3), 49–54. https://doi.org/10.1177/0031721716677262

Hoff, D. J., & Coles, A. D. (1999). Lessons of a century. Education Week, 18(40), 20–28.

Hooper, J., & Cowell, R. (2014). Standards-based grading: History adjusted true score. Educational Assessment, 19, 58–76. https://doi.org/10.1080/10627197.2014.869451

Hursh, D. (2005). The growth of high-stakes testing in the USA: Accountability, markets and the decline in educational equality. British Educational Research Journal, 31(5), 605–622. https://doi.org/10.1080/01411920500240767

Jongsma, K. S. (1991). Research to practice: Rethinking grading practices. The Reading Teacher, 45(4), 318–320.

Kellaghan, T., & Greaney, V. (2019). Public examinations examined. World Bank. https://openknowledge.worldbank.org/bitstream/handle/10986/32352/9781464814181.pdf

Kunnath, J. (2017a). Creating meaningful grades. The Journal of School Administration Research and Development, 2(1), 53–56.

Kunnath, J. P. (2017b). Teacher grading decisions: Influences, rationale, and practices. American Secondary Education, 45(3), 68–88.

Lok, B., McNaught, C., & Young, K. (2016). Criterion-referenced and norm-referenced assessments: Compatibility and complementarity. Assessment & Evaluation in Higher Education, 41(3), 450–465. https://doi.org/10.1080/02602938.2015.1022136

Madaus, G. F., & O'Dwyer, L. M. (1999). A short history of performance assessment. Phi Delta Kappan, 80(9), 688–695.

Marzano, R. J., & Heflebower, T. (2011). Grades that show what students know. Educational Leadership, 69(3), 34–39.

McCreadie, H. (2017). Pioneers and landmarks in intelligence testing: Identifying the gifted Francis Galton (1822–1911). Assessment & Development Matters, 9(2), 31–35.

McMillan, J. H., Myran, S., & Workman, D. (2002). Elementary teachers' classroom assessment and grading practices. The Journal of Educational Research, 95(4), 203–213. https://doi.org/10.1080/00220670209596593

Muñoz, M. A., & Guskey, T. R. (2015). Standards-based grading and reporting will improve education. Phi Delta Kappan, 96(7), 64–68. https://doi.org/10.1177/0031721715579043

Murray, L. (2013). Bulgaria, Hungary, Romania, the Czech Republic, and Slovakia. Britannica Educational Publishing.

Olson, L. (1995). Cards on the table. Education Week, 14(38), 23–28.

Reeves, D. B. (2008). Effective grading. Educational Leadership, 65(5), 85–87.

Reeves, D., Jung, L. A., & O'Connor, K. (2017). What's worth fighting against in grading? Educational Leadership, 74(8), 42–45.

Schinske, J., & Tanner, K. (2014). Teaching more by grading less (or differently).
CBE Life Sciences Education, 13, 159–166. https://doi.org/10.1187/cbe.cbe-14-03-0054

Schneider, J., & Hutt, E. (2014). Making the grade: A history of the A–F marking scheme. Journal of Curriculum Studies, 46(2), 201–224. https://doi.org/10.1080/00220272.2013.790480

Scriffiny, P. L. (2008). Seven reasons for standards-based grading. Educational Leadership, 66(2), 70–74.

Shoveller, J. (1824). Scholastic education: Or, a synopsis of the studies recommended to employ the time, and engage the attention of youth; a suggestion of the most efficient methods of tuition; and a notice of the authors which may be advantageously used in a scholastic course (2nd ed.). G. and W. B. Whitaker. https://www.google.com/books/edition/Scholastic_Education_Or_A_Synopsis_of_th/F9wGAAAAQAAJ?hl=en&gbpv=0

Spencer, K. (2012). Standards-based grading. Education Digest, 78(3), 4–10.

Starch, D., & Elliott, E. C. (1913). Reliability of grading work in mathematics. The School Review, 21(4), 254–259.

Sundeen, R. (2018/2019, Winter). The troubling history of public education. Old Schoolhouse, 42–46. https://www.thehomeschoolmagazine-digital.com/thehomeschoolmagazine/2019x1/MobilePagedArticle.action?articleId=1446000#articleId1446000

Taylor, A. C. (2007). Grade inflation: An analysis of teacher perception, grade point average, and test scores in one southeastern Georgia high school [Doctoral dissertation, Georgia Southern University]. Georgia Southern Digital Archive. https://digitalcommons.georgiasouthern.edu/etd/414

Tocci, C. (2008). An immanent machine: Reconsidering grades, historical and present. Educational Philosophy and Theory, 42(7), 762–778.

Tomlinson, C. A. (2001). Grading for success. Educational Leadership, 58(6), 12–15.

Townsley, M. (2013). Redesigning grading—Districtwide. Educational Leadership, 71(4), 68–71.

Townsley, M., & Buckmiller, T. (2020). Losing As and Fs: What works for schools implementing standards-based grading? Educational Considerations, 46(1), 1–10. https://doi.org/10.4148/0146-9282.2204

Walsh, W. B., & Betz, N. E. (2001). Tests and assessments (4th ed.). Prentice Hall.

Wiliam, D. (2010). Standardized testing and school accountability. Educational Psychologist, 45(2), 107–122.

Will, M. (2019). Exploring ways to say so long to traditional letter grades. Education Week, 38(20), 23–24.

APPENDICES

Appendix A

Survey #1 Informed Consent

Dear Faculty Member,

As an elementary teacher in the Dover Area School District, you are being asked to participate in a research study to evaluate the effectiveness of current elementary grading practices and to determine perceptions of a standards-based report card in the Dover Area School District. Your participation in the study will help the researcher collect and analyze data to summarize teachers' perceptions of the effectiveness of current DASD elementary grading practices.

What will I be asked to do if I take part in this study?
If you agree to participate in this study, you will be asked to complete an electronic survey. The survey will be available via Google Forms. Participants are asked to engage in selected-response and open-ended questions about the effectiveness of current elementary grading practices.

Where will this study take place?
The survey will be available online via Google Forms. Survey participants can take the survey at a time and location most convenient to them via online access.

How long will the study last?
You will be asked to participate in a survey that will take approximately 10–15 minutes to complete.

What happens if I don't want to participate?
Your participation is voluntary; you can choose whether or not you want to participate in the study. There will be no penalty if you decide not to participate.

Can I quit the study before it ends?
You do not have to participate. If you don't want to participate, please do not complete the survey. Otherwise, by clicking continue, you are giving your consent to participate in the survey. If you change your mind after you start the survey, close the survey before completion, and no survey responses will be recorded.

What are the risks?
There are minimal risks to this study. You will not answer questions of a sensitive nature, and you will not provide personally identifiable information. Settings in Google Forms will be such that the researcher does not collect email addresses from participants. The survey questions may make you feel uncomfortable, as some people do not like to volunteer information or feedback that could be perceived as negative. However, the research will be most meaningful if participants are honest in their responses. Your privacy is important, and the researcher will handle all information confidentially. The study's results will be reported in a way that will not identify you and will not isolate any building's data for scrutiny. The researcher plans to present the study results as a published study and potentially in journals or periodicals.

How will I benefit from participating?
If you decide to participate, you will assist the researcher in better understanding teachers' perceptions of the current elementary grading practices in the Dover Area School District. Benefits may include your perceptions being valued and heard, evaluation of current grading practices, and identification of any potential considerations for improvement.

Will my responses be kept confidential and private?
Yes, the survey responses collected from you will remain confidential, which means only the researcher will see or have access to the data. Again, the study's results will be reported in a way that will not identify you and will not isolate any building's data for scrutiny. Data will be stored on a password-protected secure server, in a locked office, or a combination of both.

Who do I contact if I have questions about this study?
If you have questions about this study, don't hesitate to contact the researcher, Bobbie Strausbaugh, at str6264@calu.edu or 717-487-2291. If you would like to speak with someone other than the researcher, don't hesitate to contact Dr. Todd Keruskin, Faculty Advisor at the California University of Pennsylvania, at keruskin@calu.edu.

I have read this form. Any questions I have about participating in this study have been answered. I agree to take part in this study, and I understand that taking part is voluntary. I do not have to take part if I do not wish to do so. I can stop at any time for any reason. If I choose to stop, no one will ask me why. By clicking continue, you agree to participate in this survey.

Approved by the California University of Pennsylvania Institutional Review Board. This approval is effective 08/27/21 and expires 08/26/22.
Appendix B

Survey #1

DASD: Dover Area School District

DASD Elementary Grading Practices: The grading practices by which report card grades in ELA and math are calculated in the four K-5 elementary buildings of the DASD.

Effectiveness of Current DASD Elementary Grading Practices: Grading practices that are clearly defined, consistent per grade level across all buildings, and reflective of students' mastery of district-approved curriculum and eligible content of grade-level PA state standards.

1. What grade level do you currently teach? Only select one grade level. If you work with multiple grade levels, please select the grade level for which you spend a larger percentage of time.
1 / 2 / 3 / 4 / 5

2. If you teach grade 3, 4, or 5, please select the subject you teach.
ELA / Math / I do not teach grade 3, 4, or 5

3. To what extent do you agree or disagree with the statement: As a teacher, I feel the grading practices I am using for ELA and/or math report card grade(s) are clearly defined for me by district policies and procedures.
Strongly Agree / Agree / Disagree / Strongly Disagree

4. To what extent do you agree or disagree with the statement: As a teacher, I feel the students understand how they earn their ELA and/or math report card grade(s).
Strongly Agree / Agree / Disagree / Strongly Disagree

5. To what extent do you agree or disagree with the statement: As a teacher, I feel I clearly define the grading practices I am using for ELA and/or math report card grade(s) to my students' families.
Strongly Agree / Agree / Disagree / Strongly Disagree

6. To what extent do you agree or disagree with the statement: The assessments I use to determine students' ELA and/or math report card grade(s) are similar to the assessments used by my grade-level colleagues in the DASD.
Strongly Agree / Agree / I do not have the knowledge to answer this question / Disagree / Strongly Disagree

7. To what extent do you agree or disagree with the statement: The assessment categories (classwork, quizzes, tests, homework, participation, etc.) I use to determine my students' ELA and/or math report card grades are similar to the categories used by my grade-level colleagues in the DASD.
Strongly Agree / Agree / I do not have the knowledge to answer this question / Disagree / Strongly Disagree

8. To what extent do you agree or disagree with the statement: I believe my ELA and/or math report card grading practices accurately reflect my students' understanding of district-approved ELA and/or math curriculum. District-approved curriculum is the curriculum posted on the district website.
Strongly Agree / Agree / I do not have the knowledge to answer this question / Disagree / Strongly Disagree

9. To what extent do you agree or disagree with the statement: I believe my ELA and/or math report card grading practices accurately reflect my students' understanding of eligible content of PA grade-level standards. Eligible content of PA grade-level standards is posted on the SAS website.
Strongly Agree / Agree / I do not have the knowledge to answer this question / Disagree / Strongly Disagree

10. To what extent do you agree or disagree with the statement: I believe current DASD elementary grading practices provide families with a clear picture of their child's mastery of district-approved curriculum and eligible content of PA grade-level standards.
Strongly Agree / Agree / Disagree / Strongly Disagree

11. What else would you like to share about DASD Elementary Grading Practices?
Appendix C

Survey #2 Informed Consent

Dear Parent,

As a parent in the Dover Area School District, you are being asked to participate in a research study to evaluate the effectiveness of current elementary grading practices and to determine perceptions of a standards-based report card in the Dover Area School District. Your participation in the study will help the researcher collect and analyze data to summarize parents' understanding of current DASD elementary grading practices.

What will I be asked to do if I take part in this study?
If you agree to participate in this study, you will be asked to complete an electronic survey. The survey will be available via Google Forms. Participants are asked to engage in selected-response and open-ended questions about the effectiveness of current elementary grading practices.

Where will this study take place?
The survey will be available online via Google Forms. Survey participants can take the survey at a time and location most convenient to them via online access.

How long will the study last?
You will be asked to participate in a survey that will take approximately 10–15 minutes to complete.

What happens if I don't want to participate?
Your participation is voluntary; you can choose whether or not you want to participate in the study. There will be no penalty if you decide not to participate.

Can I quit the study before it ends?
You do not have to participate. If you don't want to participate, please do not complete the survey. Otherwise, by clicking continue, you are giving your consent to participate in the survey. If you change your mind after you start the survey, close the survey before completion, and no survey responses will be recorded.

What are the risks?
There are minimal risks to this study. You will not answer questions of a sensitive nature, and you will not provide personally identifiable information. Settings in Google Forms will be such that the researcher does not collect email addresses from participants. The survey questions may make you feel uncomfortable, as some people do not like to volunteer information or feedback that could be perceived as negative. However, the research will be most meaningful if participants are honest in their responses. Your privacy is important, and the researcher will handle all information confidentially. The study's results will be reported in a way that will not identify you and will not isolate any building's data for scrutiny. The researcher plans to present the study results as a published study and potentially in journals or periodicals.

How will I benefit from participating?
If you decide to participate, you will assist the researcher in better understanding parents' perceptions of the current elementary grading practices in the Dover Area School District. Benefits may include your perceptions being valued and heard, evaluation of current grading practices, and identification of any potential considerations for improvement.

Will my responses be kept confidential and private?
Yes, the survey responses collected from you will remain confidential, which means only the researcher will see or have access to the data. Again, the study's results will be reported in a way that will not identify you and will not isolate any building's data for scrutiny. Data will be stored on a password-protected secure server, in a locked office, or a combination of both.

Who do I contact if I have questions about this study?
If you have questions about this study, don't hesitate to contact the researcher, Bobbie Strausbaugh, at str6264@calu.edu or 717-487-2291. If you would like to speak with someone other than the researcher, don't hesitate to contact Dr. Todd Keruskin, Faculty Advisor at the California University of Pennsylvania, at keruskin@calu.edu.

I have read this form. Any questions I have about participating in this study have been answered. I agree to take part in this study, and I understand that taking part is voluntary. I do not have to take part if I do not wish to do so. I can stop at any time for any reason. If I choose to stop, no one will ask me why. By clicking continue, you agree to participate in this survey.

Approved by the California University of Pennsylvania Institutional Review Board. This approval is effective 08/27/21 and expires 08/26/22.

Appendix D

Survey #2

DASD: Dover Area School District

DASD Elementary Grading Practices: The grading practices by which report card grades in ELA and math are calculated in the four K-5 elementary buildings of the DASD.

1. Please select your child's grade. If you have more than one child, feel free to complete the survey multiple times, one time for each child. Your understanding may be different for each grade level.
1 / 2 / 3 / 4 / 5

2. To what extent do you agree or disagree with the statement: I have received information either from the district or my child's teacher about the current DASD Elementary Grading Practices in ELA and math.
ELA: Strongly Agree / Agree / Disagree / Strongly Disagree
Math: Strongly Agree / Agree / Disagree / Strongly Disagree

3. To what extent do you agree or disagree with the statement: I understand the current DASD Elementary Grading Practices in ELA and math.
ELA: Strongly Agree / Agree / Disagree / Strongly Disagree
Math: Strongly Agree / Agree / Disagree / Strongly Disagree

4. To what extent do you agree or disagree with the statement: I am aware of how my child's grades are determined in ELA and math.
ELA: Strongly Agree / Agree / Disagree / Strongly Disagree
Math: Strongly Agree / Agree / Disagree / Strongly Disagree

5. To what extent do you agree or disagree with the statement: Based on my child's report card, I have a good understanding of how my child is performing in ELA and math.
ELA: Strongly Agree / Agree / Disagree / Strongly Disagree
Math: Strongly Agree / Agree / Disagree / Strongly Disagree

6. To what extent do you agree or disagree with the statement: Based on my child's report card, I understand what my child has mastered in ELA and math.
ELA: Strongly Agree / Agree / Disagree / Strongly Disagree
Math: Strongly Agree / Agree / Disagree / Strongly Disagree

7. To what extent do you agree or disagree with the statement: Based on my child's report card, I understand where my child is growing in ELA and math.
ELA: Strongly Agree / Agree / Disagree / Strongly Disagree
Math: Strongly Agree / Agree / Disagree / Strongly Disagree

8. To what extent do you agree or disagree with the statement: Based on my child's report card, I understand what my child still needs to work on in ELA and math.
ELA: Strongly Agree / Agree / Disagree / Strongly Disagree
Math: Strongly Agree / Agree / Disagree / Strongly Disagree

9. To what extent do you agree or disagree with the statement: Based on my child's report card, I have a good understanding of how my child is performing on grade-level ELA and math skills.
ELA: Strongly Agree / Agree / Disagree / Strongly Disagree
Math: Strongly Agree / Agree / Disagree / Strongly Disagree

10. To what extent do you agree or disagree with the statement: When I look at my child's grades, I understand which state standards my child has learned in ELA and math.
ELA: Strongly Agree / Agree / Disagree / Strongly Disagree / I cannot answer this question because I am not sure what is meant by state standards
Math: Strongly Agree / Agree / Disagree / Strongly Disagree / I cannot answer this question because I am not sure what is meant by state standards

11. To what extent do you agree or disagree with the statement: I am satisfied with the present DASD elementary grading practices in ELA and math.
Strongly Agree / Agree / No Opinion / Disagree / Strongly Disagree

12. What else would you like to share about DASD Elementary Grading Practices?

Appendix E

Survey #3 Informed Consent

Dear Faculty Member or Administrator,

As an elementary teacher or administrator in the Dover Area School District, you are being asked to participate in a research study to evaluate the effectiveness of current elementary grading practices and to determine perceptions of a standards-based report card in the Dover Area School District. Your participation in the study will help the researcher collect and analyze data to summarize administrators' and teachers' understanding and perceptions of a standards-based report card.

What will I be asked to do if I take part in this study?
If you agree to participate in this study, you will be asked to complete an electronic survey. The survey will be available via Google Forms. Participants are asked to engage in selected-response and open-ended questions about the perceptions of a standards-based report card.

Where will this study take place?
The survey will be available online via Google Forms. Survey participants can take the survey at a time and location most convenient to them via online access.

How long will the study last?
You will be asked to participate in a survey that will take approximately 10–15 minutes to complete.

What happens if I don't want to participate?
Your participation is voluntary; you can choose whether or not you want to participate in the study. There will be no penalty if you decide not to participate.

Can I quit the study before it ends?
You do not have to participate. If you don't want to participate, please do not complete the survey. Otherwise, by clicking continue, you are giving your consent to participate in the survey. If you change your mind after you start the survey, close the survey before completion, and no survey responses will be recorded.

What are the risks?
There are minimal risks to this study. You will not answer questions of a sensitive nature, and you will not provide personally identifiable information. Settings in Google Forms will be such that the researcher does not collect email addresses from participants. The survey questions may make you feel uncomfortable, as some people do not like to volunteer information or feedback that could be perceived as negative. However, the research will be most meaningful if participants are honest in their responses. Your privacy is important, and the researcher will handle all information confidentially. The study's results will be reported in a way that will not identify you and will not isolate any building's data for scrutiny.
The researcher plans to present the study results as a published study and potentially in journals or periodicals.

How will I benefit from participating?
If you decide to participate, you will assist the researcher in better understanding teachers' perceptions of the use of a standards-based report card. Benefits may include your perceptions being valued and heard and a determination of current understandings.

Will my responses be kept confidential and private?
Yes, the survey responses collected from you will remain confidential, which means only the researcher will see or have access to the data. Again, the study's results will be reported in a way that will not identify you and will not isolate any building's data for scrutiny. Data will be stored on a password-protected secure server, in a locked office, or a combination of both.

Who do I contact if I have questions about this study?
If you have questions about this study, don't hesitate to contact the researcher, Bobbie Strausbaugh, at str6264@calu.edu or 717-487-2291. If you would like to speak with someone other than the researcher, don't hesitate to contact Dr. Todd Keruskin, Faculty Advisor at the California University of Pennsylvania, at keruskin@calu.edu.

I have read this form. Any questions I have about participating in this study have been answered. I agree to take part in this study, and I understand that taking part is voluntary. I do not have to take part if I do not wish to do so. I can stop at any time for any reason. If I choose to stop, no one will ask me why. By clicking continue, you agree to participate in this survey.

Approved by the California University of Pennsylvania Institutional Review Board. This approval is effective 08/27/21 and expires 08/26/22.

Appendix F

Survey #3

DASD: Dover Area School District

Standards-based report card: There is not a universal definition of a standards-based report card. For this survey, the researcher offers the following description: a standards-based report card is a report card that, instead of providing a single overall grade, breaks down the subject matter into smaller learning concepts and provides feedback on those smaller learning concepts.

12. What grade level do you currently teach? (If you work with multiple grade levels, please select the grade level for which you spend a larger percentage of time.)
1 / 2 / 3 / 4 / 5 / Administrator

13. If you teach grade 3, 4, or 5, please select the subject you teach.
ELA / Math / I do not teach grade 3, 4, or 5

14. To what extent do you agree or disagree with the statement for ELA and math: Based on my knowledge of a standards-based report card, I feel it reports an appropriate amount of information.
Strongly Agree / Agree / Disagree / Strongly Disagree

15. To what extent do you agree or disagree with the statement for ELA and math: Based on my knowledge of a standards-based report card, I feel the information it reports is quality information.
Strongly Agree / Agree / Disagree / Strongly Disagree

16. To what extent do you agree or disagree with the statement for ELA and math: Based on my knowledge of a standards-based report card, I feel the information it reports is easy to understand.
Strongly Agree / Agree / Disagree / Strongly Disagree

17. To what extent do you agree or disagree with the statement: Based on my understanding of a standards-based report card, I feel it is a good reporting tool to document learned skills.
Strongly Agree / Agree / Disagree / Strongly Disagree

18. To what extent do you agree or disagree with the statement: Based on my understanding of a standards-based report card, I feel it is a good reporting tool to document skills for which each student needs to improve.
Strongly Agree / Agree / Disagree / Strongly Disagree

19. To what extent do you agree or disagree with the statement: Based on my understanding of a standards-based report card, I feel a standards-based report card is a consistent way to report students' performance.
Strongly Agree / Agree / Disagree / Strongly Disagree

20. To what extent do you agree or disagree with the statement: Based on my understanding of a standards-based report card, I feel a standards-based report card provides families with important information about their student's performance.
Strongly Agree / Agree / Disagree / Strongly Disagree

21. What else would you like to share about your understanding of standards-based report cards?

Appendix G

Interview Questions Informed Consent

Dear District Representative,

As an elementary administrator or teacher who currently uses a standards-based report card, you are being asked to participate in a research study to evaluate the effectiveness of current elementary grading practices and to determine perceptions of a standards-based report card in the Dover Area School District. Your participation in the study will help the researcher collect and summarize data on the perceptions of administrators and teachers currently using a standards-based report card.

What will I be asked to do if I take part in this study?
In this study, you will be asked to answer interview questions via a phone conference, Zoom interview, or in person. The researcher will conduct a formal Literature Review of elementary grading practices, including standards-based report cards at the elementary level. As part of the Literature Review, the researcher wishes to include perceptions from school districts that presently use standards-based report cards.

Where will this study take place?
You will be asked to answer interview questions via a phone conference, Zoom interview, or in person. The researcher will coordinate with you on the time and location of the interview.

How long will the study last?
You will be asked to participate in an interview that will take approximately 20–25 minutes to complete.

What happens if I don't want to participate?
Your participation is voluntary; you can choose whether or not you want to participate in the study. There will be no penalty if you decide not to participate.

Can I quit the study before it ends?
You do not have to be in this study. If you don't want to participate, please let me know. If you do agree to participate, you can stop participating at any time during the interview, and no responses will be recorded.

What are the risks?
You will be asked to participate in an interview related to my research topic. The researcher will document your interview responses. You will be assigned an interview number, and any reference to the interview responses will be made by that number, not by the individual providing the responses. There is minimal risk to participants, as all interview documentation will remain confidential. Your privacy is important, and the researcher will handle all information confidentially. The study's results will be reported in a way that will not identify you.
The researcher plans to present the study results as a published study and potentially in journals or periodicals.

How will I benefit from participating?
The potential benefits to you from being in this study may include sharing effective practices from your school and the potential for future networking and collaboration.

Will my responses be kept confidential and private?
Yes, the interview responses collected from you will remain confidential, which means only the researcher will see or have access to the data. Again, the study's results will be reported in a way that will not identify you. Data will be stored on a password-protected secure server, in a locked office, or a combination of both.

Who do I contact if I have questions about this study?
If you have questions about this study, don't hesitate to contact the researcher, Bobbie Strausbaugh, at str6264@calu.edu or 717-487-2291. If you would like to speak with someone other than the researcher, don't hesitate to contact Dr. Todd Keruskin, Faculty Advisor at the California University of Pennsylvania, at keruskin@calu.edu.

I have read this form. Any questions I have about participating in this study have been answered. I agree to take part in this study, and I understand that taking part is voluntary. I do not have to take part if I do not wish to do so. I can stop at any time for any reason. If I choose to stop, no one will ask me why.

By signing below, I agree to participate in this study. By doing so, I am indicating that I have read this form and had my questions answered. I understand that it is my choice to participate and that I can stop at any time.

Printed Name: ___________________________________________________
Signature: ______________________________________________________
Date: ___________________________

Approved by the California University of Pennsylvania Institutional Review Board. This approval is effective 08/27/21 and expires 08/26/22.

Appendix H

Interview Questions

1. What have you liked about standards-based report cards?
2. What challenges have come up with standards-based report cards?
3. What is your perception of changes in classroom instructional practices you have noticed with the implementation of standards-based report cards?
4. What is your perception of changes in assessment practices you have noticed with the implementation of standards-based report cards?
5. What is your perception of changes in student learning you have noticed with the implementation of standards-based report cards?
6. What are your thoughts on parents' perceptions of standards-based report cards?
7. In your opinion, what factors are necessary for the effective use of standards-based report cards?
8. Are you willing to share the format of your standards-based report card?
9. What else would you like to share about standards-based report cards?
Note: A summary of the interviews will be included in the Literature Review.

Appendix I

District Letter of Support

Appendix J

District Consent to Access Data

Appendix K

IRB Approval

Appendix L

Educational Research Course Certificate

Appendix M

Conflicts of Interest Course Certificate

Appendix N

Grade Books Summary Tables

Table N1. Summary of Grade 1 Grade Books
Table N2. Summary of Grade 2 Grade Books
Table N3. Summary of Grade 3 Grade Books
Table N4. Summary of Grade 4 Grade Books
Table N5. Summary of Grade 5 Grade Books

Appendix O

Grade Books Analysis Spreadsheets

[Analysis spreadsheets for ELA and math, grades 1 through 5. For each downloaded grade book, the spreadsheets report the percentage of the grade by type of student work (e.g., unit assessments, cumulative assessments, grammar, writing, vocabulary, math facts, online tools, PSSA practice, projects, homework, and nonachievement factors), the assigned grade book categories (tests, quizzes, classwork, homework, projects), and the number of assessments, with each value rated Consistent, Inconsistent, Extremely Inconsistent, or Cannot be analyzed. The spreadsheet values themselves are not reproducible in text form.]