
A Simple-ish System for Standards-Based Grading

posted Feb 5, 2016, 12:26 PM by Mark Schober   [ updated Sep 12, 2017, 8:20 PM ]
The best advice for implementing SBG is to keep it simple. A complex system can do more harm than good if it mires you in paperwork or is too hard for students to understand. After several years of revisions, my system now works smoothly for me and my students and provides the benefits that attracted me to SBG in the first place. So here is an explanation of my standards-based grading logistics.

The screenshot below is part of my objectives and grading sheet for my 9th grade physics course. (Clicking the image will open up the full pdf of the objectives and grading sheet.) I've packed all of the key elements for the grading system onto one double-sided sheet of paper. Each student has this sheet in their binder so that they can track their progress, and I keep a sheet like this for each student as my gradebook. So that's one of the simplifications: a paper gradebook. I found that keeping track of all of the individual pieces of data on the computer wasn't worth the effort (I had used ActiveGrade). Before the end of each marking period, I make a copy of my gradesheets and hand them out to the students so that we can rectify any discrepancies. 
https://drive.google.com/open?id=0B0zTEgmv0I9UZ0pZY25pQUpCMTA

The learning objectives (standards) I use are listed down the left-hand side of the page. I started from objectives that others had written, and I keep editing them to better fit my course. I try to make each objective sophisticated, clear, and broad so that each can apply in multiple models. For example, when we get to unbalanced forces, only two new objectives are added, but objectives from balanced forces and uniform acceleration are also assessed. This keeps the number of objectives small, and it also helps students see the connections between units of study. The idea is to write an objective for anything that you value and want the students to value. Therefore, I have three laboratory objectives for assessing lab work (which I describe as take-home quizzes). I've grouped computational accuracy, significant figures, and units into an objective I call "Details" for those situations when students clearly understand the content objective but have slipped on one of these other problem-solving skills. Finally, I have a "Synthesis" objective that requires multiple-model problem solving. Standards-based grading can sometimes become very reductionist, and this helps to address that issue. 

Students complete quizzes about once a week that I announce in terms of the objectives assessed on the quiz. Once students finish their quiz, I give them a colored pen and an answer key to mark their own quizzes with corrections and annotations. This gives them the instant feedback they crave, and it also forces them to reflect on their current state of understanding. The relevant objectives are listed at the end of the quiz, where I ask students to self-rate their work on each objective with either a "P" for proficient or an "L" for learning. I then collect the quizzes, add my comments to their work, and make my ratings for each objective. The simplicity of a binary grading system makes record-keeping easier -- it's either good enough or it isn't -- and there are no multi-level rubrics for each objective. There are good arguments for the complexity of more rating levels (see Bob Marzano's work), but with a highly motivated student body that consistently performs well on assessments, the binary system has worked well for us.

The weekly-ish whole-class quizzes are open-ended, a bit hard, and push the students. Most quizzes look like this: thoroughly represent what is going on in a given problem situation and solve for everything you can to convince me that you understand the concept. Each of the whole-class quizzes is numbered starting from 1, and for multiple class sections I number alternate versions of the quiz 1a, 1b, 1c, and so on. The goal is for students to become proficient with every objective, and some students need more practice than others before they can successfully demonstrate their understanding. Therefore, students are welcome to take extra quizzes, after demonstrating their practice, as often as needed. Students can always fill in missing proficiencies from earlier marking periods as well. I've developed an arsenal of extra quizzes that are grouped according to clusters of related objectives. Quizzes are named with a letter for the cluster followed by the quiz number. For example, the cluster of objectives related to quantitative problem solving with unbalanced forces is G, so these extra quizzes are named G1, G2, and so on. The short name for each quiz makes record-keeping easier.
https://docs.google.com/forms/d/1pwj2otk9UF-F8zGICfcvHMMvNcX5eovSQFwK5CTLE5s/viewform



When students are ready to take an extra quiz, I ask them to sign up through a Google form. (Extra Quiz Request Form) Students select which cluster of objectives they want to assess, choose when they want to take the assessment, tell me how they practiced, and reflect on what skills they have improved. The quiz request could be done on paper instead, but I've programmed a Google Apps Script (with help from John Burke) that takes information from the form submission and sends an email confirmation to the student and to me, and also creates a calendar item including the student name and quiz cluster. This makes it easier for me to print out the set of extra quizzes each morning as I'm preparing for the day. All the quizzes I give have the date auto-inserted into the header, so when I print quizzes out, the date is already there. 
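For readers who want to build something similar, a form-submit trigger along these lines could handle the email and calendar steps. This is only a sketch, not the actual script: the field order, the teacher address, and the `buildConfirmation` helper are all my own assumptions.

```javascript
// Hypothetical sketch of a Google Apps Script "On form submit" trigger.
// Column order and the teacher address are placeholders, not the real setup.

// Pure helper: turn the form answers into confirmation text.
function buildConfirmation(name, cluster, dateString) {
  return {
    subject: "Extra quiz request: cluster " + cluster,
    body: name + " will take a cluster " + cluster + " quiz on " + dateString + ".",
    eventTitle: name + " - cluster " + cluster + " quiz",
  };
}

// Installed as a spreadsheet trigger on the form's response sheet.
function onFormSubmit(e) {
  // e.values holds the answers in column order (assumed here):
  // [timestamp, name, email, cluster, date, practice, reflection]
  var name = e.values[1], email = e.values[2];
  var cluster = e.values[3], dateString = e.values[4];
  var msg = buildConfirmation(name, cluster, dateString);

  // Confirm to both the student and the teacher.
  MailApp.sendEmail(email, msg.subject, msg.body);
  MailApp.sendEmail("teacher@school.org", msg.subject, msg.body);

  // Calendar entry so the quiz shows up in the morning print run.
  CalendarApp.getDefaultCalendar()
    .createAllDayEvent(msg.eventTitle, new Date(dateString));
}
```

Keeping the text-building in a pure helper makes the trigger logic easy to check without actually sending mail.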


Every bit of assessed student work goes through a double-sided page scanner (Fujitsu's ScanSnap) and is sent to Evernote. I keep an Evernote folder for each student and sort the scanned files into their folders. The result is a portfolio of each student's work. The students should also have a portfolio of their work in their binder -- as long as they keep things organized. When the students get their quizzes back, for each proficiency they earn on an objective, they record the quiz number in a blue box next to that objective. The number of blue boxes indicates how many times I want to see a proficient score on each objective. For example, I want to see multiple proficient scores on fundamental ideas and skills such as Newton's third law and using graphical representations to solve accelerated motion problems. Late in the year, when there is less time for reassessment, a single proficiency is sufficient. Even though I want to see many proficiencies on the details objective, the large number is mainly to keep students focused, as these proficiencies are not hard to earn. Proficiencies on the synthesis objective are what distinguish the students who know all of the basic concepts in the course from those who can use those concepts to solve novel problems. Therefore, the number of blue boxes, or expected proficiencies, is chosen so that the number of earned proficiencies out of the number of expected proficiencies forms a percentage that can be converted into a grade. The transparency in how proficiencies translate into a final grade is very comforting to the students. Every quarter grade is a progress report that culminates in the year-end grade -- the only grade our school displays on a student's transcript.
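To make the arithmetic concrete, here is a small illustration of how the blue-box tally could convert to a percentage. The objective names and box counts are invented for the example; only the earned-out-of-expected idea comes from the system described above.

```javascript
// Hypothetical sketch of the proficiency-to-percentage arithmetic.
// expected: how many blue boxes each objective has on the gradesheet.
// earned:   how many proficiencies the student has recorded so far.
function gradePercent(earned, expected) {
  var earnedTotal = 0, expectedTotal = 0;
  for (var objective in expected) {
    expectedTotal += expected[objective];
    // Proficiencies beyond the blue boxes don't add extra credit.
    earnedTotal += Math.min(earned[objective] || 0, expected[objective]);
  }
  return Math.round(100 * earnedTotal / expectedTotal);
}

// Invented example: 8 of 10 expected proficiencies earned.
gradePercent({ "N3L": 3, "Details": 4, "Synthesis": 1 },
             { "N3L": 3, "Details": 5, "Synthesis": 2 }); // → 80
```

Because every box is visible on the sheet, a student can run this same calculation by hand at any point in the marking period.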

With all of their work scanned, I don't have to immediately record each student's work on my copy of their objectives and grading sheet. When I go through the Evernote folder of their assessments, it's easy to record the quiz numbers into the blue boxes where the kids have earned proficiencies, and it's easy to count up the number of proficiencies earned in order to calculate the grade. Every student ends up taking a different set of quizzes depending on the extra quizzes they take, so I keep a running list of the quizzes taken on the front of the objectives/gradesheet. This also helps me not to give them the same extra quiz twice. 

No grading system is perfect, and this one isn't either, but students see how their work translates into their grades, building up from zero rather than down from 100. Taking risks is encouraged - there's no penalty for wrong answers, and even if a synthesis problem isn't answered perfectly, students demonstrate understanding of many other objectives along the way. Students aren't stressed out by assessments and really do see them as opportunities to show what they know. It helps them to clearly see that I'm on their side as they grow in their skills.

Thanks to the many people from whom I've learned and gained ideas and advice! A few in particular: Kelly O'Shea, Seth Gunials-Kupperman, Sammie Smith, and Frank Noschese.
