79

We're thinking of implementing a policy where if a student asks a question/makes a mistake in a lab, they are required to write down what that question/mistake was and how it was resolved. These "lab notebooks" (for want of a better word) will be evaluated based on the quality of the responses to the problems, not the problems themselves. For instance, if the problem was relatively trivial ("We didn't turn the device on") but the response is great ("we came up with a checklist that we will follow to ensure that the basic setup is done every time") then the submission would be evaluated with a high mark.

For the majority of our students, there will be more than enough stuff in each lab to write about. However, this isn't universally true and some groups will do the lab just fine without asking any questions. Whether they make mistakes or not is an open question, but there really are some groups who just get what's going on, take their data and do very well without needing to interact with us at all. And there's the problem - these students would have an empty notebook because they didn't need to go back and check things out. Would they get a 0 on that part? That doesn't sound fair at all.

How can I reward the students who learned from their mistakes without penalizing those who don't make any?

Michael Stachowsky
  • 9,768
  • 5
  • 36
  • 45
  • 66
    Just grade them based on outcomes. In my opinion, the issues you're running into are a clear sign that you're assigning a task that is not worthwhile ("busywork"). If they are graded based on results, they will be motivated to learn from their mistakes in order to produce good results. – David Ketcheson Apr 10 '18 at 13:12
  • 6
    I'm willing to concede that after a bit of discussion. We are noticing a few things. First, students are asking the same questions over and over without really learning from it, and second students' first reaction is to ask us before trying to solve the problem themselves. We feel that the best way to address that is to have students record their issues and write the responses in their own terms. I don't necessarily feel that's busywork, but I'm willing to consider other options to my problem – Michael Stachowsky Apr 10 '18 at 13:18
  • 4
    This sounds like « partial marking », i.e. awarding points for application of the method even if the final answer was incorrect. – Solar Mike Apr 10 '18 at 13:35
  • 1
    @SolarMike it goes a little bit beyond that. In our labs, currently the only thing we mark is the end results. Nearly every group gets these results "correct" (ie: they get reasonably good marks, not necessarily through good experimentation). We want to assign grades, ideally, to what they learned of the process. I don't want to penalize a student for bad process if they learned from it, but I do want to reward them for thinking about their process as much as their results – Michael Stachowsky Apr 10 '18 at 13:38
  • 2
    Give 100 bonus points to those that don't need help/make no mistake. Those who make mistakes receive 0-99 bonus points depending on this "lab notebook". Wouldn't this be an easy solution? – Bakuriu Apr 10 '18 at 18:25
  • 1
    @DavidKetcheson If that "busywork" teaches them how to better document, journal and problem solve? Especially if the "journey" is more important than the "destination". I'd consider this a variation of "Show your work". 8x + 7 = 47... x = 5? Great! How'd you get there? Just because you "know" the answer doesn't mean you get a pass for simply writing it down... – WernerCD Apr 10 '18 at 18:27
  • 19
    "students' first reaction is to ask us before trying to solve the problem themselves" Not an answer, but taking a page out of Stack Exchange, ask them: What have you tried? If their effort was insignificant or null, look at them with a dead stare and answer them, in a robotic voice: "Your question has been closed due to evident lack of minimum research" – xDaizu Apr 11 '18 at 08:27
  • 1
    @xDaizu: we're going to be doing something really similar. The TAs are only going to show them where in the lab manuals to look. The lab manuals are pretty extensive but still under testing, so I expect that the first time we run this (next semester) we may have to be less strict – Michael Stachowsky Apr 11 '18 at 11:18
  • 6
    "First, students are asking the same questions over and over without really learning from it, and second students' first reaction is to ask us before trying to solve the problem themselves." This gives me the impression that when they ask a question, you give the answer. Try to lead them to the answer instead. – the_lotus Apr 11 '18 at 15:49
  • 2
    A different idea: if you want to reward them for learning from their mistakes, how about giving them extra points the first time they correctly answer a question that's of the same type as one they previously got wrong? As long as the bonus points are less than what they lost in the first place, there's no way to game the system by selectively getting things wrong. – BlueRaja - Danny Pflughoeft Apr 11 '18 at 20:58
  • @BlueRaja-DannyPflughoeft that's a good idea, actually. It might take some effort to set up in terms of grading time, though. Any thoughts on how to get around that? We do use an online tool for some grading but I don't think it's sophisticated enough. For what it's worth, we have about 400 students and 12 TAs – Michael Stachowsky Apr 11 '18 at 21:01
  • 2
    There's a tacit assumption that rewarding students for learning and penalizing them for not learning is the proper business of those who teach. If that assumption gets explicitly rejected by students and teachers, that would avoid some avoidable problems. – Michael Hardy Apr 11 '18 at 22:57
  • 2
    Urgh. I remember this from my Chemistry coursework for GCSE. As one of those who actually "measured twice, cut once", there was often very little that went wrong. As this was part of the marking criteria, I actually had to make up mistakes to which I could provide corrections in order to get full marks - and I wonder why I don't like writing up results these days. – Baldrickk Apr 12 '18 at 15:46
  • Re: the above, my teacher actually instructed me to do this, along with a small group of others. – Baldrickk Apr 12 '18 at 15:47
  • 1
    And that's what I'm trying to avoid. I don't want to get students to make fake mistakes in order to get marks – Michael Stachowsky Apr 12 '18 at 16:05
  • 1
    "How to reward students (increase their grade, spend more time teaching/mentoring them) for learning from mistakes without penalizing (not lowering their grade, not taking time away) those who didn't make mistakes in the first place?" Not possible without grade inflation or you working more. I see no true answer to the title question. – chux - Reinstate Monica Apr 14 '18 at 00:33
  • I might have to agree. Great points have been raised but it's probable that the question itself is not the right one to ask – Michael Stachowsky Apr 14 '18 at 01:51
  • 1
    To be fair (and to avoid encouraging students to intentionally make mistakes), the marks awarded for correcting an error should never exceed the marks lost by making that error in the first place. – John B. Lambe Apr 15 '18 at 19:39

10 Answers

67

At face value, it seems clear that the issue is that you're trying to mix measurements of two very different kinds of performance. I think the REAL issue is deeper than this, but let's start with the two things you're mixing up:

  1. You want to measure outcomes. Did the group end up with "correct" results from the lab?
  2. You want to measure a specific (learning) process. Did the group learn from mistakes and questions?

The first measurement is usually the default in classroom settings because it's the most obvious. The second measurement has value, though: You're trying to prepare students for life, not just hand them a set of answers. Measuring their ability to learn, and encouraging them to be learners, is important.

That said, measuring outcomes has the advantage that it's not dependent on the process. It's easy to see if the lab result is correct or not. I don't think that's the focus of your question, so let's concentrate on the second concept - measuring the learning itself.

That brings us to the core issue with your proposed process: Measuring learning from mistakes and questions is very, very different from measuring learning in general. You hinted at this when you said,

For the majority of our students, there will be more than enough stuff in each lab to write about. However, this isn't universally true and some groups will do the lab just fine without asking any questions.

The issue here is that you've singled out a single type of learning to measure. The students who learned by failure and questioning are correctly being evaluated by your grading approach. This is good - you're encouraging that type of student to hone their skills. So let's look at the second group - the students who "did the lab just fine without asking any questions." The important thing to remember is that they weren't born knowing how to do the lab - they learned it too, just in a different way than the question-askers and mistake-makers. The reason why your new approach lacks fairness (as you identified yourself) is that it's only attributing value to one type of learning.

So - is it a bad idea to grade your students based on the questions they ask and the mistakes they make? If you're trying to grade them on their learning process, which is what I think you're trying to do, you need to make sure you're accounting for all types of learning, not just the one style. This will be really challenging - people learn in multiple ways and transition between learning styles in a fluid manner, and many learning styles don't leave behind a "paper trail" you can measure after the fact. So let's get to your specific question:

How can I reward the students who learned from their mistakes without penalizing those who don't make any?

I may be frame-challenging your question, but if you allow me that liberty, I would rewrite it as:

How can I encourage students to be aware of, and work on improving, their learning skills, versus just rewarding them for getting the right answer to a specific assignment?

I think the answer to this will be much broader than a single Stack Exchange question, but here are some ideas to think about:

  1. Include a discussion/presentation during/after the lab where a selected group walks the other groups through how they approached the lab, with a focus on how they prepared or learned the material beforehand and how they dealt with issues/challenges.
  2. When a group asks a question during the lab, let the other lab groups answer it (instead of you). When an answer is given, have the answering group explain where they got the answer from.
  3. Include an entire lab session, or elements of a lab, that are not part of the graded solution and are deliberately "unanswerable" in that they cover material you haven't taught or are otherwise deliberately difficult. Have the students come up with their own way to resolve this. This could even be a lab where the process intentionally causes failure, which would "force" the students to use your "learning notebook" as a tool to document and explain the failure and their approach to solving it. Purposefully cause them to go off-script and respond to the failure with creativity.

Basically, you need something that accounts for multiple learning styles, and something that encourages accountability instead of just allowing students to "cheat the system" by pencil-whipping busywork in order to get a grade. Notice I left your word choice of "reward" out of my re-worded version of your question - I think awareness and demonstration are probably more important than a reward, not least because a reward just causes students to focus on the end rather than the means, which totally dodges your goal of getting them to focus on learning skills versus just a final result.

Edited to add:

Let me further add: I don't think the "learning notebook" is a bad idea. It actually sounds like a great way to encourage people to recognize failure and accept it as part of a (positive) learning process, versus being afraid of failure. It would be great to include along with my three suggestions above. That said, I do think it becomes cumbersome to try to include it as a graded item, and more importantly, regardless of whether you grade it or not, focusing on it (as a graded item) misses the larger opportunity to consider and encourage other learning styles.

dwizum
  • 2,327
  • 11
  • 11
  • 1
    Very good points and a lot to think about. Thank you! – Michael Stachowsky Apr 10 '18 at 15:00
  • 8
    As a student, I would have been really frustrated by your proposed approach. I didn't like to ask questions or show failure in a lab setting. I needed to learn it upfront and then just execute - I needed to be an expert. My suggestions 1 and 2 would have been really rewarding and reinforcing for me, and my third suggestion would have caused me to face my own weakness of getting caught off guard by issues. Having a mix like that should encourage and challenge everyone versus just a specific subset of the students. – dwizum Apr 10 '18 at 15:10
  • 4
    Students need to become comfortable with failure—and the earlier the better. The longer a student goes without learning from failure the harder it will be for them to deal with once it comes up. If you have students who never fail, I would simply remove that potential score from their grade. Oversimplified, students who fail, but don't learn from it (or don't demonstrate how they learned from it) would get, say, 0/100, students who showed superb learning from failure would get 100/100, students who continue not to fail would get 0/0. That makes their other work more heavily weighted. – Rubellite Fae Apr 11 '18 at 02:44
  • @RubelliteFae - I agree that students need to learn from failure - but I don't agree that a grading approach should depend on it. Mainly as I said because it doesn't incent students to develop other learning styles. To put it differently: it's unfair to students who "fail" or learn outside of the graded setting. As a student, I had lab classes where I got perfect grades on every lab. What would my score have been with your approach? Should I not get credit for the work I put in to learning and preparing for those labs? – dwizum Apr 11 '18 at 12:32
  • I tried to indirectly say this in my answer: I think it's appropriate to grade outcomes. I don't think it's appropriate to grade learning processes (mainly because it's hard to come up with an approach that's equal and fair to everyone) - you need to find other ways to incent or encourage development of learning processes. That's why none of my three suggestions at the end of the answer focus on assigning a grade, but rather just on direct participation/encouragement of learning. – dwizum Apr 11 '18 at 12:34
  • @dwizum As stated, under the scenario you presented you would receive for example, 250/250 for the work proper and 0/0 for the "learning notebook." It wouldn't harm your grade, it would just more heavily weight your other grades. – Rubellite Fae Apr 11 '18 at 13:31
  • @RubelliteFae My point is: You're giving me a null grade because I learned in a different way than some others. Yes, it doesn't directly bring down my average, but it changes weighting significantly, which can definitely influence the final grade. Should the students who needed the notebook get a 0/100 or 0/0 for "not studying ahead of time?" For me, you're lumping learning and outcome together. For others, you're evaluating those separately. That doesn't feel right to me. And at the end, I don't think the grade should be what matters, but rather recognizing and encouraging ALL styles. – dwizum Apr 11 '18 at 13:43
  • Let me add - I appreciate the comments and I'm going to edit my answer to add some clarity to my thoughts around the "learning notebook." – dwizum Apr 11 '18 at 13:48
  • I am okay with lumping those two together. After all, if the student did their lab perfectly, the extra weight can't lower their grade. If they got an A, but not a perfect A, it would, but very slightly. As long as it is clear at the beginning of the course it seems perfectly fair—and honestly is something teachers should be doing from a very young age. Alternative styles are fine until they aren't. So, those who experiment w/ different styles will find out what works best in which situations & become comfortable learning from failure. IMO, failure shouldn't have negative connotation. – Rubellite Fae Apr 11 '18 at 14:04
  • 1
    I've framed the lab notebook in my own mind as a deviations notebook. You're given essentially an SOP on how to get from raw materials to a product in student labs and the "answers" are the result of in-process testing to ensure you obtained the right product. Deviations state that some step(s) of the SOP weren't followed normally, and include an impact analysis, corrective/preventative actions and a final disposition or resolution. It's a great way to look at how a mistake was made, whether it's your fault or not, and determine how it was handled. I'd encourage group discussion on deviations. – CKM Apr 11 '18 at 16:35
  • 1
    It's a great way to highlight good documentation practices in the laboratory setting, laboratory writing, and the importance of quality control/assurance without making anyone feel smaller for it. It's also a great way to get students thinking critically about the experiment: if you can discuss as a group how a mistake at step A, B or C might change the product, or ways to prevent or correct future mistakes, they're also thinking outside the comfort zone of the typical lab handout packet. – CKM Apr 11 '18 at 16:40
  • @CKM Great comments. It also strikes me that a prof could introduce the idea of such journaling by making the first assignment one that will always "fail." – Rubellite Fae Apr 12 '18 at 03:49
  • Yes! That's the sort of thing I was thinking of with my third suggestion above. That way, it's not "optional" and it's part of a bigger approach that might encourage lots of styles versus focused on "learn by failure." I will edit the answer to reflect this. – dwizum Apr 12 '18 at 12:42
24

You could use a "point recovery" system. I haven't seen anyone else describe it, and I don't know if there is a formal name for it. Anton's answer is somewhat similar in practice.

After each lab, students are offered an opportunity to recover a portion of the missed points from that lab. This opportunity can be a notebook review, a test/quiz on the lab material, a followup analysis on their previous lab work, or an optional lab exercise. It could even be a group project, similar to Michael's suggestions.

You can choose a proportionate cap (max XX% of missed points), an absolute cap (max score of XX), or both. Setting an absolute cap will deter stronger students from participating in the optional work, as they have less to gain. You decide whether that is desirable.

Setting the "right" caps on point recovery is a bit of trial and error. The best values will depend on how stringently your labs are scored and how much you want to encourage participation. If you choose to allow "up to 85% of missed points" with "no maximum on final score", you would allow any student to achieve a B with excellent followup work---even if they missed the lab entirely. (A reasonable way to accommodate absences, too.) In this case, even an A-level student could accrue a few extra points by participating.

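To make the cap arithmetic concrete, here is a minimal sketch in Python. The function name, the 0-100 scale, and the example numbers are illustrative assumptions, not part of the original proposal.

    def recovered_score(original, followup_quality, recovery_rate=0.85, absolute_cap=None):
        # original:         points earned in the lab itself, on a 0-100 scale (assumed)
        # followup_quality: grader's rating of the follow-up work, 0.0-1.0 (assumed)
        # recovery_rate:    proportionate cap, e.g. 0.85 = "up to 85% of missed points"
        # absolute_cap:     optional absolute cap on the final score, e.g. 85
        missed = 100 - original
        recovered = missed * recovery_rate * followup_quality
        final = original + recovered
        if absolute_cap is not None:
            # the absolute cap limits what recovery can add, but never takes
            # away points that were already earned in the lab itself
            final = max(original, min(final, absolute_cap))
        return final

    recovered_score(0, 1.0)                     # 85.0 - missed the lab, excellent follow-up
    recovered_score(92, 1.0)                    # 98.8 - a strong student still gains a little
    recovered_score(92, 1.0, absolute_cap=85)   # 92.0 - an absolute cap removes that incentive

The last two calls show the trade-off described above: with only a proportionate cap, strong students still have a small incentive to participate; an absolute cap removes it.
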
Ideally, you would look for some evidence that the student grasps material which was not evident from their performance in the lab. This makes notebook reviews or self-analysis of their lab work better options; you can award points specifically for addressing issues with their original lab work. The self-comparison is key for improvement. Otherwise, you can expect to see a lot of students with 50-60% lab scores come in and get around 60% on a followup quiz without much real improvement. If you want improvement, you need to measure exactly that, and only that.

The biggest difference between this method and Anton's approach is that this method favors students who mastered the material initially---they will end up with the highest scores in the group most of the time. As before, you would have to decide whether this is desirable. Should their faster uptake or lesser need for instruction be a factor in their final grade? Is this program or field highly competitive in nature?

DoubleD
  • 877
  • 5
  • 7
  • 4
    This is what I was thinking, basically not doing the notebook doesn't hurt you if you've done the lab flawlessly but having the notebook can save you from a less than flawless performance grade. – Dean MacGregor Apr 11 '18 at 18:02
  • 1
    I had a professor that offered something similar: the "revised" version of your lab would be graded and weighted at 50% of the difference: 50% on the original, if revised to 100%, would cap at 75% final, but an 80% could be increased to a 90% with a solid revision. From what I recall, >50% of students opted to submit revised versions of their reports. – TemporalWolf Apr 12 '18 at 17:54
  • "Should their faster uptake or lesser need for instruction be a factor in their final grade? Is this program or field highly competitive in nature?" Realistically, when you have a job, your won't get many do-overs. Even in an academic position, getting stuff wrong is not good for anyone. Someone who gets things right the first time is more valuable to whomever they serve, and there's nothing wrong with grades reflecting that. Love the idea. +1 – jpmc26 Apr 14 '18 at 02:21
10

The confusion I see with this is that different groups are going to have different problems. This makes comparison of responses difficult because of the variety in the responses. This is in addition to people being able to skip this altogether. I recommend the following.

  • Have a set of common problems that every group responds to. There are probably common mistakes that happen every year. Make everyone explain these whether they made the mistake or not. If they didn't make the mistake, it reinforces proper behavior while preventing others from making the same mistake.
  • For those who do make a real mistake, set parameters for what to report. For example, maybe they only have to share 2-3 mistakes rather than report everything. A criterion for judging significance would be useful as well; for example, if someone forgets to turn the power on, maybe that shouldn't require an explanation.
  • Lastly, a rubric that delineates exactly what is expected would eliminate the confusion and laziness. An actual example would further help the students as well.

Darrin Thomas
  • 6,993
  • 3
  • 26
  • 50
9

I'm answering this from the point of view that, in professional software development, a log book is an important tool for the long-term maintenance of a product.

Instead of focusing just on "learning from mistakes", which potentially alienates students who feel they have not made any mistakes, you want to have students build the log book as their "bible of information for the next developer".

This means documenting:

  • Potential pitfalls that were discovered or researched
  • Solutions that were ruled out for one reason or another
  • Areas that can be looked into further, if more time was allocated
  • Thoughts on potential improvements that could be made
  • Mistakes and major learning items that were found
  • Any thoughts or remarks about the work done
  • Justifications on why an approach was taken, over another one

Importantly, not everybody's log book will contain all of these. But these are the kinds of elements that can provide real value to another developer on a real-world problem. The idea is to track everything that might be valuable in the future - to avoid duplicated work and to help communicate potential ideas for where things could be taken.

Marking is of course difficult, but I'm sure you can see from the scope of the log book that no student should have an empty one. Students who struggle will be able to detail what problems they faced, while students who succeed will be able to note down potential improvements or alternate solutions they avoided (and why).

Log books are definitely a valuable professional tool, and I'd suggest approaching it from that angle - rather than trying to gauge a student's learning from their book (and grading them on that).

Bilkokuya
  • 234
  • 2
  • 8
6

One option would be to do the following:

The grade for each lab consists of two components:

  1. X% for the actual lab report (the report that would have been submitted even without introducing the "new system").
  2. (100-X)% for the "Q&A lab notebook" part of the report.

Now, if the Q&A lab notebook is not present (because students completed the lab without questions/mistakes), they automatically get the full (100-X)% portion of the grade. If the Q&A lab notebook is present, it is graded accordingly.

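As a rough numerical sketch of that mechanic (Python; the weights follow the 75/25 split suggested later in this answer, while the function name and 0-100 scale are illustrative assumptions):

    def lab_grade(report_score, qa_score=None, x=0.75):
        # report_score: mark on the actual lab report, 0-100
        # qa_score:     mark on the Q&A lab notebook, 0-100, or None if the group
        #               had no questions/mistakes (granted in full by default)
        # x:            weight of the report portion, e.g. 0.75 for a 75/25 split
        if qa_score is None:
            qa_score = 100
        return x * report_score + (1 - x) * qa_score

    lab_grade(90)               # 92.5 - no notebook needed, Q&A portion granted in full
    lab_grade(90, qa_score=80)  # 87.5 - notebook present and graded
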
With that system (in my opinion):

  • Students are not discouraged from asking questions, because they can get a full mark even if "everything falls apart", provided there is a proper explanation.
  • Not asking a question when the problem is not resolved within the group is not an option – as the repercussions will then fall on the X% portion of the grade.
  • Students are encouraged to check obvious things themselves (like turning on the equipment), because otherwise they will be required to write an additional paragraph in the report.
  • Those who silently finish the lab get an opportunity for a full grade anyway (provided they did everything correctly).

Selecting X is definitely an art and highly depends on the structure of the lab and the existing reports. But 75% (actual report)-25% (Q&A) seems a reasonable initial guess to me.

Anton Menshov
  • 6,030
  • 4
  • 32
  • 54
  • 3
    It's an interesting idea, but what's stopping groups from just not submitting anything? By your grading scheme a group that doesn't submit anything is assumed to have done the lab without any difficulty. – Michael Stachowsky Apr 10 '18 at 13:36
  • @MichaelStachowsky does this imply that by default a group does not submit any report on the lab? – Anton Menshov Apr 10 '18 at 13:37
  • @MichaelStachowsky I would say, that lab assistants (who are supposedly the graders) should just mark somewhere that a certain question has been asked. – Anton Menshov Apr 10 '18 at 13:41
  • 1
    Basically, all groups will submit a report, but some groups may not submit the question-report (the 100-X part). Some of those groups genuinely had no questions, some did but want the automatic 100% on that part. I agree, though, that if our lab assistants marked it down, then we can just compare. That's definitely possible – Michael Stachowsky Apr 10 '18 at 13:46
  • 3
    Then you need to ensure the lab assistants do their job of marking the asked questions. You can mention to them (if you are not the one and not present during each lab) - that this, technically, reduces the amount of questions/work they are required to do DURING the lab. However, making sure the lab assistants are following the instructions is a totally different question. – Anton Menshov Apr 10 '18 at 13:49
5

Why not have two different 'rubrics' (aka marking schemes in the UK)? Mark students against both and keep only their highest mark.

One of the rubrics could, for example, focus on how students perform in the actual lab, or on how well they describe what they did and why they did it.

The other could focus on the Q&A part as described in this question.

This would not penalize those who don't ask questions, as their marks would come from the first rubric, whilst still rewarding those who do as you describe, as their marks would come from the second rubric.
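
As a one-line sketch of that rule (Python; the rubric names and numbers are purely illustrative):

    def final_mark(performance_rubric_score, qa_rubric_score):
        # Grade against both rubrics and keep only the higher of the two,
        # so a group with an empty Q&A notebook loses nothing.
        return max(performance_rubric_score, qa_rubric_score)

    final_mark(95, 0)    # 95 - no questions asked, marks come from the first rubric
    final_mark(70, 90)   # 90 - a weaker lab performance rescued by a strong Q&A write-up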

3

The groups that don't make mistakes will arrange their reward:

  1. Think of a great response

  2. Make a trivial mistake that would be fixed by that response

  3. Get a high mark

A mistakes quota is easily gamed.

hyperpallium
  • 131
  • 4
  • 2
    Agreed, that was one of my issues with this. I think the main answer, about assessing all learning and not just mistakes, will be the way to go – Michael Stachowsky Apr 11 '18 at 11:17
3

A fundamental principle I've absorbed into my philosophy of life is "one can learn more from being wrong than being right". If students are in a course because they want to learn (as opposed to wanting to get credit for having completed the course well), the reward for getting a lab done quickly and accurately should be the ability to take on something they're less certain about and may thus find more interesting.

Another principle that is all too often ignored (by myself as well as others, I'll admit) is that documentation about why various ideas that seem like they should be good actually aren't may be more valuable than documentation that seemingly-good ideas actually work. If an idea seems like it should be good, it's likely other people will try it whether one documents it or not. If it actually is good, those other people won't need one's documentation to benefit from it. If it turns out not to be good, documenting that fact may benefit those who would otherwise have wasted time trying it for themselves.

supercat
  • 698
  • 4
  • 6
1

Build the follow-up lab assignment in a way that the experience gained from mistakes gives the student an advantage (e.g. knowing your way around certain equipment better due to having had to dismantle it to clean up a mistake).

rackandboneman
  • 301
  • 1
  • 5
-1

Just have multiple labs (or whatever) and grade on results. You are making it too complicated otherwise. There is limited time to do coaching and assessment. If I mess something up once and learn from it, it actually shows up in future work. It's even better if I have to live with the bad grade versus getting a makeup. (We are not chopping off body parts...who cares if you take a bad grade once or twice...it's a kick in the butt.)

The ONE key thing you can do is have several labs, tests, etc. - not single chances for failure or performance. But if you do this... no makeups (other than perhaps the final).

guest
  • 1