Course operation

Making a PG course

Usually I write custom lessons and exercises for each course. You don't have to do that, if you can find a focused, well-written textbook. I haven't been able to find exactly what I want for my courses. I'm picky.

Ideally, course design could use a "waterfall" approach, where you get each step right the first time. Get the task goals right, lock them in place, get the concepts right, lock them in place, etc.

That would be nice, but it never works out in practice. Nobody gets everything right the first time with something as complex as course design and production. Imperfection and rework do not imply a lack of due diligence. The software industry took decades to realize that.

More realistically, you'll need to make a base course that you can iterate on and improve. That's what agile methods like AGILE and LLAMA are about.

This section explains what I do to make a PG course. Mostly it refers to what we've already talked about, but there are a few more issues I'd like to raise. There are links to design documents for an Excel VBA course, so you can see an implementation.

Context

Start by working out who the students are, what they can do at the start of the course, what they care about, and other course constraints. You can see an example for the Excel VBA course.

Goals

Next, work out course goals, starting with target tasks. You can see three tasks I worked out for the Excel VBA course.

Define a transfer range for the tasks, as we talked about earlier. Then list the schemas, procedures, and facts needed to do these tasks. You can see an example of that.

Sequence of lessons

Work backwards from the goals, mapping out prerequisites. For example, one of the patterns needed for the sample tasks is numeric validation. For that pattern, you need to know about If statements. Before Ifs, you need to know about variables and expressions. So the course needs to cover variables, then expressions, and then Ifs, before you can get to validation. If it helps, you can make a spreadsheet, showing what students need to learn first for each pattern.
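If the prerequisite map grows beyond a handful of patterns, it helps to keep it machine-readable, so a script can produce a valid teaching order and catch circular dependencies. Here's a minimal Python sketch, not part of Skilling or any course tool; the topics are the ones from the example above:

```python
# Minimal sketch of prerequisite mapping, using only the standard library.
# Each topic maps to the set of topics that must be taught before it.
from graphlib import TopologicalSorter

prerequisites = {
    "expressions": {"variables"},
    "if statements": {"variables", "expressions"},
    "numeric validation": {"if statements"},
}

# static_order() yields topics so every prerequisite comes before the
# topics that depend on it; it raises CycleError on circular prerequisites.
lesson_order = list(TopologicalSorter(prerequisites).static_order())
print(lesson_order)
# One valid order:
# ['variables', 'expressions', 'if statements', 'numeric validation']
```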

Once you have the sequence, you can break it up into lessons and modules. This gives you the lesson tree for the course, the first version of it, anyway.

Table of contents

You can see the lesson sequence for the Excel VBA course.

Write the lessons and exercises

Now you can write the lessons, using didactic explanations, worked examples, reflection questions, and simulations. You can see the final result for the Excel VBA course.

Let's talk about a few issues.

Characters

If you're using characters, write brief personas for each one before you start writing lessons. That helps with consistency. For example, before I started writing the lessons for the Excel VBA course, I decided that Georgina would be a bright, excitable geek.

[Image: Georgina]

Reduce cognitive load

Learning is hard enough for students, without having to struggle with your writing. Let students keep their brainpower for learning. Use simple writing. Use short sentences. Eschew obfuscation.

Suppose you want to refer to the same image three times in a lesson. You could include it once, label it Figure 1, and write "see Figure 1" a few times.

This makes sense for paper books, but less so on the web. Pixels are cheap. Show the image each time, so people don't have to scroll.

Onboarding

Students will have to learn how to use your course website. The first few lessons might be about that. Students won't remember everything the first time through, however, so you may have to remind them how to use the website as you go.

It helps to make the first few exercises about using the website. For example, the first exercise in my courses is for each student to complete the About me field in their user profiles, so I can get to know them better.

Running a PG course

Precourse setup

You should have a calendar giving due dates for each exercise. It helps students schedule their time.

In my course websites, each student has a personal timeline, showing suggested due dates for each exercise, and the submission status of each one.

[Image: Timeline]

This is from a test class; real classes have dozens of exercises. The shield marks a challenge exercise, an advanced task that lets students earn badges.

The timeline has a lot of data, and it isn't clear how students should feel about their progress. To help with that, there's an emoji in the toolbar on every page.

[Image: Progress emoji]

The face varies from insanely happy to terrified, depending on a student's progress. The happy face here shows that the student is slightly ahead.
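Skilling's actual rules aren't shown here, but the idea is easy to sketch: compare exercises due so far with exercises submitted, and map the balance to a face. A hypothetical Python sketch; the thresholds and data are made up:

```python
# Hypothetical progress indicator, not Skilling's actual algorithm.
from datetime import date

def progress_emoji(due_dates, submitted, today):
    """due_dates: exercise name -> suggested due date.
    submitted: set of exercise names the student has submitted."""
    due_so_far = {name for name, due in due_dates.items() if due <= today}
    balance = len(submitted) - len(due_so_far)  # positive means ahead
    if balance >= 2:
        return "insanely happy"
    if balance >= 0:
        return "slightly happy"
    if balance == -1:
        return "worried"
    return "terrified"

due = {"about_me": date(2025, 1, 10), "first_macro": date(2025, 1, 17)}
print(progress_emoji(due, {"about_me"}, today=date(2025, 1, 12)))
# slightly happy: one exercise due so far, one submitted
```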

I've overheard students saying things like, "I need to catch up this weekend, so I can get a smiley face." I don't have any hard data on the effectiveness of the emojis, though.

Class time

I lecture once per semester, on the first day of class. My goals are to (1) convince students the course is worth their time, (2) get them started on the course website, and (3) take everyone's photo, so I can get to know their names.

For the rest of the semester, class is for relationship building, troubleshooting, cheering, and sometimes exams. Maybe group work, too, depending on the course.

Grading

Students do lots of exercises and get lots of feedback, so you need an efficient work system for grading.

Earlier, we talked about exercises and rubrics. Skilling's grading interface translates rubric items and their responses into clickables. If that's a word. Here's part of the grading interface for an exercise.

[Image: Giving feedback]

The grader clicks on one of the canned responses for each rubric item, or the + button to add a new canned response. Clicking the Create message button generates a feedback message based on the selected items. For example:

[Image: Feedback message]

Graders can edit feedback messages before sending them. Greetings (G'day) and signatures (Kieran the Bold) are chosen randomly from lists created by graders. This lets each grader make their own grading persona. Casual, formal… whatever they choose.
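Here's a rough Python sketch of that message-assembly step. It's not Skilling's code; the canned responses, greetings, and signature are invented for illustration:

```python
# Sketch of turning rubric selections into a feedback message.
# Not Skilling's code; all the strings are made up.
import random

GREETINGS = ["G'day", "Hi", "Hello"]
SIGNATURES = ["Kieran the Bold", "Kieran"]

def feedback_message(selections):
    """selections: list of (rubric item, canned response) pairs,
    as chosen by the grader in the grading interface."""
    lines = [f"{random.choice(GREETINGS)},", ""]
    lines += [f"{item}: {response}" for item, response in selections]
    lines += ["", random.choice(SIGNATURES)]
    return "\n".join(lines)

print(feedback_message([
    ("Variable names", "Good, meaningful names."),
    ("Validation", "Check for non-numeric input as well."),
]))
```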

The grading system lets graders assess each submission in about one to three minutes. The system is key to meeting the feedback goals of pretty good courses.

You may be thinking, "Why not AI-powered automatic grading?" I'll tell you why… in a moment.

Improving a PG course

Two things I want to talk about here: gathering data to guide improvement, and making courses easy to change.

Data to guide improvement

Skilling, and every LMS, gathers data on student behavior and performance. However, the best data I get is from working with students in class. When a student asks a question about something they don't understand, that's data on course effectiveness. Not that students' errors are all the course's "fault." Still, each student's misunderstanding is a clue to be investigated.

Skilling's grading system gathers data on student performance, down to the rubric item level. That can be more than 10,000 (!) data points in a typical course of 50 students. Eventually, a set of data analysis programs will come with Skilling, to generate standard reports from exports of student history data. For example, a report might identify the rubric items with the most failures across all exercises.
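Those reports don't exist yet, but here's a sketch of what one could look like: count failures per rubric item from an export. The CSV layout is an assumption for illustration, not Skilling's actual export format:

```python
# Sketch of a "most-failed rubric items" report.
# Assumes a CSV export with columns: exercise, rubric_item, passed (0 or 1);
# the real export format may differ.
import csv
from collections import Counter

def failure_counts(path):
    failures = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["passed"] == "0":
                failures[row["rubric_item"]] += 1
    return failures

# Print the five rubric items students fail most often.
for item, count in failure_counts("grading_export.csv").most_common(5):
    print(f"{count:5}  {item}")
```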

The best exercises are challenging enough to push students to think, but not so difficult as to be demotivating. The "sweet spot" is called the zone of proximal development. How to estimate whether students are in the zone?

When students submit exercise solutions, they say how difficult the exercise was. Giving reasons is optional.

[Image: Difficulty]

Preliminary data suggests that resubmission rates are higher for exercises rated as more difficult, which makes sense. It's a tentative finding, though.
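Checking that on exported data is straightforward. A sketch, with made-up field names and records:

```python
# Sketch: average resubmissions per difficulty rating.
# The record layout is invented for illustration.
from collections import defaultdict
from statistics import mean

def resubmissions_by_difficulty(records):
    groups = defaultdict(list)
    for r in records:
        groups[r["difficulty"]].append(r["resubmissions"])
    return {d: mean(vals) for d, vals in sorted(groups.items())}

records = [
    {"exercise": "first_macro", "difficulty": 2, "resubmissions": 0},
    {"exercise": "validation", "difficulty": 4, "resubmissions": 2},
    {"exercise": "validation", "difficulty": 5, "resubmissions": 3},
]
print(resubmissions_by_difficulty(records))  # {2: 0, 4: 2, 5: 3}
```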

Making courses easy to change

Many flipped courses use video lectures. That makes sense if you're used to lecture courses: record the lectures you give face-to-face, and you have what you need.

My courses are mostly text and images (and simulations, MCQs, FiBs, etc.). Video is reserved for two purposes. First, emotional appeals. Each module of my courses has a short video (under five minutes) where I talk about why the content matters. The videos put a face on the course, especially for students I never meet.

[Image: Video]

I also use videos to show dynamic processes. For example, for an exercise where students write a D&D-like character generator, there's a video showing how it should work.

[Image: Character generator]

Other than that, I stick to text and images, for several reasons.

  • Content consumption is easier, since students control pacing. Students read as quickly or slowly as they want. It's easy to go back and reread a paragraph, or a piece of code.
  • Accessibility is easier.
  • Seamlessly replacing content is easier.

The last is particularly important if you use agile processes. Say I want to improve a worked example, perhaps by inserting an annotation. If the content were video, I'd have to record the new content with a microphone and screen capture, use an editor to insert the new content into the video, and replace the existing video on the course website. Further, unless I get the lighting and sound right, the new video will have jarring transitions.

Life is easier when the content is text and images. Edit a page on the website, type in the new content, paste in images, and save. Done! No continuity problems, either. It won't be obvious that new content was patched in.

"But video helps learning!" Not necessarily. Video can help with some aspects of learning, and make other aspects more difficult. Check out one of Donald Clark's blog posts for an into.

"But people like video!" Perhaps. It may be that most people are more willing to pay for a video course, than a text/image course. That's a sales and marketing issue. PGM is about learning.

Why not AI-powered automatic grading?

Skilling's grading system relies on human judgement. It makes good use of graders' time, automating everything it can, but at the center is a human grader. Why not use an AI-powered autograder?

First, they're difficult to get right. You need someone to write grading rules, or to train the software on thousands of examples for each exercise. When the grading is less than perfect, students will complain to instructors, looking for adjustments. Not a good way to keep a positive emotional tone.

Second, because they're hard to make with high quality, autograders limit the flexibility that agile methods demand. With an autograder, you might be reluctant to change an exercise, or add new ones. Content updates, like moving a course to a new version of Excel, can be fraught with difficulty.

Third, autograders can't recognize alternate solutions, or handle nuance. I'm happy when a student comes up with a new way to solve a problem. I compliment them on it. An autograder would reject their work. A general problem with AI software is that it doesn't know when it doesn't know, that is, when a task is outside its domain of competence.

With human graders, you can change rubrics, add and change exercises, whatever you want. Tell them what you've done, and you're good to go.