It's time once again for the year in review! This'll be a short one, as this year was a fallow one for lessons spotted that were recorded in the blog (though certainly never short on lessons for me).
Sometimes technology betrays us! It’s happened to us all, usually at a point when we are actively trying to seem competent. If you ALWAYS have problems, however, there might be something on your end you could change. There are some common problems I’ve identified from my time supporting faculty and graduate students with technical issues; these issues not only affect students (and thus could be usefully passed on to them for their own tech needs) but also characterize a decent portion of my own issues when I stop to reflect. Some of the biggest below:
You’re going too fast. Technology can often allow us to go faster! But going too fast with a technology not only leads you to make mistakes (guilty), it can also simply outpace the tool. For example, Canvas often needs a refresh to show an updated grade when grading quizzes– it simply won’t update its view the instant you change a quiz grade. And it’s of no use to start reviewing New Analytics in Canvas immediately; it takes time for it to analyze student activity and grade data and present it to you. It can help if you think of technology not as an instant communicator, but as a collaborator who might take a little time to get back to you.
You’re using the wrong browser/old hardware. Imagine the software you’re trying to use is a gold-plated, top-of-the-line refrigerator. It’s very nice, it functions perfectly, and it has a lot of features you’re looking forward to using. Now imagine you give that refrigerator to a Pekingese and tell it to carry it up the stairs, install it, and get it functioning.
[Image: A Pekingese who would try really hard, though. Image by No-longer-here via Pixabay.]
Not only is that Pekingese not going to be able to lift that refrigerator, it’s also not going to have any idea what you’re talking about. Using incompatible hardware or software to run your tool is similar– generally, it’s either not strong enough to carry out the task, it lacks the capacity to understand what it would need to understand to carry out the task, or both.
You’re using a nonstandard tool for the job. Imagine that refrigerator again. It has a door, right? It plugs in. You put food in it. All these qualities are very similar to a microwave, but if you try to use the refrigerator as a microwave, you’re going to be disappointed. Similarly, it can be helpful to look at what a tool was designed for and evaluate whether that is in line with your ambitions for the tool. That’s not to say you can’t use a tool at sideways purposes– I would argue half of my discussions of Twine are sideways to its intent. (Incidentally, the medium of Interactive Fiction also has a lengthy history of using tech in ways contrary to its intention for creative purposes, so I feel I’m part of a rich tradition.) However, you’ll experience less frustration if you do so intentionally, rather than realizing that you’re trying to give feedback with software that’s really only meant for file sharing, or facilitate group collaboration with a tool that only allows one or two users at a time.
I’m in an interesting predicament at the moment where I’m trying to use AI tools to complete some data analysis. I am reading about good prompting strategies and trying to use them. I’m trying to build a foundation and then build on top of it. I’m trying multiple different strategies to see if I can come to some different results, even if only to gain some insight into how to better get the results I want in future. Yet, at the moment, I’ve spent more time checking and evaluating and figuring out where it’s getting its ideas than I’ve saved over the way I normally do the analysis.
I don’t think I’ve been going too fast, although I admit sometimes I see the demos of how quickly AI can pop together information, read reports of how it’s dramatically cutting time spent on certain tasks, and think “shouldn’t it just be able to do this for me already??” My hardware and software don’t seem to be the limiting issue– it’s not that the tool is too slow, it’s that the conclusions are incorrect or only partially correct, or that it’s inventing information that isn’t quite there. And the tool by all reports seems to be the right tool for the job (see above demos), although perhaps a lesson I may end up learning is that some parts of the job are beyond it. I think the issue is actually a number 4: the tool is more complicated to use than I am treating it as, and I don’t know enough yet about using it effectively. Straightforward and simple UX is powerful, but it can come at the expense of tailored, expert usage– think about what can be achieved by a pro using HTML to create a website versus a novice using Blogger or WordPress, or by a musician versus the prerecorded tracks on a child’s toy keyboard. The problem I’m trying to solve, then, may simply be one that is more “improvisational jazz” than “Old MacDonald.”
The strategy that's been most helpful to me so far is one that can be difficult for a lot of us to use: ask someone else to make an attempt at it, and learn from what they do. When in doubt, have someone else try! Either they have success, which you can learn from, or they don't, in which case you can at least feel a bit better that you are not alone.
Any of these tech issues plaguing you, or others? Let me know in the comments!
[Image: Photo by Işıl via Pexels.]
In my last feature in this series, I talked about using GPTs to help with the process and clarity of your writing. This entry is a bit of a cheat in this series, since it doesn't actually involve prompting a GPT at all.
Good UX always inspires me to think about how to use some of these strategies more broadly. I remember being mildly affronted during the CITL reading group I attended when I first became interested in better teaching-- did I spend that whole several weeks affronted? Apparently! Clearly it was challenging me in some useful ways-- because one of the resources we read discussed using advertising principles to make learning "stick." Advertising? That soulless capitalist enterprise? Could help me teach the intellectually rigorous discipline of history? Pish-posh!
Of course, the only thing we ensure by eschewing these strategies is that what students are learning in our class right now doesn't seem quite as memorable as literally any reasonably well-crafted local commercial they saw roughly a decade ago-- which is what I eventually realized as I considered which concepts were "sticky" in my brain and why.
With that in mind, you might reflect on the ways new tools introduce themselves to you, and see if that sparks an idea about how you might in turn introduce a concept that is very familiar to you in a way that seems approachable to a new learner. Explaining something you know very well to someone else is inherently challenging-- it can be more challenging than explaining something you only know kind of well, as you have to identify the most significant information out of all the information you know and explain it clearly. Here, I'll use the original starting page for ChatGPT as inspiration for explaining something like instructions or expectations for an assignment.
Let's take a look at how ChatGPT introduced itself when I encountered it (this has since changed as public familiarity with the tool has increased):
[Image: Original ChatGPT intro screen. Image via Datamation.]
This screen introduces the tool by offering a breakdown of examples of things you could ask it ("Explain quantum computing in simple terms"), capabilities the tool has ("Remembers what user said earlier in the conversation"), and limitations the tool has ("May occasionally generate incorrect information.")
What if you tried an Examples, Capabilities, Limitations breakdown for assignment instructions? I've seen many, many page-long or multipage prompts for a paper that's only 3-5 pages long. Rather than paragraphs of context and/or admonitions based on past experiences ("12 point font and 1.5 inch margins this time-- I'm talking to you, Bradley"), what might it look like to organize a prompt around the following structure?

Examples: sample topics, questions, or approaches that would fulfill the assignment.
Capabilities: what students are able or allowed to do (sources they may draw on, support available, choices they can make).
Limitations: the boundaries of the assignment (length, format, deadlines, what's out of scope).
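To make that concrete, here's a hypothetical sketch for a short history paper (every specific below is invented for illustration, not taken from a real assignment):

Examples: "Compare how two newspapers covered the same strike"; "Trace one family's movement using census records."
Capabilities: Can draw on any source from our course reading list. Can be workshopped with me during office hours before the due date.
Limitations: Must be 3-5 pages, 12 point font, standard margins. Cannot cite general reference sites as sources.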
Obviously a prompt doesn't have to stay in this format-- I'm not suggesting that the original ChatGPT welcome page cracked some kind of fundamental educational code. But instructions that you find clear or helpful in introducing you to a new topic may be useful in turn to incorporate into your own teaching strategies.
It's time once again for the year in review! This has been a strange year for the blog (and for me!).
I took a bit of a hiatus early in 2023 to focus on other projects (like Winnie!) and to recover some steam while doing a lot of work-related writing. During that period, I featured a post that has always been helpful for me to return to when trying to write or teach writing.
Like about a thousand other blogs this year, this one found AI a pretty common theme. In the fall, I returned with an ongoing miniseries on incorporating AI into teaching, with my usual focus on flexibility and ease of use.
More to come in this series in the new year! I hope 2024 allows us all to spot all the lessons we can handle.
[Image: Photo by Işıl via Pexels.]
Despite widespread enthusiasm about having AI help generate content, that's not in my experience the best way to use it. One, it's doubtful it knows as much about what you want to convey on your topic as you do. Two, conversely, you're probably better at the content than you are at communicating it clearly. In this post, I'll offer a few suggestions for how to use AI to help with writing and clarity, which could be useful for both instructors trying to create educational/assignment content and students attempting to frame their ideas for coursework.
I tend to think as I talk-- the talking is the thinking, and talking through a topic helps me write it down. When I'm preparing to lead a workshop or give a presentation, I often begin by recording myself trying to give it off the cuff, based on the ideas I have right then, before I start outlining or writing things down. Then I use the talk-through as the basis for the outline or script or notes. Sometimes I work in the opposite direction: I make a brief, sketchy outline, talk through it, and then edit it based on what I said. In the past, I've always done this using Zoom and then watching back the recording.
More recently, I've started using a combination of two technologies to begin creating written work. First, I speak the ideas into a speech-to-text tool like dictation.io. Then, I take that text and ask ChatGPT to make it grammatically correct and separate it into sensible paragraphs. This is so much easier for me than fully typing out all the same ideas-- I can talk just about as fast as I think, while I am a decidedly slower typist despite years of practice at Typing Tutor. This strategy works best if you speak in small chunks and confirm that dictation.io is absorbing it all; I've noticed it does not capture everything I say if I speak for a long time. It's also important to communicate clearly with ChatGPT or your AI chatbot of choice-- it will attempt to smooth the language and potentially add (too many) adjectives by default, so if you want language adjustments, give clear parameters; if you just want the text to be given appropriate punctuation and capitalization, say so.
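If you'd rather script that cleanup step than paste into a chat window each time, here's a minimal sketch using OpenAI's Python library. The model name and the instruction wording are placeholders I've invented; any chat model you have access to could fill the same role.

```python
# A minimal sketch of scripting the cleanup step, assuming the `openai`
# Python package (v1+) is installed and OPENAI_API_KEY is set in your
# environment. The model name and instructions are placeholders.
from openai import OpenAI

client = OpenAI()

# Text captured from a speech-to-text tool like dictation.io
dictated = (
    "so the first thing i want to say is that for me the talking is the "
    "thinking and i usually start by recording myself giving the talk off "
    "the cuff before i write anything down"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever chat model you have
    messages=[
        {
            "role": "system",
            # Give clear parameters, as discussed above: punctuation,
            # capitalization, and paragraph breaks only -- no rewording.
            "content": (
                "Correct the punctuation and capitalization of the user's "
                "dictated text and break it into sensible paragraphs. "
                "Do not change the wording or add adjectives."
            ),
        },
        {"role": "user", "content": dictated},
    ],
)

print(response.choices[0].message.content)
```

The key design choice is the same one that matters in the chat window: the instructions say exactly what to change and what to leave alone.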
Most recently, I'm intrigued by some of the AI offerings within Zoom to transcribe and summarize meetings; in theory, this could mean a recorded first draft comes with a relatively coherent text component to work from. If you have a paid Zoom account, it's worth playing with these features and seeing if they do anything for your process.
[Image: Photo by Işıl via Pexels.]
Think-pair-share-- it's the easiest, most go-to way to get a conversation going in a room of people. It works well in rooms that don't have flexible seating for gathering into larger groups, it's a low amount of investment in setup and explanation, and it gives literally everyone in the room a chance to talk (unless one half of the pair is either real shameless or charmingly enthusiastic). And if done well, it's an "activity" that makes such intuitive sense it doesn't have to feel like an "activity."
If you haven't done think-pair-share before as a leader or participant, this description of the activity and FAQ about it is a nice introduction.
Normally, think-pair-share is conducted by humans (for NOW), but an AI chatbot can be one half of your pair or a supplement to a human pair. This was one of the first uses of ChatGPT I wondered about when it became a popular subject of conversation, and I wasn't alone! Several pieces offer ideas about incorporating AI into your Think-Pair-Share time (for example, it's one of Ditch that Textbook's 20 Ways to Use [Chatbots and Artificial Intelligence] as a Tool for Teaching and Learning). Most treatments of this idea I've seen are riffing off of this widely shared tweet by Sarah Dillard.
[An aside: one of the really funny things to me about the surge in conversations about AI in teaching is how often I now see people talking, writing about, and sharing tweets as though they're academic articles in circles where that might never have been the case before. When something is very new, many standard expectations of what kinds of things are acceptable or useful to cite shift; I imagine this is less wondrous to folks doing more present-focused work, who have frequently encountered the wonderful world of citing social media before, than to those of us who spent seven years citing while fighting microfilm-induced nausea.]
For some real zaniness, and for more insight into the tools themselves, you could have students fire up two chatbots and give them instructions, then feed their responses to one another. This can take careful prompting that will depend on the topic of discussion; it also may shed some light on the boundaries of the chatbots-- it's likely that their conversation will become a bit circular, as they tend to declare everything up front rather than have an evolving conversation (similar to some of the worst human small group activities I've been part of, really).
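If you'd rather automate the back-and-forth than juggle two browser tabs, here's a rough sketch of that loop using OpenAI's Python library. The debate topic, personas, model name, and turn count are all placeholders invented for illustration.

```python
# A rough sketch of two chatbots responding to one another, assuming the
# `openai` package (v1+) and an OPENAI_API_KEY environment variable.
# The personas, model name, and turn count are invented for illustration.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder; any chat model works

topic = "Was the printing press or the telegraph more transformative?"
personas = [
    "You argue the printing press was more transformative. Respond to your "
    "partner's latest point in two or three sentences.",
    "You argue the telegraph was more transformative. Respond to your "
    "partner's latest point in two or three sentences.",
]

# Each bot keeps its own conversation history.
histories = [[{"role": "system", "content": p}] for p in personas]

message = topic  # the opening prompt goes to the first bot
for turn in range(4):
    bot = turn % 2
    histories[bot].append({"role": "user", "content": message})
    reply = client.chat.completions.create(model=MODEL, messages=histories[bot])
    message = reply.choices[0].message.content
    histories[bot].append({"role": "assistant", "content": message})
    print(f"Bot {bot + 1}: {message}\n")
```

Even with "respond to your partner's latest point" baked into the instructions, you'll likely see the circularity described above creep in after a few turns.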
For more complex use of a Think-Pair-Share type framework, I love this set of options from Acadly, which suggests alternatives to insert into the process, like think-write-pair-share, which encourages fuller consideration of the issue before the pair stage, and think-vote-pair-vote-share, which would work well for a question on which minds are likely to change after some collaboration or conversation on the topic. Generative AI could easily be incorporated into these steps to provide some real value; for example, comparing the written thoughts in think-write-pair-share with how ChatGPT might respond to the prompt, or asking the chatbot for reasons why someone might disagree with one's original vote in the think-vote-pair-vote-share framework.
[Image: Photo by Işıl via Pexels.]
[Image: A row of white doors. They'll all get you where you're going!]
This year brought a lot of excitement and growth! A key theme of this year's work on the blog was technological curiosity and a willingness to experiment. I focused on fewer posts and more continuity between them, particularly the introduction of and support for the Source Analysis Template. I celebrated my first anniversary in my current position. And, I finally wrote about rubrics after teasing it for roughly three years. Below, find the lessons spotted this year:
Looking forward to more in the new year! As always, if you'd like to reach out for more support with anything mentioned on the site, don't hesitate to schedule time on my Calendly.
So, you’ve created a source analysis activity using this template– congratulations! Now you’re ready to make it available to others so they can, you know, analyze the source.
Regardless of which method you choose, the first step is the same: From the story view, you’ll click the title of your activity at the bottom left, then click “Publish to File” in the menu that appears. This will bring up a window where you can confirm what you’d like to name the file and where it should be saved; an HTML file will then be created.
[Image: The menu that appears when clicking the title of your Twine story.]
The nifty thing about an HTML file is that, if you open it in a browser like Google Chrome or Safari, it will open as your game, playable and looking just as you designed it. If you open it in Notepad or something similar, however, you'll see code– code that you can use!
This possibility gives us a lot of options for how we might wish to share our activity. I’ve talked about mechanisms for doing this a bit previously, but it’s worth revisiting here to address one of the spaces almost all teachers already have access to: an LMS space. Many teachers receive these spaces from their institutions automatically, and if you don’t already have one, you can usually create one for free.
Here is a quick walkthrough on how to incorporate a Twine activity into the Canvas LMS, with concept and code courtesy of Laura Gibbs:
1. Create your HTML file.
2. Pop that file into the Files space. You could make that file visible to your class, but you'll probably rather hide it from students.
3. In a new tab, open the Edit view of the Assignment or Page you’d like to add an activity to.
4. Click the </> button at the bottom right-hand side of the Rich Content Editor to open the HTML view.
5. Copy and paste this code into the text entry field:
<iframe src="https://___/courses/___/files/___/download" width="100%" height="600"></iframe>
6. Go back to the tab in which you have Files open. Right click on the HTML file you just uploaded and select “Copy Link Address” from the dropdown menu that appears.
7. Go back to the tab in which you have the Edit view of your assignment open. Select the placeholder URL between the quotation marks in the code you just pasted, and paste the link you’ve just copied over it (see the example after these steps).
8. Save and publish.
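For reference, here's roughly what the finished code might look like once the link is pasted in-- the domain and ID numbers below are invented for illustration, and yours will come from the link you copied in step 6:

```html
<!-- Hypothetical example: the Canvas domain, course ID, and file ID below
     are invented; yours come from the "Copy Link Address" step. -->
<iframe src="https://canvas.example.edu/courses/12345/files/67890/download" width="100%" height="600"></iframe>
```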
To see this in action, enroll in this demo Canvas space and experiment with the items available in the Source Analysis Activity module (feel free to use a pseudonym). Into this space, I’ve added some examples of what the Source Analysis Activity can look like when incorporated into an LMS as a page or an assignment. In the process, I’ve taken the opportunity to update things that had broken (for example, the link to the Arbella Speech that I used in the first iteration of the Source Analysis Activity has since become defunct). More significantly, I edited some of the language to apply more clearly to the Canvas environment– students no longer need so many or such vague instructions about how to turn in their answers if the activity is embedded within an assignment, for example; the activity also no longer needs to collect their name to associate with their answers, though it does still need to instruct students on how to collect their answers in a format that can be delivered to the instructor through Canvas.
One of the beautiful things about distributing your activity via an iframe in an LMS is that it works well on mobile– even the process of copying and pasting my answers into the text box was relatively straightforward when testing this on my iPhone.
I hope this inspires you to try this out in your own courses, even if only on an unpublished demo page. If you need a Canvas space to experiment in, you can create a Free-for-Teacher account.
If you have questions or get stuck at any of these steps, feel free to reach out in comments, or schedule a quick chat with me via my Calendly.