Sometimes technology betrays us! It's happened to us all, usually at a point when we are actively trying to seem competent. If you ALWAYS have problems, however, there might be something on your end you could change. There are some common problems I've identified from my time supporting faculty and graduate students with technical issues; these issues not only affect students (and thus could usefully be passed on to them for their own tech needs) but also characterize a decent portion of my own issues when I stop to reflect. Some of the biggest are below:
You're going too fast. Technology can often allow us to go faster! But going too fast with a technology can not only lead you to make mistakes (guilty); sometimes it's simply too fast for the tool to keep up. For example, Canvas often needs a refresh to show an updated grade when you're grading quizzes; it won't update the view immediately after you change a score. Likewise, there's no use opening New Analytics in Canvas right away; it takes time to analyze student activity and grade data before it can present anything to you. It can help to think of technology not as an instant communicator, but as a collaborator who might take a little time to get back to you.
You're using the wrong browser/old hardware. Imagine the software you're trying to use is a gold-plated, top-of-the-line refrigerator. It's very nice, it functions perfectly, and it has a lot of features that you're looking forward to using. Now imagine you give that refrigerator to a Pekingese and tell it to carry it up the stairs, install it, and get it functioning.
A Pekingese who would try really hard, though. Image by No-longer-here via Pixabay.
Not only is that Pekingese not going to be able to lift that refrigerator, it's also not going to have any idea what you're talking about. Using incompatible hardware or software is similar: generally, the setup either isn't powerful enough to carry out the task, doesn't have the capacity to understand what the task requires, or both.
You're using a nonstandard tool for the job. Imagine that refrigerator again. It has a door, right? It plugs in. You put food in it. All these qualities are very similar to a microwave, but if you try to use the refrigerator as a microwave, you're going to be disappointed. Similarly, it can help to look at what a tool was designed for and evaluate whether that lines up with your ambitions for it. That's not to say you can't use a tool sideways to its purpose; I would argue half of my discussions of Twine are sideways to its intent. (Incidentally, the medium of Interactive Fiction also has a lengthy history of using tech in ways contrary to its intention for creative purposes, so I feel I'm part of a rich tradition.) However, you'll experience less frustration if you do so intentionally, rather than realizing that you're trying to give feedback with software that's really only meant for file sharing, or to facilitate group collaboration with a tool that only allows one or two users at a time.
I'm in an interesting predicament at the moment where I'm trying to use AI tools to complete some data analysis. I am reading about good prompting strategies and trying to use them. I'm trying to build a foundation and then build on top of it. I'm trying multiple different strategies to see if I can come to some different results, even if only to gain some insight into how to better get the results I want in the future. Yet so far, I've spent more time checking, evaluating, and figuring out where the tool is getting its ideas than I've saved over the way I normally do the analysis.
I don't think I've been going too fast, although I admit sometimes I see the demos of how quickly AI can pop together information, and read reports of how it's dramatically cutting time spent on certain tasks, and I'm like "shouldn't it just be able to do this for me already??" My hardware and software don't seem to be the limiting issue: it's not that the tool is too slow, it's that its conclusions are incorrect, or partially correct, or invented from information that isn't quite there. And by all reports it seems to be the right tool for the job (see above demos), although perhaps a lesson I may end up learning is that some parts of the job are beyond it. I think the issue is actually a fourth problem: the tool is more complicated to use than I am treating it as, and I don't know enough yet about using it effectively. Straightforward and simple UX is powerful, but it can come at the expense of tailored, expert usage; think about what can be achieved by a pro using HTML to create a website versus a novice using Blogger or WordPress, or by a musician versus the prerecorded tracks on a child's toy keyboard. The problem I'm trying to solve, then, may simply be one that is more "improvisational jazz" than "Old MacDonald."
The strategy that's been most helpful to me so far is one that can be difficult for a lot of us to use: ask someone else to make an attempt, and learn from what they do. When in doubt, have someone else try! Either they succeed, which you can learn from, or they don't, in which case you can at least feel a bit better that you are not alone.
Any of these tech issues plaguing you, or others? Let me know in the comments!