Tongue-in-Cheek Opening: In this post I will offer observations on various LMS evaluations of which I am aware. That awareness comes from personal contacts and from published LMS evaluation reports (“the literature”). Unfortunately, I don’t have much good to say. I would like to catch someone doing something right, and if I do, I will speak up. But so far I haven’t been able to catch any institution in the act.
Since I can’t promise to be exceptionally kind, although I do place a high value on kindness, I will refer to neither my friends nor the universities themselves by name. If you choose to comment and think you know of whom I speak, please also refrain from mentioning names. Shall we begin?
(I have a browser open with the online versions of LMS evaluation reports from 4 institutions whose reports have been released in the last 2 years. In addition to those 4 publicly available reports, I have contacts involved with 3 other evaluations.)
Flaw #1: Lack of Strategic Alignment
I am not aware of a single institution whose process has included asking, “Why do we need an LMS?” or “How do we resource the management of an LMS so that the strategic goals of the university are met?” (Save money, save time, go paperless, build an online program, give faculty a bigger toolset, and so on. Even “meet student expectations based on their high school experiences with teaching and learning technology” would be something worth stating explicitly!) Flaw: Assumptions are not subject to reality checks.
Flaw #2: Focus on Functionality That Is All but Equivalent These Days
An RFP process generally focuses on comparing functionality between systems. But the marketplace has been mature enough, for some time now, that the systems all pretty much DO the same things; it’s how they do them that now matters: the workflow required of faculty, the ability of each tool to integrate with the gradebook, the flexibility of the gradebook itself. Flaw: The RFP process is too shallow.
Flaw #3: Subjectivity and Exclusion
The voices and recommendations of those most directly tasked with supporting the institution’s LMS are usually not included in the evaluation process. When the evaluation is led by academics, it tends to be very theoretical in its approach. (I do like identifying “guiding principles,” but don’t stop there!) Surveys and focus groups of faculty and student groups are often used, but their goal is to discover subjective impressions in the aggregate, and whether to weight student or faculty impressions more heavily is often not determined in advance.
On the other hand, evaluations conducted by, or heavily weighted toward, personnel in the central IT division do not normally include evaluating use cases, or what might be called “workflows.” Oddly, neither the instructors who actually do the work nor the IT shop that supports those workflows seem to care much about rigorously asking whether the “how” of what you have to do makes sense to those doing it. Although all the systems support the same tools (quizzes, file content, assignment uploads, discussions, and grading it all), the way you do each task, and the way you have to think about what you do, varies widely.
Flaw #4: “A Review of the Literature”
While reviewing other institutions’ LMS evaluation reports is a good start, there is nothing like digging in and doing your own due diligence. If nothing else, it teaches you a great deal about your own institution and what distinguishes it from others.
Some questions to ask yourself (Google Forms Survey).