Category Archives: Academic IT

What I Didn’t Know About Using Community Source Software (Sakai) in Higher Ed

“How to set expectations for change management.” That’s it. That’s what I didn’t know.

Consider the kinds of changes you habitually take from a vendor of proprietary software that you run in your own higher ed data center.

The hypothetical change is:

  • New cool features advertised to you, or…
  • Bug fixes (you’d found them or you hadn’t; in other words, you cared deeply or not at all), or…
  • Enhancements.

Qualities of the change:

  • The change has already been deployed and tested many times over by the software vendor in environments, with data, very similar to yours.
  • You schedule when you want to accept/install the change and make it available to your users, usually based on your academic schedule.

What your users expect:

  • They won’t get it before the regular break in the academic schedule, even if they knew about it.
  • They didn’t know about it anyway, because the vendor markets not to them but to the people who manage the system for them.
  • The vendor is an uncaring large collection of cogs anyway, so there’s no point in asking for an enhancement.

What your users do:

  • Blog, tweet, and Facebook their complaints vociferously, but without expecting more than a good venting session.
  • Write you emails about how your vendor doesn’t care.
  • Log in after upgrades and harrumph that the thing that used to annoy them so greatly is finally fixed.

 

Contrast this with open/community source:

What your users expect:

  • They will be heard if they connect with the community.
  • You are connecting with the community on their behalf, and that connection will carry weight because the community is small.
  • The developers working on their behalf will automatically do a better job than the vendor, because they work for higher ed institutions.

What your users do:

  • Demand bug fixes and enhancements.
  • Expect them to be applied frequently.
  • Suggest enhancements and expect them to be executed in amazingly beautiful ways.

Working with the Ents

Enterprise architecture, the endeavor of building technical reference architecture for the business, or, in this case, for higher ed, is a deliberative, iterative, and s l o w process.

Here I am in Madison, Wisconsin joining phenomenally gifted and wise senior enterprise architects such as Rich Stevens (University of Maryland), Jim Phelps (U of Wisconsin and current chair of ITANA*), Leo Fernig (U of British Columbia) and Scott Fullerton (U of Wisconsin) in creating a Learning Reference architecture for presentation at Educause in the fall. Knock on wood.

Wood, you say? Or trees? Not only the things architects see past on their way to categorizing the whole forest, but also these deliberate conversations with their careful, measured tone …which I am learning from in enormous measure… think before you speak, Laura; hear the rationale of that statement on the inside of your brain before you say it on the outside… These deliberate conversations make me feel as foolish as the Hobbits among the Ents.

Even my fellow subject matter experts, Jeanne Blochwitz (Asst. Director of Academic Technology, Wisconsin) and Jeff Bohrer (Instructional Technology Consultant, Wisconsin) seem more tuned to this pace than I am.

Remember this Lord of the Rings council of war by the Keepers of the Forests, the Ents? (There are Hobbits in this photo perched in an Ent, but you can’t really see them).

[Photo: the Entmoot, with Hobbits perched in an Ent]

*ITANA, by the way, is a constituent group of Educause, an outreach arm for Enterprise, Business, and Technical Architects in Academia.

Look for the presentation of our work at Educause this fall. Knock on wood (but not in an Entish forest), we’ll be done!

Data, Research, Education and … Hunches

A national gas station chain opens a neighborhood store, adds a customer loyalty program, and puts up a website to collect registration data. It gets people to swipe the card at the pump whenever they buy gas, and inside it asks again for the card and/or ZIP code. It pays out its incentives: coffee, frozen drinks, snack packs, cookies, crackers, 2-liter pops. A video camera records it all.

Another day, a researcher working on a project to determine the snacking habits of obese people versus non-obese people has just struck a gold mine, provided they agree to treat this data responsibly, in the aggregate only. (Didn’t the gas station promise not to share the data when it collected it? Maybe. Or maybe it only promised not to sell it to companies looking for more consumers.) Canvassing begins, more data is gathered, and correlation theories are processed.

A couple of months ago, in an entrepreneurial startup weekend, publicly available data was called upon to inform or power a new phone app with predictive capabilities for determining the rise or fall of stock prices. That one’ll be hot. Publicly available data …

Try this idea: Find existing data useful for research, and then create the questions which could be answered by its careful analysis.

Call it “Backward Research.” Start with a data set first. Ask questions later. Find data in existence, not just to be mined, but to be curated, aggregated, built upon, redefined, and continually expanded to provide answers to new questions, questions we weren’t capable of even dreaming of until we’d gathered the data.
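If you want to picture that backward flow concretely, here is a minimal sketch in Python with pandas. Every file name, column name, and the BMI cutoff below are hypothetical stand-ins, not any real study’s data:

    import pandas as pd

    # An existing dataset, gathered long before any research question was posed.
    # Hypothetical columns: customer_id, item_category, quantity
    purchases = pd.read_csv("loyalty_purchases.csv")

    # Hypothetical follow-up canvassing that links customers to a BMI figure.
    customers = pd.read_csv("customer_survey.csv")  # columns: customer_id, bmi

    merged = purchases.merge(customers, on="customer_id")
    merged["obese"] = merged["bmi"] >= 30  # the conventional adult cutoff

    # Treat the data in the aggregate only: no row below identifies a person.
    snacking = (
        merged.groupby(["obese", "item_category"])["quantity"]
              .sum()
              .reset_index(name="total_quantity")
    )
    print(snacking)

The question (“do snacking habits differ?”) arrives after the data already exists; the analysis is just a grouping over it.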

In the years ahead, more and more data constructs will be created which are ‘living’: they persist over time and are therefore useful for ongoing research.

Education data is such data. Who owns that data? Who should own that data? We’re calling this burgeoning field learning analytics, but do we know what we’re really talking about?

K-12 students will be tested via computer in most of the United States starting in 2014. Those results, mapped to the Common Core standards, will over time form a ginormous data repository. What rules will govern access to that repository? Should the state governments own it? The federal government?

To what purposes could we put a repository of testing information for each child’s educational career? I remember the Twitter backchannel asking those same questions during the Educause Midwest 2009 keynote. Nancy Zimpher, then of the University of Cincinnati, was telling us about a “virtual backpack” of student data which travels with the person from cradle through career. Nope, not science fiction.

The future which the Tweeters in the room that day were cynically pronouncing was one of categorization: the creation of societal strata based on past performance, such as late reading or non-social kindergarten behaviors, which then solidified the student’s role in society forever. I would sound the alarm that now is the time to develop policy around such education data: policy which prescribes its appropriate and inappropriate use, policy which gives it an accountable owner, one beyond reproach, one with the best interests of the individual in mind. This is not the government, my friend. The government’s mission is to have the best interest of society in mind.

That data is here now. It will be aggregated. It will be researched. It should be researched. How and by whom are the questions…

There are dots to be connected. I would feel most comfortable if they were connected by researchers and educators at responsible higher ed institutions. Over at Music for Deckchairs, in the context of creating and curating educational content, Kate Bowles is making this connection, “The sudden partnership between venture-funded educational startups and traditional elite universities has thrown down a big challenge to less flexible models of higher education, especially outside the U.S. And the fact that we’ve typically bundled content, learning and accreditation under the broad heading “education” doesn’t mean that we’ll be able to keep them all contained in this way indefinitely.”

Michael Feldstein, commenting on Blackboard strategy via Ray Henderson, says, “…there are huge potential benefits to a true SaaS [Software as a Service] platform in terms of the value of the data that can be gathered. With analytics and adaptive learning being the huge buzzwords that they are, the future success of learning technology companies will largely depend on their ability to capture the data exhaust from students’ and teachers’ interactions on the platform and harness it to produce better learning outcomes.”
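To make “data exhaust” concrete, here is a minimal sketch of a single interaction captured as an event record. The actor/verb/object shape loosely echoes the xAPI (Tin Can) idea, but the field names and values are my own illustrations, not any vendor’s actual API:

    import json
    from datetime import datetime, timezone

    def learning_event(student_id: str, verb: str, activity: str) -> str:
        """Serialize one student interaction as a JSON event record."""
        return json.dumps({
            "actor": {"id": student_id},   # who did it
            "verb": verb,                  # e.g. "viewed", "submitted"
            "object": activity,            # e.g. a quiz or page identifier
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    # Every click on the platform becomes a record an analytics
    # pipeline can later aggregate toward "better learning outcomes."
    print(learning_event("student-123", "submitted", "quiz/week-3"))

Multiply that by every student, every course, every click, and the scale of the repository in question becomes clear.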

In all of this, who will speak for the student?

More Reading:

“Researchers Digitize AIDS Quilt to Make It a Research Tool,” July 9, 2012.

“Blackboard’s New Platform Strategy” (the Feldstein quote above), Michael Feldstein, August 19, 2012.

“The revolution might be televised,” Kate Bowles, July 22, 2012.

It’s a media, media, media world.

If you’re doing academic research, you can now cite a Tweet.

From the MLA:

[Image: the MLA’s recommended format for citing a tweet]
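If memory serves, the MLA’s guidance at the time ran roughly: author’s name with the Twitter handle in parentheses, the full text of the tweet in quotation marks, then the date, time, and the word Tweet. The oft-circulated example was the Abbottabad helicopter tweet:

    Athar, Sohaib (ReallyVirtual). “Helicopter hovering above Abbottabad at
    1AM (is a rare event).” 1 May 2011, 3:58 p.m. Tweet.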

If you do project management, make it visual. In my workplace we’re seeing these “Scrum boards” on every available wall. Some even include “buns in the oven” (the ultrasound photo being an example of media embedding):

[Photo: a Scrum board covering a wall]

If you want to make a point, use an infographic (fancy name for a collage that’s informative, right?):

[Infographic: profile of a Twitter user]

Marketers always use media; your technology project might want to use it to help spin the change:

[Image: “Sakai: A Moving Story”]

#LMSunSIG Tweets: Strategic Vision

[Five screen captures of #LMSunSIG tweets on strategic vision]

NERCOMP LMS UnSIG website: http://edtechgroup.org/lmsunconference/

#LMSunSIG Harvested Tweets

This thread of screen captures from the Twitter stream I’m calling “Training”: valuable comments from today related to faculty training and workshops.

[Eighteen screen captures of #LMSunSIG tweets on training]

[Two screen captures of #LMSunSIG tweets on tools]

How Universities Choose Their LMS: A Review of the Literature (but if you don’t know, I can’t tell you)

Tongue-in-Cheek Opening: In this post I will offer observations on various LMS evaluations of which I am aware. This awareness and knowledge comes from personal contacts and from published LMS reports (“the literature”). Unfortunately, I don’t have much good to say. I would like to say I’ve caught someone doing something right. If I have, I will speak up. But mostly I haven’t been able to catch any institution in the act.

Since I can’t promise I will be exceptionally kind (although I do place a high value on kindness), I will refer to neither my friends nor the universities themselves by name. If you choose to comment and think you know of whom I speak, please also refrain from mentioning names. Shall we begin?

(I have a browser open with the online versions of LMS evaluation reports from 4 institutions whose reports have been released in the last 2 years. In addition to those 4 publicly available reports, I have contacts involved with 3 other evaluations.)

Flaw #1 Lack of Strategic Alignment

I am not aware of a single institution whose process has included asking the questions “Why do we need an LMS?” and “How do we resource the management of an LMS such that the strategic goals of the university are met?” (Save money, save time, go paperless, build an online program, enable faculty with a bigger toolset, etc. Even “meet student expectations based on their high school experiences with teaching and learning technology” would be something to state explicitly!) Flaw: assumptions are not subject to reality checks.

Flaw #2 Focus on Functionality That Is All but Equivalent These Days

An RFP process generally focuses on comparing functionality between systems. But the marketplace has been mature for a while now; the systems all pretty much DO the same thing, and it’s how they do it that now matters: the workflow required of the faculty, the ability of the tool to integrate with the gradebook, the flexibility of the gradebook. Flaw: the RFP process is too shallow.
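If I were pushing an RFP deeper, I would score the workflows themselves rather than checkbox the features. A minimal sketch; the systems, workflows, ratings, and weights here are all made up for illustration:

    # 1-5 ratings from hands-on walkthroughs of real faculty workflows.
    workflow_scores = {
        "System A": {"post_grades": 4, "build_quiz": 2, "bulk_upload": 5},
        "System B": {"post_grades": 3, "build_quiz": 4, "bulk_upload": 3},
    }

    # Weight each workflow by how often faculty actually perform it.
    weights = {"post_grades": 0.5, "build_quiz": 0.3, "bulk_upload": 0.2}

    for system, scores in workflow_scores.items():
        total = sum(weights[task] * rating for task, rating in scores.items())
        print(f"{system}: weighted workflow score {total:.2f}")

The point is not the arithmetic; it is that the ratings come from walking through how the work actually gets done, not from a vendor’s feature list.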

Flaw #3 Subjectivity and Exclusion

The voices and recommendations of those most tasked with the support of the institution’s LMS are usually not included in the evaluation process. When the evaluation process is led by academics, it is usually very theoretical in its approach. (I do like identifying “guiding principles,” but don’t stop there!) While surveys and focus groups of faculty and student groups are often used, their goal is to discover subjective impressions in the aggregate. And often, whether to weight the student or faculty impressions more highly is not determined in advance.

On the other hand, evaluations that are conducted by personnel in the central IT division, or heavily weighted toward them, also do not normally include evaluating use cases or what might be called “workflows.” It’s odd, but I haven’t seen either the instructors who actually do the work or the IT shop that supports those workflows care very much about rigorously asking whether the “how” of what you have to do makes sense to those doing it. Although the systems all support the same tools (quizzes, file content, assignment uploads, discussions, and grading it all), the way you do it, and the way you have to think about what you do, varies widely.

Flaw #4 “A Review of the Literature”

While reviewing other institutions’ LMS evaluation reports is a good start, there is nothing like digging in and doing your own due diligence. If nothing else, it teaches you a whole lot about your own institution and what distinguishes it from others.

Some questions to ask yourself (Google Forms Survey).