
Sakai is alive and well and on the move

Like Sakai, this tiger is alive, staring directly at you, and ready to spring

Sakai is open source. And yes, many open source software projects come and go.

But other open source software projects embed themselves into your infrastructure and are constantly maintained, improved, and evolving. Examples: the Apache web server, the Central Authentication Service (CAS), Git, Linux, and Sakai.

Don’t believe me? Monitor the commits on GitHub. Look at contributions from Western Ontario, the University of Virginia, and Longsight (a commercial affiliate). Oh, and globally? Check out Flying Kite of Australia (busy enough that they don’t bother with a website but use LinkedIn: https://www.linkedin.com/company/flying-kite-au ) and Entornos de Formación of Spain (big enough to have an English-language website: https://www.edf.global/ ).
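If you’d rather check for yourself than take my word for it, a few lines of Python can tally commits per author from a local clone of the project (https://github.com/sakaiproject/sakai). This is a minimal sketch: it assumes git is installed and you’ve already cloned the repo, and the helper name is mine, not anything official.

```python
import subprocess

def contributor_counts(repo_path, since="1 year ago"):
    """Tally commits per author in a local git clone using `git shortlog -sn`."""
    out = subprocess.run(
        ["git", "-C", repo_path, "shortlog", "-sn", "--since", since, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = {}
    for line in out.splitlines():
        # shortlog -sn lines look like "   123\tAuthor Name"
        n, _, name = line.strip().partition("\t")
        counts[name] = int(n)
    return counts

# Usage, after `git clone https://github.com/sakaiproject/sakai`:
#   contributor_counts("sakai")
```

Sorting the resulting dict by value gives you a quick picture of who is actually doing the work, institution names included (when authors commit under institutional addresses).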

So let’s see how Sakai stacks up against the caveats proposed by Sam Saltis in his recent blog post on Core dna:

#1. Cost. Sam says, “Open software providers are also increasingly charging for extras like add-ons, integration, and additional services, which can negate any cost-saving advantages in some cases. In the end, rather than being free, you are still paying for a service with open source software.”

And he’s right. This is happening. But not with Sakai. Sakai is community-based and community maintained. It can be hosted anywhere you like, including by a commercial hosting provider or in your own institutional AWS, Google or Azure cloud. It can also be multi-tenanted, so that you could share costs of an instance. With Sakai you control the cost by controlling your choices.

#2. Service. Sam says, “Open source software relies on a loyal and engaged online user community to deliver support via forums and blogs, but this support often fails to deliver the high level of response that many consumers expect (and can receive with proprietary software).”

And he’s right. But it’s also a strength. Sakai has a loyal and engaged online user community which is highly responsive. I dare you to engage with us on our lists. You’ll get an answer within 24 hours for sure, and often within minutes!

#3. Innovation. Sam says, “Open source software provides a large amount of flexibility and freedom to change the software without restriction. This innovation, however, may not be passed on to all users and it is debated whether customized changes to the original source code can limit the future support and growth of the software. Once more, open source software providers often struggle to attract large-scale research and development.”

I have to answer this one for the Sakai community in parts: a) Yes, you are able to change Sakai without restriction. Truthfully, we like that control. b) Don’t do it in a vacuum. Engage with the community to innovate in a direction we all can use. That’s what we do. c) The Sakai community is a member of a larger open source umbrella organization, Apereo. This has been a good move for us, spurring even more innovation. The Apereo Foundation is like a ‘braintrust’ for higher ed open source software projects. Check it out.

#4. Usability. Sam says, “Usability is often a major area of criticism for open source software because the technology is generally not reviewed by usability experts and caters to developers rather than the vast majority of layperson users. User guides are not required by law and are therefore often ignored. When manuals are written, they are often filled with jargon that is difficult to follow.”

And he’s right. And admittedly, in the past this has plagued Sakai software as well. We reached a tipping point in community involvement about 3 years ago when we held a little “Sakai Camp” for the first time and discovered those who came were about half developers and half instructional designers, LMS admins, and people representing accessibility and usability concerns. More and more instructional designers and faculty are being heard from on the lists. Even the formerly predominantly developer list, sakai-dev@apereo.org, now has regular members who are NOT developers but, again, are people who care about usability.

Sakai is now reviewed by usability and accessibility experts.

Sakai has robust user guides created by users, not developers.

#5. Security. And finally, Sam says, “With individual users all around the world developing the software, there is a lack of continuity and common direction that prevents effective communication. Once more, the software is not always peer-reviewed or validated, meaning that a programmer can embed a backdoor Trojan into the software while the user is none the wiser.”

This is just plain bogus. This is not how the most popular open source software projects are maintained. Maybe it was back in the days of the wild west, but today I think the majority of open source software has nailed this one. I’d like to think Sakai was among the first.

I’m not a developer. But I care about the software I am administering for Notre Dame. So I know that those who run Sakai and develop Sakai keep current on exploits and test Sakai against them. I know that when an exploit is identified, a patch comes out very quickly. It becomes part of the main branch, from which anyone who implements Sakai can pull.

Finally, “continuity and common direction” – well, let’s let the comment section by my Sakai peers speak to this.

Sam Saltis’ full article.

 


All the Connecting Dots

We’re planning an upgrade of our LMS. Consider these “dots” and their connections:

  • The old gradebook is out. No further development. When do we take it away from our instructors? How many times do we notify them first?
  • We’re in the middle of changing video on demand (VOD) providers. The old one was to have been available ’til Jan. 2019, but it does not provide a standard LTI integration and therefore isn’t ready for the upgraded LMS version.
  • Also, there is no straight migration of content between the old VOD and the new one. Our instructors want to take their content with them. There are 576 sites which have media that needs to be migrated. This needs to be decoupled from any and all LMS upgrades. I wish.
  • We’ve been running this LMS since 2011 with that aforementioned gradebook. There is no archival method for old sites in this LMS, nor in any other LMS I’m aware of. Nor should anyone ever expect that an LMS is a system of record. But instructors do. So when we upgrade to the new version without the old gradebook, none of their old sites will have grades. Just a blank screen. Can we live with that?
  • There’s one best time frame for an LMS upgrade, even one that mostly streamlines performance and usability without adding new features or seriously changing workflows. That time is Commencement weekend, a mere 5 and a half weeks from now.


Instead of focusing on the technical details, all the splotches that need to be corralled, tested, organized, cleaned up and formed into a pleasing whole… let’s see if it helps to imagine this from our constituents’ point of view. What do they need to know and when?

  • Their current gradebook is going away, even in old course sites they haven’t seen in a long time. Now would be a good time to check their records and export gradebooks from long ago, since instructors are responsible for these records, not the LMS, nor those who run it.
    • From Fall 2018 course sites on, the old gradebook will no longer be available.
    • From that same time, old course sites also won’t have grades records.
    • We’re working on a copy of all those sites to run on the old server software version, so that we can take requests from anyone who missed these messages and get them spreadsheet exports of those old gradebooks. But that old server software won’t be able to stay up forever. We’re thinking maybe another year?
  • They’re going to like the new gradebook. And we’ll help. In fact, we’ve been offering it as an alternative since last May, so even if it wasn’t in use by any given instructor, LMS support staff already know a good deal about it and the kinds of questions people have.
  • The upgrade to the LMS? Not really of consequence this time. People will care more about their gradebooks and their embedded video service.
  • How fast can we migrate their content from the old (“K”) to the new (“P”) video on demand/playback service? Because of course we can’t take away K without delivering on an alternative!

Sometime in August I hope to deliver on a cohesive orchestrated totality.


 

What I Didn’t Know About Using Community Source Software (Sakai) in Higher Ed

“How to set expectations for change management” – that’s it. That’s what I didn’t know.

Consider the kinds of changes one habitually takes from a vendor of proprietary software which you maintain in your own higher ed Data Center.

The hypothetical change is:

  • New cool features advertised to you, or…
  • Bug fixes (you’d found them or you hadn’t; in other words, you cared deeply or not at all), or…
  • Enhancements.

Qualities of the change:

  • The change has already been deployed and tested many times over by the software vendor in environments, with data, very similar to yours.
  • You schedule when you want to accept/install the change and make it available to your users, usually based on your academic schedule.

What your users expect:

  • They won’t get it before the regularly scheduled break, even if they knew about it.
  • They didn’t know about it anyway, because the vendor doesn’t market to them but to the people who manage the system for them.
  • The vendor is an uncaring large collection of cogs anyway, so there’s no point in asking for an enhancement.

What your users do:

  • Blog, tweet, and facebook their complaints vociferously, but without expecting more than a good venting session.
  • Write you emails about how your vendor doesn’t care.
  • Log in after upgrades and harrumph that the thing that used to annoy them so greatly is finally fixed.

 

Contrast this with open/community source:

What your users expect:

  • They will be heard if they connect with the community.
  • You are connecting with the community on their behalf, and that will be meaningful because the community must be small.
  • The developers working on their behalf will automatically do a better job than the vendor because they work for higher ed institutions.

What your users do:

  • Demand bug fixes and enhancements.
  • Expect them to be applied frequently.
  • Suggest enhancements and expect them to be executed in amazingly beautiful ways.

It’s a media, media, media world.

If you’re doing academic research, you can now cite a Tweet.

From the MLA:

[Image: the MLA’s format for citing a tweet]

If you do project management, make it visual. In my workplace we’re seeing these “Scrum boards” on every available wall. Some even include “buns in the oven” (the photo of the ultrasound is an example of media embedding):

[Image: a Scrum board wall]

If you want to make a point, use an infographic (fancy name for a collage that’s informative, right?):

[Image: an infographic profiling a Twitter user]

Marketers always use media; your technology project might want to use it to help spin the change:

[Image: “Sakai – A Moving Story”]

Stalking Sakai

I’m new to the open source model. To supporting it. To participating in the community. To seeing how it’s built and how features are added. But I’ve been watching for nigh unto 15 years. And I’m here to tell you: higher ed is generally bullish on software derived from this open source model.

It’s almost as if open source were the answer to all the budgetary, visionary, and advocacy issues we all face. From Community Jr. College to State School to Private – we’ve summoned open source to give us more freedom, more features, more revenue, more integration points, more responsiveness to our constituencies, and more control of our destinies.

Software derived from and supported by the open source model is under investigation by more and more institutions of higher ed. Cautiously in some cases, but under investigation nevertheless.

Sakai began around 2004 as a collaboration among the University of Michigan, Indiana University, MIT, and Stanford. By 2005, Foundation staff on the Sakai CLE numbered 5 people, their salaries based mostly on contributions from higher ed IT.

Institutions joined up. Commercial affiliates formed. Synergies developed. The coalition worked diligently. Advocated. Listened. Built. Deployed. Software developed by higher ed for higher ed and ‘owned’ by all.

Very cool.

Except when too many institutions want to take and not give back.

That was the message I was shocked to internalize last week when one of the chief Sakai advocates and architects of these past 8 years, Dr. Chuck Severance, defended his decision to take employment from – Blackboard. He took a position at Blackboard that furthers his goals (shared by the Sakai community) of making learning technologies interoperable. Below, website by website, is a visual of his considerable breadth of reach: from development acknowledgements at Moodlerooms and Blackboard’s Edugarage, to standards work at IMS Global, and thought leadership published by Gilfus, Delta Initiatives, Campus Technology, edu1World and Inside Higher Ed. (As well as a frequently referenced though ‘unpublished’ work…!)

[Image: Dr. Chuck Severance’s internet presence, site by site]

But today, according to Dr. Chuck, and as of about 4 months ago, the number of Sakai Foundation staff actively working on the Sakai CLE (now at version 2.9) is zero. Instead, the only remaining dedicated release management resources moving the release forward come from commercial affiliates, NOT higher ed.

In Dr. Chuck’s call-to-action posting last week, he asks, “Does it bother you that about 40 higher educations stopped supporting the Sakai Foundation over the past five years?” We remember the past five years: in budgetary terms, everyone ran for the hills and dug in where we could. The difficulty is that if higher ed doesn’t sustain this effort, who will?

He goes on to ask, “Are you uncomfortable that for-profit companies already provide all of the long-term committed resources for the Sakai CLE product?”

I am. I am very uncomfortable. Are you?

Blackboard, SunGard, and rSmart: A Client’s Take on Support, Oh My.

“Support” in the context of this posting is that service for which a client contracts that includes a way to report issues to a vendor such that the vendor is expected to respond with a resolution. That’s just basic, right?

Unfortunately, with technology vendors the evaluation of which vendor to choose for one’s initial purchase is so complex that it cannot and does not include an evaluation of the structures and processes the vendor has in place to provide that support. This leads to wide disparity among technology vendors in how they resource support and what they think support does for their bottom line.

I know these three companies fairly well from personal experience. Here’s this year’s report card from me:

  • Blackboard: A
  • rSmart: B+
  • SunGard: D

In every single encounter, the following is what the support structure must deliver. Here are my ratings for these three vendors on a 1–5 scale, 1 being exemplary and 5 indicating you’d be looking for another vendor if it were your cellphone service:

Parameter                Bb   rSmart   SunGard
Timeliness                1        1         5
Expectation Management    1        4         4
Trustworthiness           1        1         3
Accuracy                  2        3         4
Knowledge                 2        3         5
Meaningfulness            2        3         4
Totals                    9       15        25
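Since the scale runs 1 (best) to 5 (worst), lower totals win. The Totals row is just the column sums, which a few lines of Python can double-check (the scores below simply transcribe my table, in row order from Timeliness down to Meaningfulness):

```python
# Ratings on a 1-5 scale, 1 = exemplary, 5 = shop elsewhere.
scores = {
    "Bb":      [1, 1, 1, 2, 2, 2],
    "rSmart":  [1, 4, 1, 3, 3, 3],
    "SunGard": [5, 4, 3, 4, 5, 4],
}

# Sum each vendor's column; lowest total is the best overall.
totals = {vendor: sum(marks) for vendor, marks in scores.items()}
print(totals)  # {'Bb': 9, 'rSmart': 15, 'SunGard': 25}
```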

So vendors, I’ll say it straight out: Your clients expect to be able to contact you quickly, receive immediate word that you’re working on their issue, and have a meaningful resolution in a timeframe commensurate to their urgency.

We want what you tell us to be truthful and timely, and we don’t want promises that guesstimate to a greater degree of accuracy than you’re able to deliver. If you say the service pack which includes my issue’s resolution will be available “soon” – tell us whether that means 3rd quarter or next year, and get it to us by then. If you say “April” – you’ll lose points for Accuracy, Trustworthiness, and Meaningfulness if you then deliver in May. And God forbid that what you deliver should end up being nothing like what you told us it would be!

“Meaningful” – let’s talk about that specifically. No one wants repeated “We’re working on it” messages. Even if you personalize those messages, they aren’t meaningful.

“Knowledge” – SunGard has a lot of knowledge, but I can’t locate it myself in their huge knowledge base site – so what good is that?

“Expectation Management” – I’m not sure I have any help for you vendors on this one. On the one hand, I appreciate multiple ways of communicating with you. On the other hand, if the people who say they’re going to escalate my problem never actually do, or attempt it but don’t understand my problem well enough to communicate it, I’m still looking for someone who can. I should also remind you that saying “no” is sometimes the right answer. But if you say that to a new client, there could be a deeper issue: the fact that they asked the question in the first place probably means you didn’t manage their expectations well when you sold them whatever they bought from you.

When I started this post I thought I had a bias toward smaller companies. It’s true that larger companies with complex products often have unwieldy support structures that make self-service almost impossible. However, larger companies that keep it personal and pay attention to quality follow-up can still effectively meet my needs.

At least when they’re not in the middle of a merger.

Making this easy: LMS Evaluations

My brain just did a flashback as my fingers hovered over the keyboard, ready to begin this post. The song “War Is a Science,” from Pippin, has started to syncopate through my skull:

the rule that every gen-er-al
kno-ws by he-art:
it’s smarter to be lucky
than it’s lucky to be smart!

 

I value ‘smart’ (O how I value smart!) but in my experience we overthink LMS evaluations.

It’s not about a Request for Proposal process. It’s not about a comparison of features. It’s not about the best software package out there. It’s not even primarily a decision of open source vs. proprietary, although this exercise may help you characterize your institution’s culture as one or the other, and that will get you started down the right track…

LMS Evaluations are like any other decision you have to make for your institution. It’s about trusting that the software you choose matches the way your institution does things.

It’s a cultural decision. Wasn’t always. But these days the market is mature enough that all these packages (Canvas, Sakai, Blackboard, Desire2Learn, Moodle) can do pretty much the same thing. It’s the way they do them that you care about. It’s the way your institution plans to use and support the software that will make the implementation project a smashing success or an unadopted disaster.

This is what you need to plan for and do:

1). Select the group of people at your institution that you trust to make this decision for you. Is it an already existing faculty committee? Maybe it’s composed of appointees chosen to represent each college or department, with central IT or the Library thrown in because they have to run it?

2). Create and follow whatever rigor or metrics these people will need to document and communicate their decision for maximum buy-in.

That’s it.

No kidding. Institutions often publish their final LMS evaluation reports. Read them. The variety they represent is as wide as the cultures of the institutions that created them. They’re not all smart. But the successes are the lucky ones who chose software that matches their institutions’ cultures.