Sakai 2.2 Status Update

Well, the Sakai 2.2 release is out, and it is a good time to core dump a few thoughts.
In a way, 2.2 “completes” the work we started in the 2.0 release. Looking forward, we do not have any more “shoes to drop” as big as the 2.0 and 2.2 releases. We can gracefully evolve Sakai without having to do major surgery for the next year or so. This should give us a 1-2 year period where we can focus on making Sakai better rather than having to “clear the decks” for a rewrite.
Frankly, we need to focus our energy on improving the end-user, developer, and deployer experience in Sakai if we are to truly reach our full potential. Brad Wheeler uses the term “user delight” – users should be happy to use Sakai.

A good way to focus on “user delight” is to revisit the requirements list and take a look at what is most important on it – we may want to run another round of requirements gathering to see if more stuff has popped up. We probably need to think about a plan to work on the requirements over the next 9-10 months rather than trying to rush them into the next 6-8 weeks.
My personal wish list is pretty short – that feels pretty good. Import and export are probably the hottest buttons for me – it is kind of shameful not to support IMS Content Packaging and Common Cartridge, especially because we have mostly-working patches from Zach Thomas of Texas State. I also think that we can make progress on accessibility and Charon improvements based on the TILE work at the University of Toronto. Both of these things can be done touching only a small area of the code. There are several other architecture efforts like CourseManagement that can be done without too much disturbance – these can fit nicely into any release when developer resources become available. I am sure that these can fit nicely with a set of community-driven priorities around functionality.
Even though we have been operating informally as a community for some time now, this is our first official release “as a community”. The release team was made up of volunteers from across the community, with increasingly strong and diverse voices in the release process. These new voices and their new perspectives changed the release process for the better.
Even though Sakai 2.2 was five weeks late, I think that in some ways the 2.2 release process was the best yet, and that we learned some new best practices along the way. We slipped because the release team wanted to make sure that there was nothing big, or even medium-sized, wrong with the release. I think the days when we could QA a release in four weeks are long past. In the future, we need to build time into the schedule to QA until the release is solid.
There were some mistakes made in the release process which added to the delay and increased the effort that the QA and development teams had to invest in 2.2. The biggest problem was several features that we started just a short time before the freeze date and thought we could finish. These features took longer than we expected, and hence we had to develop after the freeze – a bad pattern which wastes a lot of time and QA energy. We have to have the discipline to “wait for the next release”.
In a way, it is good that we took a little time to get things right for 2.2, because going forward we really need to hold maintenance releases to a feature freeze. This will take some discipline on the part of the whole community, and we will need to find a way for some members to evolve more rapidly and deploy new features. But we need to keep folks “together” – if we all run slightly different versions over time, the value proposition of common QA starts to go away.
There is some discussion on the dev list about switching from two major releases per year to a single major release per year in the Spring. Folks should make sure their opinions are known in that discussion so we can make the right decision for the whole community.
Regardless of the number or types of releases, in a way we need to slow down a bit. The 2.2 release really completed the rewrite and cleanup that we started in the 2.0 release. If you remember, the 2.0 release was a complete rewrite of the low-level framework. We got the low-level framework completed but never got to clean up the tools and services to be as clean as the low-level bits in 2.0. In 2.2 we re-factored the “legacy” module – this was our catch-all for everything that we did not get time to fix in 2.0. In 2.2 we finally “finished” the work, so we have both a nice framework *and* a nice-looking source tree.
We really need to get disciplined about maintenance releases and limit maintenance releases to bug fixes only. I was talking to a friend who runs a software development company about how and when they did QA. He said that they maintain a bug-fix branch, and on the first of every month, QA takes that branch and runs tests on it. They check (a) that the bugs are fixed, (b) that there were no regressions, and (c) that no new features sneaked in. If QA did not like a patch, they simply kicked it out and continued on. That way, QA was in total control of the release, and the development team was only involved when the QA team needed them. One thing they did *not* do was use each maintenance release to go through and choose a set of bugs targeted to be fixed and then work with developers to fix them. That would lengthen the process quite a bit, because unless you had the full attention of the right developer, the QA process had to wait.
I think that we should move toward this approach for Sakai. We would do simpler maintenance releases more often, getting better at the QA in a maintenance release to the point where perhaps we are doing maintenance releases every 1-2 months without too much pain. So far we have treated maintenance releases like they were just “little major releases” where we try to decide “which bugs need fixing” during the QA process and negotiate with the development team to get fixes for the “big bugs”. Under the new pattern, if the QA team wants to suggest that something needs a high-priority fix, they make the suggestion, but the fix comes out in the next maintenance release – we don’t delay or even re-QA the current maintenance release to get “one more bug fix”. The upshot is that the QA team is 100% in control of the maintenance release process.
Samigo was a big topic at the meeting in June – time for an update here as well. Things have gotten better over the summer – a number of significant problems were found and fixed, and several sites have run tests that worked pretty well. But we should continue to be ready for anything as we ramp up Samigo use – don’t oversell it, and don’t put Samigo into high-performance, all-at-the-same-time, high-stakes situations or you may be disappointed. Let’s keep talking about the experiences and sharing problems and solutions.
Just so you know, neither Etudes nor Rutgers triggered their backup plans – there was just not enough time to develop and QA a whole new solution in time for September. So they too will be running Samigo and watching carefully. We have a tentative meeting scheduled for sometime in October to talk about Samigo technically; we will have a better idea where we are once September production kicks in. Please keep talking on the lists about this. Also, congratulations to the Samigo team for continuing to work through the issues for all of us.
We should also celebrate the full integration of OSP into Sakai in this release. This is the end of a journey that has taken several years – like Sakai, OSP can now invest some time in improving their “user delight” rather than working on “plumbing issues” :).
Another topic going forward is looking at other open source learning management systems that we are starting to work and talk with. This is a natural effect of becoming more mature. As we get our technology more solidly in place, we can invest some time in looking at ways to connect to other technologies.
Our relationship with LAMS is well established, and with the release of LAMS 2.0, we are looking to evolve the Sakai Architecture in ways that align with the LAMS tool contract. Take a look here:
http://wiki.lamsfoundation.org/display/lams/Tool+Contract
I like to think of the LAMS tool contract as a possible next step in the evolution of the IMS Tool Interoperability effort.
We have also been talking with the Bodington folks in the UK. Bodington is an open source LMS initially developed at the University of Leeds (http://bodington.org/index.php).
We have also been talking very briefly with the ATutor folks at the University of Toronto (http://www.atutor.ca/) about possible collaboration around IMS Tool Interoperability.