Abstract: Initial Experiences Teaching a Massively Open Online Course (MOOC)

Dr. Severance teaches the online course “Internet History, Technology, and Security” using the Coursera teaching platform. His course starts July 23, 2012 and is free to all who want to register. The course has over 25,000 enrolled students. In this keynote, we will look at current trends in teaching and learning technology, as well as the technology and pedagogy behind the course and behind Coursera in general. We will meet the Coursera founders and take a live behind-the-scenes look into the course as it is being taught, including successes and challenges to date. Attendees are welcome to sign up and participate in the course prior to the keynote to make the discussion about the course even more interactive.

Date: August 10, 2012 – Wilmington, NC

Speaker: Dr. Charles Severance

Charles is a Clinical Associate Professor and teaches in the School of Information at the University of Michigan. Charles is a founding faculty member of the Informatics Concentration undergraduate degree program at the University of Michigan. He also works for Blackboard as Sakai Chief Strategist and works with the IMS Global Learning Consortium promoting and developing standards for teaching and learning technology. Previously he was the Executive Director of the Sakai Foundation and the Chief Architect of the Sakai Project.

http://www.dr-chuck.com/dr-chuck/resume/bio.htm
https://www.coursera.org/course/insidetheinternet

Blackboard, xpLor, and Sakai – Oh My!

It has been four months since I started working as a consultant for Blackboard in the role of Chief Sakai Strategist. With the annual Sakai Conference a month ago in Atlanta and Blackboard DevCon and BbWorld this past week in New Orleans, it seemed like a good time to give an update on things.

Cross-Platform Learning Object Repository – xpLor

Now that xpLor has been announced, I can further clarify why I joined Blackboard back in March. Back then I knew what you all now know. Blackboard (through MoodleRooms) is building a cross-platform Learning Object Repository that is planned to be deeply integrated into Learn, ANGEL, Moodle, Joule, and Sakai. For years, in my role as IMS LTI evangelist, I have been hoping for and encouraging anyone to build a real LOR that made proper use of IMS Common Cartridge and IMS Learning Tools Interoperability.

Blackboard xpLor running in the Sakai CLE

It turns out that for the past two years, Dave Mills (founder of ANGEL Learning) has been quietly working on just such a product in his (then) role as the Chief Technology Officer at MoodleRooms. Dave had quietly re-assembled a number of the brilliant ex-ANGEL developers (Kellan, Mike, Scriby, etc.) at MoodleRooms, and when Blackboard acquired MoodleRooms everyone stayed, and we are having a great time working together. I love it because I get to go to Indianapolis every few weeks and work with that team. It is just a 4-hour drive from my home, so it is easy to get to Indy. When technical folks are having fun working together on fun stuff – it bodes really well for the future of any company.

The xpLor system uses real cloud technologies like Node.js, MongoDB, and Elasticsearch. These are fun technologies to work with for a guy who has been deep in the trenches of Java, multiple SQL variants, and Spring for the past 10 years.

As a quick history aside, many know me as “Dr. LTI” – but what is less well-known is that the original Common Cartridge evangelist before Jeff Kahn was none other than David Mills himself. Back in 2005-2006, Dave Mills was at ANGEL and used his leadership position to spearhead the technical design of IMS CC as well as make sure that it was rapidly implemented in ANGEL as the first implementation in a major LMS. Dave/ANGEL shipped IMS Common Cartridge import and export *before* the standard was complete. Here is a video from Learning Impact 2006:

Another history tidbit is that Ray Henderson is one of the initial inspirations for IMS Common Cartridge when he was at Pearson (before ANGEL and before Blackboard). Ray was at the (then secret) meeting in early 2005 where the words “Common Cartridge” were first uttered, and Ray Henderson was the person who formally proposed the idea of a Common Cartridge specification in Summer 2005 at the Alt-I-Lab conference in Sheffield, England. What is cool about all of this is that all of us have been working together for the past 8 years *regardless* of what company we have worked for. Our personal passion for standards and interoperability stays with us through any job change. And now we are all together at Blackboard. End of history aside – you get the picture that this has been brewing for a while.

The xpLor system is the exact system that I would have designed if I had the time to do it. It uses IMS CC and IMS LTI as its foundational architectural constructs, and everything is built around the fact that the LOR will interoperate with and supplement LMS systems. It is the first LOR that will not try to replace LMS systems, but will instead augment LMS systems regardless of vendor, using standards to make the integration. In a sense it makes LMS-specific Learning Object Repositories pretty much obsolete. Dave Mills built the ANGEL Learning Object Repository (which is a fine product – but not cross platform) so he knows firsthand (a) the right features to put into a LOR, (b) the features *not* to put into a LOR, and (c) the limitations of a single-vendor LOR in the marketplace.
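To make the “LTI as the integration glue” idea concrete, here is a minimal sketch of how an LMS launches an external tool via an IMS LTI 1.1 basic launch: the LMS POSTs a signed form to the tool, using OAuth 1.0 HMAC-SHA1 body parameters. This is an illustration of the standard, not xpLor’s actual integration code – the endpoint URL, resource id, and key/secret below are hypothetical.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse
import uuid

def _enc(s):
    # RFC 3986 percent-encoding as required by OAuth 1.0 (RFC 5849)
    return urllib.parse.quote(str(s), safe="~")

def sign_lti_launch(url, params, key, secret):
    """Return the launch parameters with an OAuth 1.0 HMAC-SHA1
    signature added, as used by an IMS LTI 1.1 basic launch POST."""
    oauth = {
        "oauth_consumer_key": key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # Signature base string: METHOD & encoded URL & encoded sorted params
    pairs = sorted((_enc(k), _enc(v)) for k, v in all_params.items())
    param_str = "&".join("%s=%s" % kv for kv in pairs)
    base = "&".join(["POST", _enc(url), _enc(param_str)])
    # Signing key is consumer_secret&token_secret (no token for LTI 1.1)
    digest = hmac.new(("%s&" % _enc(secret)).encode("utf-8"),
                      base.encode("utf-8"), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode("ascii")
    return all_params

# Hypothetical launch of an xpLor-hosted resource from an LMS
launch = sign_lti_launch(
    "https://xplor.example.com/launch",        # hypothetical endpoint
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "lesson-item-42",  # hypothetical id
        "user_id": "student-1",
        "roles": "Learner",
    },
    key="my-consumer-key", secret="my-secret")
```

The key design point is that everything the tool needs – who the user is, what their role is, which placement in the course is launching – travels in the signed POST, so the same tool can be dropped into Learn, Sakai, Moodle, Joule, or ANGEL without LMS-specific code.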

If you saw Michael Chasen’s keynote at BbWorld – you saw a 10 minute demonstration of xpLor connected to Learn and then a single screen with a few clicks showing xpLor integrated into ANGEL, Joule, and Sakai.


If you came to the in-depth session hosted by Brent Mundy and David Mills later, you saw the longer demo done in ANGEL. For those of you well-versed in demo-ology, you would immediately assume that it meant that the ANGEL, Moodle, Joule, and Sakai integrations were fake or hand-constructed. That is absolutely not the case. Before BbWorld we had full and deep implementations of the xpLor integration API in all five platforms (Learn, Sakai, Moodle, Joule, and ANGEL) – the 10 minute demo could have been done with *every one* of those five LMS systems and it would have been as smooth as the Learn demo. I will be showing the Sakai implementation to folks at IMS, Michigan, Columbia, NYU, Rutgers, and others as quickly as I can. I am not going to make a screen recording because it is not a final product and I still have a few things I need to tweak in Sakai and in the integration API before the product is finished – so I don’t want things locked down too early. In terms of full disclosure, none of the integration code is in the core codebases of Sakai, ANGEL, Learn, Moodle, or Joule – as we need to do a little more work before we start the process of putting the code into those core codebases.

But I repeat that what you saw in the demonstration of xpLor with Sakai, Moodle, Joule, ANGEL, and Learn was real, rich, working code that was working solidly and continues to work as we evolve the code bases towards Beta.

Oh yes, and grades are already flowing back to the LMS through LTI 1.1. The way I did it in Lessons is that I just made it auto-create Gradebook columns when grades started to flow, and I kept the instructor UI as simple as possible. By the way – grades flow back to all of the LMSs that are integrated with xpLor through LTI 1.1.
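For the curious, the LTI 1.1 grade flow works by having the tool POST a small POX (Plain Old XML) replaceResult message back to the LMS’s outcome service URL, body-signed with OAuth. Here is a minimal sketch of building that message – the sourcedid below is hypothetical, and a real tool would send the body with an OAuth 1.0 Authorization header carrying the `oauth_body_hash` computed here.

```python
import base64
import hashlib

# POX body for an LTI 1.1 Basic Outcomes replaceResult call.  The
# sourcedid identifies the gradebook row/column in the LMS; the score
# must be a decimal between 0.0 and 1.0.
POX_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<imsx_POXEnvelopeRequest xmlns="http://www.imsglobal.org/services/ltiv1p1/xsd/imsoms_v1p0">
  <imsx_POXHeader>
    <imsx_POXRequestHeaderInfo>
      <imsx_version>V1.0</imsx_version>
      <imsx_messageIdentifier>{message_id}</imsx_messageIdentifier>
    </imsx_POXRequestHeaderInfo>
  </imsx_POXHeader>
  <imsx_POXBody>
    <replaceResultRequest>
      <resultRecord>
        <sourcedGUID><sourcedId>{sourcedid}</sourcedId></sourcedGUID>
        <result><resultScore>
          <language>en</language>
          <textString>{score}</textString>
        </resultScore></result>
      </resultRecord>
    </replaceResultRequest>
  </imsx_POXBody>
</imsx_POXEnvelopeRequest>"""

def build_replace_result(sourcedid, score, message_id="msg-1"):
    if not 0.0 <= score <= 1.0:
        raise ValueError("LTI 1.1 scores must be between 0.0 and 1.0")
    return POX_TEMPLATE.format(message_id=message_id,
                               sourcedid=sourcedid, score=score)

def oauth_body_hash(body):
    # OAuth 1.0 body signing: oauth_body_hash is the base64-encoded
    # SHA-1 digest of the raw request body.
    return base64.b64encode(
        hashlib.sha1(body.encode("utf-8")).digest()).decode("ascii")
```

Because this one message shape is the whole contract, the same grade-return code works against any LMS that implements the LTI 1.1 outcome service – which is why one integration lights up all five platforms.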

I could not be more excited than I am about xpLor and in a way I have spent more time since March working with xpLor than I have with Sakai because I wanted to make sure Sakai was an equal part of the xpLor roll-out this week. Now that BbWorld is done, I can get back to Sakai.

Sakai Presence on the BbWorld 2012 Trade Show Floor

I have been part of my first ever “Sakai” booth! I had a booth. I did demonstrations of xpLor and Sakai at the booth. I used the little badge scanner. I had a special Blackboard Open Source Services t-shirt. I felt all grown up! We had a combined open source booth with Sakai, MoodleRooms, and NetSpot. It was so cool, in particular because I am getting to be really good friends with all the MoodleRooms folks and got to meet all the NetSpot folks. Phil, Lou, Martin, and Tom from MoodleRooms are the best mentors I could ever have. In a way, if you think about it, what I hope to build would be something like “SakaiRooms”, where we host Sakai in Blackboard’s hosting facilities around the world. I have no idea how long it will take to roll out such a service or if/when I will start working on it, but I will say that my focus in my role at Blackboard is to invest Blackboard resources in Sakai 2.9 and beyond to improve the overall quality of Sakai as my top priority.

But while I work on and invest Blackboard resources in Sakai 2.9, I am getting so much help and mentoring from all the folks at MoodleRooms. MoodleRooms has done a really nice job of deploying Moodle as a cloud service. It takes some pretty dang clever tricks to make a shared multi-tenant app server work. I don’t think that I will be able to pull off multi-tenant app servers for hosted Sakai, but I am learning all the tips and tricks that make MoodleRooms scalable and manageable, and I can use many of the same techniques in my design of a hosted Sakai service that will greatly streamline management and deployment. Luckily, since Learn is a Java application with similar limitations to Sakai, Blackboard’s hosting facilities are well-prepared to handle Java applications. By the time MoodleRooms is integrated, the hosting folks will have a really broadened skill set, and bringing Sakai into the shared centers should be pretty smooth.

What is freakishly cool is the camaraderie between the folks working on Learn, ANGEL, Sakai, Moodle, Joule, and Engage (formerly Edline). You would think that there would be some kind of edge or competition with six LMS systems in the same company. But that is not the case, since there is only limited market overlap for the products. It might be a little confusing for sales people – but it is absolutely *not* confusing for technical people – we all are having so much fun seeing different parts of the market. For me in Sakai, we had such a hard time penetrating the K-12 market. If I work closely with the Engage folks – I can get features that I think are important into 20,000 K-12 customers through a simple upgrade. I cannot tell you how exciting this is for me as someone who wants to build technology that changes how we teach and learn.

Plans for Sakai CLE and OAE

The integration that I built for the Sakai CLE is basically an integration into Sakai 2.9’s Lessons (the software formerly known as LessonBuilder). Lessons is our place for hierarchical content, Common Cartridge import, selective release, and all the gooey goodness that defines content within an LMS. And now it has xpLor integration as well. Prior to BbWorld, I could not work with Chuck Hedrick and Eric Jeney of Rutgers on the Lessons integration. Now I can sit down with Eric and Chuck in the next few weeks and get things nicely integrated into Lessons.

I don’t want to release the integration code yet because I still want to change the APIs a bit but once things are locked down, I will simply check the code into the trunk of LTI. The API code is only about 350 lines of additions to:

basiclti-blis/src/java/org/sakaiproject/blti/ServiceServlet.java

The patches are not yet in that code – but once things settle down – they will be. I doubt it will make it into Sakai 2.9.0 – perhaps 2.9.1. But since the patches are so localized they could easily go back to 2.8 or 2.9.0 for Beta use when I get them done. By the time all of the things I want to change are completed, it will likely touch a few more files and perhaps a bit of work in Lessons.

The xpLor folks are working up a Beta program and I am hoping to get at least five Sakai schools into the very first Beta and then, as the Beta expands, perhaps make it available to more schools with a membership in the Sakai Foundation. When you are making a real cloud service it needs to be carefully tuned for scalability. No promises here – just telling you what I am hoping for. I will let folks know as this progresses.

While I am totally excited about the relationship between xpLor and the Sakai CLE, I am even more excited about how xpLor can work with Sakai OAE. Throughout the OAE project there has been this tension between whether OAE should follow the path of being a “new” LMS and rewrite everything that is in Sakai CLE, or whether OAE should focus on new ways of thinking about collaboration across teaching and research in education. The “hybrid mode” has been the compromise to broaden the scope of the OAE to get it feature-rich enough for production without requiring a full-on re-build of the traditional LMS functionality present in the CLE. But there is a bit of an impedance mis-match between the CLE and OAE in hybrid mode, as each is architected to be the “top level organizer” of the entire experience.

The combination of OAE+xpLor will not suffer from this impedance mis-match. The xpLor system was designed from its core to *not* be the top-level organizer of the user experience. The xpLor system was designed to support *whatever* organizing principle a particular LMS has produced. Lessons in CLE is one approach to organizing, Activities in Moodle is another approach, while Content in Learn is yet another approach. OAE is still another approach to organization, workflow, authorization, navigation, etc. The xpLor design is intended to work with all of these approaches and fit in gently to all of them.

I don’t think that xpLor will replace hybrid mode, as the functionality in xpLor is simple, pretty, and generic – almost Google-like in its UI designs. The xpLor approach is to stay simple and generic (like Google) and as such it needs to work with more complex tools like Message Center or Samigo in Sakai CLE. Sometimes you want things that are simple and can be thrown around like widgets, and other times you need something larger with more precise use cases. OAE+xpLor+hybrid will be a very nice combination.

As I have time, I will meet with some of the OAE stakeholders to give some in-depth demonstrations and begin the discussion. I would hope to have an OAE school in the first round of Beta testers. If I have time (sheesh), I may even start to develop a more rich integration of xpLor into OAE or I may find someone in the OAE world that can help me do it. Any work would have to be under NDA for a while until things with the API are more solid and officially released – but the code would eventually be open source and part of the core OAE code base. For me OAE is a lower priority than CLE 2.9 – but I am excited enough about xpLor that I would pull this up in the priority – at least to get the OAE conversation and thinking started.

Summary

It has been a *heck* of a four months. I am so glad that I can talk openly about xpLor. I have been hinting to all my friends in the Sakai community about “good things to come” and now I can talk about it and do demos and be more open – which is my nature. I feed off openness and I feed off sharing and learning from that sharing.

The culture at Blackboard is great. I am getting so much support from my friends at Learn, Engage, MoodleRooms, and the shared services teams. There is absolutely no resistance to standards and interoperability. I just don’t have enough hours in the day to work on all the fun things and move the cause of flexible choices in teaching and learning forward. I may need to be getting some help soon in order for me not to be the weakest link.

Oh yeah, and if you think this is all pretty exciting, remember that this is *just* the beginning. It is just the beginning. It is just the first four months. Hang on – this will be a fun ride.

Independence Day (US): A Maturing Open Source Community at the 2012 Sakai Conference in Atlanta

I was really looking forward to the Sakai conference in Atlanta this year because, with my recent involvement with Blackboard as the Sakai Chief Strategist, it was the first time since 2007 that my non-academic work life was nearly 100% focused on Sakai. In order to achieve what I plan to achieve in my role at Blackboard, Sakai needs to be a success, and I needed to find a way to make Blackboard a part of that success in a manner that is supportive of the community. So I am once again back in the middle of all things Sakai.

The State of the Sakai CLE

The Sakai Technical Coordination Committee (TCC) is now two years old, having been formed in June of 2010. The formation of the TCC was a Magna Carta moment where those working on the CLE asserted that they, and not the Sakai Foundation Board of Directors, controlled the direction of the CLE. Now that the TCC is two years old, the culture of the Sakai community has completely changed and the TCC is very comfortable in its Sakai CLE leadership role.

This was evidenced in the pre-conference meeting, several talks throughout the conference, and most strikingly in the day-long Sakai CLE planning meeting on the Thursday after the conference. The TCC has 13 members but there were over 40 people in the (very warm) room. The TCC is a membership body but does all of its work in public on the Developers List and the TCC list. TCC meetings are also open to anyone to attend and contribute.

The goal of the planning meeting was to agree on a roadmap, scope, and timeframe for the Sakai 2.9 release as well as a general scope for Sakai 2.10.

The agenda was very long, but the group moved quickly through each item, having the right kinds of conversations and balancing scope against the need for a complete yet solid 2.9 release delivered in a timely manner (mid-Fall 2012, hopefully). The meeting was led by the current TCC chair, Aaron Zeckoski of Unicon. We had the right amount of discussion on each item and then moved on to the next topic to make sure we covered the entire agenda.

I was particularly interested in figuring out items that I could accelerate by using Blackboard funds and resources. But I wanted to make sure that we had community buy-in on the items before I set off to find resources. I was quite happy that we will include the new skin that came from Rutgers, LongSight, and the University of Michigan. We decided to put the new skin into Beta-6, but after the meeting decided to move it to Beta-7 because there were so many little things in Beta-6. Most of the Sakai 2.9 decisions were carefully viewed through a lens of delaying the release as little as possible.

The Coming Golden Age of the Sakai CLE

To me the biggest problem that the Sakai community faces (OAE and CLE) is that the CLE is incomplete and as such is weak in competitive situations when facing products like Canvas, Moodle, Desire2Learn, eCollege, or Blackboard. From its inception, Sakai has been more of a Course Management System than a Learning Management System. Sakai 2.x through Sakai 2.8 is incomplete because it lacks a structured learning content system like Moodle Activities, Blackboard Content, ANGEL Lessons, etc. This is a feature that can create a structure of learning activities that include HTML content, quizzes, threaded discussions, and other learning objects. These structured content features have selective release capabilities as well as expansion points.

The IMS Common Cartridge specification provides a way to import and export the most common elements in these structured content areas and move learning content in a portable manner between LMS systems. Sakai 2.8 (and earlier) simply did not have any tool/capability that could import a cartridge that included a hierarchy of learning objects. Melete (not in core) could import a hierarchy of HTML content, and Resources can import a hierarchy of files but nothing could import a Common Cartridge and that meant that Sakai 2.8 was missing essential functionality that every LMS with significant market share had.
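To see why a hierarchy-aware tool is required, it helps to look at the shape of a cartridge: an IMS Common Cartridge is a zip file whose imsmanifest.xml describes a tree of nested item elements under an organization. Here is a small sketch of walking that tree – the depth/title pairs it produces are exactly the structure a Lessons-style tool must rebuild on import. The sample manifest below is a made-up illustration, and the code is deliberately namespace-agnostic since different CC versions use different namespace URIs.

```python
import xml.etree.ElementTree as ET

def local(tag):
    # Strip the XML namespace prefix, e.g. "{http://...}item" -> "item"
    return tag.rsplit("}", 1)[-1]

def walk_items(element, depth=0):
    """Recursively yield (depth, title) for each <item> below element,
    reproducing the hierarchy of the cartridge's organization."""
    for child in element:
        if local(child.tag) == "item":
            title_el = next((c for c in child if local(c.tag) == "title"), None)
            title = title_el.text if title_el is not None else "(untitled)"
            yield depth, title
            yield from walk_items(child, depth + 1)

def outline(manifest_xml):
    """Return the full item outline of an imsmanifest.xml document."""
    root = ET.fromstring(manifest_xml)
    result = []
    for org in (e for e in root.iter() if local(e.tag) == "organization"):
        result.extend(walk_items(org))
    return result

# Hypothetical, trimmed-down manifest for illustration
SAMPLE = """<manifest xmlns="http://www.imsglobal.org/xsd/imsccv1p1/imscp_v1p1">
 <organizations><organization>
  <item><title>Week 1</title>
   <item identifierref="r1"><title>Reading</title></item>
   <item identifierref="r2"><title>Quiz</title></item>
  </item>
 </organization></organizations>
</manifest>"""

toc = outline(SAMPLE)
```

Resources can unzip the files and Melete can take the HTML, but neither can take this item tree and turn it into structured, selectively-released learning activities – which is the gap Lessons fills.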

Other efforts like Learning Path from LOI, Sousa from Nolaria, and OpenSyllabus from HEC Montreal went down the path of building hierarchical structures beyond Melete and Resources, but never got to the point where they were full-featured enough to become core tools and put Sakai on equal footing with the structured content offerings from other LMS systems with real market share (i.e., Sakai’s competitors).

That all changed in the summer of 2010 when Chuck Hedrick and Eric Jeney of Rutgers University decided to build Lesson Builder (now called Lessons) for Sakai. Instead of building Lessons on a design of their own making, they started with a competitive analysis of the other LMS systems in the marketplace to determine the core features of Lessons. This alignment with the other LMS systems in the marketplace also perfectly aligned Lessons with IMS Common Cartridge.

Chuck and Eric built Lessons aggressively and deployed it at Rutgers as it was being built, taking faculty and staff input as well as input from others in the community who grabbed early versions of Lessons and ran them in production at their schools. In early 2011 we decided Lessons was mature enough to be part of the Sakai 2.9 release, and later in the year I added support for IMS Learning Tools Interoperability so that Lessons could be certified as able to import IMS Common Cartridge 1.1.

Even though Sakai 2.9 stalled in early 2012 for lack of QA resources, a number of schools put the 2.9 Beta version with Lessons into production because they had a painful need for the Lessons capability. The great news is that Lessons has held up well both in terms of functionality and performance in those early deployments. All of that production testing will help ensure that Sakai 2.9 is solid.

I suggest that Sakai 2.9 with Lessons will trigger a Golden Age of the Sakai CLE. In a way I am completely amazed at how well the Sakai CLE through 2.8 has fared in the marketplace without the Lessons capability. Sakai has taken business from Blackboard Learn, WebCT, Moodle, ANGEL, and others without having the Lessons capability – a feature that many consider essential. I shudder to think how much market share we would have at this point if the Sakai CLE had Lessons in 2006 when Blackboard purchased WebCT. I spent a lot of time talking to WebCT schools and they loved Sakai except for its lack of structured content. So we left a lot of that market share in 2006-2007 on the table.

I honestly don’t think that the primary purpose of an open source community like Sakai is to get more market share – but it is a nice measure of the value of the software and community that you produce. Commercial vendors like rSmart, LongSight, Unicon, Edia, Samoo, and now Blackboard use Sakai to meet the needs of customers for whom Sakai is a good fit and good value. We can be proud of the aggregate market share of both the direct adopters of the community edition and the customers of the commercial providers of Sakai.

The LMS market in North America is hotly contested, with strong entrants like Canvas and OpenClass and well-established competitors like Desire2Learn, so I don’t know how well Sakai (even with all the 2.9 gooey goodness) will be able to gain market share. But I do think that there is an amazing un-met need for Sakai outside North America. Outside North America, I see the primary market players as Learn, Sakai, and Moodle.

If you look at the market where Learn and Moodle are the only significant players, I think that Sakai 2.9 has a lot to bring to that market. I think that Moodle and Learn have their strengths and weaknesses and I think that Sakai 2.9 is strong where Learn and Moodle are weak and that Venn-Diagram of strengths and weaknesses leads to natural adoption and resulting market share. I am happy to talk more about this over beers about the precise areas of relative strengths and weaknesses between Sakai, Moodle, and Learn.

So to me the Golden Age of the Sakai CLE is the 2.9 (and then 2.10) release that allows Sakai to maintain or slightly grow market share in North America by winning more than we lose and dramatically growing Sakai market share beyond North America.

I also think that once we have 2.9 out the door and installed across the Sakai community, the pace of innovation in the Sakai CLE can slow down and we can focus on performance, reliability, and less visible but equally important investments in the quality of the Sakai code base. I think that we need one more major release (Sakai 2.10) to clean up loose ends in Sakai 2.9, but as we move beyond Sakai 2.9, I think we will see a move from one release per year to a release every 18 or 24 months. We will see more 2.10.x releases during those periods as we tweak and improve the code. In a sense, the sprint towards full functionality that we did in 2004-2007 and then picked back up in 2010-2012 will no longer be necessary, leading to a golden age where we can take a breath and enjoy being part of a mature open source community collectively managing a mature product from 2013 and beyond.

I am telling this same story internally within Blackboard in my role as Sakai Chief Strategist. Invest in 2.9, get it solid and feature complete, and then invest in 2.10 and make it rock-rock-rock solid. Any notion of deploying scalable Sakai-based services in my mind takes a back seat to investment in improving the community edition of Sakai in the 2.9 and 2.10 releases. I am not taking this approach because Blackboard has a long history of charitable giving. I am taking this approach because I see it (fix the code before we deploy anything) as the way to maximize Sakai-related revenue at Blackboard while minimizing Sakai-related costs. Even though Learn, ANGEL, and MoodleRooms are my new colleagues at Blackboard, and while any Sakai business that Blackboard undertakes will likely not be Blackboard’s largest line of business, I want Sakai to be the most profitable line of business in the Blackboard portfolio – so I end up with enough to fund tasty steak dinners and plenty of travel to exotic locations :)

Sakai OAE and Sakai CLE Together

There has been a testy relationship between the Sakai OAE and Sakai CLE community since about 2008. Describing what went wrong would take an entire book so I won’t try to describe it here.

The good news is that when the Sakai CLE TCC was formed in 2010, it set the wheels in motion for all of the built-up animosity to go away in time. At the 2011 Sakai conference there were a few flare-ups as folks in the OAE community needed to let go of the notion that the Sakai CLE community were resources that should be controlled by the OAE management.

The great news is that in 2012, everything is as it should be. The Sakai CLE and Sakai OAE communities see themselves as independent peers, with no remaining questions of “who is on top” or “who does the Sakai board like best”. Not only have all of the negative feelings pretty much become no more than background noise, there is increasing awareness of the interlinked nature of the CLE and OAE. The OAE needs the CLE to be successful to maintain the Sakai presence in the marketplace while OAE matures, and the CLE forms the basis of the OAE hybrid mode, so the more solid the CLE is – the more successful the OAE will be.

While I want the CLE to be quite successful and have a long life, its founding technologies (JavaServer Faces, Hibernate, sticky sessions, iframes) and a host of other flaws mean that it is just not practical to move the CLE technology to the point where it can be a scalable, multi-tenant, cloud-based offering without a *lot* of care and feeding. The OAE is a far better starting point to build such a service given that it started much later (i.e., 2008 versus 2003). The OAE was born in a more REST-based, cloud-style world. Sometimes you need a rewrite – and history has shown (in Sakai and elsewhere) that rewrites take a long time – much longer than one ever expects. The community has wisely switched from seeing the CLE as resources coveted by the OAE to seeing investment in the CLE as buying time for the OAE work to take as much time as it needs.

The only bummer about this year’s Atlanta meeting was that the CLE folks and OAE folks both had quite full schedules making progress on their respective efforts, so there was very little overlap between the teams. Usually when meetings at the end of the day “finish”, what really happens is that the discussions continue, first in the bar, then at dinner, and then later at the bar or Karaoke. Because the CLE and OAE meetings were on different tracks, there was nowhere near enough overlap in the dinner and beer conversations. I think that at next year’s meeting we will address that issue.

Sakai + jasig = Apereo

Wow this discussion has been going on for a long time! The good news is that we seem to have very high consensus on all of the details leading up to the moment where the two organizations become one. It feels like we are down to crossing the t’s and dotting the i’s. It will still take some time to do the legal process – but those wheels are now started and I am confident we will have Apereo by Educause this year.

This is a long time in coming. Joseph Hardin and I had a discussion back in 2005 before we created the Sakai Foundation as to whether we should just join jasig instead of making our own foundation. We dismissed the notion because back then it was clear that we needed a focal point to solidify the definition of Sakai and what it was and the Foundation was a way to help make that happen and create a world-wide brand.

The decision to start our own foundation and not join jasig had its advantages and disadvantages.

We certainly advanced the Sakai brand with an active and visible board of directors and a full-time executive director in the form of first me and then Michael Korkuska. We were able to come together and engage and “defeat” Blackboard in the patent war of 2006. We had well-attended twice-yearly conferences that later became once per year out of financial necessity, and grew a series of regional conferences around Sakai as well.

But with all those advantages there were some massive mistakes made, because the Sakai Foundation ended up learning a few hard lessons that jasig had frankly already painfully learned several years earlier. And sadly those lessons took a long time to learn and caused significant harm to the Sakai community. The very board of directors that was empaneled to nurture and grow the community was, by the middle of 2009, the greatest risk to Sakai’s long-term survival.

I won’t go over all the mistakes that the Sakai board made between 2008 and 2011 – that would take an entire book. I will just hit the high points:

When your funding source is higher education – money does not grow on trees. The Sakai *project* in 2004-2005 was funded by large grants and large in-kind contributions and handed the Foundation a $1 million surplus. The annual membership revenue peaked in 2006 and has fallen steadily ever since. Here is one of many rant posts where I go off on the financial incompetence of the Sakai board during that period:

Sakai Board Elections – 2010 Edition

It literally took until March 2010 for the board to understand that it needed to live within its means so as not to go bankrupt. The jasig group had learned to live within its means and align its spending with its real revenues years earlier.

The second major problem that the Sakai board had was its own sense of how much power it had over volunteer members of the community. The Sakai board saw itself as a monarchy and saw the community as its subjects. The perfect example of the Sakai board’s extreme hubris was the creation of the ill-fated Product Council. Again this was solved in June 2010, and now in June 2012 there is very little residual pain from that terrible decision – so we are past it.

As a board member of the Sakai Foundation in 2012, I am very proud of the individual board members and very proud of how the board is currently functioning as a body. It took from 2006 – 2010 to make enough mistakes and learn from those mistakes to create a culture within the board that is truly reflective of what an open source foundation board of directors should be.

The Sakai Foundation board has (finally) matured and is functioning very well. My board tenure (2010-2012) has been very painful and I have shouted at a lot of people to get their attention. But the core culture of the board has finally changed and it is in the proper balance to be a modern open source organization. If I rotate off the board at the end of this year or if my board position ends at the moment of Apereo formation, I am confident that the culture will be good going forward.

Why Merge?

So if everything is so perfect, why then should we merge, become two projects (Sakai OAE and Sakai CLE), and become Apereo?

Because the Sakai brand, while strong and known worldwide, can never expand in scope beyond the notion of a single piece of software in the teaching and learning marketplace. The Sakai brand is successful because it is narrow and focused and everyone knows what it means. That is great as long as all the “foundation” wants to do is build one or two learning management systems – but terrible if we want to broaden the scope to all kinds of capabilities that work across multiple learning management systems.

What if we wanted to start a piece of software to specifically add MOOC-like capabilities to a wide range of LMS systems using IMS Learning Tools Interoperability? Would we want to call it Sakai MOOC? That would be silly because it would imply that it only worked with one LMS. We should call it the MOOCster-2K or something like that and have a foundation where the project could live.

The Sakai brand is too narrow to handle cross-LMS or other academic computing solutions. The jasig brand is nice and broad – but there is nothing in jasig about teaching and learning per se. So the MOOCster-2K would not fit well in jasig because it needs to be close to a community (like Sakai) that has teaching and learning as its focus.

The Apache Foundation would be perfectly adequate except that there are no well-established communities there that include teaching and learning as a focus.

So the MOOCster-2K would need to create the MOOCster Foundation and go it alone, and perhaps take 5-8 years and make mistakes due to growing pains like those both the Sakai Foundation and jasig endured. But why? Why waste that time re-deriving the right culture when all the MOOCster community wants is a place to house intellectual property and pay for a couple of conferences per year?

So we need Apereo – and we need it to be the sum of Sakai + jasig. It needs to have a broad and inclusive brand and a mature open source culture throughout, including all of the academy – the technical folks, the teaching and learning folks, and the faculty and students as well. It will take this group of people with a higher education focus to truly carry higher education IT through the next 20 years – otherwise we will spend those 20 years begging for scraps from commercial vendors that see higher education as a narrow and relatively impoverished sub-market of their mainstream business lines.

I come out of the Atlanta conference even more convinced of the vitality of Apereo than ever before. While many benefits are cited for combining the organizations, having a single conference is the most important of all. It was wonderful to see all the uPortal folks in the bars and know we were all in the same building. And this was not some Frankenstein conference with parts and pieces awkwardly sewn together. I must hand it to Ian Dolphin, Patty Gertz, and the conference organizers: the tracks were nicely balanced, and we could make the conference whatever we wanted it to be. It was so well orchestrated that I don’t think anyone would suggest these two groups hold separate conferences from this point forward.

Summary

Wow. Simply wow. Things in Sakai are better than they have been in a long time. Excitement is high. Internal stresses within the community are almost non-existent. The Sakai Foundation is financially stable (thanks to Ian Dolphin). Both the CLE and OAE are moving their respective roadmaps forward and rooting for each other to succeed.

Those of you who have known me since 2003 know that I do *not* candy-coat things. Sometimes when I think things are going poorly I just sit back and say nothing and hope that things will get better. And other times I come out swinging and don’t hold anything back.

The broad Sakai community is hitting on all cylinders right now. It will be a heck of a year. I promise you.

IEEE Video: Alan Turing and Bletchley Park

This month is the 100th anniversary of Alan Turing’s birth. There will be world-wide celebrations acknowledging Alan Turing’s tremendous contributions to Computer Science, as well as his contributions, through his code-breaking efforts, to the outcome of World War II. Here is my video of a visit to Bletchley Park, looking at how Alan Turing worked with the brilliant colleagues assembled there:

For my Computing Conversations column in the June 2012 issue of IEEE Computer magazine, we wanted to be part of the celebration. I wanted to examine Alan Turing’s time at Bletchley Park from the point of view of a multi-disciplinary research effort to solve the most pressing problems in cryptography during World War II. I had the following graphic, “Alan Turing at Bletchley Park,” drawn by Matt Pinter to use in the video and in the article. The image is a play on the “six degrees of Alan Turing” and focuses on the embeddedness of his work at Bletchley Park, as well as on the evolution of mechanical computing into electronic computing during World War II.

In the quest to break the codes of their opponents in World War II, the people at Bletchley Park pushed the frontier of computation forward at a remarkable pace. World War II was the first war that operated at a scale and speed that required communication over wireless transmissions. Since wireless communication can be monitored by ally and enemy alike, it was necessary to encrypt transmissions. In order to communicate securely at scale, both sides developed mechanical encryption and decryption machines intended to produce an “un-crackable” cipher system.
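To make the idea of a wheel-based cipher machine concrete, here is a toy polyalphabetic cipher in Python. This is purely an illustrative sketch of my own, not how Enigma or Lorenz actually worked, and it is vastly weaker; the single-wheel key and the message are made-up examples:

```python
# Toy polyalphabetic cipher in the spirit of wheel-based machines:
# each letter is shifted by a "wheel" offset that advances one step
# per letter.  A teaching sketch only -- vastly weaker than Enigma.
def rotor_cipher(text, key, decrypt=False):
    out = []
    for i, ch in enumerate(text):
        if not ch.isalpha():
            out.append(ch)          # pass spaces and punctuation through
            continue
        shift = (key + i) % 26      # wheel position steps once per letter
        if decrypt:
            shift = -shift
        base = ord('A')
        out.append(chr((ord(ch.upper()) - base + shift) % 26 + base))
    return "".join(out)

message = "ATTACK AT DAWN"
scrambled = rotor_cipher(message, key=7)
restored = rotor_cipher(scrambled, key=7, decrypt=True)
print(scrambled, "->", restored)
```

Because the shift changes with every letter, repeated letters in the plaintext do not produce repeated letters in the ciphertext, which is the basic trick that defeats simple frequency counting.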

Just as with modern encryption, it was (and continues to be) impossible to hide the technical details of how encryption and decryption are done. Given that the encryption technique would be revealed or reverse-engineered sooner or later, the only defense was to make it computationally “impossible” to determine the key, and then change the key regularly enough that cracking the encryption was simply impractical. The goal was to make the work so computationally painful that no one would even attempt to break the code.

The winning side in this computational war would be the one that could decrypt transmissions quickly enough that the information was still useful. If, for example, the enemy sends 1,000 messages per day and changes the key every day, and it takes two months to decrypt a single message, by the time a message was finally decrypted it would have little or no military value. And if you chose the wrong message to decrypt, the information would be completely useless.

If you could get to the point where you could decrypt a large fraction of the messages in a timely manner, you could correlate across all the messages for a given day, as well as across a series of messages on a topic over time. Such intelligence would be (and was) extremely valuable in pursuing a war to a successful conclusion and minimizing loss of life.

Project “Ultra” was the overarching effort to decrypt massive numbers of messages and then produce low-level and high-level intelligence for Winston Churchill and his top generals.

Another goal was to keep the enemy believing that their encryption was unbreakable, so that they would confidently continue to use it rather than develop new encryption techniques. The key to making it all work was to build computing machines far faster than the enemy could imagine.

As the war started, with the help and inspiration of Polish cryptographers who had successfully developed a system to decipher German Enigma traffic, Alan Turing developed an electro-mechanical system called the BOMBE that tested possible key values so quickly that it made breaking German traffic encrypted with the Enigma (and similar) machines a tractable problem and ultimately a routine activity. While Turing designed the core algorithm for the BOMBE, it was engineered and built by Harold (Doc) Keen and optimized with the addition of the “Diagonal Board” by Gordon Welchman. The key point is that while Turing played a central role in the making of the BOMBE, his creativity was amplified by the contributions of hundreds of other people.

The other computing machine featured in the video is the Colossus, which has been reconstructed and runs in the National Museum of Computing at Bletchley Park. While Turing is credited with developing the decryption technique for the more advanced Lorenz SZ42 encryption machines used by Hitler for strategic communications, others created the necessary solutions and systems to enable the regular decryption of these high-command messages. Bill Tutte worked out the details of how the Lorenz machine was built, enabling the construction of a “clone” machine. Tommy Flowers devised and constructed a tube/valve-based computer to automate the process of figuring out the key sequence for a particular message. Max Newman ran the production facility and created the processes and structure that enabled the breaking of the codes. Again, this was very much a team effort, with Turing making a contribution amplified by the talents of others.

All in all my favourite aspect of the video is the juxtaposition of the BOMBE and the Colossus.

The BOMBE represents mechanical computation advanced as far as it could go. It was cleverly designed, cleverly optimized, and made to run as fast as it possibly could. The mechanical bits are lubricated by a fine mist of oil that falls out of the bottom of the machine and is collected in a pan. It is an ultimate expression of what one can do moving information through cogs, springs, wheels, contacts, resistors, relays, wire, and light bulbs. It was built to withstand the wear and tear of 24-hours-a-day, seven-days-a-week production use and remain reliable.

And yet with all of the sophisticated engineering of the BOMBE, mechanical computing was no match for the Lorenz SZ42 encryption. The Enigma had three to five encryption wheels and a plug board; the Lorenz SZ42 had twelve encryption wheels. The Lorenz was not practical to break in a reasonable time with mechanical computation. And so the brilliant minds at Bletchley Park had no choice but to invent large-scale, high-speed electronic computation to break the Lorenz cipher. They knew the Lorenz machine could be broken; all it would take was faster computation. So they threw themselves at the problem until they solved it.

And so in the pastoral setting of the Bletchley Park mansion and outbuildings we see the mechanical computing era give way to the electronic computing era. While all of the electronic computing technology was a closely held military secret that was protected for many years, the world would never go back. The electronic computing age had begun even though it was another 10 years before the rest of the world had much of an inkling of the profound change.

That is why I think it is fair to mark Bletchley Park as “ground zero” of the electronic computing age. Of course, there were experiments with electro-mechanical and electronic computational circuits in university research labs that pre-date the Colossus, but the Colossus is clearly the first electronic computing device that ran at scale, in production, 24 hours per day, seven days per week.

After the war, people like Turing, Welchman, Newman, and others fanned out and created the fledgling field of Computer Science in Britain, the United States, and around the world. Computers like the MIT Whirlwind, Harvard Mark I, Manchester Baby, Manchester Mark I, and Ferranti Sirius built on the technology breakthroughs produced at Bletchley Park. All of these early breakthrough computers can trace a bit of their DNA back to the brilliant group of people at Bletchley Park during World War II.

As a note, the Ferranti Sirius was featured in my March 2012 Computing Conversations column where I visited Monash Museum of Computing History in Melbourne, Australia:

If you are interested, here is a podcast of me reading the text of the written column that appears in the June 2012 issue of IEEE Computer magazine:

I hope people enjoy viewing this month’s column video as much as I enjoyed making it. The video was greatly helped by Joel Greenberg. I met Joel many years ago while I was working on Sakai and Joel was working at the Open University in Milton Keynes. When Joel retired from the Open University, he became a volunteer at Bletchley Park. Joel was able to help me get amazing access to the people and facilities at Bletchley Park. We filmed a portion of the video sitting in Alan Turing’s office in Hut 8 at Bletchley Park. The video was filmed May 4, 2012.

There are many people to thank in the making of the Bletchley Park video: the Bletchley Park Trust, the National Museum of Computing, Joel Greenberg, Paul Kellar, Kevin Murrell, Stephen Fleming, and others. I also greatly appreciate the insightful comments from the reviewers of early versions of the video and article.

A Valuable Lesson in Audio Interference From Cell Phones

I was out at Coursera Headquarters this past week. After my woes with audio interference with my wireless microphones during some of my recent video shoots, I decided my interview with Daphne Koller would only use wired microphones. I used a wired lavalier mic and wired shotgun microphone. After the shoot was over, I gave a copy of the interview to David Unger (the Coursera AV person and YouTube cover sensation) and we took a quick listen.

Twice in the video there was terrible interference lasting a third of a second – just like in my other pieces. When he heard the first burst, he immediately said, “That is your iPhone – they are horrible for interference.” I was shocked – it was a wired microphone.

It turns out that cell phones, and in particular smart phones, emit intense bursts of radiation from time to time. The radiation is so strong and broad-spectrum that it turns the microphone wire into an antenna and induces noise in the signal – it has nothing to do with the wireless bits. You can hear your cell phone crackling through speakers from time to time. It is not continuous – just once in a while.

So – we live and learn. Turn off your cell phone when doing any interview, have your talent turn off their cell phones, and ask the same of anyone nearby who might be helping – every cell phone needs to be off.

Here is a video:

Listen to 0:32

Here is an example of the futility of trying to remove the interference using Soundtrack Pro:

Live and learn – from now on, turning all cell phones off will be part of my pre-shoot checklist.

Draft Abstract: Coursera From A Teacher’s Perspective

(this is a draft of an abstract for an upcoming talk I am giving – comments welcome)

The idea of moving educational content to the web to make it more scalable has been around since the mid-1990s. Almost as soon as the web was widely used, one of the first imagined applications was moving classroom instruction online to achieve economies of scale. While the idea seemed obvious and felt like it would quickly become a solved problem, repeated attempts to replicate the classroom experience at scale achieved only disappointing results. At some point it seemed to many people that if the problem of teaching on the web at scale remained unsolved after 20 years, perhaps it was simply not possible. But recently, with the breakthrough Stanford AI class with over 160,000 students and the rapid development of efforts like Coursera, Udacity, and edX, Massive Open Online Courses (MOOCs) are seeing significant investment and amazing growth.

What is different? What has changed? What is unique about MOOCs? Why does it seem like the same idea that has failed so many times before will finally work this time? Will these new MOOCs succeed, or will they be just another hopeful experiment that ultimately fails in the long term?

This talk will look at what it is like to develop and teach a Coursera course from a teacher’s perspective. Dr. Severance begins teaching a course titled Internet History, Technology, and Security on Coursera on July 23. Teaching with Coursera is part of a long-term effort that he started in 1996, when he developed the first lecture capture system, called Sync-O-Matic, in order to move his courses to the web when his students were using 28.8 modems. He will look at where Coursera is unique and different, what is new, and compare it to previous efforts.

Dr. Charles Severance
University of Michigan School of Information
www.dr-chuck.com

Keynote@Sakai Mexico: The University as a Cloud – Trends in Openness in Education

I will be giving a keynote at the first Sakai Mexico Conference Monday April 23 at 12:30 – after lunch.

http://www.u-red.com.mx/sakaimexico/en.html

This will be a lot of fun and for me perfect timing.

I will of course talk about IMS Learning Tools Interoperability – past, present, and future. I will look at current and future interoperability strategies from a Sakai CLE, Sakai OAE, IMS, and Blackboard perspective. I will also talk about Massive Open Online Courses (MOOCs), and my course on Internet History, Technology, and Security in particular. I will talk about why I am excited about the pedagogy of MOOCs, and in particular why I love the pedagogy of Coursera. I will also talk about where I would like to see Coursera and other MOOC efforts like MITx and Udacity go in terms of technical and strategic direction – in a sense, what I see as the real impact of MOOCs over the next 5-10 years. I will talk about the next two MOOCs I am planning to develop, as well as how I plan to inject technology education into the Liberal Arts curricula of the future with these MOOCs.

All along, I thought that IMS Learning Tools Interoperability was a destination and that once we arrived, our work would be done. Increasingly I see IMS LTI as a mere doorway that once opened, lets us gaze at an amazing landscape of the future of teaching and learning.

This talk won’t be boring and it would be a mistake to miss it. I assure you.

Fixing Tappet Noise on a Buick LeSabre with a GM 3.8 (3800) Engine

This has been a heck of a couple of months for the Severance family cars. Brent’s Sunfire died with a rod knock at 140K miles, and I bought him a little Subaru Forester. Mandy’s Pontiac Grand Am blew a head gasket at 140K and had coolant coming out the tail pipe (it is still being repaired). Teresa’s Subaru Tribeca had its 110K checkup, which cost $835.

As if all that were not enough, the venerable Dr. Chuck-mobile, my ultra-reliable 2001 Buick LeSabre with 210K miles, had a few issues as well – but the story has a happy ending. Let me start at the beginning.

I have had three Dr. Chuck-mobiles since 1998, all with the GM 3.8 (3800) V6 engine. I would buy them at about 105K miles for around $4500, drive them for 100K miles, sell them to someone else in my family for $2000, and then buy another “new” one with 100K miles. My family loves GM 3.8 liter engines. Across my parents, brothers, and sisters, we have probably had 20 GM cars with 3.8 liter engines. My parents’ garage looks like an auto repair shop in rural Mexico. We literally have in stock nearly every part that goes wrong with the GM 3.8 liter engine. My brothers Scott and Christopher can disassemble and reassemble everything from the engine to the running gear with their eyes closed. We leave transmission work to the pros at Lansing Transmission – they have never steered us wrong.

In 1999, I had a green Pontiac Bonneville. In 2004 I switched to a white Oldsmobile 88, and in 2008 I purchased my current Buick LeSabre. I really wanted a LeSabre because it was quiet and smooth and had a neat display that gave an instantaneous gas mileage readout during my 120-mile round-trip daily commute between Ann Arbor and Holt, Michigan.

I really liked the LeSabre, and my goal is to not stop at 200K miles but, for once in my life, to get a car to 300K miles. So when it turned 200K back in December, I decided it was time to do a complete maintenance job to celebrate the milestone and prepare for the next 100K miles. I was going to change the bearings, shocks, struts, brakes, calipers, and rotors, and do a transmission service. So we bought all the parts, and my brother Scott did all the replacements and gave me the car back.

About 1000 miles after I got the car back, it started to develop the loudest tappet noise I had ever heard. In the morning, after the car sat all night, it would start and, for about five minutes, make a tappet noise so loud that it sounded like someone was under my intake manifold with a sledge hammer. It was so bad that the car ran as if it were missing on one cylinder – I think an exhaust valve was not opening. It was hard to keep the car running, and it even threw a check engine light sometimes after it chugged so badly.

But after about 5 minutes the noise would go away and everything would be perfect for the rest of the day. Even starts after it sat a few hours were noise free. It only made the horrible tappet noise for five minutes in the morning after it sat all night.

I felt a little sheepish because, to save money a few months earlier, I had let one oil change go past 10K miles. When I finally got it changed, the oil was pretty bad. I figured the tappet noise arose because the oil got too dirty and gummed things up.

So I went to the oil change place and asked them to do their $79 engine cleaner treatment and then put in whatever magic goo they had to quiet tappet noise. They charged me $22 extra for Lucas Heavy Duty Oil Stabilizer. It had the consistency of honey as they poured it in. They swore that it was the “best stuff ever”.

The tappet noise was gone for about 1500 miles, and I was feeling pretty good. And then, mysteriously, it came back even louder than before. I had just put over $1000 of repairs into this car, and I was not about to spend the next 100K miles with that noise on every morning start.

So I asked my brother Chris what he would do in the situation, and he gave me the same advice I have seen all over the Internet: remove a quart of the oil and put in a quart of Marvel Mystery Oil (a.k.a. MMO). MMO was less than $5 at my AutoZone. I still had less than 2000 miles on my oil change, so I went back and asked them to drain a quart and put in the MMO. They kind of scoffed at me and told me that the Lucas was the best stuff ever. I told them I just wanted the MMO put in and did not want a lecture. I had tried their $100-plus way and it failed after 1500 miles.

So I drove out from the oil change and immediately put on 120 miles that day, to and from Ann Arbor. The next morning, the tappet noise was reduced by a third, and it went away a little more quickly. For the next 500 miles it got slowly better. After about 750 miles it was quite tolerable – you actually had to turn the radio down to hear it, and it went away within a minute. After 1500 miles, even after sitting a whole night, the engine starts flawlessly with no noise at all.

This is an amazing development given how loud and how bad the tappet noise had become. I am feeling much better now.

My next oil change is in about 750 more miles. I will put in Marvel Mystery Oil as one of the quarts, and will likely do that for the rest of the life of the car to keep it nice and clean internally.

Of course, your results may differ. I am sure there are lots of causes of tappet noise, and maybe whatever gunk or varnish needed dissolving was near a lot of oil flow and was easily cleaned up. Another advantage I have is that my driving is not stop-and-go: I get in the car and drive 60 miles at highway speeds until I arrive at work, and then turn around and do the same at night. So there was plenty of oil flowing, and the engine was fully warmed up pretty much every time I drove.

I will see how it goes. But for now I feel good about the quest for 300K miles with all new parts, a fresh transmission service, and now no tappet noise.

Crawling, Page Rank and Visualization in Python for SI301

I have been hacking up some sample code for my SI301 course over the past few weeks. The course is about Networks, Crowds, and Markets, so I wanted to build a rudimentary Python web crawler that would retrieve a web site, run a page rank algorithm on it, and then visualize the page rank and the links.
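The page rank step of such a crawler can be sketched with simple power iteration. This is a minimal illustration rather than the actual SI301 code; the link graph, damping factor, and file names below are made-up examples:

```python
# Minimal PageRank via power iteration over a small link graph.
# The graph below is a made-up example, not real crawl data.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # every page gets a base share, plus shares from pages linking to it
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    if target in new:   # ignore links that leave the crawl
                        new[target] += share
        rank = new
    return rank

links = {
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "c.html": ["a.html"],
}
ranks = pagerank(links)
print(sorted(ranks, key=ranks.get, reverse=True))  # ['c.html', 'a.html', 'b.html']
```

The rank values can then be dumped to JSON and fed to a force-directed layout for visualization.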

If you click on the image, you will see an interactive version of the visualization and be able to play with the visualization of some pages on www.sakaiproject.org. You can hover over a node to see the URL, or click and drag a node around, or double click on a node to launch the actual web page.

Here is the source code in Python.

It uses the completely cool D3 Data Driven Documents to perform the visualization.

Comments/bug fixes welcome.

Good and Evil is not the right model – it’s a Money Thing

This post is a response to Michael Feldstein’s recent excellent post about Martin Dougiamas of Moodle, Josh Coates of Instructure, and me, “representing” Sakai.

The Blackboard Announcements, Part 2: Can Open Source Be Bought?

Michael’s post is (as always) well written and does a good job of capturing the kinds of possible outcomes that might occur if Martin, Josh, or I were somehow replaced by an exact (but evil) duplicate.

It is not the first time in several weeks that I have had a conversation about me becoming evil. While I was talking to Michael Chasen about joining Blackboard, I told him that some people would assume he had removed my regular brain and replaced it with a remote-control robot brain that he controlled.

We both laughed. So far, I can assure you with 100% certainty that my brain has not been replaced by a red glowing evil robot brain (i.e. iRobot). But actually, if I think about it for a moment, if my brain had been replaced by an evil robot brain, it would likely be programmed so that I would think that it had not been replaced. And also that would mean that right now instead of telling the truth like I usually do in my blog posts, my robot evil brain would be programmed to lie convincingly and I would not even know the differnz dsjaji xzsaiew lsajd slj lslkjd……

Stack overflow - core dumped
^@^@^@^@__DATA^@^@^@^@^@^@^@^@^@^@^@^@0^@^@^@^@q^@^@ ^@^@^@^@^
B^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@__nl_symbol_ptr^@__DATA
^@^@^@^@^@^@^@^@^@^@^@^@0t^@^@^@^P^@^@ t^@^@^@^B^@^@^@^@^@^@^@^@
^@^@^@^F^@^@^@^Q^@^@^@^@__la_symbol_ptr^@__DATA^@^@^@^@^@^@^@^@
^@^@^@^@0<84>^@^@^@D^@^@@^A^@^@/usr/lib/libmx.A.dylib^@^@^@^@^@
^L^@^@^@4^@^@^@^XC½m¥^@X^A^C^@^A^@^@/usr/lib/libSy

Rebooting....

Damn Evil Robot Brains and their memory leaks! Ever since these robot brains were upgraded to Lion, they seem to have instability problems. I wonder if I can format my evil robot brain and reinstall Snow Leopard?

Ah well, back to my post.

As I was saying, Michael Feldstein’s post was great, but his metaphor of good and evil is just not appropriate. When people do things, they have reasons and logic for them. At some point a situation changes, the market changes, someone changes their mind about something, and they take a different but still logical course of action based on the new conditions.

A Billion Dollars…

I prefer to wonder what might happen if each of the people in Michael Feldstein’s post were offered a billion dollars. It is a slightly more likely scenario than someone becoming evil due to a virus in SkyNet. And I will add Michael Chasen, the CEO of Blackboard, to the list of soon-to-be billionaires.

Let’s assume Apple wanted an LMS and was willing to spend a few days of its worldwide revenue (a billion dollars) on the purchase, and made each of four people an offer of a billion dollars for their LMS software. Let’s assume, just for the sake of argument, that the billion-dollar offer is way more than the software is worth and all four would take it.

What if Michael Chasen were offered a billion dollars for Blackboard Learn?

  • The software is copyright all rights reserved and there are no legitimate copies of the software outside of Blackboard.
  • Michael signs a paper transferring his rights to the software (which are complete) to Apple
  • Apple can do anything it likes with its new asset (the source code to BBLearn)
  • If a Blackboard employee happens to have a copy of the source code on their laptop, there is nothing they can do with that source code without getting sued by Apple.

What if Josh Coates were offered a billion dollars for Instructure?

  • The software is copyright Affero GPL
  • Josh signs a paper transferring his rights to the software (which are complete) to Apple
  • Apple can do anything it likes with its new asset including changing the license to copyright all rights reserved and doing all further development proprietary and closed source
  • If someone outside Instructure had a copy of Canvas one minute before the license was changed to all rights reserved, they could check that copy into github and form a company or community around the software and continue its development. That continued development must be done in a completely open source manner – whether the software is run as software as service *or* if the software is redistributed. Apple does not have to publish their work as open source but anyone else working on the code must publish everything open source.

What if Martin Dougiamas were offered a billion dollars for his interest in Moodle?

  • The software is copyright GPL. Martin holds copyright to many of the lines of code, but there are lots of contributions from others whose code is also GPL. All of Moodle is GPL, and most of Moodle is owned by Martin.
  • Martin signs a paper transferring his interest in Moodle to Apple but he cannot transfer the interest of the other contributors without their explicit consent.
  • In order to change the license of all of Moodle to all rights reserved, Apple would need to track down every single contributor to Moodle (start here) and give each of them a new MacBook Air (or two) to convince them to sign over their rights. If any of the contributors refused to sign, Apple would have to re-implement the questionable area of code in a clean-room environment (i.e. with developers who have never looked at the source code).
  • If Apple did not get approval from every single contributor and still decided to remove the GPL license while no one was looking, they would soon get a visit from Richard Stallman or some other representative of the Free Software Foundation. One time I was sitting in Hal Abelson’s office in the MIT Stata Center, listening to Richard Stallman explain the GPL to someone over the phone in the next office. Trust me – rewriting the software in a clean room is the much easier path.
  • If someone (like about 50,000 people) had a copy of Moodle one minute before Martin signed the papers, they could check that copy into GitHub and form a company or community around the software and continue its development. They could even continue development in a non-open repository as long as they only ran the software as a service and did not redistribute their code. If they wanted to redistribute a binary copy of their modified Moodle, they would have to publish the modifications to the source code. Oh, the delightful irony of a license that was invented before “the cloud” was even imagined, back when we actually used compilers during software development.

What if I were offered a billion dollars for my interest in Sakai?

First, the software is licensed under the Educational Community License 2.0, an Apache 2.0 variant that allows unlimited open-source or closed-source forks of the code, with no restrictions on those forks other than not naming the software ‘Sakai’ and acknowledging the Sakai Foundation and other contributors. So they can have a copy of the software for free with no real restrictions on its use, distribution, or future development. Not a single dollar needs to be exchanged and no permission is needed, let alone a billion dollars. ECL-licensed software is truly a no-strings-attached gift to anyone who finds themselves in possession of the software.

But what if Apple really wanted to pay me a billion dollars for my interest in Sakai as a contributor? It turns out that I have some interest in a tiny bit of Sakai – the parts I wrote. Let’s charitably say that I wrote three percent of the code in Sakai. I maintain an interest in some of that code. Not an exclusive interest – but under the terms of my Contribution License Agreement (CLA), I have a right to keep a copy of my own work in addition to the copy I contribute to the Sakai Foundation. But of my three percent of the overall Sakai code, most likely 2.5 percent was done during the years 2003-2007 when I was a UMich employee focused on Sakai, so the contribution of that 2.5 percent of the code actually came from Michigan, not from me. Since 2007 (0.5 percent of the Sakai code) I have been a faculty member instead of a staff member, so a case could be made that I have an interest in things like the Basic LTI portlet that I wrote after 2007.

But because of my signed Contribution Agreement, I gave the Sakai Foundation an unrestricted, non-revocable copy, and the foundation gives that copy away to anyone at no cost, so there is little to be gained in buying it from me.

So I have nothing to sell to Apple – except my charm and good looks – even if they offer me a billion dollars. Perhaps they would be interested in purchasing a signed and notarized quit-claim deed for the Brooklyn Bridge from me.

Summary

Apple literally does not have any reason to pay anyone or any organization to “buy” Sakai. They can have it virtually unrestricted at no cost. Because Martin holds the copyright to most of Moodle, technically he could sell his interest to Apple – but because he does not own it all, he can only sell the part he owns. In a sense, while Martin owns most of Moodle, all of Moodle is held jointly between Martin and the Moodle community. It is a common practice in GPL-style projects to simply not worry about who owns what. This many-way joint ownership is a nice insurance policy against GPL projects going proprietary.

Michael Chasen and Josh Coates (and their companies) truly own every single line of code in their products. The AGPL license for Canvas ensures that an open source community could continue after any sale – but the AGPL really limits significant large-scale commercial adaptation for anyone other than the original copyright holder.

No one is ‘evil’ here. Each company or open source community is protecting its interests and expressing its organizational or community values by making very conscious choices about the copyright applied to its code.