
Independence Day (US): A Maturing Open Source Community at the 2012 Sakai Conference in Atlanta

I was really looking forward to the Sakai conference in Atlanta this year because, with my recent involvement with Blackboard as the Sakai Chief Strategist, it was the first time since 2007 that my non-academic work life was nearly 100% focused on Sakai. To achieve what I plan to achieve in my role at Blackboard, Sakai needs to be a success, and I needed to find a way to make Blackboard part of that success in a manner that is supportive of the community. So I am once again back in the middle of all things Sakai.

The State of the Sakai CLE

The Sakai Technical Coordination Committee (TCC) is now two years old, having been formed in June of 2010. The formation of the TCC was a Magna Carta moment where those working on the CLE asserted that they, and not the Sakai Foundation Board of Directors, controlled the direction of the CLE. Now that the TCC is two years old, the culture of the Sakai community has completely changed and the TCC is very comfortable in its Sakai CLE leadership role.

This was evidenced in the pre-conference meeting, several talks throughout the conference, and most strikingly in the day-long Sakai CLE planning meeting on the Thursday after the conference. The TCC has 13 members but there were over 40 people in the (very warm) room. The TCC is a membership body but does all of its work in public on the Developers List and the TCC list. TCC meetings are also open to anyone to attend and contribute.

The goal of the planning meeting was to agree on a roadmap, scope, and timeframe for the Sakai 2.9 release as well as a general scope for Sakai 2.10.

The agenda was very long, but the group moved quickly through each item, having the right kinds of conversations and balancing the need for a complete yet solid 2.9 release delivered in a timely manner (mid-fall 2012, hopefully). The meeting was led by the current TCC chair, Aaron Zeckoski of Unicon. We had the right amount of discussion on each item and then moved on to the next topic to make sure we covered the entire agenda.

I was particularly interested in figuring out which items I could accelerate using Blackboard funds and resources. But I wanted to make sure that we had community buy-in on the items before I set off to find resources. I was quite happy that we will include the new skin that came from Rutgers, LongSight, and the University of Michigan. We decided to put the new skin into Beta-6 but after the meeting decided to move it to Beta-7 because there were so many little things in Beta-6. Most of the Sakai 2.9 decisions were carefully viewed through a lens of delaying the release as little as possible.

The Coming Golden Age of the Sakai CLE

To me the biggest problem that the Sakai community faces (OAE and CLE) is that the CLE is incomplete, and as such is weak in competitive situations against products like Canvas, Moodle, Desire2Learn, eCollege, or Blackboard. From its inception, Sakai has been more of a Course Management System than a Learning Management System. Sakai 2.x through Sakai 2.8 is incomplete because it lacks a structured learning content system like Moodle Activities, Blackboard Content, ANGEL Lessons, etc. This is a feature that can create a structure of learning activities that includes HTML content, quizzes, threaded discussions, and other learning objects. These structured content features have selective release capabilities as well as expansion points.

The IMS Common Cartridge specification provides a way to import and export the most common elements in these structured content areas and move learning content in a portable manner between LMS systems. Sakai 2.8 (and earlier) simply did not have any tool or capability that could import a cartridge containing a hierarchy of learning objects. Melete (not in core) could import a hierarchy of HTML content, and Resources could import a hierarchy of files, but nothing could import a Common Cartridge, which meant that Sakai 2.8 was missing essential functionality that every LMS with significant market share had.

Other efforts like Learning Path from LOI, Sousa from Nolaria, and OpenSyllabus from HEC Montreal went down the path of building hierarchical structures beyond Melete and Resources, but never got to the point where they were full-featured enough to become core tools and put Sakai on equal footing with the structured content offerings from other LMS systems with real market share (i.e., Sakai's competitors).

That all changed in the summer of 2010 when Chuck Hedrick and Eric Jeney of Rutgers University decided to build Lesson Builder (now called Lessons) for Sakai. Instead of building Lessons on a design of their own making, they started with a competitive analysis of the other LMS systems in the marketplace to determine the core features of Lessons. This alignment with the other LMS systems in the marketplace also perfectly aligned Lessons with IMS Common Cartridge.

Chuck and Eric built Lessons aggressively and deployed it at Rutgers as it was being built, taking input from faculty and staff as well as from others in the community who grabbed early versions of Lessons and ran them in production at their schools. In early 2011 we decided Lessons was mature enough to be part of the Sakai 2.9 release, and later in the year I added support for IMS Learning Tools Interoperability so that Lessons could be certified as able to import IMS Common Cartridge 1.1.

Even though Sakai 2.9 stalled in early 2012 for lack of QA resources, a number of schools put the 2.9 Beta version with Lessons into production because they had a painful need for the Lessons capability. The great news is that Lessons has held up well both in terms of functionality and performance in those early deployments. All of that production testing will help ensure that Sakai 2.9 is solid.

I suggest that Sakai 2.9 with Lessons will trigger a Golden Age of the Sakai CLE. In a way I am completely amazed at how well the Sakai CLE through 2.8 has fared in the marketplace without the Lessons capability. Sakai has taken business from Blackboard Learn, WebCT, Moodle, ANGEL, and others without having the Lessons capability – a feature that many consider essential. I shudder to think how much market share we would have at this point if the Sakai CLE had had Lessons in 2006 when Blackboard purchased WebCT. I spent a lot of time talking to WebCT schools and they loved Sakai except for its lack of structured content. So we left a lot of market share on the table in 2006-2007.

I honestly don’t think that the primary purpose of an open source community like Sakai is to get more market share – but it is a nice measure of the value of the software and community that you produce. Commercial vendors like rSmart, LongSight, Unicon, Edia, Samoo, and now Blackboard use Sakai to meet the needs of customers for whom Sakai is a good fit and good value. We can be proud of the aggregate market share of both the direct adopters of the community edition and the customers of the commercial providers of Sakai.

The LMS market in North America is hotly contested, with strong entrants like Canvas and OpenClass and well-established competitors like Desire2Learn, so I don't know how well Sakai (even with all the 2.9 gooey goodness) will be able to gain market share. But I do think that there is an amazing unmet need for Sakai outside North America. Outside North America, I see the primary market players as Learn, Sakai, and Moodle.

If you look at the markets where Learn and Moodle are the only significant players, I think that Sakai 2.9 has a lot to bring. Moodle and Learn have their strengths and weaknesses, and Sakai 2.9 is strong where Learn and Moodle are weak; that Venn diagram of strengths and weaknesses leads to natural adoption and resulting market share. I am happy to talk more over beers about the precise areas of relative strength and weakness among Sakai, Moodle, and Learn.

So to me the Golden Age of the Sakai CLE is the 2.9 (and then 2.10) release that allows Sakai to maintain or slightly grow market share in North America by winning more than we lose and dramatically growing Sakai market share beyond North America.

I also think that once we have 2.9 out the door and installed across the Sakai community, the pace of innovation in the Sakai CLE can slow down and we can focus on performance, reliability, and less visible but equally important investments in the quality of the Sakai code base. I think that we need one more major release (Sakai 2.10) to clean up loose ends in Sakai 2.9, but as we move beyond Sakai 2.9, I think we will see a move from one release per year to a release every 18 or 24 months. We will see more 2.10.x releases during those periods as we tweak and improve the code. In a sense, the sprint towards full functionality that we ran in 2004-2007 and then picked back up in 2010-2012 will no longer be necessary, leading to a golden age where we can take a breath and enjoy being part of a mature open source community collectively managing a mature product from 2013 and beyond.

I am telling this same story internally within Blackboard in my role as Sakai Chief Strategist. Invest in 2.9, get it solid and feature complete, and then invest in 2.10 and make it rock-rock-rock solid. Any notion of deploying scalable Sakai-based services in my mind takes a back seat to investment in improving the community edition of Sakai in the 2.9 and 2.10 releases. I am not taking this approach because Blackboard has a long history of charitable giving. I am taking this approach because I see it (fix the code before we deploy anything) as the way to maximize Sakai-related revenue at Blackboard while minimizing Sakai-related costs. Even though Learn, ANGEL, and MoodleRooms are my new colleagues at Blackboard, and any Sakai business that Blackboard undertakes will likely not be Blackboard's largest line of business, I want Sakai to be the most profitable line of business in the Blackboard portfolio so I end up with enough to fund tasty steak dinners and plenty of travel to exotic locations :)

Sakai OAE and Sakai CLE Together

There has been a testy relationship between the Sakai OAE and Sakai CLE community since about 2008. Describing what went wrong would take an entire book so I won’t try to describe it here.

The good news is that when the Sakai CLE TCC was formed in 2010, it set the wheels in motion for all of the built-up animosity to fade away in time. At the 2011 Sakai conference there were a few flare-ups as folks in the OAE community needed to let go of the notion that the Sakai CLE community were resources that should be controlled by the OAE management.

The great news is that in 2012, everything is as it should be. The Sakai CLE and Sakai OAE communities see themselves as independent peers, with no remaining questions of "who is on top" or "who does the Sakai board like best." Not only have all of the negative feelings pretty much faded to background noise, there is increasing awareness of the interlinked nature of the CLE and OAE. The OAE needs the CLE to be successful to maintain the Sakai presence in the marketplace while the OAE matures, and the CLE forms the basis of the OAE hybrid mode, so the more solid the CLE is, the more successful the OAE will be.

While I want the CLE to be quite successful and have a long life, its founding technologies like JavaServer Faces, Hibernate, sticky sessions, iframes, and a host of other flaws mean that it is just not practical to move the CLE technology to the point where it can be a scalable, multi-tenant, cloud-based offering without a *lot* of care and feeding. The OAE is a far better starting point to build such a service given that it started much later (i.e., 2008 versus 2003). The OAE was born in a more REST-based, cloud-style world. Sometimes you need a rewrite – and history has shown (in Sakai and elsewhere) that rewrites take a long time – much longer than one ever expects. The community has wisely switched from seeing the CLE as resources coveted by the OAE to seeing investment in the CLE as buying time for the OAE work to take as much time as it needs.

The only bummer about this year's Atlanta meeting was that the CLE folks and OAE folks both had quite full schedules making progress on their respective efforts, so there was very little overlap between the teams. Usually when meetings at the end of the day "finish," what really happens is that the discussions continue, first in the bar, then at dinner, and then later at the bar or karaoke. Because the CLE and OAE meetings were on different tracks, there was nowhere near enough overlap in the dinner and beer conversations. I think that at next year's meeting we will address that issue.

Sakai + jasig = Apereo

Wow, this discussion has been going on for a long time! The good news is that we seem to have very high consensus on all of the details leading up to the moment where the two organizations become one. It feels like we are down to crossing the t's and dotting the i's. It will still take some time to complete the legal process – but those wheels are now in motion and I am confident we will have Apereo by Educause this year.

This is a long time in coming. Joseph Hardin and I had a discussion back in 2005, before we created the Sakai Foundation, about whether we should just join jasig instead of making our own foundation. We dismissed the notion because back then it was clear that we needed a focal point to solidify the definition of Sakai and what it was, and the Foundation was a way to help make that happen and create a worldwide brand.

The decision to start our own foundation and not join jasig had its advantages and disadvantages.

We certainly advanced the Sakai brand with an active and visible board of directors and a full-time executive director in the form of first me and then Michael Korkuska. We were able to come together and engage and "defeat" Blackboard in the patent war of 2006. We had well-attended twice-yearly conferences that later became once per year out of financial necessity, and we grew a series of regional conferences around Sakai as well.

But with all those advantages, there were some massive mistakes made, because the Sakai Foundation ended up learning a few hard lessons that jasig had frankly already painfully learned several years earlier. And sadly those lessons took a long time to learn and caused significant harm to the Sakai community. The very board of directors that was empaneled to nurture and grow the community had, by the middle of 2009, become the greatest risk to Sakai's long-term survival.

I won’t go over all the mistakes that the Sakai board made between 2008 and 2011 – that would take an entire book. I will just hit the high points:

When your funding source is higher education, money does not grow on trees. The Sakai *project* in 2004-2005 was funded by large grants and large in-kind contributions and handed the Foundation a $1 million surplus. The annual membership revenue peaked in 2006 and has fallen steadily ever since. Here is one of many rant posts where I go off on the financial incompetence of the Sakai board during that period:

Sakai Board Elections – 2010 Edition

It literally took until March 2010 for the board to understand that it needed to live within its means so as not to go bankrupt. The jasig group had learned to live within its means and align its spending with its real revenues years earlier.

The second major problem the Sakai board had was its own sense of how much power it held over volunteer members of the community. The Sakai Board saw itself as a monarchy and saw the community as its subjects. The perfect example of the Sakai Board's extreme hubris was the creation of the ill-fated Product Council. Again, this was solved in June 2010, and now in June 2012 there is very little residual pain from that terrible decision – so we are past it.

As a board member of the Sakai Foundation in 2012, I am very proud of the individual board members and very proud of how the board is currently functioning as a body. It took from 2006 to 2010 to make enough mistakes, and learn from those mistakes, to create a culture within the board that is truly reflective of what an open source foundation board of directors should be.

The Sakai Foundation board has (finally) matured and is functioning very well. My board tenure (2010-2012) has been very painful and I have shouted at a lot of people to get their attention. But the core culture of the board has finally changed and it is in the proper balance to be a modern open source organization. If I rotate off the board at the end of this year or if my board position ends at the moment of Apereo formation, I am confident that the culture will be good going forward.

Why Merge?

So if everything is so perfect, why then should we merge with jasig, keep two projects (Sakai OAE and Sakai CLE), and become Apereo?

Because the Sakai brand, while strong and known worldwide, can never expand its scope beyond the notion of a single piece of software in the teaching and learning marketplace. The Sakai brand is successful because it is narrow and focused and everyone knows what it means. This is great as long as all the "foundation" wants to do is build one or two learning management systems – but terrible if we want to broaden the scope to all kinds of capabilities that work across multiple learning management systems.

What if we wanted to start a piece of software to specifically add MOOC-like capabilities to a wide range of LMS systems using IMS Learning Tools Interoperability? Would we want to call it Sakai MOOC? That would be silly because it would imply that it only worked with one LMS. We should call it the MOOCster-2K or something like that and have a foundation where the project could live.

The Sakai brand is too narrow to handle cross-LMS or other academic computing solutions. The jasig brand is nice and broad – but there is nothing in jasig about teaching and learning per se. So the MOOCster-2K would not fit well in jasig because it needs to be close to a community (like Sakai) that has teaching and learning as its focus.

The Apache Foundation would be perfectly adequate except that there are no well-established communities that include teaching and learning as a focus.

So the MOOCster-2K needs to make the MOOCster Foundation and go it alone, and perhaps take 5-8 years and make mistakes due to growing pains like both the Sakai Foundation and jasig endured. But why? Why waste that time re-deriving the right culture when all the MOOCster community wants is a place to house intellectual property and pay for a couple of conferences per year?

So we need Apereo – and we need it to be the sum of Sakai + jasig. It needs to have a broad and inclusive brand and a mature open source culture throughout, while also including all of the academy – the technical folks, the teaching and learning folks, and the faculty and students as well. It will take this group of people with a higher education focus to truly carry higher education IT through the next 20 years if we are not to spend those 20 years begging for scraps from commercial vendors that see higher education as a narrow and relatively impoverished sub-market of their mainstream business lines.

I come out of the Atlanta conference more convinced of the vitality of Apereo than ever before. While there are many benefits cited for combining the organizations, having a single conference is the most important benefit of all. It was so wonderful to see all the uPortal folks in the bars and know we were all in the same building. But this was not some kind of Frankenstein conference with parts and pieces awkwardly sewn together. I must hand it to Ian Dolphin, Patty Gertz, and the conference organizers. The tracks were nicely balanced and we could literally have the conference be whatever we wanted it to be. It was so well orchestrated that I don't think anyone would suggest that these two groups should ever again have separate conferences.

Summary

Wow. Simply wow. Things in Sakai are better than they have been in a long time. Excitement is high. Internal stresses within the community are almost non-existent. The Sakai Foundation is financially stable (thanks to Ian Dolphin). Both the CLE and OAE are moving their respective roadmaps forward and rooting for each other to succeed.

Those of you who have known me since 2003 know that I do *not* candy-coat things. Sometimes when I think things are going poorly I just sit back and say nothing and hope that things will get better. And other times I come out swinging and don’t hold anything back.

The broad Sakai community is hitting on all cylinders right now. It will be a heck of a year. I promise you.

IEEE Video: Alan Turing and Bletchley Park

This month is the 100th anniversary of Alan Turing's birth. There will be worldwide celebrations to acknowledge Alan Turing's tremendous contributions to Computer Science as well as his contributions to the outcome of World War II through his code breaking efforts. Here is my video of the visit to Bletchley Park, looking at how Alan Turing worked with the brilliant colleagues brought together there:

For my Computing Conversations column in the June 2012 issue of IEEE Computer magazine, we wanted to be part of the celebration. I wanted to examine Alan Turing's time at Bletchley Park from the point of view of a multi-disciplinary research effort to solve the most pressing issues in cryptography during World War II. I had the following graphic, "Alan Turing at Bletchley Park," drawn by Matt Pinter to use in the video and in the article. The theme of the image is a play on the "six degrees of Alan Turing," focusing on the embeddedness of his work at Bletchley Park as well as on the evolution of mechanical computing into electronic computing during World War II.

In the quest to break the codes of their opponents in World War II, the people at Bletchley Park pushed the frontier of computation forward at great speed. World War II was the first war that operated at a scale and speed that required communication to be done using wireless transmissions. Since wireless communication can be monitored by ally and enemy alike, it was necessary to encrypt transmissions. In order to communicate securely at scale, it was necessary to develop mechanical encryption and decryption machines that could produce an "un-crackable" cipher system.

Just as with modern encryption, it was (and continues to be) impossible to hide the technical details of how encryption and decryption was done. Given that the encryption technique would be revealed or reverse-engineered sooner or later, the only defense was to make it computationally "impossible" to determine the key and then change the key regularly enough that it was simply impractical to try to crack the encryption and determine the key. The goal was to make it so computationally painful that no one would even attempt to break the code.

The winning side in this computational war would be the one that could compute quickly enough to decrypt transmissions while the information was still useful. If, for example, the enemy sends 1,000 messages per day and changes the key every day, and it takes two months to decrypt a single message, then by the time a message is finally decrypted it has little or no value from a military perspective. And if you chose the wrong message to decrypt, the information would be completely useless.

If you could get to the point where you could quickly decrypt a large fraction of the messages in a timely manner you could correlate across all the messages for a given day as well as across a series of messages on a topic over time. Such intelligence would be (and was) extremely valuable in pursuing a war to a successful conclusion and minimizing loss of life.

Project "Ultra" was the overarching effort to decrypt massive numbers of messages and then produce the low-level and high-level intelligence provided to Winston Churchill and his top generals.

Another key was to keep the enemy believing that their encryption was unbreakable so that they would confidently continue to use it and not develop new encryption techniques. The trick to making it all work was to build computing machines that were far faster than the enemy could imagine.

As the war started, with the help and inspiration of Polish cryptographers who had successfully developed a system to decipher German Enigma traffic, Alan Turing developed an electro-mechanical system called the BOMBE that tested possible key settings so quickly that it made breaking German traffic encrypted with the Enigma (and similar) machines a tractable problem and ultimately a routine activity. While Turing designed the core algorithm for the BOMBE, it was engineered and built by Harold (Doc) Keen, and optimized with the addition of the "diagonal board" by Gordon Welchman. The key was that while Turing played a central role in the making of the BOMBE, his creativity was amplified by the contributions of hundreds of other people.

The other computing machine featured in the video is the Colossus, which has been reconstructed and runs in the National Museum of Computing at Bletchley Park. While Turing is credited with developing the decryption technique for the more advanced Lorenz SZ42 encryption machines used by Hitler for longer strategic communications, others created the necessary solutions and systems to enable the regular decryption of these high-command messages. Bill Tutte worked out the details of how the Lorenz machine was built and enabled the construction of a "clone" machine. Tommy Flowers devised and constructed a tube/valve-based computer to automate the process of figuring out the key sequence for a particular message. Max Newman ran the production facility and created the processes and structure to enable the breaking of the codes. Again, very much a team effort, with Turing making a contribution amplified by the talents of others.

All in all my favourite aspect of the video is the juxtaposition of the BOMBE and the Colossus.

The BOMBE represents how far we could push mechanical computation. It was cleverly designed, cleverly optimized, and made to run as fast as it possibly could. The mechanical bits are lubricated by a fine mist of oil that falls out the bottom of the machine and is collected in a pan. It is an ultimate expression of what one can do moving information through cogs, springs, wheels, contacts, resistors, relays, wire, and light bulbs. It was built to withstand the wear and tear of 24-hours-a-day, seven-days-a-week production use and remain reliable.

And yet with all of the sophisticated engineering of the BOMBE, mechanical computing was no match for the Lorenz SZ42 encryption. The Enigma had three to five encryption wheels and a plug board and the Lorenz SZ42 had 12 encryption wheels. The Lorenz was not practical to break in a reasonable time with mechanical computation. And so the brilliant minds at Bletchley Park had no choice but to invent large-scale high-speed electronic computation to break the Lorenz cipher. They knew the Lorenz machine could be broken. All it would take was faster computation. So the brilliant minds at Bletchley Park threw themselves at the problem until they solved it.
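To get a rough feel for the scale the code breakers were up against, here is a back-of-the-envelope calculation of the Enigma key space. This is my own illustration (not something from the video or the column), using the commonly cited figures for the three-rotor Army Enigma: three rotors chosen in order from a set of five, 26 starting positions per rotor, and a plugboard wired with ten cable pairs.

from math import factorial

# Commonly cited three-rotor Army Enigma parameters (illustrative assumption)
rotor_orders = 5 * 4 * 3                  # choose 3 rotors from 5, in order = 60
rotor_positions = 26 ** 3                 # starting positions of the 3 rotors = 17,576
plugboard_pairings = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)

total_keys = rotor_orders * rotor_positions * plugboard_pairings
print(f"Plugboard pairings:    {plugboard_pairings:,}")   # about 150 trillion
print(f"Approximate key space: {total_keys:.2e}")         # on the order of 10**20

The BOMBE never searched that space blindly, of course; known-plaintext "cribs" and Welchman's diagonal board pruned the possibilities down to something an electro-mechanical machine could sweep through in hours rather than lifetimes.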

And so in the pastoral setting of the Bletchley Park mansion and outbuildings we see the mechanical computing era give way to the electronic computing era. While all of the electronic computing technology was a closely held military secret that was protected for many years, the world would never go back. The electronic computing age had begun even though it was another 10 years before the rest of the world had much of an inkling of the profound change.

It is why I think it is fair to mark Bletchley Park as "ground zero" for the electronic computing age. Of course there were lots of experiments with electro-mechanical and electronic computational circuits in university research labs that pre-date the Colossus, but the Colossus is clearly the first electronic computing device that ran at scale, in production, 24 hours a day, seven days a week.

After the war, people like Turing, Welchman, Newman, and others fanned out and created the fledgling field of Computer Science in Britain, the United States, and around the world. Computers like the MIT Whirlwind, Harvard Mark I, Manchester Baby, Manchester Mark I, and Ferranti Sirius were early general-purpose computers that built on the technology breakthroughs produced at Bletchley Park. All of these early breakthrough machines can trace a bit of their DNA back to the brilliant group of people at Bletchley Park during World War II.

As a note, the Ferranti Sirius was featured in my March 2012 Computing Conversations column, where I visited the Monash Museum of Computing History in Melbourne, Australia:

If you are interested, here is a podcast of me reading the text of the written column that appears in the June 2012 issue of IEEE Computer magazine:

I hope people enjoy viewing this month’s column video as much as I enjoyed making it. The video was greatly helped by Joel Greenberg. I met Joel many years ago while I was working on Sakai and Joel was working at the Open University in Milton Keynes. When Joel retired from the Open University, he became a volunteer at Bletchley Park. Joel was able to help me get amazing access to the people and facilities at Bletchley Park. We filmed a portion of the video sitting in Alan Turing’s office in Hut 8 at Bletchley Park. The video was filmed May 4, 2012.

There are many people to thank in the making of the Bletchley Park video: the Bletchley Park Trust, the National Museum of Computing, Joel Greenberg, Paul Kellar, Kevin Murrell, Stephen Fleming, and others. I also greatly appreciate the insightful comments from the reviewers of early versions of the video and article.

A Valuable Lesson in Audio Interference From Cell Phones

I was out at Coursera Headquarters this past week. After my woes with audio interference with my wireless microphones during some of my recent video shoots, I decided my interview with Daphne Koller would only use wired microphones. I used a wired lavalier mic and wired shotgun microphone. After the shoot was over, I gave a copy of the interview to David Unger (the Coursera AV person and YouTube cover sensation) and we took a quick listen.

Twice in the video there was terrible interference for about a third of a second, just like in my other pieces. When he heard the first bit of interference, he immediately said, "That is your iPhone – they are horrible for interference." I was shocked – it was a wired microphone.

It turns out that cell phones, and in particular smart phones, emit bursts of high levels of radiation from time to time. The radiation is so intense and broad-spectrum that it turns the microphone wire into an antenna and pushes sound into it – it has nothing to do with the wireless bits. You can hear your cell phone crackling through speakers from time to time. It is not continuous – just once in a while.

So – we live and learn. Turn off your cell phone when doing any interview, have your talent turn off their cell phones, and ask anyone nearby who might be helping to do the same – every cell phone needs to be off.

Here is a video:

Listen to 0:32

Here is an example of the futility of trying to remove the interference using Soundtrack Pro:

Live and learn – from now on, turning all cell phones off will be part of my pre-shoot checklist.

Draft Abstract: Coursera From A Teacher’s Perspective

(this is a draft of an abstract for an upcoming talk I am giving – comments welcome)

The idea of moving educational content to the web to make it more scalable has been around since the mid-1990s. Almost as soon as the web was widely used, one of the first imagined uses was moving classroom instruction online and achieving economies of scale. While the idea seemed obvious and felt like it would quickly become a solved problem, repeated attempts to replicate the classroom experience at scale achieved only disappointing results. At some point, it seemed to many people that if the problem of teaching on the web at scale remained unsolved after 20 years, perhaps it was simply not possible. But recently, with the breakthrough Stanford AI class with over 160,000 students and the rapid development of efforts like Coursera, Udacity, and edX, it seems like Massive Open Online Courses (MOOCs) are seeing significant investment and amazing growth.

What is different? What has changed? What is unique about MOOCs? Why does it seem like the same idea that has failed so many times before will finally work this time? Will these new MOOCs succeed, or will they be just another hopeful experiment that ultimately fails in the long term?

This talk will look at what it is like to develop and teach a Coursera course from a teacher's perspective. Dr. Severance is teaching a course titled Internet History, Technology and Security on Coursera starting July 23. Teaching with Coursera is part of a long-term effort that he started in 1996, when he developed the first lecture capture system, called Sync-O-Matic, in order to move his courses to the web when his students were using 28.8 modems. He will look at where Coursera is unique and different, what is new, and compare it to previous efforts.

Dr. Charles Severance
University of Michigan School of Information
www.dr-chuck.com

Keynote@Sakai Mexico: The University as a Cloud – Trends in Openness in Education

I will be giving a keynote at the first Sakai Mexico Conference on Monday, April 23 at 12:30 – right after lunch.

http://www.u-red.com.mx/sakaimexico/en.html

This will be a lot of fun and for me perfect timing.

I of course will talk about IMS Learning Tools Interoperability: past, present, and future. I will look at current and future interoperability strategies from a Sakai CLE, Sakai OAE, IMS, and Blackboard perspective. I will also talk about Massive Open Online Courses (MOOCs) and my course on Internet History, Technology, and Security in particular. I will talk about why I am excited about the pedagogy of MOOCs and in particular why I love the pedagogy of Coursera. I will also talk about where I would like to see Coursera and other MOOC efforts like MITx and Udacity go in terms of technical and strategic direction – in a sense, what I see as the real impact of MOOCs over the next 5-10 years. I will talk about the next two MOOCs I am planning to develop as well as how I plan to inject technology education into the Liberal Arts curricula of the future with these MOOCs.

All along, I thought that IMS Learning Tools Interoperability was a destination and that once we arrived, our work would be done. Increasingly I see IMS LTI as a mere doorway that once opened, lets us gaze at an amazing landscape of the future of teaching and learning.

This talk won’t be boring and it would be a mistake to miss it. I assure you.

Fixing Tappet Noise on a Buick LeSabre with a GM 3.8 (3800) Engine

This has been a heck of a couple of months in terms of the Severance family cars. Brent's Sunfire died with a rod knock at 140K miles and I bought him a little Subaru Forester. Mandy's Pontiac Grand Am blew a head gasket at 140K and had coolant coming out the tail pipe (still being repaired). Teresa's Subaru Tribeca had its 110K checkup, which cost $835.

As if all that were not enough, the venerable Dr. Chuck-mobile, my ultra-reliable 2001 Buick LeSabre with 210K miles, had a few issues as well – but the story has a happy ending. Let me start at the beginning.

I have had three Dr. Chuck-mobiles since 1998. They all have the GM 3.8 (3800) V6 engine. I would buy them at about 105K miles for around $4500, drive them for 100K miles, then sell them to someone else in my family for $2000 and buy another "new" one with 100K miles. My family loves GM 3.8 liter engines. Across my parents, brothers, and sisters, we have probably had 20 GM cars with 3.8 liter engines. My parents' garage looks like an auto repair shop in rural Mexico. We literally have in stock nearly every part that goes wrong with the GM 3.8 liter engine. My brothers Scott and Christopher can disassemble and reassemble everything from the engine to the running gear with their eyes closed. We leave transmission work to the pros at Lansing Transmission – they have never steered us wrong.

In 1999, I had a green Pontiac Bonneville. In 2004 I switched to a white Oldsmobile 88, and in 2008 I purchased my current Buick LeSabre. I really wanted a LeSabre because it was quiet and smooth and had a neat display that gave an instantaneous gas mileage readout during my 120-mile round-trip daily commute between Ann Arbor and Holt, Michigan.

I really liked the LeSabre and my goal is to not stop at 200K miles but for once in my life get a car to 300K miles. So when it turned 200K back in December, I decided that it was time to do a complete maintenance job to celebrate the milestone and prepare for the next 100K miles. I was going to change bearings, shocks, struts, brakes, calipers, rotors, and do a transmission service. So we bought all the parts and my brother Scott did all the replacements and gave me the car back.

About 1000 miles after I got the car back, it started to develop the loudest tappet noise I had ever heard. In the morning, after the car sat all night, it would start and for about five minutes make a tappet noise so loud that it sounded like someone was under my intake manifold with a sledgehammer. It was so bad that the car ran as if it were missing on one cylinder. I think that the exhaust valve was not opening. It was hard to keep the car running because it was so bad. It even threw a check engine light sometimes after it chugged so badly.

But after about 5 minutes the noise would go away and everything would be perfect for the rest of the day. Even starts after it sat a few hours were noise free. It only made the horrible tappet noise for five minutes in the morning after it sat all night.

I felt a little sheepish because to save money a few months earlier I had let one oil change go over 10K miles. When I finally got it changed the oil was pretty bad. I figured the tappet noise was because it got too dirty and gummed things up.

So I went to the oil change place and asked them to do their $79 engine cleaner treatment and then put in whatever magic goo they had to quiet tappet noise. They charged me $22 extra for Lucas Heavy Duty Oil Stabilizer. It had the consistency of honey as they poured it in. They swore that it was the "best stuff ever."

The tappet noise was gone for about 1500 miles and I was feeling pretty good. And then mysteriously it came back, even louder than before. I had just put over $1000 of repairs into this car and I was not about to spend the next 100K miles with that noise on every morning start.

So I asked my brother Chris what he would do in the situation and he gave me some advice that was the same as what I have seen all over the Internet: remove a quart of the oil and put in a quart of Marvel Mystery Oil (a.k.a. MMO). MMO was less than $5 at my AutoZone. I still had less than 2000 miles on my oil change, so I went back and asked them to drain a quart and put in the MMO. They kind of scoffed at me and told me that the Lucas was the best stuff ever. I told them I just wanted the MMO put in and did not want a lecture. I had tried it their >$100 way and it had failed after 1500 miles.

So I drove out from the oil change and immediately drove 120 miles that day to and from Ann Arbor. The next morning, the tappet noise was reduced by a third and it went away a little more quickly. For the next 500 miles it got slowly better. After about 750 miles it was quite tolerable, to the point where you actually had to turn the radio down to hear it, and it went away within a minute. After 1500 miles, even after sitting a whole night, the engine starts flawlessly with no sound at all.

This is an amazing development given how loud and how bad the tappet noise had become. I am feeling much better now.

My next oil change is in about 750 more miles, and I will put in Marvel Mystery Oil as one of the quarts, and will likely do that for the rest of the life of the car to keep it nice and clean internally.

Of course your results may differ. I am sure there are lots of reasons for tappet noise. And maybe whatever gunk or varnish needed dissolving was near a lot of oil flow and was easily cleaned up. Another advantage I have is that my driving is not stop-and-go. I get in the car and drive 60 miles at highway speeds until I arrive at work and then turn around and do the same at night. So there was plenty of oil flowing and the engine was fully warmed up pretty much every time I drove.

I will see how it goes. But for now I feel good about the quest for 300K miles with all new parts, a fresh transmission service, and now no tappet noise.

Crawling, Page Rank and Visualization in Python for SI301

I have been hacking up some sample code for my SI301 course over the past few weeks. The course is about Networks, Crowds, and Markets, so I wanted to build a rudimentary Python web crawler that would retrieve a web site, run a page rank algorithm on it, and then visualize the page rank and the links.

If you click on the image, you will see an interactive version of the visualization and be able to play with the visualization of some pages on www.sakaiproject.org. You can hover over a node to see the URL, or click and drag a node around, or double click on a node to launch the actual web page.

Here is the Source code in Python.

It uses the completely cool D3 Data-Driven Documents library to perform the visualization.
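For readers who want the flavor of the page rank step without opening the full source, here is a minimal, self-contained sketch of the power-iteration approach. This is my own simplified illustration – the actual SI301 code linked above is organized differently and also handles the crawling and the JSON export for D3.

def page_rank(links, damping=0.85, iterations=20):
    """links: dict mapping each page URL to the list of URLs it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    if target in new_rank:  # ignore links that leave the crawl
                        new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Tiny hand-made graph to show the idea
graph = {"a.html": ["b.html", "c.html"], "b.html": ["c.html"], "c.html": ["a.html"]}
for url, score in sorted(page_rank(graph).items(), key=lambda kv: -kv[1]):
    print(round(score, 3), url)

The ranks this produces are what get attached to each node (as size or color) when the crawled graph is handed off to D3.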

Comments/bug fixes welcome.

Good and Evil is not the right model – it's a Money Thing

This post is a response to Michael Feldstein’s recent excellent post about Martin Dougiamas of Moodle, Josh Coates of Instructure and me “representing” Sakai.

The Blackboard Announcements, Part 2: Can Open Source Be Bought?

Michael’s post is (as always) well written and does a good job of capturing the kinds of possible outcomes that might occur if Martin, Josh, or I were somehow replaced by an exact (but evil) duplicate.

This is not the first time in the past several weeks that I have had a conversation about me becoming evil. While I was talking to Michael Chasen about joining Blackboard, I told him that some people would assume that he had removed my regular brain and replaced it with a remote-controlled robot brain that he controlled.

We both laughed. So far, I can assure you with 100% certainty that my brain has not been replaced by a red glowing evil robot brain (i.e. iRobot). But actually, if I think about it for a moment, if my brain had been replaced by an evil robot brain, it would likely be programmed so that I would think that it had not been replaced. And also that would mean that right now instead of telling the truth like I usually do in my blog posts, my robot evil brain would be programmed to lie convincingly and I would not even know the differnz dsjaji xzsaiew lsajd slj lslkjd……

Stack overflow - core dumped
^@^@^@^@__DATA^@^@^@^@^@^@^@^@^@^@^@^@0^@^@^@^@q^@^@ ^@^@^@^@^
B^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@__nl_symbol_ptr^@__DATA
^@^@^@^@^@^@^@^@^@^@^@^@0t^@^@^@^P^@^@ t^@^@^@^B^@^@^@^@^@^@^@^@
^@^@^@^F^@^@^@^Q^@^@^@^@__la_symbol_ptr^@__DATA^@^@^@^@^@^@^@^@
^@^@^@^@0<84>^@^@^@D^@^@@^A^@^@/usr/lib/libmx.A.dylib^@^@^@^@^@
^L^@^@^@4^@^@^@^XC½m¥^@X^A^C^@^A^@^@/usr/lib/libSy

Rebooting....

Damn Evil Robot Brains and their memory leaks! Ever since these robot brains were upgraded to Lion, they seem to have stability problems. I wonder if I can format my evil robot brain and reinstall Snow Leopard?

Ah well, back to my post.

As I was saying, Michael Feldstein's post was great, but his metaphor of good and evil is just not appropriate. First, when people do things, they have a reason and logic for them. At some point a situation changes, the market changes, and someone changes their mind about something and takes a different but still logical course of action based on the new conditions.

A Billion Dollars…

I prefer to wonder what might happen if each of the people in Michael Feldstein's post were offered a billion dollars. It is a slightly more likely scenario than someone becoming evil due to a virus in SkyNet. And I will add Michael Chasen, the CEO of Blackboard, to the list of soon-to-be billionaires.

Let's assume Apple Computer wanted an LMS and was willing to spend two seconds of its worldwide revenue (a billion dollars) on the purchase, and made four people an offer of a billion dollars for their LMS software. Let's assume, just for the sake of argument, that the billion dollar offer is way more than the software is worth and that all four would take it.

What if Michael Chasen were offered a billion dollars for Blackboard Learn?

  • The software is copyright all rights reserved and there are no legitimate copies of the software outside of Blackboard.
  • Michael signs a paper transferring his rights to the software (which are complete) to Apple
  • Apple can do anything it likes with its new asset (the source code to BBLearn)
  • If a Blackboard employee happens to have a copy of the source code on their laptop, there is nothing they can do with that source code without getting sued by Apple.

What if Josh Coates were offered a billion dollars for Instructure?

  • The software is copyright Affero GPL
  • Josh signs a paper transferring his rights to the software (which are complete) to Apple
  • Apple can do anything it likes with its new asset including changing the license to copyright all rights reserved and doing all further development proprietary and closed source
  • If someone outside Instructure had a copy of Canvas one minute before the license was changed to all rights reserved, they could check that copy into github and form a company or community around the software and continue its development. That continued development must be done in a completely open source manner – whether the software is run as software as a service *or* redistributed. Apple does not have to publish its work as open source, but anyone else working on the code must publish everything as open source.

What if Martin Dougiamas were offered a billion dollars for his interest in Moodle?

  • The software is copyright GPL. Martin holds copyright to much of the code – but there are lots of contributions from others where their code is also GPL. All of Moodle is GPL and most of Moodle is owned by Martin.
  • Martin signs a paper transferring his interest in Moodle to Apple but he cannot transfer the interest of the other contributors without their explicit consent.
  • In order to change the license of all of Moodle to all rights reserved, Apple would need to track down every single contributor to Moodle (Start here) and give them each a new MacBook Air (or two) to convince them to sign over their rights. If any of the contributors refused to sign, Apple would have to re-implement the code in question in a clean-room environment (i.e., developers who work without ever having looked at the source code).
  • If Apple did not get approval from every single contributor and still decided to remove the GPL license while no one was looking, they would soon get a visit from Richard Stallman or some other representative of the Free Software Foundation. One time I was sitting in Hal Abelson’s office in the MIT Stata Building, listening to Richard Stallman explain GPL to someone over the phone in the next office over. Trust me – rewriting the software in a clean room is the much easier path.
  • If someone (like about 50,000 people) had a copy of Moodle one minute before Martin signed the papers, they could check that copy into github and form a company or community around the software and continue its development. They could even continue development in a non-open repository if they would only run software as a service and not redistribute their code. If they wanted to redistribute a binary copy of their modified Moodle, they would have to publish the modifications to the source code. Oh the delightful irony of a license that was invented before “the cloud” was even imagined and back when we actually used compilers during software development.

What if I were offered a billion dollars for my interest in Sakai?

First, the software is copyright the Educational Community License 2.0, an Apache 2.0 variant that allows unlimited open-source or closed-source forks of the code with no restrictions on those forks other than not naming the software 'Sakai' and acknowledging the Sakai Foundation and other contributors. So they can have a copy of the software for free with no real restrictions on its use, distribution, or future development. Not a single dollar needs to be exchanged and no permission is needed, let alone a billion dollars. ECL-licensed software is truly a no-strings-attached gift to anyone who finds themselves in possession of the software.

But what if Apple really wanted to pay me a billion dollars for my interest in Sakai as a contributor? It turns out that I have some interest in a tiny bit of Sakai – the parts I wrote. Let's charitably say that I wrote three percent of the code in Sakai. I maintain an interest in some of that code. Not an exclusive interest – but under the terms of my Contribution License Agreement (CLA), I have the right to keep a copy of my own work in addition to the copy I contribute to the Sakai Foundation. But of my three percent of the overall Sakai code, most likely 2.5 percent was done during the years 2003-2007 when I was a UMich employee focused on Sakai, so actually the contribution of that 2.5 percent of the code came from Michigan, not from me. Since 2007 (0.5 percent of the Sakai code) I have been a faculty member instead of a staff member, so a case could be made that I have an interest in things like the Basic LTI portlet that I wrote after 2007.

But because of my signed Contribution Agreement, I gave the Sakai Foundation an unrestricted, non-revocable copy, and the Foundation gives that copy away to anyone at no cost, so there is little to be gained in buying it from me.

So I have nothing to sell to Apple – except my charm and good looks – even if they offer me a billion dollars. Perhaps they would be interested in purchasing a signed and notarized quit-claim deed for the Brooklyn Bridge from me.

Summary

Apple literally does not have any reason to pay anyone or any organization to "buy" Sakai. They can have it, virtually unrestricted, at no cost. Because Martin holds the copyright to most of Moodle, technically he could sell his interest to Apple – but because he does not own it all, he can only sell the part he owns. In a sense, while Martin owns most of Moodle, all of Moodle is held jointly between Martin and the Moodle community. It is a common practice in GPL-style projects to simply not worry about who owns what. This many-way joint ownership is a nice insurance policy against GPL projects going proprietary.

Michael Chasen and Josh Coates (and their companies) truly own every single line of code in their products. The AGPL license for Canvas ensures that an open source community could continue after any sale – but the AGPL really limits significant large-scale commercial adaptation by anyone other than the original copyright holder.

No one is 'evil' here. Each company or open source community is protecting its interests and expressing its values by making very conscious choices about the copyright applied to its code.

I Have a Confession to Make about ANGEL

I hope I have made it clear that Sakai is my first love and always will be. I have a deep relationship with Sakai; she is part of the essential and permanent fabric of my very being, and she defines who I am at the core.

But I have to confess that I have always had a thing for ANGEL. I mostly worshipped her from afar. It was probably wrong but I had an ANGEL account on a developer server for many years. I *promise* that I never taught a course with ANGEL. But I have had many long cups of coffee sitting at my desk exploring her functionality and dreaming about what might have been. I looked but never taught.

My family members taking classes at Lansing Community College and Michigan State University use ANGEL. I look over their shoulders at times, trying to watch how the product works and how LCC and MSU teachers make use of the ANGEL features in various ways. Even simple things like the pedagogy of an assignment drop box in the middle of sequenced content seem amazing. It inspired me to write an LTI tool called 'dropbox' that imitated ANGEL's functionality. Like a poem written in PHP that I wrote but never sent.

In many ways ANGEL has been a muse to me and her simple understated elegance was inspiring. I felt that when I was with her, I had a better understanding of good UI design for an LMS. I think that just knowing her has made me a better designer and developer in my relationship with Sakai’s portal.

For example, in Sakai 2.8 when I first introduced the "expando" feature to minimize navigation, I imitated the expando feature of ANGEL. Few may have noticed, but the first image I used for the expando was the ANGEL image with black switched to blue. And the expando worked simultaneously vertically and horizontally, just like in ANGEL. But the rest of the Sakai community did not know ANGEL as I knew her. They were more familiar with the horizontal-only expando UI from Blackboard. So slowly but surely the expando image morphed from ANGEL style to Blackboard style by Sakai 2.9 in a series of steps. The final 2.8 expando image still kept the arrow motif but was a circular button instead of a rounded triangle. The arrow remained as homage to the original ANGEL inspiration in 2.8, and by 2.9 there was only a partial arrow, which is no longer really homage.

I was also taken with how ANGEL did its PDA portal. There was a button in the lower left that was essentially "switch to PDA mode" that eliminated frames and inlined everything. When I first did the PDA portal for Sakai, ANGEL again was my muse – I tried to mimic the functionality of ANGEL as much as I could. I even wanted to borrow the little late-90's Casio icon, but I did not. Of course Gonzalo took that code and made it a million percent better – Sakai's PDA portal is no longer a weak imitation of ANGEL – it is the best non-native-app mobile LMS portal in the world. It took Gonzalo to turn my symbolic gesture, based on the crush I had on ANGEL, into really good functionality.

When the Etudes team was rethinking some of how they orchestrated content and other items, I did a demo of ANGEL and suggested they just take screen shots and make their UI "just like ANGEL." They took a look and then did their own thing that turned out to be very cool.

When Chuck Hedrick of Rutgers started on Lesson Builder, I did a demo of ANGEL and suggested that where there was overlap between ANGEL and Lesson Builder, he borrow heavily from the ANGEL interface. Lesson Builder turned out to be pretty similar to ANGEL. I think that is more because great minds (Chuck Hedrick and Dave Mills) think alike than because of borrowed ideas from ANGEL. But regardless, I had my new little Angel (Lesson Builder) in Sakai 2.9 to be so proud of as she grew up. Especially a few months later when our little Angel (Lesson Builder) started to blossom with IMS LTI and IMS Common Cartridge certification. She looked a lot like her mother, but her mother never supported IMS LTI and IMS CC. Lesson Builder's mother (ANGEL) had an early version of CC 1.0 – but it was not updated to the final spec, and so while she is very talented and was instrumental in the birth of Common Cartridge – she does not have her degree in IMS Interoperability-ology. ANGEL could still go back and get her degree in IMS LTI and CC at some point in the future – but for now I am just so proud of her daughter Lesson Builder. Of course Chuck Hedrick is the other parent of Lesson Builder. I am more of a proud uncle.

As Lesson Builder moved into trunk, I asked Chuck Hedrick if we could rename it “Lessons” instead of “Lesson Builder” to give homage to ANGEL’s Lessons feature. In a way it is a little tattoo in the Sakai Navigation look and feel to remember Lesson Builder’s mother and be a little ever-present easter egg if anyone went between Sakai 2.9 and ANGEL. I thought it was a nice little touch.

A few weeks back, when I was under non-disclosure, Michael Chasen explained the plans that were announced last Monday. Michael said, "I will be buying MoodleRooms and NetSpot." I thought for a second and said, "I did not see that coming – but very clever." He said, "I want to make an Open Source Division of Blackboard and contribute to Sakai and Moodle." I said, "That makes a lot of sense." Then he said, "We are removing ANGEL from end-of-life status." I about jumped out of my chair and almost shouted, "Really??? Is David Mills coming back?" Michael said, "Yes, he is coming back and he is quite excited about what we are planning to do." Then I told Michael that he should first announce the extension of ANGEL and then wait two weeks to announce everything else. He laughed and said it was not practical to separate the announcements.

Then I asked Michael, "Can I call David and talk to him?" Michael said it would be OK, but I needed to wait two days to call David. Before I left, I told Michael that I would probably accept the job – but only after I talked to David Mills. I called David two days later and we talked for about 90 minutes about our thoughts and feelings about the multi-LMS strategy, which were completely aligned. We were like two kids in a candy store. And we both have a thing for ANGEL.

Now that I am a part-time cook at Blackboard, I am all of a sudden much closer to ANGEL. She is no longer out of my reach. ANGEL developers are a short motorcycle ride away from my house and they work in one of the few cities with *two* Ruth's Chris steakhouses. How can I resist a short visit?

Maybe I will tell her how I really feel. Or maybe I will be too embarrassed to tell my true feelings. Or maybe I will just stay in the friend zone as a secret admirer and tell ANGEL that as a friend, I really think she should go back to school and get her IMS Masters Degree in Interoperability-ology.

I don’t know what I will say or do when I am finally alone with ANGEL having a cup of coffee with her at Starbucks. Hope I don’t make a fool of myself.

What The Heck is a Chief Sakai Strategist?

I figure I should clarify my Blackboard title. “Chief Sakai Strategist” – pretty awesome and cool huh?

Blackboard Open Source Statement of Principles (scroll down to see my title)

First, it does not contain any words that would allow me to make legal commitments on the part of Blackboard. Those words would be things like "Director," "Manager," "Vice President," etc. I am prohibited from having any of those "legal" titles since I am a full-time faculty member at the University of Michigan. You will notice that my contribution agreement is signed by Michael Chasen and not me. That is because I cannot sign for anything as a legal representative of Blackboard.

To be honest, other than that I could make up my own title. I imagined and discarded, “Master of Interoperability”, “Standards Cat Herder”, “Agent of Change”, “Source of Chaos”, or “Trouble Maker”. Titles like that are funny for the first few minutes but kind of dumb after that – of course unless your title is like Mark Zuckerberg’s in the Social Network movie. Mark’s title stays fun for a long time.

I wanted “Sakai” in my title because it allows me to avoid a lot of meetings. Someone might invite me to a Collaborate meeting titled, “Learn Ocho Features” and I would ask, “Is there anything about Sakai in there?” If the answer was ‘no’, I could skip the meeting and write some code or run out and get a Starbucks.

I was thinking about “Sakai Advisor”, but that made me think of the “CIA Advisors” in the Vietnam War – and I did not want folks to make that association.

I was thinking about “Sakai Evangelist” – but outside the US (and frankly outside most large cities in the US) – that might be mis-interpreted.

So then, why "Chief" in "Chief Sakai Strategist"? First, I wanted to make sure Blackboard was hiring a really special dude. There could be a whole wing of "Sakai Strategists" but only one "Chief Strategist." And since there was only one of me, I just grabbed the "Chief" title right off the bat, like a domain name. When Blackboard hires another "Sakai Strategist," perhaps we will play rock, paper, scissors to decide who is really the chief from that point forward.

I discarded "Lead" because that pre-supposes that people will listen to me and follow what I say. That is not particularly likely and I did not want to set an expectation I could not achieve.

I will admit that there is homage to “Chief Cook and Bottle Washer” since I am a lone Sakai guy, doing most everything myself related to one of the five LMS systems Blackboard is involved in. The other LMS systems have a lot more people than just one. I will be as busy as a one-handed paper hanger.

I also like the sound of “Master Chief” – like Kevin Costner as a rescue swimmer in “The Guardian” and also like Rob Lowden in “The Bloomington Area”. (The Kevin Costner character in that movie was loosely based on Rob except that Rob was in the Navy – not the Coast Guard). Maybe after I do well for a while and learn to jump from helicopters into the water and save people, I can get a promotion to “Master Chief Sakai Strategist”.

Also there is homage to “Chief Information Officer” and “Chief Technology Officer” – but with titles like that someone might get ahold of me and ask to do some real work – like upgrade servers or fix Y2K or something like that. I need to focus on thought leadership without any distraction of actual work.

I was "Chief Architect" of the Sakai Project in 2003-2004. But that name has waaay too much baggage (have I mentioned how cool it would be if you read my Sakai book?).

I also liked the fact that “Chief Sakai Strategist” felt tastefully over-stated like “Senior Lead Janitor”.

So it turned out to be the right title for all these reasons.