More Tsugi Refactoring – Removal of the mod folder

I completed the last of many Tsugi refactoring steps yesterday, when I moved the contents of the “mod” folder into its own repository. The goal of all this refactoring was to get to the point where checking out the core Tsugi repository did not include any end-user tools – it would include just the administrator, developer, key management, and support capabilities (LTI 2, CASA, ContentItem Store). The key is that this console will also be used for the Java and NodeJS implementations of Tsugi until we build the console functionality in each of those languages, so it made no sense to drag in a bunch of PHP tools if you were just going to use the console. I wrote a bunch of new documentation showing how the new “pieces of Tsugi” fit together:

This means that as of this morning, if you do a “git pull” in your /tsugi folder, the mod folder will disappear. But have no fear – you can restore it with the following steps:

cd tsugi
git clone mod

And your mod folder will be restored. You will now have to do separate git pulls for both Tsugi and the mod folder.

I have all of this in solid production (with mod restored as above) for my Coursera and on-campus UMich courses, so I am pretty sure it holds together well.

This was the last of a multi-step refactor for this code to modularize it in multiple repositories so as to better prepare for Tsugi in multiple languages as well as plugging Tsugi into various production environments.

Ring Fencing JSON-LD and Making JSON-LD Parseable Strictly as JSON

My debate with my colleagues[1, 2] about the perils of unconstrained JSON-LD as an API specification is coming to a positive conclusion. We have agreed to the following principles:

  • Our API standard is a JSON standard, and we will constrain our JSON-LD usage so that the API can be deterministically produced and consumed using *only* JSON parsing libraries. During de-serialization, it must be possible to parse the JSON deterministically using a JSON library without looking at the @context at all. It must be possible to produce the correct JSON deterministically and add a hard-coded and well-understood @context section that does not need to change.
  • There should never be a requirement in the API specification or in our certification suite that forces the use of JSON-LD serialization or de-serialization on either end of the API.
  • If some software in the ecosystem covered by the standard decides to use JSON-LD serializers or de-serializers and cannot produce the canonical JSON form for our API – that software will be forced to change and generate the precise constrained JSON (i.e. we will ignore any attempts to coerce the rest of the ecosystem using our API to accept unconstrained JSON-LD).
  • Going forward we will make sure that the sample JSON we publish in our specifications will always be in JSON-LD Compacted form, with either a single @context or multiple contexts, with the default context included as “@vocab”, all fields in the default context having no prefixes, and all fields outside the default @context having simple and predictable prefixes.
  • We are hopeful and expect that Compacted JSON-LD is so well defined in the JSON-LD W3C specification that all implementations in all languages that produce compact JSON-LD with the same context will produce identical JSON. If for some strange reason, a particular JSON-LD compacting algorithm starts producing JSON that is incompatible with our canonical JSON – we will expect that the JSON-LD serializer will need changing – not our specification.
  • In the case of extending the data model, the prefixes used in the JSON will be agreed upon to maintain predictable JSON parsing. If we cannot pre-agree on the precise prefixes themselves then at least we can agree on a convention for prefix naming. I will recommend they start with “x_” to pay homage to the use of “X-” in RFC-822 and friends.
  • As we build API certification mechanisms we will check and validate incoming JSON to ensure that it is valid JSON-LD, issue a warning for any flawed JSON-LD but consider that non-fatal, and parse the content using only deterministic JSON parsing to judge whether or not an implementation passes certification.
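To make the first and last principles concrete, here is a minimal sketch of a JSON-only consumer; the payload, field names, and context URL are illustrative only, not drawn from any actual specification:

```python
import json

# Hypothetical compacted payload; the @context value is illustrative only.
payload = """{
  "@context": "http://schema.org/",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Professor"
}"""

doc = json.loads(payload)    # a plain JSON parse -- no JSON-LD library involved
doc.pop("@context", None)    # the context is carried along but never inspected
person_name = doc["name"]    # every remaining key is a predictable field name
```

A producer does the inverse: build the plain dictionary and attach the hard-coded @context just before serialization.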

Our hope is that for the next 3-5 years we can rely on JSON-only infrastructure while at the same time laying the groundwork for a future set of more elegant and expandable APIs using JSON-LD, once performance and ubiquity concerns around JSON-LD are addressed.

Some Sample JSON To Demonstrate the Point

Our typical serialization starts with the short form for a single default @context as in this example from the JSON-LD playground:

  {
    "@context": "",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Professor",
    "telephone": "(425) 123-4567",
    "url": ""
  }

But let’s say we want to extend this with a field – the @context would need to switch from a single string to an object that maps prefixes to IRIs, as shown below:

  {
    "@context": {
      "@vocab": "",
      "csev": ""
    },
    "@type": "Person",
    "url": "",
    "jobTitle": "Professor",
    "name": "Jane Doe",
    "telephone": "(425) 123-4567",
    "csev:debug": "42"
  }

If you compact this with a single schema for – all extensions get expanded:

  {
    "@context": "",
    "type": "Person",
    "": "42",
    "jobTitle": "Professor",
    "name": "Jane Doe",
    "telephone": "(425) 123-4567",
    "schema:url": ""
  }

The resulting JSON is tacky and inelegant. If on the other hand you compact with this context:

  "@context": {
    "@vocab": "",
    "csev": ""
  }

You get JSON that is succinct and deterministic with predictable prefixes, and that – minus the context – looks like the clean JSON one might design even without the influence of JSON-LD.

  {
    "@context": {
      "@vocab": "",
      "csev": ""
    },
    "@type": "Person",
    "csev:debug": "42",
    "jobTitle": "Professor",
    "name": "Jane Doe",
    "telephone": "(425) 123-4567",
    "url": ""
  }

What is beautiful here is that when you use the @vocab + extension prefixes as the @context, our “canonical JSON serialization” can be read by JSON-LD parsers and produced deterministically by a JSON-LD compact process.

In a sense, what we want for our canonical serialization is the output of a jsonld_compact operation – and if you were to run the resulting JSON through jsonld_compact again, you would get the exact same JSON.

Taking this approach – pre-agreeing on the official contexts, all prefixes for official contexts, and a prefix naming convention for any and all extensions – means we should be able to use pure-JSON libraries to parse the JSON while ignoring the @context completely.
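A sketch of that pure-JSON parse, splitting fields by the pre-agreed prefix convention (the document mirrors the example above; the context IRIs are placeholders):

```python
import json

# The compacted form from the example above; context IRIs are placeholders.
raw = """{
  "@context": {"@vocab": "http://schema.org/",
               "csev": "http://example.com/csev#"},
  "@type": "Person",
  "name": "Jane Doe",
  "csev:debug": "42"
}"""

doc = json.loads(raw)
# Default-vocabulary fields have no prefix; extensions carry a known prefix.
core = {k: v for k, v in doc.items() if not k.startswith("@") and ":" not in k}
ext  = {k: v for k, v in doc.items() if not k.startswith("@") and ":" in k}
```

Neither dictionary comprehension ever consults the @context.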


Comments welcome. I expect this document will be revised and clarified over time to ensure that it truly represents a consensus position.

Abstract: Massively Open Online Courses (MOOCs) – Past, Present, and Future

This presentation will explore what it was like when MOOCs were first emerging in 2012 and what we have learned from the experience so far. Today, MOOC providers are increasingly focused on becoming profitable, and this trend is changing both the nature of MOOCs and university relationships with MOOC platform providers. We will also look at how a university can scale the development of MOOCs and use knowledge gained in MOOCs to improve on-campus teaching. Finally, we will look forward at how the MOOC market may change and how MOOC approaches and technologies may ultimately impact campus courses and programs.

Unconstrained JSON-LD Performance Is Bad for API Specs

I am still arguing fiercely with some of my enterprise architect friends whether we should use JSON or JSON-LD to define our APIs. I did some research this morning that I think is broadly applicable so I figure I would share it widely.

You might want to read as background the following 2014 blog post from Manu Sporny, who is one of the architects of JSON-LD:

Here are a few quotes:

I’ve heard many people say that JSON-LD is primarily about the Semantic Web, but I disagree, it’s not about that at all. JSON-LD was created for Web Developers that are working with data that is important to other people and must interoperate across the Web. The Semantic Web was near the bottom of my list of “things to care about” when working on JSON-LD, and anyone that tells you otherwise is wrong. :)

TL;DR: The desire for better Web APIs is what motivated the creation of JSON-LD, not the Semantic Web. If you want to make the Semantic Web a reality, stop making the case for it and spend your time doing something more useful, like actually making machines smarter or helping people publish data in a way that’s useful to them.

In the vein of Manu’s TL;DR above, I will add my own TL;DR for this post:

TL;DR: Using unconstrained JSON-LD to define an API is a colossal mistake.

There is a lot to like about JSON-LD – I am glad it exists. For example, JSON-LD is far better than XML with namespaces, better than XML Schema, and better than WSDL. And JSON-LD is quite suitable for long lived documents that will be statically stored and have data models that slowly evolve over time where any processing and parsing is done in batch mode (perhaps like the content behind Google’s Page Rank Algorithm).

But JSON-LD is really bad for APIs that need sub-millisecond response times at scale. Please stop your enterprise architects from making this mistake just so they gain “cool points” at the enterprise architect retreats.

Update: I removed swear words from this post 4-Apr-2016 and added the word “unconstrained” several places to be more clear. Also I made a sweet web site to show what I mean by “unconstrained JSON-LD” – I called it the JSON-LD API Failground.

Update II: Some real JSON-LD experts (Dave Longley and Manu Sporny) did their own performance tests that provide a lot more detail and better analysis than my own simplistic analysis. Here is a link to their JSON-LD Best Practice: Context Caching – they make the same points as I do but with more precision and detail.

Testing JSON-LD Performance

This is a very simple test simulating the parsing of a JSON-only document versus a JSON-LD document. The code is super-simple. Since JSON-LD requires that the document first be parsed as JSON and then augmented by JSON-LD, to run an A/B performance test we simply turn the additional required JSON-LD step on and off and time it.

This code uses the JSON-LD PHP library from Manu Sporny at:

I use the profile sample JSON-LD for the Product at:

Methodology of the code – it is quite simple:

    require_once "jsonld.php";

    $x = file_get_contents('product.json');
    $result = array();
    for($i=0;$i<1000;$i++) {
       $y = json_decode($x);
       $y = jsonld_compact($y, "");
       $result[] = $y;
    }
To run the JSON-only version simply comment out the `jsonld_compact` call. We reuse the $y variable to make sure we don't double-store any data, and we accumulate the 1000 parsed results in an array to get a sense of whether there is a memory-size difference between JSON and JSON-LD.
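For readers who would rather experiment in Python, the same A/B shape looks roughly like this; the document contents are made up, and the JSON-LD step is left as a pluggable callable (a real run would pass in a JSON-LD library's compact operation, such as pyld's jsonld.compact):

```python
import json
import time

# A stand-in document; the real test parsed the schema.org Product sample.
product_json = json.dumps({"@context": "http://schema.org/",
                           "name": "Widget", "price": "9.99"})

def run(extra_step=None, n=1000):
    """Parse the document n times, optionally applying the JSON-LD step."""
    results = []
    t0 = time.perf_counter()
    for _ in range(n):
        y = json.loads(product_json)   # the JSON-only baseline
        if extra_step is not None:
            y = extra_step(y)          # where jsonld_compact() would be called
        results.append(y)              # keep every result, as the PHP test does
    return time.perf_counter() - t0, results

elapsed, results = run()               # the JSON-only "A" side of the test
```

Timing run() with and without the extra step gives the same on/off comparison as commenting out the PHP call.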

I used `/usr/bin/time` on my MacBook Pro 15 with PHP 5.5 as the test.

Output of the test runs

si-csev15-mbp:php-json-ld-test-02 csev$ /usr/bin/time -l php j-test.php
            0.09 real         0.08 user         0.00 sys
      17723392  maximum resident set size
             0  average shared memory size
             0  average unshared data size
             0  average unshared stack size
          4442  page reclaims
             0  page faults
             0  swaps
             0  block input operations
             6  block output operations
             0  messages sent
             0  messages received
             0  signals received
             0  voluntary context switches
             6  involuntary context switches
    si-csev15-mbp:php-json-ld-test-02 csev$ /usr/bin/time -l php jl-test.php
          167.58 real         4.94 user         0.51 sys
      17534976  maximum resident set size
             0  average shared memory size
             0  average unshared data size
             0  average unshared stack size
          4428  page reclaims
             0  page faults
             0  swaps
             0  block input operations
             0  block output operations
         14953  messages sent
         24221  messages received
             0  signals received
          2998  voluntary context switches
          6048  involuntary context switches

Results by the numbers

Memory usage is equivalent - actually slightly lower for the JSON-LD - that is kind of impressive and probably leads to a small net benefit for long-lived document-style data. Supporting multiple equivalent serialized forms may save space at the cost of processing.

Real time for the JSON-LD parsing is nearly 2000X more costly than JSON - well beyond three orders of magnitude [*]

CPU time for the JSON-LD parsing is about 70X more costly - almost 2 orders of magnitude [*]

[*] Some notes for the "Fans of JSON-LD"

To stave off the obvious objections that will arise from the Enterprise-Architect crowd eager to rationalize JSON-LD at any cost, I will simply put the most obvious reactions to these results here in the document.

  1. Of course the extra order of magnitude increase in real-time is due to the many repeated re-retrievals of the context documents. JSON-LD evangelists will talk about "caching" - this of course is an irrelevant argument because virtually all of the shared hosting PHP servers do not allow caching so at least in PHP the "caching fixes this" is a useless argument. Any normal PHP application in real production environments will be forced to re-retrieve and re-parse the context documents on every request / response cycle.
  2. The two orders of magnitude increase in the CPU time is harder to explain away. The evangelists will claim that a caching solution would cache the post-parsed versions of the document - but given that the original document is one JSON document and there are five context documents - the additional parsing from string to JSON would only explain a 5X increase in CPU time - not a 70X increase in CPU time. My expectation is that even with cached pre-parsed documents the additional order of magnitude is due to the need to loop through the structures over and over, to detect many levels of *potential* indirection between prefixes, contexts, and possible aliases for prefixes or aliases.
  3. A third argument about the CPU time might be that json_decode is written in C in PHP while jsonld_compact is written in PHP – and if jsonld_compact were rewritten in C, merged into the PHP core, and all of the hosting providers around the world upgraded to PHP 12.0, then perhaps the negative performance impact of JSON-LD would be somewhat lessened... "when pigs fly".


Unconstrained JSON-LD should never be used for non-trivial APIs – period. Its out-of-the-box performance is abhorrent.

Some of the major performance failure can be explained away if we could magically improve hosting plans and build the most magical of JSON-LD implementations – but even then, parsing JSON-LD costs over an order of magnitude more than parsing JSON because of the requirement to transform an infinite number of equivalent forms into a single canonical form.

Ultimately it means that if a large-scale operator started using JSON-LD-based APIs heavily to enable a distributed LMS – to the point where the core servers spend more time servicing standards-based API calls than generating UI markup – it would require somewhere between 10 and 100 times more compute power to support JSON-LD than to simply support JSON.

Frankly in the educational technology field - if you want to plant a poison pill in the next generation of digital learning systems - I cannot think of a better poison pill than making interoperability standards using JSON-LD as the foundation.

I invite anyone to blow a hole in my logic - the source code is here:

A Possible Solution

The only way to responsibly use JSON-LD in an API specification is to have a canonical serialized JSON form that is *the* required specification – it can also be valid JSON-LD, but it must be possible to deterministically parse the API material using only JSON, ignoring the @context completely. If there is more than one @context because of extensions, then the prefixes used to represent the contexts other than the @vocab context must also be legislated so that, once again, a predictable JSON-only parse of the document without looking at the contexts is possible.
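What such a deterministic JSON-only check might look like, sketched with hypothetical required fields and a hypothetical list of legislated prefixes:

```python
import json

REQUIRED = {"@type", "name"}            # hypothetical legislated field names
ALLOWED_PREFIXES = {"csev", "x_ext"}    # hypothetical pre-agreed prefixes

def certify(raw):
    """Validate an API payload using only a JSON parse; @context is ignored."""
    doc = json.loads(raw)
    warnings = []
    for key in doc:
        if key.startswith("@") and key != "@type":
            continue                    # @context and friends are never consulted
        prefix, _, _ = key.partition(":")
        if ":" in key and prefix not in ALLOWED_PREFIXES:
            warnings.append(f"unexpected prefix: {prefix}")   # non-fatal
    ok = REQUIRED <= set(doc)           # pass/fail rests on the JSON shape alone
    return ok, warnings

ok, warnings = certify(
    '{"@context": "x", "@type": "Person", "name": "Jane", "zzz:odd": 1}')
```

An unknown prefix produces a warning rather than a failure, matching the "non-fatal" certification principle above.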

It is also then necessary to build conformance suites that validate all interactions for simultaneous JSON and JSON-LD parse-ability. It is really difficult to maintain sufficient discipline – because if a subset of the interoperating applications start using JSON-LD for serialization and de-serialization, it will be really easy to drift away from the "also meeting JSON parse-ability" requirement. Then when those JSON-LD systems interact with systems that use only JSON for serialization and de-serialization, it will get ugly quickly. Inevitably the uninformed JSON-LD advocates will claim they have the high moral ground, refuse to comply with the JSON-only syntax, and tell everyone they should be using JSON-LD libraries instead – and it won't take much of a push for interoperability to descend into finger-pointing hell.

So while this compromise seems workable at the beginning - it is just the Semantic Web/RDF Camel getting its nose under the proverbial tent. Supporting an infinite number of equivalent serialization formats is neither a bug nor a feature - it is a disaster.

If the JSON-LD community actually wants its work to be used outside the "Semantic Web" backwaters – or outside situations where hipsters make all the decisions and never run their code in production – the JSON-LD community should stand up and publish a best practice for using JSON-LD in a way that maintains compatibility with JSON, so that APIs can be interoperable and performant in all programming languages. This document should be titled "High Performance JSON-LD" and be featured front and center when talking about JSON-LD as a way to define APIs.

Face-to-Face @Coursera Office Hours Sun March 20 – 5-6PM at De Boterwaag in The Hague, NL

I will be having face-to-face @Coursera office hours Sunday March 20, 5-6PM, at De Boterwaag Cafe in The Hague, NL.

Cafe Restaurant de Boterwaag
Grote Markt 8A
2511 BG Den Haag

Here is a video from the most recent F2F office hours in Orlando:

Also, here is a Google Hangout we did this week talking about the Capstone:

I hope to see you at office hours in Den Haag. Thanks to Catalina O. for helping me set this up.

Report from the First Apereo Conference (2013)


Note: This sat partially completed in my drafts folder for three years – oops.

I really enjoyed attending the first Open Apereo 2013 conference in San Diego June 2-7, 2013.

There was a palpable sense of joy at the conference. I think many of us had long held a vision of a foundation like Apereo – a “big-tent” organization to support a wide range of open source activities in higher education. The idea was that the more diverse our community became, the more solid and sustainable it would be. In particular we wanted to create an environment where new projects could be quickly formed and, by virtue of being part of Apereo, could draw the nucleus of their leadership from people and organizations already part of Apereo and attending Apereo meetings.

We need to stop and thank those who gave so much to make this a reality. These were three years in which a number of people learned far more about non-profit law than you could imagine. Building something good takes time – but a lot of people are very relieved to have it finished so we can look to the future.

People who stick out for me include: Patty Gertz, Ian Dolphin, Josh Baron, Jens Haeusser, Robert Sherratt, John Lewis, Seth Theriault, and both boards of directors of Sakai and JASIG, as well as the transition committee made up of members from both boards. It was a long and winding road – and the only way to move forward was to be patient.

Sakai in an Apereo-Foundation World

The Sakai-related efforts that are now part of Apereo are much better positioned to make forward progress. In the Sakai Project and Foundation, these efforts were often too intertwined to move forward. We spent too much time trying to come up with one set of priorities, which distracted from evolving our efforts. Here are my observations:

  • The Apereo Open Academic Environment has renamed itself to emphasize that the OAE is very much an independent project exploring next generation approaches to teaching, learning, and collaboration. The OAE team has rewritten much of the core software since the end of 2012 and is moving quickly to a version 1.0 sometime this summer running in production for Marist, Georgia Tech, and Cambridge. Getting a 1.0 project into production is a wonderful milestone and will likely re-kindle interest in the OAE project, growing their interest and resources. Some might say that OAE died and has been reborn – I actually disagree with this notion – OAE has been on a path all along and there were bumps on that path – as the bumps smoothed out the project is moving toward a release nicely.
  • Teaching and Learning SIG – Because this is now an independent entity within Apereo, it is a natural place to look across the Sakai CLE and OAE as well as at emerging efforts (below). The T/L group will also continue the TWISA (Teaching with Sakai Innovation Awards) and look to expand the effort. This group serves as a natural gathering point for the faculty and students most interested in applying the ideas of openness to teaching and learning. I think that this group will make sure that the end-users of our software have a place at the conference. I also think that this group can nurture interest in areas like Open Educational Resources (OER), and if there is an interest in developing practice or software around OER – Apereo might be a great place to incubate that work.
  • The WAD Portfolio Effort – Thanks to the efforts of people like Janice Smith, Shoji Kajita, Alan Berg, and many others, there is continued interest in open source portfolio solutions. The current effort is a pre-incubation group working together on a product they call WAD (I don’t know what it stands for). The idea for WAD is to build a portfolio system outside of the LMS and find ways to do a deep integration to pull out LMS data as needed. In many ways WAD feels like a throwback to the OSP 1.0 times, where practicing pedagogists kept themselves very close to the emerging development efforts and gently steered the process. I am very excited to feel the energy in this group that being part of Apereo makes possible. It was exciting to see the re-engagement of some of the people who brought so much passion to OSP in the early days.
  • The Learning Analytics Effort – There has been a small group of highly interested folks within the Sakai community interested in learning analytics activities for quite some time now. This has resulted in tools like SiteStats in Sakai. But as we gain understanding of the real approach to learning analytics, it becomes increasingly clear that analytics work must be done outside of the LMS with (again) many deep integration points. Add to this the TinCan support in Sakai (and soon uPortal and OAE), and it paves the way to take real steps in a separate software development project that is just about analyzing learning data. This group is also pre-incubation, but interest appears to be building in shared open source software to analyze learning data from many sources.
  • Sakai CLE – I will talk more about this later in a separate section. June 2012 was really when the CLE started to re-emerge after flying under the radar of Sakai Foundation politics since about 2008. The 2.9 release (November 2012) and 2.9.2 release (May 2013) have greatly energized the community. Leading schools and commercial affiliates have enthusiastically jumped onto the bandwagon, and many have converted or are converting to the 2.9 release. The 2.9 release has enough “good stuff” to make moving to the latest release attractive. We as a community are reducing our installed version skew, and that is very important for long-term sustainability. If we put out a version and no one installs it – it is game over. Once the Board issues around the CLE/OAE were resolved, we could focus all our limited resources on moving the Sakai CLE forward.

In addition to these efforts, there were many other ideas bouncing around the hallways, breaks, and pubs. What was nice was to say over and over – “Hey, that could be a new Apereo working group!” What was most exciting for me was that these working groups would have had a tough time being part of Sakai, with a foundation dedicated to one (or two) core products and far too much debate about what should get the “resources”. In Apereo, with independent projects large and small and a laissez-faire approach from the foundation, each project builds its own small subcommunity and finds its own resources. It is amazing how this Sakai+JASig community has so many ideas about what to do next – but when we were the “Sakai Foundation”, the question of “Is that Sakai or not?” kept most of these nascent efforts from gaining forward momentum. Within Apereo, there is little to slow a small and dedicated group from moving an idea forward.

The Sakai CLE

I think that this kind of expanding scope in the area of higher education open source efforts will be the hal

[ed. Note: the original draft stopped here in mid-word]

An IMS Proposal – Eliminate all use of JSON-LD

I sent the following message to IMS because I am really unhappy with IMS's use of JSON-LD in our JSON-based specifications. Apologies in advance to the fans of RDF. We all hoped that JSON-LD would give us the best of both worlds – but it seems like it is the worst of all worlds. I don’t expect to win this argument – because the people making the decisions are not the people writing the code and feeling the unneeded pain caused by JSON-LD.

Hi all,

I would like to formally propose that we no longer use JSON-LD in any IMS specification going forward. I would like to also propose that we formally standardize prefixes for all specifications we have issued that use JSON-LD so implementations can legitimately parse our data models using JSON reliably.

Furthermore we would alter certifications for JSON-LD specs to generate and accept JSON instead of JSON-LD.

My reasoning is that we are far outside the norm of the modern-day REST web services world – and while there are fans of JSON-LD – they are the same folks that loved RDF and just found a new place to push their ideas.

Our standards are one domain of interest, and yet our use of JSON-LD actually tends to create silos of data models. If we compare the JSON-LD for LTI 2.0 and the JSON-LD for ContentItem – they are completely distinct namespaces, and things like the “contact” structure – which *should be the same* – are actually completely different. Our dysfunctional use of JSON-LD *discourages* the sharing of data model elements between different specifications.

And if you take a look at CASA using JSON Schema – it is even worse. Simple things like contact information again are given completely different data models.

And as I am starting to write code that crosses these spec boundaries, I am finding that it is far less important to have globally unique identifiers for the notion of a contact data structure – what matters instead is a contact data structure that we can share and reuse across many specifications.

I think that the right approach is to go straight to a namespaced OO approach to model our underlying data objects, and then when we build a new spec and want to pull in the org.imsglobal.shared.Contact object – we just pull in the object and the JSON serialization is obvious.
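The idea can be sketched in a few lines; the Contact fields here are invented for illustration, and Python stands in for whatever language implements the spec:

```python
import json
from dataclasses import dataclass, asdict

# One shared data object -- think org.imsglobal.shared.Contact --
# reused by every spec that needs contact information.
@dataclass
class Contact:
    name: str
    email: str

def serialize(contact: Contact) -> str:
    # With a single shared object, the JSON serialization is obvious
    # and identical across specifications.
    return json.dumps(asdict(contact))

wire = serialize(Contact("Jane Doe", "jane@example.com"))
```

Every specification that imports the shared object serializes it identically, so the "contact" JSON never diverges between specs.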

As we move away from document-styled specs to API-styled specs – it would seem like we just should move towards defining our interoperable data formats in a way that makes the development of APIs very simple and straightforward instead of wasting so much effort to achieve some dream of future RDF nirvana.

I now have samples of how I model these JSON documents across services – and I can tell you that (a) we are woefully inconsistent across our specs and JSON-LD is partially *causing* the problem, (b) anything that has to do with properly parsing JSON-LD is really painful given the lack of real toolset support, and (c) it is frustrating how the certification suites increasingly make things slightly harder by randomly throwing in JSON-LD just to break those who only want to parse JSON – so the practical solution is to reverse engineer the certification patterns and build lame JSON parsers instead of really using JSON-LD tool chains.

It is high time to walk away from JSON-LD going forward.

Looking forward to your comments.


My MOOC Approach / Pedagogy

I was recently asked to come up with an outline of how I think about building a MOOC. In particular, I have been slowly building a Web Applications MOOC based on – starting from my classroom and moving through a MOOC, back to the classroom, and then to an innovative on-campus curriculum. This, in a sense, is my master plan for improving education through MOOCs. These are abstract talking points. Perhaps if you want to hear more, your campus could retain me as a consultant – or this might be a good abstract for a keynote or workshop :)

Before the MOOC

Organize/clean your content – understand the topic sequence
Build auto-gradable LTI assignments – test test test
Use residential students as QA – rapid feedback

From the Classroom to the MOOC

Expand time scale – roughly 2x
Eliminate rigor for rigor sake
All assessment is low-stakes and leads to learning
Assessments as puzzles rather than precise measures
Automate automate automate
Recall that LTI tools can be reused outside MOOC platforms
Use CloudFlare to scale static content cheaply
The magic of 5-week classes and 3-week cohorts

From the MOOC to the classroom

Use recordings as assets not lecture replacements
Increase the pace – teach more – make students responsible
Use auto-graded assignments but add manual grading aspects
Do old-school things impossible in a MOOC – like paper exams
Improve MOOC assessments – use F2F students as QA

Impacting other teachers and students broadly

Open Educational Resources – free E-Resources
Low-cost printed textbooks – Amazon CreateSpace
Use CloudFlare to scale static content cheaply
Package materials (including auto-graders) as self-service web site
Get materials on github – allow others to fork and track

Impacting your institution and higher education

Apply the 5-week / 3-week magic on campus for skill-like education
Take advantage of on-campus environment and give better student support

Git cherry-pick a sequence of commits from one branch to another

I finally figured out how to pull a sequence of commits from one branch to another using git’s cherry-pick feature.

In Sakai, when we are preparing for a release, we make a release branch (e.g. 11.x) and then continue to move master forward, cherry-picking from master back to the release branch. We looked at git-flow and decided against it because most of our adopters deploy from source rather than binary artifacts, so our release branches live 4-5 years in production and we cannot have master “jumping versions”.

So the question is how to cherry-pick a set of commits to a folder from master back to the release branch. This is how I do it. First check out master and go into the folder of interest.

git checkout master
cd basiclti
git log .

Capture the log until you find the commit that made the branch.

commit 8cc25781d632e48bfae65009b57c6391d074a3d0
Author: Charles Severance
Date: Mon Feb 29 23:03:28 2016 -0500

SAK-30418 - Initial commit of IMS Content Item

commit 791b12634164003b7c1a59747c28ec9896fc0885
Author: Charles Severance
Date: Sun Feb 28 23:26:51 2016 -0500

SAK-30372 - Fix small mistake in the CASA output

commit 13d21ccd26901c5186a709be27ede499d7de65fc
Author: Charles Severance
Date: Sat Feb 27 11:27:12 2016 -0500

SAK-30372 - Switch the implementation to Jackson

Then I cut and paste the entries in reverse order and make a shell script by changing some bits to a comment and changing the commits to “git cherry-pick” – the script ends up as follows:

# To revert, if some cherry-picks go bad
# git checkout 11.x (to be extra sure)
# git reset --hard 11.x

# After all is verified
# git push origin 11.x

# Make sure to be in 11.x first
git checkout 11.x

git cherry-pick aff5c0343b419fda125d9c217d340bb660929c3c
# Author: Charles Severance
# Date: Fri Feb 19 09:49:23 2016 -0500
# SAK-30308 - Change the groupid and artifact id back

git cherry-pick b6acdbee2bd9fd55f8a77de56732582a7eaa08ae
# Author: Charles Severance
# Date: Tue Feb 23 16:17:14 2016 -0500
# SAK-30362 - Fix small issues.


Again – the script lists the commits in reverse order, so you are cherry-picking from oldest to newest. I leave the commit details in as comments for my own sanity.

I like having it as a script in case you need to run it more than once.
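One more trick: when the commits you want are contiguous, git can pick the whole range in a single command. Here is a self-contained demonstration in a throwaway repository (branch names, files, and messages are made up to mirror the workflow above):

```shell
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "demo@example.com" && git config user.name "Demo"

echo base > file.txt && git add file.txt && git commit -qm "base"
git branch 11.x                                # release branch forks here

echo one >> file.txt && git commit -qam "SAK-1 first fix"
echo two >> file.txt && git commit -qam "SAK-2 second fix"
OLDEST=$(git rev-parse HEAD~1)                 # first commit to pick
NEWEST=$(git rev-parse HEAD)                   # last commit to pick

git checkout -q 11.x
# OLDEST^..NEWEST includes OLDEST itself, applied oldest-first
git cherry-pick "$OLDEST^..$NEWEST"
```

The generated reverse-order script remains handy when the commits you need are scattered rather than contiguous.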

Hope this helps someone.

Implementing the Next Generation Digital Learning Environment – BOF

The NGDLE has been proposed as a “learning ecosystem” where everything fits together beautifully like Lego blocks. There will be lots of learning management systems and thousands of easily installed applications. And we will have electric cars that never need recharging and use anti-gravity for a very smooth ride!

But seriously, how will we ever get there, and how can we ensure that open source leads the way to this bold new future? What is the grand plan and what are the first steps? Do we have a better understanding of how open source plays in the market? How do we ensure sustainability of open source efforts from the beginning? Have we learned any lessons from the past decade of building world-class open source applications – and how can that experience reduce the number of mistakes and mis-steps as we go forward?

This BOF proposal has been submitted to the Open Apereo 2016 conference in New York City on the 24th and 25th May 2016.