October 1: Moving the Tsugi GitHub Repositories

I am just back from a successful trip to South Korea where I talked a lot about the NGDLE, Sakai, and Tsugi:

http://www.slideshare.net/csev/building-the-next-generation-teaching-and-learning-environment-66291838

I focused a lot on the new Tsugi use case of being a single-course LMS that is integrated into a single-site OER materials / course site.

https://www.py4e.com/
https://www.wa4e.com/

There was a good bit of interest from technically minded teachers and folks from educational technology centers. I made it clear that Tsugi was not yet trivial to install and run – but on a good path to be ready for teachers to build web sites in 2017.

But some want to get started now. And so on October 1, I will be moving the main Tsugi repositories from

https://github.com/csev

to

https://github.com/tsugiproject
https://github.com/tsugitools

The core bits (PHP, Node, and Java) will all move into “tsugiproject” and the tools will move into “tsugitools”.

For folks who have been using the “csev” repositories, GitHub is good about forwarding requests when repositories are renamed or moved.

I am sure this will be a bit of a disruption – but probably better now than later.

If you have any issues with this or suggestions as to how to best do it, let me know.

Abstract: Building the Next Generation Digital Learning Environment using Tsugi

This presentation will give an overview of the Tsugi project and applications of the Tsugi software in building a distributed approach to teaching and learning tools and content. One company involved in the Internet of Things claims that “The next big thing will be a lot of small things”. If we apply this logic to the educational technology marketplace, an essential element needed to achieve the NGDLE is to reduce the granularity of the learning content and applications to the individual teacher or even individual student. Tsugi is a 100% open source effort that is part of the Apereo Foundation.

It is not sufficient to simply make a bunch of small web-hosted things and claim we have “implemented” the NGDLE. We must be able to coherently search, find, re-construct, and re-combine those “small pieces” in a way that allows teaching and learning to happen. To do this, each of the learning application and content providers must master detailed interoperability standards that allow us to “mash up” and bring those distributed and disparate elements back together. While much has been said about the ultimate shape and structure of the NGDLE, and there are many current and emerging interoperability standards, there is little effort to build and train providers with usable technology that will empower thousands or hundreds of thousands of people to build and share the applications and content that will populate the new learning ecosystem.

In effect, we need to build the educational equivalent of the Apple App Store – except that it needs to be open and extensible and not depend on a single vendor intent on maximizing shareholder value. This presentation will show how the Tsugi project is researching how this works in actual practice. Tsugi is a 100% open source, production-ready application and content hosting system that is simple enough to allow interoperable and pluggable learning applications or learning content to be built, hosted, deployed, and shared by individuals or organizations of various sizes.

Dynamic .htaccess to Deal with URL Rewriting: mod_rewrite and FallbackResource

As I build Tsugi, I want to ship a decent, working .htaccess in the folders that need one. My most typical use case is mapping all the URLs in a folder to a single file like index.php.

There are two good ways to do this. The old standby is a long set of mod_rewrite rules. The newer, much more elegant trick is to use FallbackResource from mod_dir, available in later versions of Apache 2.2.

The problem is that clever hosting providers upgrade to the new Apache and then figure they can drop mod_rewrite. So you know how to do it either way, but you don’t have a good way to decide at runtime which approach to use.

This is my approach that I use in Tsugi when I want to map all URLs to one file:

    <IfModule mod_rewrite.c>
        RewriteEngine on

        # Record whether the request came in over HTTPS in the protossl variable
        RewriteRule ^ - [E=protossl]
        RewriteCond %{HTTPS} on
        RewriteRule ^ - [E=protossl:s]

        # Forbid access to "hidden" files and folders whose names start with a period
        RewriteRule "(^|/)\." - [F]

        # Route everything that is not an existing file or directory to index.php
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !=/favicon.ico
        RewriteRule ^ index.php [L]
    </IfModule>
    
    <IfModule !mod_rewrite.c>
        # If mod_rewrite is not available, fall back to mod_dir's
        # FallbackResource (Apache 2.2.16 and later)
        FallbackResource index.php
    </IfModule>

It is not perfect, but it deals reasonably with things as they move forward: if mod_rewrite is there, use it (it works in later Apache versions as well), and if not, hope that FallbackResource is available.

Now of course there are some Apache versions and setups where this fails – but on average, as Apache installations get upgraded, things get simpler, and over time the mod_rewrite code will simply stop activating.

I also added this information to a Stack Overflow question.

Abstract: Building the Next Generation Digital Learning Environment (NGDLE)

The concept of a Learning Management System is nearly 20 years old. For the most part, modern-day Learning Management Systems are simply well-developed versions of those first learning systems developed at universities and commercialized through companies like Blackboard, WebCT, and Angel. Since the early LMS systems were developed for a single organization and built as a single application, it was natural for them to keep adding more functionality to that single application. Each vendor added proprietary formal expansion points to their LMS, such as Building Blocks and PowerLinks. The concept of a single expansion point across multiple LMS systems was proposed by the Sakai project in 2004. The idea evolved over the next few years to become the IMS Learning Tools Interoperability (LTI) specification, released in 2010. LTI provided a basic expansion point across the whole LMS marketplace. LTI greatly expanded the number of applications that could be integrated into an LMS – but those integrations were naturally limited by the simplicity of the early versions of LTI. In this talk we will look at the standards activities over the past six years that have been laying the groundwork to move from simple plug-in integrations to an open multi-vendor learning ecosystem where the LMS is just one part of that ecosystem. Many are now calling this new structure – a broad and interoperable market for educational software – the Next Generation Digital Learning Environment (NGDLE). We will look at the work that has been done and outline what is left to do to deliver an open learning ecosystem.

Sakai 11.1 maintenance is released!

(This is from an email sent by Neal Caidin)

Dear Community,

I’m pleased to announce on behalf of the worldwide community of participants that Sakai 11.1 is released and available for downloading at

http://source.sakaiproject.org/release/11.1/

Sakai 11.1 has 146 improvements [2a, 2b, 2c] in place, including:

  • 43 fixes for responsive design (Morpheus)
  • 36 fixes in quizzes (Samigo)
  • 28 fixes in gradebook (aka GradebookNG)
  • 13 fixes in Lessons

Other areas improved include:

  • Assignments
  • Dropbox
  • Forums
  • Membership
  • Portal
  • Preferences
  • Profile
  • Resources
  • Signup
  • Site Info
  • Statistics
  • Syllabus
  • Web Services

No new security issues were fixed in 11.1.

Beyond MOOCs: Open Education at Scale (Abstract)

Here is a draft abstract for an upcoming keynote – comments welcome.

Massively Open Online Course (MOOC) providers like edX and Coursera have revealed an almost unlimited desire for education for people of all ages and all walks of life. While these pioneering efforts have achieved much, these learning opportunities are still in relatively short supply. Each course is costly to produce, deploy, and support. These costs are a rate limiting factor in scaling online education to the point where we begin meeting the much larger demand for high quality, plentiful and relevant education worldwide. We need to build a Next Generation Digital Learning Environment (NGDLE) that makes it so any teacher can build and efficiently deploy their own open courses to a worldwide audience. In this presentation, we will look at how we can build an open source infrastructure that is based on open standards and open content that will make creating an open education experience within the reach of any teacher, anywhere in the world. We will look at how educational technology will need to change to reduce the cost to produce, share, and even remix online educational content.

Live, Online, Graduation Ceremony for Python for Everybody Specialization Capstone – Wed June 8 at 9AM

We are reaching the end of the first session of the Python Specialization capstone. We have two things planned to celebrate the time and commitment of the students who have made it through five classes and 25 weeks of work and completed the capstone.

First, the University of Michigan School of Information will be sending signed paper certificates, a temporary tattoo, and a waiver of the application fee for the University of Michigan School of Information Masters Program to everyone who completed the Capstone. We will send the packet to every student who completed the first capstone, regardless of geography.

Second, we will be the first MOOC to have an online, live, graduation ceremony and we want to invite anyone to watch our celebration. The graduation ceremony will be held:

Wednesday June 8, at 9:00AM Eastern time

Location: http://live.dr-chuck.com/

The URL will be ready Tuesday night. The ceremony will be streamed live on YouTube using Google Hangouts and then later a recording will be uploaded to Coursera for those without access to YouTube.

The agenda for the graduation ceremony will be to (1) thank those who have worked behind the scenes to make the course successful, (2) hear a short commencement speech from my colleague Colleen van Lent (who also teaches the Web Design for Everybody specialization), and then (3) read the students’ names as we show each student’s picture.

There are 1165 students who have completed the capstone – but participation in the ceremony is optional.

If you have completed the capstone session 1 and want to participate in the graduation ceremony, go back to the course site and read the instructions for joining the ceremony that we sent you in email.

Thanks to everyone who made this possible and congratulations to the graduates on finishing the specialization.

I hope to see you at graduation.

More Tsugi Refactoring – Removal of the mod folder

I completed the last of many Tsugi refactoring steps yesterday when I moved the contents of the “mod” folder into its own repository. The goal of all this refactoring was to get to the point where checking out the core Tsugi repository does not include any end-user tools – just the administrator, developer, key management, and support capabilities (LTI 2, CASA, ContentItem Store). The key is that this console will also be used by the Java and NodeJS implementations of Tsugi until we build the console functionality in each of those languages, so it made no sense to drag in a bunch of PHP tools if you were just going to use the console. I wrote a bunch of new documentation showing how the new “pieces of Tsugi” fit together:


https://github.com/csev/tsugi/blob/master/README.md

This means that as of this morning if you do a “git pull” in your /tsugi folder – the mod folder will disappear. But have no fear – you can restore it with the following steps:

cd tsugi
git clone https://github.com/csev/tsugi-php-mod mod

And your mod folder will be restored. You will now have to do separate git pulls for both Tsugi and the mod folder.

I have all this in solid production (with the mod folder restored as above) with my Coursera and on-campus University of Michigan courses. So I am pretty sure it holds together well.

This was the last step of a multi-step refactor to modularize this code into multiple repositories, both to prepare for Tsugi in multiple languages and to make it easier to plug Tsugi into various production environments.

Ring-Fencing JSON-LD and Making JSON-LD Parseable Strictly as JSON

My debate with my colleagues[1, 2] about the perils of unconstrained JSON-LD as an API specification is coming to a positive conclusion. We have agreed to the following principles:

  • Our API standard is a JSON standard and we will constrain our JSON-LD usage so as to make it so that the API can be deterministically produced and consumed using *only* JSON parsing libraries. During de-serialization, it must be possible to parse the JSON deterministically using a JSON library without looking at the @context at all. It must be possible to produce the correct JSON deterministically and add a hard-coded and well understood @context section that does not need to change.
  • There should never be a requirement in the API specification or in our certification suite that forces the use of JSON-LD serialization or de-serialization on either end of the API.
  • If some software in the ecosystem covered by the standard decides to use JSON-LD serializers or de-serializers and cannot produce the canonical JSON form for our API – that software will be forced to change and generate the precise constrained JSON (i.e., we will ignore any attempts to coerce the rest of the ecosystem using our API into accepting unconstrained JSON-LD).
  • Going forward we will make sure that the sample JSON we publish in our specifications will always be in JSON-LD Compacted form, with either a single @context or multiple contexts with the default context included as “@vocab”, all fields in the default context having no prefixes, and all fields outside the default @context having simple and predictable prefixes.
  • We are hopeful and expect that Compacted JSON-LD is so well defined in the JSON-LD W3C specification that all implementations in all languages that produce compact JSON-LD with the same context will produce identical JSON. If for some strange reason, a particular JSON-LD compacting algorithm starts producing JSON that is incompatible with our canonical JSON – we will expect that the JSON-LD serializer will need changing – not our specification.
  • In the case of extending the data model, the prefixes used in the JSON will be agreed upon to maintain predictable JSON parsing. If we cannot pre-agree on the precise prefixes themselves then at least we can agree on a convention for prefix naming. I will recommend they start with “x_” to pay homage to the use of “X-” in RFC-822 and friends.
  • As we build API certification mechanisms, we will check and validate incoming JSON to ensure that it is valid JSON-LD and issue a warning for any flawed JSON-LD, but consider that non-fatal and use only the deterministic JSON parsing of the content to judge whether or not an implementation passes certification.

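As a concrete sketch of the first principle – produce the JSON deterministically and attach a hard-coded @context – here is a minimal Python example. The field names and the csev prefix are illustrative, borrowed from the sample JSON later in this post; none of this is normative API code:

```python
import json

# Hard-coded, well-understood @context - it is a constant, never computed
CANONICAL_CONTEXT = {
    "@vocab": "http://schema.org/",
    "csev": "http://dr-chuck.com/",   # illustrative extension prefix
}

def serialize_person(name, job_title, debug):
    """Emit the canonical JSON form directly - no JSON-LD library involved."""
    doc = {
        "@context": CANONICAL_CONTEXT,
        "@type": "Person",
        "name": name,
        "jobTitle": job_title,
        "csev:debug": debug,          # extension field with its agreed prefix
    }
    return json.dumps(doc, indent=2, sort_keys=True)

print(serialize_person("Jane Doe", "Professor", "42"))
```

Because the @context is a constant attached after the fact, the producer side never needs a JSON-LD serializer at all – the output is already in the compacted form the principles require.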
It is our hope that for the next 3-5 years we can rely on JSON-only infrastructure while laying the groundwork for a future set of more elegant and expandable APIs using JSON-LD, once performance and ubiquity concerns around JSON-LD are addressed.

Some Sample JSON To Demonstrate the Point

Our typical serialization starts with the short form for a single default @context as in this example from the JSON-LD playground:

{
  "@context": "http://schema.org/",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Professor",
  "telephone": "(425) 123-4567",
  "url": "http://www.janedoe.com"
}

But let’s say we want to extend this with a field from http://dr-chuck.com/ – the @context would need to switch from a single string to an object that maps prefixes to IRIs, as shown below:

{
  "@context": {
    "@vocab": "http://schema.org/",
    "csev": "http://dr-chuck.com/"
  },
  "@type": "Person",
  "url": "http://www.janedoe.com",
  "jobTitle": "Professor",
  "name": "Jane Doe",
  "telephone": "(425) 123-4567",
  "csev:debug" : "42"
}

If you compact this with a single context of http://schema.org/ – all the extensions get expanded:

{
  "@context": "http://schema.org/",
  "type": "Person",
  "http://dr-chuck.com/debug": "42",
  "jobTitle": "Professor",
  "name": "Jane Doe",
  "telephone": "(425) 123-4567",
  "schema:url": "http://www.janedoe.com"
}

The resulting JSON is tacky and inelegant. If on the other hand you compact with this context:

{
  "@context": {
    "@vocab" : "http://schema.org/",
    "csev" : "http://dr-chuck.com/"
  }
}

You get JSON that is succinct and deterministic, with predictable prefixes – and, minus the context, it looks like clean JSON that one might design even without the influence of JSON-LD.

{
  "@context": {
    "@vocab": "http://schema.org/",
    "csev": "http://dr-chuck.com/"
  },
  "@type": "Person",
  "csev:debug": "42",
  "jobTitle": "Professor",
  "name": "Jane Doe",
  "telephone": "(425) 123-4567",
  "url": "http://www.janedoe.com"
}

What is beautiful here is that when you use @vocab plus extension prefixes as the @context, our “canonical JSON serialization” can be read by JSON-LD parsers and produced deterministically by a JSON-LD compact operation.

In a sense, what we want for our canonical serialization is the output of a jsonld_compact operation – and if you were to run the resulting JSON through jsonld_compact again, you would get the exact same JSON.

Taking this approach – pre-agreeing on the official contexts, on all prefixes for those official contexts, and on a prefix naming convention for any and all extensions – means we should be able to use pure-JSON libraries to parse the JSON while ignoring the @context completely.
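That pure-JSON parsing can be sketched in a few lines of Python – a hypothetical consumer that ignores @context entirely and relies only on the agreed prefix convention (the csev prefix and fields come from the sample JSON above):

```python
import json

# The canonical serialization from the example above
incoming = """{
  "@context": { "@vocab": "http://schema.org/", "csev": "http://dr-chuck.com/" },
  "@type": "Person",
  "csev:debug": "42",
  "jobTitle": "Professor",
  "name": "Jane Doe",
  "telephone": "(425) 123-4567",
  "url": "http://www.janedoe.com"
}"""

def parse_canonical(text):
    """Parse the canonical JSON form, never looking inside @context."""
    data = json.loads(text)
    data.pop("@context", None)          # hard-coded upstream - safe to discard
    core = {}                           # fields in the default @vocab context
    extensions = {}                     # prefixed extension fields
    for key, value in data.items():
        if ":" in key and not key.startswith("@"):
            extensions[key] = value     # e.g. "csev:debug"
        else:
            core[key] = value
    return core, extensions

core, extensions = parse_canonical(incoming)
print(core["name"], extensions["csev:debug"])   # Jane Doe 42
```

Nothing here touches a JSON-LD library; a consumer like this keeps working no matter which compliant producer generated the document, which is exactly the point of the constrained form.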

Conclusion

Comments welcome. I expect this document will be revised and clarified over time to ensure that it truly represents a consensus position.

Abstract: Massively Open Online Courses (MOOCs) – Past, Present, and Future

This presentation will explore what it was like when MOOCs were first emerging in 2012 and talk about what we have learned from the experience so far. Today, MOOC providers are increasingly focusing on becoming profitable and this trend is changing both the nature of MOOCS and university relationships with MOOC platform providers. Also, we will look at how a university can scale the development of MOOCs and use knowledge gained in MOOCs to improve on-campus teaching. We will also look forward at how the MOOC market may change and how MOOC approaches and technologies may ultimately impact campus courses and programs.