Report from the First Apereo Conference (2013)

Note: This sat partially completed in my draft folder for three years – oops.

I really enjoyed attending the first Open Apereo 2013 conference in San Diego June 2-7, 2013.

There was a palpable sense of joy at the conference. I think many of us had long hoped that a foundation like Apereo would be created as a "big-tent" organization to support a wide range of open source activities in higher education. The idea was that the more diverse our community became, the more solid and sustainable it would be. In particular we wanted to create an environment where new projects could be quickly formed and, by virtue of being part of Apereo, those projects could draw the nucleus of their leadership from people and organizations already part of Apereo and attending Apereo meetings.

We need to stop and thank those who gave so much to make this a reality. This took three years, during which a number of people learned far more about non-profit law than you could imagine. Building something good takes time – but a lot of people are very relieved to have it finished so we can look to the future.

People who stick out for me include Patty Gertz, Ian Dolphin, Josh Baron, Jens Haeusser, Robert Sherratt, John Lewis, Seth Theriault, and both the Sakai and JASIG boards of directors, as well as the transition committee made up of members from both boards. It was a long and winding road – and the only way to move forward was to be patient.

Sakai in an Apereo Foundation World

The Sakai-related efforts that are now part of Apereo are in a much better position to make forward progress. Within the Sakai Project and Foundation, these efforts were often too intertwined to move forward. We spent too much time trying to come up with a single set of priorities, which distracted us from evolving each effort. Here are my observations:

  • The Apereo Open Academic Environment has renamed itself to emphasize that the OAE is very much an independent project exploring next generation approaches to teaching, learning, and collaboration. The OAE team has rewritten much of the core software since the end of 2012 and is moving quickly to a version 1.0 sometime this summer, running in production for Marist, Georgia Tech, and Cambridge. Getting a 1.0 release into production is a wonderful milestone and will likely re-kindle interest in the OAE project, growing its community and resources. Some might say that OAE died and has been reborn – I disagree with this notion. OAE has been on a path all along, and there were bumps on that path – as the bumps smoothed out, the project moved nicely toward a release.
  • Teaching and Learning SIG – Because this is now an independent entity within Apereo, it is a natural place to look across the Sakai CLE and OAE as well as at emerging efforts (below). The T/L group will also continue the TWISA (Teaching with Sakai Innovation Awards) and look to expand that effort. This group serves as a natural gathering point for the faculty and students most interested in applying the ideas of openness to teaching and learning. I think that this group will make sure that the end-users of our software have a place at the conference. I also think that this group can nurture interest in areas like Open Educational Resources (OER), and if there is interest in developing practice or software around OER, Apereo might be a great place to incubate that work.
  • The WAD Portfolio Effort – Thanks to the efforts of people like Janice Smith, Shoji Kajita, Alan Berg, and many others, there is continued interest in open source portfolio solutions. The current effort is a pre-incubation group working together on a product they call WAD (I don't know what it stands for). The idea for WAD is to build a portfolio system outside of the LMS and find ways to do a deep integration to pull out LMS data as needed. In many ways WAD feels like a throwback to the OSP 1.0 days, when practicing pedagogists kept themselves very close to the emerging development efforts and gently steered the process. I am very excited to feel the energy in this group that being part of Apereo makes possible. It was exciting to see the re-engagement of some of the people who brought so much passion to OSP in the early days.
  • The Learning Analytics Effort – There has been a small group of highly interested folks within the Sakai community working on learning analytics for quite some time now. This has resulted in tools like SiteStats in Sakai. But as we gain understanding of the right approach to learning analytics, it becomes increasingly clear that analytics work must be done outside of the LMS with (again) many deep integration points. Add to this the TinCan support in Sakai (and soon uPortal and OAE), and it paves the way to take real steps toward a separate software development project focused solely on analyzing learning data. This group is also pre-incubation, but interest is building in shared open source software to analyze learning data from many sources.
  • Sakai CLE – I will talk more about this later in a separate section. June 2012 was really when the CLE started to re-emerge after flying under the radar of Sakai Foundation politics since about 2008. The 2.9 release (November 2012) and 2.9.2 release (May 2013) have greatly energized the community. Leading schools and commercial affiliates have enthusiastically jumped onto the bandwagon and many have converted or are converting to the 2.9 release. The 2.9 release has enough "good stuff" to make it attractive to move to the latest release. We as a community are reducing our installed version skew, and that is very important for long-term sustainability. If we put out a version and no one installs it, it is game over. Now that the Board issues around the CLE/OAE have been resolved, we can focus all our limited resources on moving the Sakai CLE forward.

In addition to these efforts, there were many other ideas bouncing around the hallways, breaks, and pubs. What was nice was to be able to say over and over, "Hey, that could be a new Apereo working group!" What was most exciting for me was that these working groups would have had a tough time being part of Sakai, with a foundation dedicated to one (or two) core products and far too much debate about what should get the "resources". In Apereo, with independent projects large and small and a laissez-faire approach from the foundation, each project builds its own small subcommunity and finds its own resources. It is amazing how many ideas this Sakai+JASig community has about what to do next – but when we were the "Sakai Foundation", the question of "Is that Sakai or not?" kept most of these nascent efforts from gaining forward momentum. Within Apereo, there is little to slow a small and dedicated group from moving an idea forward.

The Sakai CLE

I think that this kind of expanding scope in the area of higher education open source efforts will be the hal

[ed. Note: the original draft stopped here in mid-word]

An IMS Proposal – Eliminate all use of JSON-LD

I sent the following message to IMS because I am really unhappy with IMS's use of JSON-LD in our JSON-based specifications. Apologies in advance to the fans of RDF. We all hoped that JSON-LD would give us the best of both worlds – but it seems like it is the worst of all worlds. I don't expect to win this argument – because the people making the decisions are not the people writing the code and feeling the unneeded pain caused by JSON-LD.

Hi all,

I would like to formally propose that we no longer use JSON-LD in any IMS specification going forward. I would like to also propose that we formally standardize prefixes for all specifications we have issued that use JSON-LD so implementations can legitimately parse our data models using JSON reliably.

Furthermore we would alter certifications for JSON-LD specs to generate and accept JSON instead of JSON-LD.
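
To make this concrete, here is a minimal sketch of what "just parse the JSON" looks like in practice. The payload is invented for illustration – the context URL and field names are loosely modeled on LTI 2.0 but are not copied from any spec. The point is that if the key names are fixed by the specification, a consumer can read them with a stock JSON parser and ignore the @context entirely.

import json

# Hypothetical, illustrative payload - not copied from any IMS specification.
payload = '''
{
  "@context": "http://example.com/contexts/lti-toolproxy",
  "@type": "ToolProxy",
  "lti_version": "LTI-2p0",
  "tool_profile": { "product_instance": { "guid": "example-guid" } }
}
'''

# If the prefixes/keys are standardized, this is all an implementation needs:
doc = json.loads(payload)
print(doc["lti_version"])
print(doc["tool_profile"]["product_instance"]["guid"])

The fragility today is that a JSON-LD @context is allowed to remap those keys, so reading them directly is only reliable if we standardize the prefixes as proposed above.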

My reasoning is that we are far outside the norm of the modern-day REST web services world – and while there are fans of JSON-LD – they are the same folks that loved RDF and just found a new place to push their ideas.

Our standards cover a single domain of interest, and yet our use of JSON-LD actually tends to create silos of data models. If we compare the JSON-LD for LTI 2.0 and the JSON-LD for ContentItem – they are completely distinct namespaces, and things like the "contact" structure – which *should be the same* – are actually completely different. Our dysfunctional use of JSON-LD *discourages* the sharing of data model elements between different specifications.

And if you take a look at CASA using JSON Schema – it is even worse. Simple things like contact information again are given completely different data models.

And as I am starting to write code that crosses these spec boundaries, I am finding that it is far less important to have globally unique identifiers for the notion of a contact data structure than to have a contact data structure that we can share and reuse across many specifications.

I think that the right approach is to go straight to a namespaced OO approach to model our underlying data objects and then when we build a new spec and want to pull in the org.imsglobal.shared.Contact object – we just pull in the object and then the JSON serialization is obvious.
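
As a rough illustration of the namespaced OO idea (a sketch only – the object and field names below are invented, apart from the org.imsglobal.shared.Contact name used above):

import json
from dataclasses import dataclass, asdict

@dataclass
class Contact:              # the shared org.imsglobal.shared.Contact idea
    name: str
    email: str

@dataclass
class ToolRegistration:     # hypothetical object from some other spec
    title: str
    contact: Contact        # reused, not redefined per spec

reg = ToolRegistration(title="Example Tool",
                       contact=Contact(name="Jane Doe", email="jane@example.edu"))

# The JSON serialization falls out of the object model
print(json.dumps(asdict(reg), indent=2))

Every spec that pulls in Contact serializes the same fields the same way – which is exactly the sharing that the per-spec JSON-LD namespaces currently discourage.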

As we move away from document-styled specs to API-styled specs – it would seem like we just should move towards defining our interoperable data formats in a way that makes the development of APIs very simple and straightforward instead of wasting so much effort to achieve some dream of future RDF nirvana.

I now have samples of how I model these JSON documents across services – and I can tell you that (a) we are woefully inconsistent across our specs and JSON-LD is partially *causing* the problem, (b) properly parsing JSON-LD is really painful given the lack of real toolset support, and (c) it is increasingly frustrating that the certification suites make things slightly harder by randomly throwing in JSON-LD just to break those who simply want to parse JSON – so the practical result is that people reverse-engineer the certification patterns and build lame JSON parsers instead of really using JSON-LD tool chains.

It is high time to walk away from JSON-LD going forward.

Looking forward to your comments.

/Chuck

My MOOC Approach / Pedagogy

I was recently asked to come up with an outline of how I think about building a MOOC. In particular I have been slowly building a Web Applications MOOC based on www.php-intro.com – starting from my classroom, moving through a MOOC, back to the classroom, and then to an innovative on-campus curriculum. This, in a sense, is my master plan for improving education through MOOCs. These are abstract talking points. Perhaps if you want to hear more, your campus could retain me as a consultant, or this might be a good abstract for a keynote or workshop :)

Before the MOOC

Organize/clean your content – understand the topic sequence
Build auto-gradable LTI assignments – test test test (see the sketch after this list)
Use residential students as QA – rapid feedback
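
As a toy illustration of the auto-gradable assignment point above – this leaves out all of the LTI launch and grade passback plumbing, and the exercise and function names are invented:

import random

def expected(n):
    # Reference solution for a hypothetical exercise: sum the numbers 1..n
    return sum(range(1, n + 1))

def grade(student_function, trials=10):
    # Run the student's code on randomized inputs; return a grade from 0.0 to 1.0
    passed = 0
    for _ in range(trials):
        n = random.randint(1, 100)
        try:
            if student_function(n) == expected(n):
                passed += 1
        except Exception:
            pass  # a crash simply counts as a missed trial
    return passed / trials

# A student submission with an off-by-one bug scores poorly
def student_sum(n):
    return sum(range(1, n))

print(grade(student_sum))

The "test test test" part is making sure the grader itself is robust before thousands of learners hit it.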

From the Classroom to the MOOC

Expand time scale – roughly 2x
Eliminate rigor for rigor's sake
All assessment is low-stakes and leads to learning
Assessments as puzzles rather than precise measures
Automate automate automate
Recall that LTI tools can be reused outside MOOC platforms
Use CloudFlare to scale static content cheaply
The magic of 5-week classes and 3-week cohorts

From the MOOC to the classroom

Use recordings as assets not lecture replacements
Increase the pace – teach more – make students responsible
Use auto-graded assignments but add manual grading aspects
Do old-school things impossible in a MOOC – like paper exams
Improve MOOC assessments – use F2F students as QA

Impacting other teachers and students broadly

Open Educational Resources – free E-Resources
Low-cost printed textbooks – Amazon CreateSpace
Use CloudFlare to scale static content cheaply
Package materials (including auto-graders) as a self-service web site
Get materials on GitHub – allow others to fork and track

Impacting your institution and higher education

Apply the 5-week / 3-week magic on campus for skill-like education
Take advantage of the on-campus environment and give better student support

Git cherry-pick a sequence of commits from one branch to another

I finally figured out how to pull a sequence of commits from one branch to another using git’s cherry-pick feature.

In Sakai, when we are preparing for a release, we make a release branch for the release (11.x) and then continue to move master forward, cherry-picking from master back to the release branch. We looked at git-flow and decided against it because most of our adopters deploy from source rather than binary artifacts, and since our release branches live 4-5 years in production we cannot have master "jumping versions".

So the question is how to cherry-pick a set of commits that touch one folder from master back to the release branch. This is how I do it. First, check out master and go into the folder of interest.

git checkout master
cd basiclti
git log .

Capture the log until you find the commit that made the branch.

commit 8cc25781d632e48bfae65009b57c6391d074a3d0
Author: Charles Severance
Date: Mon Feb 29 23:03:28 2016 -0500

SAK-30418 - Initial commit of IMS Content Item

commit 791b12634164003b7c1a59747c28ec9896fc0885
Author: Charles Severance
Date: Sun Feb 28 23:26:51 2016 -0500

SAK-30372 - Fix small mistake in the CASA output

commit 13d21ccd26901c5186a709be27ede499d7de65fc
Author: Charles Severance
Date: Sat Feb 27 11:27:12 2016 -0500

SAK-30372 - Switch the implementation to Jackson
...

Then I cut and paste the entries in reverse order and make a shell script, turning the descriptive lines into comments and changing each commit into a "git cherry-pick" command – the script ends up as follows:

# To revert, if some cherry-picks go bad
# git checkout 11.x (to be extra sure)
# git reset --hard origin/11.x (throw away the local cherry-picks, back to what is pushed)

# After all is verified
# git push origin 11.x

# Make sure to be in 11.x first
git checkout 11.x

git cherry-pick aff5c0343b419fda125d9c217d340bb660929c3c
# Author: Charles Severance
# Date: Fri Feb 19 09:49:23 2016 -0500
# SAK-30308 - Change the groupid and artifact id back

git cherry-pick b6acdbee2bd9fd55f8a77de56732582a7eaa08ae
# Author: Charles Severance
# Date: Tue Feb 23 16:17:14 2016 -0500
# SAK-30362 - Fix small issues.


...

Again – the script runs the commits in reverse order so you are cherry-picking from oldest to newest. I leave the commit details in as comments for my own sanity.

I like having it as a script in case I need to run it more than once.
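
If you end up doing this a lot, the cut-and-paste step can be scripted. Here is a hedged sketch (it assumes a clone with the branch names and folder from the example above, and that origin/11.x is up to date); it asks git for the commits on master that touch the folder but are not on 11.x, oldest first, and prints a cherry-pick script in the same style:

import subprocess

FOLDER = "basiclti"   # the folder of interest from the example above

# %H=hash, %an=author, %ad=date, %s=subject, separated by a unit separator
fmt = "%H\x1f%an\x1f%ad\x1f%s"
log = subprocess.run(
    ["git", "log", "--reverse", "--pretty=format:" + fmt,
     "origin/11.x..master", "--", FOLDER],
    capture_output=True, text=True, check=True).stdout

print("# Make sure to be in 11.x first")
print("git checkout 11.x\n")
for line in log.splitlines():
    commit, author, date, subject = line.split("\x1f")
    print("git cherry-pick " + commit)
    print("# Author: " + author)
    print("# Date: " + date)
    print("# " + subject)
    print()

Note that commits already cherry-picked to 11.x (which have new hashes there) will still show up in this range, so treat the output as a starting point to review rather than something to run blindly.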

Hope this helps someone.

Implementing the Next Generation Digital Learning Environment – BOF

The NGDLE (www.ngdle.org) has been proposed as a “learning ecosystem” where everything fits together beautifully like Lego blocks. There will be lots of learning management systems and thousands of easily installed applications. And we will have electric cars that never need recharging and use anti-gravity for a very smooth ride!

But seriously, how will we ever get there, and how can we ensure that open source leads the way to this bold new future? What is the grand plan and what are the first steps? Do we have a better understanding about how open source plays in the market? How do we ensure sustainability of open source efforts from the beginning? Have we learned any lessons from the past decade of building world-class open source applications – and how can that experience reduce the number of mistakes and mis-steps as we go forward?

This BOF proposal has been submitted to the Open Apereo 2016 conference in New York City on the 24th and 25th May 2016.

Introducing the Tsugi Learning Application Framework

This presentation will introduce attendees to the Tsugi project. Tsugi is intended to be the world's first open source implementation of the Next Generation Digital Learning Environment (www.ngdle.org) as proposed by Malcolm Brown and others. Tsugi is a set of libraries and frameworks that enable the rapid development of scalable learning applications that can be placed in an interoperable application store. These tools are not limited to use in Sakai but can be used in any LMS that supports IMS LTI 1.1, LTI 2.1, or Content Item. There are Tsugi libraries under development for PHP, Java, and Node.js.

This presentation proposal has been submitted to the Open Apereo 2016 conference in New York City on the 24th and 25th May 2016.

Getting Students Involved in Open Source Software

This will be a report from an independent study course at the University of Michigan School of Information which set out to introduce a student programmer to developing for and contributing to Sakai. We will look at the barriers that needed to be overcome, what we learned along the way, and what was accomplished.

This presentation proposal has been submitted to the Open Apereo 2016 conference in New York City on the 24th and 25th May 2016.

Workshop Abstract: Developing Interoperable Learning Tools Using Tsugi

This workshop will introduce attendees to the Tsugi learning application framework. With Tsugi you can easily develop rich and powerful tools that plug in seamlessly to any LMS. Tsugi implements the IMS standards and provides easy-to-use APIs to allow developers to focus on building new and interesting tools. Tsugi tools support IMS Learning Tools Interoperability (LTI) 1.1, LTI 2.0, IMS ContentItem and IMS Community App Sharing Architecture (CASA) and can be served as part of an interoperable learning Tool App store. As new standards are approved such as those for analytics, Tsugi will support those standards as well.

This workshop proposal was presented at the Open Apereo 2016 conference in New York City on the 24th and 25th May 2016.

Two Face-to-Face @Coursera Office Hours – Orlando Florida

I will be holding two face-to-face @Coursera office hours in Orlando, Florida this week: one in Universal Studios near Harry Potter World and another at the hotel where I will be attending a meeting.

The first Orlando face-to-face office hours for my Internet History and Python for Everybody courses will be Sunday Jan 24 – 3:00PM – 4:00PM at Moe’s Tavern in Universal Studios.

https://www.universalorlando.com/Restaurants/Universal-Studios-Florida/Springfield-Dining.aspx

I wish we could have met at the Leaky Cauldron, but people tell me it is too crowded. But the Leaky Cauldron is a five minute walk away so at the end of the office hours we can walk to Diagon Alley and take a video.

The second face-to-face office hours will be at Holiday Inn Express & Suites – Orlando International Drive Tue Jan 26 – 6:00PM – 7:00PM in the lobby / breakfast area.

7276 International Drive
Orlando, Florida 32819

http://www.ihg.com/holidayinnexpress/hotels/us/en/orlando/mcocd/hoteldetail

I hope to see you at one or the other of the office hours.

Contributing to Python 3.0 for Informatics

After many years as a successful open Python 2.0 textbook, Python for Informatics is due for an update to Python 3.0. This will be a lot of work, since the Python 2.0 textbook and slides have been translated into so many languages and there are five courses on Coursera all built around the textbook.

Since there is so much work to do, I welcome any and all assistance in the conversion and review of the book. If I can get help in converting the core book, I will have time to add three new chapters that have been requested by the students (see the TODO list for details).
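
For anyone wondering what the conversion work actually involves, here is a hedged sketch of the typical Python 2 to Python 3 changes in the kind of example code the book uses (the snippets below are illustrative, not taken from any particular chapter):

# Python 2: print 'Hello world'  ->  Python 3: print is a function
print('Hello world')

# Python 2: raw_input(...)       ->  Python 3: input(...)
name = input('Who are you? ')
print('Welcome', name)

# Python 2: urllib.urlopen(url)  ->  Python 3: urllib.request.urlopen(url),
# and the data comes back as bytes that must be decoded
import urllib.request
fhand = urllib.request.urlopen('http://www.example.com/')
for line in fhand:
    print(line.decode().strip())

# Python 2: 9 / 2 == 4           ->  Python 3: 9 / 2 == 4.5 (use // for floor division)
print(9 / 2, 9 // 2)

Most of the edits are mechanical, which is why the conversion parallelizes well across many contributors.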

While there are several groups that will likely translate the book and/or slides into Python 3.0, let's wait until the book is relatively solid to make sure that all of the variations of these materials are well aligned.

Temporary Copyright

While I am in the process of drafting a book, I do not put it up with a Creative Commons license. I don’t want anyone grabbing the half-completed text and publishing it prematurely. Once the book is in reasonably good shape I switch the copyright to my normal CC license (CC-BY-NC for print books and CC-BY for all electronic copies). I expect the book to be ready to release in early 2016.

Contributing to the Book

The entire text of the book is in GitHub in this repository:

https://github.com/csev/pythonlearn

There are two basic ways to contribute to the book:

  • Create a GitHub account, then navigate to one of the files for the book in the repository like
    https://github.com/csev/pythonlearn/blob/master/book/02-variables.mkd
    Press the pencil icon to edit the text, and then when you “save” the text, it sends me a “pull request” where I can review your changes, approve them, and apply them to the master copy. Once you get going, it is really easy.
  • If you have more tech-skillz, you can "fork" the repository and send me pull requests the normal way. If you use this approach, please send pull requests quickly so we all stay synchronized. Don't worry about trying to squeeze a bunch of work into a single commit (like many projects prefer). Lots of little commits avoid merge conflicts.

Make sure to take a look at the TODO list to figure out where you can help. We are only working in the book and code3 folders. We will not be converting the code folder as that will be maintained as Python 2.0.

We have a build server that re-builds the entire book every hour at:

http://do1.dr-chuck.com/pythonlearn/

So you can see your contributions appearing in the final book within an hour of me approving your pull request. GitHub tracks your contribution and gives you credit for everything you do. Once the book is ready to publish, I will go through the GitHub history and add acknowledgements for all of the contributions to the text of the book.

If you send in a pull request and it seems like I am not moving quickly enough for you, simply send a tweet with the URL of the pull request and mention @drchuck. That will make sure I see it and get working on it.

Thanks in Advance

I appreciate your interest, support, and effort in helping make this open book a success for all these years.

I want to make sure to acknowledge the contributions from the authors of "Think Python: How to Think Like a Computer Scientist" – Allen B. Downey, Jeff Elkner, and Chris Meyers. Their original groundbreaking work in building an open and remixable textbook in 2002 has made the current work possible.