Sakai LTI 1.1 Beta Release – 2.1.0-b01

I have been working on a new release of the Sakai LTI support to complete the support for IMS LTI 1.1 and lay the groundwork for an effort to implement IMS LTI 2.0 in the summer. The internal Sakai version number for the LTI code base is 2.1.0 (unrelated to the IMS spec version number).

This code is planned to be shipped with the Sakai 2.9.2 release – sometime in May or June. It will likely be the version that is installed by many Sakai schools over the summer.

Many things have been fixed, new features have been added, the UI has been cleaned up, and there are several security fixes.

Release notes including the 40+ JIRA issues fixed in the 2.1.0 release:

https://confluence.sakaiproject.org/display/LTI/LTI+-+Release+2.1.0+Notes

Video demonstration:

http://www.youtube.com/watch?v=Xfmc6ePgyaw

Beta 01 Tag:

https://source.sakaiproject.org/svn/basiclti/tags/basiclti-2.1.0-b01/

It would be great if folks were willing to test this and give me some feedback.

Abstract: Sakai – The First Ten Years and the Next Ten Years

I will be submitting this abstract to the 2013 Apereo Conference in San Diego June 2-7, 2013.

The Sakai project is nearly ten years old, having started informally in June 2003 and then formally funded by the Andrew W. Mellon Foundation in January 2004. There is no question that Sakai has brought tremendous value to the LMS market – even for schools that have never used or installed the product. Sakai has been a force for good and shown other LMS systems the right path to meet the real needs of their users. Sakai is the only Apache-style open source / open community LMS project in the marketplace. As of November 2012, Sakai represented six percent of the US LMS marketplace. These are all impressive results, and with the release of Sakai 2.9, we have a product that we can be very proud of (and everyone should upgrade to it as soon as possible). But at the same time, we cannot rest on our laurels; we need to think carefully about the activities we will undertake beyond Sakai 2.9 to maintain and strengthen our place in a marketplace increasingly shaped by standards-oriented, component-based learning systems, multi-tenancy, software as a service, MOOCs, and extreme scalability. In this presentation, we will take a look at the past, present, and future of the Sakai Collaborative Learning Environment.

This is just a draft – comments welcome.

How do I record my MOOC Lectures?

I use these bits of technology to record my MOOC lectures:

http://www.wacom.com/en/creative/products/pen-displays/cintiq/cintiq-12wx

http://www.techsmith.com/camtasia.html (for Mac)

http://www.amazon.com/Logitech-Widescreen-Calling-Recording-960-000764/dp/B006JH8T3S/ref=dp_ob_title_ce (Logitech 920)

https://itunes.apple.com/us/app/webcam-settings/id533696630?mt=12

http://www.omnigroup.com/products/omnidazzle/

Update: Sadly, OmniDazzle no longer works on Mac OS 10.9, so I have switched to Ink2Go to annotate the slides. Ink2Go is an adequate product and draws nicely, but its poor hotkey support means that I cannot change colors with a mere keystroke or Wacom button, and I need to have the Ink2Go menu in the lower left of the screen – you can sometimes see it on my later recordings, which makes them look less professional.

Update: I am so disappointed with all of the screen-drawing products that I have started to build some of my slides using Reveal.JS and my own JavaScript-based screen drawing tool that I call DazzleSketch. I am experimenting with this approach for my new book TCP/IP Networking.

TorchLED 50 Watt Light – I use this to light my face a little bit – takes away shadows on people’s faces.

I record on a 4-CPU MacBook Pro 15 with an SSD drive – and it seems to labor a bit. I tried a recent 2-CPU MacBook 13 and it could not keep up. Camtasia does a great job of compressing the video without loss – but it is a bit CPU-heavy. If you look, the Camtasia files are surprisingly small, so it is easy to archive the original high-quality materials instead of the rendered MP4 files.

The Logitech camera drivers for Mac are kind of weak, so the WebCam Settings tool is very important – it lets me adjust and fix the color balance and turn off auto-focus, which keeps me from looking too blue or randomly changing colors and keeps the focus from wandering while I wave my hands.

I have derived some settings for the screen layout and came up, via experiment, with some compression settings for YouTube and for the MP4s that I make. I find that I need to make my files about 2X larger to keep them looking good on YouTube. In Camtasia, to get good results when I export, I need the quality at the 3/4 mark. But for files that will just be played in QuickTime or kept for archive, I export with Camtasia’s quality setting at the 1/2 mark.

For very wide-screen videos with a big version of me on the right hand side that I produce for MOOC / Distance education like this:

http://www.youtube.com/watch?v=SQ0HXfB8Q1w

I use a 1280×525 Canvas in Camtasia.

For situations where I make a screencast to be played in a classroom, I make the Camtasia Canvas 1024×768 and move the image around or even remove it to keep it off the slide content, as in:

http://www.youtube.com/watch?v=Za3TXZXGJAE

Folks have more pixels on their computers than in classroom projectors :)

Working on the Skulpt Python to Javascript Compiler

I am making heavy use of the Skulpt cross-compiler, which allows me to run Python programs completely in the browser. It compiles the Python to JavaScript and then runs the JavaScript, which let me build an auto-grader with zero server footprint that I use in my free online MOOC called Python for Informatics. The same compiler is used by CodeSkulptor, which is part of the Rice University Coursera MOOC titled An Introduction to Interactive Programming in Python.

Since Skulpt is a complete ground-up implementation of Python, including all the standard libraries, it is naturally incomplete. And so as my students go through the various assignments, we encounter little bits and pieces that are not quite right or not implemented.

Earlier this week, I was thinking that I would have to just work around the little things that were wrong or missing, but then the Computer Scientist inside me wondered how hard it would be to dive into the source code of the Skulpt compiler and fix a few things that were bothering me.

I started working on the code Thursday morning and found it relatively straightforward in its approach. The nice thing is that the approaches to writing a compiler have not changed too much since I last wrote a compiler in 1979. You create a parser to turn the language into tokens, a grammar that expresses how the tokens are combined, code that triggers on each of the rules of the grammar to produce an intermediate representation of the program, a code generator that turns the intermediate representation into runnable code, and a run-time library to implement the built-in functions. After all this, Skulpt uses the Google Closure Compiler to pull all the pieces together to produce a nice tight include file with all of it ready to run in the browser:

-rw-r--r-- 1 csev staff 171469 Jan 20 09:05 builtin.js
-rw-r--r-- 1 csev staff 214624 Jan 20 09:05 skulpt.js
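To make those stages concrete, here is a toy end-to-end pipeline for a tiny expression language – a tokenizer, a recursive-descent parser that builds a tuple-based intermediate representation, and a code generator that emits a runnable Python expression. This is purely illustrative and has nothing to do with Skulpt’s actual internals:

```python
import re

# Toy compiler pipeline: tokenize -> parse to an IR tree -> generate code.

TOKEN = re.compile(r'(\d+)|([+\-*])')

def tokenize(src):
    # Turn "1 + 2 * 3" into [('NUM', 1), ('OP', '+'), ...], skipping spaces.
    return [('NUM', int(num)) if num else ('OP', op)
            for num, op in TOKEN.findall(src)]

def parse(tokens):
    # Grammar: expr -> term (('+'|'-') term)* ;  term -> NUM ('*' NUM)*
    pos = [0]
    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else (None, None)
    def term():
        _, val = tokens[pos[0]]; pos[0] += 1
        node = ('num', val)
        while peek() == ('OP', '*'):
            pos[0] += 1
            _, val = tokens[pos[0]]; pos[0] += 1
            node = ('mul', node, ('num', val))
        return node
    def expr():
        node = term()
        while peek()[1] in ('+', '-'):
            op = tokens[pos[0]][1]; pos[0] += 1
            node = ('add' if op == '+' else 'sub', node, term())
        return node
    return expr()

def generate(node):
    # Code generator: emit a runnable Python expression from the IR tree.
    if node[0] == 'num':
        return str(node[1])
    ops = {'add': '+', 'sub': '-', 'mul': '*'}
    return '(%s %s %s)' % (generate(node[1]), ops[node[0]], generate(node[2]))

code = generate(parse(tokenize('1 + 2 * 3')))
print(code)        # (1 + (2 * 3))
print(eval(code))  # 7
```

Skulpt does the same dance at much larger scale, with the run-time library standing in for eval().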

I am writing down some of my steps for my own record, and so others might benefit if they too want to dive into working on Skulpt.

The first step is to clone the Mercurial repository on Google Code. Here is my clone:

http://code.google.com/r/drchuck-skulpt-mods/

Then I checked out my clone to my laptop:

hg clone https://drchuck@code.google.com/r/drchuck-skulpt-mods/

Most of the operations are run from a shell script called ‘m’ – the first thing you might want to do is run the unit tests to get a baseline:

./m

Yup – it is that simple. There are over 300 unit tests that get run through Skulpt, Python, and Google V8 and have their output compared.
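The idea of comparison-based testing can be sketched in a few lines of Python: run a test program under a reference interpreter, capture what it prints, and compare it to the expected text. This is only a sketch of the concept – the helpers below are made up, and the real harness also runs the compiled JavaScript under V8 and diffs all three outputs:

```python
import os
import subprocess
import sys
import tempfile

# Sketch of comparison-based testing: run a test program under the
# reference interpreter and compare its output to the expected text.

def run_reference(path):
    # Capture what the reference Python interpreter prints for this file.
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True)
    return result.stdout

def check(path, expected):
    # A test "passes" when the captured output matches exactly.
    return run_reference(path) == expected

# Tiny demo test case written to a temp file.
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write('print("hello")\n')
    path = f.name

print(check(path, 'hello\n'))  # True
os.unlink(path)
```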

Working with Skulpt

As best I can tell, this is a pretty slow-moving project – but it does move, so I felt it was important to document all my work in the Skulpt issue list. Before I worked on something, I wrote an issue in the main Skulpt repo, like Issue 116. Then I would add a comment when my modification was complete in my clone. I hope this gives the people running the Skulpt project the best chance of getting my code back into their repo.

Extending the runtime

If you are going to add a new feature, first you need a bit of Python code to exercise the feature. For round, I wrote this:

f = 2.515
g = round(f,1)
print g

To test this, you run:

./m run rnd.py

Your output will look like this:

-----
f = 2.515
g = round(f,1)
print g

-----
Uncaught: "#", [unnamed] line 61 column 29
/*    61 */                 throw err;
                            ^
dbg> 

It just means that the round function does not work. Modify the files:

src/builtindict.js
src/builtin.js

And add the implementation. Here are the changes needed. Ignore the dist and doc diffs and focus on the src diffs. The dist and doc diffs are generated in a later step – they end up in the repo so folks can just grab the dist and doc from the repo without needing to build the code.

When you make changes, run

./m

Until unit tests pass and then run

./m dist

Until it successfully completes:

...
. Wrote dist/builtin.js
. Updated doc dir
. Wrote dist/skulpt.js.
. gzip of compressed: 50585 bytes

Then re-run your code:

./m runopt rnd.py

Until your code works. You may go a few rounds of edit, unit test, dist, and re-run, but the process takes about 20 seconds, so it is not as painful as it sounds. I could not figure out how and exactly when “./m run” looks at the new code in src and when it needs a “./m dist” to pick up the new code – so I pretty much do a “./m dist” on every modification.

When everything works and the output you see from “./m run” matches the output of running Python on the same code, you can turn your test code into a new unit test. Run:

./m nrt

It brings up vi on a file with the next available unit test number. Paste in the code from your “rnd.py” and save it. Then run:

./m regentests

Then run

./m
./m dist

You may find little things in each of these steps. Edit your code and/or the unit test until “./m dist” is completely clean. Then I actually copy the two files in dist into my online autograder and do a quick test of the new feature in the browser.

If all goes well, you can use Mercurial to add the unit tests, check that things are OK, and then do a commit and push:

hg add test/run/*322* (do this for each unit test you have added)
hg status
hg commit
hg push

Changing the Language

If you need to change the language (i.e., anything other than the runtime) it is a little trickier. Two examples of language changes I made are:

  • Change the code generator for try / except – this was relatively straightforward because it did not entail a grammar change
  • Add support for quit and exit – I initially thought I could do this by extending the run-time and having them throw an exception that the outer execution loop would catch – but somehow I never got it to work, so I switched to making them part of the language like break, continue, and the other flow operations. If you look at the code, I touched a lot more files in this change – but it should serve as a nice roadmap when you make a grammar change and then have to work through getting all the parsing and code generation to work.
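For what it is worth, the exception-based approach I first tried for quit/exit can be illustrated in plain Python – the built-in raises a dedicated control-flow exception and the outer execution loop catches it. The names here are invented for illustration; this is the idea I could not get working inside Skulpt, not actual Skulpt code:

```python
# Sketch of the exception-based approach to quit/exit: the built-in
# raises a control-flow exception that the outer execution loop catches.

class ProgramExit(Exception):
    pass

def builtin_quit():
    # A hypothetical quit() built-in implemented purely in the runtime.
    raise ProgramExit()

def run(statements):
    # The outer execution loop: run each statement, stop cleanly on quit.
    for stmt in statements:
        try:
            stmt()
        except ProgramExit:
            return 'exited'
    return 'finished'

program = [lambda: print('hello'), builtin_quit, lambda: print('never')]
print(run(program))  # prints "hello" then "exited"; the last statement never runs
```

In Skulpt the equivalent exception never made it cleanly out to the execution loop, which is why quit/exit ended up in the grammar instead.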

The steps I take when making any change to the parser are as follows:

./m regenparser
./m
./m dist
./m runopt quit1.py

Again, I don’t know which changes need which of the above steps, but it seems that a lot of the changes needed a complete “./m dist” before I could test them in my own code – so after a while, I just did them all on every change.

The first thing you need to do is get a dump of the generated JavaScript code as part of your testing. I searched vainly for a nice option to make this happen and perhaps there is a better way – but I found that what worked for me was un-commenting some code in “src/import.js”:

--- a/src/import.js	Fri Jan 18 11:03:55 2013 -0500
+++ b/src/import.js	Fri Jan 18 12:20:25 2013 -0500
@@ -84,7 +84,7 @@
  */
 Sk.importModuleInternal_ = function(name, dumpJS, modname, suppliedPyBody)
 {
-    //dumpJS = true;
+    dumpJS = true;
     Sk.importSetUpPath();
 
     // if no module name override, supplied, use default name
@@ -170,7 +170,7 @@
                 return lines.join("\n");
             };
             finalcode = withLineNumbers(co.code);
-//          Sk.debugout(finalcode);
+            Sk.debugout(finalcode);
         }
     }

Make sure not to check these changes in – do an “hg revert src/import.js” right before the commit and push.

If you make these changes to src/import.js and go through the steps above, you will see a lot of nicely formatted JavaScript flying by in addition to the other output.

Once you have the changes to Skulpt making it past “./m dist”, it is time to test your own code and the new feature.
When you do a “./m runopt file.py” you get a lot of JavaScript output on the terminal. It is a little obtuse – but like the displays in the Matrix – after a while it makes sense. The basic runtime is a while loop containing a switch statement, and each of the code blocks is a case statement. It is like the classic code generator I wrote in 1979. Don’t expect the blocks to be in the same order as the Python source – just look at the “$blk=4” code at the end of each block to see where the code will be going next.

Here is the generated JavaScript from a simple hello world Python program with a few line breaks:

-----
print "Hello world"

-----
/*     1 */ var $scope0=(function($modname){var $blk=0,$exc=[],$gbl={},$loc=$gbl;
    $gbl.__name__=$modname;
    while(true){try{ switch($blk){case 0: /* --- module entry --- */
/*     2 */ //
/*     3 */ // line 1:
/*     4 */ // print "Hello world"
/*     5 */ // ^
/*     6 */ //
/*     7 */ 
/*     8 */ Sk.currLineNo = 1;
/*     9 */ Sk.currColNo = 0
/*    10 */ 
/*    11 */ 
/*    12 */ Sk.currFilename = './hello.py';
/*    13 */ 
/*    14 */ var $str1=new Sk.builtins['str']('Hello world');
    Sk.misceval.print_(new Sk.builtins['str']($str1).v);
    Sk.misceval.print_("\n");return $loc;goog.asserts.fail('unterminated block');} }
    catch(err){if ($exc.length>0) { $blk=$exc.pop(); continue; } else { throw err; }} }});

Hello world
-----

Here is code generated from a more complex Python example with more than one block. I wish I knew how to make the JavaScript prettier when debugging your own code – the JavaScript is pretty in the unit tests, but ugly when you do runopt.
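The while-plus-switch shape of the generated code is easy to mimic in plain Python – a loop around a dispatch on the current block number, where each block ends by setting the next block, just like the “$blk=4” assignments. The block numbers and logic below are made up for illustration:

```python
# Mimic of the generated code's structure: a loop dispatching on the
# current block number, where each block says where control goes next.

def run_blocks():
    blk = 0          # plays the role of $blk in the generated JavaScript
    i = 0
    out = []
    while True:
        if blk == 0:      # module entry
            i = 0
            blk = 1
        elif blk == 1:    # loop test
            blk = 2 if i < 3 else 3
        elif blk == 2:    # loop body
            out.append(i)
            i += 1
            blk = 1       # jump back to the test block
        elif blk == 3:    # module exit
            return out

print(run_blocks())  # [0, 1, 2]
```

Notice that the blocks could appear in any order inside the dispatch – control flow lives entirely in the block-number assignments, which is why the generated blocks need not follow the order of the Python source.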

I won’t go through the detailed code modification steps – that is best shown looking at the diffs from my two changes above.

Pulling in merges from other clones

The Skulpt project is pretty slow-moving, so interesting things happen in clones other than the main repo – it is helpful to pull those changes into your own repo. I include how I did this mostly to jog my own memory.

Make sure you have any of your changes fully committed and your local repo is clean before you start:

hg incoming https://code.google.com/r/theajp01-skulpt-int-fix/
hg pull https://code.google.com/r/theajp01-skulpt-int-fix/
hg status
hg heads
hg merge
hg diff
./m 
./m dist

If you have a problem with the patches, you may need to fix them by editing the files or even adding new unit tests using “./m nrt”. When you are satisfied with the patches, do the following:

hg status
hg commit
hg push

Summary

In short, it has been a fun three days re-learning how compilers work internally. I really like the internal structure of the Skulpt project. It is very impressive and thorough, and surprisingly easy to work in. This experience also reinforces my sense of the value of the very deep learning in a Computer Science degree. Some might say that Computer Science students don’t need to learn operating systems or compilers or hardware – but sooner or later someone needs to be able to dig into these lower layers and make something work.

Of course, not everyone who should learn to program needs to be a trained Computer Scientist. There are plenty of people who just need to know how Python and a few other things work so they can sling data around and connect things together. But it is good to be able to call in a plumber once in a while. And for me, it was fun to go back to my plumber days these past three days.

Thanks to the great folks who built Skulpt and thanks to my SI301 on-campus students and Python MOOC students for their patience as I worked through this code as the autograder kept breaking in mysterious ways :).

No EPUB3 SI791 Class this Semester

A number of people expressed some interest in my SI791 independent study course covering EPUB3 and writing a Sakai book in EPUB3. Here is the blog post where I announced the course:

http://www.dr-chuck.com/csev-blog/2012/12/si791040-connecting-next-generation-learning-management-systems-and-publishing/

Unfortunately, I bit off more than I could chew over the break when I decided to build my own open source MOOC environment based on IMS LTI and Moodle and to teach a Python MOOC on my own infrastructure starting January 14 (yikes).

https://online.dr-chuck.com/

I don’t think I can also pull off three entertaining hours of lecture on EPUB3 every week. I really needed to get a bit of a head start on the class before it started, and all my energy was consumed building MOOC software and learning the Raspberry Pi.

But all is not lost – I talked to Jim Eng and we hope to do a series of meetings at the Library around EPUB3 so we can still come up to speed a bit later in the semester. I am still going to move things gently forward as soon as the semester starts. If we set these up – I will announce them on si.all so SI folks can come if they like.

I still want to start writing the Sakai book in GitHub. I have a possible new strategy after some recent experience with Calibre and EPUB – it occurs to me that the right way to do an EPUB3 might be to write a carefully constrained HTML5 book, structured to allow easy automated conversion into EPUB3 format through Calibre (or similar). If that works, it will be a lot easier than trying to maintain everything in EPUB3 all the time. Maybe this will lead to an EPUB3 “shim” for the HTML5 version of the book… These are all just working ideas for now.

Sorry to back out on this…

Abstracts for EuroSakai 2013 – Paris

I sent in two abstracts for the EuroSakai 2013 in Paris January 28-30, 2013. Here are my abstracts:

Experiences with Massive Open Online Courses

Dr. Severance taught the online course “Internet History, Technology, and Security” during 2012 using the Coursera teaching platform. The course had over 56,000 registered students from all over the world, and 5,000 received a certificate. In this keynote, we will look at the current trends in teaching and learning technology as well as the technology and pedagogy behind the course, and behind Coursera in general. We will look at the data gathered for the course and talk about what worked well and what could be improved. Dr. Severance is also teaching an independent MOOC using technology of his own making, and he will report on those efforts to date. We will also look at emergent effects in the MOOC space, including recent entries and efforts, and look toward the future of where MOOCs are headed and what their impact might be. We will also look at possible new architectures for MOOCs and the role of open source in the emerging MOOC ecosystem.

References

[1] http://www.coursera.org/
[2] http://class.stanford.edu/
[3] http://online.dr-chuck.com/ (may be under construction…)

Directions for Standards in Teaching and Learning

This presentation will cover a wide range of topics around evolving standards for teaching and learning. First we will look at the well-established standards of IMS Learning Tools Interoperability (LTI) and Common Cartridge (CC) and at the market penetration of each. Then we will look at upcoming standards from IMS like LTI 2.0 and give a sense of where they fit into the marketplace. And then, looking beyond IMS, we will look at how EPUB3 may have a very significant impact in the LMS market and examine the overlap between the IMS standards and the IDPF (EPUB3) standards. We will also talk about open source projects around EPUB3 and other content authoring efforts.

References

[1] http://www.imsglobal.org/
[2] http://www.ipdf.org/
[3] http://developers.imsglobal.org/
[4] Readium: Digital Publishing Meets Open Web
http://readium.org/
[5] Bill McCoy (IDPF) – Introducing ePUB3

SI791/040 – Connecting Next Generation Learning Management Systems and Publishing

This is an independent study course where we will be looking closely at the next-generation EPUB3 electronic book publishing format and how it can revolutionize teaching and learning systems as well as the open educational resources (OER) space. EPUB3 is the standard for the next generation of electronic books and includes features similar to those found in Apple’s iBooks Author program. We will also look at blending capabilities from the IMS Common Cartridge into EPUB3. We will also look at the architectures of next-generation learning management systems using the IMS Learning Tools Interoperability Specification, specifically with an eye toward building systems suitable for use in MOOC environments.

We will collaborate and involve groups like IMS Global Learning Consortium, the International Digital Publishing Forum, the Connexions Project, OERPUB, Mozilla, INGRAM, UM Libraries, UM ITS, and Open Michigan, and others.

Two of the tasks we will undertake to contextualize our inquiry are to write a complete open book in EPUB3 about Sakai and to build an open source EPUB3 editing system in HTML5/JavaScript. We are looking for a diverse group of people, ranging from writers to programmers. Even though I have two contextualizing tasks to get us started, we may take the class in different directions once we have our first few class meetings.

Space will be limited and students will need to apply to join the course. There will be an orientation session where I will answer questions and gauge interest:

Update: I am looking to move the meeting time

The course will meet:

Tuesdays 3:30 – 5:00 – 1265NQ, Starting January 15, 2013

Interested students are also welcome to send me E-Mail with questions.

References:

MOOCs are Really Great! But What’s Next?
http://www.youtube.com/watch?v=p8ZItXwF2ys
http://www.slideshare.net/csev/moocs-are-great-whats-next

Bill McCoy (IDPF) – Introducing ePUB3

Share Everywhere : Create and Share Content with Legs
http://www.slideshare.net/oerpub/share-everywhere-creating-content-with-legs-slideshare

BEA 2012 – EPUB3 is here – are you ready?

Next Generation Learning Platforms

EPUB3: Not Your Father’s EPUB

EPUB3 Demo and Examples

International Digital Publishing Forum
http://idpf.org/

Readium: Digital Publishing Meets Open Web (A free HTML5 EPUB3 Reader)
http://readium.org/

EPUB3 Sample Documents
http://code.google.com/p/epub-samples/downloads/list

Abstract: MOOCs Are Really Great! But What’s Next?

This was an invited presentation at the Dé Onderwijsdagen 2012 – World Trade Center, Rotterdam November 13, 2012.

Dr. Severance taught the online course “Internet History, Technology, and Security” using the Coursera teaching platform. His course started July 23, 2012 and was free to all who wanted to register. The course has over 46,000 registered students from all over the world, and 6,000 are on track to complete the course and earn a certificate. In this session, we will look at the current trends in teaching and learning technology as well as the technology and pedagogy behind the course, and behind Coursera in general. We will look at the data gathered for the course and talk about what worked well and what could be improved. We will also look at some potential long-term effects of the current MOOC efforts. Charles Severance is a Clinical Associate Professor who teaches in the School of Information at the University of Michigan. Charles is a founding faculty member of the Informatics Concentration undergraduate degree program at the University of Michigan. He also works for Blackboard as Sakai Chief Strategist. He also works with the IMS Global Learning Consortium promoting and developing standards for teaching and learning technology. Previously he was the Executive Director of the Sakai Foundation and the Chief Architect of the Sakai Project.

A New Dr. Chuck-Mobile – Toyota Prius

My new Dr. Chuck-Mobile is a Toyota Prius. My 2001 Buick LeSabre has 227,000 miles on it and I wanted a vehicle that gets 50 miles per gallon given how much I drive (about 30,000 miles per year).

A Prius is quite a departure for me. My last *new* car was in 1980 – I don’t even remember what it looked like – it was beige. I have been driving effectively the same “family” of car since 1995. The cars were all some variation of a General Motors full-size vehicle with a 3.8-liter engine. There was a Pontiac Bonneville, several Oldsmobile 88s, and most recently, a Buick LeSabre. Since I drive so many miles, I would purchase these cars with about 100,000 miles on them, drive them until they had about 220,000 miles, and sell them. I have driven well over a half-million miles in these cars.

I have been thinking about a Prius for years now. Peter Knoop has a Prius, Michael Korkusa has a Prius, and Joseph Hardin has a Prius. I had been looking at used Prius prices and found that their resale value was very high – I never saw what seemed to be a bargain price. A Prius with 100,000 miles is worth $12,000, so it looked like the new Prius was the best value.

A few months back, my car was in the shop, so I rented a Prius for three days and fell in love with the car. I was amazed at the intelligence of the power management system and was able to verify the gas mileage in real-world driving conditions.

I am saving $0.10 for every mile I drive the car. That should save me $250.00 per month in real, in-my-pocket savings. The fuel savings almost make the car payment. If I drive the car 200,000 miles, it will save me $20,000 – pretty impressive. My motorcycle also gets 50 miles per gallon, so with the Prius all my vehicles get 50 miles per gallon. Pretty cool.

Some asked me why I did not purchase a Chevy Volt. The Volt is really pretty and I love the notion of a plug-in vehicle. But I drive 120 miles every day, and the Volt runs out of charge at about 30 miles. So for me the key factor in the Volt is the mileage it gets when the gas engine is running. The savings you gain while running battery-only are quickly lost when you are running in hybrid mode. Since most of my travel is long distances, the nod goes to the Prius.
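The back-of-the-envelope math can be written out. The 120-mile commute and the Prius’s 50 mpg are from above; the gas price and the Volt’s hybrid-mode mileage are my assumptions for illustration only:

```python
# Rough daily fuel cost for a 120-mile commute.  The gas price and the
# Volt's hybrid-mode mpg are assumptions for illustration; electricity
# cost for the Volt's battery miles is ignored, which flatters the Volt.

GAS = 3.50                 # assumed dollars per gallon
daily_miles = 120.0
volt_electric_range = 30.0

prius_cost = daily_miles / 50.0 * GAS               # 50 mpg all day
volt_gas_miles = daily_miles - volt_electric_range  # 90 miles on gas
volt_cost = volt_gas_miles / 37.0 * GAS             # assumed 37 mpg hybrid

print('Prius: $%.2f/day  Volt: $%.2f/day' % (prius_cost, volt_cost))
```

Under these assumptions the Volt’s free battery miles do not make up for its lower hybrid-mode mileage over a long daily drive.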

All The World’s a Classroom

This is a repost of my UMSI monthly article of the same title –
http://monthly.si.umich.edu/2012/10/18/all-the-worlds-a-classroom/

This summer it was my great pleasure to teach an online non-credit course titled Internet History, Technology, and Security to students around the world at no cost, using the Coursera platform for large-scale online courses. Over 49,000 students registered for the free class, over 16,000 attended the first week’s lecture, and over 4,900 students earned a certificate at the end of the 10-week course. It would take 32 years of teaching our SI502 foundations course on Networked Computing to interact with that many students.

For that first course, I chose the several weeks of SI502 that focus on how the network was built over time and how it functions today, and expanded that material into a 10-week course. I chose this material because it is fun, engaging, and very well suited to a video format. But more importantly, I wanted to create a course about technology that would be accessible to learners of all levels and all languages around the world. I also wanted a course that showcased the School of Information’s core competency of “connecting people, information, and technology in more valuable ways”.

The course started by looking at the code-breaking efforts during World War II in the United States and the UK. It was a perfect example of having lots of data and using computing to transform that raw, encrypted, and seemingly meaningless data first into information and then ultimately into knowledge. Because of the heavy use of advanced encryption techniques for wartime communications, high-speed computation devices were developed to “crack” the encryption. Initially those devices were electromechanical; later, to increase their speed, the first electronic computers were invented and built under top-secret wartime conditions at Bletchley Park. Bletchley Park was a beehive of activity, with over 10,000 people, many thousands of encrypted messages (information), and hundreds of computers working 24 hours per day (technology).

The course followed the history through the post-war period to the current day, featuring interviews with many innovators, ranging from the co-inventor of the World Wide Web (Robert Cailliau) to the founder of Amazon (Jeff Bezos).

Once we had viewed the Internet through a historical lens, we went back and took another look at the Internet through a more technical lens, examining how packets work, the Link Layer, Internetwork layer (IP), the Transport Layer (TCP), and the Application Layer.

I saw this course as far more than just another course. To me it represented so much of what it means to be part of the School of Information at the University of Michigan, and I tried to reflect the values of SI throughout the course material and in how I approached and taught the course. I wanted to make all of the technical material in the course accessible to learners of all levels. The course lecture and video materials were translated by the students (crowd-sourcing) into over 30 languages, and we had students from all over the world – nearly every single country was represented. I made sure to teach the course in a way that would be accessible to non-English speakers as well as to those with slow or unreliable network connections.

Another exciting part of the course was how the students became a self-organizing social learning community. With over 10,000 students active throughout most of the course, there was literally no way that I, as the faculty member, could help each individual student with a technical issue or a problem understanding the materials. The students were amazingly wonderful at helping each other, forming study groups, and some even took the initiative to produce supporting course materials and reading lists for the class. Because of so much proactive student involvement, my workload was surprisingly low.

One of the issues in online courses is the sense of loneliness and isolation. One experiment I tried was to have “office hours” in various cities as I travelled in the late summer and early fall. I had office hours in New York City; Los Angeles; Wilmington, NC; Ann Arbor; Chicago; Memphis; Washington, DC; and Seattle, WA. The office hours had 2-15 students show up at a local coffee shop, and we talked about the class and how it could be improved. The students thought it was cool to meet their online instructor and that I was being very giving of my time. But in actuality I did the office hours because I wanted to see and meet my students – or at least some of them. It helped me maintain my own motivation to know that my students were real and not just numbers and data inside of a computer. I learned so much about how to better teach the course from these interactions. I have upcoming office hours in Seoul, South Korea; Barcelona, Spain; Denver, Colorado; and Amsterdam as part of my travel plans for the fall.

Students who completed the course with a passing score will receive an online certificate of achievement from Coursera. Students can print out the certificate or link to it in their resumes. I decided to go a step further and offer to sign their certificates if they would send them to me at the School of Information with a self-addressed stamped envelope. I have warned the Dean’s office that they might be receiving 4,900 pieces of mail for me over the next few months. Like everything in the course, for me this is just another experiment in how far we can expand the boundaries of this new form of interacting in the context of teaching the world.

I did a summary lecture for the course and put it up on YouTube that you are welcome to watch to see my reflections on the course as well as a presentation of the student demographics and retention statistics for the course. You can also take a look at an interactive map of the geographic distribution of the students in my course from a blog post that I wrote.

And if you found this interesting, the course will be offered again soon and you are welcome to sign up and join us online. I hope to see you on the net.