How do I record my MOOC Lectures?

I use these bits of technology to record my MOOC lectures:

http://www.wacom.com/en/creative/products/pen-displays/cintiq/cintiq-12wx

http://www.techsmith.com/camtasia.html (for Mac)

http://www.amazon.com/Logitech-Widescreen-Calling-Recording-960-000764/dp/B006JH8T3S/ref=dp_ob_title_ce (Logitech 920)

https://itunes.apple.com/us/app/webcam-settings/id533696630?mt=12

http://www.omnigroup.com/products/omnidazzle/

Update: Sadly, OmniDazzle no longer works on Mac OS 10.9, so I have switched to Ink2Go to annotate the slides. Ink2Go is an adequate product and draws nicely, but its poor hotkey support means that I cannot change colors with a mere keystroke or Wacom button, and I need to keep the Ink2Go menu in the lower left of the screen – which you can sometimes see in my later recordings – making them look less professional.

Update: I am so disappointed with all of the screen-drawing products that I have started to build some of my slides using Reveal.JS and my own JavaScript-based screen drawing tool that I call DazzleSketch. I am experimenting with this approach for my new book TCP/IP Networking.
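
For the curious, the heart of a JavaScript screen-drawing tool is surprisingly small. Here is an illustrative sketch of the overlay-canvas idea (hand-written for this post – it is not the actual DazzleSketch code): a full-screen canvas floats above the slides, mouse events draw strokes, and hotkeys switch colors – the very thing I missed in Ink2Go.

-----
// Illustrative sketch only – not the actual DazzleSketch source.
// Float a full-screen canvas over the page and draw on it.
var canvas = document.createElement('canvas');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
canvas.style.position = 'fixed';
canvas.style.top = '0';
canvas.style.left = '0';
canvas.style.zIndex = '1000';
document.body.appendChild(canvas);

var ctx = canvas.getContext('2d');
ctx.strokeStyle = 'red';
ctx.lineWidth = 3;
var drawing = false;

canvas.onmousedown = function(e) {
    drawing = true;
    ctx.beginPath();
    ctx.moveTo(e.clientX, e.clientY);
};
canvas.onmousemove = function(e) {
    if (drawing) { ctx.lineTo(e.clientX, e.clientY); ctx.stroke(); }
};
canvas.onmouseup = function() { drawing = false; };

// Change pen color with a single keystroke (or a Wacom button
// mapped to that keystroke) – the feature Ink2Go lacks.
document.onkeydown = function(e) {
    if (e.keyCode == 82) ctx.strokeStyle = 'red';    // 'R'
    if (e.keyCode == 71) ctx.strokeStyle = 'green';  // 'G'
};
-----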

TorchLED 50 Watt Light – I use this to light my face a little bit – it takes away the shadows on people’s faces.

I record on a 4-CPU MacBook Pro 15 with an SSD drive – and it seems to labor a bit. I tried a recent 2-CPU MacBook 13 and it could not keep up. Camtasia does a great job of compressing the video without loss – but it is a bit CPU-heavy. If you look, the Camtasia files are surprisingly small, so it is easy to archive the original high-quality materials instead of the rendered MP4 files.

The Logitech camera drivers for Mac are kind of weak, so the WebCam Settings tool is very important: it adjusts and fixes the color balance and turns off auto-focus, which keeps me from looking too blue or randomly changing colors, and keeps the focus from wandering while I wave my hands.

I have derived some settings for the screen layout and, via experiment, some compression settings for YouTube and for the MP4s that I make. I find that I need to make my files about 2X larger to keep them looking good on YouTube: when I export from Camtasia, I need the quality at the 3/4 mark to get good results. But for files to be played in QuickTime or kept for archive, I export with Camtasia’s quality setting at the 1/2 mark.

For the very wide-screen videos that I produce for MOOC / distance education, with a big version of me on the right-hand side, like this:

http://www.youtube.com/watch?v=SQ0HXfB8Q1w

I use a 1280×525 Canvas in Camtasia.

For situations where I make a screencast to be played in a classroom, I make the Camtasia Canvas 1024×768 and move the image around, or even remove it, to keep it off the slide content, as in:

http://www.youtube.com/watch?v=Za3TXZXGJAE

Folks have more pixels on their computers than in classroom projectors :)

Working on the Skulpt Python to Javascript Compiler

I am making heavy use of the Skulpt cross-compiler, which allows me to run Python programs completely in the browser. It compiles the Python to JavaScript and then runs the JavaScript, which allows an auto-grader with zero server footprint to be built; I use one in my free online MOOC called Python for Informatics. The same compiler is used by CodeSkulptor, which is part of the Rice University Coursera MOOC titled An Introduction to Interactive Programming in Python.
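
To give a sense of the zero-server-footprint approach, here is roughly how a page runs a bit of Python with Skulpt once the two dist files are loaded. This is a sketch based on the Skulpt sample code – the exact entry points may vary between versions:

-----
// Assumes dist/skulpt.js and dist/builtin.js are already loaded.
function outf(text) {
    // Skulpt calls this with whatever the Python program prints
    document.getElementById('output').innerHTML += text;
}
var prog = 'print "Hello world"';   // the student's Python 2 code
Sk.configure({ output: outf });     // wire Python's print to the page
Sk.importMainWithBody('<stdin>', false, prog);  // compile to JS and run
-----

Since the compile and the run both happen in the browser, an autograder can check the program’s output without any server round trip.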

Since Skulpt is a complete ground-up implementation of Python, including all of the standard libraries, it is naturally incomplete. And so as my students go through the various assignments, we encounter little bits and pieces that are not quite right or not implemented.

Earlier this week, I was thinking that I would have to just work around the little things that were wrong or missing, but then the Computer Scientist inside me wondered how hard it would be to dive into the source code of the Skulpt compiler and fix a few things that were bothering me.

I started working on the code Thursday morning and found it relatively straightforward in its approach. The nice thing is that the approaches to writing a compiler have not changed too much since I last wrote a compiler in 1979. There is a parser that turns the language into tokens, a grammar that expresses how the tokens are combined, code that triggers on each of the rules of the grammar to produce an intermediate representation of the program, a code generator that turns the intermediate representation into runnable code, and a run-time library that implements the built-in functions. After all this, Skulpt uses the Google Closure Compiler to pull all the pieces together into a nice tight include file, ready to run in the browser:

-rw-r--r-- 1 csev staff 171469 Jan 20 09:05 builtin.js
-rw-r--r-- 1 csev staff 214624 Jan 20 09:05 skulpt.js

I am writing down some of my steps for my own records, and so that others might benefit if they too want to dive into working on Skulpt.

The first step is to clone the Mercurial repository on Google Code. Here is my clone:

http://code.google.com/r/drchuck-skulpt-mods/

Then I checked out my clone to my laptop:

hg clone https://drchuck@code.google.com/r/drchuck-skulpt-mods/

Most of the operations are run from a shell script called ‘m’ – the first thing you might want to do is run the unit tests to get a baseline:

./m

Yup – it is that simple. There are over 300 unit tests that get run through Skulpt, Python, and Google V8 and have their output compared.

Working with Skulpt

As best I can tell this is a pretty slow-moving project – but it does move, so I felt it was important to document all my work in the Skulpt issue list. Before I worked on something, I wrote an issue in the main Skulpt repo, like Issue 116. Then I would add a comment when my modification was complete in my clone. I hope this gives the people running the Skulpt project the best chance of getting my code back into their repo.

Extending the runtime

If you are going to add a new feature, first you need a bit of Python code to exercise the feature. For round, I wrote this:

f = 2.515
g = round(f,1)
print g

To test this, you run:

./m run rnd.py

Your output will look like this:

-----
f = 2.515
g = round(f,1)
print g

-----
Uncaught: "#", [unnamed] line 61 column 29
/*    61 */                 throw err;
                            ^
dbg> 

It just means that the round function does not work. Modify the files

src/builtindict.js
src/builtin.js

And add the implementation. Here are the changes needed. Ignore the dist and doc diffs and focus on the src diffs. The dist and doc diffs are generated in a bit – they end up in the repo so folks can just grab the dist and doc from the repo without needing to check out the code.
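
To give the flavor of a runtime addition, here is a simplified sketch of what a round() builtin might look like. This is not the actual patch – see the diffs linked above for that – and the helper names (Sk.builtin.asnum$, Sk.builtin.nmber) are from the Skulpt source as I recall it, so check the current code before copying anything:

-----
// In src/builtin.js – a simplified sketch, not the real patch.
// Skulpt wraps numbers in its own objects, so we unwrap the
// arguments and re-wrap the result.
Sk.builtin.round = function round(number, ndigits) {
    var n = (ndigits === undefined) ? 0 : Sk.builtin.asnum$(ndigits);
    var mult = Math.pow(10, n);
    // Note: Math.round and Python's round differ on some edge cases
    var result = Math.round(Sk.builtin.asnum$(number) * mult) / mult;
    return new Sk.builtin.nmber(result, Sk.builtin.nmber.float$);
};

// In src/builtindict.js – register it so Python code can see it:
//     'round': Sk.builtin.round,
-----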

When you make changes, run

./m

Until unit tests pass and then run

./m dist

Until it successfully completes:

...
. Wrote dist/builtin.js
. Updated doc dir
. Wrote dist/skulpt.js.
. gzip of compressed: 50585 bytes

Then re-run your code:

./m runopt rnd.py

Until your code works. You may go a few rounds of edit, unit test, dist, and re-run, but the cycle takes about 20 seconds, so it is not as painful as it sounds. I could not figure out how and exactly when “./m run” looks at the new code in src and when it needs a “./m dist” to pick up new code – so I pretty much do a “./m dist” on every modification.

When everything works and the output you see from “./m run” matches the output of running Python on the same code, you can turn your test code into a new unit test. Run

./m nrt

It brings up vi on a file named with the next available unit test number. Paste in the code from your “rnd.py” and save it. Then run:

./m regentests

Then run

./m
./m dist

You may find little things in each of these steps. Edit your code and/or the unit test until “./m dist” is completely clean. Then I actually copy the two files in dist into my online autograder and do a quick test of the new feature in the browser.

If all goes well, you can use Mercurial to add the unit tests, check that things are OK, and then do a commit and push:

hg add test/run/*322* (do this for each unit test you have added)
hg status
hg commit
hg push

Changing the Language

If you need to change the language (i.e. anything other than the runtime), it is a little trickier. Examples of two language changes I did were:

  • Change the code generator for try / except – this was relatively straightforward because it did not entail a grammar change
  • Add support for quit and exit – I initially thought I could do this by extending the run-time and having them throw an exception that the outer execution loop would catch – but somehow I never got that to work, so I switched to making them part of the language like break, continue, and the other flow operations. If you look at the code, I touched a lot more files in this change – but it should serve as a nice roadmap when you make a grammar change and then have to work through and get all the parsing and code generation working.

The steps I take when making any change to the parser are as follows:

./m regenparser
./m
./m dist
./m runopt quit1.py

Again, I don’t know which changes need which of the above steps, but it seems that a lot of the changes needed a complete “./m dist” before I could test them in my own code – so after a while I just did them all on every change.

The first thing you need to do is get a dump of the generated JavaScript code as part of your testing. I searched in vain for a nice option to make this happen and perhaps there is a better way – but what worked for me was un-commenting some code in “src/import.js”:

--- a/src/import.js	Fri Jan 18 11:03:55 2013 -0500
+++ b/src/import.js	Fri Jan 18 12:20:25 2013 -0500
@@ -84,7 +84,7 @@
  */
 Sk.importModuleInternal_ = function(name, dumpJS, modname, suppliedPyBody)
 {
-    //dumpJS = true;
+    dumpJS = true;
     Sk.importSetUpPath();
 
     // if no module name override, supplied, use default name
@@ -170,7 +170,7 @@
                 return lines.join("\n");
             };
             finalcode = withLineNumbers(co.code);
-//          Sk.debugout(finalcode);
+            Sk.debugout(finalcode);
         }
     }

Make sure not to check these changes in by doing an “hg revert src/import.js” right before the commit and push.

If you make these changes to src/import.js and go through the steps above, you will see a lot of nicely formatted JavaScript flying by in addition to the other output.

Once you have the changes to Skulpt making it past “./m dist”, it is time to test your own code and the new feature. When you do a “./m runopt file.py”, you get a lot of JavaScript output on the terminal. It is a little obtuse – but like the displays in the Matrix – after a while it makes sense. The basic runtime is a while loop containing a switch statement, and each of the code blocks is a case statement. It is like the classic code generator I wrote in 1979. Don’t expect the blocks to be in the same order as the Python source – just look at the “$blk=4” code at the end of each block to see where the code will be going next.

Here is the generated JavaScript from a simple hello world Python program with a few line breaks:

-----
print "Hello world"

-----
/*     1 */ var $scope0=(function($modname){var $blk=0,$exc=[],$gbl={},$loc=$gbl;
    $gbl.__name__=$modname;
    while(true){try{ switch($blk){case 0: /* --- module entry --- */
/*     2 */ //
/*     3 */ // line 1:
/*     4 */ // print "Hello world"
/*     5 */ // ^
/*     6 */ //
/*     7 */ 
/*     8 */ Sk.currLineNo = 1;
/*     9 */ Sk.currColNo = 0
/*    10 */ 
/*    11 */ 
/*    12 */ Sk.currFilename = './hello.py';
/*    13 */ 
/*    14 */ var $str1=new Sk.builtins['str']('Hello world');
    Sk.misceval.print_(new Sk.builtins['str']($str1).v);
    Sk.misceval.print_("\n");return $loc;goog.asserts.fail('unterminated block');} }
    catch(err){if ($exc.length>0) { $blk=$exc.pop(); continue; } else { throw err; }} }});

Hello world
-----

Here is code generated from a more complex Python example with more than one block. I wish I knew how to make the JavaScript prettier when debugging your code. The JavaScript is pretty in the unit tests – but ugly when you do runopt.
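
Since that longer example was easier to link than to paste, here is a hand-written sketch (not actual Skulpt output) of the multi-block shape. Each basic block becomes a case, and a jump is just an assignment to $blk followed by a continue:

-----
// Hand-written sketch of the generated pattern – not real Skulpt output.
// Rough shape of the code for:  while i < 3: print i ; i = i + 1
(function() {
    var $blk = 0, i = 0;
    while (true) {
        switch ($blk) {
        case 0:                                  // loop test
            if (i < 3) { $blk = 1; continue; }   // jump to the body
            $blk = 2; continue;                  // or exit the loop
        case 1:                                  // loop body
            console.log(i);
            i = i + 1;
            $blk = 0; continue;                  // back to the test
        case 2:                                  // code after the loop
            console.log('done');
            return;                              // module exit
        }
    }
})();
-----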

I won’t go through the detailed code modification steps – that is best shown looking at the diffs from my two changes above.

Pulling in merges from other clones

The Skulpt project is pretty slow-moving, so interesting things happen in clones other than the main repo – it is helpful to pull those changes into your own repo. I include how I did this just to help jog my own memory.

Make sure you have any of your changes fully committed and your local repo is clean before you start:

hg incoming https://code.google.com/r/theajp01-skulpt-int-fix/
hg pull https://code.google.com/r/theajp01-skulpt-int-fix/
hg status
hg heads
hg merge
hg diff
./m 
./m dist

If you have a problem with the patches you may need to fix them by editing the files or even adding new unit tests using “./m nrt”. When you are satisfied with the patches, do the following:

hg status
hg commit
hg push

Summary

In short, it has been a fun three days re-learning how compilers work internally. I really like the internal structure of the Skulpt project. It is very impressive and thorough and surprisingly easy to work in. This experience also reinforces my sense of the value of the very deep learning in a Computer Science degree. Some might say that Computer Science students don’t need to learn operating systems or compilers or hardware – but sooner or later someone needs to be able to dig down into these layers and make something work.

Of course not everyone who should learn to program needs to be a trained Computer Scientist. There are plenty of people who just need to know how Python and a few other things work so they can sling data around and connect things together. But it is good to be able to call in a plumber once in a while. And for me, it was fun to go back to my plumber days these past three days.

Thanks to the great folks who built Skulpt, and thanks to my SI301 on-campus students and Python MOOC students for their patience as I worked through this code and the autograder kept breaking in mysterious ways :).

No EPUB3 SI791 Class this Semester

A number of people expressed some interest in my SI791 independent study course covering EPUB3 and writing a Sakai book in EPUB3. Here is the blog post where I announced the course:

http://www.dr-chuck.com/csev-blog/2012/12/si791040-connecting-next-generation-learning-management-systems-and-publishing/

Unfortunately, I bit off more than I could chew over the break when I decided to build my own open source MOOC environment based on IMS LTI and Moodle and teach a Python MOOC on my own infrastructure starting January 14 (yikes).

https://online.dr-chuck.com/

I don’t think I can also pull off three entertaining hours of lecture on EPUB3 every week. I really needed to get a bit of a head start on the class before it started, and all my energy was consumed building MOOC software and learning the Raspberry Pi.

But all is not lost – I talked to Jim Eng and we hope to do a series of meetings at the Library around EPUB3 so we can still come up to speed a bit later in the semester. I am still going to move things gently forward as soon as the semester starts. If we set these up – I will announce them on si.all so SI folks can come if they like.

I still want to start writing the Sakai book on GitHub. I have a possible new strategy after some recent experience with Calibre and EPUB: it occurs to me that the right way to do an EPUB3 might be to write a carefully constrained HTML5 book that allows easy automated conversion into EPUB3 format through Calibre (or similar). If that works, it will be a lot easier than trying to maintain everything in EPUB3 all the time. Maybe this will lead to an EPUB3 “shim” for the HTML5 version of the book… These are just working ideas for now.

Sorry to back out on this…

Abstracts for EuroSakai 2013 – Paris

I sent in two abstracts for EuroSakai 2013 in Paris, January 28-30, 2013. Here are my abstracts:

Experiences with Massive Open Online Courses

Dr. Severance taught the online course “Internet History, Technology, and Security” during 2012 using the Coursera teaching platform. The course had over 56,000 registered students from all over the world and 5,000 received a certificate. In this keynote, we will look at the current trends in teaching and learning technology as well as the technology and pedagogy behind the course, and behind Coursera in general. We will look at the data gathered for the course and talk about what worked well and what could be improved. Dr. Severance is also teaching an independent MOOC, started in January 2013, using technology of his own making, and he will report on those efforts to date. We will also look at emergent effects in the MOOC space, including recent entries and efforts, as well as look toward the future of where MOOCs are headed and what their impact might be. We will also look at possible new architectures for MOOCs and the role of open source in the emerging MOOC ecosystem.

References

[1] http://www.coursera.org/
[2] http://class.stanford.edu/
[3] http://online.dr-chuck.com/ (may be under construction…)

Directions for Standards in Teaching and Learning

This presentation will cover a wide range of topics around evolving standards for teaching and learning. First we will look at the well-established standards of IMS Learning Tools Interoperability (LTI) and Common Cartridge (CC) and look at the market penetration of each. Then we will look at upcoming standards from IMS like LTI 2.0 and give a sense of where they fit into the marketplace. And then, looking beyond IMS, we will look at how EPUB3 may have a very significant impact on the LMS market and examine the overlap between the IMS standards and the IDPF (EPUB3) standards. We will also talk about open source projects around EPUB3 and other content-authoring efforts.

References

[1] http://www.imsglobal.org/
[2] http://www.idpf.org/
[3] http://developers.imsglobal.org/
[4] Readium: Digital Publishing Meets Open Web
http://readium.org/
[5] Bill McCoy (IDPF) – Introducing ePUB3

SI791/040 – Connecting Next Generation Learning Management Systems and Publishing

This is an independent study course where we will be looking closely at the next-generation EPUB3 electronic book publishing format and how it can revolutionize teaching and learning systems as well as the open educational resources (OER) space. EPUB3 is the standard for the next generation of electronic books and includes features similar to those found in Apple’s iBooks Author program. We will also look at blending capabilities from the IMS Common Cartridge into EPUB3. We will also look at the architectures of next-generation learning management systems using the IMS Learning Tools Interoperability Specification, specifically with an eye toward building systems suitable for use in MOOC environments.

We will collaborate with and involve groups like the IMS Global Learning Consortium, the International Digital Publishing Forum, the Connexions Project, OERPUB, Mozilla, INGRAM, UM Libraries, UM ITS, Open Michigan, and others.

Two of the tasks we will undertake to contextualize our inquiry are to write a complete open book in EPUB3 about Sakai and to build an open source EPUB3 editing system in HTML5/JavaScript. We are looking for a diverse group of people, ranging from writers to programmers. Even though I have two contextualizing tasks to get us started, we may take the class in different directions once we have our first few class meetings.

Space will be limited and students will need to apply to join the course. There will be an orientation session where I will answer questions and gauge interest:

Update: I am looking to move the meeting time

The course will meet:

Tuesdays 3:30 – 5:00 – 1265NQ, Starting January 15, 2013

Interested students are also welcome to send me E-Mail with questions.

References:

MOOCs are Really Great! But What’s Next?
http://www.youtube.com/watch?v=p8ZItXwF2ys
http://www.slideshare.net/csev/moocs-are-great-whats-next

Bill McCoy (IDPF) – Introducing ePUB3

Share Everywhere : Create and Share Content with Legs
http://www.slideshare.net/oerpub/share-everywhere-creating-content-with-legs-slideshare

BEA 2012 – EPUB3 is here – are you ready?

Next Generation Learning Platforms

EPUB3: Not Your Father’s EPUB

EPUB3 Demo and Examples

International Digital Publishing Forum
http://idpf.org/

Readium: Digital Publishing Meets Open Web (A free HTML5 EPUB3 Reader)
http://readium.org/

EPUB3 Sample Documents
http://code.google.com/p/epub-samples/downloads/list

Abstract: MOOCs Are Really Great! But What’s Next?

This was an invited presentation at the Dé Onderwijsdagen 2012 – World Trade Center, Rotterdam November 13, 2012.

Dr. Severance taught the online course “Internet History, Technology, and Security” using the Coursera teaching platform. His course started July 23, 2012 and was free to all who wanted to register. The course has over 46,000 registered students from all over the world and 6,000 are on track to complete the course and earn a certificate. In this session, we will look at the current trends in teaching and learning technology as well as the technology and pedagogy behind the course, and behind Coursera in general. We will look at the data gathered for the course and talk about what worked well and what could be improved. Also we will look at some potential long-term effects of the current MOOC efforts. Charles Severance is a Clinical Associate Professor and teaches in the School of Information at the University of Michigan. Charles is a founding faculty member of the Informatics Concentration undergraduate degree program at the University of Michigan. He also works for Blackboard as Sakai Chief Strategist. He also works with the IMS Global Learning Consortium promoting and developing standards for teaching and learning technology. Previously he was the Executive Director of the Sakai Foundation and the Chief Architect of the Sakai Project.

A New Dr. Chuck-Mobile – Toyota Prius

My new Dr. Chuck-Mobile is a Toyota Prius. My 2001 Buick LeSabre has 227,000 miles on it and I wanted a vehicle that gets 50 miles per gallon given how much I drive (about 30,000 miles per year).

A Prius is quite a departure for me. My last *new* car was in 1980 – I don’t even remember what it looked like – it was beige. I have been driving effectively the same “family” of car since 1995. The cars were all some variation of a General Motors full-size vehicle with a 3.8-liter engine. There was a Pontiac Bonneville, several Oldsmobile 88s, and most recently, a Buick LeSabre. Since I drive so many miles, I would purchase these cars with about 100,000 miles on them, drive them until they had about 220,000 miles on them, and sell them. I have driven well over a half-million miles in these cars.

I have been thinking about a Prius for years now. Peter Knoop has a Prius, Michael Korcuska has a Prius, and Joseph Hardin has a Prius. I had been looking at used Prius prices and found that their resale value was very high. I never saw what seemed to be a bargain price. A Prius with 100,000 miles is worth $12,000 – so it looked like a new Prius was the best value.

A few months back, my car was in the shop, so I rented a Prius for three days and fell in love with the car. I was amazed at the intelligence of the power management system and was able to verify the gas mileage in real-world driving conditions.

I am saving $0.10 every mile I drive the car. That should save me $250.00 per month in real, in-my-pocket savings. The fuel savings almost make the car payment. If I drive the car 200,000 miles, it will save me $20,000 – pretty impressive. My motorcycle also gets 50 miles per gallon, so with the Prius all my vehicles get 50 miles per gallon. Pretty cool.

Some asked me why I did not purchase a Chevy Volt. The Volt is really pretty and I love the notion of a plug-in vehicle. But I drive 120 miles every day, and the Volt runs out of charge at about 30 miles. So for me the key factor in the Volt is the mileage it gets when the gas engine is running. The savings you gain while running battery-only are quickly lost when you are running in hybrid mode. Since most of my travel is long distances, the nod goes to the Prius.

All The World’s a Classroom

This is a repost of my UMSI Monthly article of the same title –
http://monthly.si.umich.edu/2012/10/18/all-the-worlds-a-classroom/

This summer it was my great pleasure to teach an online non-credit course titled Internet History, Technology, and Security to students around the world at no cost using the Coursera platform for large-scale online courses. Over 49,000 students registered for the free class, over 16,000 attended the first week’s lecture, and over 4,900 students earned a certificate at the end of the 10-week course. It would take 32 years of teaching our SI502 foundations course on Networked Computing to interact with that many students.

For that first course, I chose to use the several weeks of SI502 that focus on how the network was built over time and how it functions today, and expanded that material to become a 10-week course. I chose this material because it is fun, engaging, and very well suited to a video format. But more importantly, I wanted to create a course about technology that would be accessible to learners of all levels and all languages around the world. I also wanted a course that showcased the School of Information’s core competency of “connecting people, information, and technology in more valuable ways”.

The course started by looking at the code-breaking efforts during World War II in the United States and the UK. It was a perfect example of having lots of data and using computing to transform that raw, encrypted, and seemingly meaningless data first into information and then ultimately into knowledge. Because of the heavy use of advanced encryption techniques for wartime communications, high-speed computation devices were developed to “crack” the encryption. Initially those devices were electromechanical; later, to increase their speed, the first electronic computers were invented and built under top-secret wartime conditions at Bletchley Park. Bletchley Park was a beehive of activity with over 10,000 people, many thousands of encrypted messages (information), and hundreds of computers working 24 hours per day (technology).

The course followed the history through the post-war period and on to the current day, featuring interviews with many innovators, ranging from the co-inventor of the World-Wide-Web (Robert Cailliau) to the founder of Amazon (Jeff Bezos).

Once we had viewed the Internet through a historical lens, we went back and took another look at the Internet through a more technical lens, examining how packets work and then the Link Layer, the Internetwork Layer (IP), the Transport Layer (TCP), and the Application Layer.

I saw this course as far more than just another course. To me it represented so much of what it means to be part of the School of Information at the University of Michigan, and I tried to reflect the values of SI throughout the course material and in how I approached and taught the course. I wanted to make all of the technical material in the course accessible to learners of all levels. The course lecture and video materials were translated by the students (crowd sourcing) into over 30 languages; we had students from all over the world, with nearly every single country represented. I made sure to teach the course in a way that would be accessible to non-English speakers as well as those with slow or unreliable network connections.

Another exciting part of the course was how the students became a self-organizing social learning community. With over 10,000 students active throughout most of the course, there was literally no way that I, as the faculty member, could help each individual student with a technical issue or a problem understanding the materials. The students were amazingly wonderful at helping each other, forming study groups, and some even took the initiative to produce supporting course materials and reading lists for the class. Because of so much proactive student involvement, my workload was surprisingly low.

One of the issues in online courses is the sense of loneliness and isolation. One experiment I tried was to have “office hours” in various cities as I travelled in the late summer and early fall. I had office hours in New York City; Los Angeles; Wilmington, NC; Ann Arbor; Chicago; Memphis; Washington, DC; and Seattle, WA. The office hours had 2-15 students show up in a local coffee shop, and we talked about the class and how it could be improved. The students thought it was cool to meet their online instructor and felt I was being very giving of my time. But in actuality I did the office hours because I wanted to see and meet my students – or at least some of them. It helped me maintain my own motivation to know that my students were real and not just numbers and data inside of a computer. I learned so much about how to better teach the course from these interactions. I have upcoming office hours in Seoul, South Korea; Barcelona, Spain; Denver, Colorado; and Amsterdam as part of my travel plans for the fall.

Students who completed the course with a passing score will receive an online certificate of achievement from Coursera. Students can print out the certificate or link to it in their resumes. I decided to go a step further and offer to sign their certificates if they would send them to me at the School of Information with a self-addressed stamped envelope. I have warned the Dean’s office that they might be receiving 4,900 pieces of mail for me over the next few months. Like everything in the course, for me this is just another experiment in how far we can expand the boundaries of this new form of interacting in the context of teaching the world.

I did a summary lecture for the course and put it up on YouTube; you are welcome to watch it to see my reflections on the course as well as a presentation of the student demographics and retention statistics. You can also take a look at an interactive map of the geographic distribution of the students in my course from a blog post that I wrote.

And if you found this interesting, the course will be offered again soon and you are welcome to sign up and join us online. I hope to see you on the net.

The University as a Cloud: Openness in Education

The University as a Cloud: Openness in Education
Dr. Charles Severance
University of Michigan School of Information

This is an extended abstract for a keynote I recently gave in Andorra. The slides for the presentation are available on SlideShare.

Warning: I caution you to read this with an understanding that this is not a precise academic treatise backed by deep facts, figures, and data. It is an exaggerated perspective that is intended to make people think. I would also point out that I have little respect for keynote speakers who simply say “all is lost”. For me, this is not an “all is lost” or “higher education will die a horrible death in the next five years” keynote. Frankly – those gloom-and-doom keynotes are a dime-a-dozen, uncreative, and complete crap. I think that those faculty and universities that see the trends I am talking about and evolve to a more open approach to their teaching, research, and engagement with their stakeholders will thrive. So this is an optimistic outlook even though it seems rather negative in the first part.


In this keynote, I explored the nature of the changing relationship between universities and their governments and proposed ideas as to how faculty may need to evolve their teaching and research strategies over the next decade or so to ensure a successful career for themselves as well as continued success for higher education in general. The keynote was not intended as a presentation of the facts of the current situation – but rather to get people talking about why openness may soon move from optional to nearly required in higher education.

Up to the 1900s, the world was relatively unconnected and higher education was available only to a select wealthy few. The pace of research and innovation was relatively slow, and much of the academic research was focused on investigating the natural world and gaining increasing understanding in areas like physics, electricity, civil engineering, and metallurgy. Efforts like the Eiffel Tower, electric light, the first powered flight, radio, and the Titanic were the “grand challenges” of the time.

World War II caused a great deal of research into how the “technology” discoveries of the 1800s and early 1900s could be used to gain an advantage in war. If you compare trench warfare with rifles at the beginning of World War I to the aircraft carriers, jet engines, rockets, and nuclear weapons at the end of World War II in the 1940s, it was an amazing collaboration between governments, scientists, and engineers to develop such sophisticated technologies in so short a period.

After the war it was clear that the “winner” was the side with the quickest minds and the most scientists. Most countries around the world came to see higher education, particularly research-oriented higher education, as essential to national security going forward. This resulted in a great expansion of the size and number of research universities around the world. Research funds were available from governments to reward successful scientists and support the next generation of PhD students.

In order to break the Axis encryption during World War II, scientists at Bletchley Park and elsewhere invented the first high-speed electronic computers and demonstrated the strategic importance of efficient communication and organizing and processing large amounts of information. In the United States after the war, the National Science Foundation was created to fund research into these new technologies. The military also made direct investments in funding academic research through agencies like the United States Advanced Research Projects Agency (ARPA).

From the 1950s through 2000, governments saw significant payoff from investing in academic research, as academics provided solutions to problems first in computing technology, then in networking technology as embodied in the Internet, and then in information management as embodied in the World-Wide-Web. Academic research was an essential element of the progress in the last half of the twentieth century. Academics funded through agencies like the National Science Foundation and the Advanced Research Projects Agency (ARPA) performed research that was of strategic value to governments around the world.

With the bounty of research funding during the latter part of the 20th century, there was an increasing need to efficiently allocate research funds to the most worthy scientists. An extensive network of peer-reviewed journals and high-quality conferences is used to award research funds to those who are most worthy. In this environment, a research faculty member must spend their early career furiously publishing in journals to develop a sufficient reputation within their field to assure a regular stream of research funding. Those who successfully navigate this gauntlet are rewarded with tenure.

After the year 2000, with computing, networking, and the web well developed, there are fewer research areas that are of strategic value to governments. Research funding continues, albeit at lower levels, with a focus on research that benefits society more broadly, such as health-related areas. While government-funded research is not going away, the overall level of investment will naturally be lower if research is not seen as a strategic military priority as it was in the second half of the 20th century.

Furthermore, as the Internet, the web, and other technologies have led to increasingly global manufacturing and markets, the first-world countries are experiencing a slow decline in their economic advantage relative to the rest of the world. This has put governments under pressure to spend shrinking public funds on the most pressing needs facing society. Around the world the general trend is for governments to reduce the public funding available to support teaching and research.

It is not likely that these reductions in government teaching and research funding levels are “temporary”. It is more likely that universities and colleges will need to rely more on other sources of funding in the long-term. These other sources will likely come from three areas: (a) tuition, (b) donations from alumni, and (c) research funding from industry.

If we assume this hypothesis for the sake of argument, then what would be the best strategy for faculty and administrators to survive and thrive in this new funding environment?

The simple answer that I think will become an increasing element of higher education’s strategy is openness. Higher education must more directly demonstrate its value to governments, students, private industry, and the rest of society. The number of journal papers, which for the past 60 years has been the most important measure of the value of a faculty member, will have far less value for this new set of stakeholders and possible funders of higher education. Faculty must get their ideas into the open, and do so quickly, to have real impact on the real problems facing our world today. Hiding the best teachers at a university in small face-to-face classrooms will also not lead to broadly perceived value for the faculty members and the institution.

Higher education must compete in public and in the open. The recent excitement surrounding Massive Open Online Courses (MOOCs) is but one example of how being open and doing something for free shows the value of universities to a broader populace. This is an example of helping amazingly talented teachers at outstanding schools teach in the open – and it has had a very positive worldwide impact for those universities that have participated in the current MOOC efforts.

We need to see this kind of direct openness between universities and their faculty with the rest of society in research areas as well. Ideas need to be made more public and communicated clearly in a way that all people can understand them – rather than cloaked in obtuse mathematics in dusty journals.

Of course the trends I describe in the keynote and this paper are exaggerated to make my point. Nothing moves that quickly, so I am not suggesting that young faculty immediately stop writing journal articles. That would be harmful to their careers and tenure cases because new systems of determining value and measuring impact are not yet in place at most universities. But young faculty should begin to complement their traditional research and teaching output with more open forms of teaching and publication so they are prepared to participate in new ways of demonstrating value to society in the future.

Comments welcome.

P.S. I was invited to expand on this notion as an invited submission to a special issue of a journal. That would be ironic. A journal article about the shrinking value of journal articles.

Sakai / Jasig Consolidation Vote Passes

Here is a note from Ian Dolphin sent to the Sakai announcements list:

I would like to report the results of the recent ballots on the merger of the Sakai Foundation and Jasig.

Sakai
58 Members voted for the merger, 3 voted against, and 3 abstained. 13 Members did not register a vote.

Jasig
40 Members voted for the merger, 1 voted against, and 1 abstained. 5 Members did not register a vote.

Thanks to organisational representatives in both organisations who took the time to appreciate the issues and cast their vote. Minutes of the teleconferences which took place last week, together with a voting record, will be made available shortly.

We will now proceed with the remaining legal steps to bring the two organisations together as the Apereo Foundation. Further announcements of progress will be made in coming weeks.

This is an exciting development, and I think it lays the groundwork for significantly stronger higher education participation in open source activities over the long term.