The University as a Cloud: Openness in Education

Dr. Charles Severance
University of Michigan School of Information

This is an extended abstract for a keynote I recently gave in Andorra. The slides for the presentation are available on SlideShare.

Warning: I caution you to read this with an understanding that this is not a precise academic treatise backed by deep facts, figures, and data. It is an exaggerated perspective that is intended to make people think. I would also point out that I have little respect for keynote speakers who simply say “all is lost”. For me, this is not an “all is lost” or “higher education will die a horrible death in the next five years” keynote. Frankly – those gloom-and-doom keynotes are a dime-a-dozen, uncreative, and complete crap. I think that those faculty and universities that see the trends I am talking about and evolve to a more open approach to their teaching, research, and engagement with their stakeholders will thrive. So this is an optimistic outlook even though it seems rather negative in the first part.


In this keynote, I explored the nature of the changing relationship between universities and their governments and proposed ideas as to how faculty may need to evolve their teaching and research strategies over the next decade or so to ensure a successful career for themselves as well as continued success for higher education in general. The keynote was not intended as a presentation of the facts of the current situation – but rather to get people talking about why openness may soon move from optional to nearly required in higher education.

Up until the 1900s, the world was relatively unconnected and higher education was available only to a select wealthy few. The pace of research and innovation was relatively slow, and much of the academic research focused on investigating the natural world and gaining increasing understanding in areas like physics, electricity, civil engineering, and metallurgy. Efforts like the Eiffel Tower, electric light, the first powered flight, radio, and the Titanic were the “grand challenges” of the time.

World War II drove a great deal of research into how the “technology” discoveries of the 1800s and early 1900s could be used to gain an advantage in war. If you compare the trench warfare and rifles at the beginning of World War I with the aircraft carriers, jet engines, rockets, and nuclear weapons at the end of World War II in the 1940s, it is clear that an amazing collaboration between governments, scientists, and engineers developed these sophisticated technologies in a remarkably short period.

After the war it was clear that the “winner” was the side with the quickest minds and the most scientists. Most countries around the world came to see higher education, particularly research-oriented higher education, as essential to national security going forward. This resulted in a great expansion of the size and number of research universities around the world. Research funds were made available by governments to reward successful scientists and support the next generation of Ph.D. students.

In order to break the Axis encryption during World War II, scientists at Bletchley Park and elsewhere invented the first high-speed electronic computers and demonstrated the strategic importance of efficient communication and organizing and processing large amounts of information. In the United States after the war, the National Science Foundation was created to fund research into these new technologies. The military also made direct investments in funding academic research through agencies like the United States Advanced Research Projects Agency (ARPA).

From the 1950’s through 2000, governments saw significant payoff in investing in academic research as academics provided solutions to problems first in computing technology, then networking technology as embodied in the Internet and then advances in information management as embodied in the World-Wide-Web. Academic research was an essential element of the progress in the last half of the twentieth century. Academics funded through agencies like the National Science Foundation and Advanced Research Projects Agency (ARPA) performed research that was of strategic value to governments around the world.

With the bounty of research funding during the latter part of the 20th century came an increasing need to efficiently allocate research funds to the most worthy scientists. An extensive network of peer-reviewed journals and high-quality conferences emerged to help decide who was most deserving. In this environment, a research faculty member must spend their early career furiously publishing in journals to build a reputation within their field sufficient to assure a regular stream of research funding. Those who successfully navigate this gauntlet are rewarded with tenure.

After the year 2000, with computing, networking, and the web well developed, there are fewer research areas of strategic value to governments. Research funding continues, albeit at lower levels, with a focus on research that benefits society more broadly, such as health-related areas. While government-funded research is not going away, the overall level of investment will naturally be lower when research is not seen as a strategic military priority, as it was in the second half of the 20th century.

Furthermore, as the Internet, the web, and other technologies have led to increasingly global manufacturing and markets, the first-world countries are experiencing a slow decline in their economic advantage relative to the rest of the world. This has put governments under pressure to spend shrinking public funds on the most pressing needs facing society. Around the world, the general trend is for governments to reduce the public funding available to support teaching and research.

It is not likely that these reductions in government teaching and research funding levels are “temporary”. It is more likely that universities and colleges will need to rely more on other sources of funding in the long-term. These other sources will likely come from three areas: (a) tuition, (b) donations from alumni, and (c) research funding from industry.

If we assume this hypothesis for the sake of argument, then what would be the best strategy for faculty and administrators to survive and thrive in this new funding environment?

The simple answer that I think will become an increasing element of higher education’s strategy is openness. Higher education must more directly demonstrate its value to governments, students, private industry, and the rest of society. The number of journal papers, which for the past 60 years has been the most important measure of a faculty member’s value, will matter far less to this new set of stakeholders and possible funders of higher education. Faculty must get their ideas into the open, and do so quickly, to have real impact on the real problems facing our world today. Hiding the best teachers at a university in small face-to-face classrooms will also not lead to broadly perceived value for the faculty members and the institution.

Higher education must compete in public and in the open. The recent excitement surrounding Massive Open Online Courses (MOOCs) is but one example of how being open and doing something for free shows the value of universities to a broader populace. This is an example of helping amazingly talented teachers at outstanding schools teach in the open – and it has had a very positive worldwide impact for those universities that have participated in the current MOOC efforts.

We need to see this kind of direct openness between universities and their faculty with the rest of society in research areas as well. Ideas need to be made more public and communicated clearly in a way that all people can understand them – rather than cloaked in obtuse mathematics in dusty journals.

Of course the trends I describe in the keynote and this paper are exaggerated to make my point. Nothing moves so quickly, so I am not suggesting that young faculty immediately stop writing journal articles. That would be harmful to their careers and tenure because the new systems of determining value and measuring impact are not in place at most universities. But young faculty should begin to complement their traditional research and teaching output with more open forms of teaching and publication so they are prepared to participate in new ways of demonstrating value to society in the future.

Comments welcome.

P.S. I was invited to expand on this notion as an invited submission to a special issue of a journal. That would be ironic. A journal article about the shrinking value of journal articles.

Sakai / Jasig Consolidation Vote Passes

Here is a note from Ian Dolphin sent to the Sakai announcements list:

I would like to report the results of the recent ballots on the merger of the Sakai Foundation and Jasig.

Sakai
58 Members voted for the merger, 3 voted against, and 3 abstained. 13 Members did not register a vote.

Jasig
40 Members voted for the merger, 1 voted against, and 1 abstained. 5 Members did not register a vote.

Thanks to organisational representatives in both organisations who took the time to appreciate the issues and cast their vote. Minutes of the teleconferences which took place last week, together with a voting record, will be made available shortly.

We will now proceed with the remaining legal steps to bring the two organisations together as the Apereo Foundation. Further announcements of progress will be made in coming weeks.

This is an exciting development and I think lays the groundwork for a significantly stronger higher education participation in open source activities in the long term.

Why I Support the Consolidation Of Sakai and Jasig into Apereo

I have been thinking about writing a blog post as to why I am a strong supporter of the consolidation of the two foundations into Apereo. But then Steve Swinsburg did an outstanding job of summarizing the issues and the actions needed. Please read his blog post here:

http://steveswinsburg.wordpress.com/2012/10/27/why-i-support-the-jasig-sakai-consolidation/

In short, Steve’s post points out:

  • Financial savings
  • Coordinated conferences
  • Greater sustainability that comes from less cost across both organizations

I agree with everything Steve says.

The reasons Steve points out in his post would be reason enough to do the consolidation – but to me, as important as those reasons are, there is an even more important reason to do it.

We in higher education need a foundation that allows for straightforward expansion and the bringing in of new projects. For example, I would love to see projects like this have an annual conference and a solid foundation to ensure the long-term ownership of their source code, incubation, and a community of like-minded individuals to help advance their causes. Here are some projects that need a foundation:

P.S. These projects are *just* examples – they may never be part of Apereo – I just list them as efforts that might someday benefit from a foundation to hold the IP. They are *just examples*.

In a sense, none of these projects would join the Sakai Foundation because Sakai means a particular LMS. And they would not join Jasig because Jasig is not about learning.

Apereo *is* something they would, could, and should join.

This is why the combination of Sakai and Jasig is far more valuable to us than the two separate organizations. This is not just about saving money and being more efficient – this is about building an amazing portfolio of open source projects that work together under a large tent.

Others feel that this somehow changes the Sakai “brand” – nothing could be further from the truth. Apereo is just a foundation – it is not the projects. Sakai, uPortal, uMobile, Class2Go, CAS, OpenCast, etc. will be *the* brands. Apereo will always be the background brand / holding organization – much like Jasig is already the “background brand” behind uPortal. This does not harm the Sakai brand in any way – to me it enhances it because we can make “Sakai” mean an LMS product and really refine the brand going forward.

Please look at Steve’s post above to understand how to vote and do so *right now*. An abstain vote is a no vote.

While I am very supportive of the consolidation – I want the members to make the decision explicitly – so please vote. Don’t make the decision by inaction.

(sorry for the typos – I am jumping on a plane to Barcelona)

Installing Stanford’s Open Source Class2Go Large-Scale Open Teaching Environment

I have been watching the commits go by on Stanford’s Class2Go software and really liked the high level of activity by Sef, Jason, and the rest of the team. It is fun to watch a project like this even when the team is small, sprinting forward, and without much time to build a community. Starting from day one in the open is simply great – we can learn so much even if all we do is watch.

Of course I tweeted about how cool it was to watch and threatened to write some code and contribute it. Then Jane Manning suggested via Twitter that I actually try to add IMS Learning Tools Interoperability (LTI) to it.

I didn’t teach SI502 until 1PM today and I had a full cup of coffee so I figured I would give it a try. About 90 minutes later I have Class2Go running on my laptop.

Along the way I figured out that Class2Go already has IMS LTI built in as its method to integrate the Piazza discussion forum. But I still figure I can hack a nice “External Tool” capability into Class2Go and do LTI 1.1 grade integration, since I bet they don’t have that yet. Here are my results so far:

The following are my notes as I went along, adjusting what the Stanford documentation told me to do. My step numbers follow their step numbers, so it is probably best to scroll through their documentation and my steps at the same time.

Update October 15, 2012: I submitted improvements to the Stanford instructions and they are now updated to reflect everything I say below. So ignore my notes below and just look at the new-and-improved Stanford instructions – they should work.


Stanford's Instructions:
https://github.com/Stanford-Online/class2go/blob/master/README_SETUP.md

Chuck's Notes:  Update: My notes below have now been integrated into the above instructions.

1. Upgrade Xcode to 4.5.1 via App Store

2. Within XCode, add the command line tools: 
Preferences -> Downloads -> "Command Line Tools" Install button
(tiny typo in their instructions)

3.  Homebrew URL in their instructions - gives 404

So I went here:   http://mxcl.github.com/homebrew/

And did this:

ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)"

Afterwards I ran :

brew doctor

4. I skipped this step and did not install Python - I already had 2.7.1 and 
hoped I would be OK.

5. I already had mysql so I skipped this step

6. sudo easy_install pip

7. sudo pip install virtualenv

8. Here is what I saw

csev$ pwd
/Users/csev/dev/class2go

virtualenv sophi-venv --no-site-packages  
New python executable in sophi-venv/bin/python
Installing setuptools............done.
Installing pip...............done.

9. Here is what I saw

 . ./sophi-venv/bin/activate
(sophi-venv):class2go csev$ 

10. Here is what I saw

(sophi-venv):class2go csev$ pip install django
Downloading/unpacking django
  Downloading Django-1.4.1.tar.gz (7.7MB): 7.7MB downloaded
  Running setup.py egg_info for package django
    
Installing collected packages: django
  Running setup.py install for django
    changing mode of build/scripts-2.7/django-admin.py from 644 to 755
    
    changing mode of /Users/csev/dev/class2go/sophi-venv/bin/django-admin.py to 755
Successfully installed django
Cleaning up...
(sophi-venv):class2go csev$ 

11 - 15 - Ran just fine - there were lots of compiler warnings - but no errors so 
I pressed on

16 - ?? I am sure this will bite me much later when I try to 
integrate with Google Apps :)

17 - Ran just fine

This is missing prior to step 18

In Mysql:

create database class2go;
grant all on class2go.* to class2go@'localhost' identified by 'class2gopw';
grant all on class2go.* to class2go@'127.0.0.1' identified by 'class2gopw';

As root:

sudo mkdir /var/log/django/
sudo chmod 777 /var/log/django/

mkdir /Users/csev/dev/class2go/sqlite3/

Must set up databases.py from databases_example.py

cd main

databases.py:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql', 
        'NAME': 'class2go',                      
        'USER': 'class2go',                    
        'PASSWORD': 'class2gopw',        
        'HOST': '',                 
        'PORT': '',       
    },
    'celery': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': '/Users/csev/dev/class2go/sqlite3/celerydb.sqlite',
    },
}

18. Set up initial db.

./manage.py syncdb 

This failed the first time I ran it; I ran it again and it did not fail.
Not sure if it really worked, but I pressed on. This is the output
I saw the second time it ran:

(sophi-venv):main csev$ ./manage.py syncdb
Syncing...
Creating tables ...
Installing custom SQL ...
Installing indexes ...
Installed 0 object(s) from 0 fixture(s)

Synced:
 > django.contrib.auth
 > django.contrib.contenttypes
 > django.contrib.sessions
 > django.contrib.sites
 > django.contrib.messages
 > django.contrib.staticfiles
 > django.contrib.admin
 > django.contrib.admindocs
 > registration
 > south
 > courses
 > courses.forums
 > courses.announcements
 > courses.videos
 > courses.video_exercises
 > courses.email_members
 > khan
 > problemsets
 > django.contrib.flatpages
 > storages
 > celerytest
 > convenience_redirect
 > exception_snippet
 > db_test_data

Not synced (use migrations):
 - djcelery
 - c2g

These worked:

    ./manage.py syncdb --database=celery
    ./manage.py migrate --database=celery

This is what I saw in MySQL:

mysql> show tables;
+----------------------------------+
| Tables_in_class2go               |
+----------------------------------+
| auth_group                       |
| auth_group_permissions           |
| auth_permission                  |
| auth_user                        |
| auth_user_groups                 |
| auth_user_user_permissions       |
| c2g_additional_pages             |
| c2g_announcements                |
| c2g_content_sections             |
| c2g_course_emails                |
| c2g_courses                      |
| c2g_courses_share_to             |
| c2g_emailaddr                    |
| c2g_exercises                    |
| c2g_files                        |
| c2g_institutions                 |
| c2g_listemail                    |
| c2g_mailinglist                  |
| c2g_mailinglist_members          |
| c2g_news_events                  |
| c2g_page_visit_log               |
| c2g_problem_activity             |
| c2g_problem_sets                 |
| c2g_problemset_to_exercise       |
| c2g_sections                     |
| c2g_sections_members             |
| c2g_user_profiles                |
| c2g_user_profiles_institutions   |
| c2g_video_activity               |
| c2g_video_to_exercise            |
| c2g_video_view_traces            |
| c2g_videos                       |
| celery_taskmeta                  |
| celery_tasksetmeta               |
| django_admin_log                 |
| django_content_type              |
| django_flatpage                  |
| django_flatpage_sites            |
| django_session                   |
| django_site                      |
| djcelery_crontabschedule         |
| djcelery_intervalschedule        |
| djcelery_periodictask            |
| djcelery_periodictasks           |
| djcelery_taskstate               |
| djcelery_workerstate             |
| registration_registrationprofile |
| south_migrationhistory           |
+----------------------------------+
48 rows in set (0.01 sec)

mysql> 

(sophi-venv): csev$  cd main
(sophi-venv):main csev$ python manage.py runserver 8100

Validating models...

0 errors found
Django version 1.4.1, using settings 'settings'
Development server is running at http://127.0.0.1:8100/
Quit the server with CONTROL-C.

Navigate to http://localhost:8100 in a browser and start hacking :)

Abstract: Emerging connections between content, SW and platform – Learning Tools Interoperability

This is an abstract for a keynote speech I will be giving in Korea October 24 – Smart on ICT International Open Forum 2012

The IMS Learning Tools Interoperability standard (www.imsglobal.org/lti) greatly reduces the effort required to integrate an externally hosted learning tool into nearly all of the mainstream learning management systems (Blackboard, Desire2Learn, Moodle, Canvas, Sakai, OLAT, and others). IMS Learning Tools Interoperability uses the OAuth protocol to send identity, course, user, and role data to the external tool. LTI gives those who would build innovative tools for teaching and learning an unprecedented simplicity in plugging their tool into any number of different learning management systems. We will look at the LTI standard, how it is implemented, and the next steps in the evolution of its capabilities.
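As a sketch of what this looks like on the wire, here is a minimal LTI 1.1-style launch signer using only the Python standard library. The tool URL, key, secret, and launch values below are made up for illustration; a real tool would typically lean on an OAuth library, but the core is just an OAuth 1.0a HMAC-SHA1 signature over the POSTed launch parameters.

```python
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote

def sign_lti_launch(url, params, key, secret):
    """Sign a basic LTI launch with OAuth 1.0a HMAC-SHA1."""
    oauth = {
        'oauth_consumer_key': key,
        'oauth_nonce': uuid.uuid4().hex,
        'oauth_timestamp': str(int(time.time())),
        'oauth_signature_method': 'HMAC-SHA1',
        'oauth_version': '1.0',
    }
    all_params = dict(params, **oauth)
    # Signature base string: method, URL, and the sorted, encoded parameters
    pairs = sorted((quote(k, safe=''), quote(v, safe=''))
                   for k, v in all_params.items())
    normalized = '&'.join('%s=%s' % pair for pair in pairs)
    base = '&'.join(['POST', quote(url, safe=''), quote(normalized, safe='')])
    # The HMAC key is the consumer secret plus '&' (LTI uses no token secret)
    digest = hmac.new((quote(secret, safe='') + '&').encode(),
                      base.encode(), hashlib.sha1).digest()
    all_params['oauth_signature'] = base64.b64encode(digest).decode()
    return all_params

# Hypothetical launch - the URL, key, and secret are invented for this sketch
launch = sign_lti_launch(
    'https://tool.example.com/launch',
    {'lti_message_type': 'basic-lti-launch-request',
     'lti_version': 'LTI-1p0',
     'resource_link_id': 'si502-week1',
     'user_id': '292832126',
     'roles': 'Instructor'},
    'testkey', 'testsecret')
```

The LMS POSTs the signed parameters to the tool, and the tool recomputes the same signature with its copy of the shared secret to verify the launch.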

Grand Finale Lecture: Internet History, Technology, and Security on Coursera

I like the tradition of having Coursera instructors do some kind of post-class wrap-up reflecting on their courses. I saw some of the materials from the wrap-up presentation for Scott Klemmer’s Stanford HCI course on Coursera and, based on his comments, changed how I approached my Internet History, Technology, and Security class. I figured that my job, like Scott’s, is to “pay it forward” and share my insights and thoughts with the next round of Coursera courses and other experiments in teaching MOOCs.

In this lecture I go through survey data taken during the course, student performance throughout the course, and maps of student locations around the world as well as where the students went to school. The lecture also reflects on what worked well and what could be improved, takes a peek toward the future including possible new courses, and ends with a bit of fun and a few little surprises. There were 4595 certificates of completion awarded in the course.


Taped October 1, 2012 after the course was completed. Comments welcome.

Visualizing the Geographic Distribution of my Coursera Course

As part of my Internet History, Technology, and Security course on Coursera I did a demographics survey and received 4701 responses from my students.

I will publish all the data in a recorded lecture summarizing the class, but I wanted to give a sneak preview of some of the geographic data results because the Python code to retrieve the data was fun to build. Click on each image to play with a zoomable map of the visualized data in a new window. At the end of the post, I describe how the data was gathered, processed and visualized.

Where are you taking the class from (State/Country)?

If you went to college or are currently going to college, what is the name of your college or university?

The second graph is naturally more detailed, since the first question asked students to reduce their answer to a state or country while the second asked about a particular university. The data is noisy because it is all user-entered, with no human cleanup.

Gathering the data

Both fields were open-ended (i.e. the user was not picking from a drop-down). I had no idea how I would ever clean up the data, but when I got 4701 responses I took a look around and realized that my students were from a lot of places. On a lark Friday morning I went looking for the Yahoo! Geocoding API, which I had heard about several years ago at a Yahoo! hackathon on the UM campus where I met Rasmus Lerdorf – the inventor of PHP. I was disappointed to find out that Y! was out of the geocoding business. But I was pleased to find that Google’s Geocoding API provided the same functionality and was available and easy to use.

So I set out to write a spider in Python that would go through the user-entered data, submit it to the geocoder lookup API, and retrieve the results. I used a local SQLite3 database to make sure that I only looked up each unique string once. I had two data sets with nearly 6000 items total, and the Google API stops you after 2500 queries in a 24-hour period, so it took three days to get all the data geocoded.
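The caching is the interesting part, so here is a sketch (not the actual course code) of a geocoder that remembers every string it has already looked up in SQLite, so a re-run after hitting the daily quota never repeats a query. The fetch function is left pluggable; in a real spider it would be an HTTP call to the geocoding API.

```python
import json
import sqlite3

def make_geocoder(conn, fetch):
    """Build a geocode(address) function that caches results in SQLite.

    fetch(address) should return the raw JSON string for an address;
    it is called only on a cache miss, so each unique string costs
    exactly one API query no matter how many times the spider re-runs.
    """
    conn.execute('CREATE TABLE IF NOT EXISTS cache '
                 '(address TEXT PRIMARY KEY, geodata TEXT)')

    def geocode(address):
        row = conn.execute('SELECT geodata FROM cache WHERE address = ?',
                           (address,)).fetchone()
        if row is not None:
            return json.loads(row[0])      # cache hit - no API call
        data = fetch(address)              # cache miss - one API call
        conn.execute('INSERT INTO cache (address, geodata) VALUES (?, ?)',
                     (address, data))
        conn.commit()
        return json.loads(data)

    return geocode
```

With a quota like 2500 queries per day, the outer loop would simply stop once the API begins refusing requests and pick up where it left off on the next run, because everything already fetched is sitting in the cache table.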

I did not clean up the data at all – I just submitted the user-entered text to Google’s API and took back what it said. Then I used Google’s Maps API for Javascript to produce the zoomable maps.

If you are curious about the nature of the spider, I adapted the code from the sample code in chapters 12-14 in my Python for Informatics textbook.

Code Review Requested: Coursera IHTS Grading Algorithm

This is a bit of a weird blog post. In my Internet History, Technology, and Security Coursera course, I adjusted the grading policy as the course went along as events happened. I was not pleased with the rubric on my week 2 assignment so I gave all full credit. We had two peer-graded extra credit assignments.

I ended up putting up a second copy of the final exam (hence two exam columns) because some students reported that something went wrong (hard to verify these things) with the final when they took it the first time. I made it abundantly clear that the second final was only for students who had technical difficulties with the first final. For those who took the second final despite having a reasonable score on the first exam (19 students) – I decided *not* to add the extra five-point penalty after looking at the patterns in the data, and merely took the lower of the two exam scores. For those students with some obvious technical, internet, user-error, or timing problem on the first exam (67 students) – the second exam was *very much needed*. I will share all the data with Coursera tech support and we can dig through logs to try to narrow down what might have gone wrong and see what we can learn.

This all ends up in a Python program to do the grading. I am putting it up for code review for a few days to see if anyone sees a bug before this little bit-o-code decides who gets certificates. I include the code and some sample output with names removed. You may need to make your screen quite wide or just view source to get the real data.

Code

fh = open('Gradebook.csv')
grades = list()
for line in fh:
    line = line.rstrip()
    fields = line.split(';')

    for i in range(len(fields)) :
        if len(fields[i]) < 1 : fields[i] = '0'

    # Skip header lines
    try : id = int(fields[0])
    except: 
        print line
        continue

    name = fields[1]
    name = name.replace('"','')

    # field[2] : "Late Days Left"
    # field[3] : "Optional: Demographic Survey"
    # field[4] : "Optional: Propose Final Exam Questions"

    quiz = 0
    # Quizzes for all 1, 3-7 weeks (Week 2 was field[13])
    for i in [5, 6, 7, 8, 9, 10] :
        quiz = quiz + float(fields[i])

    # Exams
    exam1 = float(fields[11])
    exam2 = float(fields[12])

    if ( exam2 == 0.0 ) :
        exam = exam1
    elif ( exam1 <= 10 and exam2 > exam1 ) :
        # print 'Second exam OK ', id, name, exam1, exam2
        exam = exam2
    else:
        # print 'Second exam penalty ', id, name, exam1, exam2
        exam = exam1
        if exam2 < exam1 : exam = exam2
        # exam = exam - 5  # penalty
        if exam < 0 : exam = 0

    # fields[13] was peer-graded week 2 - free 10 points
    excr1 = float(fields[14])
    if ( excr1 < 0 ) : excr1 = 0
    excr2 = float(fields[15])
    if ( excr2 < 0 ) : excr2 = 0
    excr = excr1 + excr2
    
    # Ten points was week 2 peer-graded assessment
    tot = quiz + excr + exam + 10  
    # print fields, tot, quiz, excr, exam, exam1, exam2
    tup = (tot, quiz, excr, exam, id, name, line)
    # print tup
    grades.append( tup )

grades.sort(reverse=True)
for i in grades:
    print i

Output

The output includes the computed values and the input data as the last value in the tuple to allow verification and checking of the algorithm. I have manually line-broken the header line. Names are first and last initial and the user id is all zeros to obscure the data.

"User ID";"Full Name";"Late Days Left";"Optional: Demographic Survey";
"Optional: Propose Final Exam Questions";"Week 1 Quiz";"Week 3 Quiz";
"Week 4 Quiz";"Week 5 Quiz";"Week 6 Quiz";"Week 7 Quiz";
"Final Exam - IHTS";"Final Exam (2) - Do not take the Final Twice - See Email";
"Internet HTS - assignment 1";"Extra Credit - Assignment 1";"Extra Credit - Assignment 2"

(120.0, 60.0, 20.0, 30.0, 0000, 'DT', '0000;"DT";8;5.125;0;10;10;10;10;10;10;30;;9;10;10')
(120.0, 60.0, 20.0, 30.0, 0000, 'AC', '0000;"AC";8;6.25595;;10;10;10;10;10;10;30;;10;10;10')
(120.0, 60.0, 20.0, 30.0, 0000, 'JB', '0000;"JB";8;;;10;10;10;10;10;10;30;;10;10;10')
(119.5, 60.0, 19.5, 30.0, 0000, 'BL', '0000;"BL";8;6.71429;;10;10;10;10;10;10;30;;10;9.5;10')
(119.0, 60.0, 20.0, 29.0, 0000, 'KP', '0000;"KP";8;5.72619;;10;10;10;10;10;10;29;;9;10;10')
(106.5, 50.0, 18.5, 28.0, 0000, 'PJ', '0000;"PJ";8;4.83333;;0;10;10;10;10;10;28;;;8.5;10')
(99.8, 44.75, 18.0, 27.0, 0000, 'VK', '0000;"VK";0;;;9;8.75;7;0;10;10;27;;7;10;8')
(80.9, 47.9, 0.0, 23.0, 0000, 'MM', '0000;"MM";8;6.875;;7;9.75;8;7;8.25;7.9;23;;9;;')
(77.8, 43.8, 0.0, 24.0, 0000, 'DE', '0000;"DE";8;4.76786;;8;0;8;10;8;9.8;24;;7;;')
(76.8, 36.8, 0.0, 30.0, 0000, 'MS', '0000;"MS";8;5.98809;;9;0;0;9;9;9.8;30;;6.4;;')
(60.8, 50.75, 0.0, 0.0, 0000, 'JA', '0000;JA;8;5.96429;;9;9;10;7;5.75;10;;;9;;')
(55.8, 19.8, 0.0, 26.0, 0000, 'JO', '0000;"JO";7;5.09524;;10;0;0;0;0;9.8;26;;9;;')
(48.8, 38.75, 0.0, 0.0, 0000, 'AG', '0000;"AG";1;;;10;9.75;10;9;-0;;;;7.2;;')

This is a *tiny* representative sample pulled from the 45,627-line output of the program.

Abstract: Experiences Teaching a Massively Open Online Course (MOOC)

Dr. Severance taught the online course “Internet History, Technology, and Security” using the Coursera teaching platform. His course started July 23, 2012 and was free to all who wanted to register. The course has over 46,000 registered students from all over the world, and 6,000 are on track to complete the course and earn a certificate. In this keynote, we will look at the current trends in teaching and learning technology as well as the technology and pedagogy behind the course, and behind Coursera in general. We will look at the data gathered for the course and talk about what worked well and what could be improved. We will also look at some potential long-term effects of the current MOOC efforts.

Speaker: Dr. Charles Severance
Date: 13-November-2012
http://www.deonderwijsdagen.nl/

Charles is a Clinical Associate Professor and teaches in the School of Information at the University of Michigan. Charles is a founding faculty member of the Informatics Concentration undergraduate degree program at the University of Michigan. He also works for Blackboard as Sakai Chief Strategist. He also works with the IMS Global Learning Consortium promoting and developing standards for teaching and learning technology. Previously he was the Executive Director of the Sakai Foundation and the Chief Architect of the Sakai Project.

http://www.dr-chuck.com/dr-chuck/resume/bio.htm
https://www.coursera.org/course/insidetheinternet

My Rubric and Approach for Peer-Graded Assignments on Coursera

I am not an expert on rubrics. For the first peer-graded assessment in my Coursera Internet History, Technology, and Security course, my rubric was really poor. This triggered a discussion in the student forums, led by a student named Su-Lyn, to produce what the students felt would be the ideal rubric. There were several rounds of edits and comments before the students reached their “final” rubric.

I adopted this rubric for the rest of the peer-graded assignments in the course, and it was far superior to the one I used for the first assignment.

The mistake I made in the first rubric was trying to construct one that ended up with an average of about 8.5 / 10 – but that made all the rubric items too simplistic, and no one felt they could express their assessment appropriately. The grade on that first assignment was 8.85 / 10 with a standard deviation of 1.49 – pretty much exactly as I planned from a numbers perspective – but the students did not like it.

The student-built rubric was a little harsher, but at least it felt expressive when evaluating basic expository writing – so students assessing each other’s work felt that what they were communicating in their grading was *useful*.

The first peer-graded assignment that used the student-built rubric had a range of −6.0 to 10.0 with an average of 7.15 and a standard deviation of 2.83. Clearly the second rubric was far more expressive.

I don’t like a mean of 7.15 for a straight-scale graded course, so I would need to come up with a formula that mapped from the raw score to the actual score. I punted on any formula and just made the peer-graded assignments “extra credit” – this meant that students were going to have to fight a little to get those extra points, and that felt right to me. If you were going to do the extra credit – you had better do some good writing – because if you just cut and pasted Wikipedia in, you would get a quick −6. Negative scores will be changed to zeros – people should not lose points on extra credit.
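The “negative scores become zeros” rule above is a one-line computation; here is a minimal sketch (the function name is mine, not from the course software):

```python
def extra_credit_score(raw_peer_score):
    """Treat a peer-graded assignment as extra credit: a negative raw
    score (e.g. -6 after a full plagiarism deduction) is clamped to 0,
    so nobody can lose points on optional work."""
    return max(0.0, raw_peer_score)

print(extra_credit_score(-6.0))  # 0.0
print(extra_credit_score(8.5))   # 8.5
```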

The last little bit of data is that 5,808 students took the first (bad) required peer-graded assignment, 758 students took the second, optional assignment, and 641 took the third, optional assignment. Interestingly, the data for the third assignment showed a range of −2 to 10 with a mean of 7.99 and a standard deviation of 2.35. I would interpret the drop in the number of students between the second and third assignments, as well as the change in range, to mean that students who did badly on the second assignment just gave up and did not submit the third assignment.
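The summary numbers I quote (range, mean, standard deviation) are straightforward to compute from the raw scores. A minimal sketch with made-up scores – the per-student data from the course is not published here, so these values are purely illustrative:

```python
import statistics

# Hypothetical raw peer scores; illustrative only.
scores = [10, 9, 7.5, 8, -6, 10, 6.5, 9.5]

# The same three summary statistics quoted for each assignment.
print("range:", min(scores), "to", max(scores))
print("mean:", round(statistics.mean(scores), 2))
print("stdev:", round(statistics.stdev(scores), 2))
```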

This supports my instinct that perhaps in a course like mine, writing needs to be optional / extra credit.

Well, enough of the prelude – on to the question and rubric.

My Question and Rubric

Question: What element of Internet History prior to 2003 would you add to the History of the Internet as described in this course and where would it fit in the course? Draw from the course material and support with additional materials as necessary.

Essay length: 500–1000 words, not including references. A separate space for references will be provided – use that space only for references (i.e., don’t continue your essay in it). There is no specific citation format. While there is no minimum or maximum number of required references, most essays will have somewhere between two and five. If a reference is a web site, use the URL; if it is a paper, include enough information to identify the source using the APA format (http://owl.english.purdue.edu/owl/resource/560/01/). Graders will not take points off for syntax errors in references, but they are welcome to suggest how the syntax of references can be improved.

While we would like your answers to be well written, given the number of different languages in the course, graders will **not** take points off for structural mistakes like grammar or punctuation. Graders may *comment* on how to improve the writing technique – but the grade will be based on the quality of the ideas in the answers and how well thought out the arguments are that support those ideas. As graders, please make your comments constructive, helpful, and focused on improving learning.

Plagiarism: Looking for plagiarism should *not* be the primary purpose of peer graders. The plagiarism deduction in the rubric exists to give graders a way to note when plagiarism is clear and obvious. If you are taking points off for plagiarism, include in your comments the source of the material you believe was copied. Please do not add editorial comments or value judgements about the author or the plagiarism. Be respectful in your comments, and focus on making this a learning experience for the author.

And above all, while the purpose of peer grading is to assign an accurate score, we are all here to learn. Graders should approach weak or flawed essays as situations where they can help the essay author learn through useful and constructive comments. Our prime directive is to teach each other – within that directive we also assign scores.

Rubric

Interest (4 points): Is the answer interesting to read? Did the answer make you think? Did you learn something from the answer?
0 – No
2 – Somewhat
4 – Yes

Relevance (2 points): Does the essay answer the question? Is the answer on-topic?
0 – No
1 – Somewhat
2 – Yes

Analysis (2 points): Are the ideas logical and communicated clearly? Are the arguments reasonable / plausible? Does the analysis go beyond simply stating the obvious?
0 – No
1 – Somewhat
2 – Yes

Evidence (2 points): Does the essay use good examples? Are the arguments well-supported by facts? Does the essay cite its sources?
0 – No
1 – Somewhat
2 – Yes

Plagiarism (up to 6-point deduction): Is there evidence of plagiarism, such as simply cutting and pasting all or part of the text from another source without citing it?
0 – The essay did not have any evidence of plagiarism
-3 – A portion of the answer was literal text from another source
-6 – The entire essay was taken from another source
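The rubric above combines additively: four positive categories (4 + 2 + 2 + 2 = 10 points) plus a plagiarism deduction, which is how a raw score as low as −6 can arise. A minimal sketch of that arithmetic – the function name and structure are mine, not the Coursera platform’s:

```python
def rubric_total(interest, relevance, analysis, evidence, plagiarism_deduction=0):
    """Combine one grader's rubric marks into a raw essay score.
    Category values and the deduction come from the rubric; the raw
    total can be negative when the deduction outweighs the marks."""
    assert interest in (0, 2, 4)
    assert relevance in (0, 1, 2)
    assert analysis in (0, 1, 2)
    assert evidence in (0, 1, 2)
    assert plagiarism_deduction in (0, -3, -6)
    return interest + relevance + analysis + evidence + plagiarism_deduction

print(rubric_total(4, 2, 2, 2))      # 10 -- best possible score
print(rubric_total(4, 2, 2, 2, -6))  # 4  -- strong essay, entirely copied
print(rubric_total(0, 0, 0, 0, -6))  # -6 -- the minimum raw score
```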