Sakai Tips: A One Question Timed Exam in Test Center

Update: Talking to Vivie, she suggested an excellent improvement to this approach that I missed (doh). Instead of uploading the file and making a link in the text, simply add the Word file as an attachment to the question right below the question text. Much easier and an even cleaner user interface – thanks Vivie for the tip! I have updated the instructions below and will redo the video at some point.

This blog post talks about how to make a one-question exam in Test Center (a.k.a. Mneme). The basic idea is that you want to give an exam, give students flexibility in starting times, but only give each student a fixed amount of time to complete the exam. Individual timers, as it were.

I use this a lot for on-line finals where students have complex (often international) travel schedules, so I open an exam for 48-72 hours and they can pick any 3-hour window. In addition, the exam is in a Word document so they do not have to be connected while they take the exam – they simply edit the exam on their computer and reconnect to upload their answer when they have finished editing. I like having a really low requirement for connectivity for something like an exam. If something goes wrong the students still have a copy of their exam.

Here is a video of me creating an exam using CTools (Sakai@UMich):

http://www.vimeo.com/10931064

Here are the rough steps in the process:

Create an assessment

Add a single essay question – with the question text below

Upload your Word document containing the exam text as an attachment to the question. The attachment button is right below the question text.

Then save the question and back in the assessment screen set the points for the question and save the assessment.

Then switch to ‘Test Drive’ and begin the test. Once you get into the test, you should be able to download the Exam document. At this point you can log out of CTools (or press ‘Finish Later’) and edit the exam document on your computer.

Once you finish the exam on your computer, log back into Sakai, return to Test Center, and press ‘Continue’ to re-enter the test. You then upload your edited exam to Test Center and press ‘Finish’.

Once you like the test, you can publish it. Make sure not to publish it until you have tested it, because publishing locks the test against future editing.

Then you go into ‘Publish’ and set the open, close, and late dates, set the time limit for the test (i.e. 3:00), set the number of tries to 1, set the honor code, and anything else you like, and then press ‘Publish’. Make sure the number of tries is set to 1. As an example, if you set the tries to two, it will let the students have two three-hour test attempts.

If a student makes a mistake and presses ‘Finish’ before they upload their answers, simply use the ‘Special Access’ feature of Test Center to give them one more try.

At the designated time – the test opens up and students can start taking the test.

I am curious if you find this useful. Here is some text I use for the test.

Question Text

This entire test is in a file that you edit. You are to download the test, insert your answers to the questions in the document, and then save and upload the document. This file should open in any word processor. When you upload the file, you can upload a Word document or a PDF (preferred).

You have now entered the test and so the timer for you to take this test has *started*. You should download the test and begin working so you can finish and re-upload your answers within the time limit for the test.

The file may open in your browser – in this case you use Save As. Or the file may download to your computer – in this case you go and find the file on your computer.

If you have technical difficulties – send a note to csev@umich.edu – in a pinch send a text message to +1 517-xxx-yyyy – please include a callback number in your text message.

Once you have downloaded the file, you can leave this screen by pressing “Continue Later” and then come back *within the time limit* and upload your answers.

When you have finished the exam – come back and upload your modified document as an attachment to this question below. Press ‘Finish’ only after you have uploaded your answers.

Apple Announces iAd (and Multitasking) in iPhone OS 4.0

On Wednesday, Apple announced its plans for iPhone OS 4.0 and somewhere in the fine print mentioned the new iAd mobile advertising platform.

Apple Previews iPhone OS 4

It is funny that at about the same time this announcement was taking place, I was doing an in-class discussion in SI301 (and a day later with some UMich executives in the Business and Finance Forum) where I was trying to get both groups to think about the end of the web and the end of web search as the primary way we view the web. Instead of trying to figure out why, when, or how search and the web would go away, I asked them to assume that at some point in the future simply searching the web would no longer be ‘interesting’ – and under that assumption, to think about what its replacement would be.

I am thinking that we should make a mental note about the year 2010 and wonder whether the introduction of iAds indicates that on Wednesday we quietly passed the half-way point in the time where the web and web browser are the ‘alpha’ technology in the marketplace.

Math version – feel free to skip: Perhaps Wednesday we passed the point where the second derivative of web growth went from positive to ever-so-slightly negative. An inflection point, as it were. Inflection points – changes in the sign of the second derivative of a function – are often hard to notice in the short or even medium term because the first derivative remains positive. But in time….. end of Math version.
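For the math-inclined, here is a minimal sketch assuming web growth follows the standard logistic S-curve (an assumed model, purely for illustration):

\[
f(t) = \frac{L}{1 + e^{-k(t - t_0)}}, \qquad
f'(t) = \frac{k L \, e^{-k(t - t_0)}}{\left(1 + e^{-k(t - t_0)}\right)^2} > 0 \text{ for all } t
\]

\[
f''(t) > 0 \text{ for } t < t_0, \qquad f''(t_0) = 0, \qquad f''(t) < 0 \text{ for } t > t_0
\]

The curve keeps rising right through the inflection point \(t_0\); only the acceleration flips sign there, which is exactly why the change is so hard to notice while it is happening.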

I have this weird calculation that suggests that the pattern in which we lose interest in a technology is the mirror image of the pattern in which we became fascinated with it (an S-curve). So if my instincts are right (and they usually are not) and Wednesday was the point at which the web reached its apogee, then since the web is 20 years old (started in 1990), it will take 10 more years (2020) before the web starts to fade a little bit; after 15 years (2025) the web and search will noticeably be falling off our collective radar; and by 2035 teenagers will be asking ‘what was the Web?’ much like they might wonder about America Online now.

The numbers are easy to remember – 2020: a gentle decline in the primacy of the web and search – 2025: a noticeable decline in the primacy of the web and search.

In the meantime, you have plenty of time to read Clayton Christensen’s book, The Innovator’s Dilemma, to learn about how a market evolves through a series of independent S-curves where disruptive innovation causes us to jump from curve to curve.

Hmmm. This sounds like another keynote speech for me to try to give places.

Response to “Pandora’s Box” Post on Michael Feldstein’s Blog

I am an avid reader of Michael Feldstein’s blog, but a recent post with a conversation between Michael and Anya Kamenetz set off an “excess use of metaphor” alarm when I read it. So for two days I stewed and wrote several long critical comments – but then, in honor of the TedXUofM event tomorrow, I decided to throw all my critical comment drafts away and try to make my point in a more fun and light manner.

The Original Post: Should Pandora Have Opened the Box?

Here is my response:

I have written four different comments for this blog post and mercifully threw them all away. Now I have come up with this gentle formulation of my ideas.

I start with this quote from the Wikipedia page on EduPunk:

… Stephen Downes, an online education theorist and an editor for the International Journal of Instructional Technology and Distance Learning, noted that “the concept of Edupunk has totally caught wind, spreading through the blogosphere like wildfire”.

Please read this carefully paying close attention to where the so-called “edupunk movement” is happening.

I actually have some experience with the words “Edu”, “Punk”, and “DIY” because I am a teacher who is always looking for ways to fight the system and do things my own way. My son is in a punk band – so I hang out with a bunch of young punk types wearing my black band T-shirts, and his band plays at lots of DIY music venues. So while I am not an expert on “EduPunk” and “DIYUniversity”, I do have some real experience with the underlying metaphors that you are borrowing from.

I would suggest the following exercise to give you both a little experience in the sustainability of the Punk/DIY approach – take a look at this web page:

http://www.dodiy.org/

And go through and figure out how many of these registered DIY venues are still operating one year later.

My son’s band wants to go on tour this summer and they want to go DIY all the way – but nearly all DIY music venues close up shop after a few short months. Each one is a labor of love for some special person, and eventually that special person becomes tired of spending their weekends hosting demanding out-of-town bands with a station wagon and a ratty trailer who bitch about the sound system.

So the DIY music venues which are the hives of creativity and clever innovation appear and vanish – sadly, way too quickly – never getting close to any kind of tipping point. It is clearly a movement – just not a mainstream movement, and a movement that is uninterested in affecting the mainstream in any way. All they want to do is make a place where they can express what they want unhindered by “the man”, and in doing that they learn something about themselves and something about creativity.

This is really sad because when you find and interact with one of these DIY music venues, it is a very freeing and very uplifting feeling and the people are so cool and fun and you so badly want it to be a “movement” and you want everyone to be able to experience this. But sadly, they generally only exist for a short while after which they go away.

Punk music at a DIY venue is the most intensely creative group activity I have ever seen – the performers and the crowd function as one – there is continuous sharing and remixing of ideas and fluid group membership – it is magnificent. It just does not last – it is sustainable as a concept, but no individual stays punk their whole life. It is the domain of the young, who are experiencing it for the first time, and of scene-parents like me, who were squares in high school and so are experiencing it for the first time in their fifties.

And as I have said before, when you first experience this amazing freedom and creativity – you wish it were the future for everything. The bad news is that punk is not the future of all music – sorry about that. Sooner or later we get old and our tastes move toward the blues or some variant of the blues. But the super-duper good news is that if you have not yet experienced it – punk will still be there 20 years from now – it will still be alternative and underground and will be happy to see you when you get there and you will paint your nails black and wear black t-shirts and “hate the man” with your fellow punk/DIY hipsters.

Bringing this back to your post a little bit, neither of you are Pandora, and there is no Pandora’s box – and there was no “opening of a box” that has brought into being some new profound sea change that we cannot undo.

All that happened is that you noticed something that has been happening since the beginning of time and will happen forever going forward. You misinterpret this marginal, small, continuous, alternative, underground situation as something that “just happened” and “is coming to get us all” – and a few pundits collectively named it “edupunk” so you could sell some books and some Google AdSense.

By the way, you can take a look at Google’s Keyword Value Calculator to see the potential value of the word “edupunk” in the advertising markets.

According to my recent calculations, the word “edupunk” is worth about $0.05 per click and was entered 1000 times in the last month, roughly generating $50.00 in potential ad revenue for Google last month.

By contrast, the query “lms” is worth $4.14 and was entered 823,000 times last month, roughly generating $3.4 million in potential revenue for Google last month. A rough calculation of the variations on the “lms” query says that there was a little over 15 million dollars in total potential ad revenue for Google last month.

If we do some multiplication, last year LMS systems represented nearly 200 million dollars of potential ad revenue for Google, while edupunk represented nearly $600.00 per year in terms of potential Google ad revenue.

Just as another data point, the search for “Charles Severance” represents roughly $2016 of potential ad revenue for Google last year – roughly four times as much interest as “edupunk”. I have no explanation for this but a superficial analysis looking only at advertising revenue data might suggest that the “Charles Severance” movement is more real than the “EduPunk” movement.
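For the curious, here is a minimal Python sketch of the back-of-the-envelope arithmetic above; the per-click values and query counts are just the rough figures quoted in this post, not fresh data:

# Rough potential-ad-revenue arithmetic from the figures quoted above:
# (cost per click) x (searches per month), and x 12 for a yearly figure.
edupunk_year = 0.05 * 1000 * 12        # ~ $600 per year
lms_one_query_month = 4.14 * 823000    # ~ $3.4 million per month for one query
lms_all_queries_year = 15000000 * 12   # ~ $180 million per year for "lms" variations
interest_ratio = 2016.0 / 600.0        # "Charles Severance" vs. "edupunk"

print edupunk_year, lms_one_query_month, lms_all_queries_year, interest_ratio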

P.S. I am still looking for a DIY punk/metal music venue in Nashville or Memphis for a summer visit. If in your research, you find one – let me know. Thanks.

P.P.S. If you really want to see the world’s most awesome DIY music/art venue in action – I claim it is in Lansing, Michigan – Basement 414 – I have lots of DIY punk video of my son’s band – I picked one of the earlier ones here – http://www.vimeo.com/6153856 – their second concert ever.

The Incredible Shrinking iPhone “Nano”

As a hobby, I like to try to anticipate Apple’s next move. If you had asked me two weeks ago where the design evolution of the iPhone needed to go next, I would have told you with 100% certainty that they needed a smaller iPhone – an iPhone Nano, as it were.

I always felt that the iPhone was a little too large, a little too heavy, and a little clunky to hold in one hand – like it needed a half-inch trimmed off both dimensions. A little more like a Palm Pre in terms of height and width.

Well now that I have my two iPads, it seems as though I am getting what I wanted all along and at no additional cost to me. After about 48 hours of having both an iPad and an iPhone, my iPhone started to shrink. It physically became smaller and the icons were smaller and the screen started to feel small and cramped when I tried to read something.

As of yesterday, it had gotten so small (in my mind) that it started feeling “too small” and I was thinking that perhaps Apple should release a slightly larger version to get away from this sense of packing all those icons into such a tiny space.

The good news is that today the iPhone seems a little larger – and actually right now it feels like it is almost exactly the right size. It is large enough that you can read things if you really have to and the iPad is not close and it is small enough to fit in your pocket and have with you all the time when you leave the iPad on your desk as you go to lunch.

So I am pretty pleased how the iPhone was silently upgraded from “too large” to “just right” simply because I purchased and started using the iPad.

I wonder how many other people are experiencing this “free iPhone upgrade” because they are using their iPhones right next to a new, shiny iPad?

Reading / Downloading PDF Books on the iPad or iPod

Update: Thanks to a savvy comment, I have now come across GoodReader. I purchased the full application with iPad support – and it is quite nice – it does a good job viewing and navigating and can download from the web. All in all a nice product and does a perfectly fine job of letting me read my PDF books on my iPad.

Original Post

While I was buying my second iPad, I was interviewed by the local television station WLNS, so I may look like a total geek on the 11PM news after the basketball games are over. Here is the raw footage of the interview shot by Richard Wiggins.

Now that I own two iPads, and have gotten home and been playing with mine while watching the basketball game, the first issue to figure out was how to get PDF books/files onto my iPad so I can treat the iPad like a book for PDF files I already own. I was able to use a couple of free file-sharing tools, and they worked pretty well – they installed nicely and were easy to use.

Files Lite from Olive Toast

mb Drive Free from mbpowertools

These both allowed me to mount my iPad as a WebDAV drive, drag files (including PDF files) onto my iPad, and then launch those files within the app on the iPad. The PDF viewer was a little better in Files Lite than in mb Drive Free, but neither really had anything other than a sequential flip-through-the-whole-900-page-book mode. Flipping sequentially through these books as the only navigational mode is not acceptable.

I have gotten more excited about Stanza from Lexcycle – this is a nice ebook reader that runs both on the desktop and on the iPad/iPod. It allows the downloading of a wide range of books from the Internet and allows synchronizing between Stanza on the desktop and Stanza on the iPod.

Stanza also converts from PDF to an ebook format and a bunch of other formats. This is an awesome feature all by itself. I loaded and converted my Python for Informatics and Jon Kleinberg’s Networks, Crowds, and Markets. I downloaded both to my Stanza iPad reader.

While Stanza kind of removes formatting from the PDF, so the technical bits get compressed, the rest of the material is quite readable. All in all I was completely impressed by Stanza Desktop’s conversion of PDF to ePub/ebook format – for free. Wow.

None of these applications has yet been updated to be full-sized for the iPad, so I was using them in 2X mode. I really look forward to Stanza in iPad-native mode – it will be very impressive. And perhaps they will see fit to adjust their PDF conversion to detect when to keep fixed formatting for things like code or math. I found the Stanza user interface a little tricky for getting from screen to screen – such as navigating from the page-reading screen to the get-books screen – I am guessing they will make a nicer navigation interface when they have more space on the iPad.

All in all, while I could not find a perfect solution, Stanza gives me hope that in a month or so free books will be as welcome on the iPad as the paid-for books in the iBooks store.

Next Generation Teaching and Learning Symposium – April 17, 2010 – UC Berkeley

I just want to invite and encourage everyone to come to the Next Generation Teaching and Learning Symposium coming up Saturday, April 17, 2010.

There is no charge for folks coming from far away and a nominal charge for folks who don’t have to travel. Light meals are included. See the registration page for details.

I will be at the NGTL Symposium and will be on the panel discussion on Tools and Technological Models.

The list of speakers is indeed impressive. I personally think that it would be worth attending if the only person talking were Howard Rheingold. Howard is a professor in the UC Berkeley School of Information and an all-around “try anything and everything when it comes to teaching” kind of guy. He will be giving the opening keynote, titled “Participatory Media for Education”, and it will be an action-packed day from then on.

One thing that makes me personally very excited about this symposium is that it blends two topics that I am passionate about that I think need to be brought together. This symposium marks the beginning of looking at the next generation of Teaching and Learning from the perspective of a School of Information.

I think that the iSchools (Schools of Information) can bring so much to the field of teaching and learning using technology. Because iSchools are by their nature cross-disciplinary organizations, we can look at a problem like the next generation of teaching from many perspectives, generate dialog across many different domains, and pursue those discussions in depth. Within iSchools, we can identify issues, make needed changes, build new capabilities, and then measure the effect of those changes.

If there is one thing we might all agree on in terms of the Next Generation of Teaching and Learning, it is that it will be different somehow. It will be more flexible, more personalized, more open, more social, more web 2.0, more web 3.0… The list goes on and on as to how many ways the next generation will be different.

Schools of Information live in this future world already and study it in great detail, bringing together technical analysis from graph theory and information retrieval, social science analysis of human motivation and reaction from game theory, influence, and decision making, as well as usability, user experience, and information architecture. These are just the skills that should be brought to bear on the next generation of teaching and learning.

To find our way to the next generation of teaching and learning we need to be open to the ideas that all of these fields can bring to bear on the problem. In a way, there is no better resource for contemplating what this future should be than the collective skills of the Schools of Information around the country.

I am excited to be part of this breakthrough meeting hosted by my colleagues at the Berkeley School of Information, and of these initial steps toward defining a stronger connection between Schools of Information and teaching and learning.

I hope to see you there.

Upgrading My Blog From Movable Type 2.65 to WordPress 2.9.2 While Maintaining PageRank

My blog has been running since 2003 using Movable Type 2.65 – with Lance, Zach, and Steve all suggesting I upgrade, this weekend turned out to be the weekend I decided to give it a try. I am also starting to use WordPress in my courses using IMS Basic LTI – so I figured I might as well find my way around it. My site has decent PageRank since I have been doing this for seven years now.

I had several goals in the conversion: (a) maintain my Google PageRank on the pages, (b) keep all my old posts and support all the old urls, and (c) keep the page identifiers the same in my WordPress database.

I am not much of an expert on Google PageRank – but I did watch this excellent talk from Google I/O 2008 by Maile Ohye:

Google I/O 2008 – Search Friendly Development

Maile repeatedly talks about the need for permanent redirects when web sites are changed – so I took that to heart. I recommend the video to *anyone* who is interested in maintaining or increasing PageRank legitimately.

I found a few helpful blog posts – but I waited so long to convert that all the instructions were pretty much out of date. This blog post from Scott Yang was my inspiration – but I did have to adapt things to a newer version of WordPress:

http://scott.yang.id.au/2004/06/wordpress-migration-notes/

So the first thing to do is export from Movable Type and retain the post IDs. In this I followed Scott’s directions, slightly adapted to my version of Movable Type. This required editing the file ./lib/MT/App/CMS.pm, adding the ‘POSTID’ line at line 2970 of my file:


DATE: <$MTEntryDate format="%m/%d/%Y %I:%M:%S %p"$>
POSTID: <$MTEntryID$>
-----
BODY:

Then, also inspired by Scott’s post, I went into Movable Type’s user interface to export all entries, comments, and trackbacks into a plain text file.

My old blog was installed at csev-blog, so I initially installed WordPress at csev_blog (with an underscore). I later renamed it to csev-blog, as described below.

Then I made some changes to my WordPress installation. I edited the file ./wp-admin/import/mt.php at line 418:

                                }
                        } else if ( 0 === strpos($line, "POSTID:") ) {
                                $postid = trim( substr($line, strlen("POSTID:")) );
                                $post->import_id = $postid;
                        } else if ( 0 === strpos($line, "EMAIL:") ) {

It turns out that WordPress now understands the notion of import_id – so there was no need to change the SQL (per Scott’s post), and the insert is no longer in ./wp-admin/import/mt.php anyway. Since WordPress already handles import_id, no further changes were necessary.

Then I copied the exported text file to ./wp-content/mt-export.txt and used the WordPress user interface to do the import without the upload. It would only import about 250 entries before hitting a run-time limit. I checked MySQL to make sure the ID fields in the wp_posts table were really being taken from the MT import.

I then edited the file ./wp-content/mt-export.txt to delete the first 249 posts and re-ran the import. The WordPress import is smart enough not to double-import – so I always kept the last successfully imported post in the file to be sure I got them all. After three rounds of deleting already-imported posts and re-running the import, I had all 638 posts imported.
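If you would rather not hand-edit the export file for each round, a tiny script can do the trimming. This is a hypothetical sketch that assumes the standard Movable Type export format, where a line of eight dashes separates entries:

# Drop the first N already-imported entries from a Movable Type export file.
# Entries in the MT export format are separated by a line of eight dashes.
N = 249
count = 0
out = open('mt-export-rest.txt', 'w')
for line in open('mt-export.txt'):
    if count >= N:
        out.write(line)
    if line.strip() == '--------':
        count = count + 1
out.close()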

The next task was to edit my .htaccess to make my old URLs work. I needed to fix individual posts like 000749.html and monthly digests like 2009_12.html, and map them to my new permalink structure. I used the permalink structure 2010/03/blog-title-post to make my PageRank be as cool as it could be.

Here is my .htaccess file.


# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /csev-blog/
# Permanently redirect old monthly digest pages (e.g. 2009_12.html) to /2009/12/
RewriteRule ^([0-9]{4})_([0-9]{2})\.html$ /csev-blog/$1/$2/ [R=permanent,L]
# Route old individual posts (e.g. 000736.html) through mt-singlepost.php,
# which issues a single permanent redirect to the new permalink
RewriteRule ^([0-9]{6})\.html$ /csev-blog/mt-singlepost.php?p=$1 [L]
# Permanently redirect the old Movable Type feed URLs to the WordPress feeds
RewriteRule index.rdf /csev-blog/feed/rdf/ [R=permanent,L]
RewriteRule index.xml /csev-blog/feed/ [R=permanent,L]
RewriteRule atom.xml /csev-blog/feed/atom/ [R=permanent,L]
# Standard WordPress front-controller rules
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /csev-blog/index.php [L]
</IfModule>

# END WordPress

The simplest rule was for the monthly digest files (2009_12.html), which could be directly redirected to the new permalink structure of /2009/12/. I wanted the redirects to be permanent and I wanted there to be a single redirect to transfer PageRank as quickly and cleanly as possible – so make sure to have the trailing slash.

The three lines for the RSS feeds were similarly simple and done as permanent redirects. I wrote a bit of code that I never used, called mt-feed.php, designed to fake the old RSS feed URLs forever – I almost got it working, but it was a bit flaky in some readers, and I just decided to fall back to the redirects. I include the code for mt-feed.php at the end of the post – make sure to test everything carefully before using it. I did not think Google cared too much about the RSS feeds w.r.t. PageRank, so I took the easy way out with the redirects.

The trickiest bit was to map the individual posts (e.g. 000736.html) to their new locations. I could have taken the easy way out and made a rewrite rule to send them all to index.php?p=000736, similar to how Scott did it – but since my WordPress permalink structure was /year/month/title, this would be two redirects: the first from 000736.html to index.php?p=000736 and the second from index.php?p=000736 to /2008/10/some-title. I wanted Google to have every chance to transfer my PageRank – so I wanted one redirect, and I wanted it to be a permanent redirect.

So my rewrite rule transformed the individual posts to mt-singlepost.php?p=000736, and I wrote the following code:

<?php
// mt-singlepost.php - look up the old Movable Type post ID and issue a
// single permanent (301) redirect to the new WordPress permalink.
require('wp-blog-header.php');

// intval() strips the leading zeros from the old six-digit post IDs
$posts = query_posts('p=' . intval($_REQUEST['p']));
if ( have_posts() ) {
    while ( have_posts() ) {
        the_post();
        header("HTTP/1.1 301 Moved Permanently");
        header('Location: ' . get_permalink());
        exit;
    }
}
header("HTTP/1.1 404 Not Found");

Again, an adaptation of Scott’s pattern but using the more modern WordPress 2.9.2 calls. This gave me my single permanent (301) redirect so I could transfer PageRank efficiently.

By letting both blogs run simultaneously – the original Movable Type blog on csev-blog and the new WordPress blog on csev_blog – I could test lots of URLs and be quite patient going back and forth. But once things worked, it was time to rename the folder on the server.

Important – make a copy of your .htaccess file before taking this step, because changing the folder in WordPress will rewrite the .htaccess file, wiping out all your precious changes. SAVE YOUR .htaccess FILE!!!!!

Go into the WordPress admin interface and, under Settings, rename the blog’s URL from csev_blog to csev-blog. Then rename the folder on the server. Then immediately edit the .htaccess file, putting back in your clever redirects – making sure to change csev_ to csev- in the rules.

Test all the old URLs – there should be exactly one redirect for each. Using Firebug you should be able to see the redirects in action and really verify things work. I found Chrome was the best way to test the RSS redirects – both Safari and Firefox get way too tricky when doing RSS feeds to even see what happened. Thankfully my version of Chrome was clueless about RSS feeds, so I could see what was really happening and verify proper operation. I am sure some new version of Chrome will get “smarter” and make it impossible to figure this out – then I will write some Python code to do a urllib GET.
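If you want to script that check instead of squinting at browser tools, here is a minimal Python sketch. It uses httplib rather than urllib because httplib does not follow redirects, so you can see the raw first hop; the host name and sample paths are placeholders to substitute with your own:

# Print the first response for an old URL. A healthy result is a single
# 301 status with a Location header pointing at the new permalink.
import httplib

def check_redirect(host, path):
    conn = httplib.HTTPConnection(host)
    conn.request('GET', path)
    resp = conn.getresponse()
    print path, '->', resp.status, resp.getheader('Location')
    conn.close()

# Hypothetical examples -- substitute your own host and old URLs.
check_redirect('www.example.com', '/csev-blog/000736.html')
check_redirect('www.example.com', '/csev-blog/2009_12.html')
check_redirect('www.example.com', '/csev-blog/index.rdf')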

So things should now be OK.

As promised – here is the code for the RSS hack that I never deployed. Again this never worked perfectly for me – so test this a lot before you trust it. I called this file mt-feed.php:

<?php
// mt-feed.php (never deployed) - serve the old Movable Type feed URLs by
// fetching the corresponding WordPress feed with cURL and echoing it back.
require('wp-blog-header.php');

// Pick the feed URL based on the requested type (rss2, atom, or rdf).
$thetype = $_REQUEST['type'];
$rssurl = get_bloginfo('rss_url');
if ( $thetype == 'rss2' || $thetype == 'atom' || $thetype == 'rdf' ) {
    $rssurl = get_bloginfo($thetype . '_url');
}

// Fetch the real feed and pass it through with its original content type.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $rssurl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$output = curl_exec($ch);
$content_type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);

header('Content-type: ' . $content_type);
echo($output);

I hope you find this helpful. I love WordPress and the fact that my new blog can accept comments! I have moved forward in time nearly seven years in terms of blog software and it feels pretty good.

I want to thank Scott Yang for such a good blog post that showed me the way forward. With his patterns – all I needed to do was map things to the newer version of WordPress.

A Simple New Post in WordPress

This is a pretty simple new post – the first one in WordPress.

I wonder how paragraph spacing works.  It seems as though my paragraphs turned into br tags on my older posts.

Ah well – at least I got them converted and the old URLs are all working.

Here is some Python code:

print "Hello world!"
print "The end"

And I have comments so folks can better disagree with me!

Removal of GM Headrest (e.g. Pontiac Sunfire) without Tool J 42214

Many GM vehicles have a hidden latch to remove the headrest (i.e. there is no button) – so it appears that the headrests are impossible to remove, such as for the installation of sweet seat covers with skulls on them. Here is a great set of instructions if you own the headrest removal tool (J 42214):

http://www.justanswer.com/questions/18ry1-how-do-you-remove-the-headrest-from-a-2002-pontiac-sunfire

Note this is a 2000 Pontiac Sunfire with cloth seats I am working on – that is Brent’s first car – so I am less concerned about cosmetics. If you have a 2010 Cadillac CTS with leather seats, I would go to the dealership rather than use this technique and end up with a giant tear in your leather seats!

Of course, if you are installing aftermarket seat covers from AutoZone with skulls on them in a 2010 Cadillac CTS – it raises some questions broader than just whether or not to purchase a headrest removal tool.

But of course, I did not want to wait for a $72.00 tool to arrive to install $20.00 seat covers with skulls on them – so I just went after it with a screwdriver and needle-nose pliers. The secret is in the image at the right (click on the image for a larger version), which shows the detail of the locking spring and how you are *supposed* to pop the spring off using the special tool.

If instead you use a screwdriver and work down the seat cover and padding about 1/2 inch you will see the clip. Using a combination of the screwdriver and pliers you can remove the clip (on both sides of the headrest support) and pop off the headrest.

The number 3 is toward the front of the car – the image suggests that you pop the spring from the back using the special tool while lifting the headrest – and if you are really good, maybe this will work for you. But what I did was use my screwdriver at (2), pull it toward the front of the car, and then take the spring off completely with needle-nose pliers – and then everything becomes easy.

It is far easier to put the spring clips back in with the headrests off, since you may need to seat the clip all the way by using a screwdriver inside the tube to push the clip outward (near 3) before reinstalling the headrests.

So you install your totally sweet seat covers with skulls on them, and pop the headrests back on!

Community Source – Universities Building Open Source Software (Book Excerpt)

Copyright, Charles Severance 2010 – All Rights Reserved
From time to time I will put up draft excerpts of the book I am writing about Sakai. Feel free to comment and/or correct where I am mistaken by sending me E-Mail.

The Sakai project was formed in a moment of transition between a hierarchical, centrally controlled approach to campus information technology infrastructure and an organic, distributed approach to coordinating across a community of like-minded individuals and organizations. As a result, the Sakai effort has always been at the boundary between old-school and new-age approaches to technology development and deployment. At times Sakai has achieved great success through a blend of the two approaches; at other times, operating at that boundary has led to long-lasting conflict and stress in the community.

By the year 2000, the concept of open source as an approach to software development was well established, with the Linux operating system and Apache Foundation projects as solid, sustainable examples. These efforts collected the talents of volunteer developers from around the world with relatively loose leadership and a commitment to working together to solve a common need. Generally the developers who worked on these projects fell into one of several categories: (1) volunteers with paying day jobs who worked on the software in their spare time, (2) consultants or small companies who made their living consulting on the open source software and gained competitive advantage from their involvement in the project, or (3) individuals hired by large companies such as IBM who were given release time to contribute, both to support the projects and to ensure that the company had a voice in the projects going forward.

Many universities had used open source software throughout their enterprises since the early 1990s. Open source software was ideal for university use because it was low-cost and allowed university technology staff to make small changes where some particular or unique feature was needed. Open source gave universities a sense of “control” over their destiny and future costs when it came to their information technology solutions. Open source software also allowed a certain level of “agility” as technology needs shifted throughout the 1990s and things like the Internet and World Wide Web became part of the required campus information technology suite of services.

However, few universities were regular contributors to open source efforts. Universities typically felt that the software and other intellectual property produced by their employees had potential value and if a staff member built something that was valuable, then the university wanted to profit from that creation. After all, the university had paid the person’s salary while they were doing the work. It made perfect sense to a university administrator or attorney but was very frustrating to individual university employees who yearned to work with their colleagues around the world as part of a community effort.

This led to universities writing a lot of software for their own use but not sharing that software with other universities unless there was some profit to be made on the interaction. And because no university was willing to invest the time and staff in making their software commercial-quality, most university-developed software was “just barely good enough” for local use.

One of the most common examples of “locally developed” software in use at universities in the late 1990s was the campus-wide Learning Management System. Learning Management Systems were pretty basic software that allowed instructors to distribute materials to students and interact with them using e-mail, chat, or threaded discussion forums. These systems were simple enough that it only took a small amount of resources to get a basic system developed and running – a team of 1-2 developers and less than a year of effort. Often the efforts were done “on the side” or “below the radar” of the typical campus IT operations.

In some cases these university-developed course management systems developed to the point where they were purchased and turned into today’s commercial Learning Management Systems. The WebCT commercial LMS product was based on software developed at the University of British Columbia in 1995. The initial Blackboard product was based on a system developed at Cornell University in 1997. The ANGEL Learning system was created in 2000 based on technology developed at Indiana University-Purdue University Indianapolis (IUPUI). The Prometheus system was developed at George Washington University and later purchased by Blackboard in 2002.

Often the universities would make some money in these transactions, but the real winners were the companies that took the software, cleaned it up and began to sell it to all of the other universities who were growing tired of their local home-grown systems. These companies started building market share and applying economies of scale to their software development. In time these companies began merging and buying one another to become ever larger and more pervasive in the marketplace. At the time of this writing, Blackboard has purchased Prometheus, WebCT, and ANGEL resulting in a very large market share.

??? Did D2L come from McGill University ?? When/How ??

Stanford University developed the CourseWork system in 2001 and began to share the software with other universities around the world. Also in 2001, the Moodle project started as a simple LMS with an open source license. The MIT Open Knowledge Initiative (OKI) was a project funded by the Andrew W. Mellon Foundation in 2001 to try to bring order to the chaos of so many independent LMS projects and so many divergent Learning Management Systems at so many universities. Other projects such as Bodington at the University of Leeds, OLAT at the University of Zurich, and CHEF from the University of Michigan were pursuing an open source approach and trying to convince other schools that their solutions were something that could be adopted.

From 2001 through 2003, the MIT OKI project regularly brought together many of the leading thinkers about LMS systems and technology from universities around the world. The OKI meetings and discussions began to form a community of technical staff who slowly got to know one another and realized that even though they worked at many different organizations, they were all facing the same problems and challenges.

As the OKI project funding was ending in 2003, several of the participants in the OKI efforts decided that perhaps they should band together to form a consortium and work more closely together to develop and release a shared Learning Management System that they would all work on collectively and all use in production at their institutions. By pooling resources, the partners would gain much greater leverage, and each school would not have to handle the entire software development, testing, and maintenance task alone.

The goal of the Sakai project was to take the “best of breed” of the current university-developed learning management systems and produce one system that included the best of each. As a key founding commitment, the Sakai project was going to operate on open source principles, and every school that received Sakai funding was required to agree to give up all rights to commercial gain from the software it produced as part of the project.

Demanding these open source principles was quite necessary because university adopters of “free” software had seen the pattern more than once: a piece of software started out as a “free and collective effort” and then, once it had a few customers, the university that owned the software sold it to a commercial company along with the customers who had adopted it. The university that had originally written the program typically made some money and was given the right to use the software forever, but the adopting schools were given no such deal. They were usually forced to pay the new owner of their software to continue to use it.

So Sakai was to be owned by no university – it was to be owned by the collective. That way all the participants in the project could be assured that the software would stay free forever and that no school would profit from participation in Sakai by selling the adopters of the software to a commercial vendor.

The University of Michigan was selected as the lead institution for the Sakai project; the Principal Investigator for the Andrew W. Mellon Foundation grant was Joseph Hardin, and I was to be the Chief Architect for the project. The three other partner schools were Indiana University, MIT, and Stanford University. All the schools had a very strong track record of leadership in software for teaching and learning. The Sakai project also included the uPortal project as well as continued funding for the OKI project.

As a condition of being a partner in the project, each school was required to sign an agreement that they would forgo any commercial gain from the software developed as part of the Sakai project. This agreement was relatively easy for the University of Michigan and Indiana University to sign, but both Stanford and MIT had made significant revenue from licensing software over the years, so it was pretty impressive that they decided to agree to the terms and join the project. There was a fifth school that was considered as a potential partner and wanted a few weasel words put into their contract terms for the intellectual property. We just said “no thank you” and went ahead with the four core schools. A four-way split would be more lucrative than a five-way split, so there was little reason to compromise the core principle of giving away the intellectual property forever.

A key element of the Sakai proposal was the notion of “community source”. We wanted to operate using open source principles but with more central coordination than was typical in open source projects. The idea was that each of the four schools would contribute five staff members to a central pool of talent that would be centrally managed in order to build a coherent product that could meet the collective needs of all four schools.

The combination of the outstanding team of schools, the community source approach, and the fact that it was a good time to try to build cross-school coordination in the area of Learning Management Systems led to the Andrew W. Mellon Foundation awarding the University of Michigan and its partners $2.4 million over a two-year period starting in January 2004 to build the “one open source LMS” that could bring together a fragmented higher education market in which each school was building its own independent LMS.

The plan seemed simple enough and almost certain to succeed.

Copyright, Charles Severance 2010 – All Rights Reserved