Monthly Archives: March 2010

Upgrading My Blog From Movable Type 2.65 to WordPress 2.9.2 While Maintaining PageRank

My blog has been running since 2003 using Movable Type 2.65 – with Lance, Zach, and Steve all suggesting I upgrade, this weekend turned out to be when I decided to give it a try. I am also starting to use WordPress in my courses via IMS Basic LTI – so I figured I might as well find my way around it. My site has decent PageRank since I have been doing this for seven years now.

I had several goals in the conversion: (a) maintain my Google PageRank on the pages, (b) keep all my old posts and support all the old URLs, and (c) keep the post identifiers the same in my WordPress database.

I am not much of an expert on Google PageRank – but I did watch this excellent talk from Google I/O 2008 by Maile Ohye:

Google I/O 2008 – Search Friendly Development

Maile repeatedly talks about the need for permanent redirects when web sites are changed – so I took that to heart. I recommend the video to *anyone* who is interested in maintaining or increasing PageRank legitimately.

I found a few helpful blog posts – but I waited so long to convert that all the instructions were pretty much out of date. This blog post from Scott Yang was my inspiration – but I did have to adapt things to a newer version of WordPress:

So the first thing to do is export from Movable Type while retaining the post IDs. Here I followed Scott's directions, slightly adapted to my version of Movable Type. This required editing the file ./lib/MT/App/ to add the 'POSTID' line at line 2970 of my file:

POSTID: <$MTEntryID$>
DATE: <$MTEntryDate format="%m/%d/%Y %I:%M:%S %p"$>

Then, also inspired by Scott's post, I went into the Movable Type user interface to export all entries, comments, and trackbacks into a plain text file.

My old blog was installed at csev-blog so I initially installed WordPress at csev_blog (with an underscore). I later renamed it to csev-blog, as described below.

Then I made some changes to my WordPress installation. I edited the file ./wp-admin/import/mt.php at line 418:

                        } else if ( 0 === strpos($line, "POSTID:") ) {
                                $postid = trim( substr($line, strlen("POSTID:")) );
                                $post->import_id = $postid;
                        } else if ( 0 === strpos($line, "EMAIL:") ) {

It turns out that WordPress now understands the notion of import_id – so there was no need to change the SQL (per Scott's post); the insert is no longer in ./wp-admin/import/mt.php anyway, and no further changes were necessary.

Then I copied the exported text file into ./wp-content/mt-export.txt and used the WordPress user interface to do the import without the upload. It would only import about 250 entries before hitting a run-time limit. I checked MySQL to make sure the ID values in the wp_posts table were really being taken from the MT import.

I then edited the file ./wp-content/mt-export.txt to delete the first 249 posts and re-ran the import. The WordPress import is smart enough not to double-import – so I always kept the last successfully imported post in the file to be sure I got them all. By deleting the already-imported posts and re-running the import over and over – after three imports, I had all 638 posts imported.
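Rather than deleting the already-imported posts by hand in an editor, the trimming can be scripted. Here is a hypothetical little helper in Python – it assumes the Movable Type export format, where each entry ends with a separator line of eight dashes; adjust if your export differs.

```python
# trim_export.py - hypothetical helper for trimming already-imported
# posts from mt-export.txt. Assumes the Movable Type export format,
# where each entry ends with a separator line of eight dashes.
SEPARATOR = "--------"

def drop_entries(lines, n):
    """Return the export file's lines with the first n entries removed."""
    if n <= 0:
        return list(lines)
    seen = 0
    for i, line in enumerate(lines):
        if line.rstrip("\r\n") == SEPARATOR:
            seen += 1
            if seen == n:
                # Everything after the n-th separator is what remains.
                return lines[i + 1:]
    return []  # fewer than n entries in the file
```

To use it you would read mt-export.txt, call drop_entries(lines, 249), and write the result back before re-running the import.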

The next task was to edit my .htaccess to make my old URLs work. I needed to fix individual posts like 000749.html and monthly digests like 2009_12.html and map them to my new permalink structure. I used a permalink structure like 2010/03/post-title to make my PageRank be as cool as it could be.

Here is my .htaccess file.

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /csev-blog/
RewriteRule ^([0-9]{4})_([0-9]{2})\.html$ /csev-blog/$1/$2/ [R=permanent,L]
RewriteRule ^([0-9]{6})\.html$ /csev-blog/mt-singlepost.php?p=$1 [L]
RewriteRule ^index\.rdf$ /csev-blog/feed/rdf/ [R=permanent,L]
RewriteRule ^index\.xml$ /csev-blog/feed/ [R=permanent,L]
RewriteRule ^atom\.xml$ /csev-blog/feed/atom/ [R=permanent,L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /csev-blog/index.php [L]
</IfModule>

# END WordPress

The simplest rule was for the monthly digest files like 2009_12.html, which could be directly redirected to the new permalink structure of /2009/12/. I wanted the redirects to be permanent and I wanted a single redirect to transfer PageRank as quickly and cleanly as possible – so make sure to include the trailing slash.
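The digest mapping can be sanity-checked outside of Apache. Here is a small Python sketch of the same pattern (the /csev-blog/ prefix matches my install; yours will differ):

```python
import re

# Python equivalent of the monthly-digest RewriteRule, handy for
# sanity-checking the pattern without touching the live server.
DIGEST = re.compile(r"^([0-9]{4})_([0-9]{2})\.html$")

def digest_redirect(filename):
    """Map an old monthly digest file name to the new permalink, or None."""
    m = DIGEST.match(filename)
    if m is None:
        return None
    # Note the trailing slash - one clean, permanent redirect.
    return "/csev-blog/%s/%s/" % (m.group(1), m.group(2))
```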

The three lines for the RSS feeds were similarly simple and done as permanent redirects. I wrote a bit of code called mt-feed.php, designed to fake the old RSS feed URLs forever, that I never used – I almost got it working, but it was a bit flaky in some readers and I decided to fall back to the redirect. I include the code for mt-feed.php at the end of the post – make sure to test everything carefully before using it. I did not think that Google cared too much about the RSS feeds w.r.t. PageRank, so I took the easy way out with the redirects.

The trickiest bit was mapping the individual posts (like 000736.html) to their new locations. I could have taken the easy way out and made a rewrite rule to send them all to index.php?p=000736, similar to how Scott did it – but since my WordPress permalink structure was /year/month/title, this would mean two redirects: the first from 000736.html to index.php?p=000736, and the second from index.php?p=000736 to /2008/10/some-title. I wanted Google to have every chance to transfer my PageRank – so I wanted one redirect, and I wanted it to be permanent.

So my rewrite rule transformed the individual post URLs to mt-singlepost.php?p=000736, and I wrote the following code in mt-singlepost.php:


<?php
// mt-singlepost.php - look up the old post ID and issue a single
// permanent redirect to the new permalink.
require('./wp-load.php');
query_posts('p='.(int) $_REQUEST['p']);
if ( have_posts() ) {
    the_post();
    header("HTTP/1.1 301 Moved Permanently");
    header('Location: '.get_permalink());
    exit;
}
header("HTTP/1.1 404 Not Found");

Again, an adaptation of Scott's pattern, but using more modern calls for WordPress 2.9.2. This gave me my single, permanent (301) redirect so I could transfer PageRank efficiently.

By running both blogs simultaneously, with the original Movable Type blog on csev-blog and the new WordPress blog on csev_blog, I could test lots of URLs and be quite patient going back and forth. But once things worked, it was time to rename the folder on the server.

Important – make a copy of your .htaccess file before taking this step, because changing the folder in WordPress will rewrite the .htaccess file, wiping out all your precious changes. SAVE YOUR .htaccess FILE!!!!!

Go into the WordPress admin interface and, under Settings, rename the blog's URL from csev_blog to csev-blog. Then rename the folders on the server. Then immediately edit the .htaccess file, putting back your clever redirects and making sure to change csev_ to csev- in the rules.

Test all the old URLs – there should be exactly one redirect for each. Using Firebug you should be able to see the redirects in action and really verify things work. I found Chrome was the best way to test the RSS redirects – both Safari and Firefox get way too tricky when handling RSS feeds to even see what happened – thankfully my version of Chrome was clueless about RSS feeds, so I could see what was really happening and verify proper operation. I am sure some new version of Chrome will get "smarter" and make it impossible to figure this out. Then I will write some Python code to do a urllib GET.
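A sketch of that urllib checker, in modern Python (the same idea works with urllib2 in Python 2). It refuses to follow redirects so you can see the raw status and Location header for each old URL; the commented-out URL at the bottom is just an example.

```python
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Refuse to follow redirects so we can inspect them directly."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def check_redirect(url):
    """Return (status, location) for a single GET of url."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        resp = opener.open(url)
        return resp.status, None
    except urllib.error.HTTPError as e:
        # 301s (and real errors) surface here because we refused to
        # follow the redirect above.
        return e.code, e.headers.get("Location")

# Example (hypothetical URL):
# print(check_redirect("http://www.dr-chuck.com/csev-blog/2009_12.html"))
```

For a permanent redirect you should see a 301 status and a Location header pointing straight at the new permalink, with the trailing slash.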

So things should now be OK.

As promised, here is the code for the RSS hack that I never deployed. Again, this never worked perfectly for me, so test it a lot before you trust it. I called this file mt-feed.php:

<?php
// mt-feed.php - proxy an old feed URL to the corresponding new
// WordPress feed. Never deployed; test carefully before trusting it.
require('./wp-load.php');
$thetype = $_REQUEST['type'];
$rssurl = get_bloginfo('rss_url');
if ( $thetype == 'rss2' || $thetype == 'atom' || $thetype == 'rdf' ) {
    $rssurl = get_bloginfo($thetype.'_url');
}
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $rssurl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$output = curl_exec($ch);
$content_type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);
header('Content-type: '.$content_type);
echo $output;

I hope you find this helpful. I love WordPress and the fact that my new blog can accept comments! I have moved forward in time nearly seven years in terms of blog software and it feels pretty good.

I want to thank Scott Yang for such a good blog post that showed me the way forward. With his patterns – all I needed to do was map things to the newer version of WordPress.

A Simple New Post in WordPress

This is a pretty simple new post – the first one in WordPress.

I wonder how paragraph spacing works.  It seems as though my paragraphs turned into br tags on my older posts.

Ah well – at least I got them converted and the old URLs are all working.

Here is some Python code:

print "Hello world!"
print "The end"

And I have comments so folks can better disagree with me!

Removal of GM Headrest (i.e. Pontiac Sunfire) without Tool J 42214

Many GM vehicles have a hidden latch to remove the headrest (i.e. there is no button) – so it appears that the headrests are impossible to remove, such as for the installation of sweet seat covers with skulls on them.  Here is a great set of instructions if you own the headrest removal tool (J 42214).

Note this is a 2000 Pontiac Sunfire with cloth seats I am working on – that is Brent’s first car – so I am less concerned about cosmetics. If you have a 2010 Cadillac CTX with leather seats, I would go to the dealership instead of using this technique and ending up with a giant tear in your leather seats!

Of course, if you are installing aftermarket seat covers from AutoZone with skulls on them in a 2010 Cadillac CTX, it raises some questions broader than just whether or not to purchase a headrest removal tool.

But of course, I did not want to wait until a $72.00 tool was available to install $20.00 seat covers with skulls on them – so I just went after it with a screwdriver and needle nose pliers. The secret is in the image at the right (click on the image for a larger version) that shows the detail of the locking spring and how you are *supposed* to pop the spring off using the special tool.

If instead you use a screwdriver and work down the seat cover and padding about 1/2 inch you will see the clip. Using a combination of the screwdriver and pliers you can remove the clip (on both sides of the headrest support) and pop off the headrest.

The number 3 is toward the front of the car – the image suggests that you pop the spring from the back using the special tool while lifting the headrest – and if you are really good, maybe this will work for you. But what I did was use my screwdriver at (2), pull it toward the front of the car, and then take the spring off completely with needle-nose pliers – then everything becomes easy.
It is far easier to put the spring clips back in with the headrests off, since you may need to push the clip outward (near 3) with a screwdriver inside the tube in order to completely seat the spring before reinstalling the headrests.

So you install your totally sweet seat covers with skulls on them, and pop the headrests back on!

Community Source – Universities Building Open Source Software (Book Excerpt)

Copyright, Charles Severance 2010 – All Rights Reserved
From time to time I will put up draft excerpts of the book I am writing about Sakai. Feel free to comment and/or correct where I am mistaken by sending me E-Mail.

The Sakai project was formed in a moment of transition between a hierarchical / centrally controlled approach in campus information technology infrastructure and an organic / distributed approach to coordinating across a community of like-minded individuals and organizations. As a result, the Sakai effort has always been at the boundary between old-school approaches and new-age approaches to technology development and deployment. At times Sakai has achieved great success through a blend of old-school and new-age approaches and at other times operating at that boundary has led to great long-lasting conflict and stresses in the community.

By the year 2000, the concept of open source as an approach to software development was well established with the Linux operating system and Apache Foundation projects as solid sustainable examples. These efforts collected the talents of volunteer developers from around the world with relatively loose leadership and a commitment to working together to solve a common need. Generally the developers who worked in these projects fell into one of several categories: (1) volunteers who had paying day jobs who worked on the software in their spare time, (2) consultants or small companies who made their living doing consulting on the open source software and gained competitive advantage from their involvement in the project, or (3) individuals hired by large companies such as IBM who were given release time to contribute to these projects to support the projects and ensure that the company had a voice in the projects going forward.

Many universities had used open source software throughout their enterprises since the early 1990s. Open source software was ideal for university use because it was low-cost and allowed university technology staff to make small changes to the software where some particular or unique feature was needed. Open source gave universities a sense of “control” over their destiny and future costs when it came to their information technology solutions. Open source software also allowed a certain level of “agility” as technology needs shifted throughout the 1990s and things like the Internet and World Wide Web became part of the required campus information technology suite of services.

However, few universities were regular contributors to open source efforts. Universities typically felt that the software and other intellectual property produced by their employees had potential value and if a staff member built something that was valuable, then the university wanted to profit from that creation. After all, the university had paid the person’s salary while they were doing the work. It made perfect sense to a university administrator or attorney but was very frustrating to individual university employees who yearned to work with their colleagues around the world as part of a community effort.

This led to Universities writing a lot of software for their own use but not sharing that software with other universities unless there was some profit to be made on the interaction. And because no University was willing to invest the time and staff in making their software commercial-quality, most University-developed software was “just barely good enough” for local use.

One of the most common examples of “locally developed” software in use at universities in the late 1990’s was the campus-wide Learning Management System. Learning Management Systems were pretty basic software and allowed instructors to distribute materials to students and interact with them using e-mail, chat, or threaded discussion forums. These systems were simple enough that it only took a small amount of resources to get a basic system developed, up and running, with a team of 1-2 developers and less than a year of effort. Often the efforts were done “on the side” or “below the radar” of the typical campus IT operations.

In some cases these university-developed course management systems developed to the point where they were purchased and turned into today’s commercial Learning Management Systems. The WebCT commercial LMS product was based on software developed at the University of British Columbia in 1995. The initial Blackboard product was based on a system developed at Cornell University in 1997. The ANGEL Learning system was created in 2000 based on technology developed at Indiana University-Purdue University at Indianapolis (IUPUI). The Prometheus system was developed at George Washington University and later purchased by Blackboard in 2002.

Often the universities would make some money in these transactions, but the real winners were the companies that took the software, cleaned it up and began to sell it to all of the other universities who were growing tired of their local home-grown systems. These companies started building market share and applying economies of scale to their software development. In time these companies began merging and buying one another to become ever larger and more pervasive in the marketplace. At the time of this writing, Blackboard has purchased Prometheus, WebCT, and ANGEL resulting in a very large market share.

??? Did D2L come from McGill University ?? When/How ??

Stanford University developed the CourseWork system in 2001 and began to share the software with other universities around the world. Also in 2001, the Moodle project started with a simple LMS and an open source license. The MIT Open Knowledge Initiative (OKI) was a project funded by the Andrew W. Mellon Foundation in 2001 to try to bring order to the chaos of so many independent LMS projects and so many divergent Learning Management Systems at so many universities. Other projects such as Bodington at the University of Leeds, OLAT at the University of Zurich, and CHEF from the University of Michigan were pursuing an open source approach and trying to convince other schools that their solutions were something that could be adopted.

From 2001 through 2003, the MIT OKI project regularly brought together many of the leading thinkers about LMS systems and technology from universities around the world. The OKI meetings and discussions began to form a community of technical staff who slowly started to know one another and realized that, even though they worked at many different organizations, they were all facing the same problems and challenges.

As the OKI project funding was ending in 2003, several of the participants in the OKI efforts decided that perhaps they should band together to form a consortium and work more closely together to develop and release a shared Learning Management System that they would all work on collectively and all use in production at their institutions. By pooling resources, the partners would gain much greater leverage, and each school would not have to take on the entire software development, testing, and maintenance task alone.

The goal of the Sakai project was to take the “best of breed” of the current university-developed learning management systems and produce one system that included the best of each of the systems. As a key founding principle, the Sakai project was going to operate on open source principles and every school that received Sakai funding was required to agree to give away all their rights to commercial gain from the software that they produced as part of the Sakai project.

Demanding these open source principles was quite necessary because university adopters of “free” software had seen the pattern more than once where a piece of software started out as a “free and collective effort” and then, once it had a few customers, the university that owned the software sold it to a commercial company along with the customers who had adopted it. The university that had originally written the program typically made some money and was given the right to use the software forever, but the adopting schools were given no such deal. They were usually forced to pay the new owner of their software to continue to use it.

So Sakai was to be owned by no university – it was to be owned by the collective. That way all the participants in the project could be assured that the software would stay free forever and that no school would profit from participation in Sakai by selling the adopters of the software to a commercial vendor.

The University of Michigan was selected as the lead institution for the Sakai project; the Principal Investigator for the Andrew W. Mellon Foundation grant was Joseph Hardin, and I was to be the Chief Architect for the project. The three other partner schools were Indiana University, MIT, and Stanford University. All the schools had a very strong track record for leadership in software for teaching and learning. The Sakai project also included the uPortal project as well as continued funding for the OKI project.

As a condition of being a partner in the project, each school was required to sign an agreement that it would forgo any commercial gain from the software developed as part of the Sakai project. This agreement was relatively easy for the University of Michigan and Indiana University to sign, but both Stanford and MIT had made significant revenue from licensing software over the years, so it was pretty impressive that they decided to agree to the terms and join the project. There was a fifth school that was considered as a potential partner but wanted a few weasel words put into the intellectual property terms of its contract. We just said “no thank you” and went ahead with the four core schools. A four-way split would be more lucrative than a five-way split, so there was little reason to compromise the core principle of giving away the intellectual property forever.

A key element of the Sakai proposal was the notion of “community source”. We wanted to operate using open source principles but with more central coordination than was typical in open source projects. The idea was that each of the four schools would contribute five staff members to a central pool of talent that would be centrally managed in order to build a coherent product that could meet the collective needs of all four schools.

The combination of the outstanding team of schools, the community source approach, and the fact that it was a good time to try to build cross-school coordination in the area of Learning Management Systems led to the Andrew W. Mellon Foundation awarding the University of Michigan and its partners $2.4 million over a two-year period starting in January 2004 to build the “one open source LMS” that could bring together a fragmented higher education market in which each school was building its own independent LMS.

The plan seemed simple enough and almost certain to succeed.

Copyright, Charles Severance 2010 – All Rights Reserved

Saying: Definition of “Best Practice”

Here is my “best practice saying”:
The label “best practice” is most typically applied to emergent, questionable or even bad practices so that people holding minority opinions can win arguments.

If you cannot win an argument for your approach based on the merits of the approach – simply label it as a “best practice”. The logic used is, “Who could argue against a best practice?”
Sadly this approach works because of the tendencies of human nature best reflected in the Stanley Milgram experiments:

People using this “best practice” tactic use verbal prods very similar to the Milgram experiment such as making the statement, “The best practice requires that you comply.” or “You must comply.” in a monotone voice while wearing a lab coat and horn-rimmed glasses.

P.S. This is not just about the “deprecating Sakai 2.x static covers” argument (which I fully expect to lose) – it is equally a reference to the IETF’s BCP 47. While BCP 47 may be a fine idea, the 100+ page document approved in September 2009 looks more like wishes and dreams than actual “best practice” – it looks like “what some group of 10 idealists someday hopes to be adopted in the far-off future as best practice”. I have no problem with the ideas in BCP 47 – in many ways the ideas are quite good and very forward-looking – I just find it galling that a significant change in direction for representing locale is arrogantly labelled “best practice” so early in its lifecycle with so little adoption and support around it.

Confessions of a Confused Apple iPad Fanboy

I pre-ordered my Apple iPad in that first-day flurry that allegedly sold 50,000 iPads in the first two hours. I knew I had to have the new iPad and I know the iPad will simply revolutionize everything.
I have no idea what I will do with my iPad when I get it in a few weeks. I really don’t. I never saw much value in an iPod Touch – I am not crazy about music or movies – I have an iPhone and love it because it is a small device that includes data networking, E-Mail, Twitter, web browsing, Music and a Phone – all in a pocket-sized form factor. But my iPad has little of that and needs WiFi to communicate.
I sure hope I like iWork on it.
Here is something I would like – I would like it to be able to handle a bunch of PDF and HTML blobs – like all of the Python Documentation and all the JavaDoc for Sakai and my Python for Informatics Textbook and my Networks textbook by Jon Kleinberg. I don’t want to buy new books – Kindle-style – I want to read the ones I already have. And I want to browse HTML stored locally on the iPad – not over the web or via iTunes. I don’t want it so that the iPad is only useful with a network connection – I want it to be like a book that works without WiFi.
I have this deep and abiding fear that I won’t be able to just put files on my iPad – that somehow I will need to view all data through the iTunes lens or send them to myself in E-Mail as attachments. Or perhaps I need to write an iPad application called “File Folder Downloader/Reader” that is kind of like the Mac OS Finder or Windows Desktop.
I want to put my stuff on the iPad just like on my laptop. I just want to drag and drop it from one to the other and then be able to go “off the net” with my iPad and read it.
I have this big fear that, while I have never jailbroken any Apple product, I will have to jailbreak my iPad immediately so it can store and open local files.
With all of this angst and concern inside of me, then why did I buy one within the first hour?
Uh – “Because” is all I can think of to say. Just had to have one – I will work out the details of why I want one later.

Weird Mood: Pledge of Allegiance to the Web

I don’t know why I am in a weird mood this morning – probably because I am writing the exam for SI502 – which of course includes a question about the request-response cycle.
I started thinking about how important the request-response cycle is to information, networks, and people these days and somehow I leapt to the idea that we needed something like the “Pledge of Allegiance” to say at the beginning of every SI502 class to reinforce this notion of the importance of request-response cycle.
Here is my first draft of the “Pledge of Allegiance to the Web”:
“I pledge allegiance to the web and the open standards upon which it’s built, and to the request-response cycle upon which it stands, one Internet for the greater good, indivisible, with liberty and equal access for all.”
Comments welcome.
Now back to writing that midterm exam for SI502.

Starting on a Sakai Book

I have been trying to find time to write a book about the Sakai experience. I am thinking about a book like “Dreaming in Code” but about Sakai. I will focus on the early years 2004-2007 when I was most heavily involved.
It will be a combination of historical description, open source lessons, fun anecdotes, and inside information about what it took to make Sakai happen.
I am going to try to write it as light and high-level as possible to appeal to as wide an audience as I can.
I am planning on starting in earnest next week and having a draft done by the Sakai conference in June.
As part of my pre-work, I have developed a time-line that I will use to trigger my memories to write into the book.
The timeline is in a Google Doc – if you have any comments or memories or answers to the questions in the document, any help would be much appreciated.

Message Never Sent – Sakai Governance

I was going to write a long message to John Norman about wishing for a 2.x PMC that functioned without higher-level priority-setting authority, and then came across this note from John to Clay and Anthony on January 24 about the Maintenance Team:

“My instinct also is to avoid formality unless it is shown to be needed. Why don’t we just proceed with these people acting in a broadly similar way to an Apache PMC (which allows for anyone to be out of contact for a period) and see if that is good enough. I’m more interested to know if the maintenance team has managed to get any bugs fixed and what issues are concerning them with regard to their mission to improve the code.”

Once I saw this, I realized that I could not say it any better than John already had said.

The message I did not send

On Mar 13, 2010, at 4:03 AM, John Norman wrote:
Personally, I see it as a valuable function of the MT to ‘tidy up’ the code base. I am not sure I care when a decision is made so long as (a) it is properly discussed and consulted on and (b) all decisions that affect a release are reviewed at the same time. So I can view this as early opening of the consultation (good) and potentially a process that allows decisions to be reviewed carefully without rushing (also good). So, while I accept Stephen’s point, I think I might advocate that we don’t wait to consider such issues, but we do insist that they be recommendations and if acted upon (e.g. for testing dependencies) they should be reversible until the tool promotion decision point.
It feels like the PM and/or Product Council should be able to help here

On Mar 14, 2010, at 5:39 AM, John Norman wrote:

I don’t see the ambition for the structures that you attribute to them. I think we are seeing incremental steps to fill gaps and respond to community needs. Of course if you don’t share those needs it may be hard to empathise and my own view may not be that of other participants in the efforts, but I have none of the intent that you seem to rail against.

Chuck’s Response:

John, the thing I rail against is the statement you made above – “It feels like the PM/PC should be able to help here”. I interpret (and I think others do as well) that statement as “perhaps the MT should get approval from the PM/PC in order to properly set overall priorities for the community”. What bothers me is that I see a PM/PC structure that seems primarily set up for Sakai 3.x having “authority” over a structure that is set up for Sakai 2.x simply because 3 > 2.

By the way – I am not sure that the MT in its current form *is* the right place for these 2.x discussions either – what I dream of is one place where 2.x is the sole focus – not four places where 2.x is partial focus. I want separate but equal – I want 2.x to be able to follow its own path without priorities and choices about 2.x being made from the perspective of “what’s best for 3.x”. I want a win-win situation not a one-size-fits-all situation.

I *do* want to make sure that things get added to 2.x in order to make 3.x happen as effectively as possible – as a matter of fact my own contributions in 2.x in the past few months have been heavily focused on improving connectivity between 3.x and 2.x, and I am really excited about the progress and looking forward to when 3.x runs on my campus. Even in some future state after my campus is running Sakai 3.x in hybrid mode and I am happily using it as a teacher, and perhaps I have even become an active Sakai 3.x contributor, I *still* will be opposed to structurally viewing Sakai 2.x priorities through a Sakai 3.x lens in the name of “overall brand consistency”.

And interestingly, if *I* think that the PC/PM is a group that cares mostly about 3.x and you think that the PC/PM cares mostly about 2.x – that itself is an interesting question. Particularly because you are a member of the PC and I am a confused 2.x contributor. :)

On Mar 14, 2010, at 5:39 AM, John Norman wrote:

PS I have long held that Sakai is a community of institutions, rather than a product. I think the efforts look a little different through that lens.

Chuck Wrote:

For Sakai 3.x, I think that it *is* a community of institutions and that is a pretty reasonable structure for Sakai 3.x at this point in its lifecycle – and for Sakai 2.x I think that it is a community of individuals at this point in its lifecycle and that is the perfect structure for Sakai 2.x at this point in time. *Neither is wrong* – it is the nature of the development phase of each project – and why I am uncomfortable with our current PC/PM “one-committee-to-rule-them-all” governance (or perceived governance).