Column: IEEE Computer Computing Conversations

Starting with the January 2012 issue of IEEE Computer magazine, I am the writer/editor of a monthly column titled “Computing Conversations.” This new column is part of an overall strategy to move IEEE Computer from a purely paper magazine to a high-quality digital magazine with extensive multimedia content.

http://computer.org/computingconversations

The purpose of the column is for all of us to get to know the people who have created and defined the computing field. Much of modern-day computing can be traced to innovations starting in the 1940s. Never in human history has a major field emerged and matured in a single generation. To better understand where computing might be going, it is important to know our past and how we arrived at our current state. This column will be dedicated to meeting and talking with people in the field of computing, ranging from the early pioneers to the current visionaries. Multimedia and video will be an essential part of these conversations so that we can use them in our teaching to help explain the field to new technologists as they enter it.

Using video is important as it allows us to give a face and voice to people in our field and helps form an oral history of the profession. I also hope to produce materials that can be used in classrooms to help students make a connection with the people who have created our field.

Each month, I will write a blog post about the column that will include a brief summary, a link to the video materials on the IEEE Computer Society YouTube channel, a link to my own high-quality archive of the videos on Vimeo, and an audio podcast of me reading the actual column as well as some back story on how the video was produced.

I have been purchasing new video equipment, shooting video, and upgrading all of my video skills to High Definition since September. I have been pestering my friends to review secret drafts of the videos as I worked through technical issues, so it is nice to finally go public in the January issue and share them with everyone.

The first column is titled, “The Second Order Effects of Steve Jobs” and the second column is an interview with Brendan Eich talking about the creation of the JavaScript language.

I need to thank the outstanding IEEE Computer editorial staff (Judi Prow, Jennifer Stout, and Brian Brannon) for their superb attention to detail and suggestions for improvement in the videos and the columns. I also want to thank the IEEE Computer Editor in Chief, Ron Vetter of UNC Wilmington, for involving me in the editorial board and supporting this grand experiment.

I am looking forward to writing the columns and producing the videos and would love to hear any comments you might have. If you want to follow along as I travel and gather video, you can follow me on Twitter at @drchuck. You can see who I am interviewing and where I am travelling and get a sneak peek of upcoming material.

IMS Common Cartridge (CC) Basic LTI Links and Custom Parameters

There is a great discussion going on between Brian Whitmer (Instructure Canvas) and David Lippman (IMathAS) in a Canvas forum about custom parameters and resource links.

http://groups.google.com/group/canvas-lms-users/browse_thread/thread/bd2932b9a6a8bb6e

My Response

Brian,

While the spec is not explicit in this area, the simple fact that each LTI placement in a CC 1.1 cartridge has its own custom parameters, and that a cartridge can contain many links, naturally implies that each link can have its own distinct custom parameters. In the LTI 1.0 specification, the notion of how custom parameters work was left vague when it comes to authoring links (i.e., not as part of an import).

When folks read the LTI 1.0 spec, some implementations made it so that (a) the single shared tool configuration was the only place custom parameters could be set, while other implementations made it so that (b) custom parameters could only be set on the individual resource link. The more common approach was (a), which sounds like the approach you took in Canvas.

This worked well until you consider the export/import cartridge use case, where tools like an assessment engine want to store a setting like “which assessment” in a custom field. Pearson, McGraw-Hill, and lots of other folks want to put many resource links in a cartridge and have those links point to different resources *without* adding a parameter to the URL (which is not recommended, and would mess up much more than adding a custom parameter would). Some of my presentations (i.e., using resource_link_id configuration) describe how a tool can work around the lack of per-resource custom parameter support in an LMS. This workaround is sufficient for initial link authoring in an LMS where the course is being built, but it fails across export/import because resource_link_ids are not carried in the CC.
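As a hedged illustration of the resource_link_id workaround (not code from any actual tool – the names and the storage scheme here are invented), a tool can keep its own mapping from each launch’s resource_link_id to a tool-side setting:

```python
# Sketch of the resource_link_id workaround: the tool remembers which
# assessment belongs to each LMS resource link in its own storage,
# keyed by the resource_link_id sent on every launch.

settings = {}  # resource_link_id -> assessment id, persisted by the tool


def handle_launch(post_params):
    """Handle a Basic LTI launch; return the assessment to show, if any."""
    link_id = post_params["resource_link_id"]
    if link_id not in settings:
        return None  # first launch: the instructor must pick an assessment
    return settings[link_id]


def save_instructor_choice(post_params, assessment_id):
    # The instructor configured the link inside the tool; remember the choice.
    settings[post_params["resource_link_id"]] = assessment_id
```

The weakness is exactly what the paragraph above describes: after an export/import, the new course gets fresh resource_link_ids and this mapping is lost.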

So this meant that LMSs could not be used to author proper cartridges unless they allowed per-link custom parameters that could persist across an export/import path. Yikes! We figured that many who wanted to make cartridges would simply use an LMS to do it while waiting for a specific authoring tool or process.

So in LTI 1.1, we made it more explicit in section B.7 with the following new text:

B.7.1 Instructor Creates New Tools
In the case that the TC decides to allow the instructor to place tools without administrator action by getting a URL, key, and secret from a TP and plugging them into a course structure, it is a good practice to allow the instructor to enter custom parameters without requiring administrator assistance. Some TPs will need custom parameters to function properly. Also if the instructor is using a TC to produce an IMS Common Cartridge with LTI links in the cartridge, often setting custom parameters for a tool placement is an essential part of authoring a cartridge.

B.7.2 Admin Creates New Tools, Instructor Only Places Tools
Another common case is to only allow the administrator to create new tools (i.e., key/secret/url) and then let the instructor place those pre-configured tools in their courses. In this use case, instructors never handle url/key/secret values. Even in this use case it is important to allow the instructor to be able to set or augment custom parameters for each placement. These parameters may be necessary for the TP to function and/or may be necessary if the instructor is building a course in the TC to be exported into an IMS Common Cartridge. It is not necessary to always give the instructor the option to configure custom parameters, but it should be possible for the administrator to make a choice to reveal a user interface to set custom parameters.

You can also read B.7.2 as applying to a situation where an instructor makes their own tool configuration to share across multiple resource links. The best practice is to have both the shared configuration and the resource link contribute to the custom parameters.

You merge the custom parameters at launch if they exist in both places – I think my code for this treats the shared/admin values as having higher precedence. You would naturally take the same merge approach when exporting to a CC 1.1. Since CC 1.1 has no concept of a shared configuration – it only knows about the resource link – you have to pull the inherited parameters into each link on export or data will be lost.
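Here is a minimal sketch of that merge, assuming Python dictionaries of parameter names to values. The key names are invented for illustration; the precedence shown (shared/admin wins on conflict) matches the approach described above:

```python
def merged_custom_params(shared, per_link):
    """Merge custom parameters from the shared (admin) tool configuration
    and the individual resource link, giving the shared/admin values
    higher precedence when the same key appears in both places."""
    merged = dict(per_link)   # start with the resource-link values
    merged.update(shared)     # shared/admin values win on conflict
    return merged


launch = merged_custom_params(
    shared={"custom_mode": "exam"},
    per_link={"custom_assessment": "quiz-42", "custom_mode": "practice"},
)
# launch -> {"custom_assessment": "quiz-42", "custom_mode": "exam"}
```

The same function works at export time: run it per link and write the merged result into the cartridge, since CC 1.1 has nowhere else to keep the shared values.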

Now let’s talk about LTI 2.0, which does not exist yet, and how the draft versions so far treat this. LTI 2.0’s view of a cartridge explicitly models two separate items. The first is a tool configuration with URL, vendor, etc. The tool configuration registers itself, MIME-type style, as a resource handler for (say) “pearson-mymathlab-quiz”, indicating that once this tool is installed it handles resources of type “pearson-mymathlab-quiz”. The second is a resource link – the actual link in the course structure – that includes a title and custom parameters and needs a resource handler of type “pearson-mymathlab-quiz”.
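A toy sketch of that indirection (every name and URL here is invented) shows why the resource handler abstraction is powerful – one admin-level change re-points every link of that type:

```python
# Links name a resource handler type; the LMS maps that type to an
# installed tool configuration. Links never store a launch URL directly.

tool_configs = {
    "mymathlab-us": {"launch_url": "https://mml.example.com/lti"},
    "mymathlab-za": {"launch_url": "https://mml.example.ac.za/lti"},
}

# The admin controls this mapping between handler types and tools.
handler_map = {"pearson-mymathlab-quiz": "mymathlab-us"}

resource_link = {
    "title": "Chapter 3 Quiz",
    "resource_handler": "pearson-mymathlab-quiz",
    "custom": {"assessment": "ch3-quiz"},
}


def launch_url_for(link):
    # Resolve the link's handler type to a tool configuration at launch time.
    config = tool_configs[handler_map[link["resource_handler"]]]
    return config["launch_url"]


# Re-pointing all MyMathLab links at a local copy is one admin change:
handler_map["pearson-mymathlab-quiz"] = "mymathlab-za"
```

This is exactly the firewall / local-copy use case: no link in the course has to change when the handler is remapped.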

If you look at the LTI 1.0 / CC 1.1 data model, for simplification, we condensed these into a single structure. Simplification makes things easier sometimes and harder other times.

LTI 2.0 will add two new resource types to a future CC version, keeping the basiclti all-in-one resource type. But my guess is that once the LTI 2.0 CC support makes it into the field, folks will quickly switch as it is *much* prettier. One of the major advantages of the LTI 2.0 approach (at the cost of more UI and workflow complexity) is that since the resource handler idea is a bit of an abstraction between resource links and tool configurations, it allows LMS builders and LMS admins to re-map those resource handlers to solve use cases like living behind a firewall or having a local copy of Pearson MyMathLab in South Africa.

The 2.0 specs are pretty mature, but adoption always takes a while, so we need to focus on the current CC and LTI 1.1 and get them right while we finish up LTI 2.0 and its associated CC release and get it out into the marketplace.

Hope this helps.

The Relationship Between Developers and Operations at Flickr

Ross Harms, formerly of Flickr and currently at Etsy, published a memo he sent around Yahoo! in 2009 explaining the relationship between developers and operations at Flickr:

http://www.kitchensoap.com/2012/01/05/convincing-management-that-cooperation-and-collaboration-was-worth-it

Here is a quote from the post:

I did this in the hope that other Yahoo properties could learn from that team’s process and culture, which we worked really hard at building and keeping. The idea that Development and Operations could: (1) Share responsibility/accountability for availability and performance, (2) Have an equal seat at the table when it came to application and infrastructure design, architecture, and emergency response, (3) Build and maintain a deferential culture to each other when it came to domain expertise, and (4) Cultivate equanimity when it came to emergency response and post-mortem meetings.

My Comment To the Post

Very nice post – and all quite obvious to folks with enough experience across multiple real-world situations. When organizations don’t structure their ops/dev relationships as you describe, it is often out of an obsessive attempt to “eliminate risk”.

The basic (incorrect) premise is that everything the developers do increases risk and that ops’ job is to reduce that risk to zero. Developers are the “problem” and Ops is the “solution”. Or, as you say above, Developers are the “Arsonists” and Ops are the “Firefighters”. Casting the relationship this way leads to ops wanting to limit change, while the devs naturally want the product to move forward so the organization can better serve its stakeholders.

Uninformed ops feel the need to do large tests with complete instances of the product and frozen “new versions,” and as the product gets more complex, these test phases take longer and longer, so more and more features end up in each release.

Again, ops is trying to eliminate risk – but in reality, because each release is larger and larger, there is a super-linear likelihood that something will go wrong. And when there are a lot of features in a package upgrade, folks cannot focus on the changes because there are too many. They hope it is all OK, or sometimes the whole package is declared “bad” without anyone looking for the tiny mistake, and everyone goes back to the drawing board. That further delays the release of functionality and ensures that the next release attempt will be even larger and even more likely to fail. It is a vicious circle that your approach nicely avoids.

The gradual approach you describe allows everyone to focus intently on one or a few changes at a time and do it often enough that you avoid the risk of a large change consisting of lots of details.

I like to think of the approach you describe as “amortizing risk” – there is always a small amount of risk that everyone understands, but you avoid the buildup of accumulated risk inherent in large package upgrades. Again, thanks for the nice description.

Technology Courses at the University of Michigan School of Information

I wrote a summary report for the SI curriculum committee describing how I saw the various technology courses in SI fitting together. I figured I would share the report more broadly to help students decide which courses to take.

In the future SI572 is likely to be renumbered SI672 – but for this post, I use the old numbering scheme.

Nothing here is official – it is just my own opinions.

Course Summaries

These are the courses we have that are technical:

SI502 – Networked Computing: Storage, Communication, and Processing

SI502 is a very introductory course that is required for all students unless they pass the SI502 place-out test. It is very much a survey course covering Python, networks, HTML, CSS, databases, search, and security. The place-out test ensures that students with even a moderate amount of prior technical expertise bypass SI502; at this point about 1/5 of our incoming MSIs do. SI502 is taught at a very moderate pace, and students have small discussion sections to ensure everyone gets the attention they need to master the course material. SI502 is taught to about 140 students each Fall. There is a book, “Python for Informatics” (www.py4inf.com), written specifically for SI502.

SI539 – Design of Complex Websites

SI539 uses Python, HTML, CSS, JavaScript, and jQuery to build a simple web application and deploy it on Google App Engine. There are typically about 80 students in SI539 each semester. There are no discussion sections, but a GSI provides extensive office hours and assistance so that struggling students can get help with the technical assignments in the course. About 60% of the students work pretty much on their own throughout SI539. There is a book written specifically for SI539, published by O’Reilly, titled “Using Google App Engine”.

SI543 – Programming I

SI543 is a traditional “first programming class” in the Java language. While it starts “at the beginning”, it moves quickly through quite a bit of material with the students building desktop-style applications including simple graphics and real-time (i.e. game-like) interactions with their applications.

SI572 – Database Application Design

SI572 covers PHP and SQL and prepares students to write real-world web applications in one of the most popular and widespread programming environments. The course features a significant group project in which a real web application is developed, starting from requirements and going through building and deploying a working application with a polished user interface. SI572 expects that students can program independently but does not specifically demand prior PHP experience. SI572 uses a book titled “Learning PHP, MySQL, and JavaScript”. We teach SI572 to about 50 students each semester.

How these Four Courses Fit Together

SI502 and SI539 are designed as courses for people with little or no programming background. SI502 in particular is designed for students who are “nervous” about computing and technology. The pace of both courses is sufficiently slow to ensure that we don’t lose students. After SI502, students should have a good understanding of programming and technology but not yet be independent programmers. By the end of SI539 they should be well on the way toward being independent programmers. Both SI543 and SI572 expect incoming students to function at about the level of students finishing SI539. It would not hurt to add a loose, suggested prerequisite to SI543 and SI572 along the lines of “an introductory programming course like SI539 or equivalent is suggested”.

Sample Sequences

In this section, I show several paths taken through these courses by different types of students.

Just Starting out In Technology

These students come in dreading SI502, and we work very hard in SI502 to get them excited enough about technology to want to take at least SI539. These students are the target audience of SI502, and we make that very clear. Their sequence is either:

SI502
SI502 -> SI539 (if they get a little confidence)

Solid Programming But No Web Experience

Some of our incoming students have a very narrow computer science background where they were not exposed to the web at all but are solid “pure programmers”. They tend to place out of SI502 and start in SI539:

SI539 -> SI572

Self-Taught Web Programmers on the Web

Some of our incoming students have little formal training in programming, or their formal training was a long time ago, but they have been playing with web technologies all along. Their knowledge is spotty, and they want better coverage in terms of their skills. They also need to see how programming is done on desktops. There is usually no point in them taking SI539 – it is too easy and too slow to keep their interest.

SI572 -> SI543 (either order works, actually)

Interest In Data Analysis

Sometimes students are uninterested in building web applications and just want to do data analysis.

SI502 -> SI601 -> SI618

Traditionally, SI601 (Data Manipulation) and SI618 (Exploratory Data Analysis) have been Perl courses because that is how the teachers of those courses developed their assignments. But with someone new teaching these courses, there is a possibility they might switch to Python. That would make them articulate nicely with SI502, particularly given that SI502 is very data-oriented and includes a module on regular expressions.

Skilled Web Master

These students start with SI572 and move through this set of courses:

SI520 – Graphic Design
SI572 – Database Applications
SI634 – Application Platform Configuration (Drupal)
SI635 – Application Platform Customization (Drupal)
SI658 – Information Architecture

This is a great set of courses that exposes the students to a wide range of applicable technologies and techniques.

Gaps and Opportunities

Given the demands and interests I am hearing from the students in these classes, I get a sense there is demand for the following topics – we may only need seven weeks on each:

– A course in mobile computing (Java/Android and/or Objective-C/Apple)

– A course where advanced JavaScript (e.g., jQuery) is taught

– A course covering advanced data manipulation, which would help SI508, SI601, SI618, and similar courses get started on their topics with less remedial instruction

– A course that combines graphics and CSS to make sophisticated, gorgeous, and functional web sites

Conclusion

Over the past four years, we have come to a pretty good understanding of our four core programming courses and where they fit in our curriculum to serve our entire range of MSI students, from nervous beginners to students with computer science undergraduate degrees and work experience in web programming. Probably the key fact to understand is that SI502 and SI539 are very different courses from SI572 and SI543. SI502 and SI539 are broad courses designed to gently build confidence and competency, whereas SI572 and SI543 are real, solid programming courses that produce graduates with programming skills relevant to the marketplace.

Mexican Food of Note in Southeast Michigan

At the University of Michigan School of Information we have this mailing list we call “si.all.open” that is prone to interesting long threads about everything from traffic tickets to lost laptop power supplies to where the GIF has moved to since the first bar got too full.

Recently there was a thread titled “[Ann Arbor Mexican Food] Amazing Discovery” that started with one person talking about a restaurant they particularly liked. This triggered many follow-on comments that made me very hungry.

I decided to turn the thread into a personal gastronomic TODO list. It may take a while before I get to all these places – but now I have a plan. Just Google these strings – you will find them quickly enough.

Taqueria La Fiesta, in Ypsilanti, MI

Taqueria La Fiesta at 4060 Packard, Ann Arbor, MI

Evie’s Tamales, Detroit, MI

Taqueria Lupita’s, Detroit, MI

Nuevo Leon, Detroit, MI

El Zocalo, Detroit, MI

TMAZ Taqueria, 3182 Packard Rd, Ann Arbor, MI 48108

Xochimilco, Detroit MI

Book Review: Luke Fernandez

Luke Fernandez of Weber State posted a cool review of my Sakai book:

http://itintheuniversity.blogspot.com/

Here is a quote:

As the Soviets used to say (and as historians often still profess), “the future may be certain but the past is always contested territory.” Which is another way of saying that if Chuck has offered up an intriguing story, I hope it doesn’t end up being the authoritative history of Sakai. The sub-title, after all, is a “retrospective diary” rather than a history, which would suggest that many other stories are worth telling.

Here is my reaction:

Luke, thanks for such an insightful review of my book. You hit so many of the themes of the book perfectly. I would amplify that I absolutely do not intend for this to be the definitive history of the Sakai project from 2003-2007. Others have completely valid perspectives, and I wish they would write their own (perhaps contradictory) views of the events; I would love to let folks assemble the “real” history from all those perspectives. You are also right that my primary motivation is not simply to “stick it to the man” – the thing I fight for is for the creative types and management types to function as peers, rather than the typical structure where management is “above” the creative types. I am fighting for the freedom of the creative types to take part in the decision making.

IMS Basic LTI @ Blackboard DevCon 2011

During my talk at the Blackboard DevCon, I was explaining why the development of standards seems quite dull – but is essential. I likened it to building a sewer system in a new subdivision long before any homes are built and long before any people live in the subdivision. And once the people live there, all the fine workmanship in building the sewer is deeply underground and never seen again (hopefully).

Here was my quote:

“If the toilet does not flush, no one will live in that house. That is why I am so excited about plumbing.”

Here are the slides for the talk:

http://slidesha.re/qqYy0P

Here is a video of the Demo/HACK of Blackboard as a Basic LTI Provider:

http://www.vimeo.com/26310497

John Fontaine did most of the work on the demo by hacking up a Basic LTI Provider Building Block, roughly modeled on the ProviderServlet that makes Sakai a Provider. I contributed by checking code into his building block that broke it, triggering John to fix it.

Here is that code on www.oscelot.org :

http://projects.oscelot.org/gf/project/oscelotblti/

I must repeat that this is *not* a product direction – it was just a fun five-hour hackathon result.

Learning Tools Interoperability v1.1 Public Draft Released

It is like my Christmas present came early this year!

After a successful IMS quarterly meeting in Commerce, Texas in the second week of November where we demonstrated prototype interoperable implementations of LTI v1.1 with Moodle 2.2, Sakai 2.9 and SPV Software’s Building Block and PowerLink, the spec passed the threshold test for going to Public Draft.

The primary new feature for LTI v1.1 is the ability to send grades back from an external tool to an LMS.
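For the curious, the grade send-back in the public draft is a small POX (Plain Old XML) “replaceResult” message that the tool POSTs to the LMS’s outcome service URL, signed with OAuth body signing. Here is a hedged sketch of building just the XML body (the OAuth signing step is omitted, and the sourcedid and score values are illustrative):

```python
# Build an LTI 1.1 Basic Outcomes replaceResultRequest body. The
# lis_result_sourcedid comes from the original launch; the score is a
# normalized value between 0.0 and 1.0.

def replace_result_xml(sourcedid, score, message_id="1"):
    """Return a replaceResultRequest POX body as a string."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<imsx_POXEnvelopeRequest xmlns="http://www.imsglobal.org/services/ltiv1p1/xsd/imsoms_v1p0">
  <imsx_POXHeader>
    <imsx_POXRequestHeaderInfo>
      <imsx_version>V1.0</imsx_version>
      <imsx_messageIdentifier>{message_id}</imsx_messageIdentifier>
    </imsx_POXRequestHeaderInfo>
  </imsx_POXHeader>
  <imsx_POXBody>
    <replaceResultRequest>
      <resultRecord>
        <sourcedGUID><sourcedId>{sourcedid}</sourcedId></sourcedGUID>
        <result><resultScore><language>en</language><textString>{score}</textString></resultScore></result>
      </resultRecord>
    </replaceResultRequest>
  </imsx_POXBody>
</imsx_POXEnvelopeRequest>"""
```

Check the public draft itself for the authoritative element names and the signing details; this sketch is just to give a feel for how lightweight the service is.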

In the IMS process, when a specification reaches Public Draft it is very mature and nearly complete and folks should review it quickly to see if there are any issues because it is very likely to be final in the next 30-60 days.

Documentation: You may download the public draft at: http://www.imsglobal.org/lti/index.html

Public Discussion Forum: http://www.imsglobal.org/community/forum/categories.cfm?catid=44

Open Source Implementation Code: http://code.google.com/p/ims-dev/

IMS Members-Only Benefits

Testing: Only Members may review and begin to practice for the LTI v1.1 Conformance Certification at: http://www.imsglobal.org/developers/alliance/LTI/cert-v1p1/

Feedback and Support: Only Members receive personalized support. Please post any comments/questions in the CC/LTI Alliance at: http://www.imsglobal.org/community/forum/categories.cfm?catid=138&entercat=y

Note: I am compensated for my work in IMS as a consultant and work extensively on IMS LTI.