Monthly Archives: January 2012

iBooks Author: Here Comes Apple – The Publishing World Ends in 2017 (rant)

Nine days ago, Apple announced iBooks 2 and iBooks Author. While everyone else was writing “me too” columns critical of the license agreement, I was busily downloading the software, converting my Sakai book to iBooks Author, getting an iTunes Connect Account, figuring out iTunes Publisher, and getting a book published.

My conclusion is that virtually everyone has it wrong. iBooks Author is a brilliant move on Apple’s part.

I have long called for a decent desktop tool as the single MOST CRITICAL missing link in empowering teachers to become authors. I rail over and over that most Open Educational Resource funding is effectively wasted on organizations that are trying to enhance their own brand by republishing faculty products under their name instead of trying to improve teaching and learning.

Here are a few of my recent rants with a theme of editable exchange formats for authors:

What are the Key Challenges for the OER Movement?

OER Rant 2.0 (Angry teacher and student)

Open Educational Resources (OER) – Rant-Fest

So now someone has heard my lonely cry for help and answered. Apple (like it always does) saw the massively obvious use cases missing from the endless lame offerings for authors and has given us a tool that gets the use case right (at least better than anything we have seen so far).

Apple’s iBooks Author tool was announced Thursday January 19 and my Sakai book was uploaded by midnight on Friday January 20. A little mistake in my metadata took a few days to figure out (their tech support is obviously swamped). Once I figured the metadata out and fixed it, 36 hours later I was in the book store with a very pretty book with a swipe-style table of contents, a revenue model, a distribution channel, and the whole works.

You can click the link below or search for “Charles Severance” or “Sakai” in iTunes and you get my book. It is free for a while because a pay account takes longer to get approved than a free account. So hurry and download the book while it is free.

Technically, the path was not too bad. I downloaded a LaTeX to RTF converter, imported the RTF into Pages, and then pasted the text into iBooks Author one chapter at a time, cleaning up extra whitespace here and there. It could have been better and the documentation could have been more helpful – but you cannot argue with moving 230 pages from LaTeX to iBooks Author in 5 hours. And it even caught a couple of spelling errors I had missed.
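
For the curious, here is a rough sketch of the batch-conversion step. It assumes the open-source latex2rtf command-line tool and a book split into one .tex file per chapter – both assumptions on my part, since every LaTeX project is laid out a little differently.

```python
# Rough sketch only: assumes the latex2rtf CLI is installed and that each
# chapter lives in its own .tex file under chapters/.
import glob
import subprocess

for tex in sorted(glob.glob("chapters/*.tex")):
    print("Converting", tex)
    # latex2rtf writes chapter.rtf next to chapter.tex by default
    subprocess.check_call(["latex2rtf", tex])

# The resulting .rtf files then get opened in Pages and pasted into
# iBooks Author one chapter at a time.
```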

But this book (Alpha) is just the start. I want to add pictures, multimedia, and supporting material like e-mails that will slide out over the text. Over the next few months, I will enhance the book with these materials, create an awesome enhanced book that I will call (Beta), and sell it. I want to see how Apple handles all the extra stuff and makes a truly beautiful book.

I need to do a rewrite of my Python for Informatics book because of copyright issues. I am simply going to convert it to iBooks Author first and then do the rewrite there, because it is far easier for me AS AN AUTHOR (are you getting the picture????) to be creative and produce an excellent book when I am spending my time on the creative aspects of an enhanced book rather than worrying about the technical pain of HTML5, which is still emerging.

(This next bit is the MOST IMPORTANT PART of this entire post.)

People complain about the state of iBooks Author at this moment in time. What they miss is that we authors have a pipeline of work. Some books take six months and others take two years, and there is a need to revise over and over. I can live with the few imperfections in today’s iBooks Author because by the time my next books come out, nearly all of those problems will be resolved. I really do not like the Pages/iBooks Author interface – but I have years to figure it out. This is a moving target and Apple is just getting started.

The Market Impact

Remember that it was *years* before iTunes became profitable – it was not an overnight success. All the people nay-saying iBooks Author are reacting to what it is *right now*. Already it is surprisingly impressive – but more important, if folks could stop complaining about the EULA for a second and imagine where this roadmap leads, they would immediately see that the publishing industry has about five years before Apple completely shuts the door.

The good news is that it will take Apple some time to strangle the industry. Companies like Amazon or Pearson or some startup could build a good tool, or funding agencies like Gates and Hewlett could fund an open effort to build one. Or perhaps one of the startups trying to multi-publish book authoring “in the cloud” (whoever came up with this idea never spoke to a single author) will change direction and build a desktop tool. One way or another the market can and will build a “Zune Author” to compete with iBooks Author. The “Zune Author” will likely be cheaper, more usable, more open, and better in an infinite number of ways. But since so many people in this industry think about the next six months rather than the next five years, “Zune Author” will arrive too late, and technical superiority will not matter.

So the question is who in this educational space will take this problem on head-on. I predict that no one will. Venture capitalists and philanthropic funders will continue to fund last year’s good ideas – the ones we see over and over in keynote speeches from education futurists – in pursuit of the quick buck, while Apple sits in its spaceship-like headquarters and quietly builds on its lead in the publishing market. Then all the “futuristic thinking geniuses” will wake up one day gasping for air, saying “DAMN YOU APPLE!” with their last breath and wondering where things went wrong.

And on that day in 2017 when the publishing industry has been killed by Apple, please do a Google search and find this blog post where I told you what to do and you did not listen. Of course it won’t change anything. And whatever I am telling you in 2017 – you won’t listen to that either. It is very frustrating to be right and have no one listen.
——-

If you liked this rant, you can read 200+ pages of my ranting about what worked and what did not work in the Sakai Project between 2003 and 2007 on your iPad –
Sakai: Free as in Freedom (Alpha) is now available in iTunes and the Apple iBookstore.

Comparing Amazon S3 Pricing to USPS Pricing

I am doing a good bit of video editing these days and I want to send the original HD video to my collaborators around the country. The data ranges from 10GB to 40GB depending on how long the interview ran. I basically need to get it to one other person – one upload, one download, and delete the data.

DVD is completely useless here as it would take 3-10 DVDs and there is no easy-to-use spanning software. I have an ISP that gives me 630GB of storage and unlimited bandwidth – but any one connection only sees 350KB/sec, which leads to a 15GB transfer taking over 40,000 seconds – most of a day.

Amazon’s S3 charges for outgoing bandwidth and storage but does not limit outgoing bandwidth. That drops the 15GB transfer time to less than an hour on a wired connection. Doable given that I only need to do this 3-4 times per month. The S3 charge for a week of storage and a single transfer of a 30GB file is about $4.50.

If I purchase a 32GB USB stick and put it in a photo mailer, it can be sent first class for $2.00 each way. A photo mailer that can be reused in both directions costs about $0.50. So sending 30GB via mail, ignoring the cost of the 32GB memory stick, is also about $4.50 – as long as I get my memory stick back.

Interestingly, I can pay $20.00 per month to get an extra 30GB of space on the University of Michigan AFS servers. If I moved 30GB of video four times per month, that works out to about $5.00 per transfer.
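
To make the comparison concrete, here is the back-of-the-envelope arithmetic for a single 30GB hand-off. The S3 rates are my assumptions (roughly $0.12/GB for outbound transfer and $0.125/GB-month for storage) – check the current price list before trusting them; the mail and AFS numbers come straight from above.

```python
# Back-of-the-envelope comparison for moving 30GB to one person, once.
GB = 30

# ISP option: any one connection is capped at about 350 KB/sec
isp_seconds = GB * 1000000 / 350.0
print("ISP: %.0f seconds (%.1f hours)" % (isp_seconds, isp_seconds / 3600))

# Amazon S3: a week of storage plus one outbound transfer (rates are assumptions)
s3_cost = GB * 0.125 * (7 / 30.0) + GB * 0.12
print("S3, one week + one download: about $%.2f" % s3_cost)

# USPS: 32GB stick in a reusable photo mailer, first class each way
print("Mail, round trip: about $%.2f" % (2.00 * 2 + 0.50))

# University AFS space: $20/month for 30GB, used four times a month
print("AFS, per transfer: about $%.2f" % (20.00 / 4))
```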

I think that I am going to give up and compress the video to H.264. If I could get it under 3GB, then a whole host of options open up – one-way sending of a DVD, or upload/download via any number of free resources I already have.

IEEE Computer: The Second Order Effects of Steve Jobs (January 2012)

This is my first installment of IEEE Computer magazine’s Computing Conversations column, appearing in the January 2012 issue. The idea of the column is to focus on the people who make up the field of computing and to get to know them. The January issue is the “Outlook” issue, dedicated to looking a bit further ahead and thinking a bit “out of the box”.

For this column, I figured that IEEE Computer (the flagship publication of the IEEE Computer Society) needed to acknowledge the passing of Steve Jobs. But since IEEE Computer is a magazine with several months of lead time, it would look a little silly for us to publish, two months afterwards, an article similar to the plethora of articles that appeared back in October. So I wanted to take a more reflective view of what Steve Jobs accomplished in terms of how those of us in computing used his technology to accelerate our own thinking and innovation. I wanted to highlight the second-order / knock-on effects of the products that Jobs produced. The column looks at several examples of where Steve Jobs simply pushed us forward and made us think differently.

Audio version of the column.

Here is the associated video:


I also have a High Quality Archive on Vimeo (with download link).

Other Videos in The January Issue

I also produced the following video for the January issue of IEEE Computer, associated with an article in the issue.

Bjarne Stroustrup: The Inventor of C++


I also have a High Quality Archive on Vimeo (with download link). I think this would be a great video to use in a C++ class.

If you want to comment on the videos or the article, you can comment here or on the YouTube videos.

Column: IEEE Computer Computing Conversations

Starting with the January 2012 issue of IEEE Computer Magazine, I am the writer/editor of a monthly column titled “Computing Conversations”. This new column is part of an overall strategy to move IEEE Computer from a purely print magazine to a high-quality digital magazine with extensive multimedia content.

http://computer.org/computingconversations

The purpose of the column is for all of us to get to know the people who have created and defined the computing field. Much of modern-day computing can be traced to innovations starting in the 1940s. Never in human history has a major field emerged and matured in a single generation. In order to better understand where computing might be going, it is important to know our past and how we arrived at our current state. This column will be dedicated to meeting and talking to people in the field of computing, ranging from the early pioneers to the current visionaries. Multimedia and video will be an essential part of these conversations, so that we can use them in our teaching to help explain the field to new technologists as they enter it.

Using video is important as it allows us to give a face and voice to people in our field and helps form an oral history of the profession. I also hope to produce materials that can be used in classrooms to help students make a connection with the people who have created our field.

Each month, I will write a blog post about the column that will include a brief summary, a link to the video materials on the IEEE Computer Society YouTube channel, a link to my own high-quality archive of the videos on Vimeo, and an audio podcast of me reading the actual column as well as some back story on how the video was produced.

I have been purchasing new video equipment, shooting video, and upgrading all of my video skills to High Definition since September. I have been pestering my friends to review secret drafts of the videos as I worked through technical issues, so it is nice to finally go public in the January issue and share them with everyone.

The first column is titled, “The Second Order Effects of Steve Jobs” and the second column is an interview with Brendan Eich talking about the creation of the JavaScript language.

I need to thank the outstanding IEEE Computer editorial staff (Judi Prow, Jennifer Stout, and Brian Brannon) for their superb attention to detail and suggestions for improvement in the videos and the columns. And I also want to thank the IEEE Computer Editor in Chief, Ron Vetter from UNC Wilmington for involving me in the editorial board and supporting this grand experiment.

I am looking forward to writing the columns and producing the videos and would love to hear any comments you might have. If you want to follow along as I travel and gather video, you can follow me on Twitter @drchuck. You can see who I am interviewing and where I am traveling and get a sneak peek of upcoming material.

IMS Common Cartridge (CC) Basic LTI Links and Custom Parameters

There is a great discussion going on between Brian Whitmer (Canvas Instructure) and David Lippman (iMathAs) in a Canvas forum about custom parameters and resource links.

http://groups.google.com/group/canvas-lms-users/browse_thread/thread/bd2932b9a6a8bb6e

My Response

Brian,

While the spec is not explicit in this area, the simple fact that each LTI placement in a CC 1.1 cartridge has its own custom parameters, and that you can have many links in a cartridge, naturally implies that each link can have its own distinct custom parameters. In the LTI 1.0 specification, the notion of how custom parameters work was left vague when it comes to authoring links (i.e., not as part of an import).

When folks read the LTI 1.0 spec, some implementations made it so that (a) the single shared tool configuration was the only place custom parameters could be set, and other implementations made it so that (b) custom parameters could only be set on the individual resource link. The more common approach was (a), which sounds like the approach you took in Canvas.

This works well until you think of the export/import cartridge use case, where tools like an assessment engine want to store a setting like “which assessment” in a custom field. Pearson, McGraw-Hill, and lots of other folks want many resource links in a cartridge, with those links pointing to different resources *without* adding a parameter to the URL (which is not recommended, and would mess things up much more than adding a custom parameter). Of course, some of my presentations (i.e., using resource_link_id configuration) talk about the way a tool can work around the lack of per-resource custom parameter support in an LMS. This work-around is sufficient for initial link authoring in an LMS where the course is being built – but it fails across export/import because resource_link_ids are not carried in the CC.

So this meant that LMSs could not be used to author proper cartridges unless they allowed per-link custom parameters, so these could persist across an export/import path. Yikes! We figured that many who wanted to make cartridges would simply use an LMS to do it while waiting for a specific authoring tool or process.
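
To make the resource_link_id work-around concrete, here is a sketch of what a tool provider ends up doing when the LMS cannot send per-link custom parameters. The storage layout and field names are mine, purely for illustration – they are not from any spec.

```python
# Hypothetical tool-provider storage keyed by the launch's resource_link_id.
# Layout and names are illustrative only -- not from the LTI or CC specs.
link_settings = {}   # (oauth_consumer_key, resource_link_id) -> settings dict

def handle_launch(post):
    key = (post["oauth_consumer_key"], post["resource_link_id"])
    if key not in link_settings:
        # First launch of this placement: ask the instructor which
        # assessment this link should point at, then remember the answer.
        link_settings[key] = {"assessment_id": None}
    return link_settings[key]

# This holds up while the course is authored in one LMS, but resource_link_id
# is not carried in a Common Cartridge, so after export/import every placement
# arrives with a brand-new id and the stored settings are orphaned -- which is
# exactly why per-link custom parameters need to survive export/import.
```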

So in LTI 1.1, we made it more explicit in section B.7 with the following new text:

B.7.1 Instructor Creates New Tools
In the case that the TC decides to allow the instructor to place tools without administrator action by getting a URL, key, and secret from a TP and plugging them into a course structure, it is a good practice to allow the instructor to enter custom parameters without requiring administrator assistance. Some TPs will need custom parameters to function properly. Also if the instructor is using a TC to produce an IMS Common Cartridge with LTI links in the cartridge, often setting custom parameters for a tool placement is an essential part of authoring a cartridge.

B.7.2 Admin Creates New Tools, Instructor Only Places Tools
Another common case is to only allow the administrator to create new tools (i.e., key/secret/url) and then let the instructor place those pre-configured tools in their courses. In this use case, instructors never handle url/key/secret values. Even in this use case it is important to allow the instructor to be able to set or augment custom parameters for each placement. These parameters may be necessary for the TP to function and/or may be necessary if the instructor is building a course in the TC to be exported into an IMS Common Cartridge. It is not necessary to always give the instructor the option to configure custom parameters, but it should be possible for the administrator to make a choice to reveal a user interface to set custom parameters.

You can also read B.7.2 as applying to a situation where an instructor makes their own tool configuration to share across multiple resource links. The best practice is to have both the shared config and the resource link contribute to the custom parameters.

You merge the custom parameters at launch if they exist in both places – I think my code for this treats the shared/admin values as having higher precedence. You would naturally take the same merge approach when exporting to a CC 1.1: since CC 1.1 has no concept of a shared configuration and only knows about the resource link, you have to pull in the inherited parameters on export or data will be lost.
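
Here is roughly what that merge looks like – a minimal sketch, assuming the two sets of custom parameters have already been parsed into dictionaries. The precedence choice mirrors what I described above; other implementations could reasonably flip it.

```python
# Minimal sketch: merge shared-configuration and per-link custom parameters.
def merged_custom_params(shared_config, resource_link):
    merged = dict(resource_link)    # start with the per-link values
    merged.update(shared_config)    # shared/admin values win on conflict
    return merged

# At launch time, LTI 1.x sends each one as a custom_<name> POST parameter.
launch_params = {
    "custom_" + name: value
    for name, value in merged_custom_params(
        {"tool_mode": "quiz"},                  # from the shared/admin config
        {"assessment_id": "unit-3-review"},     # from this resource link
    ).items()
}

# The same merge applies on export to CC 1.1, since the cartridge only knows
# about the resource link and would otherwise lose the inherited values.
```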

Now let’s talk about LTI 2.0, which does not exist yet, and how the draft versions treat this so far. LTI 2.0’s view of a cartridge explicitly models two separate items. The first is a tool configuration with url, vendor, etc. The tool configuration registers a MIME-type-style resource handler such as “pearson-mymathlab-quiz”, indicating that once this tool is installed it handles resources of type “pearson-mymathlab-quiz”. The second is a resource link – the actual link in the course structure – that includes a title and custom parameters and needs a resource handler of type “pearson-mymathlab-quiz”.

If you look at the LTI 1.0 / CC 1.1 data model, for simplification, we condensed these into a single structure. Simplification makes things easier sometimes and harder other times.

LTI 2.0 will add two new resource types to a future CC version, keeping the basiclti all-in-one resource type. But my guess is that once the LTI 2.0 CC support makes it into the field, folks will quickly switch as it is *much* prettier. One of the major advantages of the LTI 2.0 approach (at the cost of more UI and workflow complexity) is that since the resource handler idea is a bit of an abstraction between resource links and tool configurations, it allows LMS builders and LMS admins to re-map those resource handlers to solve use cases like living behind a firewall or having a local copy of Pearson MyMathLab in South Africa.
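
To make the two-structure idea concrete, here is how I picture it – plain illustrative data, not the actual LTI 2.0 schema or element names, which are still in draft.

```python
# Illustrative only -- not the actual LTI 2.0 schema or element names.
tool_configuration = {
    "vendor": "Pearson",
    "launch_url": "https://mymathlab.example.com/lti/launch",  # made-up URL
    "handles": ["pearson-mymathlab-quiz"],  # resource types this tool handles
}

resource_link = {
    "title": "Chapter 4 Quiz",
    "resource_handler": "pearson-mymathlab-quiz",  # an abstract type, not a URL
    "custom": {"assessment_id": "ch4-quiz"},
}

# Because the link names a handler type rather than a concrete tool, an LMS
# admin can re-map "pearson-mymathlab-quiz" to a different installation
# (behind a firewall, or a local copy in South Africa) without touching any
# of the links in the course structure.
```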

The 2.0 specs are pretty mature, but adoption always takes a while, so we need to focus on the current CC and LTI 1.1 and get them right so they work well while we finish up LTI 2.0 and its associated CC release and get them out into the marketplace.

Hope this helps.

The Relationship Between Developers and Operations at Flickr

Ross Harms, formerly of Flickr and currently at Etsy, published a memo he sent around Yahoo! in 2009 explaining the relationship between developers and operations at Flickr:

http://www.kitchensoap.com/2012/01/05/convincing-management-that-cooperation-and-collaboration-was-worth-it

Here is a quote from the post:

I did this in the hope that other Yahoo properties could learn from that team’s process and culture, which we worked really hard at building and keeping. The idea that Development and Operations could: (1) Share responsibility/accountability for availability and performance, (2) Have an equal seat at the table when it came to application and infrastructure design, architecture, and emergency response, (3) Build and maintain a deferential culture to each other when it came to domain expertise, and (4) Cultivate equanimity when it came to emergency response and post-mortem meetings.

My Comment To the Post

Very nice post, and all quite obvious to folks with enough experience across multiple real-world situations. When organizations don’t structure their ops/dev relationship as you describe, it is often in an obsessive attempt to “eliminate risk”.

The basic (incorrect) premise is that everything the developers do increases risk and that ops has the job of reducing that risk to zero. Developers are the “problem” and Ops is the “solution”. Or as you say above, Developers are the “Arsonists” and Ops are the “Firefighters”. Casting the relationship this way leads to ops wanting to limit change, while the devs naturally want the product to move forward so the organization can better serve its stakeholders.

Uninformed ops feel the need to do large tests with complete instances of the product and frozen “new versions”; as the product gets more complex, these test phases take longer and longer, and so more and more features end up in each release.

Again, ops is trying to eliminate risk – but in reality, because each release is larger and larger, there is a super-linear likelihood that something will go wrong. When there are a lot of features in a package upgrade, folks cannot focus on the changes because there are too many. They hope it is all OK, or sometimes the whole package is declared “bad” without anyone looking for the tiny mistake, and everyone goes back to the drawing board – which further delays the release of functionality and ensures that the next release attempt will be even larger and even more likely to fail. It is a vicious circle that your approach nicely avoids.
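
To put a rough number on that: a quick sketch assuming, purely for illustration, that each change independently has a 2% chance of causing a problem and that diagnosing a bad release means sifting through everything it contained.

```python
# Illustrative only: each change has an assumed independent 2% failure chance.
p = 0.02

for n in (1, 5, 20, 50):
    p_fail = 1 - (1 - p) ** n       # chance the release has a problem
    suspects = p_fail * n           # expected number of changes to sift through
    print("%2d changes: %5.1f%% chance of trouble, %5.2f expected suspects"
          % (n, 100 * p_fail, suspects))

# 50 changes gives roughly a 64% chance of trouble and ~32 suspects to examine,
# so the expected pain grows much faster than the release size.
```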

The gradual approach you describe allows everyone to focus intently on one or a few changes at a time and do it often enough that you avoid the risk of a large change consisting of lots of details.

I like to think of the way you describe as “amortizing risk” – where there is always a small amount of risk that everyone understands but you avoid the buildup of accumulated risk inherent in large package upgrades. Again, thanks for the nice description.