Monthly Archives: July 2010

Book Review/Summary: DIY U by Anya Kamenetz

Anya Kamenetz was the keynote speaker at the Sakai conference in Denver in June and at the Blackboard Developer Conference in Orlando last week. I purchased her book (DIY U) at the Sakai conference and had her autograph it, planning to read it later when I had some time. After some Twitter interaction with Anya following the Blackboard keynote last week, I decided it was time to read the book and write a review.

Summary

DIY U is a great book. I have been working for so long in the engine rooms of higher education trying to improve technology for teaching and learning that I have not really been aware of the important changes in higher education in the last fifty years, and in particular in the last decade. When you are living it and living through it as a teacher and student, it is hard to see the high-level patterns that are going to change us going forward. Anya has done a masterful job of researching, explaining, and summarizing the history of transformation in higher education, the changing economic conditions of higher education, and some conventional and not-so-conventional possible evolutionary tracks for its future.

Her writing style is efficient. Unlike many similar books, there is very little repetition just to pad pages. She tells us what we need to know in 135 pages, making good use of her time and ours. Her writing style encourages critical thinking throughout – she will present several different points of view within the same paragraph, making sure to keep the reader’s focus on drawing their own conclusions from the information she presents.

The book chapters include (1) History, (2) Sociology, (3) Economics, (4) Computer Science, (5) Independent Study, and (6) Commencement. I will look at each of the chapters in turn.

History and Economics

Anya gives a nice summary of how higher education has evolved from early times to the present. I found her analysis of the post-war period particularly interesting, as this is the higher education that I experienced as a student and became part of as a staff member and later faculty. From my perspective living through it, there seemed to be very little change from the 1970’s to the present, but in reality there were a number of significant shifts in the federal and state funding and policy mix over the past 30 years.

Probably the largest factor that would lead one to believe that change might be imminent is the shift from state funding of public universities through general funds to federal subsidies for tuition through Pell grants and student loans. The continuously increasing federal subsidy for tuition has allowed states to drop their funding (and their influence) in public universities and significant federal funding has masked the pain of tuition increases as long as the federal government pours more money into the subsidies.

The problem as Anya points out is that these subsidies are justified as giving equal access to folks regardless of their economic and social standing. But it is also clear from the research that these funds (and matching financial aid from the universities) are far more likely to benefit the middle and upper class students than the poor and otherwise disenfranchised students. As this becomes more and more obvious, it may erode the political will behind these subsidies.

This is particularly scary for public universities, which have had carte blanche for tuition increases because of the historical gap between public and private university tuition levels. At some point, public universities will no longer be able to roll out a 10% tuition increase that parents and students swallow because the alternative is much higher private tuition and federal subsidies cushion the blow.

The scariest moment may be triggered when public tuition gets within 20-30% of private tuition and the federal government decides to alter how subsidies are given, at which point public universities could become unaffordable for middle-class students, perhaps in a relatively short period of time. Public universities (particularly smaller ones) may not have the endowment necessary to absorb the shock of such a change if it happens quickly.

Sociology

Anya gives us ample examples of why it is pretty challenging for higher education administrators to “do the right thing”. Most of the motivation arrows point in the wrong direction. As an example:

“… 25 percent of the US News and World Report Rankings come from peer reputation … [and] 75 percent of the other measures come from either direct or proxy measures of spending per student and exclusivity.”

This means that if a university were to find a way to improve the education of a student while reducing costs or admitting less-elite students, it might result in a drop in their all-important US News and World Report rankings. Another good example is hiring a faculty member with a lot of publications and awards and paying them to be on the masthead as a “perk”, without ever putting them in a classroom. Anya describes situations where a school did some market analysis and decided the only way to improve their national image was to increase their tuition so folks would see them as somehow more exclusive. Yikes.

The motivation of traditional public and private universities to reduce enrollment plays directly into the hands of the for-profit universities that have found scalable approaches and are happy to increase enrollments and increase profits.

Another strong theme in the sociology chapter focuses on who gets admitted, who gets financial aid and who graduates. While education is seen by society as an opportunity for all and subsidizing education is generally a widely supported policy, there are some sticky bits when you look closely at the data.

“To put it bluntly, clever and/or middle class children get more schooling than stupid and/or working-class children, and later earn more simply because they have had all the advantages in life, of which education is only one and not even the most important one.” — Christopher Jencks

The overall takeaway from this theme is that nearly all of the policy efforts to level the playing field are better exploited by those who have less need.

Computer Science

In this chapter, Anya describes a series of case studies and reflects on work being done by innovators inside the higher education system. I like this chapter because, in a book about edupunk and DIY-U, it is important to acknowledge the internal efforts that are beginning to show a lot of promise and are moving from emergent research toward the mainstream.

Since this is an area that I am working in, I think it is important to exercise a little caution as to the breadth of impact each of the mentioned projects really has in terms of real transformation. It is quite natural for a researcher (myself included) talking about their project to overstate the breadth of application of their work. Of course the folks in these case studies feel that their work is transformative – but we do need to be a little circumspect and measure the transformative impact from outside the projects and over time.

Another interesting topic in the chapter that gave me a bit of pause is that the thin thread of funding for much of this advanced experimentation comes almost entirely from the William and Flora Hewlett Foundation and the Andrew W. Mellon Foundation. Anya points out that most of the funding to look boldly at new ways of thinking about education has come from one of these two foundations. What if the MIT OpenCourseWare effort had never been funded? Where would we be now? The exploration of these possible new approaches to education would have been set back many years if not for the investment of these foundations and their program officers such as Cathy Casserly, Ira Fuchs, Don Waters, and others.

Independent Study

In this chapter Anya talks about the Edupunk and DIY-U movements. Again it is a series of case studies that give a nice view of the different activities in this space.

My own personal feeling is that these are all excellent experiments with very little chance of scaling beyond the trivial but each gives us some insight into what is possible.

In a sense, I am inspired as I read this section and try to imagine the kind of technology that will support these new forms of education. These efforts are experimenting with technologies, content structures, interchange formats, cohort forming, portfolio building, assessment, credentialing, etc. As a software person, it feels like such a green field space to move into – but at the same time it is really foggy as to what will work. It is kind of like the way we were all building our own learning management systems in the mid-1990’s, and then a pattern emerged that became what we now call the Learning Management System (or LMS). What will be the new technology pattern to support this new teaching pattern? Like a vivid dream that you try to remember just after you wake up, I can almost but not quite visualize what this software could and should be.

Commencement

In this chapter, Anya summarizes and reflects on the entire book and does a great job putting it all in perspective. My favorite reflection is from page 132:

The Reformation didn’t destroy the Catholic Church, and the DIY educational revolution won’t eradicate verdant hillside colonial colleges, nor strip-mall trade schools. DIY U examples will multiply, though. Most likely in bits and pieces, fits and starts, traditional universities and colleges will be influenced by them and be more open and democratic, to better serve their communities and students. Along the way, we will encounter rough spots, growing pains, unintended and unforeseen consequences – but the alternative is to be satisfied with mediocrity, and insufficient supplies of it at that.

Conclusion

So that brings us to the end of our “roller-coaster” ride through the past, present and possible future of higher education. Like all good roller coaster rides, it starts with a big hill to climb and a terrifying drop that makes you grab at your stomach and gets your heart racing. Then there are twists and turns and quick changes in direction, and at times we even find ourselves upside down and wondering if our iPhones will fall out of our pockets.

But at the end, we arrive back at the station safe and sound and no worse for the wear with our hearts beating faster and feeling more alive and most of all, wanting to get back in line and do it again as quickly as possible.

For me, personally, reading this book makes me think about the people leading higher education administration in a whole new light. I realize that their jobs are not quite so boring as I imagined them to be, and that they are quite busy solving problems in a rapidly changing policy and funding environment.

New forms and patterns are emerging and will continue to emerge and those schools that get the new forms right out of the gate will have a leg up for decades.

Note: Favorite Passages

I just want to put down some of my favorite passages from Anya’s book. My copy is now dog-eared, highlighted and has many page corners turned over so I can skip to my favorite passages. I list my favorite paragraphs by page number and paragraph number. I count the first partial paragraph on a page as “paragraph 1”. Sometimes I list a range of paragraphs on a page or across multiple pages.

27-2, 33-3-5, 43-2, 47-1-3, 57-4, 61-3, 72-4, 73-2-5, 75-5, 86-2, 100-3 – 103-2, 103-5, 104-4, 105-1-2, 125-3, 127-5, 129-134

Dr. Chuck .vs. Dr. Mark – Talking About the First Programming Course

Here is my latest entry into my discussion with Mark Guzdial of Georgia Tech about the philosophy and approach to the first programming course both in K12 and in Higher Education.

The best place to view my comments in context is in Mark’s Blog:

http://computinged.wordpress.com/2010/07/13/what-are-we-chopped-liver-cs-left-out-of-national-academy-stem-standards/#comment-3145

Here are Mark’s Comments

Charles, by what definition do you claim “Computer Science is focused on preparing CS professionals who will create technology”? Alan Perlis (one of the guys who coined the term “Computer Science”) argued in 1961 that all undergraduates should take CS, regardless of their major. Jeanette Wing argued in her Computational Thinking article that CS is a good degree to prepare a student for any career. Alan Kay’s “Triple Whammy” definition of CS doesn’t say anything about producing software. Our Threads CS degree, which has “software engineer” as only one of several possible outcomes, is being approved by ABET as a BS in CS degree program.

I’ve seen this definition (implicitly) on the SIGCSE members list, but have not figured out where it’s coming from. Is this a University of Michigan definition?

Here are my comments

There is not a “University of Michigan definition” – it is more the philosophy of the design of our undergraduate Informatics program. I am trying to give you some possible rationale for why your desire to introduce the notion of a computational model as a core part of a K12 curriculum seems to fall on deaf ears. It is pretty common for a focused domain to be so enamored with its core concepts that those in the domain feel that 100% of the educated people in our country must be exposed to those core concepts.

Both you and Alan have done a good job of reducing CS to a few easily described core concepts (storage, representation, processing). While you and (perhaps) Alan think that the elegant expression *makes* the case for inclusion of CS in the broadest of K12 curricula, I would claim that your descriptions make *exactly the opposite case*. Your descriptions make the case that the core CS concepts are not suitable for broad exposure in K12 nor as a single course required for all college students.

You seem to be stuck in the notion that if you had only fifteen weeks of material to present to a ninth grader or freshman, the best use of that time is to lay groundwork for understanding highly abstracted CS notions. You must realize that when you are designing such a curriculum, you must impart real knowledge that will truly be valuable to 100% of the educated population, assuming no further courses.

So as an example, the Water Cycle is really cool stuff – it serves as a great example to give students a window into science – and also gives them a great skill that helps them decide each day for the rest of their life whether to take an umbrella with them as they go to work or school.

Spreadsheets can be used to graph cool plant growth data and again offer a window into science, and being able to enter data and formulas into spreadsheets will also be useful in lots of careers.

Spreadsheets and the Water Cycle are clearly of great use to all of the educated populace, and as such are firmly ensconced in K12 curricula; when there is a required technology course in higher education, it certainly includes spreadsheets.

Where you, Alan and I certainly agree is that in this day and age, K12 curricula and broadly required college courses need to explore a much richer and deeper understanding of technology and the mechanisms that underly technology. We all agree that this is rich and lovely material and very stimulating intellectually and also highly useful throughout life.

Where we disagree is the purpose of that first fifteen weeks – either in ninth grade or as that required-by-all college course.

Your position is that such a course is to be designed so that it is a wonderful prelude to Computer Science and inspires the student to pick CS as their chosen field, choose to go to college, choose CS as their major and spend 45 credits of their undergraduate degree in the required courses in one of the “threads”.

My position is “assume they never ever ever” take another technology course and I only have them for fifteen weeks and that they are paying real money for my course and I want them to come back years later and tell me that my course was one of the most useful courses they ever took in their whole life. (Hyperbole added to make the point).

Interestingly there is a lot of overlap between courses designed using the two different starting philosophies – both give some sense of data and computation and perhaps even networking – but when I build courses intended for a broad audience, I am trying to teach the lessons in computation as a side effect of giving them a useful and relevant life skill (i.e. like a spreadsheet). The courses designed from your perspective delay the “good stuff” and the “real-world application” because that historically has always come later in a CS curriculum (CS0/CS1 *are* the first in a series of Computer Science courses that build on one another).

Mark – you are on all the right committees and have the grants and credibility to begin a shift from “the first in a sequence of many CS courses” to a “literacy course that imparts useful life skills in computation”. I am not on those committees and not involved in those projects so my best chance for effecting the kind of change I would like to see happen is to convince *you* and then let you do the hard work :)

The best payoff for an effective and well-designed technology-literacy course is increased interest in Computer Science. At the end of such a course, while all the students have learned valuable life skills, some of the students may have gained a bit of curiosity about how it all really works. Those are the next generation of Computer Scientists.

So the irony, if my hypothesis is correct, is that we will increase overall interest in Computer Science if we teach less explicit CS and more useful technology skills in that all-important first broadly taken course at the K12 and college level. And such a course/curriculum approach would be far more palatable as part of a STEM approach for the next 10 years.

Blackboard Announces Plans to Deliver IMS Common Cartridge and Learning Tools Interoperability 1Q2011

During John Fontaine’s keynote at the Blackboard Developer Conference (BbDevCon), Ray Henderson announced that Blackboard will release support for IMS Common Cartridge and IMS Learning Tools Interoperability by 1Q2011 in their core product line.

John’s Blog: http://www.johnfontaine.com/
Ray’s Blog: http://www.rayhblog.com/blog/

I am pleased and excited because this is an important milestone in the progression of the market adoption of these standards that I am convinced will positively impact teaching and learning in ways we cannot begin to imagine. But in a sense I was not really surprised. Strong support for standards and interoperability is very much in Blackboard’s best interest and for me it always felt like it was only a question of when it would fit into the Blackboard development cycle.

If you think about it for a moment, Blackboard has a pretty diverse customer base due to the Angel and WebCT acquisitions, and they would very much like to get to the point where they have a single overall learning product with the best features of Blackboard, WebCT, and Angel. That unifying product will naturally be a future version of Blackboard, and one of the ways to get people to migrate to the latest version is to give them something in the latest version that they do not have in their current version.

I think that support for IMS Common Cartridge and LTI will be just the right kind of draw (among others) to bring customers forward and together in a positive way.

Beyond Blackboard’s customers, I hope that this is the beginning of Blackboard taking increasing leadership for the entire marketplace in terms of standards and interoperability. Even though Blackboard participated in both the working groups for Common Cartridge and Learning Tools Interoperability (Blackboard is co-chair of LTI), they were not the first to market for either standard. Now Ray has clearly made it a high priority to “catch up” and yesterday’s announcement was an indication that they will catch up pretty quickly.

I am imagining a future where Blackboard becomes increasingly open in what it is thinking about for next-generation approaches to teaching and learning.

While standards like IMS CC, IMS LTI, and IMS LIS are *very important* – they really are only the beginning of the kinds of standards we need to enable a true revolution in teaching and learning.

The typical model is a dance where (a) vendors create multiple similar proprietary solutions, (b) we realize that this new space is important, so we start a standards working group to produce some common subset of the solution that is incompatible with any of the vendor solutions, and then (c) we try to “cat-herd” the vendors into adding support for the new standard that is not all that different from the feature they originally built.

This whole process can easily take a long time! Actually, if you look at IMS Tools Interoperability, where vendor solutions such as Building Blocks were coming out in the late 1990’s and the equivalent standard is only now making it into the marketplace, it has taken *over a decade*.

As a teacher and a student wanting to learn and teach in new and innovative ways, a decade is far too long to wait for a working, interoperable feature.

Going forward, we need to engage together and come up with one interoperable solution from the *very beginning*. But this means we need to approach new ideas in different ways – the members of the market need to stop looking for win-lose scenarios and stop thinking that “proprietary and closed” is the way to compete – and instead let the best products simply win, without building proprietary APIs, data formats, and integration patterns as the first step.

I am optimistic that this recent announcement is only the beginning of engagement of Blackboard in standards and in particular standards around innovative ways to use technology to teach and learn going forward. I am going to do my part to try to bring this new approach into the market – one where we work together earlier rather than later – one where we reduce the time-to-market for standards that enable innovation and increase the quality of those standards as well.

Like a sports team that is in a playoff, I will savor this important and necessary milestone for a day or so, and then it is back to work to figure out how to do this all better and faster. Thanks to Ray and the whole Blackboard team!

Adding Data Loading to Our Learning Feature in Shindig

Now that we know and love the way of the asynchronous batch request in Shindig, it is time for us to add our own data retrieval for the course-related information and provide that information to the Learning Gadget. If you look at the existing “Social Hello World” gadget, you see a pattern where the gadget simply takes the data that is returned as part of each request and uses it directly. In my gadget, I want to add a bit of an abstraction layer and give the user a set of accessor methods, so the pattern is a little different.

You might want to grab a copy of the completed code for reference as we go forward:

The first thing we need to do is build a getInfo method and add it to our osapi.learning service. Our code gets a little larger and we need some more imports.

java/social-api/src/main/java/org/apache/shindig/social/opensocial/service/LearningHandler.java

package org.apache.shindig.social.opensocial.service;

import java.util.Map;
import java.util.HashMap;
import java.util.concurrent.Future;
import org.apache.shindig.auth.SecurityToken;
import org.apache.shindig.protocol.DataCollection;
import org.apache.shindig.protocol.Service;
import org.apache.shindig.protocol.Operation;
import org.apache.shindig.common.util.ImmediateFuture;

@Service(name = "learning")
public class LearningHandler {

  @Operation(httpMethods = "GET")
  public Future<DataCollection> getInfo(SocialRequestItem request) {
    SecurityToken token = request.getToken();
    System.out.println("Owner="+token.getOwnerId()+" viewer="+token.getViewerId());

    // This data *should* come from an SPI ...
    Map<String, Map<String, String>> results = new HashMap<String, Map<String, String>>();
    Map<String, String> data = new HashMap<String, String>();
    data.put("context_label","SI124");
    data.put("context_title","Network Thinking");
    results.put("info", data);

    DataCollection dc = new DataCollection(results);
    return ImmediateFuture.newInstance(dc);
  }

  @Operation(httpMethods = "GET")
  public void setOutcome(SocialRequestItem request) {
    System.out.println("Param = "+request.getParameter("outcome","default"));
    // Do something clever here like call an SPI ...
  }
}

We need to construct a DataCollection, which is a map of maps. This gets turned into JSON or REST by the magic code that is calling us. We name the top map info and put two fields into the second-level map. There is a bit of clunkiness to all of this, but the JSON/REST symmetry is probably worth it.
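To make the map-of-maps shape concrete, here is a sketch (using the values from the handler above) of the JSON object the gadget ends up seeing for this response – the outer key is the name we gave the top-level map and the inner object carries our two fields:

```javascript
// Sketch of the JSON produced from the DataCollection above.
// The outer key is the name we gave the top-level map ("info");
// the inner object carries the two fields from the data map.
var learningData = {
  info: {
    context_label: "SI124",
    context_title: "Network Thinking"
  }
};

// A gadget can then pull fields out directly:
var label = learningData.info.context_label;  // "SI124"
```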

Again, since we are in a hurry, we will just call the service directly at the moment of gadget startup:

   gadgets.window.setTitle('Social Hello World');
   osapi.learning.getInfo().execute(function(result) {
        if (result.error) {
            alert('Error on retrieval');
        } else {
            alert('Name '+result.info.context_label);
        }
    }) ;
    var hellos = new Array('Hello World', 'Hallo Welt', 'Ciao a tutti',

You see the asynchronous pattern and we simply pull the info object apart and directly look up the context_label that was placed in there by the service.

But this is not batch-friendly. So let’s do this in a more batch-friendly manner by making the following changes after undoing the above changes.

     function render(data) {
       alert('Render Label ' + data.learningData.info.context_label);
       var viewer = data.viewer;
       allPeople = data.viewerFriends.list;
    ...
     function initData() {
       var fields = ['id','age','name','gender','profileUrl','thumbnailUrl'];
       var batch = osapi.newBatch();
       batch.add('viewer', osapi.people.getViewer({sortBy:'name',fields:fields}));
       batch.add('viewerFriends', osapi.people.getViewerFriends({sortBy:'name',fields:fields}));
       batch.add('viewerData', osapi.appdata.get({keys:['count']}));
       batch.add('viewerFriendData', osapi.appdata.get({groupId:'@friends',keys:['count']}));
       batch.add('learningData', osapi.learning.getInfo());
       batch.execute(render);
     }

Now we have piggybacked the retrieval of our learning information along with all of the other OpenSocial requests that our gadget needs to do. So we make one request, get one response, and have access to the learning data along with the Open Social data when we are making our initial render of the widget.

Now if you like, you can stop now as you have seen how to retrieve data from the server and do so in concert with the rest of a gadget’s batch requests.

But for me, I wanted a little more abstraction and I wanted my gadget to be provisioned so that a tool could use my learning feature over and over whenever it liked and anywhere in the gadget code. So I make a few changes to my feature to make this possible.

First I put in an instance variable, a setter to store the info from getInfo, and changes to my accessor methods getContextLabel and getContextName, as follows:

learning/learning_client.js

gadgets.learning = (function() {

    var info = null;

    // Create and return our public functions
    return {
        ...
        setInfo : function(learninginfo) {
            info = learninginfo.info;
        },
        
        getContextLabel : function() {
            if ( info ) {
               return info.context_label;
            } else {
               return null;
            }
        },
    
        getContextName : function() {
            if ( info ) {
               return info.context_title;
            } else {
               return null;
            }
        }
    };

})(); 

I also then make the following changes to the “Social Hello World” gadget:

     function render(data) {
       gadgets.learning.setInfo(data.learningData);
       alert('Gadget Label ' +  gadgets.learning.getContextLabel());
       var viewer = data.viewer;
       allPeople = data.viewerFriends.list;
    ...
     function initData() {
       var fields = ['id','age','name','gender','profileUrl','thumbnailUrl'];
       var batch = osapi.newBatch();
       batch.add('viewer', osapi.people.getViewer({sortBy:'name',fields:fields}));
       batch.add('viewerFriends', osapi.people.getViewerFriends({sortBy:'name',fields:fields}));
       batch.add('viewerData', osapi.appdata.get({keys:['count']}));
       batch.add('viewerFriendData', osapi.appdata.get({groupId:'@friends',keys:['count']}));
       batch.add('learningData', osapi.learning.getInfo());
       batch.execute(render);
     }

I still add the osapi.learning.getInfo to the batched request, but instead of using the returned info data directly, I call setInfo to pass it into my learning feature to provision it and then call the getContextLabel accessor method.

This has the advantage that now any accessor method for my learning feature can be called anywhere in the gadget including much later in the processing since the gadget is fully provisioned.
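To show the provision-then-access idea in miniature, here is a condensed sketch of the pattern (the feature code is boiled down from learning_client.js above; updateHeader is a hypothetical function standing in for “much later in the processing”):

```javascript
// Condensed sketch of the gadgets.learning feature from above:
// a private info variable, a setter, and an accessor.
var gadgets = { learning: (function() {
  var info = null;
  return {
    setInfo: function(learninginfo) { info = learninginfo.info; },
    getContextLabel: function() { return info ? info.context_label : null; }
  };
})() };

// Before provisioning, the accessor safely returns null.
var before = gadgets.learning.getContextLabel();

// Provision once at render time with the batch response entry ...
gadgets.learning.setInfo({ info: { context_label: "SI124" } });

// ... then read it much later, anywhere in the gadget.
// (updateHeader is a hypothetical example function.)
function updateHeader() {
  return "Course: " + gadgets.learning.getContextLabel();
}
```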

If you look at the full source code, you will see another provisioning pattern called loadInfo. I included it for completeness, but I think it would only be useful if a gadget was not going to retrieve any data except learning data from the server at startup. A normal gadget will likely need plenty of OpenSocial data from several services, so the batch pattern will be the right pattern.

So this brings us to the conclusion of this little tutorial series on how to add a feature and service to Shindig. I have tried to keep the sample code and length of the examples to the absolute minimum to give you a skeleton to hang your own code on.

I also have not explored the Service Provider Interface (SPI) pattern at all here. If I were to develop the LearningHandler into real code, I would immediately build a Learning SPI interface to make the Handler reusable across a number of different LMS systems.

So, if you have gotten this far, thanks for taking the time to read all of this. If you are a Shindig wizard and I missed something obvious – please drop me a note and tell me where I missed the boat. I am truly a Shindig beginner having only downloaded the code two weeks ago – so the patterns might have been lost on me.

I do think that it would be a good idea in Shindig to make these kinds of extensions possible without code hacking. Perhaps this outline shows a pattern where we can use Guice to find and register both client features as well as server-side elements. But that will be for another time. I need to prepare for the July 4 barbecue today.

— End of Shindig Post Series —

Batch / Asynchronous Loading of Data in Shindig

In this post we won’t actually write any new code – we will look at the “Social Hello World” gadget and get an understanding of the asynchronous data loading pattern.

The first thing to understand and accept is that all requests are asynchronous. You could do something evil with setTimeout() in JavaScript to fake synchronous requests – but if you did that, my guess is that you would be chided as not knowing the “way of the Gadget”.

The problem is that once you go asynchronous, you need to delay the real work of making markup and building UI until the response comes back much later. And once you accept the fact that UI change is effectively “event driven”, you really want to batch up all your requests, send them in one large multi-call request, wait once for all of it, and then, when it all comes back, put up the UI.
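To illustrate the batching idea itself, here is a toy sketch (not the real osapi implementation – newToyBatch is a made-up name) of a batch object that collects named requests and fires a single callback with all of the results at once:

```javascript
// Toy sketch of the batch idea (not the real osapi.newBatch API):
// collect named requests, run them all, deliver one combined result.
function newToyBatch() {
  var requests = {};
  return {
    add: function(key, fn) { requests[key] = fn; },
    execute: function(callback) {
      var data = {};
      for (var key in requests) { data[key] = requests[key](); }
      callback(data);  // one callback, all results at once
    }
  };
}

var batch = newToyBatch();
batch.add('viewer', function() { return { name: 'Chuck' }; });
batch.add('course', function() { return { label: 'SI124' }; });

var result;
batch.execute(function(data) { result = data; });  // one "round trip"
```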

Here is the documentation for osapi.BatchRequest – study it – it is your friend as a gadget writer.

Up to now, we have been doing our hacks to the “Social Hello World” gadget right at the moment of start up. For example in all of the screenshots you will notice that the gadget UI is not present behind the alert box when our alert boxes come up.

This is because the gadget has not started yet – we jumped in right after the title was set and sneaked in our alert boxes, which have the effect of (a) showing us whether our stuff is working and (b) pausing the code before the gadget has a chance to retrieve its data and generate its markup.

So let’s look through the “Social Hello World” gadget (preferably a clean version without our hacks).

vi ./target/work/webapp/samplecontainer/examples/SocialHelloWorld.xml

If we look down for the following code:

     function initData() {
       var fields = ['id','age','name','gender','profileUrl','thumbnailUrl'];
       var batch = osapi.newBatch();
       batch.add('viewer', osapi.people.getViewer({sortBy:'name',fields:fields}));
       batch.add('viewerFriends', osapi.people.getViewerFriends({sortBy:'name',fields:fields}));
       batch.add('viewerData', osapi.appdata.get({keys:['count']}));
       batch.add('viewerFriendData', osapi.appdata.get({groupId:'@friends',keys:['count']}));
       batch.execute(render);
     }

     gadgets.util.registerOnLoadHandler(initData);

This makes a nice batch call, adding a number of service requests that together cover all of the data needed to build the initial UI. When it calls batch.execute, it asks the server (in one request) to make all the service calls in the order specified, gather all the return data, and send it back to us as a single response; when that response is complete, the method render is called.
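To make the batching idea concrete, here is a minimal sketch in plain JavaScript with no Shindig dependencies. The newBatch, add, and execute functions here are simplified stand-ins of my own for the real osapi calls (which serialize everything into one HTTP round trip):

```javascript
// Simplified stand-in for osapi.newBatch() -- illustrates the pattern only.
// Each "request" here is just a function that produces a value; the real
// osapi batch sends all requests to the server in a single round trip.
function newBatch() {
  var requests = [];
  return {
    add: function (key, request) {
      requests.push({ key: key, request: request });
      return this; // allow chaining
    },
    execute: function (callback) {
      var data = {};
      requests.forEach(function (r) {
        data[r.key] = r.request();
      });
      callback(data); // one callback carrying every keyed result
    }
  };
}

// Usage mirrors the gadget's initData():
var batch = newBatch();
batch.add('viewer', function () { return { id: '1', name: 'Chuck' }; });
batch.add('viewerData', function () { return { count: 3 }; });
batch.execute(function (data) {
  console.log(data.viewer.name, data.viewerData.count);
});
```

The key point of the pattern is that the callback fires exactly once, with every result keyed by the name it was added under – which is exactly how render receives its data below.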

If we look at render, we see that it takes a response object and starts pulling it apart: it sets the local data needed by the gadget, then builds the UI markup, and finally puts it in an empty div so the UI appears to the user.

     function render(data) {
       var viewer = data.viewer;
       allPeople = data.viewerFriends.list;

       var viewerData = data.viewerData;
       viewerCount = getCount(viewerData[viewer.id]);
       …
       html += '<div class="person">';
       html += '<div class="bubble c' + count % numberOfStyles + '">' 
         + hellos[count % hellos.length];
       html += '<div class="name">' + allPeople[i].name.formatted 
         + ' (' + count + ') ' + allPeople[i].gender;
       html += '</div></div>';
         …
       document.getElementById('helloworlds').innerHTML = html;
       gadgets.window.adjustHeight();

It pulls out the data (never checking the data.error status), builds some HTML from it, puts the HTML in a div, adjusts the height of the div, and voila! there is a user interface.

The batch pattern makes it so we only have to wait for one request and then we do all our work when we receive the “event” that indicates that our data is back from the server and ready to process. The code in the sample gadget is a bit light on error handling but that should not be too hard to imagine.
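As a sketch of what that error handling might look like, here is a hypothetical helper of my own (not part of the sample gadget) that scans a batch response for per-key errors before rendering anything; the real osapi response marks failed entries with an error property:

```javascript
// Hypothetical helper: return the first error found in a batch response,
// or null if every keyed result came back clean.
function firstBatchError(data, keys) {
  for (var i = 0; i < keys.length; i++) {
    var entry = data[keys[i]];
    if (entry && entry.error) {
      return { key: keys[i], error: entry.error };
    }
  }
  return null;
}

// A render() guarded this way bails out before touching the DOM:
function guardedRender(data) {
  var bad = firstBatchError(data, ['viewer', 'viewerFriends', 'viewerData']);
  if (bad) {
    return 'Request "' + bad.key + '" failed: ' + bad.error.message;
  }
  return 'ok'; // safe to build markup here
}
```

Since the whole batch lands in one callback, a single guard like this at the top of render covers every request at once.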

So now that we understand the batch/asynchronous/render-on-event pattern, we can do some data retrieval of our own in the next post.

Next post in series

Sending Data to a Server-Side Service with Shindig

We will break the talking-to-the-server bit into two pieces. First we will send / set some data on the server, and then in the next post we will retrieve data from the server.

The thing that we need to understand is that server requests are asynchronous. As much as we RPC-loving server dudes want to make synchronous calls – resist it. Accept and embrace asynchronous calls – in Ajax it is the only way. And as we will see in the next post, batched Ajax is the only way. Since gadgets are tiny and will appear on lots of screens, batched asynchronous requests are essential for performance and a decent user experience.

I will use code snippets throughout the next few blog posts, so you might want the entire files right away. Understand that these are the complete files: they contain the complete solution that will be explained over the next few posts.

In this exercise, we add a new server-side service to Shindig to be accessed using the osapi feature. I may have jacked-in at the wrong place – but this will get you started.

java/social-api/src/main/java/org/apache/shindig/social/opensocial/service/LearningHandler.java

package org.apache.shindig.social.opensocial.service;

import org.apache.shindig.protocol.Service;
import org.apache.shindig.protocol.Operation;

@Service(name = "learning")
public class LearningHandler {

  @Operation(httpMethods = "GET")
  public void setOutcome(SocialRequestItem request) {
    System.out.println("Param = "+request.getParameter("outcome","default"));
    // Do something clever here, like call an SPI...
  }
}

Now of course, we would do something more clever than just printing out our parameter – but that detail is up to the container. This short bit of code is enough to see the round trip to the server.

Then modify this file:

java/social-api/src/main/java/org/apache/shindig/social/core/config/SocialApiGuiceModule.java


import org.apache.shindig.social.opensocial.service.PersonHandler;
import org.apache.shindig.social.opensocial.service.LearningHandler;

   protected Set<Class<?>> getHandlers() {
     return ImmutableSet.<Class<?>>of(ActivityHandler.class, AppDataHandler.class,
       PersonHandler.class, MessageHandler.class, LearningHandler.class);
   }

This makes sure our service and its methods are included in osapi as osapi.learning.setOutcome. And yes, it would be nice if there were a way of doing this without jacking in at a code level. Perhaps there is such a way that I missed, or perhaps it is simply ‘yet to be invented’.

Good comment from Michael Young: instead of modifying SocialApiGuiceModule you can extend it (i.e. LearningGuiceModule) and replace the module in web.xml.

Just because we are in a hurry, we will compile this and see if our server call works before we alter our learning feature. So compile and start Jetty:

mvn
mvn -Prun

And navigate to http://localhost:8080/samplecontainer/samplecontainer.html

You should see the “Social Hello World” gadget. Now let’s edit this file:

vi ./target/work/webapp/samplecontainer/examples/SocialHelloWorld.xml

And add these lines:

   gadgets.window.setTitle('Social Hello World');
   osapi.learning.setOutcome({'outcome' : '123456'}).execute(function (result) {
        if (result.error) {
            alert('Error, unable to send outcome to server.');
        }
    } ) ;
     var hellos = new Array('Hello World', 'Hallo Welt', 'Ciao a tutti',
...

Actually, you will note that to do this we do not need the learning feature, because we have fully provisioned the server-side learning service into the osapi helper. When the gadget starts up, osapi pulls down all of its services from the server and registers them. This is independent of the feature registration that Require accomplishes.

When you press refresh on the container (or do whatever you need to do to force a full reload) and watch the server log, you will see a cute little line scroll by:

Param = 123456

Very simple – but very cool.

Now let’s alter our learning feature to call the service on our behalf in the setOutcome method. We will give the user the option to provide a handler or let the learning feature do the handling.

We edit the setOutcome method in learning_client.js from the last post as follows:

        setOutcome : function(data, handler) {
            if ( handler === 'silent' ) handler = (function (result) { } );
            if ( handler === undefined ) handler = (function (result) {
                if (result.error) {
                    alert('Error, unable to send outcome to server.');
                }
            } ) ;
            osapi.learning.setOutcome({'outcome' : data}).execute(handler);
        },

It is pretty simple stuff: the user can give us a handler, or we provide a simple alert on error, or we provide a completely silent handler at the user’s request.
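The defaulting logic above is easy to get wrong with === checks, so here is the same rule pulled out as a small pure function – my own refactoring for illustration, not Shindig code – that can be exercised without a running gadget (the report parameter stands in for alert):

```javascript
// Resolve the callback setOutcome should use:
//  - 'silent'   -> a no-op handler
//  - undefined  -> a default handler that reports any error
//  - otherwise  -> the caller's own function, unchanged
function resolveHandler(handler, report) {
  if (handler === 'silent') {
    return function (result) {};
  }
  if (handler === undefined) {
    return function (result) {
      if (result.error) {
        report('Error, unable to send outcome to server.');
      }
    };
  }
  return handler;
}
```

Factoring the choice out this way keeps setOutcome itself down to a single osapi call with whatever handler comes back.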

We also need to indicate that we want access to the osapi service:

taming.js

var tamings___ = tamings___ || [];
tamings___.push(function(imports) {
  ___.grantRead(gadgets.learning, 'getContextLabel');
  ___.grantRead(gadgets.learning, 'getContextName');
  ___.grantRead(gadgets.learning, 'setOutcome');
  caja___.whitelistFuncs([
    [osapi.learning, 'setOutcome']
  ]);
});

This makes sure that we have access to our service call when running through Caja. (I think I am saying this properly).

Once we have done this, recompiled Shindig, started Jetty, and started the container, we make the following changes to the “Social Hello World” gadget. Let’s edit this file:

vi ./target/work/webapp/samplecontainer/examples/SocialHelloWorld.xml

And add two lines:

   <Require feature="osapi"></Require>
   <Require feature="learning"></Require>
   <Require feature="settitle"/>
…

   gadgets.window.setTitle('Social Hello World');
   gadgets.learning.setOutcome('0.97');
     var hellos = new Array('Hello World', 'Hallo Welt', 'Ciao a tutti',
...

We are just using our learning gadget method to send the outcome to the server. By omitting the second parameter, the learning feature will give us a little alert if it has trouble sending data to the server.

Again, we press refresh and in the log we see:

Param = 0.97

So that completes our look at a simple call to the server to send some data. In the next post, we will get a little deeper into how to retrieve data from the server. The bit that gets complex is the requirement that things be done asynchronously and if at all possible with multiple batched requests in a single request.

So the code will initially look a little obtuse – at its core it is simple – but the asynchronous pattern takes a little getting used to. And since I only figured it out in the last 24 hours, I might have missed a bit in the pattern as well. Of course comments and improvements are welcome.

Next post in the series

Adding a New Feature to Shindig for Learning

In this exercise, we add a new feature to Shindig. I may have jacked-in at the wrong place – but this will get you started.

First we move into the feature directory:

cd features/src/main/javascript/features

Create a new directory named learning and put three files into it.

learning_client.js

gadgets['learning'] = (function() {

    return {
        getContextLabel : function() {
            return 'SI124';
        },

        getContextName : function() {
            return 'Social Computing';
        },

        setOutcome : function(data) {
            alert('setOutcome belongs here');
        }
    };

})();

This creates our client code and defines three methods in the client. For now they are simple stubs to keep life simple.
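This is the standard JavaScript module pattern: an immediately-invoked function returns an object whose properties become the public API, while anything declared inside stays private. Stripped of the gadget context, the skeleton looks like this (the gadgets variable here is a local stand-in so the sketch runs anywhere):

```javascript
// Local stand-in for the gadgets namespace so the pattern runs on its own.
var gadgets = gadgets || {};

gadgets['learning'] = (function () {
  // Private state: invisible to callers, captured by the closures below.
  var label = 'SI124';

  return {
    getContextLabel: function () { return label; },
    getContextName: function () { return 'Social Computing'; }
  };
})();
```

Callers can only reach what the returned object exposes; the label variable itself is not a property of gadgets.learning.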

taming.js

var tamings___ = tamings___ || [];
tamings___.push(function(imports) {
  ___.grantRead(gadgets.learning, 'getContextLabel');
  ___.grantRead(gadgets.learning, 'getContextName');
  ___.grantRead(gadgets.learning, 'setOutcome');
});

I am a little foggy on this file – it basically works with Caja to make sure that you are explicit as to what you want to really expose to JavaScript callers.

feature.xml

<feature>
  <name>learning</name>
  <dependency>globals</dependency>
  <gadget>
    <script src="learning_client.js"/>
    <script src="taming.js"/>
  </gadget>
  <container>
    <script src="learning_client.js"/>
    <script src="taming.js"/>
  </container>
</feature>

This file names your feature and defines the source files that make it up.

Then edit the file features.txt and add a line:

features/xmlutil/feature.xml
features/com.google.gadgets.analytics/feature.xml
features/learning/feature.xml

There is a way to do this using a script, but for now, let’s just jack in directly.

At this point you need to rebuild Shindig, and you might get syntax errors during the build which you need to fix. The JavaScript for features is compiled / processed at mvn time and put into a more run-time-friendly format.

mvn

Once it compiles and installs, start Jetty again:

mvn -Prun

And navigate to http://localhost:8080/samplecontainer/samplecontainer.html

You should see the “Social Hello World” gadget. Now let’s edit this file:

vi ./target/work/webapp/samplecontainer/examples/SocialHelloWorld.xml

And add two lines:

   <Require feature="osapi"></Require>
   <Require feature="learning"></Require>
   <Require feature="settitle"/>
…

   gadgets.window.setTitle('Social Hello World');
   alert(gadgets.learning.getContextLabel());
     var hellos = new Array('Hello World', 'Hallo Welt', 'Ciao a tutti',
...

This requests that the container load the new learning feature, and we then call the feature when the JavaScript starts up in the gadget. You should see the dialog box pop up once you save SocialHelloWorld.xml and do a refresh.


Up next – talking to code inside the server…



Next post in the series

Getting Oriented with Shindig (i.e. Shindig Hacking for Dummies)

First check out a copy of Shindig from Apache.

svn checkout http://svn.apache.org/repos/asf/shindig/trunk/ shindig

Then compile it. The first compile will take a long time and will download a lot of artifacts, so you will want to be on a quick network connection.

mvn

If your compile fails a unit test, try mvn -Dmaven.test.skip=true

You can also take a look at the BUILD-JAVA file in the main directory if you are having problems getting it to compile.

Then start the Jetty server:

mvn -Prun

Your best friend will be the Shindig Getting Started page – it has lots of hints on what to do to explore your container.

We will just hack a single bit of a gadget running in the sample container so click here:

http://localhost:8080/samplecontainer/samplecontainer.html

You should see the “Social Hello World” gadget. Now let’s edit this file:

vi ./target/work/webapp/samplecontainer/examples/SocialHelloWorld.xml

And make the following change:

   gadgets.window.setTitle('Social Hello World');
   alert('Hello Chuck');
     var hellos = new Array('Hello World', 'Hallo Welt', 'Ciao a tutti',

You should see your little alert box when the page refreshes. That is the end of “getting started”.

Note that the SocialHelloWorld.xml file gets overwritten each time you recompile Shindig – so keep your modifications handy elsewhere to reapply after each mvn install. I like editing the gadget in target because then I can just keep doing a refresh.

To shut down the Jetty server, simply kill it (i.e. press CTRL-C in the command window on Mac/Linux).

Now here is a little weirdness when you change the gadget code. I am never sure what exactly is needed to really do a full refresh. Here are the things I generally try:

  • Press Refresh in the Browser
  • Press the “Reset All” button
  • Clear the browser history if all else fails and your changes don’t seem to be getting reloaded

It seems as though there is *lots* of caching going on at several levels and you have to take increasingly drastic measures to get past it as you drop your code bits in.

Next post in the series.

Playing with Shindig/OpenSocial Adding a New Feature and a Service

I have been talking recently with folks at the Open University (UK), the Open University of Catalonia, Ian Boston from Sakai 3, and a few other organizations about the emergence of an “OpenSocial Learning Gadget”. We had a nice Gadget BOF at the Sakai Conference in Denver where Ian Boston (also a Shindig committer) gave a little tutorial on Shindig architecture and how to add a Shindig feature and plug Shindig into something like Sakai.

It seemed really clear and obvious, and it felt to me that there was a nice way forward: build a Shindig feature (i.e. extension) to define a learning gadget, and perhaps line up all of these disparate efforts across vendors and projects so that a “learning gadget” could run in any LMS that had Shindig with the learning extension.

So two weeks ago, with some help from Ian, I downloaded the Shindig source and started banging around with it. Ian helped me a lot, and the Apache Shindig developer list also gave me some wise advice at key moments where I would get lost and confused.

I had three goals in mind as I went through the Shindig code:

(1) Add a “feature” – an extension loaded into the browser that makes an API available to the Javascript code running in the widget.

(2) Add a run-time server-side service to support the feature – the client-side feature code would call this server-side API/service to retrieve things like the course name and the role of the current user, set outcomes, etc. I needed to find out how to write a service and register it both in the server Java code and in the client JavaScript code.

(3) Bang around until I understood the security model and the provisioning and launching of gadgets from the container (i.e. the LMS).

I also wanted to explore how the SPI (Service Provider Interface) pattern worked in Shindig. Pluto 1.1 used the SPI pattern; it was really well done and made it really straightforward to add JSR-168 support to Sakai 2 back in the 2.4/2.5 days.

Part of my investigation was to take notes as I went along and possibly propose to the Shindig list a general capability to add these features without touching any core Shindig code. It may be tricky because, even though the features are JavaScript, there are both compile-time and run-time bits needed.

Along the way, I banged away at Apache Shiro – the generic authentication and authorization project. I found Shiro kind of interesting, and I particularly liked the feature where a Session can exist even if a web browser is not involved in the interaction. In one of my early explorations, I tried to hack Basic LTI Provider code into Shiro and came up with some ways to improve the plug-ability of Shiro – but then I realized it had little to do with what I was investigating with Shindig, so I dropped my Shiro investigation and went back to Shindig.

I am happy to report that Shindig is pretty cool and well structured internally. It was pretty easy to find all of the necessary places to plug my code in. It is not all that well documented, though, and it is not set up to add features or services without modifying source code.

I promised to write some Shindig documentation regarding how this all worked, which I will do in a couple of blog posts over the next week after I clean up the code a bit to make it more presentable.

Next post in the series.