Connecting IMS Learning Tools Interoperability and SAML

Both IMS Learning Tools Interoperability and Security Assertion Markup Language (SAML) have been developing seemingly independently in the space of broad sharing of user identity across multiple organizations and applications. Both can be seen as solving the problem of “federated identity”. SAML is the generic technique for signing, transporting, and parsing security assertions that is embodied in products like Shibboleth, Microsoft Active Directory, and others.

In this post I compare SAML and LTI technically, point out the similarities and differences between the protocols, and then describe how these two technologies are in fact highly complementary and how both are necessary to achieve interoperability between an LMS, an organization, and a learning tool.

Comparing SAML and LTI

The problems that IMS LTI and SAML solve are quite different. LTI establishes trust between a single application (i.e. the Learning Management System) and an external tool, while SAML establishes a relationship between an organization (or federation) and an external tool using Single Sign-On (SSO). LTI transports information in each launch about the user's identity within the LMS, the course from which the launch is coming, the resource/activity id for the launch, the user's particular role in the course, settings specific to the particular launch/activity in the course, and user information such as name and email. SAML provides a tool with a user's enterprise identity, enterprise role, and user information such as name or email. Even though LTI technically passes its data through the user's browser using OAuth 1.0, LTI is architecturally secure server-to-server communication within its domain of trust.
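To make that concrete, here is a Python sketch of the kind of form parameters a Basic LTI launch POSTs to a tool. The parameter names follow the LTI 1.0 specification; the values are invented for illustration:

```python
# A minimal sketch of the form parameters in a Basic LTI launch.
# Parameter names follow the LTI 1.0 specification; values are invented.
launch = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "activity-42",   # the placement of the tool in the course
    "context_id": "course-101",          # the course the launch comes from
    "user_id": "opaque-user-7",          # an opaque id, never the user's email
    "roles": "Instructor",               # the user's role in *this* course
    # Name and email are optional -- the LMS may withhold them:
    "lis_person_name_full": "Jane Doe",
    "lis_person_contact_email_primary": "jane@example.edu",
}

def required_fields_present(params):
    """A tool can count on these fields being present in every launch."""
    required = ("lti_message_type", "lti_version", "resource_link_id")
    return all(k in params for k in required)
```

Notice that the identity, context, and role information all arrive in a single signed POST; the tool needs no prior conversation with the organization.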

Both LTI and SAML are very careful about only releasing private information when appropriate. Both protocols insist that tools understand that they may not receive a user's name or email information on every launch. At times the tool may only receive the user_id (LTI) or the handle (SAML), and in particular the tool should never use a person's email address as its internal key.
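A tool that takes this rule seriously keys its local accounts by the opaque identifier, scoped to the launching system, and treats email as optional, changeable data. A minimal sketch of that pattern (all names here are illustrative, not from either specification):

```python
# Sketch: provision local accounts keyed by the opaque LTI user_id, never
# by email.  Email may be absent on any given launch and may change over
# time; the user_id is stable for a given tool consumer.
accounts = {}

def lookup_or_create(consumer_key, params):
    # Scope the opaque user_id by the consumer key so two different
    # LMS installations with overlapping ids cannot collide.
    key = (consumer_key, params["user_id"])
    if key not in accounts:
        accounts[key] = {"email": params.get("lis_person_contact_email_primary")}
    return accounts[key]
```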

SAML is typically deployed in a top-down fashion where an organization converts to a SAML-based SSO at some point in time. It is often a long, drawn-out process to convert an organization from an existing SSO to a SAML-based SSO. When an organization is setting up its arrangements with tool providers, there needs to be organization-to-organization interaction exchanging keys and other information to properly establish the organization-to-tool relationship. Federations like InCommon reduce but do not eliminate this complexity.

LTI is designed to be deployed in a much more organic, bottom-up / mash-up approach where an organization upgrades their LMS to a version that supports LTI and immediately has access to all of the LTI-enabled tools anywhere on the web. LTI can be used broadly by the organization and enables Web 2.0-style mashups fully under instructor control.

Comparing Technical Complexity and Use Cases

There is quite a difference in terms of the technical complexity of SAML solutions versus LTI solutions.

SAML makes extensive use of XML and PKI libraries in very precise ways and requires a large library footprint and a highly configured application server environment. The typical SAML deployment is done through an Apache module such as mod_shib and is difficult or impossible to deploy in small hosted environments such as Google's App Engine, since developers are not given root access on these low-cost or free hosted systems.

While the InCommon federation reduces the complexity of SAML PKI key management for tool providers, many schools (i.e. nearly all in K12) using SAML are not part of any federation, so each school must separately configure the organization-tool relationship, exchanging and installing PKI keys and configuring identity provider information.

LTI uses the very simple OAuth 1.0a signing protocol developed by Google, Netflix, Twitter, and others to allow simple "arms-length" Web 2.0 identity mash-ups. Because OAuth is aimed at ease of use and broad deployment, its library footprint is very low. The PHP implementation of OAuth is 800 lines long, comments and all, and includes the code to support both the sender and receiver use cases with no additional libraries needed. There are small and simple implementations for every language/operating system in common use and they work nicely in hosted environments (even the free ones).
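To give a sense of how small that footprint is, here is a rough standard-library Python sketch of the HMAC-SHA1 signing step at the core of OAuth 1.0a. It assumes the caller passes an already-normalized launch URL (lowercase scheme and host, no query string) and includes the oauth_* parameters, except oauth_signature itself, in the parameter dictionary:

```python
import base64
import hashlib
import hmac
import urllib.parse

def oauth_escape(s):
    # OAuth 1.0a percent-encoding: only RFC 3986 unreserved characters are safe.
    return urllib.parse.quote(str(s), safe="-._~")

def sign_launch(method, url, params, consumer_secret, token_secret=""):
    """Compute the HMAC-SHA1 oauth_signature for a set of launch parameters."""
    # 1. Encode and sort the parameters, then join them as k=v pairs.
    pairs = sorted((oauth_escape(k), oauth_escape(v)) for k, v in params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    # 2. The signature base string: METHOD & encoded-URL & encoded-params.
    base = "&".join(oauth_escape(p) for p in (method.upper(), url, param_str))
    # 3. The signing key: consumer secret & token secret (empty for LTI launches).
    key = f"{oauth_escape(consumer_secret)}&{oauth_escape(token_secret)}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The receiving tool recomputes the same signature from the POSTed parameters and its copy of the secret, and rejects the launch if the values do not match. That is essentially the whole trust mechanism.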

The simple technology footprint means that LTI empowers far more independent tool developers than SAML, to the point where I, as a teacher, can write a simple learning tool or game, host it on Google App Engine, plug it into my LMS, and use it in my class all in the same day. This allows small, independent tool developers to participate in the larger tool ecology using modest technology infrastructure.

In terms of key distribution in LTI, a key and secret can be handed to an instructor or administrator and typed into a configuration dialog in the LMS to make the connection between the teacher's course and the external tool.

LTI is designed to scale to hundreds or thousands of independent applications and to be deployed in LMS systems as those systems upgrade to LTI-compliant versions. LTI only requires the interaction of the LMS administrator or instructor to integrate a tool into a course. SAML-based solutions cannot be used until a campus converts to SAML and establishes a relationship with the tool.

SAML is best suited for situations where organizations are sharing large, identifiable applications with other organizations that also use SAML as their single-sign-on. SAML is the ideal solution for trust between applications running on one SAML-protected university and applications running on another SAML-protected university.

LTI is best suited where a single (or a few) learning applications like an LMS (and maybe a portal) need to interact with a large and highly changing list of externally hosted learning tools written in many different programming languages and hosted in a wide range of technology platforms.

Learning tools have a particularly unique need in that, in a learning context, each user may have a quite different role in each course they are participating in. A person may be a faculty member in their organizational role, but also a student in an Italian class. Learning Management Systems model these extremely fine-grained and often highly dynamic roles, and LTI properly transports and communicates all this role detail to external tools. SAML models a person's organizational role (i.e. faculty member in School X) but does not model roles down to membership in every roster in every course ever taught (past, present, or future).
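In practice, the launch carries this per-course role as a comma-separated roles value, where entries may be short names or full role URNs. A tool-side check might look like this sketch (an illustration; real tools should consult the full LTI role vocabulary):

```python
def is_instructor(roles_value):
    """Check whether an LTI roles parameter grants instructor privileges.

    The roles parameter is a comma-separated list; entries may be short
    names ("Instructor") or URNs ("urn:lti:role:ims/lis/Instructor").
    """
    for role in roles_value.split(","):
        role = role.strip()
        if role == "Instructor" or role.endswith("/Instructor"):
            return True
    return False
```

Because the role arrives with every launch, a person can be an instructor in one course and a learner in another, and the tool sees the right role each time.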

In summary, the technical footprint for SAML and LTI is different and the use cases they solve are very different. But there is also some similarity between LTI and SAML.

Similarity and Overlap

It is not just dumb luck that there is architectural overlap between SAML and LTI. From the moment that Sakai was conceived there were discussions about SAML (Shibboleth) and Sakai. Steve Carmody of Brown University and Scott Cantor of The Ohio State University were frequent visitors to Sakai meetings and advocated strongly for a deep connection between Shibboleth and Sakai. Based on these discussions, I did my own evaluation of the state of Shibboleth technology in 2004 to see if we could deeply integrate it into Sakai. In 2004, Shibboleth and SAML were not very mature (neither was Sakai) and almost no schools were deploying Shibboleth-based SSO, so my architecture decision at that time was not to build Shibboleth into Sakai and force schools to use Shibboleth as their SSO in order to use Sakai.

My technical work in 2004 gave me a deep understanding of the design, architecture, and strengths of the SAML-based approach used by Shibboleth. In particular, I was quite aware of the use of non-identifying handles, selective attribute release, and other essential features of the SAML/Shibboleth architecture.

Much later Shibboleth support for Sakai was developed by the University of Oxford and the University of the Highlands and Islands in Scotland.

IMS Tools Interoperability started in 2004, and in 2005 had a demonstration in Sheffield, UK that featured Sakai, Moodle, WebCT and Blackboard. TI 1.0 never really got off the ground because its SOAP-based technology footprint was just too much of a mountain to climb for an unknown interoperability payoff. Behind the scenes of the Sheffield demonstration was far too much pain and unnecessary complexity added due to the specification’s use of SOAP. Even though TI 1.0 was not widely implemented, it had all the essential architectural elements that would later define IMS Learning Tools Interoperability 1.0 (note subtle name change “TI 1.0” versus “LTI 1.0” – there will be a few more name changes in this blog post so pay attention).

TI 1.0 was a mash-up approach that only required server-to-server trust (i.e. not organization-server trust), but borrowed the SAML/Shibboleth notions of an opaque handle (user_id in LTI terms) and the selective release of private information such as name and E-Mail.

The Emergence of LTI

Thanks to early leadership of Bruno Van Haetsdaele of Wimba and leadership and funding from Chris Moffat of Microsoft, we started a project called IMS Learning Tools Interoperability LTI 2.0 (note the addition of "L" and the number 2.0) to redo the specification with an extremely simple REST-based protocol to achieve the same results as TI 1.0 but with a far smaller software library footprint. Our working goal was to be at least as easy as the Facebook API or easier. The initial work was based on Wimba's REST-based API. The Wimba/Microsoft-led specification made good progress through early 2008 to the point where it looked like it was ready to be final and we were building implementations for Moodle and Sakai to perform engineering tests.

In early 2008 the group was approached by Blackboard where they showed us the (at that time unreleased) Blackboard BB9 Proxy design for launching external tools. After some deliberation, we decided to scrap the Wimba approach and move to one loosely based on the BB9 proxy approach. This set us back to square one in terms of the technology we developed for Sakai and Moodle – which was discarded. I was in a panic at this point because I had committed to a Google Summer of Code project with three students to build LTI 2.0 implementations and we had just thrown away the specification and prototype code.

Because Google was not flexible in schedules, I quickly assembled an unapproved specification that I called Simple LTI for use by my students over the summer. Simple LTI was a strange combination / approximation of the Wimba provisioning architecture and security model and the BB9 Proxy launch model. It had all the advantages and disadvantages of something developed by a lone panicked person with no reviewers in a week.

Simple LTI was remarkably successful over the summer and Fall of 2008. My students (Jordi Piguillem Poch of the Universitat Politècnica de Catalunya / BarcelonaTech and Katie Edwards of McGill University) built working prototypes of it all, and Microsoft funded a .NET implementation as well. At the 2008 Educause conference in October, I walked around showing our awesome demo to 4-5 people and telling them “this is the future”. A few people like Bill Hughes and Mary Ann Scott of Pearson got it but most of the time I was a nerd with a laptop showing screen shots to people that yawned.

The near completion of the Wimba-led LTI 2.0 effort also attracted the attention of Pearson. Pearson had their own integration strategy for plugging into LMS systems called “Third-Party Interoperability” (TPI) that used a launch and web-service callback approach to solve this problem. It was effective but the web services were (sorry Greg) a little clunky.

With the re-launch of LTI 2.0 to be based on Blackboard’s BB9 proxy, we decided it was time to add the DNA of TPI to the mix as well.

From 2008 onward, the LTI 2.0 leadership was Lance Neumann of Blackboard, Greg McFall of Pearson, and Mark McKell of IMS. There was a lot of work to do in LTI 2.0 and it took from early 2008 to early 2010 to finish and approve Basic LTI 1.0. We decided it would be a 1.0 since TI 1.0 was such a failure. We called it “Basic” because it was a tiny subset of the overall LTI design, only covering the launch. We wanted to get something out while we refined the richer feature set which we called “Full LTI” to contrast it from “Basic LTI”.

With the best DNA of Wimba, Blackboard, and Pearson all mixed together, and with enough time to work out the kinks and think the design through very carefully, Basic LTI 1.0 was a solid technical design. It was reduced to only the most essential features and adopted OAuth as its security model (please don't look too closely at the Simple LTI security model).

Basic LTI 1.0 (later renamed LTI 1.0) was a great market success. At the release of the Basic LTI 1.0 in May 2010, nearly 100% of the higher-ed LMS market could use the spec, either through released code or through open source plugins (i.e. Moodle Module or Blackboard Building Block). A lot of people contributed to the success of Basic LTI 1.0 including Stephen Vickers of Edinburgh University, George Kroner of Blackboard, Alan Zaitchik of Jenzabar, Matt Teskey of Desire2Learn, Mark O’Neil of Oscelot and Blackboard, and many others.

The rate of Basic LTI uptake was unprecedented for a learning standard because it solved a very important problem and did it in the simplest possible way, with good test harnesses, straightforward certifications, and a lot of open source implementations made available from the very beginning. There was nothing in Basic LTI that was not essential, and yet it worked very well.

The first major commercial vendor to ship support in their native release was Desire2Learn in their 8.4.7 release. This was announced as a surprise at the November 2010 Educause meeting. Blackboard was a *little frustrated* at Educause because they had put several years of effort into developing and supporting the standard and had been a leader in the specification and yet Desire2Learn scooped them in the marketplace. I of course made sure to point that out to Ray Henderson in the Blackboard booth and suggested that he not let that happen again in the future. I was not above playing one vendor against the other to achieve my interoperability objectives on behalf of teachers and learners regardless of the LMS they use.

That night I went to a Desire2Learn reception and shook John Baker's (D2L founder and CEO) hand and thanked him for firing the shot heard round the world in Learning Tools Interoperability.

Blackboard followed suit, and did so in strong fashion, early in 2011 when they released Learn 9.1 SP4 with LTI support. But what was even more exciting was the announcement of the CourseSites service, where teachers could use BB9.1 SP4 for free, and it included Basic LTI 1.0. This was a very important development for me because it represented the first moment where a teacher could build an application and host it on a free service like Google App Engine, then teach a course with a commercial LMS that was free and plug their tool into their course with no requirement of any interaction with a university or LMS administrator. While no one other than me used this immediately, for me it was a tipping point in the instructor-mash-up use case.

By the end of 2011, Basic LTI was a roaring success. And once it became widely used, everyone started complaining about what was missing (e.g. the ability to return a grade). The LTI working group was working on a long-term architecture to enable lots of great services, but that was going to take a while, so IMS decided to do one more "Basic" release and add simple grade return.

The decision also was made to just re-name all these specifications “LTI” with a version number. Basic LTI is now referred to as LTI 1.0 and the new LTI with grade return is called LTI 1.1 and was formally released yesterday (yes yesterday). The richer LTI (formerly known as “Full LTI”) will now be LTI 2.0.
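The grade return added in LTI 1.1 is itself quite small: an OAuth-signed XML (POX) message POSTed back to an outcome service URL that the LMS provides in the launch. Here is a Python sketch of building the replaceResult body; the element names follow the LTI 1.1 Basic Outcomes service, but treat this as an illustration rather than a reference implementation:

```python
def replace_result_xml(sourced_id, score):
    """Build the POX body for an LTI 1.1 replaceResult grade call.

    The message would be POSTed to the lis_outcome_service_url from the
    launch, signed with the same OAuth key and secret as the launch itself.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("LTI 1.1 grades are a decimal between 0.0 and 1.0")
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<imsx_POXEnvelopeRequest xmlns="http://www.imsglobal.org/services/ltiv1p1/xsd/imsoms_v1p0">
  <imsx_POXHeader>
    <imsx_POXRequestHeaderInfo>
      <imsx_version>V1.0</imsx_version>
      <imsx_messageIdentifier>1</imsx_messageIdentifier>
    </imsx_POXRequestHeaderInfo>
  </imsx_POXHeader>
  <imsx_POXBody>
    <replaceResultRequest>
      <resultRecord>
        <sourcedGUID><sourcedId>{sourced_id}</sourcedId></sourcedGUID>
        <result><resultScore><language>en</language><textString>{score}</textString></resultScore></result>
      </resultRecord>
    </replaceResultRequest>
  </imsx_POXBody>
</imsx_POXEnvelopeRequest>"""
```

The sourcedId comes from the launch as well, so the tool never needs to know anything about the LMS gradebook beyond what it was handed.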

With LTI 1.0 (formerly known as Basic LTI) already entrenched in the marketplace, the uptake of LTI 1.1 is almost immediate. Virtually all the major vendors in the marketplace are either shipping, or will ship in the next few months, support for LTI 1.1 in their main release. We don't have to go through the phase of building and using open source plugins for LTI 1.1 while waiting for vendors to release their official versions. Broad availability will happen very quickly. Instructure's Canvas and Moodle 2.2 are the champions of the first-to-market race for LTI 1.1. Sakai, Blackboard, and D2L need to play a bit of catch-up this time around, but I doubt it will take long.

While LTI has a lot more to bring to the marketplace in later evolution of the specification, at this point it solves a very important subset of the LMS to tool interoperability needs.

I would be remiss if at this point I did not mention Kevin Riley of IMS. Kevin Riley was the staff member of the original IMS Tools Interoperability 1.0 specification, started in 2004 and finished in 2006 with the Sheffield demo in 2005. He was our cat herder and guide throughout the process.

We lost Kevin on 16th November 2011. It was the week of Educause 2011, where for the first time we truly saw the breakout success of LTI 1.0 across the entire marketplace in ways that enabled whole new approaches to how we develop and provision learning tools. Kevin was our collective starting point for this revolution.

I encourage you to take a look at Kevin's memorial page, watch the video, and read the comments from his family, friends, and colleagues.


8th June 1960 – 16th November 2011

We owe a great debt of gratitude to Kevin for getting us started when few believed in the ideas. He kept us moving forward when it seemed like we were on a long journey to nowhere. I wish he could be here to see what the creation he led us to has wrought. I know that he would chuckle and say 'Aw shucks' and then change the subject to talk about how much he likes the current season of Dr. Who.

Increasing SAML Adoption

During the same 2004-2011 timeframe, the SAML community was also making great progress. The implementations were becoming more robust and available in far more languages. SAML moved from something that "only came with Shibboleth" to being supported in commercial identity products from IBM and Microsoft. SAML became a popular Single Sign-On approach in K12 software because it came built in with Microsoft Active Directory.

In higher education, the Internet2-sponsored InCommon federation became increasingly successful as more and more campuses added Shibboleth or other SAML-based Single Sign-On capabilities and joined InCommon.

In the UK and Netherlands there were broad adoptions of SAML-based SSO with funding and involvement from JISC, SURF, and others.

Even the University of Michigan – my school – the university that invented the CoSign SSO technology – adopted and deployed Shibboleth in addition to CoSign (i.e. not as a replacement).

So by 2011, SAML-based solutions had very much turned an important corner in adoption, to the point where schools without SAML-based SSO support and InCommon federation membership are increasingly seen as the "odd school out".

When Worlds Collide

Both IMS Learning Tools Interoperability (since 2004) and SAML (since 2000) have had a long history and have recently seen growing success and widespread adoption.

It was only a matter of time before they ran into each other.

The first major encounter happened at the University of Wisconsin Division of Information Technology a couple of months ago.

Ironically, the University of Wisconsin took the lead with WebCT to develop the first prototype of the IMS Tools Interoperability specification back in 2004. You can see Dirk Herr-Hoyman of Wisconsin in the TI demonstration in Sheffield, UK. You can also see Anthony Whyte of the University of Michigan and Lydia Li of Stanford University in the video.

An even more ironic detail is that the very first successful exchange of a TI 1.0 SOAP message came while we were at a week-long hack-fest at the University of Wisconsin at Madison. That first message went between Chris Vento-developed WebCT code and Dirk Herr-Hoyman-developed Moodle code.

Enough of the reminiscing, on to the present.

Wisconsin is a Desire2Learn school, supports Shibboleth/SAML, and is one of the leading schools in the InCommon SAML federation. As teachers started to come forward with requests to integrate LTI-based tools into Desire2Learn, the Wisconsin DoIT team raised the completely logical question: "why are we not using SAML for this?"

I was not surprised that this happened and in a way had been waiting, well-prepared, for this conversation. After all, I (and many others) had designed LTI with full awareness and appreciation of the strengths and weaknesses of SAML-based SSO solutions. I knew that SAML and LTI solved different problems, and where there was overlap I made sure to carefully align them so they would fit together like two puzzle pieces when the time came to bring them together.

This led to a series of telephone conference calls and a scheduled face-to-face meeting in Madison on February 29, 2012.

The Big Meeting

Originally we expected the meeting to be an all-day affair with about 10 people. By the time it was all figured out, the only people in the meeting turned out to be Keith Hazelton, Scott Fullerton, and me. And I got stuck in the morning rush-hour traffic coming out of Chicago and arrived two hours late, with only time for a quick lunch and a 90-minute whiteboard meeting.

We had our lunch and we were done 45 minutes and one marked-up whiteboard later. The solution quickly became obvious; we never even had to erase the whiteboard. I had a more complex solution in mind, but Keith said that my complex solution was fraught with problems, had been tried before in SAML-land, and failed miserably. He suggested a much simpler solution that would be sufficient, and within minutes it was obvious how LTI and SAML could easily be made to work together in a completely secure and flexible manner. It actually showed how any SSO could be effectively used in concert with an LTI launch.

The solution was simple, effective and secure. It required no changes to SAML and only a tiny change to LTI to add some optional data to the launch.

It also solved a problem in LTI of having no way to easily associate the local LMS key with a more global single sign-on credential. And it solved the problem of letting users log directly into the external tool, bypassing the LMS, and still get access to their data.
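The account-linking idea can be sketched in a few lines of Python. To be clear about what is hypothetical here: the parameter name "ext_saml_principal" is my invention for illustration only; the real design (in the slides below) defines the actual optional launch data. The shape of the idea is that the launch carries the user's SSO identifier so the tool can record the link once and honor direct SAML logins afterward:

```python
# Hedged sketch of account linking: an LTI launch carrying an optional,
# HYPOTHETICAL parameter ("ext_saml_principal" is invented for this
# illustration) lets the tool associate its opaque LTI identity with the
# user's campus SSO identity, so a later direct SAML login reaches the
# same data without going through the LMS.
links = {}  # maps SSO principal -> (consumer_key, LTI user_id)

def link_identities(consumer_key, params):
    principal = params.get("ext_saml_principal")
    if principal:
        links[principal] = (consumer_key, params["user_id"])
    return principal
```

Because the extra data is optional, launches from an LMS that knows nothing about SAML keep working exactly as before.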

In short, it is the coolest thing since sliced bread. As I expected and hoped, the two specifications were complementary and perfectly filled in each other's missing bits; it was even better than I had expected. Because both specifications were mature and well adopted, and we had the right architects in the room who understood the real strengths and weaknesses of each technology, we went right to the correct solution.

The following is a SlideShare presentation that captures the essence of the design. It is a draft, so I may need to revise it, and comments are welcome. Like all first drafts there is room for improvement.


This is a great start and there are many people who have contributed to how well we can now integrate applications into LMS systems and other systems on campuses. We see the combination of LTI and SAML allowing a wide range of integration approaches, from large multi-organization efforts like InCommon down to one faculty member writing a game, running it on App Engine, and plugging it into a free cloud-hosted LMS like Blackboard's CourseSites or Instructure's Canvas.

There is plenty of work to do to build some prototype code and prove this works in real production. We have some SAML-protected applications at Wisconsin that we can work with to prototype a SAML+LTI connection.

Comments welcome.