TinCan and The Confusion About the Next Generation of SCORM

In the e-learning world SCORM is a big thing because it is the leading framework for supporting the interoperability of complex arrangements of learning resources. The latest release of SCORM 2004 dates back to 2009 and is thus already four years old. However, pretty much from the beginning SCORM 2004 has been considered "too complex", "too complicated", or even "too limited" to be a big success. As of 2013 the interoperability geeks in the community have a new hope that everything will be better, easier, and nicer: TinCan. I keep reading that TinCan will be the successor of SCORM; the latest remark comes from Moodle's Dan Marsden. This is surprising to me, and it should be surprising to you as well. In this article I explain why.

The ADL Co-Labs are the main driver behind SCORM, and it is possibly one of their biggest success stories since the first version of the specification was released in 2000. Back then it provided the core framework for nurturing the interoperable web-based education ecosystem that we know today. However, by the time SCORM 2004 was released, the web was very different than it had been four years before. This became even more evident with the rise of Web 2.0, which started to receive major attention in 2005. Like many e-learning standards and specifications that were released around the same time, SCORM 2004 was not designed to support learning in complex and collaborative environments of services, tools, and people. After the start of the mobile web era in 2007 it became increasingly clear that the monolithic structure of SCORM cannot keep up with the realities and requirements of modern learning management and orchestration.

This has led to a reevaluation of today's requirements for technology-enhanced -- or advanced distributed -- learning. Modern learning is:

  • Distributed,
  • Collaborative,
  • Continuous,
  • Networked,  
  • Mobile, and 
  • Multi-device.

This creates tensions with SCORM's centralised, single-interface, individual-learning, and course-centred concepts. By 2011 it became evident that the main interoperability specifications required a massive overhaul. The first step was the TinCan project, which developed a service API for activity streams from different learning services. The project was a success in the sense that it managed to provide an API for linking educational tools like a sensor network. Therefore, it received some attention in late 2012 and early 2013. However, at the beginning of this year, the ADL Co-Labs also introduced the big-picture roadmap for overhauling SCORM: the Training and Learning Architecture, or TLA for short.

The TLA has four essential parts that are intended to extend the present capabilities of SCORM for maintaining interoperability in modern learning environments.

  • The Experience API and learning record stores (LRS),
  • Content broker,
  • Learner profiles, and 
  • Competence networks. 

The Experience API (xAPI) and the LRS are the key outcomes of the TinCan project. The ADL Co-Labs released the first version of the specification last month. It basically provides a framework for tools, services, and systems to exchange information about learning processes in the form of activity streams. As the name suggests, traces of learning activities are the fundamental element of these streams. However, it is not only important to exchange these streams between systems but also to store them for later reference. This is where the LRS comes into play: Learning Record Stores define the basic functions for storing and retrieving activity streams. Sounds simple, and indeed it is. The brave who have delved into the specification documents have certainly realized that the specification is very young and still requires a decent bit of maturing. Particularly those who deal with mobile, contextual, and collaborative learning will find that many important facets are rather difficult to express.
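To give an impression of what such an activity stream entry looks like, here is a small Python sketch of an xAPI statement -- the actor-verb-object triple the specification revolves around -- together with a toy in-memory record store. The "completed" verb URI is a real ADL verb, but the learner, the activity URL, and the `ToyLRS` class are illustrative assumptions only; an actual LRS exposes storing and retrieving statements through a REST interface, not a Python class.

```python
import json
import uuid
from datetime import datetime, timezone

# A minimal xAPI statement: "actor did verb on object".
# Actor and activity are made-up examples; the verb URI is a standard ADL verb.
statement = {
    "id": str(uuid.uuid4()),
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/courses/scorm-overview",
    },
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

class ToyLRS:
    """A toy Learning Record Store: store statements and query them back.
    Real LRSs offer the same two basic functions over HTTP
    (roughly: POST /statements and GET /statements)."""

    def __init__(self):
        self._statements = {}

    def store(self, stmt):
        self._statements[stmt["id"]] = stmt
        return stmt["id"]

    def query(self, verb_id=None):
        # Filter the stream by verb, e.g. "everything that was completed".
        return [s for s in self._statements.values()
                if verb_id is None or s["verb"]["id"] == verb_id]

lrs = ToyLRS()
sid = lrs.store(statement)
completed = lrs.query(verb_id="http://adlnet.gov/expapi/verbs/completed")
```

Even this toy version hints at the limitation mentioned above: the flat actor-verb-object triple is easy to store and query, but context such as location, device, or group constellation has to be squeezed into optional extension fields.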

The second element of the TLA is the content broker. A content broker is a bit like a content repository on steroids: it not only stores files and documents, but should also be able to dynamically draw resources from different locations to meet the learners' needs. This will finally break with SCORM's "the content must be in the package" doctrine and allow more dynamic arrangements of resources, activities, and scenarios. It also requires that content brokers are able to manage and orchestrate the interplay of - possibly dynamic - resources. The two scenarios that are currently mentioned on the TLA web-page are just-in-time learning and the dynamic selection of learning activities. More down-to-earth approaches include personalised offline caching of training material from several LMS, as we have implemented for the Mobler Cards app.

The third TLA element is learner profiles. One could think that learner profiles are supposed to be based on the xAPI, but this is not really the case, because nobody has an idea how this could be achieved in an interoperable fashion. Basically, learner profiles are the pillar of multi-platform identity management. This includes authentication and authorization procedures as well as learner preferences and the learning history (in terms of certificates, grades, and so on). Therefore, learner profiles can be considered the slow-changing counterpart of the xAPI.

Finally, the TLA includes competence networks. Competence networks are special because they introduce a new concept to the world of SCORM: interdependent modules. Currently, prerequisites for SCORM packages can only be loosely defined, which makes it hard to recommend courses and to personalise learning opportunities. The core idea of competence networks is to connect courses and learning experiences for developing and advancing competences. The vision behind this idea is that organizations can define competence profiles for tasks and duties, enabling them to provide the optimal training to meet the task requirements of their professionals. Furthermore, professionals should be able to use competence networks as evidence of their competences, based on their prior learning experiences.

There is little more information on this topic on the official web-pages of the ADL Co-Labs, which indicates that the future of SCORM is not yet fully shaped. Many aspects are unclear and require more attention in order to develop an understanding of the functional and practical requirements and implications, and to avoid an overly complex framework. However, the available documentation makes clear that the future of SCORM is multi-faceted, and yet it needs to provide a framework that is simple to understand and to implement and that meets the requirements of modern learning. Therefore, it is important to analyse and focus on these requirements instead of reducing the next-generation SCORM to the "TinCan" API. As this very brief analysis has shown, the xAPI is only one piece in a puzzle with many blank spots.

© 2002-2013, Christian Glahn and Michael Valersi