Harvesting Learning Registry resources with Dynamic Learning Maps

13 July 2012

Following on from my initial investigation into the Learning Registry, I have spent a little time looking into how we could use Learning Registry resources within the Newcastle curriculum mapping tool, Dynamic Learning Maps (DLMs).

In order to redisplay resources within DLMs, we decided it would be best to store a local copy of the node data within DLMs rather than querying the node on request, both for performance and to enable us to perform further processing on the data, which is not yet possible via the Learning Registry API.
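As an illustration of what that local copy might hold, here is a minimal sketch of a stored record. The actual DLM schema is not shown in this post; the field names are assumptions based on the envelope data the harvest script works with:

```python
from dataclasses import dataclass, field

@dataclass
class HarvestedResource:
    """Hypothetical shape of a locally stored Learning Registry record.

    Field names are assumed from the Learning Registry envelope format;
    the real DLM storage layer may differ.
    """
    doc_id: str            # Learning Registry doc_ID
    doc_type: str          # part of the duplicate-detection key
    submitter: str         # from the envelope's identity block
    signer: str            # also part of the duplicate key
    resource_locator: str  # URL of the described resource
    payload: dict = field(default_factory=dict)  # raw metadata kept for later processing
    topics: list = field(default_factory=list)   # DLM topics the resource is connected to
```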

Harvesting resources

In order to harvest the resources I wrote a Python script within DLMs, which utilises the Learning Registry slice API; a rough sketch of the flow appears after the list below. Currently the code:

  1. Queries a Learning Registry node using the slice method, with a list of DLM topics as keywords.
  2. Filters out any non-Dublin Core elements for the time being.
  3. Iterates over the returned documents and saves them to DLMs: if the resource is a duplicate (based on doc_type, submitter and signer), the record is updated; otherwise a new resource is created.
  4. Creates DLM connections to the topics used as part of the slice method.
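As a rough sketch of that flow (not the production script), assuming the `requests` library, a hypothetical node URL, and envelope field names taken from the Learning Registry slice API documentation:

```python
import requests

NODE_URL = "http://sandbox.learningregistry.org"  # hypothetical node address


def harvest(topics):
    """Slice a Learning Registry node for the given DLM topics and store the results."""
    # Step 1: query the slice method with the topics as keywords
    resp = requests.get(NODE_URL + "/slice",
                        params={"any_tags": ",".join(topics)})
    resp.raise_for_status()

    for doc in resp.json().get("documents", []):
        envelope = doc.get("resource_data_description", {})

        # Step 2: keep only Dublin Core payloads for the time being
        schemas = envelope.get("payload_schema", [])
        if not any("DC" in schema for schema in schemas):
            continue

        # Step 3: update duplicates (matched on doc_type, submitter and signer),
        # otherwise create a new resource. save_or_update is a placeholder
        # for the DLM persistence code.
        identity = envelope.get("identity", {})
        key = (envelope.get("doc_type"),
               identity.get("submitter"),
               identity.get("signer"))
        resource = save_or_update(key, envelope)

        # Step 4: connect the resource to the topics used in the slice query
        # (connect_topics is likewise a DLM-side placeholder)
        connect_topics(resource, topics)
```

Running it over a handful of sub topics is then a single call, e.g. `harvest(["kidney", "bladder", "ureter"])` (topic names hypothetical).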

Results in DLMs

Running the harvest script with a list of sub topics from the 'Urinary Tract' topic in DLMs resulted in 116 new resources (see the JSON via the Learning Registry). I have included a few screenshots below of a development version of DLMs, which show how the resources are redisplayed.

Resources connected to the ‘Kidney’ topic:

Viewing a resource shows basic details about it, and in the image below you can see the resource is connected to the jLern topic and the Kidney topic within DLMs.

Learning Registry entry for ‘Kidney tutorial’ (see this resource via the Learning Registry API):


Searching DLMs for ‘Kidney’ returns both curriculum content and resources from the Learning Registry:

What’s next?

  • One caveat is that the resources returned might not be in the context you expect; some additional checking would need to be put in place, such as a list of trusted publishers/sources (see the sketch after this list).
  • Add in the harvesting of activity data (comments and ratings).
  • Modify the publish code I previously worked on to push open DLM resources and activity data to the Jlern node.
  • The aim is to set the script to run as a scheduled task on the public version of DLMs as an initial demo.
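The trusted-source check could be as simple as filtering envelopes by submitter before they are saved. A minimal sketch, assuming the same envelope fields as above and a hypothetical whitelist:

```python
# Hypothetical whitelist; in practice this would be maintained alongside DLMs
TRUSTED_SUBMITTERS = {"Trusted Publisher A", "Trusted Publisher B"}


def is_trusted(envelope):
    """Return True only for envelopes submitted by a whitelisted source."""
    submitter = envelope.get("identity", {}).get("submitter", "")
    return submitter in TRUSTED_SUBMITTERS
```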

Related tags: dynamic learning maps, jlern, learning registry, oer phase 3, oerri, rapid innovation, RIDLR, ukoer

Posted by: James Outterside

Posted in: OER phase 3 blog, OER rapid innovation: RIDLR

Reader comments

Pat - 13 July 2012 @ 15:12:15

I thought the idea (scale wise) was to run a slice against all content, and produce a stack of returned resources - with them ordered by how often they occurred - and then do a peer review check on them.

So you populate via slicing, but into a second system with some QA.

James Outterside - 13 July 2012 @ 15:23:22

Hi Pat, I didn't think you could slice everything, although I have probably missed something in the docs.

The idea we are looking at is querying based on a set of DLM topics (tags), for example anatomy, which could be over 1,000 keywords. We will probably still need some kind of sign-off even if we have a trusted list etc. as suggested.
