13 July 2012
Following on from my initial investigation into the Learning Registry, I have spent a little time looking into how we could use Learning Registry resources within the Newcastle curriculum mapping tool, Dynamic Learning Maps (DLMs).
In order to redisplay resources within DLMs, we decided it would be best to store a local copy of the node data within DLMs rather than querying the node on request, both for performance and so that we can perform further processing on the data, which is not yet possible via the Learning Registry API.
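The post does not show how the local copy is stored, so here is a minimal sketch of what caching harvested envelopes might look like. The SQLite table and field names are purely illustrative assumptions, not the actual DLMs schema:

```python
import json
import sqlite3

def make_cache(path=":memory:"):
    """Create a minimal local cache for harvested Learning Registry
    envelopes. The schema is illustrative -- the real DLMs store is
    not described in the post."""
    db = sqlite3.connect(path)
    db.execute(
        """CREATE TABLE IF NOT EXISTS lr_resource (
               doc_id   TEXT PRIMARY KEY,  -- Learning Registry doc ID
               locator  TEXT,              -- resource URL
               envelope TEXT               -- raw JSON, kept for later processing
           )"""
    )
    return db

def cache_envelope(db, doc_id, locator, envelope):
    """Upsert one harvested envelope, so repeated harvests of the
    same document do not create duplicates."""
    db.execute(
        "INSERT OR REPLACE INTO lr_resource VALUES (?, ?, ?)",
        (doc_id, locator, json.dumps(envelope)),
    )
    db.commit()
```

Keeping the raw envelope JSON alongside the extracted fields means the further processing mentioned above can be re-run locally without re-harvesting.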
In order to harvest the resources, I wrote a Python script within DLMs which utilises the Learning Registry slice API. Currently the code:
Running the harvest script with a list of sub-topics from the 'Urinary Tract' topic in DLMs resulted in 116 new resources (see the JSON via the Learning Registry). I have included a few screenshots below of a development version of DLMs, showing how the resources are redisplayed.
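As a rough illustration of the harvest step, a slice query for a set of topic tags might look like the sketch below. The node URL and the `documents`/`resumption_token`/`resource_data_description` field names are assumptions based on the public slice endpoint, not the actual DLMs script:

```python
import json
import urllib.parse
import urllib.request

# A public Learning Registry node is assumed here; the post does not
# say which node DLMs harvests from.
NODE = "http://node01.public.learningregistry.net"

def slice_url(tags, resumption_token=None):
    """Build a slice API URL for a list of DLM topic tags.
    `any_tags` matches documents carrying any of the given keywords."""
    params = {"any_tags": ",".join(tags)}
    if resumption_token:
        params["resumption_token"] = resumption_token
    return NODE + "/slice?" + urllib.parse.urlencode(params)

def harvest(tags):
    """Yield resource envelopes for the given tags, following
    resumption tokens until the node reports no more pages.
    The response field names are assumptions about the slice format."""
    token = None
    while True:
        with urllib.request.urlopen(slice_url(tags, token)) as resp:
            page = json.load(resp)
        for doc in page.get("documents", []):
            yield doc.get("resource_data_description", doc)
        token = page.get("resumption_token")
        if not token:
            break
```

Each yielded envelope could then be passed to the local cache described earlier for redisplay and further processing.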
Resources connected to the ‘Kidney’ topic:
Viewing a resource shows basic details about it; in the image below you can see that the resource is connected to both the jLern topic and the Kidney topic within DLMs.
Learning Registry entry for ‘Kidney tutorial’ (see this resource via the Learning Registry API):
Searching DLMs for ‘Kidney’ returns both curriculum content and resources from the Learning Registry:
Related tags: dynamic learning maps, jlern, learning registry, oer phase 3, oerri, rapid innovation, RIDLR, ukoer
Posted by: James Outterside
- 13 July 2012 @ 15:12:15
My thought (scale-wise) was to run a slice against all content and produce a stack of returned resources, ordered by how often they occurred, and then do a peer-review check on them.
So you populate via slicing, but into a second system with some QA
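[Editor's sketch] The stack-and-rank idea above could be as simple as counting how often each resource locator turns up across slice results and reviewing the most frequent first; the `resource_locator` field name is an assumption about the envelope shape:

```python
from collections import Counter

def rank_for_review(envelopes):
    """Order resource locators by how often they occur across slice
    results, so frequently-seen resources go to the top of a
    peer-review queue. Assumes each envelope carries a
    `resource_locator` field."""
    counts = Counter(
        env.get("resource_locator")
        for env in envelopes
        if env.get("resource_locator")
    )
    return counts.most_common()
```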
- 13 July 2012 @ 15:23:22
Hi Pat, I didn't think you could slice everything, although I have probably missed something in the docs.
The idea we are looking at is querying based on a set of DLM topics (tags), for example anatomy, which could cover over 1,000 keywords. We will probably still need some kind of sign-off even if we have a trusted list etc., as suggested.