Thursday 25 March 2010

Curating content from one repository to put into another

First you need a little code that I've written:

sudo easy_install recordsilo oaipmhscraper

(This should install all the dependencies for the following)

To harvest some OAI-PMH records from, say, http://eprints.soton.ac.uk/perl/oai2 :

First, take a look at the Identify page for the OAI-PMH endpoint: http://eprints.soton.ac.uk/perl/oai2?verb=Identify
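
(If you'd rather stay at a Python prompt, the same Identify response can be fetched in a couple of lines - urllib here is just the Python 2 standard library:)

>>> import urllib
>>> print urllib.urlopen("http://eprints.soton.ac.uk/perl/oai2?verb=Identify").read()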

The sample identifier indicates that the record identifiers start with "oai:eprints.soton.ac.uk:" - we'll need this in a bit. (Strictly speaking we won't *need* it, but it'll make the local storage more... elegant.)

Go to a nice clean directory, with enough storage to handle whatever you want to harvest.

Start a Python command line:

>>> from oaipmhscraper import OAIPMHScraper

---> NB the constructor signature is: OAIPMHScraper(storage_dir, base_oai_url, identifier_uri_prefix)

>>> oaipmh = OAIPMHScraper("myrepo",
...                        "http://eprints.soton.ac.uk/perl/oai2",
...                        "oai:eprints.soton.ac.uk:")

Let's have a look at what can be found out about the OAI-PMH endpoint:

>>> oaipmh.state

{'lastidentified': '2010-03-25T15:57:15.670552', 'identify': {'deletedRecord': 'persistent', 'compression': [], 'granularity': 'YYYY-MM-DD', 'baseURL': 'http://eprints.soton.ac.uk/perl/oai2', 'adminEmails': ['mailto:eprints@soton.ac.uk'], 'descriptions': ['........'], 'protocolVersion': '2.0', 'repositoryName': 'e-Prints Soton', 'earliestDatestamp': '0001-01-01 00:00:00'}}

>>> oaipmh.getMetadataPrefixes()

{'oai_dc': ('http://www.openarchives.org/OAI/2.0/oai_dc.xsd', 'http://www.openarchives.org/OAI/2.0/oai_dc/'), 'uketd_dc': ('http://naca.central.cranfield.ac.uk/ethos-oai/2.0/uketd_dc.xsd', 'http://naca.central.cranfield.ac.uk/ethos-oai/2.0/')}

Let's grab all the oai_dc from all the objects:

>>> oaipmh.getRecords('oai_dc')
...
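
(Under the hood this presumably walks the standard OAI-PMH ListRecords verb, following resumption tokens until the list is exhausted - the first request should be equivalent to fetching:

http://eprints.soton.ac.uk/perl/oai2?verb=ListRecords&metadataPrefix=oai_dc )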

Go and make a cup of coffee or tea... you'll get lots of output like:

INFO:OAIPMH Harvester:New object: oai:eprints.soton.ac.uk:1267 found with datestamp 2004-04-27T00:00:00 - storing.
2010-03-25 16:01:11,807 - OAIPMH Harvester - INFO - New object: oai:eprints.soton.ac.uk:1268 found with datestamp 2005-04-22T00:00:00 - storing.
INFO:OAIPMH Harvester:New object: oai:eprints.soton.ac.uk:1268 found with datestamp 2005-04-22T00:00:00 - storing.
2010-03-25 16:01:11,813 - OAIPMH Harvester - INFO - New object: oai:eprints.soton.ac.uk:1269 found with datestamp 2004-04-07T00:00:00 - storing.
INFO:OAIPMH Harvester:New object: oai:eprints.soton.ac.uk:1269 found with datestamp 2004-04-07T00:00:00 - storing.
2010-03-25 16:01:11,819 - OAIPMH Harvester - INFO - New object: oai:eprints.soton.ac.uk:1270 found with datestamp 2004-04-07T00:00:00 - storing.
INFO:OAIPMH Harvester:New object: oai:eprints.soton.ac.uk:1270 found with datestamp 2004-04-07T00:00:00 - storing.
2010-03-25 16:01:11,824 - OAIPMH Harvester - INFO - New object: oai:eprints.soton.ac.uk:1271 found with datestamp 2004-04-14T00:00:00 - storing.

...

My advice is to hop over to a different terminal window and start to poke around with the content you are getting.

The underlying store is a take on the CDL's Pairtree microspec (pairtree being a minimalist specification for how to structure access to object-oriented items on a hierarchical filesystem). The model I've layered on top of pairtree I call a Silo (implemented in the RecordSilo library I've written); it provides a basic object model in which each object has a persistent, read/writable JSON state and can store any file, including files in subdirectories. It has crude object-level versioning, rather than per-file versioning, so you can clone one version and then delete from, alter or add to the clone to create a second, curated version for reuse elsewhere, without affecting the original.

What makes pairtree attractive is that the files themselves are not altered in form, so normal POSIX tools can be used on them directly, without any unwrapping or unpacking.
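
To make that concrete, here is a rough sketch of the kind of identifier-to-path mapping pairtree performs. This is NOT the real implementation - the actual microspec hex-encodes a number of other awkward characters too - but it shows the idea:

# simplified sketch of pairtree path mapping - not the full spec
def pairtree_path(identifier):
    # substitute characters that are awkward on filesystems...
    cleaned = identifier.replace("/", "=").replace(":", "+").replace(".", ",")
    # ...then split the cleaned identifier into two-character segments
    return "/".join(cleaned[i:i+2] for i in range(0, len(cleaned), 2))

print pairtree_path("oai:eprints.soton.ac.uk:6102")
# -> oa/i+/ep/ri/nt/s,/so/to/n,/ac/,u/k+/61/02

Each object then lives at the leaf of its own little directory tree, which is why ordinary filesystem tools work on it directly.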

Let's have a look around at what's been harvested so far into the "myrepo" silo:

>>> from recordsilo import Silo
>>> s = Silo("myrepo")
>>> s.state
{'storage_dir': 'myrepo', 'identifier_uri_prefix': 'oai:eprints.soton.ac.uk:', 'uri_base': 'oai:eprints.soton.ac.uk:', 'base_oai_url': 'http://eprints.soton.ac.uk/perl/oai2'}

>>> len(s) # NB this can be a time-consuming operation
1100
>>> len(s)  # a little later - the harvest is still running in the other window
1200

Now let's look at a record: I'm sure I saw '6102' whizz past as it was harvesting...

>>> obj = s.get_item("oai:eprints.soton.ac.uk:6102")

>>> obj
{'files': {'1': ['oai_dc']}, 'subdir': {'1': []}, 'versions': ['1'], 'date': '2004-06-24T00:00:00', 'currentversion': '1', 'metadata_files': {'1': ['oai_dc']}, 'item_id': 'oai:eprints.soton.ac.uk:6102', 'version_dates': {'1': '2004-06-24T00:00:00'}, 'metadata': {'identifier': 'oai:eprints.soton.ac.uk:6102', 'firstSeen': '2004-06-24T00:00:00', 'setSpec': ['7374617475733D707562', '7375626A656374733D51:5148:5148333031', '7375626A656374733D47:4743', '74797065733D61727469636C65', '67726F75703D756F732D686B']}}
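
(As an aside, those setSpec values appear to be hex-encoded ASCII - presumably to keep the set names within OAI-PMH's restricted setSpec character set. Python 2 can decode them directly:)

>>> "7374617475733D707562".decode("hex")
'status=pub'
>>> "74797065733D61727469636C65".decode("hex")
'types=article'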

>>> obj.files
['oai_dc']
>>> obj.versions
['1']
>>> obj.clone_version("1","workingcopy")
'workingcopy'
>>> obj.versions
['1', 'workingcopy']
>>> obj.currentversion
'workingcopy'
>>> obj.set_version_cursor("1")
True
>>> obj.set_version_cursor("workingcopy")
True
>>> obj.files
['oai_dc']
>>> with obj.get_stream("oai_dc") as oai_dc_xml:
...     print oai_dc_xml.read()
...
<metadata xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<oai_dc:dc xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:dc="http://purl.org/dc/elements/1.1/" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:title>Population biology of Hirondellea sp. nov. (Amphipoda: Gammaridea: Lysianassoidea) from the Atacama Trench (south-east Pacific Ocean)</dc:title>
<dc:creator>Perrone, F.M.</dc:creator>
<dc:creator>Dell'Anno, A.</dc:creator>
<dc:creator>Danovaro, R.</dc:creator>
<dc:creator>Groce, N.D.</dc:creator>
<dc:creator>Thurston, M.H.</dc:creator>
<dc:subject>QH301 Biology</dc:subject>
<dc:subject>GC Oceanography</dc:subject>
<dc:description/>
<dc:publisher/>
<dc:date>2002</dc:date>
<dc:type>Article</dc:type>
<dc:type>PeerReviewed</dc:type>
<dc:identifier>http://eprints.soton.ac.uk/6102/</dc:identifier></oai_dc:dc></metadata>

You can add bytestreams as strings:

>>> obj.put_stream("foo.txt", "Some random text!")

or as file-like objects:

>>> with open("README", "r") as readmefile:
...     obj.put_stream("README", readmefile)
...
>>> obj.files
['oai_dc', 'foo.txt', 'README']
>>> obj.set_version_cursor("1")
True
>>> obj.files
['oai_dc']

Note that the files added to the "workingcopy" version haven't affected version "1" at all. Still, a Python prompt isn't the easiest way to browse or poke around the files - it would be nice to see them through a web UI of some kind:

Grab the basic UI code from http://github.com/benosteen/siloserver

(You'll need to install web.py and Mako: sudo easy_install mako web.py)

Then edit the silodirectory_conf.py file to point to the location of the Silo - if the directory structure looks like the following:

myrepo/
 |- ... Silo directory contents ...
SiloServer/
 |- dropbox.py
 |- silodirectory_conf.py
 |- etc.

You need to change data_dir to "../myrepo", and then you can start the server by running 'python dropbox.py'.
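
After the edit, the relevant line of silodirectory_conf.py should look something like this (data_dir is the setting named above; anything else in the file can be left at its defaults):

# silodirectory_conf.py - point the server at the Silo we created earlier
data_dir = "../myrepo"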

Point a browser at http://localhost:8080/ and wait a while - that start page loads *every* object in the Silo.

And let's revisit our altered record, at http://localhost:8080/oai:eprints.soton.ac.uk:6102

So, from this point, I can curate the records as I wish, add files to each item - perhaps licences, PREMIS files, etc - and then push them onto another repository, such as Fedora.
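
As a taster, here is a very rough sketch of what such a push might look like against a Fedora 3.x REST API. Everything here - host, port, credentials, the PID "changeme:1", the datastream ID, and the assumption that the target object already exists - is illustrative, to be adjusted for your own install:

# ROUGH SKETCH ONLY - host, credentials, pid and dsID below are assumptions
import urllib2

FEDORA = "http://localhost:8080/fedora"

# basic-auth against the stock fedoraAdmin account (an assumption)
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, FEDORA, "fedoraAdmin", "fedoraAdmin")
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(passman))

def add_datastream(pid, dsid, content, mimetype="text/xml"):
    # Fedora 3.x addDatastream: POST /objects/{pid}/datastreams/{dsID}
    url = "%s/objects/%s/datastreams/%s?controlGroup=M&mimeType=%s" % (
        FEDORA, pid, dsid, mimetype)
    req = urllib2.Request(url, data=content)
    req.add_header("Content-Type", mimetype)
    return opener.open(req)

# push the curated oai_dc for our example record into a (pre-created) object
with obj.get_stream("oai_dc") as oai_dc_xml:
    add_datastream("changeme:1", "OAI_DC", oai_dc_xml.read())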
