OpenART Final Report

Made by OpenART:

  • The OpenART ontology, an event-driven ontology produced to describe the ‘artworld’ dataset. The ontology is split into a number of parts to allow greater re-usability. It should be considered ‘work in progress’, although the version published is complete for the OpenART project: dlib.york.ac.uk/ontologies/
  • An ontology browser version of the ontologies can be found at dlib.york.ac.uk/ontologies/openart – for easier reading!
  • Sample data for each ‘primary’ entity, in the form of an RDF/XML document and a Turtle document, are available: dlib.york.ac.uk/data
  • VoID description for the dataset: dlib.york.ac.uk/data
  • A script for creating all of the documents and ingesting them into the Digital Library Fedora repository.
  • RDFa embedded in the pages of artworld.york.ac.uk (forthcoming)

Next steps for OpenART and the University of York:

For the University of York, there is some work to complete in order to get the full (current) dataset into our Fedora repository, mainly in setting up the URL rewriting and content negotiation rules.
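
To illustrate what we mean by content negotiation, here is a minimal sketch, using Flask purely as a stand-in and with hypothetical URI paths rather than our actual configuration: a request for an entity is redirected either to the human-readable page or to an RDF document, depending on the Accept header.

```python
# A minimal content negotiation sketch. Flask and the /id/, /page/ and /data/
# paths are illustrative assumptions, not the real York configuration.
from flask import Flask, request, redirect

app = Flask(__name__)

@app.route("/id/<entity_type>/<entity_id>")
def negotiate(entity_type, entity_id):
    accept = request.headers.get("Accept", "")
    if "text/turtle" in accept:
        return redirect(f"/data/{entity_type}/{entity_id}.ttl", code=303)
    if "application/rdf+xml" in accept:
        return redirect(f"/data/{entity_type}/{entity_id}.rdf", code=303)
    # Browsers (and anything else) fall through to the human-readable page
    return redirect(f"/page/{entity_type}/{entity_id}", code=303)
```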

After that, we would ideally like to apply the same linked data principles to other Digital Library content, particularly some of the rich image content that we have. This would involve mapping and modelling work, for example VRA image metadata to linked data, and automating the generation of RDF.

Some thoughts

The approach taken in OpenART was somewhat twofold, with prototyping carried out using a variety of tools (summarised in the technical approaches post), which could be explored further in future work.

The dataset which drives OpenART was released as a web application in October at http://artworld.york.ac.uk, to meet the requirements of the separate AHRC-funded project out of which it was created. The site has been developed as a database-driven application, an approach chosen as the best fit for the time available. The site, always envisaged as an end point for human users, was not initially designed for linked data. Indeed, one might argue that databases and ontologies do not make happy bedfellows. However, what we have found in the project is that it was relatively straightforward to (1) create a script to extract open data documents from the database and ingest them into our Fedora Digital Library, and (2) add RDFa tagging to the web site itself.
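
As a rough illustration of step (1), the sketch below pulls rows from a relational database and writes one RDF document per entity using rdflib. The table, column names and the openart:Artwork class are invented for the example; they are not taken from the artworld schema or the published ontology.

```python
# Sketch of extracting database rows as per-entity RDF documents.
# Table name, columns and the openart:Artwork class are hypothetical.
import sqlite3
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

OPENART = Namespace("http://dlib.york.ac.uk/ontologies/openart#")  # assumed namespace form
BASE = "http://dlib.york.ac.uk/data/artwork/"

conn = sqlite3.connect("artworld.db")
for row_id, title, artist in conn.execute("SELECT id, title, artist FROM artworks"):
    g = Graph()
    subject = URIRef(BASE + str(row_id))
    g.add((subject, RDF.type, OPENART.Artwork))
    g.add((subject, DCTERMS.title, Literal(title)))
    g.add((subject, DCTERMS.creator, Literal(artist)))
    # Write both of the serialisations mentioned above; documents like these
    # are what the ingest script then loads into the Fedora repository.
    g.serialize(destination=f"artwork_{row_id}.ttl", format="turtle")
    g.serialize(destination=f"artwork_{row_id}.rdf", format="xml")
```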

One of the benefits of this approach is that it provides us with a non-proprietary back-up and preservation routine for the database, playing to one of Fedora’s strengths. It also demonstrates how Fedora can be used in place of a simple file structure to serve up linked data documents, bringing with it the advantages of data management, indexing and version control.

What this rather document-centric approach does not provide is a fully indexed RDF store with a SPARQL end point. Although Fedora has these as part of its stack, they are internal tools for Fedora, not designed for indexing anything other than the core Fedora datastreams. Future work to enable Fedora’s ‘semantic’ capacity for external content would be extremely useful. The European Interactive Knowledge Stack (IKS) Project is doing interesting work in this area (http://www.iks-project.eu/).

Opportunities

OpenART was always focussed on a narrow, rich seam of data, rather than a broader, simpler dataset. There is an opportunity here to see how these two approaches can co-exist. Good ontology modelling will allow rich, drilled-down terms to be mapped back to broader concepts for greater findability of content, whilst still allowing much finer-grained analysis of the detail captured by the ontology. Where there may be a gap is in the tools which query, visualise and analyse the data sources.
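
A small sketch of what ‘mapping back to broader concepts’ can look like in practice: declaring a fine-grained property (an invented openart:saleDate here, not a term from the published ontology) as a specialisation of a widely understood Dublin Core term, so that generic tools can still discover the data.

```python
# Sketch: roll a project-specific property up to a broader vocabulary term.
# openart:saleDate is invented for illustration.
from rdflib import Graph, Namespace
from rdflib.namespace import DCTERMS, RDFS

OPENART = Namespace("http://dlib.york.ac.uk/ontologies/openart#")  # assumed namespace form

g = Graph()
g.bind("openart", OPENART)
g.bind("dcterms", DCTERMS)

# Anything stated with the rich term can also be found via the broad one
g.add((OPENART.saleDate, RDFS.subPropertyOf, DCTERMS.date))

print(g.serialize(format="turtle"))
```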

Extending existing applications to better support open data is another opportunity, allowing standard repository platforms such as EPrints, DSpace and Fedora Commons to offer standard linked data endpoints, with options for configuring the data exposed.

Google Refine has come out strongly in OpenART as an extremely useful tool for manipulating datasets. It is particularly well suited to people who do not have in-depth programming skills but want to get RDF out of semi-structured documents. There is an opportunity to dispel some of the mystique around creating open data and RDF, which can be quite simple to do.

Evidence of reuse

Our data has not been re-used, although we have had interesting discussions with a range of stakeholders from Tate, to be summarised in a blog post on how others could follow in OpenART’s footsteps.

Skills

What skills were used in your project? Did you already have these skills in your team or did you need to develop them or bring in external experts? Are the processes you have developed embedded in your institutional practice now or are there plans to embed them? Do you plan to develop these skills further?

OpenART did have a range of skills in the team, in Java programming, databases, Fedora Commons and metadata. External experts brought ontology modelling and RDF expertise, along with extra Fedora Commons expertise. These were essential for the project. OpenART has seen members of the project team gain much deeper knowledge of RDF and ontologies, which we would hope to embed at York with further projects around linked open data.

Lessons

Lesson 1

Ontology modelling is complex, so allow plenty of time for it. Take time to consider the data model and the best approach; a simpler ‘mix and match’ of existing schema terms might be suitable. For OpenART, where the data is very specific, an ontology was considered the best approach.

Lesson 2

Use Turtle during development phases and get familiar with validation and inspection tools. Turtle is a simple notation for RDF and is very easy to write and to understand. It can be exported directly out of Google Refine and validated with common tools (e.g. http://www.rdfabout.com/). Any23 (http://any23.org/) can be used to generate other formats, such as RDF/XML or RDFa. Sindice’s Inspector (http://inspector.sindice.com/) is useful for viewing the relationships in RDF and checking that documents are not just valid, but also correct. Google Refine can be used as a relatively rapid way of generating RDF samples.
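
By way of illustration, the snippet below parses a small, invented Turtle sample with rdflib and round-trips it to RDF/XML, a quick local complement to the online validators; the parse step fails if the Turtle is malformed.

```python
# Quick local sanity check on a Turtle snippet: parse it, then round-trip
# it to RDF/XML. The sample triples are invented for illustration.
from rdflib import Graph

turtle_sample = """
@prefix dcterms: <http://purl.org/dc/terms/> .
<http://dlib.york.ac.uk/data/artwork/1>
    dcterms:title "A sample artwork" ;
    dcterms:creator "A sample artist" .
"""

g = Graph()
g.parse(data=turtle_sample, format="turtle")  # raises an exception if malformed
print(f"Parsed {len(g)} triples")
print(g.serialize(format="xml"))              # the same data as RDF/XML
```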

Lesson 3

Come up with use cases. What questions do users want to answer with your data? What links do they want to follow? Understanding the uses and potential uses of the data can help not only with modelling but also with making the case for doing linked data in the first place.
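
One way to make a use case concrete is to write it down as a query before any infrastructure exists. The sketch below turns the question ‘which works are by a given artist?’ into a SPARQL query run over one of the extracted documents; the data and property choices are the invented ones from the earlier sketches, not the published ontology.

```python
# Sketch: expressing a use-case question as a SPARQL query over extracted data.
# Uses the invented sample data from the earlier sketches.
from rdflib import Graph

g = Graph()
g.parse("artwork_1.ttl", format="turtle")

query = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?artwork ?title WHERE {
    ?artwork dcterms:creator "A sample artist" ;
             dcterms:title ?title .
}
"""
for artwork, title in g.query(query):
    print(artwork, title)
```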
