I will be
blogging about my Semantic Web PhD over the next months, until I am finished. You will learn what I did and what I am doing, and perhaps you can copy something for your own thesis or point me to information I missed. Critique, positive and negative, is warmly welcome.
The first part of my dissertation will be about
integrating data into the Semantic Desktop. The problem at hand is that we face data from different sources (files, e-mail, websites) and in different formats (PDF, e-mails, JPEG, perhaps some RDF in a FOAF file, or an iCalendar file), and this data may change frequently or never at all. Faced with all this lovely data that can be of use to the user, we are eager to represent it as RDF. There are two main problems when transforming data to RDF (a small sketch follows the list):
- find an RDF representation for the data (an RDF(S) or OWL vocabulary)
- find a URI identifying the data
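To make the two problems concrete, here is a minimal sketch using the Jena API that represents one MP3 file as RDF. The mp3: vocabulary namespace, the property names, and the file path are made up for illustration, and the file: URI is just one of the identification options I discuss below.

```java
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.Resource;

public class Mp3ToRdf {
    // hypothetical vocabulary namespace, standing in for whatever schema you find or invent
    static final String MP3 = "http://example.org/vocab/mp3#";

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        Property title  = model.createProperty(MP3, "title");
        Property artist = model.createProperty(MP3, "artist");

        // problem 2: which URI identifies this song? here, simply its file: URI
        Resource song = model.createResource("file://c:/documents/leobard/music/numb.mp3");

        // problem 1: which vocabulary describes it? here, the made-up mp3: properties
        song.addProperty(title, "Numb");
        song.addProperty(artist, "U2");

        model.write(System.out, "N-TRIPLE");
    }
}
```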
In my experience, the second question is far harder to solve. While it is quite easy to find an RDF(S) vocabulary for e-mails, MP3s, or people (and if you don't find one on
schemaweb.info, by the way the only website I never had to bookmark because its address is so obvious, you make up the vocabulary yourself), finding the correct URI to identify the resource can be a longer task.
The trickiest cases are identifying
files and things crawled from a bigger database, like the Thunderbird
address book. For files, there are several possibilities, all of which have been used by me or by others.
You can skip this section about typical URIs for files; it is just an example of the implications a choice of URI can have (a small Java sketch of the three options follows the list).
- file://c:/document/leobard/documents/myfile.txt This is the easiest way, because it is conformant with all other desktop applications: Firefox will open this URL, Java knows it, it is good. The problems are: what if you move the file? You lose the annotations, which can be fixed. Second, the URI is not globally unique; two people can have the same file at the same place. Also, it is not possible to use this URI on the Semantic Web at large, because the server part is missing.
- http://desktop.leobard.net/~leobard/data/myfile.txt Assume you have an HTTP daemon running on your machine, as Apple's OS X does, and assume you own the domain name leobard.net and register your desktop under the DNS entry desktop.leobard.net; then you could host your files at this address. Using access rights, you could block all access to the files but still open some for friends. Great. But people usually don't run HTTP servers on their machines, nor do they own namespaces, nor are their desktops reachable on public IP addresses; they are rather behind NAT.
- urn:file-id:243234-234234-2342342-234 Semantic Web researchers love this one. You use a hash or something else to identify the file, and then keep a mapping from the URI to the real location. Systems like kspaces.net used this scheme. It is fine for identifying files, but it loses the nice property of URLs that they can also locate the file.
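Here is a minimal Java sketch of how these three flavours could be minted for a single file. The host name and the ~leobard path are only the assumptions from the second option above, and hashing the path for the urn: variant is a simplification; a real system would rather hash the content or keep a generated id in a mapping table.

```java
import java.io.File;
import java.net.URI;
import java.security.MessageDigest;

public class FileUris {
    public static void main(String[] args) throws Exception {
        File f = new File("c:/document/leobard/documents/myfile.txt");

        // 1. Plain file: URI, understood by every desktop application
        URI fileUri = f.toURI();

        // 2. Hypothetical http: URI, assuming desktop.leobard.net resolves to this
        //    machine and an HTTP daemon serves the data directory under /~leobard/
        URI httpUri = new URI("http", "desktop.leobard.net",
                              "/~leobard/data/" + f.getName(), null);

        // 3. Location-independent urn:, here derived from an MD5 hash of the path
        byte[] digest = MessageDigest.getInstance("MD5")
                                     .digest(f.getAbsolutePath().getBytes("UTF-8"));
        StringBuilder urn = new StringBuilder("urn:file-id:");
        for (byte b : digest) urn.append(String.format("%02x", b));

        System.out.println(fileUri);
        System.out.println(httpUri);
        System.out.println(urn);
    }
}
```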
So, after this excursion we know that it is not straightforward to identify files with a URI. We tried the first two approaches, but I am not happy with them; perhaps I will blog about my latest findings regarding URIs sometime.
On with metadata integration. Four years ago I needed a way to extract metadata from MP3s, Microsoft Outlook and other files, so I created something called "File Adapters". They worked very elegantly: you post a query like "<file://...> <mp3:title> ?x" and get the answer "Numb". This was done by analysing the subject URI (file://...) and invoking the right adapter; the adapter looked at the predicate and extracted only that value, very neat. BUT after two years, around 2004, I realised that I needed an index of all data anyway to do cool SPARQL queries, because a question like "?x mp3:artist 'U2'" was not possible - for such queries you need a central index, like Google Desktop or Mac's Quicksilver (ahh, I mean Spotlight) has. For this, the adapters are still usable, because they can extract the triples bit by bit. But if you fill an index by crawling anyway, you can simplify the whole thing drastically. That is what we found out the hard way, by implementing it and seeing that interested students who helped out had many problems with the complicated adapter stuff, but were quite quick at writing crawlers. We have written this up in a paper called
"Leo Sauermann, Sven Schwarz: Gnowsis Adapter Framework: Treating Structured Data Sources as Virtual RDF Graphs. In Proceedings of the ISWC 2005." (bibtex
here). Shortly after finishing this paper (May 2005?), I came to the conclusion that writing these crawlers is a problem that many other people have, so I asked the people from
x-friend if they would want to do this together with me, but they didn't answer. I then contacted the
Aduna people, who do
Autofocus, and, even better for us, they agreed to cooperate on writing adapters and suggested calling the project Aperture. We looked at what we had done before and then merged our approaches, basically taking the code Aduna already had and putting it into
Aperture.
What we have now is an experiment that showed me that accessing the data live is slower and more complicated than using an index, and that the easiest way to fill the index is crawling (a small sketch of querying such an index follows).
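To make the contrast concrete, here is a minimal sketch of the index side using Jena ARQ: a small in-memory model stands in for the crawled index and is queried with exactly the kind of pattern the live adapters could not answer. The mp3: namespace and the file URIs are made-up placeholders, not the actual Gnowsis or Aperture code.

```java
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QuerySolution;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;

public class IndexQuery {
    static final String MP3 = "http://example.org/vocab/mp3#"; // made-up vocabulary namespace

    public static void main(String[] args) {
        // the "index": in a real system this model would be filled by the crawlers
        Model index = ModelFactory.createDefaultModel();
        Property artist = index.createProperty(MP3, "artist");
        Property title  = index.createProperty(MP3, "title");
        index.createResource("file://c:/music/numb.mp3")
             .addProperty(artist, "U2").addProperty(title, "Numb");
        index.createResource("file://c:/music/one.mp3")
             .addProperty(artist, "U2").addProperty(title, "One");

        // the query a live adapter could not answer: which files have artist 'U2'?
        String sparql =
            "PREFIX mp3: <" + MP3 + "> " +
            "SELECT ?x ?title WHERE { ?x mp3:artist 'U2' ; mp3:title ?title }";

        QueryExecution qe = QueryExecutionFactory.create(sparql, index);
        ResultSet results = qe.execSelect();
        while (results.hasNext()) {
            QuerySolution row = results.nextSolution();
            System.out.println(row.get("x") + " - " + row.get("title"));
        }
        qe.close();
    }
}
```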
The problem that is still unsolved is that the Semantic Web is not designed to be crawled. It should consist of distributed information sources that are accessed through technologies like SPARQL. So at some point in the future we will have to rethink which information should be crawled and which not, because it is already available as a SPARQL endpoint. And then it is going to be tricky to distribute SPARQL queries among many such endpoints, but that will surely be solved by our Semantic Web services friends.
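For completeness, this is roughly what asking a single remote endpoint looks like with Jena ARQ; the endpoint URL is purely hypothetical, and distributing one query over many such endpoints is exactly the part that is still open.

```java
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.ResultSet;

public class EndpointQuery {
    public static void main(String[] args) {
        // hypothetical SPARQL endpoint of a single information source
        String endpoint = "http://desktop.leobard.net/sparql";

        String sparql =
            "PREFIX mp3: <http://example.org/vocab/mp3#> " +
            "SELECT ?x WHERE { ?x mp3:artist 'U2' }";

        // ARQ sends the query over HTTP instead of crawling the source first
        QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, sparql);
        try {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                System.out.println(results.nextSolution().get("x"));
            }
        } finally {
            qe.close();
        }
    }
}
```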