Friday, 29. July 2005

smushing

I am putting together more about smushing, which will be a key factor in the global Semantic Web: connecting annotations that were made by different people.

A typical smushing algorithm would be:
  • take a large datastore DS that contains a set of triples Tset = {Ta, Tb, Tc, ...}
  • iterate through the known InverseFunctionalProperties IFPset = {Ia, Ib, Ic, ...}
  • for each InverseFunctionalProperty Iy that occurs in Tset as a predicate, check whether smushing applies
  • find all triples TxIy whose predicate is Iy
  • pick one triple Txc of TxIy that points to a grounding resource / canonical resource (see below)
  • take the subject Sx of Txc and aggregate the triples of all other subjects of TxIy onto Sx, i.e. rewrite the subject in those triples to Sx
  • add owl:sameAs triples connecting all Subjects(TxIy) to Sx
The problem is choosing the canonical subject: when a set of triples TxIy has several subjects that should be the same - as defined by the IFP - which subject should receive the aggregated triples?
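The steps above can be sketched in a few lines of Python. This is a minimal sketch over plain (subject, predicate, object) tuples rather than a real triple store; the pick_canonical() helper and the example URIs are illustrative assumptions, not part of the algorithm as stated.

```python
OWL_SAME_AS = "owl:sameAs"

def smush(triples, ifps, pick_canonical):
    """Merge subjects that share a value for an InverseFunctionalProperty."""
    triples = set(triples)
    for ifp in ifps:
        # group subjects by their value for this IFP
        by_value = {}
        for s, p, o in triples:
            if p == ifp:
                by_value.setdefault(o, set()).add(s)
        for value, subjects in by_value.items():
            if len(subjects) < 2:
                continue  # nothing to smush for this value
            canonical = pick_canonical(subjects)
            # rewrite triples of the non-canonical subjects onto the canonical one
            rewritten = set()
            for s, p, o in triples:
                if s in subjects and s != canonical:
                    rewritten.add((canonical, p, o))
                else:
                    rewritten.add((s, p, o))
            triples = rewritten
            # record the identity with owl:sameAs links
            for s in subjects:
                if s != canonical:
                    triples.add((s, OWL_SAME_AS, canonical))
    return triples
```

With two foaf:mbox-sharing persons and e.g. pick_canonical=min, all triples of the second subject end up on the first, plus an owl:sameAs link back.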

There are different approaches to finding the canonical resource:
  • pick one at random
  • prefer the resource annotated in a special ontology (e.g. prefer SKOS concepts over foaf:Persons)
  • prefer the more public resource (googlefight; public URLs win over private URIs)
  • prefer the best-annotated resource (the resource with the most triples - attention, this is self-amplification of single resources)
  • prefer the resource with the shortest / the longest URI
  • prefer named resources over anonymous resources (this is very important, you must not smush onto anonymous nodes)
Another question is what to do with the result of the smushing. There are different approaches:
  1. store the smushed triples in an extra graph
  2. delete the old triples and add the smushed ones
  3. add the smushed triples alongside the old ones (tricky)
Each has obvious advantages and disadvantages. For gnowsis I would prefer (1), smushing into an extra graph, which is similar to (3) but separates the data.
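Approach (1) can be sketched with two named graphs: the crawled data stays untouched, and only derived statements go into the extra graph; queries take the union. The graph names are made-up placeholders:

```python
# approach (1): original data is never modified, derived triples live apart
store = {
    "urn:graph:original": set(),   # crawled data, read-only for the smusher
    "urn:graph:smushed": set(),    # derived owl:sameAs links and rewrites
}

def add_smush_result(store, derived_triples):
    # only the extra graph receives derived statements
    store["urn:graph:smushed"].update(derived_triples)

def query_union(store):
    # queries see both graphs; the original alone is still available
    return store["urn:graph:original"] | store["urn:graph:smushed"]
```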

In gnowsis we also have the problem of incremental smushing: we crawl thousands of emails per day and then want to smush the persons in the addresses, but only those of the new messages.
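One way to avoid rescanning the whole store for every batch - an assumption on my part, not something settled in gnowsis - is to keep a persistent index from (IFP, value) to the canonical subject chosen earlier, and smush only the new triples against it:

```python
def smush_incremental(index, new_triples, ifps):
    """Smush a new batch against a persistent index.

    index: dict mapping (ifp, value) -> canonical subject, kept across batches
    """
    mapping = {}  # new subject -> previously chosen canonical subject
    for s, p, o in new_triples:
        if p in ifps:
            key = (p, o)
            if key in index:
                mapping[s] = index[key]  # value seen before: reuse canonical
            else:
                index[key] = s           # first sighting becomes canonical
    out = set()
    for s, p, o in new_triples:
        canonical = mapping.get(s, s)
        out.add((canonical, p, o))
        if canonical != s:
            out.add((s, "owl:sameAs", canonical))
    return out
```

Each day's crawl then only touches the index entries for the addresses it actually saw.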


I have also posted this algorithm in the ESW wiki, where you can comment on it.