WLAN im Zug

Today I went to Düsseldorf to apply for my visa for the United Kingdom at the British embassy and, if it works out, to pick it up right away. I had expected quite a lot -- plenty of horror stories are told about bureaucracy, battles over formalities, arrogance and arbitrariness. But nothing of the sort happened (at least not during the application this morning). Extremely polite, accommodating and helpful.

However, I now have to kill some time (in order to pick up the finished visa in the afternoon). Since I wanted to use the time to work, and was in the area anyway, I thought I would try out Deutsche Bahn's new pilot project, WLAN on the ICE. So I went to Dortmund in order to take the ICE from there towards Munich as far as Cologne. Unfortunately I caught the wrong train after all, apparently not an ICE3, and I simply cannot find out which trains actually have WLAN now.

Too bad. Oh well, so I sat down in Düsseldorf and got a bit of work done here. And then it's back to the consulate. I was quite cheeky and applied for a five-year visa right away (instead of the usual half year). Let's see what comes of it... an unknown (almost) compatriot whom I ran into by chance at the consulate explained to me that this would never work, and that the whole thing is always arbitrary anyway; after all, he has experience with it, since he has already had to renew several times. Well, with that attitude, no wonder there are problems. I refrained from pointing out to him that his two claims ("it's arbitrary" and "it will never work") contradict each other.

Well, let's see what comes of it. Just a pity that the WLAN thing didn't work out.

WWW2006 social wiki

18 May 2006

The WWW2006 conference next week has a social wiki. So people can talk about evening activities, about planning BOF-Sessions, about their drinking habits. If you're coming to the conference, go there, make a page for yourself. I think it would be fun to capture the information, and to see how much data we can get together... data? Oh, yes, forgot to tell you: the WWW2006 wiki is running on Semantic MediaWiki.

Yay!

Let's show how cool this thing can get!

Wachstumsmodelle

New week, new nutkidz, otherwise not much news to report.

As you know, this year's project is Project 100,000 - the counter is supposed to register the 100,000th visitor to this site. Since playing around with numbers is really fun for me, I simply ran a few projections to see how successfully I am progressing with this plan. Here are a few results from these calculations (anyone who is bored by numbers, or has had enough of numbers since Sunday because of Wiesbaden, Hanover and Lisbon, may continue on their way through cyberspace right now).

In total I set up five different models to answer the question of how the visitor numbers need to develop over the year so that, starting from 20,000 visitors at the beginning of the year, the 100,000 are reached by the end of the year. The five models are a linear model (assuming the same number of visitors over the whole year), two models with linear growth of the visitor numbers (L-Growth 1 and 2, which differ in their parameter choices), a model assuming quadratic growth of the visitor numbers over the year (Q-Growth), and finally the fantastic gut-feeling model. And the great thing: except for the linear model, the predictions of all models have been more than met!

On February 1 we actually had around 24,200 visitors - here are the predictions: linear 26,820, L-Growth 1 22,361, L-Growth 2 22,777, Q-Growth 22,100 and gut feeling 23,000. Surprising: despite the elaborate mathematical calculation methods, the gut-feeling approach has still been the most accurate!

All right, enough numbers for now. Have fun, and see you next time!

War in the shadows

A few years ago I learned with shock and surprise that in the 1960s and 1970s Croatians were assassinated by the Yugoslav secret service in other countries, such as Germany, and that the German government back then chose to mostly look away. That upset me. In the last few weeks I listened to a number of podcasts that went into more detail about these events, and it turned out that some of those murdered Croatians were entangled with the WW2 fascist Croatian Ustasha regime -- either by being Ustasha themselves, or by actively working towards recreating the Ustasha regime in Croatia.

Some of the people involved were actively pursuing terrorist acts - killing diplomats and trying to kill politicians, hijacking and possibly downing airplanes, bombing cinemas, and even attempting an actual armed uprising.

There was a failed attempt to plant seventeen bombs along the Croatian Adriatic coast, on tourist beaches, during the early tourist season, and to detonate them all simultaneously, in order to choke off Yugoslavia's income from tourism.

Germany itself struggled with these events: its own secret service was tasked with protecting the German state, and it was initially unclear even how to deal with organizations whose goal was to destabilize a foreign government. Laws and rules were changed in order to deal with the Croatian extremists, rules that were later applied to the PLO, IRA, Hamas, etc.

Knowing a bit more of the background, where it seems that a communist regime was assassinating fascists and terrorists, does not excuse these acts, nor the German inactivity. These were political assassinations without due process. But it makes it a bit more understandable why the German post-Nazi administration, which at the time was busy with its own wave of terror by the Rote Armee Fraktion (RAF), did not give more attention to these events. And Germany received some of its due when Yugoslavia captured some of the kidnappers and murderers of Hanns Martin Schleyer and did not extradite them to Germany, but let them go, because Germany did not agree to hand over Croatian separatists in return.

Croatians had a very different reputation in the 1970s than they have today.

I still feel like I have a very incomplete picture of all of these events, but so many things happened that I had no idea about.

Source podcasts in German

Was für ein Zufall!

I write to a colleague in the Netherlands. He replies that he is soon moving to Barcelona, to a new job. Not two minutes later, Sixt sends me an email with a special offer: hotel and rental car for three days in Barcelona for just X euros.

What a coincidence!

Web Conference 2019

25 May 2019

Last week saw the latest incarnation of the Web Conference (previously known as WWW or dubdubdub), going from May 15 to 17 (with satellite events the two days before). When I was still in academia, WWW was one of the most prestigious conference series for my research area, so when it came to be held literally across the street from my office, I couldn’t resist going to it.

The conference featured two keynotes (the third, by Lawrence Lessig, was cancelled on short notice due to a family emergency):

Watch the talks on YouTube on the links given above. Thanks to Marco Neumann for pointing to the links!

The conference was attended by more than 1,400 people (closer to 1,600?), making it the second largest since its inception (trailing only Lyon from last year), and about double the size it was only four or five years ago. The conference dinner in the Exploratorium was relaxed and enjoyable. The acceptance rate was 18%, which made for 225 accepted full papers.

The proceedings are available for free (yay!), so browse them for papers you find interesting. Personally, I really enjoyed the papers that looked into the use of WhatsApp to spread misinformation before the Brazil election, Dataset Search, and pre-empting SPARQL queries from blocking the endpoint. The proceedings span 5,047 pages, and are available online.

I had the feeling that machine learning took up much more space in the program than it did when I used to attend the conference regularly - which is fine, but many of the ML papers were only tenuously connected to the Web (which was the same criticism that we raised against many of the Semantic Web / Description Logic papers back then).

Thanks to the general chairs, Leila Zia and Ricardo Baeza-Yates, for organizing the conference, and thanks to the sponsors, particularly Microsoft, Bloomberg, Amazon, and Google.

The two workshops I attended before the Web Conference were the Knowledge Graph Technology and Applications 2019 workshop on Monday, and the Wiki workshop 2019 on Tuesday. They have their own trip reports.

If you have trip reports, let me know and I will link to them.

Welcome!

Welcome to my new blog! Technology kindly provided by Blogger.com

Weltuntergang

Yesterday was 6.6.6.

And the world did not end after all. For some numerologists, that must have felt like the end of the world.

(By the way, this was already the second time we had a 6.6.6. A thousand years ago, the end of the world didn't work out either.)

Wenn Nerds protestieren

Who would even understand a sign like that?

When will we see something like this at the student strikes over here? (And anyway, what does it mean when students strike? That they stop working?)

Wetten, dass...

"Have you seen the front page? Yet another story about September 11..."
"You know, as long as it's still about 2001, everything is fine."
"That's just for the media. Do you think they'll strike again?"
"On the anniversary?"
"Yes. That would be something."
"I don't know. Surely that's too predictable. Everyone will be prepared."
"Exactly! Imagine they pull it off! That would show just how big they are."
"Yeah, sure, but I don't think it will work."
"I would bet..."
"What?!"
"50 euros. On September 11 there will be another attack."
"This year?"
"Yes."
"I don't know..."
"All right, I only win if there are more than 200 dead, otherwise it doesn't count."
"Hmm... deal."

* * *

... the police were distracted by a bomb alert at the main station ... because of a forgotten suitcase ... a terrorist background ... a series of explosions shook the Brandenburg Gate before it finally collapsed ... first estimates speak of more than 100 dead ... they want war! Then we'll give them war! ... nobody would have expected that Germany, of all places ... the federal government was taken to safety ... the whole nation could watch as the Brandenburg Gate ... Al-Qaida is suspected ... a mosque was attacked ... live link to Washington ... unconditional solidarity with our German brothers and sisters ... deep dismay ... the sole purpose of the successive explosions was to give the TV crews enough time ... confirmed that so far there have been 200 fatalities, and one more woman is still fighting for her life ...

* * *

He lit a cigarette. His fingers were trembling. He stood in the stairwell of the hospital. Here he was allowed to smoke. And to enjoy the quiet. The constant beep, beep, beep was wearing on his nerves. His mother. His mother was fighting for her life.
The door opened. His best friend stood there, looking a bit pale. He took out his wallet and pressed two twenties into his hand.
"Still ten short. You'll get them tomorrow at the club."

He walked down the stairs. The door below shut with a loud, deep thud.

What is a good ontology?

You know? Go ahead, tell me!

I really want to know what you think a good ontology is. And I will make it the topic of my PhD: Ontology Evaluation. But I want you to tell me. And I am not the only one who wants to know. That's why Mari Carmen, Aldo, York and I have submitted a proposal for a workshop on Ontology Evaluation, and happily it got accepted. Now we can officially ask the whole world to write a paper on that issue and send it to us.

The EON2006 Workshop on Evaluation of Ontologies for the Web - 4th International EON Workshop (that's the official title) is co-located with the prestigious WWW2006 conference in Edinburgh, UK. We were also very happy that so many renowned experts accepted our invitation to the program committee, thus ensuring a high quality of reviews for the submissions. The deadline is almost two months away: January 10th, 2006. So you have plenty of time to write that mind-busting, fantastic paper on Ontology Evaluation by then! Get all the details on the Workshop website http://km.aifb.uni-karlsruhe.de/ws/eon2006.

I really hope to see some of you in Edinburgh next May, and I am looking forward to lively discussions about what makes an ontology a good ontology (by the way, if you plan to submit something, I would love to get a short notification, that would really be great. But it is by no means required. It's just so that we can plan a bit better).

What's DLP?

OWL has some sublanguages which are all more or less connected to each other, and they make the mumbo-jumbo of ontology languages not any clearer. There is the almighty OWL Full, there's OWL DL, the easy* OWL Lite, and then there are numerous 'proprietary' extensions, which are more (OWL-E) or less (OWL Flight) compatible and useful.

We'd like to add another one, OWL DLP. Not because we think that there aren't enough already, but because we think this one makes a difference: it has some nice properties, like being fully translatable to logic programs, it is easy to use, it is fully compatible with standard OWL, and you don't have to use any extra tools.

If you want to read more, some colleagues at the AIFB and I wrote a short introduction to DLP (and the best thing is: when I say short, I mean short. Just two pages!). It's meant to be easy to understand as well - but if you have any comments on that, please provide them.

 * whatever easy means here

What's in a name - Part 1

There are tons of mistakes that can occur when writing down RDF statements. I will post a six-part series of blog entries, starting with this one, about what can go wrong in the course of naming resources, why it is wrong, and why you should care - if at all. I'll try to mix experience with pragmatics, usability with philosophy. And I surely hope that, if you disagree, you'll do so in the comments or in your own blog.

The first one is the easiest to spot. Here we go:

"Politeia" dc:creator "Plato".

If you don't know about the differences between Literals, QNames and URIs, please take a look at the RDF Primer. It's easy to read and absolutely essential. If you know about the differences, you already know that the statement above actually isn't valid RDF: you can't have a literal as the subject of a statement. So, let's change this:

philo:Politeia dc:creator "Plato".

What's the difference between these two? In the first one you say that "Plato" is the creator of "Politeia" (we take the semantics of dc:creator for granted for now). But in the second you say that "Plato" is the creator of philo:Politeia. That's like in Dragonheart, where Bowen tries to find a name for the dragon because he can't just call him "dragon", and he decides on "draco". The dragon comments: "So, instead of calling me dragon in your own language, you decide to call me dragon in another language."

Yep, we decide to talk about Politeia in another language. Because RDF is another language. It tries to look like ours, it even has subjects, objects, predicates, but it is not the language of humans. It is (mostly) much easier, so easy in fact even computers can cope with it (and that's about the whole point of the Semantic Web in the first place, so you shouldn't be too surprised here).

"Politeia" has a well defined meaning: it is a literal (the quotation marks tell you that) and thus it is interpreted as a value. "Politeia" actually is just a word, a symbol, a sign pointing to the meant string Politeia (a better example would be: "42" means the number 42. "101010b", "Fourty-Two" or "2Ah" would have been perfectly valid other signs denoting the number 42).

And what about philo:Politeia? How is it different from "Politeia", what does this point to?

philo:Politeia is a Qualified Name (QName), and thus ultimately a short-hand notation for a URI, a Uniform Resource Identifier. In RDF, everything has to be a resource (well, remember, RDF stands for Resource Description Framework), but that's not really a constraint, as you may simply consider everything a resource. Even you and me. And URIs are names for resources. Universally (well, at least globally) unique names. Like philo:Politeia.
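
To make the difference tangible, here is a minimal sketch using Python's rdflib (the philo namespace URI below is made up for this example; any URI space would do):

from rdflib import Graph, Literal, Namespace

# Hypothetical namespace standing in for philo: -- any URI space you control would do.
PHILO = Namespace("http://example.org/philo/")
DC = Namespace("http://purl.org/dc/elements/1.1/")

g = Graph()
g.bind("philo", PHILO)
g.bind("dc", DC)

# philo:Politeia is a resource (a URI reference); "Plato" is a plain literal value.
g.add((PHILO.Politeia, DC.creator, Literal("Plato")))

print(g.serialize(format="turtle"))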

You may wonder about what your URI is, the one URI denoting you. Or what the URI of Plato is, or of the Politeia? How to choose good URIs, and what may go wrong? And what do URIs actually denote, and how? We'll discuss this all in the next five parts of this series, don't worry, just stay tuned.

What's in a name - Part 2

How to give a resource a name, an URI? Let's look at this statement:

movie:Terminator dc:creator "James Cameron".

Happy with that? This is a valid RDF statement, and you understand what I wanted to say, and your RDF machine will be able to read and process it, too, so everything is fine.

Well, almost. movie:Terminator is a QName, and movie: is just a shorthand prefix, a namespace, that actually has to be defined as something. But as what? URIs are well-defined, so we shouldn't just define the namespace arbitrarily. The problem is that someone else could do the same, and suddenly one URI could denote two different resources - this is called URI collision, and it is the next worst thing to immanentizing the Eschaton. That's why you should grab some URI space for yourself, and there you go: you may define as many URIs there as you like (remember, URIs are meant to be universally unique, which is why they make such a fuss about URI space and its ownership). The short sketch below shows what such a prefix amounts to.
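
As a small illustration of what a prefix declaration amounts to, here is a sketch using Python's rdflib (the example.org namespace is only a placeholder, not the namespace used later in this post):

from rdflib import Namespace, URIRef

# A namespace in a URI space you control -- example.org is only a placeholder here.
MOVIE = Namespace("http://example.org/movie/")

# The QName movie:Terminator is just shorthand for one globally unique URI:
print(MOVIE.Terminator)                                                   # http://example.org/movie/Terminator
print(MOVIE.Terminator == URIRef("http://example.org/movie/Terminator"))  # True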

I am the webmaster of http://semantic.nodix.net, and this URI belongs to me and, with it, all the URIs starting with it. Thus I decide that movie: shall be http://semantic.nodix.net/movie/. Our example statement thus is the same as:

http://semantic.nodix.net/movie/Terminator http://purl.org/dc/elements/1.1/creator "James Cameron".

So this is actually what the computer sees. The short-hand notation above is just for humans. But if you're like me, and you see the above subject, you're already annoyed that it is not a link, that you can't click on it. So you copy it into your browser address bar, and go to http://semantic.nodix.net/movie/Terminator. Oops. A 404, the website is not found. You start thinking: oh man, how stupid! Why give the resource a name that looks so much like a web address, and then point it into 404 nirvana?

Many think so. That's because they don't grasp the difference between URIs and URLs, and to be honest, this difference is maybe the worst idea the W3C ever had (that's a hard-to-achieve compliment, considering the introduction of XML/RDF-serialisation and XSD). We will return to this difference, but for now, let's see what usually happens.

Because http://semantic.nodix.net/movie/Terminator leads to nowhere, and I'm far too lazy to make a website for the Terminator just for this example, we will take another URI for the movie. Jumping to IMDb we quickly find the appropriate one, and then we can reformulate our statement:

http://www.imdb.com/title/tt0088247/ http://purl.org/dc/elements/1.1/creator "James Cameron".

Great! Our subject is a valid URI, clicking on http://www.imdb.com/title/tt0088247/ (or pasting it to a browser) will tell you more about the subject, and we have a valid RDF statement. Everything is fine again...

...until next time, where we will discuss the minor problems of our solution.

What's in a name - Part 3

Last time we merrily published our first statement for the Semantic Web:

http://www.imdb.com/title/tt0088247/ http://purl.org/dc/elements/1.1/creator "James Cameron".

A fellow Semantic Web author didn't like the number-encoded IMDb URI, but found a much more compelling one and then published the following statement:

http://en.wikipedia.org/wiki/The_Terminator http://purl.org/dc/elements/1.1/date "1984-10-26".

A third one sees those and, in order to foster data integration, helpfully offers the following statement:

http://www.imdb.com/title/tt0088247/ owl:sameAs http://en.wikipedia.org/wiki/The_Terminator.

And now they live merrily ever after. Or do you hear the thunder of doom rolling?

The problem is that the URIs above actually already denote something, namely the IMDb website about the Terminator and the Wikipedia article on the Terminator. They do not denote the movie itself, but that's how they're used in our examples. Statement #3 above actually says the two websites are the same. The first one says that "James Cameron" created the IMDb website on the Terminator (they wish), and the second one says that the Wikipedia article was created in 1984, which is wrong (July 23, 2001 would be the correct date). We have a classic case of URI collision.
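
To see the collision in one place, here is a sketch with Python's rdflib that writes down exactly those three statements; the comments spell out what they literally claim:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL

DC = Namespace("http://purl.org/dc/elements/1.1/")
imdb_page = URIRef("http://www.imdb.com/title/tt0088247/")
wiki_page = URIRef("http://en.wikipedia.org/wiki/The_Terminator")

g = Graph()
# Read literally: "James Cameron" created the IMDb page...
g.add((imdb_page, DC.creator, Literal("James Cameron")))
# ...the Wikipedia article was created in 1984...
g.add((wiki_page, DC.date, Literal("1984-10-26")))
# ...and the two web pages are one and the same resource.
g.add((imdb_page, OWL.sameAs, wiki_page))

print(g.serialize(format="turtle"))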

This happens all the time. People working professionally on this do this too:

_person foaf:interest http://dmoz.org/Computers/Security/.

I'd bet that _person (remaining anonymous here) does not have such a heavy interest in the website http://dmoz.org/Computers/Security/ itself, but rather in the topic the website is about.

_person foaf:interest _security.
http://dmoz.org/Computers/Security/ dc:subject _security.

Instead of letting _security be anonymous, we'd rather give it a real URI. This way we can reference it later.

_person foaf:interest http://semantic.nodix.net/topic/security.
http://dmoz.org/Computers/Security/ dc:subject http://semantic.nodix.net/topic/security.

But, oh pain - now we're exactly at the same spot we were in at the end of the last part. We have a URI that does not dereference to a website (by the way, I do know that the definition of foaf:interest actually says that the semantics of foaf:interest is that the subject is interested in the topic of the object, and not the object itself, but that's not my point here).
Thinking about it for a moment, we must conclude that it is actually impossible to achieve both goals: either the URIs identify a resource retrievable over the web and are thus unsuitable as URIs for entities outside the web (like persons, chairs and such) because of URI collision, or they don't - and will then lead to 404-land.

Isn't there any solution? (Drums) Stay tuned for the next exciting installment of this series, introducing not one, not two, not three, but four solutions to this problem!

What's in a name - Part 4

I promised you four solutions to the problem of naming things with appropriate URIs. So, without further ado, let's go.

The first one you've seen already. It's using anonymous nodes.

_person foaf:interest _security.
http://dmoz.org/Computers/Security/ dc:subject _security.

But here we run into the problem that we can't reference _security from outside, thus losing a lot of the possibilities inherent in the Semantic Web, because this way you cannot say that someone else is interested in the same topic as _person above. Even if you say, in another RDF file,

_person2 foaf:interest _security.
http://dmoz.org/Computers/Security/ dc:subject _security.

_security actually does not have to be the same as above. Who says websites only have one subject? The coincidental equality of the variable name _security carries as much semantics as the equality of two variables named x in a C program and a Python program; the sketch below makes the problem concrete.
So this solution, although possible, has too many shortcomings. Let's move on.
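
Before moving on, here is the promised sketch, using Python's rdflib: two files each use their own anonymous node for the security topic, and merging them leaves two distinct topics.

from rdflib import Graph, BNode, Namespace, URIRef

FOAF = Namespace("http://xmlns.com/foaf/0.1/")
DC = Namespace("http://purl.org/dc/elements/1.1/")
dmoz = URIRef("http://dmoz.org/Computers/Security/")

# Two separate RDF files, each with its own anonymous _security node.
g1, g2 = Graph(), Graph()
security1, security2 = BNode(), BNode()
g1.add((BNode(), FOAF.interest, security1))
g1.add((dmoz, DC.subject, security1))
g2.add((BNode(), FOAF.interest, security2))
g2.add((dmoz, DC.subject, security2))

# Merging the two graphs does not merge the blank nodes:
merged = g1 + g2
print(len(set(merged.objects(dmoz, DC.subject))))  # 2 -- two distinct "security" topics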

The second solution is hardly available to the majority of us puny mortals: introducing a new URI scheme. Let's return to our very first example, where we wanted to say that the Politeia was written by Plato.

urn:isbn:0192833707 dc:creator "Plato".

Great! No problems here. Sure, your web-browser can't (yet) resolve urn:isbn:0192833707, but no ambiguity here: we know exactly of what we speak.

Do we? Incidentally, urn:isbn:0465069347 also denotes the Politeia. No, not in another language (those would be another handful of ISBN numbers), just a different version (the text is public domain). Now, does the following statement hold?

urn:isbn:0192833707 owl:sameAs urn:isbn:0465069347.

Most definitely not. They have different translators. They have different publishers. These are different books. But it's the same - what? What is the same? It's not the same text. It's not the same book. They may have the same source text they are translated from. But how do we express this correctly and still usefully?

The urn:isbn: scheme is very useful for a very special kind of entity - published books, even the different versions of published books.
The problem with this solution is that you would need tons of schemes. Imagine the number of committees! This would, no, this should never happen. We definitely need an easier solution, although this one certainly does work for very special domains.

Let's move on to the third solution: the magic word is fragment identifier. #. Instead of saying:

http://semantic.nodix.net/Politeia dc:creator http://semantic.nodix.net/Plato.

and thus getting 404s en masse, I just say:

http://semantic.nodix.net/#Politeia dc:creator http://semantic.nodix.net/#Plato.

See? No 404. You get to the homepage of this blog by clicking there. And it's valid RDF as well. So, isn't it just perfect? Everything we wished for?

Not totally, I fear. If I click on http://semantic.nodix.net/#Plato, I actually expect to read something about Plato, and not to see a blog about the Semantic Web. So this somehow would disappoint me. Better than a 404, still...
The other point is my bandwidth. There can be RDF files with thousands of references. Following every single one will lead to considerable bandwidth abuse. For naught, as there is no further information about the subject on the other side. Maybe using http://semantic.nodix.net/person#Plato would solve both problems, with http://semantic.nodix.net/person being a website saying something like "This page is used to reserve conceptual space for persons. To understand this, you must understand the magic of URIs and the Semantic Web. Now, go back wherever you came from and have a nice day." Not too much webspace and bandwidth will be used for this tiny HTML page.
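
That the fragment never burdens the server can be seen directly: a client strips it off before making the request. A tiny sketch in Python:

from urllib.parse import urldefrag

uri = "http://semantic.nodix.net/person#Plato"
page, fragment = urldefrag(uri)

# Only the part before the '#' is ever requested from the server;
# the fragment stays on the client side.
print(page)      # http://semantic.nodix.net/person
print(fragment)  # Plato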

You should be careful, though, not to have a real fragment identifier "Plato" in the page, or the URI would actually dereference to that element. URI collision again. You don't want Plato to become half philosopher, half XML element, do you?

We will return to fragment identifiers in the last part of this six part series again. And now let's take a quick look at the fourth solution - we will discuss it more thoroughly next time.

Use a fresh URI whenever you need an URI and don't care about it giving a 404.

What's in a name - Part 5

After calling Plato an XML-Element, making movies out of websites and having several accidents with careless URIs, it seems we return to the very beginning of this series.

http://semantic.nodix.net/document/Politeia dc:creator "Plato".

Whereby http://semantic.nodix.net/document/Politeia explicitly does not resolve but returns a 404, resource not found. Let's remember, why didn't we like it? Because humans, upon seeing this, have the urge to click on it in order to get more information about it. A pretty good argument, but every solution we tried brought us more or less trouble. We didn't get happy with any of them.

But how can I dismiss such an argument? Don't I risk losing focus by saying "don't care about humans going nowhere"? No, I really don't think so, for two reasons: one concerning humans and one concerning machines.

First the humans (humans always should go first; remember this, Ms and Mr PhD student): humans actually never see this URI (or at least should not, except when debugging). URIs that will grace the GUI should have an rdfs:label which provides the label human users will see when working with this resource. Let's be honest: only geeks like us think that http://semantic.nodix.net/document/Politeia is a pretty obvious and easy name for a resource. Normal humans would probably prefer "Politeia", or even "The Republic" (which is the usual name in English-speaking countries). Or be able to define their own name.

As they don't see the URI, they actually never feel the urge to click on it, or to copy and paste it to the next browser window. Naming it http://semantic.nodix.net/document/Politeia instead of http://semantic.nodix.net/concept/1383b_xc is just for the sake of readability of the source RDF files, but actually you should not derive any information out of the URI (that's what the standard says). The computer won't either.

The second point is that an RDF application shouldn't look up URIs either. It's just wrong. URIs are just names; it is important that they remain unique, but they are not there to be looked up in a browser. That's what URLs are for. It's a shame they look the same. Mozilla realised the distinction when they gave their XUL language the namespace http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul. Application developers should realise this too. rdfs:seeAlso and rdfs:isDefinedBy give explicit links applications may follow to get more information about a resource, and using owl:imports actually forces this behaviour - but the name does not.
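
A minimal sketch of what that looks like in practice, using Python's rdflib (the rdfs:seeAlso target is just an example link, and the label is the English name mentioned above):

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDFS

politeia = URIRef("http://semantic.nodix.net/document/Politeia")

g = Graph()
# The label is what a user interface shows; nobody needs to look up the URI itself.
g.add((politeia, RDFS.label, Literal("The Republic", lang="en")))
# An explicit pointer that applications may follow for more information.
g.add((politeia, RDFS.seeAlso, URIRef("http://en.wikipedia.org/wiki/Republic_(Plato)")))

print(g.serialize(format="turtle"))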

Getting information out of names is like making fun of names. It's mean. Remember the in-kids in primary school making fun of out-kids because of their names? You know you're better than that (and, being a geek, you probably were an out-kid, so mere compassion and fond memories should hold you back too).

Just to repeat it explicitly: if an URI gives back a 404 when you put it in a browser navigation bar - that's OK. It was supposed to identify resources, not to locate them.

Now you know the difference between URIs and URLs, and you know why avoiding URI collision is important and how to avoid it. We'll wrap it all up in the final instalment of the series (tomorrow, I sincerely hope) and give some practical hints, too.

By the way, right after the series I will talk about content negotiation, which was mentioned in the comments and in e-Mails.

Uh, and just another thing: the wary reader (and every reader should be wary) may also have noticed that

Philosophy:Politeia dc:creator "Plato".

is total nonsense: it says that there is a resource (identified by the QName Philosophy:Politeia) that was created by "Plato". Rest assured that this is wrong - no, not because Socrates should be credited as the creator of the Politeia (that is another discussion entirely), but because the statement claims that the string "Plato" created it - not a person known by this name (who would be a resource that should have a URI). But this mistake is probably the most frequent one in the world of the Semantic Web - a mistake nevertheless.

It's OK if you make it. Most applications will cope with it (and some are actually not able to cope with the correct way). But it would not be OK if you didn't know that you are making a mistake.

What's in a name - Part 6

In this series we learned how to make URIs for entities. I know there's a big discussion flaring up every few weeks or so about whether we should use fragment identifiers or not. For me, this question is pretty much settled. Using a fragment identifier has the advantage of giving you the ability to provide a human-readable page for those few lost souls who look up the URI, so maybe it's a tad nicer than using no fragment identifier and returning 404s. Not using fragids has the advantage of probably reducing bandwidth - but this discussion should be more or less academic, because looking up URIs, as we have seen, should not happen.

There is some talk about different representations, negotiating media types, returning RDF in one case and XHTML in the other, but to be honest, I think that's far too complicated. And you would need to use another web server and extensions to HTTP to make this real, which doesn't really help the advent of the Semantic Web. Look at Nokia's URIQA project for more information.

Keep these rules in mind, and everything should be fine:

  • be careful to use unused URIs if you reference a new entity. Take one from a URI space you have control of, so that URI collision won't occur
  • don't put a website under the URI you used to name an entity. That would lead to URI collision
  • try to make nice-looking URIs, but don't try too hard. They are supposed to be hidden by the application anyway
  • provide rdfs:label and rdfs:seeAlso instead. This solves everything you would want to try to solve with URI naming, but in a standards-compliant way
  • give your resources URIs. Please. So that others can reference them more easily.

I should emphasise the last one more. The RDF/XML syntax in particular easily leads to anonymous nodes, which are a pain in the ass because they are hard or impossible to address. In particular, don't use rdf:nodeID. It doesn't give your node an ID that's visible to the outside world; it is just a local name. Don't use it, please.

The second way to create anonymous nodes is nesting descriptions like this:

<foaf:Person rdf:about="me">
  <foaf:knows>
    <foaf:Person>
      <foaf:name>J. Random User</foaf:name>
    </foaf:Person>
  </foaf:knows>
</foaf:Person>

Actually, the person known to "me" is an anonymous one. You can't refer to her. Again, try to avoid that. If you can, look up the URI the person gave herself in her own FOAF file. Or give her a name in your own URI space. Don't be afraid, you won't run out of it.
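
Here is a sketch of the same FOAF data with proper URIs instead of blank nodes, using Python's rdflib (the example.org URIs are placeholders, minted in a URI space of your own):

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

# Placeholder URIs instead of anonymous nodes.
me = URIRef("http://example.org/people#me")
friend = URIRef("http://example.org/people#jrandom")

g = Graph()
g.add((me, RDF.type, FOAF.Person))
g.add((friend, RDF.type, FOAF.Person))
g.add((me, FOAF.knows, friend))
g.add((friend, FOAF.name, Literal("J. Random User")))

# Serialized as RDF/XML, both persons get rdf:about, so others can refer to them.
print(g.serialize(format="xml"))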

Another very interesting approach is to use published subjects. I will return to this in another blog post, I promise, but until then: never forget that there is owl:sameAs to make two URIs point to the same thing, so don't worry too much if you give something two names.

Well, that's it. I hope you enjoyed the series, and that you learned a bit from it. Looking forward to your comments, and your questions.

White's illusion

I stumbled upon "White's illusion" and was wondering - was it named after a person called White, or was it named because, well, it is an illusion where the colour white plays an important role?

As usual in such cases, I started at Wikipedia's article on White's illusion. But Wikipedia didn't answer that question. The references at the bottom also didn't point to anyone named White. So I started guessing it's about the colour.

But wait! Skimming the article, there was a mention of "White and White (1985)" - but without any further citation information. So not only one White, but two of them!

Google Scholar and Semantic Scholar didn't help me resolve "White and White (1985)" to a proper paper, so I started suspecting that this was a prank someone had slipped into the article. I started checking the other references, but they do indeed reference papers by White! And with those more complete references I was able to find out that Michael White and Tony White wrote that 1985 paper, that they are both Australian, that Michael White wrote a number of other papers about this illusion and others, and that this is Tony White's only paper.

I added some of the info to the article, but that was a weird ride.

Who am I?

Well, this being a blog, it will turn out that what I write is more important than who I am. Just for context, I nevertheless want to offer a short sketch of my bio.

I studied Computer Science and Philosophy at the University of Stuttgart, Germany. In Computer Science, I thought about software architectures, programming languages and user interfaces, and my master's thesis happened to be the first package to offer a validating XML parser for the programming language Ada 95.
In Philosophy I started out thinking a lot about justice, especially John Rawls and Plato, but finally made a strong move towards constructivist epistemology and the ontological status of neural networks (both papers are in German and available from my website).

It's a pretty funny thing that next week I will listen to a talk on neural networks and ontologies again, and nevertheless the paper I wrote back then and the talk won't have too much in common ;-)

So how come I am now working on Semantic Web technologies? I have the incredible luck to work in the Knowledge Management Group of the AIFB in Karlsruhe, and there on the EU SEKT project. I still have a lot to learn, but in the last few weeks I have gained quite a good grasp of ontology engineering, RDF, OWL and some other fields. This is all pretty exciting and amazing, and I am looking forward to seeing what's around the next triple.

Why some are disenchanted

In a comment to my last blog entry, Christopher St John wrote:

"I suffered through the 80's Knowledge Representation fad, both academically in the AI program at Edinburgh and as a practitioner at the only company ever to produce a commercial system written in Prolog (that wasn't a Prolog development system.) So I'm familiar with the problems that the Semantic Web effort is attempting to address. Having slogged through real-life efforts to encode substantial amounts of knowledge, I find some of the misty-eyed musings that surround the Semantic Web effort depressing. That "most information on the Web is designed for human consumption" is seen as an obstacle surmountable via tools like RDF is especially sad. On the other hand, I'm always happy to make use of the cool tools that these sorts of things seem to throw off. There's probably a certain Proverbs 26:11 aspect to it as well."

Thanks for your insightful comment; being new to the field I certainly appreciate a report based on real-life experience - and I have to admit that I have probably been guilty of being misty-eyed about the Semantic Web myself more than once (and probably will be in the future as well).

'"Most information on the Web is designed for human consumption" is seen as an obstacle'. Yes, you are right, this is probably the worst phrased sentence in the Semantic Web vision. Although I think it's somehow true: if you want the computer to help you dealing with today's information overflow, it must understand as much of the information as possible. The sentence should be at least rephrased as "most information on the Web is designed only for human consumption". I think it would be pretty easy to create both human-readable and machine-friendly information with only little overhead. Providing such systems should be fairly easy. But this is only about the phrasing of the sentence - I hope that every Semwebber agrees that the Semantic Web's ultimate goal is to help humans, not machines. But we must help the machines in order to enable them to help us.

The much more important point that Christopher addresses is his own disenchantment with the Knowledge Representation research of the 80s, and probably that of many people with the AI research a generation before. So the Semantic Web may just seem like the third generation of futile technologies trying to solve AI-complete problems.

There were some pretty impressive results from AI and KR, and the Semantic Web people build on them. Some more, some less - some too much even, forgetting the most important component of the Semantic Web along the way: the Web. Yes, you can write whole 15-page papers, submit them to Semantic Web conferences and journals, and not once mention anything web-specific. That's bad, and that is what Christopher, like some researchers, does not see either: the main difference between the work of two decades ago and today's line of investigation. The Web changes it all. I don't know if AI and KR had to fail - they probably did, because there were so many intelligent people working on them that there is no other explanation than that failure was built into the premises of their time. I have no idea if the Semantic Web is bound to fail as well today. I have no idea if we will be able to reach as much as AI and KR did in their time, or less, or maybe even more. I am a researcher. I have no idea if the things I do will work.

But I strongly believe it will and I will invest my time and part of my life towards this goal. And so do dozens of dozens other people. Let's hope that some nice thing will be created in the course of our work. Like RDF.

Why we will win

People keep saying that the Semantic Web is just hype. That we are just an unholy chimaera of undead AI researchers talking about problems solved by the database guys 15 years ago. And that our work will never make any impact on the so-called real world out there.

As I stated before: I'm a believer. I'm even a Catholic, so this means I'm pretty good at ignoring hard facts about reality in order to stick to my beliefs, but it is different in this case: I am slowly starting to comprehend why Semantic Web technology will prevail and make life better for everyone out there. It's simply the next step in the IT RevoEvolution.

Let's remember the history of computing. Shortly after the invention of the abacus the obvious next step, the computer mainframe, appeared. Whoever wanted to work with it had to learn to use this one mainframe model (well, the very first ones were one-of-a-kind machines). Being able to use one didn't necessarily help you use another.

At first the costs of software development were negligible. But slowly this changed, and Fred Brooks wrote down his experience of creating the legendary System/360 in The Mythical Man-Month (a must-read for software engineers), showing how much had changed.

Change was about to come, and it did come twofold. Dennis Ritchie is to blame for both: together with Ken Thompson he made Unix, but in order to make that, he had to make a programming language to write Unix in. This was C, later popularised together with Brian Kernighan through the famous K&R book (this account is overly simplified, look at the history of Unix for a better overview).

Things became much easier now. You could port programs in a simpler way than before, just recompile (and introduce a few hundred #IFDEFs). Still, the masses used the Commodore 64, the Amiga, the Atari ST. Buying a compatible model was more important than looking at the stats. It was the achievement of the hardware development for the PC and of Microsoft to unify the operating systems for home computers.

Then came the dawning of the age of the World Wide Web. Suddenly the operating system became uninteresting; the browser you used was more important. Browser wars raged. And in parallel, Java emerged. Compile once, run everywhere. How cool was that? And after the browser wars ended, the W3C's calls for standards were finally heard.

That's the world as it is now. Working at the AIFB, I see how no one cares what operating system the other has, be it Linux, Mac or Windows, as long as you have a running Java Virtual Machine, a Python interpreter, a Browser, a C++ compiler. Portability really isn't the problem anymore (like everything in this text, this is oversimplified).

But do you think being OS-independent is enough? Are you content with having your programs run everywhere? If so, fine. But you shouldn't be. You should ask for more. You also want to be independent of applications! Take back your data. Data wants to be free, not locked inside an application. After you have written your text in Word, you want to be able to work with it in your LaTeX typesetter. After getting contact information via a Bluetooth connection to your mobile phone, you want to be able to send an email to the contact from your web mail account.

There are two ways to achieve this: the one is with standard data formats. If everyone uses vCard-files for contact information, the data should flow freely, shouldn't it? OpenOffice can read Word files, so there we see interoperability of data, don't we?

Yes, we do. And if it works, fine. But more often than not it doesn't. You need to export and import data explicitly. Tedious, boring, error prone, unnerving. Standards don't happen that easily. Often enough interoperability is achieved with reverse engineering. That's not the way to go.

Using a common data model with well-defined semantics, solving tons of interoperability questions (character sets, syntax, file transfer), and being able to declare semantic mappings with ontologies - just try to imagine that! Applications being aware of each other, speaking a common language - but without standards bodies discussing it for years, defining it statically, unmoving.

There is a common theme in the IT history towards more freedom. I don't mean free like in free speech, I mean free like in free will.

That's why we will win.

Wiki workshop 2019

24 May 2019

Last week, May 14, saw the fifth incarnation of the Wiki workshop, co-located with the Web Conference (formerly known as dubdubdub), in San Francisco. The room was tight and very full - I am bad at estimating, but I guess 80-110 people were there.

I was honored to be invited to give the opening talk, and since I had a bit more time than in the last few talks, I really indulged in sketching out the proposal for the Abstract Wikipedia, providing plenty of figures and use cases. The response was phenomenal, and there were plenty of questions not only after the talk but also throughout the day and in the next few days. In fact, the Open Discussion slot was very much dominated by more questions about the proposal. I found that extremely encouraging. Some of the comments were immediately incorporated into a paper I am writing right now and that will be available for public reviews soon.

The other presentations - both the invited and the accepted ones - were super interesting.

Thanks to Dario Taraborelli, Bob West, and Miriam Redi for organizing the workshop.

A little extra was that I smuggled my brother and his wife into the workshop for my talk (they are visiting, and they have never been to one of my talks before). It was certainly interesting to hear their reactions afterwards - if you have non-academic relatives, you might underestimate how much they may enjoy such an event as mere spectators. I certainly did.

See also the #wikiworkshop2019 tag on Twitter.

Wikidata - The Making of

19 May 2023

Markus Krötzsch, Lydia Pintscher and I wrote a paper on the history of Wikidata. We published it in the History of the Web track at The Web Conference 2023 in Austin, Texas (what used to be called the WWW conference). This spun out of the Ten years of Wikidata post I published here.

The open access paper is available here as HTML: dl.acm.org/doi/fullHtml/10.1145/3543873.3585579

Here as a PDF: dl.acm.org/doi/pdf/10.1145/3543873.3585579

Here on Wikisource, thanks to Mike Peel for reformatting: Wikisource: Wikidata - The Making Of

Here is a YouTube trailer for the talk: youtu.be/YxWs_BS31QE

And here is the full talk (recreated) on YouTube: youtu.be/P3-nklyrDx4

Wikidata crossed 2 billion edits

The Wikidata community edited Wikidata 2 billion times!

Wikidata is, to the best of my knowledge, the first and only wiki to cross 2 billion edits (the second most edited one being English Wikipedia with 1.18 billion edits).

Edit number 2,000,000,000 was adding the first-person plural future of the Italian verb 'grugnire' (to grunt), made by user Luca.favorido.

Wikidata also celebrated 11 years since launch with the hybrid WikidataCon 2023 in Taipei last weekend.

It took from 2012 to 2019 to get the first billion, and from 2019 to now for the second. As they say, the first billion is the hardest.

That the two billionth edit happened right on the birthday is a nice surprise.

Wikidata crossed Q100000000

Wikidata crossed Q100000000 (and, in fact, skipped it and got Q100000001 instead).

Here's a small post by Lydia Pintscher and me: https://diff.wikimedia.org/2020/10/06/wikidata-reaches-q100000000/

Wikidata lexicographic data coverage for Croatian in 2023

Last year, I published ambitious goals for the coverage of lexicographic data for Croatian in Wikidata. My self-proclaimed goal was missed by a wide margin: I wanted to go from 40% coverage to 60% -- instead, thanks to the help of contributors, we reached 45%.

We grew from 3,124 forms to 4,115, i.e. almost a thousand new forms, or about 31%. The coverage grew from around 11 million tokens to about 13 million tokens in the Croatian Wikipedia, or, as said, from 40% to 45%. The share of covered forms grew from 1.4% to 1.9%, which neatly illustrates the increasing difficulty of gaining more coverage (thanks to Zipf's law): last year, we increased covered forms by 1%, which translated into an overall increase in covered occurrences of 35%. This year, although we increased the covered forms by another 0.5%, we only got an overall increase in covered occurrences of 5%.
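
To make the difference between form coverage and occurrence coverage concrete, here is a toy sketch (the words and numbers are made up; this is not the actual analysis pipeline):

from collections import Counter

def coverage(tokens, covered_forms):
    # Token coverage: share of running occurrences whose form is covered (the 45% figure).
    # Form coverage: share of distinct forms that are covered (the 1.9% figure).
    counts = Counter(tokens)
    covered_occurrences = sum(n for form, n in counts.items() if form in covered_forms)
    token_coverage = covered_occurrences / sum(counts.values())
    form_coverage = sum(1 for form in counts if form in covered_forms) / len(counts)
    return token_coverage, form_coverage

# Covering just the frequent form "je" already covers most occurrences (Zipf's law).
print(coverage(["je", "je", "je", "kuća", "mačka"], {"je"}))  # (0.6, 0.333...)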

But some of my energy was diverted from adding more lexicographic data to adding functions that help with adding and checking lexicographic data. We launched a new project, Wikifunctions, that can hold functions. There, we collected functions to create the regular forms for Croatian nouns. All nouns are now covered.

I think that's still a great achievement and great progress. Sure, we didn't meet the 60%, but the functions helped a lot in getting to the 45%, and they will continue to benefit us in 2024 too. Again, I want to declare some goals, at least for myself, but not as ambitious with regard to coverage: the goal for 2024 is to reach 50% coverage of Croatian, and in addition, I would love us to have Lexeme forms available for verbs and adjectives, not only for nouns (for verbs, Ivi404 did most of the work already), and maybe even have functions ready for adjectives.

Wikidata or scraping Wikipedia

Yesterday I was pointed to a blog post describing how to answer an interesting question: how many generations is it from Alfred the Great to Elizabeth II? Alfred the Great was a king in England at the end of the 9th century, and Elizabeth II is the current Queen of England (and a bit more).

The author of the blog post, Bill P. Godfrey, describes in detail how he wrote a crawler that started downloading the English Wikipedia article of Queen Elizabeth II, and then followed the links in the infobox to download all her ancestors, one after the other. He used a scraper to get the information from the Wikipedia infoboxes from the HTML page. He invested quite a bit of work in cleaning the data, particularly doing entity reconciliation. This was then turned into a graph and the data analyzed, resulting in a number of paths from Elizabeth II to Alfred, the shortest being 31 generations.

I honestly love these kinds of projects, and I found Bill’s write-up interesting and read it with pleasure. It is totally something I would love to do myself. Congrats to Bill for doing it. Bill provided the dataset for further analysis on his Website. Thanks for that!

Everything I say in this post is not meant, in any way, as a criticism of Bill. As said, I think he did a fun project with interesting results, and he wrote a good write-up and published his data. All of this is great. I left a comment on the blog post sketching out how Wikidata could be used for similar results.

He submitted his blog post to Hacker News, where a, to me, extremely surprising discussion ensued. He was pointed rather naturally and swiftly to Wikidata and DBpedia. DBpedia is a project that started and invested heavily in scraping the infoboxes from Wikipedia. Wikidata is a sibling project of Wikipedia where data can be directly maintained by contributors and accessed in a number of machine-readable ways. Asked why he didn’t use Wikidata, he said he didn’t know about it. All fair and good.

But some of the discussions and comments on Hacker News surprised me entirely.

Expressing my consternation, I started discussions on Twitter and on Facebook. And there were some very interesting stories about the pain of using Wikidata, and I very much expect us to learn from them and hopefully make things easier. The number of API queries one has to make in order to get data (although, these numbers would be much smaller than with the scraping approach), the learning curve about SPARQL and RDF (although, you can ignore both, unless you want to use them explicitly - you can just use JSON and the Wikidata API), the opaqueness of the identifiers (wdt:P25 wd:Q9682 instead of “mother” and “Queen Elizabeth II”) were just a few. The documentation seems hard to find, and there seems to be a lack of libraries and APIs that are easy to use. And yet, comments like "if you've actually tried getting data from wikidata/wikipedia you very quickly learn the HTML is much easier to parse than the results wikidata gives you" surprised me a lot.

Others asked about the data quality of Wikidata, and complained about the huge amount of bad data, duplicates, and the bad ontology in Wikidata (as if Wikipedia didn’t have these problems. I mean, how do you figure out what a Wikipedia article is about? How do you get a list of all bridges or events from Wikipedia?)

I am not here to fight. I am here to listen and to learn, in order to help figuring out what needs to be made better. I did dive into the question of data quality. Thankfully, Bill provides his dataset on the Website, and downloading the query result for the following query - select * { wd:Q9682 (wdt:P25|wdt:P22)* ?p . ?p wdt:P25|wdt:P22 ?q } - is just one click away. The result of this query is equivalent to what Bill was trying to achieve - a list of all ancestors of Elizabeth II. (The actual query is a little bit more complex, because we also fetch the names of the ancestors, and their Wikipedia articles, in order to help match the data to Bill’s data).
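
For readers who want to reproduce this, here is a minimal sketch of running the simple version of that query against the public Wikidata SPARQL endpoint (it assumes Python's requests library; the User-Agent string is just a placeholder, and the wd:/wdt: prefixes are predefined by the endpoint):

import requests

# The ancestor query from above.
query = """
SELECT * WHERE {
  wd:Q9682 (wdt:P25|wdt:P22)* ?p .
  ?p wdt:P25|wdt:P22 ?q .
}
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "ancestry-example/0.1 (placeholder contact)"},
)
bindings = response.json()["results"]["bindings"]
print(len(bindings))  # one row per parent/child pair among Elizabeth II's ancestors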

I would claim that I invested far less work than Bill in creating my graph data. No data cleansing, no scraping, no crawling, no entity reconciliation, no manual checking. How about the quality of the two datasets?

Update: Note, this post is not a tutorial to SPARQL or Wikidata. You can find an explanation of the query in the discussion on Hacker News about this post. I really wanted to see how the quality of the data using the two approaches compares. Yes, it is an unfamiliar language for many, but I used to teach SPARQL and the basics of the languages seem not that hard to learn. Try out this tutorial for example. Update over

So, let’s look at the datasets. I will refer to the two datasets as the scrape (that’s Bill’s dataset) and Wikidata (that’s the query result from Wikidata, as of the morning of August 20 - in particular, none of the errors in Wikidata mentioned below have been fixed).

In the scrape, we find 2,584 ancestors of Elizabeth II (including herself). They are connected with 3,528 parenthood relationships.

In Wikidata, we find 20,068 ancestors of Elizabeth II (including herself). They are connected with 25,414 parenthood relationships.

So the scrape only found a bit less than 13% of the people that Wikidata knows about, and close to 14% of the relationships. If you ask me, that’s quite a bad recall - almost seven out of eight ancestors are missing.

Did the scrape find things that are missing in Wikidata? Yes. 43 ancestors are in the scrape which are missing in Wikidata, and 61 parenthood relationships are in the scrape which are missing from Wikidata. That’s about 1.8% of the data in the scrape, or 0.24% compared to the overall parent relationship data of Elizabeth II in Wikidata.

I evaluated the complete list of those relationships from the scrape missing from Wikidata. They fall into five categories:

  • Category 1: Errors that come from the scraper. 40 of the 61 relationships are errors introduced by the scrapers. We have cities or countries being parents - which isn’t too terrible, as Bill says in the blog post, because they won’t have parents themselves and won’t participate in the original question of finding the lineage from Alfred to Elizabeth, so no problem. More problematic is when grandparents or great-grandparents are identified as the parent, because this directly messes up the counting of generations: Ügyek is thought to be a son, not a grandson of Prince Csaba, Anna Dalassene is skipping two generations to Theophylact Dalassenos, etc. This means we have an error rate of at least 1.1% in the scraper dataset, besides the low recall rate mentioned above.
  • Category 2: Wikipedia has an error. Those are rare; it happened twice. Adelaide of Metz had the wrong father, and Sophie of Mecklenburg was linked to the wrong mother in the infobox (although the text linked to the right one). The first one has been fixed since Bill ran his scraper (unlucky timing!), and I fixed the second one. Note that I am linking to the historic version of the article with the error.
  • Category 3: Wikidata was missing data. Jeanne de Fougères, Countess of La Marche and of Angoulême and Albert Azzo II, Margrave of Milan were missing one or both of their parents, and Bill’s scraping found them. So of the more than 3,500 scraped relationships, only 2 were missing! I added both.
  • In addition, correct data was marked deprecated once. I fixed that, too.
  • Category 4: Wikidata has duplicates, and that breaks the chain. That happened five times; I think the following pairs are duplicates: Q28739301/Q106688884, Q105274433/Q40115489, Q56285134/Q354855, Q61578108/Q546165 and Q15730031/Q59578032. Duplicates were mentioned explicitly in one of the comments as a problem, and here we can see that they do happen with some frequency, particularly for less prominent items. I merged all of these.
  • Category 5: the situation is complicated, and different Wikipedia versions disagree, because the sources seem to disagree. Sometimes Wikidata models that disagreement quite well - but often not. After all, we are talking about people who sometimes lived more than a millennium ago. Here are these cases: Albert II, Margrave of Brandenburg to Ada of Holland; Prince Álmos to Sophia to Emmo of Loon (complicated by a duplicate as well); Oldřich, Duke of Bohemia to Adiva; William III to Raymond III, both Counts of Toulouse; Thored to Oslac of York; Bermudo II of León to Ordoño III of León (Galician says IV); and Robert Fitzhamon to Hamo Dapifer. In total, eight cases. I didn't edit those as these require quite a bit of thought.

Note that, unless you count the cases in Category 5, there was not a single case of “Wikidata got it wrong”, which surprised me a lot - I totally expected errors to happen. I mean, even English Wikipedia had errors! This was a pleasant surprise. Also, the genuinely complicated cases are roughly as frequent as missing data, duplicates, and errors taken together. To be honest, that sounds like a pretty good result to me.

Also, the scraped data? Recall might be low, but the precision is pretty good: more than 98% of it is corroborated by Wikidata. Not every scraping job achieves that level of correctness.

In general, these results are comparable to a comparison of Wikidata with DBpedia and Freebase I did two years ago.

Oh, and what about Bill’s original question?

It turns out that Wikidata knows of a path between Alfred and Elizabeth II that is even shorter than the shortest path of 31 generations that Bill found, as it takes only 30 generations.

This is Bill’s path:

  • Alfred the Great
  • Ælfthryth, Countess of Flanders
  • Arnulf I, Count of Flanders
  • Baldwin III, Count of Flanders
  • Arnulf II, Count of Flanders
  • Baldwin IV, Count of Flanders
  • Judith of Flanders
  • Henry IX, Duke of Bavaria
  • Henry X, Duke of Bavaria
  • Henry the Lion
  • Henry V, Count Palatine of the Rhine
  • Agnes of the Palatinate
  • Louis II, Duke of Bavaria
  • Louis IV, Holy Roman Emperor
  • Albert I, Duke of Bavaria
  • Joanna Sophia of Bavaria
  • Albert II of Germany
  • Elizabeth of Austria
  • Barbara Jagiellon
  • Christine of Saxony
  • Christine of Hesse
  • Sophia of Holstein-Gottorp
  • Adolphus Frederick I, Duke of Mecklenburg-Schwerin
  • Adolphus Frederick II, Duke of Mecklenburg-Strelitz
  • Duke Charles Louis Frederick of Mecklenburg
  • Charlotte of Mecklenburg-Strelitz
  • Prince Adolphus, Duke of Cambridge
  • Princess Mary Adelaide of Cambridge
  • Mary of Teck
  • George VI
  • Elizabeth II

And this is the path that I found using the Wikidata data:

  • Alfred the Great
  • Edward the Elder (surprisingly, it deviates right at the beginning)
  • Eadgifu of Wessex
  • Louis IV of France
  • Matilda of France
  • Gerberga of Burgundy
  • Matilda of Swabia (this is a weak link in the chain, though, as two different Matildas may have been merged together here. Ask your resident historian)
  • Adalbert II, Count of Ballenstedt
  • Otto, Count of Ballenstedt
  • Albert the Bear
  • Bernhard, Count of Anhalt
  • Albert I, Duke of Saxony
  • Albert II, Duke of Saxony
  • Rudolf I, Duke of Saxe-Wittenberg
  • Wenceslaus I, Duke of Saxe-Wittenberg
  • Rudolf III, Duke of Saxe-Wittenberg
  • Barbara of Saxe-Wittenberg (Barbara has no article in the English Wikipedia, only in German, Bulgarian, and Italian. Since the scraper only looks at the English Wikipedia, it would never have found this path)
  • Dorothea of Brandenburg
  • Frederick I of Denmark
  • Adolf, Duke of Holstein-Gottorp (husband to Christine of Hesse in Bill’s path)
  • Sophia of Holstein-Gottorp (and here the two lineages merge again)
  • Adolphus Frederick I, Duke of Mecklenburg-Schwerin
  • Adolphus Frederick II, Duke of Mecklenburg-Strelitz
  • Duke Charles Louis Frederick of Mecklenburg
  • Charlotte of Mecklenburg-Strelitz
  • Prince Adolphus, Duke of Cambridge
  • Princess Mary Adelaide of Cambridge
  • Mary of Teck
  • George VI
  • Elizabeth II

I hope that this is an interesting result for Bill coming out of this exercise.

I am super thankful to Bill for doing this work and describing it. It led to very interesting discussions and triggered insights into some shortcomings of Wikidata. I hope the above write-up is also helpful, particularly in providing some data regarding the quality of Wikidata, and I hope that it will lead to work on making Wikidata more easily accessible to explorers like Bill.

Update: there has been a discussion of this post on Hacker News.

Wikidata reached a billion edits

As of today, Wikidata has reached a billion edits - 1,000,000,000.

This makes it the first Wikimedia project that has reached that number, and possibly the first wiki ever to have reached so many edits. Given that Wikidata was launched less than seven years ago, this means an average edit rate of 4-5 edits per second.
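
As a rough sanity check of that rate (assuming close to seven years between launch and the billionth edit):

$$\frac{1{,}000{,}000{,}000~\text{edits}}{7 \times 365 \times 86{,}400~\text{s}} \approx \frac{10^9}{2.2 \times 10^8~\text{s}} \approx 4.5~\text{edits per second}.$$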

The billionth edit is the creation of an item for a 2006 physics article written in Chinese.

Congratulations to the community! This is a tremendous success.

Wikidatan in residence at Google

Over the last few years, more and more research teams all around the world have started to use Wikidata. Wikidata is becoming a fundamental resource. That is also true for research at Google. One advantage of using Wikidata as a research resource is that it is available to everyone. Results can be reproduced and validated externally. Yay!

I had been using my 20% time to support such teams. The requests became more frequent, and now I am moving into a new role in Google Research, akin to a Wikimedian in Residence: my role is to promote understanding of the Wikimedia projects within Google, to work with Googlers to share more resources with the Wikimedia communities, and to facilitate the improvement of Wikimedia content by the Wikimedia communities, all with a strong focus on Wikidata.

One deeply satisfying thing for me is that the goals of my new role and the goals of the communities are so well aligned: it is really about improving the coverage and quality of the content, and about pushing the projects closer towards letting everyone share in the sum of all knowledge.

Expect to see more from me again - there are already a number of fun ideas in the pipeline, and I am looking forward to seeing them get out of the gate! I am looking forward to hearing your ideas and suggestions, and to continuing to contribute to the Wikimedia goals.

Wikimania 2006 is over

And it sure was one of the hottest conferences ever! I don't mean just because of the 40°C/100°F that we had to endure in Boston, but also because of the speakers there.

Brewster Kahle, the man behind the Internet Archive, and who started Alexa and WAIS Inc., told us about his plans to digitize every book (just a few petabytes), every movie (just a few petabytes), every record (just a... well, you get the idea), to make a snapshot of the web every few months, and to archive all of this. Wow.

Yochai Benkler spoke about the Wealth of Networks. You can download his book from his site, or go to a bookstore and get it there. The talk really made me want to read it: why does a network thingy like Wikipedia work and not suck? How does this change basically everything?

The next day, there was Mitch Kapor, president of the Open Source Applications Foundation -- and I am really sorry I had to miss his talk, because at the same time we were giving our workshop on how to reuse the knowledge within a Semantic MediaWiki in your own applications and websites. Markus Krötzsch, travel companion, fellow AIFB PhD student, and basically the wizard who programmed most of the Semantic MediaWiki extension, totally surprised me by being surprised at what you can do with this Semantic Web stuff. Yes, indeed, the idea is to be able to ask another website to put stuff up on yours. And to mash data together.

There was David Weinberger, whose talk made me laugh more than I had in a while (and I am quite merry, usually!). I still have to think through what he actually said, content-wise, but it made a lot of sense, and I took some notes. It was about the structure of knowledge, and how it changes in the new world we are living in.

Ben Shneiderman, the pope of visualization and user interfaces, gave an interesting talk on visualizing Wikipedia. The two talks before his, by Fernanda Viegas and Martin Wattenberg, were really great, because they had visualized real Wikipedia data -- and showed us a lot of interesting findings. I hope their tools will become available soon. (Ben's own talk was a bit disappointing, as he didn't seem to have had the time to use real data, but only fake data to show some generally possible visualizations. Since I had already had the chance to see him in Darmstadt last year, I didn't see much new stuff.)

The party at the MIT Museum was great! Even though I wasn't allowed to drink, because I had forgotten my ID. I would never have thought anyone would consider me to look younger than 21, so I take this as the most sincere compliment. Don't bother explaining that they had to check my ID even if I looked 110, I really don't want to hear it :) I saw Kismet! Sadly, it was switched off.

Trust me, I was kinda tired after this week. It was lots of fun, and it was enormously interesting. Thanks to all the Wikipedians who made Wikipedia and Wikimania possible. Thanks to everyone who organized this event and helped out! I am looking forward to Wikimania 2007, wherever it will be. The bidding for hosting Wikimania 2007 is open!

Wikimania is coming

Wikimania starts on Friday. I'm looking forward to it: I'll be there with a colleague, and we will present our paper "Wikipedia and the Semantic Web - The Missing Links" on Friday. Should you be in Frankfurt, don't miss it!

Here's the abstract: "Wikipedia is the biggest collaboratively created source of encyclopaedic knowledge. Growing beyond the borders of any traditional encyclopaedia, it is facing new problems of knowledge management: The current excessive usage of article lists and categories witnesses the fact that 19th century content organization technologies like inter-article references and indices are no longer sufficient for today's needs.

Rather, it is necessary to allow knowledge processing in a computer assisted way, for example to intelligently query the knowledge base. To this end, we propose the introduction of typed links as an extremely simple and unintrusive way for rendering large parts of Wikipedia machine readable. We provide a detailed plan on how to achieve this goal in a way that hardly impacts usability and performance, propose an implementation plan, and discuss possible difficulties on Wikipedia's way to the semantic future of the World Wide Web. The possible gains of this endeavour are huge; we sketch them by considering some immediate applications that semantic technologies can provide to enhance browsing, searching, and editing Wikipedia."

Basically, we suggest introducing typed links into Wikipedia, plus an RDF export of the articles in which these typed links are treated as relations. And suddenly, you get a huge ontology, created by thousands and thousands of editors, queryable and usable, a really big starting block and incubator for Semantic Web technologies - and all of this, still scalable!

I hope with all my heart that the Wikipedia community agrees that this is a nice idea. We'll see this weekend.

Wikipedia demonstriert

A number of Wikipedias (German, Danish, Estonian, Czech) are wearing black today in order to prevent badly made legislative changes. I am proud of the volunteers of the Wikipedias who managed to organize this.

Willkommen auf Nodix!

The last two months were clearly dominated by my studies. I did have the luck of attending two wonderful cons - once the WWW, the Rahjacon, a fantastic LARP, certainly my second-favorite so far; the other time the Ebertreffen, which also gave me a lot of joy and a challenge that I seem to have mastered, and you will find more information about both conventions, probably including pictures, on Sven Wedeken's website - but my exams in theoretical computer science (over, passed!) and soon in compiler construction (awaited with respect), together with the very demanding visualization lab course running at the same time (in which I get to learn C++, OpenGL, Qt, Doxygen, Emacs, the principles of scene graphs, ray tracing and volume rendering pretty much all at once), unfortunately did not leave me room to add further content to Nodix. Now that I am once again stealing a tiny spark more time for myself, and now that I have finally gotten around to clearing out my email inbox a little, I hope to be able to continue working here as well. In the future there should at least be a regular editorial here that reports on the website as such.

Unfortunately, this website is filling up with content only slowly. So far, the main things to point to are the talk 'Was ist Software Architektur?', the few book reviews I have written, and the short story 'Legende von Castle Darkmore'. Most advanced is the material on the role-playing game Das Schwarze Auge, where you will find texts on the theory of magic, on dotdsa, and above all on my gaming group, plus Niobaras Foliant for download, a program that displays the Aventurian night sky.
Over the next months I plan to publish a few more of my short stories, to considerably expand the material about my gaming group, and to add further book reviews. All of this of course depends very much on how much time I have, but I can promise one thing: this website will blossom! Even if a daily click is not yet worth it, a monthly look usually is.

I hope to welcome you back here soon,
Denny Vrandecic

Willkommen auf Simia

Welcome to Simia, the new website of Denny Vrandecic. After not having written anything in my blog for what feels like three ages, and not having put any new content on my pages since the dawn of time, I can now tell you why: I wanted to switch the whole underlying technology.

I have finally made good progress with that. At the moment you will find here all the blog entries from Nodix, along with their comments. The feature for writing new comments does not work yet, but I am working on it. You will also notice that considerably more of the content on this site is in English than before.

Technically, Simia is a Semantic MediaWiki installation. That makes this blog part of my research as well, since I want to gather some first-hand experience of what it is like to run one's blog and personal homepage with Semantic MediaWiki. (In that sense it is of course no longer a blog but a so-called bliki, but who cares?) And since the whole thing is semantic, I want to find out how such a personal website fits into the Semantic Web...

To stay up to date, there are a number of feeds on Simia. Pick whichever you like. Best wishes, and I hope you munched your way happily through the Christmas season! :)

Willkommen auf der Webseite von Denny Vrandecic!

This website was created only recently and accordingly contains little content so far. As fast as I can manage, it will be filled with content about the subjects of my studies - computer science and philosophy -, about myself, about my visions for the future, and about role-playing games - above all DSA and Shadowrun -, plus a few small things I have created over the years. An XML version of these pages is also in the works. It would be nice if you could try it out and send me your comments (return to this page via the browser buttons).

Do check back again soon!


Wired: "Wikipedia is the last best place on the Internet"

WIRED published a beautiful ode to Wikipedia, painting the history of the movement with broad strokes, aiming to capture its impact and ambition with beautiful prose. It is a long piece, but I found the writing exciting.

Here's my favorite paragraph:

"Pedantry this powerful is itself a kind of engine, and it is fueled by an enthusiasm that verges on love. Many early critiques of computer-assisted reference works feared a vital human quality would be stripped out in favor of bland fact-speak. That 1974 article in The Atlantic presaged this concern well: “Accuracy, of course, can better be won by a committee armed with computers than by a single intelligence. But while accuracy binds the trust between reader and contributor, eccentricity and elegance and surprise are the singular qualities that make learning an inviting transaction. And they are not qualities we associate with committees.” Yet Wikipedia has eccentricity, elegance, and surprise in abundance, especially in those moments when enthusiasm becomes excess and detail is rendered so finely (and pointlessly) that it becomes beautiful."

They also interviewed me and others for the piece, but the focus of the article is really on what the Wikipedia communities have achieved in our first two decades.

Two corrections:

  • I cannot be blamed for Wikidata alone, I blame Markus Krötzsch as well.
  • The article says that half of the 40 million entries in Wikidata have been created by humans. I don't know if that is correct - what I said is that half of the edits are made by human contributors.

Wissenswertes über Jamba

Charmingly written, extremely entertaining, and yet enlightening and critical in its content:

http://spreeblick.de/wp/index.php?p=324

That is a joy to see. What did I read recently in a Telepolis interview with Norbert Bolz?
"What I like to read most are the 'Streiflicht' in the Süddeutsche Zeitung and 'Das Letzte' in Die Zeit, in other words glosses. These glosses have much more explosive power than the commentary of some lead editorial writer. Such texts are so predictable in their political correctness that they simply bore me. In the form of a joke, a lot of political information and criticism can be conveyed much better."

Well, then the link above is an example of the information of the future.

Wodka

The other day, at the beverage store...

"Oh, we could also get banana and cherry juice, for KiBa."
"Cool idea. That's two bottles of banana and one bottle of cherry."
"Nah, you mix that 1:1. We'll take two of each."
"Really? Fine."
"Oh, look, mango juice! Let's get a bottle of mango juice, too."
"But that won't all fit in the crate. We have to take something out."
"Here, let's take just one bottle of banana."
"But then we have two bottles of cherry for only one bottle of banana."
"So what?"
"You said you mix it 1:1."
"Well, yes, but you can use the cherry juice for other things, too."
"Oh yeah? Like what?"
"Cherry vodka, for example."
"Do we have vodka at home?"
"No."
"..."
"Please don't blog this!"

You don't believe that yourself. Anyone who blogs pictures like that of me hardly deserves any mercy... ;)

Wordle is good and pure

The nice thing about Wordle - whether you play it or not, whether you like it or not - is that it is one of those good, pure things the Web was made for. A simple Website, without ads, popups, monetization, invasive tracking, etc.

You know, something that can chiefly be done by someone who already has a comfortable life and won't regret not having monetized it. The same way scientists for a long time were mainly "gentleman scientists". Or tenured professors who spend years writing novels.

And that is why I think that we should have a Universal Basic Income. To unlock that creativity. To allow for ideas from people who are not already well off to see the light. To allow for a larger diversity of people to try more interesting things.

Thank you for coming to my TED talk.

P.S.: on January 31, five days after I wrote this text, Wordle was acquired by the New York Times for an undisclosed seven-digit sum. I think that is awesome for Wardle, the developer of Wordle, and I still think that what I said was true at that time and still mostly is, although I expect the Website now to slowly change to have more tracking, branding, and eventually a paywall.

World Wide Prolog

Today I had an idea - maybe this whole Semantic Web idea is nothing but a big worldwide Prolog program. It's the AI researchers trying to enter the real world through the W3C's backdoor...

No, really, think about it: almost everything most people do with OWL is actually some form of logic programming. Declaring subsumptions, predicates, conjunctions, testing for entailment, getting answers out of this - but on a world wide scale. And your browser does the inferencing for you (or maybe the server? Depends on your architecture).

There are still a lot of open questions (and the actual semantic differences between Description Logics and Logic Programming are surely not the smallest of them), like how to infer anything from contradicting data (something that will surely happen in the World Wide Semantic Web), how to treat dynamics (I'm not sure how to do that without reification in RDF), and much more. Looking forward to seeing these issues resolved...

Wort des Jahres

Merriam-Webster has announced the (English-language) Word of the Year: blog.

Wowarich?

A very nice idea; Fred put me onto it:

Where in the world, or rather in Europe, have I been so far? (On a world map the USA would be added as well, the rest would be a yawning gray, which is why I chose the map of Europe instead.)

An image is still missing here.

You can easily put one together for yourself as well, on World66.

Wurzelbehandlung

Back from vacation, tidied up things for university, worked through piles of emails, and got a - ouch! - root canal treatment behind me, with a bit more of it still ahead (and that for me, who is so terribly afraid of dentists, damn it).

Consider this an explanation for the inactivity on this site; beyond that, I will just leave you with a promise: more will happen soon. As before, a few larger changes are being planned. Thanks to all the diligent and loyal visitors!

XML4Ada95 0.9

Version 0.9 of XML4Ada95 has been released! That means the documentation has been cleaned up, a few coarse bugs are gone, and I am telling the whole world: here it is! Go get it...

My only serious worry is that publishing the package could turn out to be a disadvantage for my diploma thesis: since many eyes will be scrutinizing the project already, there will also be a lot of criticism, some of it justified. I hope this won't affect the grade negatively. The project itself can only benefit from being corrected. Ugh, I hope I did the right thing by listening to my gut.

XML4Ada95 wächst

XML4Ada95 has grown quite a bit: three dozen new pages have been added. And the site is far from finished!

Nodix was tidied up a little as well; the May entries were moved from the front page into the archive. That was mostly for practice, though, otherwise I forget how to use the Nodix website generator...

XML4Ada95 wächst weiter

More growth for XML4Ada95: examples, and an extended documentation (more than 100 pages by now - good thing this part was not printed as part of the written diploma thesis, phew!)

Things are moving forward. I am also taking the first notes for the DSA4 tool again and will soon be working on it once more. A revision of Nodix is due as well, but that is less urgent - this website has developed differently than expected, and that should be taken into account.

Moved June, July, and August from this page into the archive, and also trimmed the history on the right again (that makes the front page smaller and thus faster to load).

Have a nice weekend, everyone!

Zeitverschiebung

It is noon in Hawaii, and I am tired! Good grief.

Probably because I am in Karlsruhe.

Zen and the Art of Motorcycle Maintenance

13 May 2021

During my PhD, on the topic of ontology evaluation - figuring out what a good ontology is and what is not - I was running in circles trying to define what "good" means for an ontology (Benjamin Good, another researcher on that topic, had it easier, as he could call his metric the "Good metric" and be done with it).

So while I was struggling with the definition in one of my academic essays, a kind anonymous reviewer (I think it was Aldo Gangemi) suggested I should read "Zen and the Art of Motorcycle Maintenance".

When I read the title of the suggested book, I first thought the reviewer was being mean or silly and suggesting a made-up book because I was so incoherent. It took me two days to actually check whether that book existed, as I wouldn't believe it.

It existed. And it really helped me, by allowing me to set boundaries of how far I can go in my own work, and that it is OK to have limitations, and that trying to solve EVERYTHING leads to madness.

(Thanks to Brandon Harris for triggering this memory)

Zum 500.

Congratulations to my little sister on her 500th entry on nakit-arts. Wow, 500 entries! Very diligent.

Funnily enough, this entry is in turn the 250th entry on Nodix. What a coincidence.