Stanford seminar on Knowledge Graphs

My friend Vinay Chaudhri is organising a seminar on Knowledge Graphs with Naren Chittar and Michael Genesereth this semester at Stanford.

I have the honour of presenting in it as the opening guest lecturer, introducing what Knowledge Graphs are and what they are good for.

Due to the current COVID situation, the seminar has been turned virtual and opened for everyone to attend.

Other speakers during the semester include Juan Sequeda, Marie-Laure Mugnier, Héctor Pérez Urbina, Michael Uschold, Jure Leskovec, Luna Dong, Mark Musen, and many others.

Star Trek's 32nd century

I like Star Trek for the cool technology, which has inspired plenty of people to work e.g. on "the Star Trek computer". I love Star Trek for the utopian society of plenty they sketch in the 23rd and 24th centuries.

I claim it is because of the laziness of the writing: they don't keep that utopia up.

When I heard about Discovery going to the 32nd century, I was excited about the wonders they would dream up. The new technology. The society. The culture. The breakthroughs.

With regard to that, it was a massive letdown. Extremely disappointing.

Stars in our eyes

I grew up in a suburban, almost rural area in southern Germany, and I remember the hundreds, if not thousands of stars I could see at night. In the summers that I spent on an island in Croatia, it was even more marvelous, and the dark night sky was breathtaking.

As I grew up, the stars dimmed, and I saw fewer and fewer of them, until only the brightest stars were visible. It was blindingly obvious that air and light pollution had swallowed that every-night miracle and confined it to my memory only.

Until, in my late twenties, I finally accepted that I needed glasses and got a pair. Now the stars are back, as beautiful and brilliant as they have ever been.

Start the website again

This is no blog anymore. I haven't had entries for years, and even before that only sporadically. This is a wiki, but somehow it is not that either. Currently you cannot make comments. Updating the software is a pain in the ass. But I like to have a site where I can publish again. Switch to another CMS? Maybe one day. But I like Semantic MediaWiki. So what will I do? I do not know. But I know I will slowly refresh this page again. Somehow.

A new part of my life is starting soon. And I want to have a platform to talk about it. And as much as I like Facebook or Google+, I like to have some form of control over this platform. Facebook and Google+ -- maybe they won't disappear in a year. But what about ten? Twenty? Fifty years? I'll still be around (I hope), but they might not...

Let's see what will happen here. For now, I republished the retelling of a day as a story I first published on Google+ (My day in Jerusalem) and a poem that feels eerily relevant whenever I think about it (Wenn ich wollte).

Starting Abstract Wikipedia

I am very happy about the Board of the Wikimedia Foundation having approved the proposal for the multilingual Wikipedia aka Abstract Wikipedia aka Wikilambda aka we'll need to find a name for it.

In order to make that project a reality, I will as of next week join the Foundation. We will be starting with a small, exploratory team, which will allow us to have plenty of time to continue to socialize and discuss and refine the idea. Being able to work on this full time and with a team should allow us to make significant progress. I am very excited about that.

I am sad to leave Google. It was a great time, and I learned a lot about running *large* projects, and I met so many brilliant people, and I ... seriously, it was a great six and a half years, and I will very much miss it.

There is so much more I want to write but right now I am just super happy and super excited. Thanks everyone!

Summer School for the Semantic Web, Day 0

Arrived in Cercedilla today, at the Semantic Web Summer School. I was really looking forward to these days, and now, flipping through the detailed programme, I am even more excited. This will be a very intense week, I guess, where we learn a lot and have loads of fun.

I was surprised by the sheer number of students here: 56 or 57 students have come to the summer school, from all over the world - I met someone from Australia, someone from Pittsburgh, and many Europeans. Happily, I also met quite a number of people I already knew, and thus I know it will be a pleasurable week. But let's just do the math for a second: we have more than 50 accepted students at this summer school. There are at least three other summer schools in related fields, like the one in Ljubljana the week before, one in Edinburgh, and the ESSLLI. So, that's about 200 students. Even if we claim that every single PhD student is going to a summer school - which I don't think is the case - that would mean we get 200 theses every year! (Probably this number will only be reached in three years or so.)

So, just looking at the sheer amount of people working on it - what's the expected impact?

Interesting times lie ahead.

Supporting disaster relief with semantics

Soenke Ziesche, who has worked on humanitarian projects for the United Nations for the last six years, wrote an article for xml.com on the use of semantic wikis in disaster relief operations. That is a great scenario I never thought about, and basically one of these scenarios I think of when I say in my talks: "I'll be surprised if we don't get surprised by how this will be used." I would probably even go as far as to state the following: if nothing unexpected happens with it, the technology was too specific.

Just the thought that semantic technology in general, and maybe even Semantic MediaWiki in particular, could relieve the effects of a natural disaster, or maybe even save a life, is so incredibly exciting and rewarding. Thank you so much, Soenke!

Taking a self-driving car

Ten years ago, my daughter had just been born and I had just joined Google, which was working on self-driving cars. And I was always hoping that my daughter would not need to learn how to drive a car (but that, if she wanted to, she could). In the last ten years I lost confidence in that hope.

Yesterday, thanks to my wife organizing it, we took our first ride with a self-driving car, driving about ten minutes through San Francisco. And I guess a worldwide rollout will take time, maybe a lot of time, but what can I say: it drove very well.

Talk in Korea

If you're around this Tuesday, February 13th, in Seoul, come by the Semantic Web 2.0 conference. I have the honour of being invited to give a talk on the Semantic Wikipedia (where a lot is happening right now; I will blog about this when I come back from Korea, and when the stuff gets fixed).

Looking forward to seeing you there!

Tech layoffs of 2022

Very interesting article reflecting on the current round of layoffs in the tech industry. The author explains it within the context of the wider economy. I'm surprised that the pandemic is not mentioned: it led to accelerated growth early on, which has now turned out not to be sustained. But the other arguments - from low interest rates to constant undervaluation due to the dot com bust around the millennium - seem to tell a rather coherent story.

One particularly interesting point is the observation that the tech industry has gobbled up so much programming talent that other industries were starved of it. A lot of industries would benefit from (more modestly paid) software engineers, which might stimulate the whole economy to grow. Software might still be "eating the world", but that doesn't have to translate into software companies eating up the economy. There are so many businesses with domain expertise that cannot be easily replaced by some Silicon Valley engineer - but who would benefit from some programmers on staff.

This is especially true with the last decade of AI results. There is a massive overhang of capabilities that we have unlocked, which hasn't found its way into products yet, partly because the skills necessary to turn these capabilities into products were concentrated, through enormously high wages, in a small set of companies. There are so many businesses that would benefit from the latest machine learning methods. But folks prefer, understandably, to work in a place that gives them the promise of revolutionizing whole industries or saving the world.

But there is so much potential value to be generated if we also take some more modest goals into account. Not all of us need to work on AGI, it's also great to use software engineering skills to improve working conditions at the assembly line of a small local factory. With or without machine learning.

Temperatures in California

It has been a bit chillier the last few days. I noticed that after almost a decade in California, I feel pretty comfortable with understanding temperatures in Fahrenheit - as long as they are over 60° F. If it is colder, I need to switch to Celsius in order to understand exactly how cold it is. I have no idea what 40° or 45° or 50° F are, but I still know what 5° C is!

The fact that I still haven't acclimatised to Fahrenheit for the cooler temperatures tells you a lot about the climate in California.

Ten years of Wikidata

Today it's ten years since Wikidata launched. A few memories.

It's been an amazing time. In the summer of 2011, people still didn't believe Wikidata would happen. In the fall of 2012, it was there.

Markus Krötzsch and I had been pushing the idea of a Semantic Wikipedia since 2005. Semantic MediaWiki was born from that idea, Freebase and DBpedia launched in 2007, microformats in Wikipedia became a grassroots thing, but no one was working on the real thing at the Wikimedia Foundation.

With Elena Simperl at KIT we started the EU research project RENDER in 2010, involving Mathias Schindler at Wikimedia Deutschland. It was about knowledge diversity on the Web, still an incredibly important topic. In RENDER, we developed ideas for the flexible representation of knowledge, and how to deal with contradicting and incomplete information. We analysed Wikipedia to understand the necessity of these ideas.

In 2010, I was finishing my PhD at KIT, and got an invitation from Yolanda Gil to work at the ISI at the University of Southern California for a half-year sabbatical. There, Yolanda, Varun Ratnakar, Markus and I developed a prototype for Wikidata, which won third place in the ISWC Semantic Web Challenge that year.

In 2011, the Wikimedia Data Summit took place at the headquarters of O'Reilly in Sebastopol, CA, at the invitation of Tim O'Reilly and organised by Danese Cooper. There were folks from the Wikimedia Foundation, Freebase, DBpedia, Semantic MediaWiki, O'Reilly, there was Guha, Mark Greaves, I think, and others. I think that's where it became clear that Wikidata would be feasible.

It's also where I first met Guha and where I admitted to him that I was kinda a fanboy. He invented MCF and RDF, had worked with Douglas Lenat on Cyc, and later that year introduced Schema.org. He's now working on Data Commons. Check it out, it's awesome.

Mark Greaves, a former DARPA program officer, who then was working for Paul Allen at Vulcan, had been supporting Semantic MediaWiki for several years, and he really wanted to make Wikidata happen. He knew my PhD was done, and that I was thinking about my next step. I thought it would be academia, but he suggested I should write up a project proposal for Wikidata.

After six years advocating for it, I understood that someone would need to step up to make it happen. With the support and confidence of so many people - Markus Krötzsch, Elena Simperl, Mark Greaves, Guha, Jamie Taylor, Rudi Studer, John Giannandrea, and others - I drafted the proposal.

The Board of the Wikimedia Foundation approved the proposal as a new Wikimedia project, but neither allocated the funding nor directed the Foundation to do it. In fact, the Foundation was reluctant to take it on, unsure whether it would be able to host the development of such a project at that time. Back then, that was a wise decision.

Erik Möller, then CTO of the Foundation, was the driving force behind a major change: instead of turning the individual Wikipedias semantic, we would have a single Wikidata for all languages. Erik was also the one who had secured the domain for Wikidata. Many years prior.

Over the next half year and with the help of the Wikimedia Foundation, we secured funding from AI2 (Paul Allen), Google (who had acquired Freebase in the meantime), and the Gordon and Betty Moore Foundation, 1.3 million in total.

Other funders backed out because I insisted on the Wikidata ontology being entirely under the control of the community. They argued for having professional ontologists, for reusing existing ontologies, or for using DBpedia to seed Wikidata. I said no. I firmly believed, and still believe, that the ontology has to be owned, created and maintained by the community. I invited the ontologists to join the project as community members, but to the best of my knowledge, they never made significant contributions. We did miss out on quite a bit of funding, though.

There we were. We had the funding and the project proposal, but no one to host us. We were even thinking of founding a new organisation, or hosting it at KIT, but due to the RENDER collaboration, Mathias Schindler had us talk with Pavel Richter, ED of Wikimedia Deutschland, and Pavel offered to host the development of Wikidata.

For Pavel and Wikimedia Deutschland this was a big step: the development team would significantly increase WMDE (almost doubling it in size, if I remember correctly), which would necessitate a sudden transformation and increased professionalisation of WMDE. But Pavel was ready for it, and managed this growth admirably.

On April 1, 2012, we started the development of Wikidata. On October 29, 2012, we launched the site.

The original launch was utterly useless. All you could do was create new pages with Q IDs (the Q being a homage to Kamara, my wife), associate those Q IDs with labels in many languages, and connect them to articles in Wikipedia, so-called sitelinks. You could not add any statements yet. You could not connect items with each other. The sitelinks were not used anywhere. The labels were not used anywhere. As I said, the site was completely useless. And great fun, at least to me.

QIDs for entities are still often disparaged. Why QIDs? Why not just the English name? Isn't dbp:Tokyo much easier to understand than Q1490? It was an uphill battle ten years ago to overcome the anglocentricity of many people. Unfortunately, this has not changed much. I am thankful that the Wikimedia movement is one of the places that encourages, values, and supports the multilingual approach of Wikidata.

Over the next few months, the first few Wikipedias were able to access the sitelinks from Wikidata, and started deleting the sitelinks from their Wikipedias. This led to the removal of more than 240 million lines of wikitext across the Wikipedias. 240 million lines that didn't need to be maintained anymore. In some languages, these lines constituted more than half of the content of the Wikipedia. In many languages, editing activity dropped dramatically at first, sometimes by 80%.

But then something happened. Those edits were mostly bots. And with those bots gone, humans were suddenly better able to see each other and build a more meaningful community. In many languages, this eventually led to increased community activity.

One of my biggest miscalculations when launching Wikidata was to entirely dismiss the possibility of a SPARQL endpoint. I thought that none of the existing open source triple stores would be performant enough. Peter Haase was instrumental in showing that I was wrong. Today, the SPARQL endpoint is an absolutely crucial piece of the Wikidata infrastructure, and is widely used to explore the dataset. And with its beautiful visualisations, I find it almost criminally underused. Unfortunately, the SPARQL endpoint is also the piece of infrastructure that worries us the most. The Wikimedia Foundation is working hard on figuring out the future for this service, and if you can offer substantial help, please reach out.
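
If you have never tried the endpoint, here is a rough sketch of what exploring the dataset from a script can look like. It is only an illustration (Python with the requests library, against the public service at query.wikidata.org; the example query simply asks for a few instances of house cat, Q146):

```python
# A minimal sketch of querying the Wikidata SPARQL endpoint.
# Assumptions: the public endpoint at query.wikidata.org and the requests
# library; the example query (instances of house cat, Q146) is illustrative.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .   # P31 = instance of, Q146 = house cat
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "wikidata-sparql-example/0.1"},  # the service asks for a descriptive user agent
)
response.raise_for_status()

# Standard SPARQL JSON results: one binding per row
for row in response.json()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])
```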

Today, Wikidata has more than 1.4 billion statements about approximately 100 million topics. It is by far the most edited Wikimedia project, with more edits than the English, German, and French Wikipedia together - even though they are each a decade older than Wikidata.

Wikidata is widely used: almost every time Wikipedia serves one of its 24 billion monthly page views. During the pandemic, to centralise the data about COVID cases in India and make them available across the languages of India. By large companies answering questions and fulfilling tasks with their intelligent assistants, be it Google or Apple or Microsoft. By academia, where you will find thousands of research papers using Wikidata. By numerous Open Source projects, by one-off analyses by data scientists, by small enterprises using the dataset, by student programmers exploring and playing with it on the weekend, by spreadsheet enthusiasts enriching their data, by scientists, librarians and curators linking their datasets to Wikidata, and thus to each other. Already, more than 7,000 catalogs are linked to Wikidata, and thus to each other, really and substantially establishing a Web of linked data.

I will always remember the Amazon developer who approached me after a talk. He had used Wikidata to gather data about movies. I was surprised: Amazon owns IMDb, so why would they ever use anything else for movies? He said that IMDb was great for what it had, but Wikidata complemented it in unexpected ways, offering many interesting connections between the movies and other topics which would be out of scope for IMDb.

Not to be misunderstood: knowledge bases such as IMDb are amazing, and Wikidata does not aim to replace them. They often have a clear scope, have a higher quality, and almost always a better coverage in their field than Wikidata ever can hope to have, or aims to have. And that's OK. Wikidata's goal is not to replace these knowledge bases. But to provide the connecting tissue between the many knowledge bases out there. To connect them. To provide a common set of entities to work with. To turn the individual knowledge bases into a large interconnected Web of knowledge.

I am still surprised that Wikidata is not known more widely among developers. It always makes me smile with joy when I see yet another developer who just discovered Wikidata and writes an excited post about it and how much it helped them. In the last two weeks, I stumbled upon two projects that used Wikidata identifiers where I didn't expect them at all, just used them as if it was the most normal thing in the world. This is something I hope we will see even more in the future. I hope that Wikidata will become the common knowledge base that is ubiquitously used by a large swarm of intelligent applications. Not only to make these applications smarter by knowing more about the world, but also to allow these applications to exchange data with each other more effectively because they are using the same language.

And most importantly: Wikidata has a healthy, large, and comparatively friendly and diverse community. It is one of the most active Wikimedia projects, only trailing the English Wikipedia, and usually about as active as Commons.

Last time I checked, more than 400,000 people have contributed to Wikidata. For me, that is easily the most surprising number about the project. If you had asked me in 2012 how many people would contribute to Wikidata, I would have sheepishly hoped for a few hundred, maybe a few thousand. And I would have defensively explained why that's OK. I am humbled and awestruck by the fact that several hundred thousand people have contributed to an open knowledge base that is available to everyone, and that everyone can contribute to.

And that I think is the most important role that Wikidata plays. That it is a place that everyone can contribute to. That the knowledge base that everyone uses is not owned and gatekept by any one company or government, but that it is a common good, that everyone can contribute to. That everyone with an internet connection can lend their voice to the sum of all knowledge.

We all own Wikidata. We are responsible for Wikidata. And we all benefit from Wikidata.

It has been an amazing ten years. I am looking forward to many more years of Wikidata, and to the many new roles that it will play in the years to come, and to the many people who will contribute to it.

Shoutout to the brilliant team that started the work on Wikidata: Lydia Pintscher, Abraham Taherivand, Daniel Kinzler, Jeroen De Dauw, Katie Filbert, Tobias Gritschacher, Jens Ohlig, John Blad, Daniel Werner, Henning Snater, and Silke Meyer.

And thank you for all these amazing pictures of cakes for Wikidata's birthday. (And if you're curious what is coming next: we are working on Wikifunctions and Abstract Wikipedia, in order to allow more people to contribute more knowledge to even more people!)

The Center of the Universe

The discovery of the center of the universe led to a series of unexpected consequences. It killed some, it enlightened others, but most people just were left utterly confused in the end.

When the results from the Total Radiating Universal Tessellation Hyperfield satellites' measurements came in, it became depressingly clear that the universe was indeed contracting. Very slowly, but without any reasonable doubt — or, as the physicists said, they were five sigma sure about it. As the data from the measurements became available, physicists, cosmologists, topologists, even a few mathematically inclined philosophers, and a huge number of volunteers started to investigate it. And after a short period of time, they came to a whole set of staggering conclusions.

First, the Universe had a rather simple four-dimensional form. The only unfortunate blemishes in this theory were the black holes, but most of the volunteers, philosophers, and topologists decided to ignore these as accidental.

Second, the form was bounded. There was a beginning and an end in time, and there were boundaries in space, and those who understood that these were the same were enlightened about the form of the universe.

Third, since the form of the universe was bounded and simple, it had a center. Whereas this was slightly surprising, it was a necessary consequence of the previous findings. What at first seemed exciting, but soon would turn out to be not only the heart of this report but the heart of all humanity, was that the data collected by the satellites allowed us to calculate the position of the center of the universe.

Before that, let me recap what we traditionally knew about how the universe is built. Our sun is a star, around which a few planets travel, one of them being our Earth. Our sun is one of a few tens of billions of stars that form a long curved thread which ties around a supermassive black hole. A small number of such threads are tangled together, forming the spiral arms of our galaxy, the Milky Way. Our galaxy consists of half a trillion stars like our sun.

Galaxies, like everything else in the universe, like to stick together and form groups. A few hundred thousand galaxies make up a supercluster. A few of these superclusters together build enormous walls of stars, filaments traversing the universe. The galaxies of such a wall are all in a single plane, more or less, and sometimes even in a single line.

Between these walls, walls made of superclusters and galaxies and stars and planets, there is, basically, nothing. The walls of stars are like gigantic honeycombs, and between them are enormous empty spaces, hundreds of millions of light years wide. When you look at a honeycomb, you will see that the empty spaces between the walls are much, much larger than the walls themselves. Such is the universe. You might think that the distance from here to the next grocery store is quite far, or that the ocean is quite big. But the distance from the earth to the sun is so much bigger, and the distance from the sun to the next star again so much more. And from our galaxy to the next, there is a huge empty space. Nevertheless, our galaxy is so close to the next group of galaxies that they together form a building block of a huge wall, separating two unimaginably large empty spaces from each other.

So when we figured out that we can calculate the center of the universe, it was widely expected that the center would be somewhere in one of those vast spaces of nothing. The chances that it would be in one of the filaments were tiny.

It turned out that this was not a question of chance.

The center of the universe was not only inside of a filament, but the first quick calculations (quick, though, has to be understood as taking three and a half years) suggested that the center is actually within our filament. And not only within our filament — but our galaxy. Within a one light year radius of our sun.

The team that made these calculations was working at a small research institute in rural Japan. They did not believe the results, and double and triple checked them. The head of the institute had graduated from Princeton, and called his former advisor there. Although it was deep in the night in Japan, they talked for many hours. In the end he learned that Princeton had made the same calculations, and had received their own results about eight months earlier. They didn't dare to publish them. There must have been a mistake. These results had to be wrong.

Science has humiliated the whole of humanity again and again. And it was quite successful in doing so. A scientist would much more easily accept that the center of the universe is some mathematical construct pointing to nothing than what the infallible mathematics indicated. But the data was out. And the number of people making the above-mentioned realizations and calculations continued growing. It was only a matter of time. And when the Catholic University of Rio de Janeiro finally published the results — in a carefully written paper, without any accompanying press release, and formulated so cautiously and defensively — all the scientists who already knew the results held their breath.

The storm was unimaginable. Everyone demanded an explanation, but no one would listen to anyone offering one. The religions rejoiced, claiming they knew it all along, and many flocked to the mosques and churches and temples, as a proof of God was finally found. The irony of science leading humans to the embrace of religion was profoundly lost at that time, but later recognized as one of the largest jokes in history. Science has dealt its ultimate humiliation, not to humanity, but perversely to its most devout followers, the scientists. The scientists, who, while trashing the superiority of humans over the world, were secretly inflating their own, and were now reminded that they were merely slaves to a most cruel mistress. Their bitter resistance to the results did not stop them from emerging.

The mathematics and calculations were soon made public. The mathematics were deceptively simple, once the required factorizations were done, and easy to check. High school courses went through the proofs, and desperate parents peeked over the shoulders of their daughters and sons who, sometimes for the first time, talked of integrals and imaginary numbers. Television and streaming platforms were explaining discriminants and complex numbers and roots of higher degrees. Websites offering math courses bent under the load and moral weight.

There is one weird thing about roots. The root of a number is the number that, multiplied with itself, gives you the original number. The weird thing is that there is usually not a single, unique result to that question. For example, the root of the number four is not just two, but also minus two, as minus two times minus two results in four, too. There are two roots of the second degree (which we usually call the square root). There are three roots of the third degree (sometimes called the cube root). There are four roots of the fourth degree. And so on. All of them are correct. Sometimes you can discard one or the other because the result has to fit certain constraints (say, you are looking only for the positive root of four), but sometimes, you can not.

As the calculations went public, the methods became more and more refined. The results became increasingly precise, and as the data from the satellites poured in, one of the last steps involved a root of the seventh degree. First, this was regarded as a minor curiosity, especially because these seven results led to basically the same point. Cosmologically speaking.

Earth is moving. Earth is moving around the sun, with a speed of sixty-seven thousand miles per hour, or eighteen miles each second. Also the sun is moving, and the earth is moving with the sun, and our galaxy is moving, and with our galaxy the sun moves along, and with the sun our earth. We are racing with a speed of a thousand miles each second in some direction away from the center of the universe.

And it was realized, maybe we just passed the center of the universe. Maybe it was just an accident, maybe all the planets and stars pass the center of the universe at some point. That we are so close to the center of the universe might be just a funny coincidence.

And maybe they are right. Maybe every star will at some point cross the center of the universe within the distance of a light year.

At some point though it was realized that, since the universe was bounded in all four dimensions, there was not only a center in space, but also a center in time, a midpoint between the beginning of the universe and its future end.

All human history is encompassed in the last hundred thousand years. From the mitochondrial Eve and the Y-Chromosomal Adam who lived in Africa, the mother of our mother of our mother, and so on, that we all share, and the father of our father of our father, and so on, that we all share, their descendants, our ancestors, who crossed the then-fertile jungle of the Sahara and who afterwards settled the whole planet, painted on the walls of caves and filled the air with music by blowing over grass blades and into hollow bones, wandered over the land bridge connecting Asia with the Americas and traveled over the vast Pacific to discover tiny islands, until the recent invention of the alphabet, all of this happened in the last hundred thousand years. The universe has an age of a hundred thousand times a hundred thousand years, roughly. And the fabled midpoint turned out to be within the last few thousand years.

The hope that our earth was just accidentally next to the center of the universe was shattered. As the precision of the calculations increased, it became clearer and clearer that earth was not merely close to the center of the universe, but back at the midpoint of history, earth was right there in the center. In every single one of the seven possible results, Earth was right at the center of the universe. [1]

As the calculations continued over the years, a new class of mystic mathematicians emerged, and many walls between religion and science were shattered. On both sides the unshakeable ones remained: the scientists who would not admit that these results mean anything, that it all is merely a mathematical abstraction; and the priests who say that these results mean nothing, that they don't tell us about how to live a good life. That these parallels intersect is the only trace of infinity left.


[1] As the results were refined, it seemed that the seven mathematical solutions for the center of time and space turned out to be some very well-known dates. So far the precision calculated was ten years here or there. The well-known dates were: 3760 BC, 541 BC, 30 AD, and 610 AD. The other dates turned out to be rather less well known: 10909 BC, 3114 BC, and 1989 AD. The interpretation of the dates led to a well-known series of events all over the world, which we will not discuss here.


(This story was first published on Medium on February 2, 2014 under CC-BY 4.0).

The Fourth Scream

Janie loved her research. It was at the intersection of so many interesting areas - genetics, linguistics, neuroscience. And the best thing about it - she could work the whole day with these adorable vervet monkeys.

One more time, she showed the video of the flying eagle to Kassandra. The MRI helmet on Kassandra's little head measured the neuron activation, highlighting the same region on her computer screen as the other times, the same region as with the other monkeys. Kassandra let out the scream that Janie was able to understand herself by now, the scream meaning "Eagle!", and the other monkeys behind the bars in the far end of the room, in a cage as large as half the room, ran to cover in the bushes and small caves, if they were close enough. As they did every time.

That MRI helmet was a masterpiece. She could measure the activation of the neurons in unprecedented high resolution. And not only that, she could even send inferencing waves back, stimulating very fine grained regions in the monkey’s brain. The stimulation wasn’t very fast, but it was a modern miracle.

She slipped a raspberry to Kassandra, and Kassandra quickly snatched it and stuffed it in her mouth. The monkeys came from different populations from all over Southern and Eastern Africa, and yet they all understood the same three screams. Even when the baby monkeys were raised by mute parents, the baby monkeys understood the same three screams. One scream was to warn them of leopards, one scream was to warn them of snakes, and the third scream was to warn them of eagles. The screams were universally understood by everyone across the globe - by every vervet monkey, that is. A language encoded in the DNA of the species.

She called up the aggregated areas from the screams from her last few experiments. In the last five years, she was able to trace back the proteins that were responsible for the growth of these three areas, and thus the DNA encoding these calls. She could prove that these three different screams, the three different words of Vervetian, were all encoded in DNA. That was very different from human language, where every word is learned, arbitrary, and none of the words were encoded in our DNA. Some researchers believed that other parts of our language were encoded in our DNA: deep grammatical patterns, the ability to merge chunks into hierarchies of meaning when parsing sentences, or the categorical difference between hearing the syllable ba and the syllable ga. But she was the first one to provably connect three different concrete genes with three different words that an animal produces and understands.

She told the software to create an overlapping picture of the three different brain areas activated by the three screams. It was a three dimensional picture that she could turn, zoom, and slice freely, in real time. The strands of DNA were highlighted at the bottom of the screen, in the same colors as the three different areas in the brain. One gene, then a break, then the other two genes she had identified. Leopard, snake, eagle.

She started to turn the visualization of the brain areas, as Kassandra started squealing in pain. Her hand was stuck between the cage bars and the plate with raspberries. The little thief was trying to sneak out a raspberry or two! Janie laughed, and helped the monkey get her hand unstuck. Kassandra yanked it back into the cage, looked at Janie accusingly, knowing that the pain was Janie's fault for not giving her enough raspberries. Janie snickered, took out another raspberry and gave it to the monkey. She snatched it out of Janie's hand, without stopping the accusing stare, and Janie then put the plate on the other side of the table, at a safe distance and out of sight of Kassandra.

She looked back at the screen. When Kassandra cried out, her hand had twitched, and turned the visualization to a weird angle. She just wanted to turn it back to a more common view, when she suddenly stopped.

From this angle, she could see the three different areas, connecting together with the audiovisual cortex at a common point, like the leaves of a clover. But that was just it. It really looked like three leaves of a four-leaf clover. The area where the fourth leaf would be - it looked a lot like the areas where the other three leaves were.

She zoomed into the audiovisual cortex. She marked the neurons that triggered each of the three leaves. And then she looked at the fourth leaf. The connection to the cortex was similar. A bit different, but similar enough. She was able to identify what probably were the trigger neurons, just like she had been able to find them for the other three areas.

She targeted the MRI helmet on the neurons connected to the eagle trigger neurons, and with a click she sent a stimulus. Kassandra looked up, a bit confused. Janie looked at the neurons, how they triggered, unrolled the activation patterns, and saw how the signal was suppressed. She reprogrammed the MRI helmet, refined the neurons to be stimulated, and sent off another stimulus.

Kassandra yanked her head up, looking around, surprised. She looked at her screen, but it showed nothing either. She walked nervously around inside the little cage, looking worriedly to the ceiling of the lab, confused. Janie again analyzed the activation patterns, and saw how it almost went through. There seemed to be a single last gatekeeper to pass. She reprogrammed the stimulator again. Third time's the charm, they say. She just remembered a former boyfriend, who was going on and on about this proverb. How no one knew how old it was, where it began, and how many different cultures all over the world associate trying something three times with eventual success, or an eventual curse. How some people believed you need to call the devil's name three times to —

Kassandra screamed out the same scream as before, the scream saying “Eagle!”. The MRI helmet had sent the stimulus, and it worked. The other monkeys jumped for cover. Kassandra raised her own arms above her head, peeking through her fingers to find the eagle she had just sensed.

Janie was more than excited! This alone would make a great paper. She could get the monkeys to scream out one of the three words of their language by a simple stimulation of particular neurons! Sure, she expected this to work - why wouldn't it? But the actual scream, the confirmation, was exhilarating. As expected, the neurons now had a heightened potential, were easier to activate, waiting for more input. They slowly cooled down as Kassandra didn't see any eagles.

She looked at the neurons connected to the fourth leaf. The gap. Was there a secret, fourth word hidden? One that all the zoologists studying vervet monkeys have missed so far? What would that word be? She reprogrammed the MRI helmet, aiming at the neurons that would trigger the fourth leaf. If her theory was right. With another click she sent a stimulus to the —

Janie was crouching in the corner of the room, breathing heavily, cold sweat covering her arms, her face, her whole body. Her clothes were clammy. Her arms were slung above her head. She didn't remember how she got here. The office chair she had been sitting in just a moment ago lay on the floor. The monkeys were quiet. Eerily quiet. She couldn't see them from where she was, she couldn't even see Kassandra from here, who was in the cage next to her computer. One of the halogen lamps in the ceiling was flickering. It wasn't doing that before, was it?

She slowly stood up. Her body was shivering. She felt dizzy. She almost stumbled, just standing up. She slowly lowered her arms, but her arms were shaking. She looked for Kassandra. Kassandra was completely quiet, rolled up in the very corner of her cage, her arms slung around herself, her eyes staring catatonically forward, into nothing.

Janie took a step towards the middle of the room. She could see a bit more of the cage. The monkeys were partly huddled together, shaking in fear. One of them lay in the middle of the cage, his face in a grimace of terror. He was dead. She thought it was Rambo, but she wasn't sure. She stumbled to the computer, pulled the chair from the floor, slumped into it.

The MRI helmet had recorded the activation pattern. She stepped through it. It did behave partially the same: the neurons triggered the unknown leaf, as expected, and that led to activating the muscles around the lungs, the throat, the tongue, the mouth - in short, that activated the scream. But, unlike with the eagle scream, the activation potential did not increase; it was now suppressed. As if it was trying to avoid a second triggering. She checked the pattern: yes, the neuron triggered that suppression itself. That was different. How did this secret scream sound?

Oh no! No, no, no, no, NOO!! She had not recorded the experiment. How stupid!

She was excited. She was scared, too, but she tried to push that away. She needed to record that scream. She needed to record the fourth word, the secret word of vervet monkeys. She switched on all three cameras in the lab, one pointed at the large cage with the monkeys, the other two pointing at Kassandra - and then she changed her mind, and turned one onto herself. What had happened to her? Why couldn't she remember hearing the scream? Why had she been crouching on the floor like one of the monkeys?

She checked her computer. The MRI helmet was calibrated as before, pointing at the group of triggering neurons. The suppression was ebbing down, but not as fast as she wanted. She increased the stimulation power. She shouldn’t. She should follow protocol. But this all was crazy. This was a cover story for Nature. With her as first author. She checked the recording devices. All three were on. The streams were feeding back into her computer. She clicked to send the sti—

She felt the floor beneath her. It was dirty and cold. She was lying on the floor, face down. Her ears were ringing. She turned her head, opened her eyes. Her vision was blurred. Over the ringing in her ears she didn't hear a single sound from the monkeys. She tried to move, and she felt her pants were wet. She tried to stand up, to push herself up.

She couldn’t.

She panicked. Shivered. And when she felt the tears running over her face, she clenched her teeth together. She tried to breathe, consciously, to collect herself, to gain control. Again she tried to stand up, and this time her arms and legs moved. Slower than she wanted. Weaker than she hoped. She was shaking. But she moved. She grabbed the chair. Pulled herself up a bit. The computer screen was as before, as if nothing had happened. She looked to Kassandra.

Kassandra was dead. Her eyes were bloodshot. Her face was a mask of pure terror, staring at nothing in the middle of the room. Janie tried to look at the cage with the other monkeys, but she couldn’t focus her gaze. She tried to yank herself into the chair.

The chair rolled away, and she crashed to the floor.

She had gone too far. She had made a mistake. She should have followed protocol. She was too ambitious; her curiosity and her impatience got the better of her. She had to focus. She had to fix things. But first she needed to call for help. She crawled to the chair. She pulled herself up, tried to sit in the chair, and she did it. She was sitting. Success.

Slowly, she rolled back to the computer. Her office didn't have a phone. She double-clicked on the security app on her desktop. She had no idea how it worked; she had never had to call security before. She hoped it would just work. A screen opened, asking her for some input. She couldn't read it. She tried to focus. She didn't know what to do. After a few moments the app changed, and it said in big letters: HELP IS ON THE WAY. STAY CALM. She closed her eyes. Breathed. Good.

After a few moments she felt better. She opened her eyes. HELP IS ON THE WAY. STAY CALM. She read it, once, twice. She nodded, her gaze jumping over the rest of the screen.

The recording was still on.

She moved the mouse cursor to the recording app. She wanted to see what had happened. There was nothing to do anyway, until security came. She clicked on the play button.

The recording filled three windows, one for each of the cameras. One pointed at the large cage with the vervet monkeys, two at Kassandra. Then, one of the cameras pointing at Kassandra was moved, pointing at Janie, just moments ago - it was only moments ago, wasn't it? - sitting at the desk. She saw herself getting ready to send the second stimulus to Kassandra, to make her call the secret scream a second time.

And then, from the recording, Kassandra called for a third time.

The end

The Future of Knowledge Graphs in a World of Large Language Models

The Knowledge Graph Conference 2023 in New York City invited me for a keynote on May 11, 2023. Given that basically all conversations these days are about large language models, I gave a talk about my understanding of how knowledge graphs and large language models go together.

After the conference, I did a recording of the talk, giving it one more time, in order to improve the quality of the recording. The talk has gotten more than 10,000 views on YouTube so far, which, for me, is totally astonishing.

I forgot to link it here, so here we go finally:

The Heat Death of the Internet

Good observations, and closing on a hopeful note. Short and pointed read.

The Jones Brothers

The two Jones brothers never got along, but both were too stubborn to leave the family estate. They built out two entrances to the estate, one from the south, near Jefferson Avenue, and the newer, bigger one, closer to the historic downtown, and each brother chose to use one of the entrances exclusively, in order to avoid the other and their family. To the confusion of the local folk (but to the open enjoyment of the high school's grammar teacher, who was, surprisingly for his role, a descriptivist), they named the western gate the Jones' gate, and the southern one the Jones's gate, and the brothers earnestly thought that that settled it.

It didn't.

The Ring verse in German

28 May 2024

I finally got the Lord of the Rings in English. I never read it in its native English, only in a German translation, about thirty years ago.

And already on the first page I am stumped: the ring verse seems to me sooo much better in German than in English. Now, it is absolutely possible that this is due to me having read it as an impressionable teenager and having carried the translation with me for three decades and thus developed fondness and familiarity with it, but I think it's more than that.

Here are the verses in English, German, and a literal back-translation of the German to English:

Three Rings for the Elven-kings under the sky,
Seven for the Dwarf-lords in their halls of stone,
Nine for Mortal Men doomed to die,
One for the Dark Lord on his dark throne
In the Land of Mordor where the Shadows lie.
One Ring to rule them all,
One Ring to find them,
One Ring to bring them all,
and in the darkness bind them
In the Land of Mordor where the Shadows lie.

German translation by von Freymann:

Drei Ringe den Elbenkönigen hoch im Licht,
Sieben den Zwergenherrschern in ihren Hallen aus Stein,
Den Sterblichen, ewig dem Tode verfallen, neun,
Einer dem dunklen Herrn auf dunklem Thron
Im Lande Mordor, wo die Schatten drohn.
Einen Ring, sie zu knechten, sie all zu finden,
ins Dunkle zu treiben und ewig zu binden
Im Lande Mordor, wo die Schatten drohn.

Back-translation of her translation by me:

Three Rings for the Elven kings high in the light,
Seven for the Dwarf-lords in their halls of stone,
For the mortals, eternally doomed to death, nine,
One for the Dark Lord on dark throne
In the Land of Mordor, where the Shadows loom.
One Ring, to enslave them, to find them,
to drive to Darkness, and forever bind them
In the Land of Mordor, where the Shadows loom.

The differences are small, but I find the selection of words by the translator to be stronger and more evocative than Tolkien's original. Which is amazing. Thanks to the great Ebba-Margareta von Freymann for her wonderful translation of the poems!

Originally, the publisher Klett had trouble with translating Tolkien's poems, but Ebba-Margareta had been working for many years on the translation of Tolkien's poems, and by using her translations, Klett did a great service to the book for the German-speaking world.


The Strange Case of Booker T. Washington’s Birthday

A lovely geeky essay about how much work a single edit to Wikipedia can be. I have gone down this kind of rabbit hole myself more than once, and so I very much enjoyed the essay.

The Surrounding Sea

Explore the ocean of words in which we all are swimming, day in day out. A site that allows you to browse through the lexicographic data in Wikidata along four dimensions:

  • alphabetical, like in a good old fashioned dictionary
  • through translations and synonyms
  • where does this word come from, and where did it go
  • narrower and wider words, describing a hierarchy of meanings

Wikidata contains over 1.2 million lexicographic entries, but you will see the many gaps when exploring the sea of words. Please join us in charting out more of the world of words.
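
If you would rather poke at the underlying data directly, a similar exploration can be sketched against the Wikidata Query Service. This is only an illustration and assumes the usual lexeme modelling (ontolex:LexicalEntry, wikibase:lemma, dct:language); it lists a few English lexemes with their lemmas:

```python
# A minimal sketch of exploring Wikidata's lexicographic data via SPARQL.
# Assumptions: the public endpoint, the requests library, and the standard
# lexeme RDF modelling; the language item Q1860 is English.
import requests

QUERY = """
SELECT ?lexeme ?lemma WHERE {
  ?lexeme a ontolex:LexicalEntry ;
          wikibase:lemma ?lemma ;
          dct:language wd:Q1860 .   # Q1860 = English
}
LIMIT 10
"""

r = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "lexeme-example/0.1"},
)
r.raise_for_status()

for row in r.json()["results"]["bindings"]:
    print(row["lexeme"]["value"], row["lemma"]["value"])
```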

Happy 23rd birthday to Wikipedia and the movement it started!

The benefit of Semantic MediaWiki

I can't comment on Tim O'Reilly's blog right now, it seems; maybe my answer is too long, or it has too many links, or whatever. (It only took some time; my mistake.) He blogged about Semantic MediaWiki -- yaay! I'm a fanboy, really -- but he asks "but why hasn't this approach taken off? Because there's no immediate benefit to the user." So I wanted to answer that.

"About Semantic MediaWiki, you ask, "why hasn't this approach taken off?" Well, because we're still hacking :) But besides that, there is a growing number of pages who actually use our beta software, which we are very thankful to (because of all the great feedback). Take a look at discourseDB for example. Great work there!

You give the following answer to your question: "Because there's no immediate benefit". Actually, there is benefit inside the wiki: you can ask for the knowledge that you have made explicit within the wiki. So the idea is that you can make automatic tables like this list of Kings of Judah from the Bible wiki, or this list of upcoming conferences, including a nice timeline visualization. This is an immediate benefit for wiki editors: they don't have to make pages like these examples (1, 2, 3, 4, 5, or any of these) by hand. Here's where we harness self-interest: wiki editors need to put in less work in order to achieve the same quality of information. Data needs to be entered only once. And as it is accessible to external scripts with standard tools, they can even write scripts to check the correctness or at least some form of consistency of the data in the wiki, and they are able to aggregate the data within the wiki and display it in a nice way. We are using it very successfully for our internal knowledge management, where we can simply grab the data and redisplay it as needed. Basically, like a wiki with a bit more DB functionality.

I will refrain from comparing to Freebase, because I haven't seen it yet -- but from what I heard from Robert Cook it seems that we are partially complementary to it. I hope to see it soon :)"

Now, I am afraid that since my feed's broken this message will not get picked up by PlanetRDF, and therefore no one will ever see it, darn! :( And it seems I can't use trackback. I really need to update to real blogging software.
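
To make the queries mentioned in the comment above a bit more concrete: on a wiki page they are written as inline {{#ask: ...}} queries, and the same query can also be sent to the wiki's api.php through Semantic MediaWiki's ask module. The following is only a sketch; the wiki URL, the category, and the property name are made up, and the exact response shape may vary between SMW versions:

```python
# A minimal sketch of running a Semantic MediaWiki query from the outside,
# through MediaWiki's api.php with SMW's "ask" module. The wiki URL, the
# category and the property are hypothetical; adapt them to your own wiki.
import requests

API = "https://smw.example.org/w/api.php"  # hypothetical SMW-enabled wiki

# "All pages in Category:Conference, with their start date, newest first."
ask_query = "[[Category:Conference]]|?Start date|sort=Start date|order=desc|limit=10"

r = requests.get(API, params={
    "action": "ask",
    "query": ask_query,
    "format": "json",
})
r.raise_for_status()

# The ask module returns one entry per matching page, with the requested
# printouts attached; the exact structure may differ by SMW version.
for title, page in r.json()["query"]["results"].items():
    print(title, page["printouts"].get("Start date"))
```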


Comments are still missing on this post.

The end of civilization?

This might be controversial with some of my friends, but no, there is no high likelihood of human civilization ending within the next 30 years.

Yes, climate change is happening, and we're obviously not reacting fast and effectively enough. But that won't kill humanity, and it will not end civilization.

Some highly populated areas might become uninhabitable. No question about this. Whole countries in southern Asia, Central and South America, and Africa might become too hot and too humid or too dry for human life. This would lead to hundreds of millions, maybe billions, of people who would want to move, to save their lives and the lives of their loved ones. Many, many people would die in these migrations.

The migration pressures on the countries that are climatically better off may become enormous, and it will either lead to massive bloodshed or to enormous demographic changes, or, most likely, both.

But look at the map. There are large areas in northern Asia and North America that would dramatically improve their habitability for humans if they warmed a bit. Large areas could become viable for growing wheat, fruits, corn.

As it is already today, and as it was for most of human history, we produce enough food and clean water and shelter and energy for everyone. The problem is not production; it is and will always be distribution. Facing huge upheaval and massive migration, the distribution channels will likely break down and become even more ineffective. The disruption of the distribution network will likely also endanger seemingly stable states, and places that expected to get through the events unscathed will be hurt by that breakdown. The fact that there would be enough food will make the humanitarian catastrophes even more maddening.

Money will make it possible to shelter away from the most severe effects, no matter where you start now. It's the poor that will bear the brunt of the negative effects. I don't think that's surprising to anyone.

But even if almost none of today's countries survive as they are, and if a few billion people die, the chances of humanity ending, of civilization ending, are negligible. Billions will survive through the 21st century, and will carry on history.

So, yes, the changes might be massive and in some areas catastrophic. But humanity and civilization will persevere.

Why this post? I don't think it is responsible to exaggerate the bad predictions too much. It makes the predictions less believable. Also, having a sober look at the possible changes may make it easier to understand why some countries react as they do. Does this mean we don't need to react and try to reduce climate change? If that's your conclusion, you haven't been reading carefully. I said something about possibly billions becoming displaced.

IFLScience: New Report Warns "High Likelihood Of Human Civilization Coming To An End" Within 30 Years

The height of Anson Mount

26 May 2024

Slop is filling up the Internet.

Today my Google Now feed even suggested (!) the following page which was focused solely on the height of Anson Mount. Now I assume Google thinks I'm interested in the actor because I've read about Star Trek.

https://berkah.blob.core.windows.net/ernews/how-tall-is-anson-mount.html

The article has a certain fascination, because it claims to be the ultimate guide to Anson Mount's height, and it goes in a lot of detail about it, for example explaining that height is often measured in feet and inches, or how having more height helps Mount find better fitting clothes.

It's also fascinating because it gives his height as 6'3" / 1.91 m. Google Knowledge Graph claims 6'1" / 1.85 m without a source. And IMDb states 5'11½" / 1.82 m. The website Celebrity Heights lists 5'11¼" / 1.81 m. I kid you not.

That makes me wonder whether I'm yearning back to times when people were publishing stuff like this (I'm not):

https://winteriscoming.net/2021/06/17/james-gunn-star-trek-anson-mount-fight-twitter-actors-lie-height/

Here we see reporting about a Twitter discussion between Mount and director James Gunn about actors lying about their height, and Mount seemingly being touchy about that subject.

The algorithmically pushed article also gives Mount's place of birth as Tennessee (Wikipedia, though, says Illinois, but trust whom you will).

The Web has, almost from the beginning, been a place you shouldn't trust blindly. I used to trust Google to be a first layer of defense. But the last few weeks indicate that this is no longer the case. Google now pushes AI-generated slop right to me, whereas it should be trying to keep me from even pulling it from the Web. I hope Google will figure that out.

In the last few weeks it has become increasingly difficult to get correct information on the Web. I'm noticing it around Pokémon Go, where I look up whether a Pokémon has already been released, or how to evolve it. I get arbitrary answers, which I have found plain wrong several times. Google's results are not ranked by trustworthiness, and now I have to start remembering which sites to trust, which sucks.

This is going to be exhausting.

(And if you think this is only true about pop culture stuff, then bless your heart)

The letter Đ

The letter Đ was introduced to Serbo-Croatian by Đuro Daničić, according to Wikipedia. I find it highly amusing that he introduced the very letter his own name starts with.

Wikipedia also claims that he was born Đorđe Popović, and all I can think of is "nah, that can't be right".

That would be like Jebediah Springfield who was born in a cabin that he helped build.

The name Zdenko

Today I saw that the Wikipedia article on Zdenko - my actual name - had been edited, and the meaning of the name had been changed from something I considered correct (Slavic form of Sidonius) to something I had never heard of before (diminutive of Zdeslav), while the reference stayed intact, so I thought it would be an easy revert. Just to do due diligence, I checked the given source - and funnily enough, it supported neither one nor the other, but gave an etymology from the Slavic word zidati, to build, to create.

That led me down a two-hour rabbit hole through different sources spanning the 19th and 20th centuries: sources claiming that the name is derived from the Slavic word zdenac, a well, or that Zdenko is cognate with Sidney; a Hessian source explaining that it is considered the root of the name Denje (so close to Denny!) and saying it has nothing to do with Sidonius; and much more.

In short, if you think that etymology is messy, I tell you, anthroponymy is far worse!

The place of birth of Ena Begović

I accidentally stumbled over a discrepancy regarding the place of birth of the Croatian actress Ena Begović, and noticed that if you ask Google for her place of birth, it answers Trpanj, whereas Wikipedia lists Split. I was curious where Google got Trpanj from, and how to fix it (especially now that I am not at Google anymore).

The original article in English Wikipedia was created in August 2005 by Raoul DMR. The article listed her as a "native of Split", which in September 2005 was turned into "born in Split".

In April 2018, Lole484, a user who would later be blocked for sockpuppeting, added that she was born in "Trpanj near Split". There is no Trpanj near Split, but there is a Trpanj on Pelješac. Realizing that, they removed the "near Split" part. In 2019, Ivan Ladic - a sockpuppet of Lole484 - added a reference for the place of birth being Trpanj: Večernji list, a well-known Croatian newspaper.

In April 2020, an anonymous editor changed the place of birth back to Split and added a reference to the Croatian national encyclopedia. Today, after starting a conversation on the English and Croatian talk pages a few weeks ago that got a single reply, I changed it back to Trpanj - accidentally while not logged in (thus anonymously) - hoping to encourage a discussion.

Interestingly, within a minute of changing the text, I went to Google and asked again for the place of birth, and Google again showed me Trpanj - but this time with the Wikipedia article and the updated snippet as the source. That is impressive.

Bing, on the other hand, had been saying Split for the last three weeks, ever since I started this adventure, whenever I checked. Today it still kept saying Split, referencing two sources, one of them English Wikipedia, even though I had already changed English Wikipedia. Not as fresh. Let's see how long this will stick. (Maybe folks at Bing should also talk with my colleagues at Wikimedia Enterprise to improve their freshness?)

The Croatian article was created in 2006, after the English one already stated Split, and Split was presumably copied over from the English version. Lole484 changed it to Trpanj in May 2018, and was later also blocked on Croatian Wikipedia, for unrelated vandalism. The same anonymous editor as on English Wikipedia changed it back to Split in April 2020.

The Serbian and Serbo-Croatian articles were started in 2007, Russian in 2012, Ukrainian in 2016, Albanian and Bulgarian in 2017, and Egyptian Arabic in October 2020. They all had Split from the beginning and throughout until today, presumably copied from English, directly or indirectly.

Amusingly, Serbian Wikipedia's opening sentence, which includes the place of birth being Split, receives a reference in January 2022 - but the reference actually states Trpanj.

None of the other language editions had their article started in the 2018-2019 window when English and Croatian stated the place of birth as Trpanj.

The only other Wikipedia language edition that saw a change of the place of birth was Bosnian. The article on Bosnian Wikipedia was started a few months after the Croatian one, in 2006 (thus being the third-oldest article), and presumably also just copied from either Croatian or English. Lole484 changed it to Trpanj in April 2018, just as on the other Wikipedias. Here it was reverted the next day, but Lole484's sockpuppet Ivan Ladic reinstated the change in January 2019. When I started this adventure, the only Wikipedia that stated Trpanj was the Bosnian one; all other eight language editions with an article said Split.

On Wikidata, the item was created in 2012, shortly after the launch of the site, based on the then-existing six sitelinks. The place of birth Split was added the following year, imported from Russian Wikipedia.

After I stumbled upon the situation, I added Trpanj as a second place of birth, and added sources to both Trpanj and Split.
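If you want to check the current state programmatically, here is a small sketch of a query against the Wikidata Query Service. This is purely illustrative: it looks the item up by its Croatian label rather than by its item ID, and it assumes the SPARQLWrapper Python library.

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql", agent="place-of-birth-check/0.1")
sparql.setQuery("""
SELECT ?place ?placeLabel WHERE {
  ?person rdfs:label "Ena Begović"@hr ;
          wdt:P19 ?place .                          # P19 = place of birth
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["placeLabel"]["value"])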

What's the situation outside of Wikipedia? Both places have pretty solid references going for them:

Trpanj

  • Večernji list, article from 2016
  • Biografija stated Trpanj, no date, but after 2013 (Archive has the first copy from October 2020)
  • tportal.hr has an article on a photography exhibition in Trpanj about Ena Begović, saying the place is chosen because it is her place of birth, published 2016
  • Jutarnji list, a well known Croatian newspaper, has a long article about the actress, calling their house in Trpanj the 'rodna kuća', their birth home, of Ena and her sister Mia. This does not necessarily mean that it is literally the house they were born in. Published 2010
  • HRT (Croatian national broadcaster), published 2021
  • Dubrovački Vjesnik, local newspaper close to Trpanj, lists Trpanj, article from 2020
  • Slobodna Dalmacija, a local newspaper from Split, writes Trpanj (but note that this is the same author as the previous article)
  • Jutarnji list, published 2020 (but note that this is the same author as the previous article)
  • Geni.com says Trpanj, last updated 2022

Split

24sata says she grew up in Trpanj, gives her date of birth, but avoids stating her place of birth.

Only very few of the sources predate the English Wikipedia article, most notably:

I also looked up her sister Mia and found her profile on Facebook and sent her a message, but I assume she never even saw this message request. At least I never received an answer (and I didn't expect to). For Mia, the situation is similar: her article originally stated Split, was changed by Lole484 and reverted by an anonymous user, both in English and Croatian, whereas the other languages just list Split throughout.

There were many other sources, going one way or the other. Many of them probably just copied from each other. The fact that there were some sources, such as Večernji, that stated Trpanj before it ever made it to Wikipedia - but after Split was listed in Wikipedia - swayed me towards Trpanj. Also, it was not always the strongest sources (e.g. usually I would rank the national encyclopedia over Večernji) that said Trpanj, but it was the most in-depth articles, the ones that looked like the authors actually took the time to do some research. Many of the sources looked like they were just bots copying from Wikipedia or Wikidata, or quick pieces taking the base data from Wikipedia.

But then, finally, I stumbled upon one more source: in 2019, index.hr republished a 1989 interview by Kemal Mujičić with Ena and Mia Begović. Here's a quote from the interview:

Rođene su u Trpnju na Pelješcu.
Ena: Molim vas, to posebno naglasite: Svi misle da smo Dubrovkinje.
Mia: Zanimljivo je da smo u Trpnju rođene kao podstanarke. Roditelji su tek poslije sagradili onu kućicu.

Translation:

They (Ena and Mia) were born in Trpanj on Pelješac.
Ena: Please put an emphasis on this: everyone thinks we are from Dubrovnik.
Mia: It is interesting that in Trpanj we were born as renters. Our parents only built the little house (in which we lived) later.

Ha! It is amusing to see that Ena's worry was that everyone thinks they are from Dubrovnik. I couldn't find a single source claiming that (but she went to high school (gimnazija) in Dubrovnik, which is probably the origin of that statement from 30 years ago). Also, so much for the "birth house".

Given all of that, I am going with Trpanj, and I am making the changes to as many Wikipedia language editions as I can (if someone can help with Arabic and Egyptian Arabic for Ena and Mia, that would be swell - I cannot edit those language editions). Let's see if it sticks.

So, why did Google know the correct answer, even though its usual sources, such as Wikidata and Wikipedia, were saying Split? I mustn't say too much, but it is due to the Google Knowledge Graph team and their quality processes. Seriously, congratulations to my former colleagues at Google for getting that right!

Just for fun, I also asked ChatGPT (on February 15). And the answer surprised me: when I asked in English, it gave me, unsurprisingly, Split (certainly what the Web seems to believe). But when I asked in Croatian, it gave me a different answer! And the answer was neither Split, nor Trpanj, and also not Dubrovnik - but Zagreb! It is interesting that something like the place of birth of an actress would lead to different answers depending on the language. I would have expected this knowledge to be in the 'world knowledge' of the LLM, not in the 'language knowledge'. I can't check out Bing's chat interface, as I have no access to it, but I would be curious what it says and how long it takes to update.

Thank you for going along on this rather nerdy ride of citogenesis.

Update

Ah, only a few hours after this was published, Bing got updated. Not only did it switch from Split to Trpanj, it uses this very blog post as one of the two authoritative references for Trpanj!

The right to work

20 May 2023

I've been a friend of Universal Basic Income for thirty years, but over the last twenty years I have developed growing reservations about it, and many questions. This article about an experiment with a right to work (text in German) was the first text I have read in a while that substantially impacted my thinking on the topic. I recommend reading it.

Work is not just a source of money, but for many also a source of meaning, pride, structure, motivation, and social connections. Having voluntary access to work seems to be one major component that is necessary on a societal level, in addition to a universal basic income that allows everyone to live in dignity. Note: I think work should be construed widely. If someone has something that fills that need, that's work. Raising children, taking care of a garden, writing a book, refining piano skills, creating art, taking care of others, taking care of yourself - all of these easily count as work in my book.

I wish we were as willing and able to experiment with different ways of structuring society as we are willing and able to experiment with technology. We deployed the Internet to the world without worrying about the long-term consequences, but we're cautious about giving everyone enough money not to be hungry. That's just broken. I have always been disappointed that sociology and politics, as studied and taught in academia, are mostly descriptive and not constructive endeavors.

The story of the Swedish calendar

Most of us are mostly aware of how the calendar works. There are twelve months in a year, each month has 30 or 31 days, and then there's February, which usually has 28 days and sometimes, in what is called a leap year, 29. In general, years divisible by four are leap years.

This calendar was introduced by none other than Julius Caesar, better known for conquering the known world and ruling Rome. Among his job titles was "supreme bridge builder" - the bridge connecting the human world with the world of the gods. One of the responsibilities of this role was to decide how many days to add to the end of the calendar year, because the Romans had noticed that their calendar was getting misaligned with the seasons: it was simply a bit too short. So, for every year, the supreme bridge builder had to decide how many days to add to the calendar.

Since we are talking about the Roman Republic, this was unsurprisingly misused for political gain. If the supreme bridge builder liked the people in power, he might grant a few extra weeks. If not, no extra days. Instead of keeping the calendar and the seasons aligned, the calendar got even more out of whack.

Julius Caesar spearheaded a reform of the calendar: instead of letting the supreme bridge builder decide how many days to add, the reform devised rules founded in observation and mathematics - leading to the calendar we still have today: twelve months each year, each with 30 or 31 days, except February, which had 28, but every four years would have 29. This is what we today call the Julian calendar. It was not perfect, but pretty good.

Over the following centuries, the role of the supreme bridge builder - or, in Latin, Pontifex Maximus - transferred from the Emperor of Rome to the Bishop of Rome, the Pope. And with continued observation over the centuries it was noticed that the calendar was again getting out of sync with the seasons. So it was the Pope - Gregory XIII - who, in his role as Pontifex Maximus, decided that the calendar should be fixed once again. The committee he set up to work on this came up with fabulous improvements, which would keep the calendar in sync over a much longer time frame. In addition to the rules established by the Julian calendar, every hundred years we would drop a leap year. But every four hundred years, we would skip dropping the leap year (as we did in 2000, which not many people noticed). And in 1582, this calendar - called the Gregorian calendar - was introduced.
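To make those rules concrete, here is a small sketch of the two leap year rules in Python (this is standard calendrical arithmetic, nothing specific to this post):

def is_julian_leap_year(year):
    # Julian rule: every fourth year is a leap year.
    return year % 4 == 0

def is_gregorian_leap_year(year):
    # Gregorian rule: every fourth year, except full centuries,
    # unless the century is divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 1900 was not a Gregorian leap year, 2000 was - the case not many people noticed.
print(is_julian_leap_year(1900), is_gregorian_leap_year(1900))  # True False
print(is_julian_leap_year(2000), is_gregorian_leap_year(2000))  # True True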

Imagine leading a committee that comes up with rules on what the whole world would need to do once every four hundred years - and mostly having these rules implemented. How would you lead and design such a committee? I find this idea mind-blowing.

Between the time of Caesar and 1582, more than sixteen centuries had passed, and in that time the calendar had drifted slightly out of sync - by roughly one day per century, skipping every fourth. To deal with the accumulated shift, it was decided that ten calendar days needed to be skipped (ten, rather than more, because the reform realigned the calendar with where it stood at the time of the Council of Nicaea in 325, not with Caesar's time). The 4th of October 1582 was followed by the 15th of October 1582. In 1582, there was no 5th or 14th of October, nor any of the days in between, in the countries that adopted the Gregorian calendar.

This led to plenty of legal discussions, mostly about monthly rents and wages: is this still a full month, or should the rent or wage be prorated to the number of days? Should annual rents, interest, and taxes be prorated by these ten days, or not? And what day of the week should the 15th of October be?


The Gregorian calendar was a marked improvement over the Julian calendar with regards to keeping the seasons in sync with the calendar. So one might think its adoption should be a no-brainer. But there was a slight complication: politics.

Now imagine that today the Pope gets out on his balcony, and declares that, starting in five years, January to November all have 30 days, and December has 35 or 36 days. How would the world react? Would they ponder the merits of the proposal, would they laugh, would they simply adopt it? Would a country such as Italy have a different public discourse about this topic than a country such as China?

In 1582, the situation was similarly difficult. Instead of pondering the benefits of the proposal, the source of the proposal and one's relation to that source became the main deciding factor. Instead of adopting the idea because it was a good idea, the idea was adopted - or not - because the Pope of the Catholic Church had declared it. The Papal States and the Spanish and French kingdoms were among the first to adopt it.

Queen Elizabeth wanted to adopt it in England, but the Anglican bishops were fiercely opposed, precisely because it had been suggested by the Pope. Other Protestant countries and the Orthodox countries simply ignored it for centuries. And thus there was a 5th of October 1582 in England, but not in France, which led to a number of confusions over the following centuries.

Ever wondered why the October Revolution started on November 7? There you go. There is even a story that Napoleon won an important battle (either the Battle of Austerlitz or the Battle of Ulm) because the Russian and Austrian forces coordinated badly, the Austrians using the Gregorian and the Russians the Julian calendar. The story is false, but it is a great one.

Today, the International Day of the Book is April 23 - the date of death of both Miguel de Cervantes and William Shakespeare in 1616, the two giants of literature in their respective languages - with the amusing side effect that they actually died about ten days apart, even though they died on the same calendar day, just in different calendars.

It wasn't until 1923 that, for most purposes, all countries had deprecated the Julian calendar; for religious purposes some still follow it - which is why many Orthodox churches celebrate Christmas on January 7, and the Amish keep "Old Christmas" on January 6. Starting in 2101, that should shift by another day - and I would be very curious to see whether it will, or whether by then January 7 will have solidified as the Christmas date.


Possibly the most confusing story about adopting the Gregorian calendar comes from Sweden. Like most Protestant countries, Sweden did not initially adopt the Gregorian calendar and stuck with the Julian one, until in 1699 it decided to switch.

Now, the idea of skipping eleven or twelve days in one go did not sound appealing - remember all the chaos that dropping those days had caused in other countries. So Sweden decided that instead of dropping the days all at once, it would drop them one by one, by skipping the leap days from 1700 until 1740, when the two calendars would finally have caught up.

In 1700, February 29 was skipped in Sweden - which didn't bring them any closer to Gregorian countries such as Spain, because those skipped the leap day in 1700 anyway. But it brought them out of alignment with Russia - by one day.

A war with Russia started (not about the calendar - but, incidentally, just a week before the calendars went out of sync), and due to the war Sweden forgot to skip the leap days in 1704 and 1708 (they had other things on their minds). As this was embarrassing, in 1711 King Charles XII of Sweden declared the plan abandoned and added one extra day the following year to realign with Russia. And because 1712 was a leap year anyway, Sweden had not only a February 29, but also a February 30, 1712. The only legal February 30 in history so far.

It took not only the death of Charles XII, but also that of his sister (who succeeded him) and of her husband (who succeeded her) in 1751, before Sweden could move beyond that embarrassing episode. In 1752, Sweden switched from the Julian to the Gregorian calendar by cutting February short: it ended after February 17, followed directly by March 1.


Somewhere on my To-Do list, I have the wish to write a book on Wikidata. How it came to be, how it works, what it means, the complications we encountered, and the ones we missed, etc. One section in this book is planned to be about calendar models. This is an early, self-contained draft of part of that section. Feedback and corrections are very welcome.


Tim Bray leaving Amazon in protest

Tim Bray, co-author of XML, stepped down as Amazon VP over their handling of whistleblowers on May 1st. His post on this decision is worth reading.

Time on Mars

This is a fascinating and fun listen about the Mars rover missions. Because a day on Mars is about 40 minutes longer than on Earth, and because the Mars rovers run on solar panels, the people working on the mission had to live on Mars time. So they have watches showing Mars time. They invent new words in their language, speaking of sol instead of day, of yestersol, and they start calling themselves Martians. 11 minutes.
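As a back-of-the-envelope illustration (the sol length below is the commonly cited value, not something taken from the podcast):

# A Martian sol is about 24 hours, 39 minutes and 35 seconds long.
EARTH_DAY_SECONDS = 24 * 60 * 60              # 86,400 s
MARS_SOL_SECONDS = 24 * 3600 + 39 * 60 + 35   # 88,775 s

extra_minutes_per_sol = (MARS_SOL_SECONDS - EARTH_DAY_SECONDS) / 60
print(f"A sol is about {extra_minutes_per_sol:.0f} minutes longer than an Earth day.")

# Living on Mars time therefore drifts against Earth clocks:
sols_for_full_day_drift = EARTH_DAY_SECONDS / (MARS_SOL_SECONDS - EARTH_DAY_SECONDS)
print(f"After about {sols_for_full_day_drift:.0f} sols the drift adds up to a whole Earth day.")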

Toy Story 4

Toy Story 4 was great fun!

Toy Story 3 had great closure (and a lot of tears), so what could they do to justify a fourth part? They developed the characters further than ever before. Woody is faced with a lot of decisions, and he has to grow in order to say an even bigger good-bye than last time.

Interesting fact: PETA protested the movie because Bo Peep uses a shepherd's crook, and those are considered a "symbol of domination over animals."

Bo Peep was a pretty cool character in the movie. And she used her crook well.

The cast was amazing: besides the many who kept their roles (Tom Hanks, Tim Allen, Annie Potts, Joan Cusack, Timothy Dalton, even keeping Don Rickles via archive footage after his death, and everyone else), there were many new voices (Betty White, Mel Brooks, Christina Hendricks, Keanu Reeves, Bill Hader, Tony Hale, Key and Peele, and Flea from the Red Hot Chili Peppers).

Turing Award to Bengio, LeCun, and Hinton

Congratulations to Yoshua Bengio, Yann LeCun, and Geoffrey Hinton on being awarded the Turing Award, the most prestigious award in Computer Science.

Their work has revolutionized huge parts of computer science as it is used in research and industry, and has led to the current impressive results in AI and ML. They kept working on an area that was deemed unpromising, and it has suddenly swept through whole industries and reshaped them.

Twenty years

On this day twenty years ago, on January 15, 2001, I started my third website, Nodix, and I have kept it up since then (unlike my previous two websites, which are lost to history, as the Internet Archive apparently never captured them). A few years later I renamed it to Simia.

Here is the first entry: Willkommen auf der Webseite von Denny Vrandecic! ("Welcome to Denny Vrandecic's website!")

My website never became particularly popular, although I meticulously kept track of how many hits I got and all of that. It was always a fun side project for which I sometimes had more and sometimes less time.

The funniest thing is that - completely incidentally - it was exactly the same day that another website was started, one that I would, over the years, spend much more time on: Wikipedia.

Wikipedia changed my life, not only once, but many times.

It is how I met Kamara.

It is how I met a lot of other very smart people, too. It became part of my research work and my PhD thesis. It became the motivation for many of the projects I have started, be it Semantic MediaWiki, Wikidata, or Abstract Wikipedia. It is the reason for my career trajectory over the last fifteen years. It is hard to overstate how influential Wikipedia has been on my life.

It is hard to overstate how important Wikipedia has become for modern AI and for the Web of today. For smaller language communities. For many, many people looking for knowledge. And for the many people who realised that they can contribute to it too.

Thanks to the Wikipedia community, thanks to this marvellous project, and happy anniversary and many happy returns to Wikipedia!

Unexpected problems

As you know, I'm a strong believer in the vision of the Semantic Web, and I actively pursue this goal. I am not too sure what it means, but I have hundreds of ideas floating through my head, about what will be possible in this future...

But the road seems longer than expected. For some time now I have had the dlpconvert and rdf2owlxml web services running. It is very enlightening and interesting to see what kinds of ontologies are used for testing - and I most certainly don't mean the domains of the ontologies, but rather the syntax.

Both services state very clearly which syntaxes you may use. dlpconvert allows only the OWL XML presentation syntax - rather obscure, I admit; that's the main reason rdf2owlxml was offered. But most people didn't care, they just kept on using RDF - and not just OWL in RDF/XML serialisation, but much simpler, plain RDF.

Yes, every RDF graph is in OWL Full. But dlpconvert only deals with OWL DL. That's stated explicitly. And it works even less with Abstract Syntax or N3. All of these were tried.

I most definitely don't want to rant about users here. You should never rant about users (I mean, in public). Especially since everyone who uses a service like dlpconvert is probably quite intelligent and has some expertise in the field of the Semantic Web. It's not their fault. It isn't mine either; I wrote quite explicitly what is needed. Maybe it's the W3C's fault, or maybe it's just down to politics.

The fine differences between RDF, RDFS, RDF(S), OWL, OWL Full, OWL DL, OWL Lite, DLP - yes, I said fine differences between RDF and OWL DL - it's just too much to cope with. If it is too much for us, what do we expect of the future user of the Semantic Web? The web as we know it grew to its present size because it was easy. It wasn't because of standards. For the first few years no one really cared about the HTML standard, at least not to the extent we do today in the Semantic Web. Even with tons of errors, pages would load and show nice results. It was a very forgiving system. Now guess why it was so widely adopted.

The problem is: maybe we really need to be as strict as we are. But I hope we don't. I strongly believe in the virtue of "View source" - but this means understandable views of the source, not RDF/XML serialisation, and still easy to copy. Only this way can the Semantic Web lift off from the grassroots, from the users. It was the users who created the Web in its first years, not the companies. I don't know why everybody is turning to the companies today.

Oh, I should stop, it sounds like ranting again.

Unique Name Assumption

I just read Andrew Newman's entry on the Unique Name Assumption (UNA). He thinks that not having a UNA is "weird, completely backwards and very non-intuitive". Further, he continues that "It does seem perverse that the basis for this, the URI, is unique." He cites an OWL Flight paper that caused me quite some headache a few weeks ago (because there was so little in it that I found to like).

Andrew, whose blog I really like to read, makes one very valid point: "It doesn't really say, though, why you need non-unique names."

There was an OWL requirement that gives a short rationale regarding the UNA, but it seems it was not stated obviously enough.
Let's make a short jump into the near future: the Semantic Web is thriving, private homepages offer rich information about anything, and even companies see the value of offering machine-processable information - ontologies and knowledge bases everywhere!

People want to say how they liked the movie they just saw. They enrich their movie review with an RDF-statement that says

http://semantic.nodix.net/movie#Ring_2 http://semantic.nodix.net/rating#rated http://semantic.nodix.net/rating#4_of_5.

Or rather, their editor creates this statement automatically and publishes it along the review.

I'd be highly surprised if IMDb used the same URI to denote the movie. They would probably use an IMDb URI. And so could I, using the IMDb-specified URI for the movie. But I didn't, and I don't have to. If I want to state that this is the same movie, I can assert that explicitly. If I had the UNA, I couldn't do that. The two knowledge bases could not work together.

With the UNA, many knowledge bases relying on inverse functional properties would break as well. FOAF, for example, uses this, identifying persons via an inverse functional property over the hash of their email address. With the UNA, this wouldn't work anymore.

Let's take another example. On my mother's webpage there could be a statement saying she has three kids, Jurica, Rozana and Zdenko. I would state on my page that I am my mom's kid. My sister, being the social kind, tells the world about her mom and her two brothers, Jurica and Denny.
Now, if we have the UNA, a reasoner would infer that one of us is lying. But all of us are very honest, trustworthy people. The problem here is that my name is Zdenko, but most people refer to me as Denny. The UNA says that Denny and Zdenko cannot be the same person. Without the UNA, we simply don't assume either way. And we can still state the distinctness explicitly: my mom could have said that she has three kids, Jurica, Rozana and Zdenko, and that those are mutually distinct. Problem solved.

You could say: wait, even with the UNA we could still just claim that Zdenko owl:sameAs Denny, and the problem wouldn't arise. That is true. But then I would have to be aware of my mom's statements first. That may be OK at a scale like this, but imagine it in the wilds of the Web - you would have to consider every statement ever made about something before you could state anything yourself. Impossible! And you would introduce non-monotonic inference, which you probably don't really want.

What does this mean? Let's take the following sequence of statements, and consider the answer to the question "Is Kain one of Adam's two sons?". So we know that Adam has two sons, and that there is an entity named Kain.

Adam fatherOf Abel.

UNA and non-UNA both answer: don't know.

Adam fatherOf Cain.

UNA says "No, Kain is no son of Adam". non-UNA says: "Sorry, I still don't know".

Cain sameAs Kain.

UNA says "Yes, Kain is a son of Adam (hope you didn't notice my little lie seconds before)". non-UNA says: "Yes, Kain is a son of Adam".

Assuming that, instead of the last statement, we claimed that

Adam fatherOf Kain.

UNA would say: "I'm messed up, I don't know anything, my database is inconsistent, sorry." , whereas non-UNA would answer: "Yes, Kain is a son of Adam (and by the way, maybe Kain and Abel are the same, or Kain and Cain, or Abel and Cain)."

The problem is that in the setting of the Semantic Web you have a World Wide Web with thousands of facts, always changing, and you must assume that you haven't fetched all the information about a subject. You really can't know whether you know everything there is to know about Adam. But you still want to be able to ask questions. And you want to get answers, and you want these answers to be monotonic. You don't want the Semantic Web to answer "No" one day, "Yes" the next, and sometimes "I don't know"; but you could be fine with it either providing the correct answer or none at all.

OWL Flight and other proponents of the UNA seem to forget that it's a Semantic Web, not just a semantic knowledge base. If you want the UNA, take your Prolog engine. The Semantic Web is more than that, and therefore it has to meet certain requirements - and not making the unique name assumption is an astonishingly basic one. Don't forget, you can create locally unique names if needed. But the other way around would be much harder.

Still, Andrew's arguments lead to a very important question: given that Andrew is an intelligent guy with quite some experience in this kind of stuff, how likely is it that Joe Random User will have really big problems grasping concepts such as non-unique names? How should the primers be written? How should the tools work in order to help users deal with this - without requiring them to study these ideas in advance?

Still a long way to go.

Unique Name Assumption - another example

Ian Davis has a very nice example illustrating the Unique Name Assumption: "Two sons and two fathers went to a pizza restaurant. They ordered three pizzas. When they came, everyone had a whole pizza. How can that be?"

Better than my examples. And much shorter!

Using ChatGPT for fact checking

I wanted to explore the idea of using ChatGPT for fact checking. So I took a random Wikipedia article - it gave me the article on Jewish Ideas Daily - and copied the text into ChatGPT with the following prompt:

Here's the text of a Wikipedia article. Please find the three statements that are most likely incorrect and rank them, in order to guide fact checking:

Followed by the Wikipedia article text. Here's the answer:

  1. "Jewish Ideas Daily was edited by Margot Lurie from its founding in January 2010 until December 2012." This statement is incorrect because according to the article, Margot Lurie was the editor of Jewish Ideas Daily "until December 2012."
  2. "Jewish Ideas Daily drew notice for controversial features including one accusing Christopher Hitchens of antisemitism." This statement is difficult to verify without more context, as it is not clear when this feature was published or what specifically it accused Hitchens of.
  3. "Like Jewish Review of Books, and Jewish Ideas Daily, Mosaic is funded by the Tikvah Fund." This statement is incorrect because according to the article, Mosaic succeeded Jewish Ideas Daily in June 2013 and is therefore not funded by the Tikvah Fund in the same way as Jewish Ideas Daily and Jewish Review of Books.

Regarding the first statement, GPT says that the sentence "Jewish Ideas Daily was edited by Margot Lurie from its founding in January 2010 until December 2012." is incorrect. I am not disagreeing, but that sentence does not appear in the Wikipedia article text.

Regarding the second statement, GPT says that there is no reference for that statement. In fact, there is, and Wikipedia provides it.

Regarding the third statement, I didn't check it, but the argument for why it should be checked is unconvincing: if the Tikvah Fund financed Jewish Ideas Daily, why would it not finance its successor Mosaic? It would be good to add a reference for these statements, but that's not what was suggested.

In short: the review by ChatGPT looks really good, but the suggestions in this case were not good.

The exercise was helpful insofar as the article's infobox and text disagreed on the founding date of the publication. I fixed that, but it's not something ChatGPT pointed out (and it couldn't have, as I didn't copy and paste the infobox).
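For what it's worth, here is roughly how such a check could be scripted. I actually just pasted the text into the ChatGPT web interface, so the openai Python client, the model name, and the file name below are assumptions for illustration, not what I used.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The plain text of the Wikipedia article, saved locally (hypothetical file name).
article_text = open("jewish_ideas_daily.txt").read()

prompt = (
    "Here's the text of a Wikipedia article. Please find the three statements "
    "that are most likely incorrect and rank them, in order to guide fact checking:\n\n"
    + article_text
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)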

Views on the US economy 2024

By most metrics, the American economy is doing well. But the perception of the American economy is much weaker than its actual strength. This finally seems to be slowly changing, and people are realizing that things are actually not that bad.

Here's an article that tries to explain the gap: because of high interest rates, credit is expensive - including credit card debt and mortgages for anyone buying a home now.

But if you go beyond the anecdotes as this essay does, and look at the actual data, you will find something else: it is a very partisan thing.

For Democrats, we find that it depends. Basically, the more you fit the dominant group - the richer, the older, the better educated, the "whiter", the more "male" you are - the better your view of the economy.

For Republicans, we don't find any such differentiation. Everyone is negative about it, across the board. Their perception of the economic situation is starkly different from that of their Democratic peers.

WWW2006 social wiki

18 May 2006

The WWW2006 conference next week has a social wiki. People can talk about evening activities, about planning BOF sessions, about their drinking habits. If you're coming to the conference, go there and make a page for yourself. I think it would be fun to capture the information and to see how much data we can get together... data? Oh yes, I forgot to tell you: the WWW2006 wiki is running on Semantic MediaWiki.

Yay!

Let's show how cool this thing can get!

War in the shadows

A few years ago I learned with shock and surprise that in the 1960s and 1970s Croatians were assassinated by the Yugoslav secret service in other countries, such as Germany, and that the German government back then chose to mostly look away. That upset me. In the last few weeks I listened to a number of podcasts going into more detail about these events, and it turned out that some of the murdered Croatians were entangled with the WW2 fascist Croatian Ustasha regime - either by having been Ustasha themselves, or by actively working towards recreating the Ustasha regime in Croatia.

Some of the people involved were actively pursuing terrorist acts - killing diplomats and trying to kill politicians, hijacking and possibly downing airplanes, bombing cinemas, and even attempting an actual armed uprising.

There was a failed plan to plant seventeen bombs along the Croatian Adriatic coast, on tourist beaches, during the early tourist season, and to detonate them all simultaneously, in order to starve Yugoslavia of tourism income.

Germany itself struggled with these events: its own secret service was tasked with protecting the German state, and it was initially unclear how to deal with organizations whose goal was to destabilize a foreign government. Laws and rules were changed in order to deal with the Croatian extremists - rules that were later applied to the PLO, the IRA, Hamas, etc.

Knowing a bit more of the background - that, it seems, a communist regime was assassinating fascists and terrorists - does not excuse these acts, nor the German inactivity. These were political assassinations without due process. But it makes it a bit easier to understand why the post-Nazi German administration, which was at that time busy with its own wave of terror by the Rote Armee Fraktion (RAF), did not give more attention to these events. And Germany received some of its due when Yugoslavia captured some of the kidnappers and murderers of Hanns Martin Schleyer and did not extradite them to Germany, but let them go, because Germany would not agree to hand over Croatian separatists in return.

Croatians had a very different reputation in the 1970s than they have today.

I still feel like I have a very incomplete picture of all of these events, but so many things happened that I had no idea about.

Source podcasts in German

Web Conference 2019

25 May 2019

Last week saw the latest incarnation of the Web Conference (previously known as WWW or dubdubdub), going from May 15 to 17 (with satellite events the two days before). When I was still in academia, WWW was one of the most prestigious conference series for my research area, so when it came to be held literally across the street from my office, I couldn’t resist going to it.

The conference featured two keynotes (the third, by Lawrence Lessig, was cancelled on short notice due to a family emergency):

Watch the talks on YouTube on the links given above. Thanks to Marco Neumann for pointing to the links!

The conference was attended by more than 1,400 people (closer to 1,600?), making it the second largest since its inception (trailing only Lyon last year), and about double the size it used to be only four or five years ago. The conference dinner in the Exploratorium was relaxed and enjoyable. The acceptance rate was 18%, which made for 225 accepted full papers.

The proceedings, spanning 5,047 pages, are available online for free (yay!), so browse them for papers you find interesting. Personally, I really enjoyed the papers on the use of WhatsApp to spread misinformation before the Brazil election, on Dataset Search, and on preventing SPARQL queries from blocking the endpoint.

I had the feeling that Machine Learning was taking much more space in the program than it used to when I used to attend the conference regularly - which is fine, but many of the ML papers were only tenuously connected to the Web (which was the same criticism that we raised against many of the Semantic Web / Description Logic papers back then).

Thanks to the general chairs, Leila Zia and Ricardo Baeza-Yates, for organizing the conference, and thanks to the sponsors, particularly Microsoft, Bloomberg, Amazon, and Google.

The two workshops I attended before the Web Conference were the Knowledge Graph Technology and Applications 2019 workshop on Monday, and the Wiki workshop 2019 on Tuesday. They have their own trip reports.

If you have trip reports, let me know and I will link to them.

Welcome!

Welcome to my new blog! Technology kindly provided by Blogger.com

What is a good ontology?

You know? Go ahead, tell me!

I really want to know what you think a good ontology is. And I will make it the topic of my PhD: Ontology Evaluation. But I want you to tell me. And I am not the only one who wants to know. That's why Mari Carmen, Aldo, York and I have submitted a proposal for a workshop on Ontology Evaluation, and happily it got accepted. Now we can officially ask the whole world to write a paper on that issue and send it to us.

The EON2006 Workshop on Evaluation of Ontologies for the Web - 4th International EON Workshop (that's the official title) is co-located with the prestigious WWW2006 conference in Edinburgh, UK. We were also very happy that so many renowned experts accepted our invitation to the program committee, thus ensuring high-quality reviews for the submissions. The deadline is almost two months away: January 10th, 2006. So you have plenty of time to write that mind-busting, fantastic paper on Ontology Evaluation by then! Get all the details on the workshop website http://km.aifb.uni-karlsruhe.de/ws/eon2006.

I really hope to see some of you in Edinburgh next May, and I am looking forward to lively discussions about what makes an ontology a good ontology (by the way, if you plan to submit something, I would love to get a short notification - that would really be great. But it is by no means required. It's just so that we can plan a bit better).

What's DLP?

OWL has several sublanguages, which are all more or less connected to each other, and they don't make the mumbo-jumbo of ontology languages any clearer. There is the almighty OWL Full, there's OWL DL, the easy* OWL Lite, and then there are numerous 'proprietary' extensions, which are more (OWL-E) or less (OWL Flight) compatible and useful.

We'd like to add another one, OWL DLP. Not because we think there aren't enough already, but because we think this one makes a difference: it has some nice properties, like being fully translatable to logic programs, it is easy to use, and it is fully compatible with standard OWL - you don't have to use any extra tools.

If you want to read more, some colleagues at the AIFB and I wrote a short introduction to DLP (and the best thing is: when I say short, I mean short - just two pages!). It's meant to be easy to understand as well - but if you have any comments on that, please provide them.

 * whatever easy means here

What's in a name - Part 1

There are tons of mistakes that can occur when writing down RDF statements. I will post a six-part series of blog entries, starting with this one, about what can go wrong in the course of naming resources, why it is wrong, and why you should care - if at all. I'll try to mix experience with pragmatics, usability with philosophy. And I surely hope that, if you disagree, you'll say so in the comments or in your own blog.

The first one is the easiest to spot. Here we go:

"Politeia" dc:creator "Plato".

If you don't know about the differences between literals, QNames and URIs, please take a look at the RDF Primer. It's easy to read and absolutely essential. If you do know about the differences, you already know that the above actually isn't a valid RDF statement: you can't have a literal as the subject of a statement. So, let's change this:

philo:Politeia dc:creator "Plato".

What's the difference between these two? In the first one you say that "Plato" is the creator of "Politeia" (we take the semantics of dc:creator for granted for now). But in the second you say that "Plato" is the creator of philo:Politeia. That's like in Dragonheart, where Bowen tries to find a name for the dragon because he can't just call him "dragon", and he decides on "draco". The dragon comments: "So, instead of calling me dragon in your own language, you decide to call me dragon in another language."

Yep, we decide to talk about Politeia in another language. Because RDF is another language. It tries to look like ours, it even has subjects, objects, predicates, but it is not the language of humans. It is (mostly) much easier, so easy in fact even computers can cope with it (and that's about the whole point of the Semantic Web in the first place, so you shouldn't be too surprised here).

"Politeia" has a well defined meaning: it is a literal (the quotation marks tell you that) and thus it is interpreted as a value. "Politeia" actually is just a word, a symbol, a sign pointing to the meant string Politeia (a better example would be: "42" means the number 42. "101010b", "Fourty-Two" or "2Ah" would have been perfectly valid other signs denoting the number 42).

And what about philo:Politeia? How is it different from "Politeia", what does this point to?

philo:Politeia is a Qualified Name (QName), and thus ultimately a shorthand notation for a URI, a Uniform Resource Identifier. In RDF, everything has to be a resource (well, remember, RDF stands for Resource Description Framework), but that's not really a constraint, as you may simply consider everything a resource. Even you and me. And URIs are names for resources. Universally (well, at least globally) unique names. Like philo:Politeia.
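To see the difference in practice, here is a minimal sketch using the rdflib Python library; the namespace http://semantic.nodix.net/philo/ behind the philo: prefix is made up for this example.

from rdflib import Graph, Literal, Namespace

PHILO = Namespace("http://semantic.nodix.net/philo/")
DC = Namespace("http://purl.org/dc/elements/1.1/")

g = Graph()
g.bind("philo", PHILO)
g.bind("dc", DC)

# philo:Politeia dc:creator "Plato" .
# The subject is a URI (a resource); only the object is a literal value.
g.add((PHILO.Politeia, DC.creator, Literal("Plato")))

print(g.serialize(format="turtle"))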

You may wonder what your URI is, the one URI denoting you. Or what the URI of Plato is, or of the Politeia. How do you choose good URIs, and what may go wrong? And what do URIs actually denote, and how? We'll discuss all of this in the next five parts of this series - don't worry, just stay tuned.

What's in a name - Part 2

How do you give a resource a name, a URI? Let's look at this statement:

movie:Terminator dc:creator "James Cameron".

Happy with that? This is a valid RDF statement, and you understand what I wanted to say, and your RDF machine will be able to read and process it, too, so everything is fine.

Well, almost. movie:Terminator is a QName, and movie: is just a shorthand prefix, a namespace, that actually has to be defined as something. But as what? URIs are well-defined, so we shouldn't just define the namespace arbitrarily. The problem is, someone else could do the same, and suddenly one URI could denote two different resources - this is called URI collision, and it is the next worst thing to immanentizing the Eschaton. That's why you should grab some URI space for yourself; there you may define as many URIs as you like (remember, URIs are meant to be universally unique - that's why they make such a fuss about URI space and its ownership).

I am the webmaster of http://semantic.nodix.net, so that URI belongs to me, and with it all the URIs starting with it. Thus I decide that movie: shall be http://semantic.nodix.net/movie/. Our example statement is thus the same as:

http://semantic.nodix.net/movie/Terminator http://purl.org/dc/elements/1.1/creator "James Cameron".

So this is what the computer actually sees. The shorthand notation above is just for humans. But if you're like me and you see the above subject, you're already annoyed that it is not a link, that you can't click on it. So you copy it into your browser address bar and go to http://semantic.nodix.net/movie/Terminator. Oops. A 404, the website is not found. You start thinking: oh man, stupid! Why give the resource a name that looks so much like a web address and then point it to 404 nirvana?
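As an aside, the expansion is easy to see with a small rdflib sketch (rdflib is my tool choice here; the movie: namespace is the made-up one from this post). The Turtle serialization keeps the human-friendly prefix, while the N-Triples serialization spells out the full URIs - the machine's view.

from rdflib import Graph, Literal, Namespace

MOVIE = Namespace("http://semantic.nodix.net/movie/")
DC = Namespace("http://purl.org/dc/elements/1.1/")

g = Graph()
g.bind("movie", MOVIE)
g.bind("dc", DC)
g.add((MOVIE.Terminator, DC.creator, Literal("James Cameron")))

print(g.serialize(format="turtle"))  # movie:Terminator dc:creator "James Cameron" .
print(g.serialize(format="nt"))      # <http://semantic.nodix.net/movie/Terminator> ... full URIs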

Many people indeed think that. That's because they don't grasp the difference between URIs and URLs, and to be honest, this difference is maybe the worst idea the W3C ever had (that's a hard-to-achieve compliment, considering the introduction of RDF/XML serialisation and XSD). We will return to this difference, but for now, let's see what usually happens.

Because http://semantic.nodix.net/movie/Terminator leads nowhere, and I'm far too lazy to make a website for the Terminator just for this example, we will take another URI for the movie. Jumping to IMDb, we quickly find the appropriate one, and then we can reformulate our statement:

http://www.imdb.com/title/tt0088247/ http://purl.org/dc/elements/1.1/creator "James Cameron".

Great! Our subject is a valid URI, clicking on http://www.imdb.com/title/tt0088247/ (or pasting it to a browser) will tell you more about the subject, and we have a valid RDF statement. Everything is fine again...

...until next time, where we will discuss the minor problems of our solution.

What's in a name - Part 3

Last time we merrily published our first statement for the Semantic Web:

http://www.imdb.com/title/tt0088247/ http://purl.org/dc/elements/1.1/creator "James Cameron".

A fellow Semantic Web author didn't like the number-encoded IMDb URI, but found a much more compelling one and published the following statement:

http://en.wikipedia.org/wiki/The_Terminator http://purl.org/dc/elements/1.1/date "1984-10-26".

A third one sees those and, in order to foster data integration, helpfully offers the following statement:

http://www.imdb.com/title/tt0088247/ owl:sameAs http://en.wikipedia.org/wiki/The_Terminator.

And now they live merrily ever after. Or do you hear the thunder of doom rolling?

The problem is that the URIs above already denote something, namely the IMDb website about the Terminator and the Wikipedia article on the Terminator. They did not denote the movie itself, but that's how they're used in our examples. Statement #3 above actually says that the two websites are the same. The first one says that "James Cameron" created the IMDb website on the Terminator (they wish), and the second one says that the Wikipedia article was created in 1984, which is wrong (July 23, 2001 would be the correct date). We have a classic case of URI collision.
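Here is a tiny sketch (plain Python, no libraries) of why statement #3 wreaks havoc: if an application naively copies every statement across an owl:sameAs link, the claims about the two web pages get mixed up.

IMDB = "http://www.imdb.com/title/tt0088247/"
WIKI = "http://en.wikipedia.org/wiki/The_Terminator"

triples = {
    (IMDB, "dc:creator", "James Cameron"),
    (WIKI, "dc:date", "1984-10-26"),
    (IMDB, "owl:sameAs", WIKI),
}

# Naive owl:sameAs handling: whatever is said about one URI is also said about the other.
merged = set(triples)
for s, p, o in triples:
    if p == "owl:sameAs":
        for s2, p2, o2 in triples:
            if p2 == "owl:sameAs":
                continue
            if s2 == s:
                merged.add((o, p2, o2))
            if s2 == o:
                merged.add((s, p2, o2))

for t in sorted(merged):
    print(t)
# Now the Wikipedia article "has creator" James Cameron and the IMDb page
# "was created" in 1984 - both statements are about web pages, and both are wrong.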

This happens all the time. People working professionally on this do this too:

_person foaf:interest http://dmoz.org/Computers/Security/.

I'd bet that _person (remaining anonymous here) does not have such a heavy interest in the website http://dmoz.org/Computers/Security/ itself, but rather in the topic the website is about.

_person foaf:interest _security.
http://dmoz.org/Computers/Security/ dc:subject _security.

Instead of letting _security be anonymous, we'd rather give it a real URI. This way we can reference it later.

_person foaf:interest http://semantic.nodix.net/topic/security.
http://dmoz.org/Computers/Security/ dc:subject http://semantic.nodix.net/topic/security.

But, oh pain - now we're at exactly the same spot we were in the last part: we have a URI that does not dereference to a website. (By the way, I do know that the definition of foaf:interest actually says that the subject is interested in the topic of the object, and not the object itself - but that's not my point here.)
Thinking about it for a moment, we must conclude that it is actually impossible to achieve both goals: either the URIs identify a resource retrievable over the web, and are thus unsuitable as URIs for entities outside the web (like persons, chairs and such) because of URI collision, or they don't - and will then lead to 404-land.

Isn't there any solution? (Drums) Stay tuned for the next exciting installment of this series, introducing not one, not two, not three, but four solutions to this problem!

What's in a name - Part 4

I promised you four solutions to the problem of dubbing things with appropriate URIs. So, without further ado, let's go.

The first one you've seen already. It's using anonymous nodes.

_person foaf:interest _security.
http://dmoz.org/Computers/Security/ dc:subject _security.

But here we get the problem that we can't reference _security from outside, thus losing a lot of the possibilities inherent in the Semantic Web, because this way you cannot say that someone else is interested in the same topic as _person above. Even if you say, in another RDF file,

_person2 foaf:interest _security.
http://dmoz.org/Computers/Security/ dc:subject _security.

_security does not actually have to be the same as above. Who says websites have only one subject? The coincidental equality of the blank node label _security carries as much semantics as the equality of two variables named x in a C program and a Python program.
So this solution, although possible, has too many shortcomings. Let's move on.

The second solution is hardly available to the majority of us puny mortals: introducing a new URI scheme. Let's return to our very first example, where we wanted to say that the Politeia was written by Plato.

urn:isbn:0192833707 dc:creator "Plato".

Great! No problems here. Sure, your web browser can't (yet) resolve urn:isbn:0192833707, but there is no ambiguity: we know exactly what we are talking about.

Do we? Incidentally, urn:isbn:0465069347 also denotes the Politeia. No, not in another language (those would be another handful of ISBN numbers), just a different version (the text is public domain). Now, does the following statement hold?

urn:isbn:0192833707 owl:sameAs urn:isbn:0465069347.

Most definitely not. They have different translators. They have different publishers. These are different books. But it's the same - what? What is the same? It's not the same text. It's not the same book. They may share the source text they were translated from. But how do we express this correctly and still usefully?

The urn:isbn: scheme is very useful for one very special kind of entity - published books, even different versions of published books.
The problem with this solution is that you would need tons of schemes. Imagine the number of committees! This would, no, this should never happen. We definitely need an easier solution, although this one certainly does work for very special domains.

Let's move on to the third solution: the magic word is fragment identifier - #. Instead of saying:

http://semantic.nodix.net/Politeia dc:creator http://semantic.nodix.net/Plato.

and thus getting 404s en masse, I just say:

http://semantic.nodix.net/#Politeia dc:creator http://semantic.nodix.net/#Plato.

See? No 404. You get to the homepage of this blog by clicking there. And it's valid RDF as well. So, isn't it just perfect? Everything we wished for?

Not totally, I fear. If I click on http://semantic.nodix.net/#Plato, I actually expect to read something about Plato, not to see a blog about the Semantic Web. So this would somehow disappoint me. Better than a 404, still...
The other point is bandwidth. There can be RDF files with thousands of references. Following every single one would lead to considerable bandwidth waste - for naught, as there is no further information about the subject on the other side. Maybe using http://semantic.nodix.net/person#Plato would solve both problems, with http://semantic.nodix.net/person being a website saying something like "This page is used to reserve conceptual space for persons. To understand this, you must understand the magic of URIs and the Semantic Web. Now, go back wherever you came from and have a nice day." Not too much webspace and bandwidth would be used for this tiny HTML page.

You should be careful, though, not to have a real fragment identifier "Plato" in the page, or you would actually dereference to that element. URI collision again. You don't want Plato to become half philosopher, half XML element, do you?

We will return to fragment identifiers in the last part of this six-part series. And now let's take a quick look at the fourth solution - we will discuss it more thoroughly next time.

Use a fresh URI whenever you need one, and don't worry about it giving a 404.

What's in a name - Part 5

After calling Plato an XML element, making movies out of websites and having several accidents with careless URIs, it seems we return to the very beginning of this series.

http://semantic.nodix.net/document/Politeia dc:creator "Plato".

Whereby http://semantic.nodix.net/document/Politeia explicitly does not resolve but returns a 404, resource not found. Let's remember why we didn't like it: because humans, upon seeing this, have the urge to click on it in order to get more information about it. A pretty good argument, but every solution we tried brought us more or less trouble. We didn't get happy with any of them.

But how can I dismiss such an argument? Don't I risk losing focus by saying "don't care about humans going nowhere"? No, I really don't think so, for two reasons - one meant for humans and one for the machines.

First the humans (humans always should go first - remember this, Ms and Mr PhD student): humans actually never see this URI (or at least they should not, except when debugging). URIs that will grace the GUI should have an rdfs:label, which provides the label human users will see when working with this resource. Let's be honest: only geeks like us think that http://semantic.nodix.net/document/Politeia is a pretty obvious and easy name for a resource. Normal humans would probably prefer "Politeia", or even "The Republic" (which is the usual name in English-speaking countries). Or they would like to be able to define their own name.

As they don't see the URI, they never actually feel the urge to click on it, or to copy and paste it into the next browser window. Naming it http://semantic.nodix.net/document/Politeia instead of http://semantic.nodix.net/concept/1383b_xc is just for the sake of readability of the source RDF files; you should not actually derive any information from the URI (that's what the standard says). The computer won't either.
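
The human-readable name goes into an rdfs:label instead - a sketch, with label strings I simply made up for this example:

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# The human-visible name lives in rdfs:label, not in the URI.
<http://semantic.nodix.net/document/Politeia>
    rdfs:label "Politeia", "The Republic"@en .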

The second point is that an RDF application shouldn't look up URIs either. It's just wrong. URIs are just names; it is important that they remain unique, but they are not there for looking up in a browser. That's what URLs are for. It's a shame they look the same. Mozilla realised the distinction when they gave their XUL language the namespace http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul. Application developers should realise this too. rdfs:seeAlso and rdfs:isDefinedBy give explicit links applications may follow to get more information about a resource, and using owl:imports actually forces this behaviour - but the name does not.
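
For example (a sketch - the two target documents are made up for this illustration; only the properties are the standard ones):

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Explicit pointers an application may follow for more data;
# the target URLs are invented for this example.
<http://semantic.nodix.net/document/Politeia>
    rdfs:seeAlso     <http://semantic.nodix.net/data/politeia.rdf> ;
    rdfs:isDefinedBy <http://semantic.nodix.net/data/ontology.rdf> .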

Getting information out of names is like making fun of names. It's mean. Remember the in-kids in primary school making fun of the out-kids because of their names? You know you're better than that (and, being a geek, you probably were an out-kid, so mere compassion and fond memories should hold you back too).

Just to repeat it explicitly: if a URI gives back a 404 when you put it in a browser navigation bar, that's OK. It was supposed to identify resources, not to locate them.

Now you know the difference between URIs and URLs, and you know why avoiding URI collision is important and how to avoid it. We'll wrap it all up in the final instalment of the series (tomorrow, I sincerely hope) and give some practical hints, too.

By the way, right after the series I will talk about content negotiation, which was mentioned in the comments and in e-mails.

Uh, and just another thing: the wary reader (and every reader should be wary) may also have noticed that

Philosophy:Politeia dc:creator "Plato".

is total nonsense: it says that there is a resource (identified by the QName Philosophy:Politeia) that was created by "Plato". Rest assured that this is wrong - no, not because Socrates should be credited as the creator of the Politeia (that is another discussion entirely), but because the statement claims that the string "Plato" created it - not a person known by this name (who would be a resource that should have a URI). But this mistake is probably the most frequent one in the world of the Semantic Web - a mistake nevertheless.

It's OK if you make it. Most applications will cope with it (and some are actually not able to cope with the correct way). But it would not be OK if you didn't know that you were making a mistake.
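
For completeness, a corrected version might look like this - a sketch; the person URI is my own invention for this example:

@prefix dc:   <http://purl.org/dc/elements/1.1/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# The creator is now a resource with its own URI; the string "Plato"
# becomes merely that resource's label.
<http://semantic.nodix.net/document/Politeia>
    dc:creator <http://semantic.nodix.net/person/Plato> .

<http://semantic.nodix.net/person/Plato>
    rdfs:label "Plato" .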

What's in a name - Part 6

In this series we learned how to make URIs for entities. I know there's a big discussion flaring up every few weeks or so about whether we should use fragment identifiers or not. For me, this question is pretty much settled. Using a fragment identifier has the advantage of giving you the ability to provide a human-readable page for those few lost souls who look up the URI, so maybe it's a tad nicer than using no fragment identifier and returning 404s. Not using fragids has the advantage of probably reducing bandwidth - but this discussion should be more or less academic, because looking up URIs, as we have seen, should not happen.

There is some talk about different representations, negotiating media types, returning RDF in one case and XHTML in the other, but to be honest, I think that's far too complicated. And you would need to use another web server and extensions to HTTP to make this real, which doesn't really help the advent of the Semantic Web. Look at Nokia's URIQA project for more information.

Keep these rules in mind, and everything should be fine:

  • be careful to use unused URIs when you reference a new entity. Take one from a URI space you have control of, so that URI collisions won't appear
  • don't put a website under the URI you used to name an entity. That would lead to URI collision
  • try to make nice-looking URIs, but don't try too hard. They are supposed to be hidden by the application anyway
  • provide rdfs:label and rdfs:seeAlso instead. This solves everything you would want to try to solve with URI naming, but in a standards-compliant way
  • give your resources URIs. Please. So that others can reference them more easily.

I should emphasise the last one more. Especially the RDF/XML syntax easily leads to anonymous nodes, which are a pain in the ass because they are hard or impossible to address. The first way to get them is rdf:nodeID: it doesn't give your node an ID that's visible to the outside world, it is just a local name. Don't use it, please.
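
Just to show what rdf:nodeID looks like (a sketch; namespace declarations are omitted here, just as in the snippet further below):

<!-- rdf:nodeID="plato" only names this node within this one document;
     nobody outside can refer to it -->
<foaf:Person rdf:nodeID="plato">
  <foaf:name>Plato</foaf:name>
</foaf:Person>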

The second way to end up with anonymous nodes is nesting descriptions like this:

<!-- the outer Person has a URI (via rdf:about), the nested one does not -->
<foaf:Person rdf:about="me">
  <foaf:knows>
    <foaf:Person>
      <foaf:name>J. Random User</foaf:name>
    </foaf:Person>
  </foaf:knows>
</foaf:Person>

Actually, the person known to "me" is an anonymous one. You can't refer to her. Again, try to avoid that. If you can, look up the URI the person gave to herself in her own FOAF file, or give her a name in your own URI space. Don't be afraid, you won't run out of it.
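
For instance, the same snippet with the known person named by a URI could look like this - a sketch; the URI for J. Random User is made up, and in practice you would take the one from her own FOAF file:

<foaf:Person rdf:about="me">
  <foaf:knows>
    <!-- the known person now has a URI (invented here), so she is no
         longer an anonymous node -->
    <foaf:Person rdf:about="http://example.org/people/jrandomuser#me">
      <foaf:name>J. Random User</foaf:name>
    </foaf:Person>
  </foaf:knows>
</foaf:Person>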

Another very interesting approach is to use published subjects. I will return to this in another blog post, I promise, but until then: never forget, there is owl:sameAs to make two URIs point to the same thing, so don't mind too much if you give something two names.

Well, that's it. I hope you enjoyed the series and that you learned a bit from it. I'm looking forward to your comments and your questions.