The idea behind the semantic web is to link content or data rather than to link syntactic strings. An example illustrates this best. Typing ‘Spaniard logicians’ into Google turns up pages on centuries-old dead guys. Why? There are far more historical pages on the web using the syntactic string ‘Spaniard logician’ than there are (if any at all) that list current logicians from Spain. So, if you are looking for a Spanish logician, or looking for how many logicians are from Spain, typing either ‘Spanish logician’ or ‘How many logicians are from Spain’ will not return pages that answer your question. The reason is that search engines don’t understand the content of your question. The ambitious aim of the W3C is to tackle this problem by developing tools to “understand” the content of web pages, and to “understand” the meaning of search queries.
Google is extraordinarily clever at exploiting the structural features of natural language syntax. It can recognize ‘Wh-’ questions, for instance, and deliver answers to questions like ‘What is the GDP of Spain?’ But there are limits to this method, as evidenced by searching on ‘What is the number of Spanish logicians?’, which is why there is interest in developing W3C technologies. This is what the Times article is about.
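The contrast is easier to see with a toy sketch. The Python snippet below uses the rdflib library; the namespace, the individuals, and the facts are made up for illustration, not taken from any real dataset. It stores the relevant content as linked data (triples) and answers the counting question directly with a SPARQL query, no matter which strings (‘Spanish’, ‘Spaniard’) any page happens to use.

```python
# A minimal sketch of the linked-data idea, assuming Python with rdflib installed.
# The namespace, people, and facts below are invented purely for illustration.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")

g = Graph()
# Publish machine-readable facts (triples) rather than prose pages to be string-matched.
g.add((EX.maria, RDF.type, EX.Logician))
g.add((EX.maria, EX.nationality, EX.Spain))
g.add((EX.jordi, RDF.type, EX.Logician))
g.add((EX.jordi, EX.nationality, EX.Spain))
g.add((EX.alice, RDF.type, EX.Logician))
g.add((EX.alice, EX.nationality, EX.England))

# Ask about the content itself: "How many logicians are from Spain?"
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT (COUNT(?person) AS ?n) WHERE {
        ?person a ex:Logician ;
                ex:nationality ex:Spain .
    }
""")
for row in results:
    print(row.n)  # prints 2, independent of how any page phrases the fact
```

The point of the sketch is only that once the relationships are represented explicitly, the counting question has a determinate answer that no amount of cleverness about keyword strings is needed to recover.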
What’s this got to do with epistemology? A lot. The thing that’s being worked on here is content: how to find it, how to recognize it when you do find it, how to combine it, what relationships it bears to other things you’ve found, how to recognize those relationships, how learning one thing changes other things you have discovered. I see no fundamental difference between answers to these questions and the answers to their epistemic corollaries.
I doubt that the W3C project will be a complete success, and I suspect my skepticism is shared by most readers of this blog. However, there is little doubt that there will be many partial successes. And I am certain that much will be learned about epistemic model building from the successes and failures to come.