
Wikipedia 3.0: The End of Google?

In Artificial Intelligence, crowdsourcing, description logic, Inference Engine, Ontology, OWL, RDF, Search For Meaning, Semantic, Semantic Search, Semantic Web, Web 3.0, Wikipedia, Wikipedia 3.0 on June 26, 2006 at 5:18 am

Author: Marc Fawzi

Twitter: http://twitter.com/marcfawzi

License: Attribution-NonCommercial-ShareAlike 3.0

Announcements:

Semantic Web Developers:

Feb 5, ‘07: The following external reference concerns the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0):

  1. Description Logic Programs: Combining Logic Programs with Description Logic (note: there are better, simpler ways of achieving the same purpose.)

Click here for more info and a list of related articles…

Foreword (2008, 2009)

Two years after I published this article, it has received over 230,000 hits, and there are now several startups attempting to apply Semantic Web technology to Wikipedia and knowledge wikis in general, including the Wikipedia founder’s own commercial startup as well as a startup that was recently purchased by Microsoft.

Recently, after seeing how Wikipedia’s governance is so flawed, I decided to write about a way to decentralize and democratize Wikipedia.

In August 2009, a little over 3 years after the writing of this unexpectedly wildly popular article, I wrote an update in response to a query by a journalist, titled Wikipedia 3.0: Three Years Later.

Versión española (Spanish version)

Article

(Article was last updated at 10:15am EST, July 3, 2006)

Wikipedia 3.0: The End of Google?

The Semantic Web (or Web 3.0) promises to “organize the world’s information” in a dramatically more logical way than Google can ever achieve with their current engine design. This is especially true from the point of view of machine comprehension as opposed to human comprehension. The Semantic Web requires the use of a declarative ontological language like OWL to produce domain-specific ontologies that machines can use to reason about information and make new conclusions, not simply match keywords.

However, the Semantic Web, which is still in a development phase where researchers are trying to define the best and most usable design models, would require the participation of thousands of knowledgeable people over time to produce those domain-specific ontologies necessary for its functioning.

Machines (or machine-based reasoning, aka AI software or ‘info agents’) would then be able to use those laboriously (but not entirely manually) constructed ontologies to build a view (or formal model) of how the individual terms within the information relate to each other. Those relationships can be thought of as the axioms (assumed starting truths), which, together with the rules governing the inference process, both enable and constrain the interpretation (and well-formed use) of those terms by the info agents, allowing them to reason out new conclusions based on existing information, i.e. to think. In other words, theorems (formal deductive propositions that are provable based on the axioms and the rules of inference) may be generated by the software, thus allowing formal deductive reasoning at the machine level. And given that an ontology, as described here, is a statement of logical theory, two or more independent info agents processing the same domain-specific ontology will be able to collaborate and deduce an answer to a query, without being driven by the same software.
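
As a concrete, deliberately toy illustration of the axioms-plus-inference-rules picture above, here is a minimal forward-chaining sketch in Python. The triples, relation name, and rule are all invented for illustration; they stand in for what a declarative ontology language like OWL expresses formally:

```python
def infer(axioms, rules):
    """Forward-chain: apply every rule to the known facts until no new
    conclusions (theorems) appear, i.e. until a fixpoint is reached."""
    known = set(axioms)
    while True:
        new = set()
        for rule in rules:
            new |= set(rule(known)) - known
        if not new:
            return known
        known |= new

# Axioms: a tiny ontology fragment as (subject, relation, object) triples.
axioms = {
    ("Whale", "subclass_of", "Mammal"),
    ("Mammal", "subclass_of", "Animal"),
}

# One inference rule: the subclass_of relation is transitive.
def transitivity(facts):
    for (a, r1, b) in facts:
        for (c, r2, d) in facts:
            if r1 == r2 == "subclass_of" and b == c:
                yield (a, "subclass_of", d)

theorems = infer(axioms, [transitivity])
# ("Whale", "subclass_of", "Animal") is now a derived theorem: a conclusion
# that was never stated, only deduced from the axioms and the rule.
```

Because the derivation depends only on the ontology and the rules, any engine running the same inputs reaches the same theorems, which is the basis of the claim that independently written info agents can collaborate.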

Thus, and as stated, in the Semantic Web individual machine-based agents (or a collaborating group of agents) will be able to understand and use information by translating concepts and deducing new information rather than just matching keywords.

Once machines can understand and use information, using a standard ontology language, the world will never be the same. It will be possible to have an info agent (or many info agents) among your virtual AI-enhanced workforce, each having access to a different domain-specific comprehension space, and all communicating with each other to build a collective consciousness.

You’ll be able to ask your info agent or agents to find you the nearest restaurant that serves Italian cuisine, even if the restaurant nearest you advertises itself as a Pizza joint as opposed to an Italian restaurant. But that is just a very simple example of the deductive reasoning machines will be able to perform on information they have.
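
The restaurant example can be made concrete with a small Python sketch (all names, categories, and distances below are invented) where the conclusion that a pizza joint serves Italian cuisine falls out of a subclass chain in the ontology rather than any keyword match:

```python
# Invented data: each restaurant advertises one category, and the ontology
# records that a PizzaJoint is a kind of ItalianRestaurant.
subclass_of = {
    "PizzaJoint": "ItalianRestaurant",
    "Trattoria": "ItalianRestaurant",
    "ItalianRestaurant": "Restaurant",
}

def is_a(category, target):
    """Follow the subclass chain to decide whether `category` falls under `target`."""
    while category is not None:
        if category == target:
            return True
        category = subclass_of.get(category)
    return False

# (name, advertised category, distance in miles)
restaurants = [("Mario's", "PizzaJoint", 0.4), ("Chez Luc", "FrenchBistro", 0.2)]

# Nearest restaurant serving Italian cuisine, found by deduction, not keywords:
italian = [r for r in restaurants if is_a(r[1], "ItalianRestaurant")]
nearest = min(italian, key=lambda r: r[2])
# nearest -> ("Mario's", "PizzaJoint", 0.4), even though the word "Italian"
# never appears in Mario's advertised category.
```

A keyword engine searching for “Italian restaurant” would miss Mario’s entirely; the deduction succeeds because the relationship is stated once in the ontology.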

Far more awesome implications can be seen when you consider that every area of human knowledge will be automatically within the comprehension space of your info agents. That is because each info agent can communicate with other info agents who are specialized in different domains of knowledge to produce a collective consciousness (using the Borg metaphor) that encompasses all human knowledge. The collective “mind” of those agents-as-the-Borg will be the Ultimate Answer Machine, easily displacing Google from this position, which it does not truly fulfill.

The problem with the Semantic Web, besides that researchers are still debating which design and implementation of the ontology language model (and associated technologies) is the best and most usable, is that it would take thousands or tens of thousands of knowledgeable people many years to boil down human knowledge to domain specific ontologies.

However, if we were at some point to take the Wikipedia community and give them the right tools and standards to work with (whether existing or to be developed in the future), which would make it possible for reasonably skilled individuals to help reduce human knowledge to domain-specific ontologies, then that time can be shortened to just a few years, and possibly to as little as two years.

The emergence of a Wikipedia 3.0 (as in Web 3.0, aka Semantic Web) that is built on the Semantic Web model will herald the end of Google as the Ultimate Answer Machine. It will be replaced with “WikiMind” which will not be a mere search engine like Google is but a true Global Brain: a powerful pan-domain inference engine, with a vast set of ontologies (a la Wikipedia 3.0) covering all domains of human knowledge, that can reason and deduce answers instead of just throwing raw information at you using the outdated concept of a search engine.

Notes

After writing the original post I found out that a modified version of the Wikipedia application, known as “Semantic” MediaWiki has already been used to implement ontologies. The name that they’ve chosen is Ontoworld. I think WikiMind would have been a cooler name, but I like ontoworld, too, as in “it descended onto the world,” since that may be seen as a reference to the global mind a Semantic-Web-enabled version of Wikipedia could lead to.

Google’s search engine technology, which provides almost all of their revenue, could be made obsolete in the near future. That is, unless they gain access to Ontoworld or some such pan-domain semantic knowledge repository, tap into its ontologies, and add inference capability to Google search, thereby building formal deductive intelligence into Google.

But so can Ask.com and MSN and Yahoo…

I would really love to see more competition in this arena, not to see Google or any one company establish a huge lead over others.

The question, to rephrase in Churchillian terms, is whether the combination of the Semantic Web and Wikipedia signals the beginning of the end for Google or the end of the beginning. Obviously, with tens of billions of dollars at stake in investors’ money, I would think that it is the latter. No one wants to see Google fail. There’s too much vested interest. However, I do want to see somebody outmaneuver them (which can be done, in my opinion).

Clarification

Please note that Ontoworld, which currently implements the ontologies, is based on the “Wikipedia” application (also known as MediaWiki), but it is not the same as Wikipedia.org.

Likewise, I expect Wikipedia.org will use their volunteer workforce to reduce the sum of human knowledge that has been entered into their database to domain-specific ontologies for the Semantic Web (aka Web 3.0). Hence, “Wikipedia 3.0.”

Response to Readers’ Comments

The argument I’ve made here is that Wikipedia has the volunteer resources to produce the needed Semantic Web ontologies for the domains of knowledge that it currently covers, while Google does not have those volunteer resources, which will make it reliant on Wikipedia.

Those ontologies together with all the information on the Web, can be accessed by Google and others but Wikipedia will be in charge of the ontologies for the large set of knowledge domains they currently cover, and that is where I see the power shift.

Google and other companies do not have the resources in manpower (i.e. the thousands of volunteers Wikipedia has) to help create those ontologies for the large set of knowledge domains that Wikipedia covers. Wikipedia does, and is positioned to do that better and more effectively than anyone else. It’s hard to see how Google would be able to create the ontologies for all domains of human knowledge (which are continuously growing in size and number) given how much work that would require. Wikipedia can cover more ground faster with their massive, dedicated force of knowledgeable volunteers.

I believe that the party that will control the creation of the ontologies (i.e. Wikipedia) for the largest number of domains of human knowledge, and not the organization that simply accesses those ontologies (i.e. Google), will have a competitive advantage.

There are many knowledge domains that Wikipedia does not cover. Google will have the edge there, but only if the people and organizations that produce the information also produce the ontologies on their own, so that Google can access them from its future Semantic Web engine. My belief is that this will happen, but very slowly, and that Wikipedia can have the ontologies done for all the domains of knowledge that it currently covers much faster, and then they would have leverage by virtue of being in charge of those ontologies (aka the basic layer for AI enablement).

It still remains unclear, of course, whether the combination of Wikipedia and the Semantic Web heralds the beginning of the end for Google or the end of the beginning. As I said in the original part of the post, I believe that it is the latter, and the question I pose in the title of this post, in this context, is no more than rhetorical. However, I could be wrong in my judgment and Google could fall behind Wikipedia as the world’s ultimate answer machine.

After all, Wikipedia makes “us” count. Google doesn’t. Wikipedia derives its power from “us.” Google derives its power from its technology and inflated stock price. Who would you count on to change the world?

Response to Basic Questions Raised by the Readers

Reader divotdave asked a few questions which I thought were very basic in nature (i.e. important). I believe more people will be pondering the same issues, so I’m including them here with my replies.

Question:
How does it distinguish between good information and bad? How does it determine which parts of the sum of human knowledge to accept and which to reject?

Reply:
It wouldn’t have to distinguish between good and bad information (not to be confused with well-formed vs. badly formed) if it were to use a reliable source of information (with associated, reliable ontologies). That is, if the information or knowledge being sought can be derived from Wikipedia 3.0, then it assumes that the information is reliable.

However, when it comes to connecting the dots to return information or deduce answers from the sea of information that lies beyond Wikipedia, your question becomes very relevant. How would it distinguish good information from bad information so that it can produce good knowledge (aka comprehended information, aka new information produced through deductive reasoning based on existing information)?

Question:
Who, or what as the case may be, will determine what information is irrelevant to me as the inquiring end user?

Reply:
That is a good question, and one which would have to be answered by the researchers working on AI engines for Web 3.0.

There will be assumptions made as to what you are inquiring about. Just as when I saw your question I had to make an assumption about what you really meant to ask me, AI engines would have to make an assumption, pretty much based on the same cognitive process humans use, which is the topic of a separate post, but which has been covered by many AI researchers.

Question:
Is this to say that ultimately some over-arching standard will emerge that all humanity will be forced (by lack of alternative information) to conform to?

Reply:
There is no need for one standard, except when it comes to the language the ontologies are written in (e.g. OWL, OWL-DL, OWL Full, etc.) Semantic Web researchers are trying to determine the best and most usable choice, taking into consideration human and machine performance in constructing and (exclusively in the latter case) interpreting those ontologies.

Two or more info agents working with the same domain-specific ontology but having different software (different AI engines) can collaborate with each other.

The only standard required is that of the ontology language and associated production tools.

Addendum

On AI and Natural Language Processing

I believe that the first generation of AI that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines that will NOT attempt to perform natural language processing, where current approaches still face too many serious challenges. However, they will still have the formal deductive reasoning capabilities described earlier in this article, and users would interact with these systems through some query language.
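
Such a query language might look like formal triple patterns rather than English. Here is a minimal Python sketch; the pattern syntax and the facts are invented for illustration (loosely inspired by SPARQL-style patterns), not any particular system’s interface:

```python
# Invented facts; terms beginning with '?' in a pattern are variables,
# everything else must match exactly.
facts = {
    ("Mario's", "serves", "ItalianCuisine"),
    ("Mario's", "located_in", "Boston"),
    ("Chez Luc", "serves", "FrenchCuisine"),
}

def query(pattern, facts):
    """Return one variable binding per fact matching the (s, p, o) pattern."""
    results = []
    for fact in facts:
        if all(term.startswith("?") or term == value
               for term, value in zip(pattern, fact)):
            results.append({term: value for term, value in zip(pattern, fact)
                            if term.startswith("?")})
    return results

# "Which places serve Italian cuisine?" posed as a formal query, not English:
answers = query(("?place", "serves", "ItalianCuisine"), facts)
# answers == [{"?place": "Mario's"}]
```

The point is that no natural language processing is required: the user (or another agent) supplies an unambiguous pattern, and the engine does the matching and, in a fuller system, the inference.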

On the Debate about the Nature and Definition of AI

The embedding of AI into cyberspace will be done at first with relatively simple inference engines (that use algorithms and heuristics) that work collaboratively in P2P fashion and use standardized ontologies. The massively parallel interactions between the hundreds of millions of AI Agents that will run within the millions of P2P AI Engines on users’ PCs will give rise to the very complex behavior that is the future global brain.
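
The collaboration pattern described above can be sketched in a deliberately simplified way in Python. Every agent name, relation, and the two-hop query scheme below is illustrative, not an actual P2P protocol:

```python
class Agent:
    """An info agent that knows facts (triples) for a single domain."""
    def __init__(self, domain, facts):
        self.domain = domain
        self.facts = set(facts)

    def answer(self, subject, relation):
        """Objects this agent knows for the pattern (subject, relation, ?)."""
        return {o for (s, r, o) in self.facts if s == subject and r == relation}

# Two agents, each specialized in one domain of knowledge.
geography = Agent("geography", {("Rome", "capital_of", "Italy")})
cuisine = Agent("cuisine", {("Italy", "known_for", "ItalianCuisine")})

def collective_answer(agents, subject, first_hop, second_hop):
    """Answer a two-hop question that spans domains by pooling agents' replies."""
    middle = set().union(*(a.answer(subject, first_hop) for a in agents))
    return set().union(*(a.answer(m, second_hop) for a in agents for m in middle))

# Cross-domain question: what is the country Rome is the capital of known for?
# Neither agent can answer alone; together they can.
result = collective_answer([geography, cuisine], "Rome", "capital_of", "known_for")
# result == {"ItalianCuisine"}
```

The emergent behavior the article describes would come from many such agents exchanging partial deductions over shared ontologies, not from any single engine.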


Update on how the Wikipedia 3.0 vision is spreading:


Update on how Google tried to co-opt the Wikipedia 3.0 vision with Google Knol:



Web 3D Fans:

Here is the original Web 3D + Semantic Web + AI article:

Web 3D + Semantic Web + AI *

The above-mentioned Web 3D + Semantic Web + AI vision, which preceded the Wikipedia 3.0 vision, received much less attention because it was not presented in a controversial manner. This fact was noted as the biggest flaw of the social bookmarking site digg, which was used to promote this article.

Web 3.0 Developers:


Jan 7, ‘07: The following Evolving Trends post discusses the current state of semantic search engines and ways to improve the paradigm:

  1. Designing a Better Web 3.0 Search Engine

June 27, ’06: Semantic MediaWiki project, enabling the insertion of semantic annotations (or metadata) into the content:

  1. http://semantic-mediawiki.org/wiki/Semantic_MediaWiki (see note on Wikia below)
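
Semantic MediaWiki’s annotations take the form [[property::value]] embedded in ordinary wikitext. A rough Python sketch of how such annotations could be extracted as machine-usable triples (the regex, page title, and figures are illustrative, not Semantic MediaWiki’s actual parser):

```python
import re

# Matches [[property::value]]; the property may not contain ':' or ']',
# and the value may not contain ']'.
ANNOTATION = re.compile(r"\[\[([^:\]]+)::([^\]]+)\]\]")

def extract_triples(page_title, wikitext):
    """Turn inline annotations into (page, property, value) triples."""
    return [(page_title, prop.strip(), value.strip())
            for prop, value in ANNOTATION.findall(wikitext)]

text = "Germany's capital is [[capital::Berlin]] with [[population::82,000,000]]."
triples = extract_triples("Germany", text)
# [('Germany', 'capital', 'Berlin'), ('Germany', 'population', '82,000,000')]
```

This is the sense in which wiki editors, rather than ontology specialists, could contribute machine-readable statements as a side effect of ordinary editing.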

Wikipedia’s Founder and Web 3.0

The hosting of the Semantic MediaWiki, i.e. the Web 3.0 version of Wikipedia’s platform, has been taken over by Wikia, a commercial venture founded by Wikipedia’s own founder Jimmy Wales. This opens up a huge conflict of interest: Wikipedia’s founder is running a commercial venture that takes creative improvements to Wikipedia’s platform, e.g. Semantic MediaWiki, and transfers those improvements to Wikia, Jimmy Wales’ own for-profit venture.

2010 Update:

Jimmy Wales (Wikipedia’s founder) has quit the Wikia venture and is now trying to make Web 3.0 happen with Wikipedia itself, as proposed in the original Wikipedia 3.0: The End of Google? article and followed up on in Wikipedia 3.0: Three Years Later.

P.S.

This post provides the history behind use of the term Web 3.0 in the context of the Semantic Web and AI.

This post explains one of the interesting reasons behind the rapid spread of this article (which points out the basic flaw of the Wisdom of Crowds concept).


Tags:

Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Semanticweb, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI

  1. […] La verdad es que cuando leí el título de este artículo me llamó la atención aunque fui un poco escéptico y decidí echarle un vistazo. No tengo mucha idea de Inglés pero lo que pude entender y los enlaces a los que me llevó me ayudó a entender la idea del proyecto. […]

  2. English Translation of the above linked comment/trackback (courtesy of Google):

    The truth is that when I read the title of this article it caught my attention, although I was a little skeptical, and I decided to have a look at it. I don’t know much English, but what I could understand, and the links it took me to, helped me understand the idea of the project.

  3. […] I wouldn’t be surprised if the strategy would shift from talking to a call center agent to searching an online FAQ created by a “semantic web,” possibly even utilizing voice-recognition software coupled with text-to-voice technologies. It may sound like science fiction, but so was video-conferencing a couple of years ago. […]

  4. Whatever happens, you can bet Google could easily buy the technology in Wikipedia to extend its reach beyond its current search-engine-ishness.

    Jim Sym

  5. Hmmmmmmmmm.

    I think I will think on this for a few days.

  6. Jim,

    It’s not the technology. It’s the thousands of knowledgeable people that have to work for years to boil down human knowledge to a set of ontologies.

    I guess OntoWorld would open up to companies like Google and let them plug their inference engines into it.

    Marc

  7. […] Will we ever ask Google a question again?read more | digg story […]

  8. My first thought on this was that “the semantic web needs robots” (in order to be created) and that I’m not sure if the AI described is ready yet. We have companies like Semantica which enable us to create small-scale semantic webs and networks and knowledge-management platforms, but it still requires a great deal of manual labor to input the ontological terms properly. Ontoworld would do a lot of that, yes – but tags are still tags. You can manipulate them and draw patterns, to a point. Machines still need to process it effectively, efficiently, and then communicate what they have made to us, the humans. Are we there yet?

  9. […] Evolving Trends » Wikipedia’s OntoWorld: The End of Google?   […]

  10. Google will figure out a way to start their own form of a similar system. They may have already.. who knows.. maybe it’s in the testing phase, and when it’s ready, it may simply be turned on like a light switch… ?

  11. Sam,

    I’ve revised the premise of my argument to clarify that the right tools and standards have to be ready first, but the Ontoworld project is already in progress… Technology evolves based on our needs, so we have to take those early awkward steps in order to get there.


    “However, if we were at some point to take the Wikipedia community and give them the right tools and standards (whether existing or to be developed in the future) to work with, which would make it possible for reasonably skilled individuals to help reduce human knowledge to domain-specific ontologies, then that time can be shortened to just a few years, and possibly to as little as two years.”

    Marc

  12. Eugene,

    Wikipedia has a cult of users who number in the thousands, who are knowledgeable in their own domains, and who have proved they can do decent-quality work. They would be needed to create the ontologies. It’s no small job. Google would have to go out and hire the world? Wikipedia has the educated, knowledgeable resources needed for the job, and all they need is better, more user-friendly tools (automation, IDEs, etc.) and more usable standards.

    It’s not there yet..

    Again, it’s about the workforce not the technology. Google just doesn’t have enough people to do it.

    Marc

  13. I can’t read an article like this without remembering Clay Shirky’s article on “The Semantic Web, Syllogism, and Worldview” [1]. I remain a skeptic on the Semantic Web, just as I remain a skeptic on AI. I’ll believe it when I see it.

    [1] http://www.shirky.com/writings/semantic_syllogism.html

  14. I know not with what weapons Web 3.0 will be fought, but Web 4.0 will be fought with sticks and stones.

  15. […] (Translated from Portuguese:) Uhhh, read this article: https://evolvingtrends.wordpress.com/2006/06/26/wikipedia-30-the-end-of-google/ At the beginning of this year Newsweek ran about 4 pages on the semantic web.. it’s the most revolutionary thing since the monitor! […]

  16. this will happen. except it will happen by google doing it. that’s the future of google.. the ability to ask it a question in plain english and get back an intelligible answer. i hate to break it, but this won’t kill google. it will seal the deal for google as king.

  17. Sounds like HAL might come back from the dead!

  18. The AI community (no dis) once felt this problem was easy to solve. Mere ontology is not enough, nor is a large workforce. When it comes to natural language, disambiguation is a major problem, and that requires a patterned database. I believe we could get off to a good start, but without some new concepts – or the location of some old, neglected ones – it’ll never be satisfying to use. I don’t think Google’s worried; they may even kick in a few megabucks.

  19. Tony,

    I was not saying anything about natural language queries, though it may have looked like it since I did not specify the query mechanism, for the sake of brevity and keeping it within grasp.

    No need to go into natural language.

    Ontologies + Inference Engines + Query Logic = Ultimate Answer Machine (for the next 5-8 years.. after that the definition of ultimate may include natural language.)

    Marc

  20. Folks,

    Clarification:

    It’s not Wikipedia itself that is operating Ontoworld. They are simply using the Wikipedia application, also known as MediaWiki. But I envision “Wikipedia 3.0” to follow from Ontoworld.

    Marc

  21. As good as Google is, something better will come. All dynasties must come to an end.

  22. I’m a huge fan of both Wikipedia and Google.
    I’m just really interested to see how this all develops!

    Great article

  23. Sounds like a fantastic concept but I can see, as you say, “There’s too much vested interest” in the Google-type search engine. The idea of such massive change is too much to come in smoothly; it’s like moving from fossil fuels to solar. Though it would be for our collective best, most of us who have the money (Western world) would be somewhat inconvenienced and the big players would be greatly inconvenienced. So who will invest so massively if the return will be years away? How do you pay several thousand people if there is no foreseeable end in sight? You would need the support of a government or something.

  24. Interesting theory. I have found that when I’m looking for facts, the wikipedia article is in the top search results. Google gets me there quickly. Not sure if wikipedia could ever kill Google entirely, though.

  25. If anything, this will only enable Google to do their job more efficiently. It may slightly alter the way they accomplish the end goal, but it will help them much more than hinder. However, several good points are made here. My personal opinion is that click fraud and similar problems pose much more of a threat to Google’s revenue model than a better organization of the world’s largest form of media.

  26. Google has been the reference for the last few years. However, the development of Web 2.0 technologies has somewhat shifted the focus away from Google.

    Wikipedia and its technologies are definitely a force to be reckoned with in web tech. I find myself using Wikipedia first for specific knowledge based queries, validating them through other sources. When I explore the blogosphere, I don’t use Google at all.

  27. Interesting article!

  28. This article has been noted by the Antiwikipedia (http://www.antiwikipedia.com). We will now integrate ontologies into our wisdombase.

  29. Alright! Good for you, Antiwikipedia!

    :)

    Marc

  30. Each human being is unique. Each of us has a unique genetic makeup, unique values, and unique circumstances. There is no universally applicable algorithm that applies to something as individualized as the pursuit of enlightenment. I cannot imagine such a highly structured project that does not disenfranchise the majority of people in one way or another; the more pervasive the attempted structuring, the more universal and profound the disenfranchisement will surely become. Eventually we will all be fighting with each other, dining on dog food, and living in mud huts, while we pound away on our keyboards, or whatever.

  31. Ian:

    Thank you for bringing this forward. That’s encompassed by what I was implying regarding giving the ontology makers the right tools.

    Update:

    However, when I said thousands of people are needed to produce the ontologies, I did not mean to say that they would produce them manually. Yet my assessment is that, given the vastness of human knowledge, which for the most part exists in plain form, we would need thousands of people working over time, with automation tools and whatever tools become available (now or in the future), to produce the ontologies (neither fully manually nor with persistent manual intervention). The tools should make the job faster and more realistic, but it would still take a lot of time and many people.

    Think about how long it took to build Wikipedia’s content: many years and thousands of people. How could the conversion process from Wikipedia’s current format to ontologies (even with next year’s tools) take much less than a few years and thousands of people? (I said two years, optimistically.) The conversion from Wikipedia to formal (computationally complete and decidable) ontologies cannot be entirely automated, at least not yet.

    I’ll look further into it and try to get a better estimate for the cost in time and labor, but has there been any final standard? Are we going to be using OWL, OWL-DL, or OWL Full, or somewhere in between OWL and OWL-DL?

    Marc

  32. Nicely written but a few gaps I take issue with: First, the concept of the Semantic Web predates the concept of Web 2.0 so it’s a little disingenuous to call it Web 3.0. It’s been struggling for many years while making very little progress that I’ve seen. The Wikipedia analogy is an interesting one. Wikipedia achieves some very significant things through the tireless efforts of thousands. A lot of successes with Wikipedia and some very significant shortcomings. And that’s one website. By extension, the Semantic Web requires the participation of practically everybody to work.

    It’s a good theory and a noble goal with some rather serious hurdles to overcome. Not least of which is the requirement to engage so many people when so many people are so very lazy. And people are not inclined to agree while the Semantic Web requires significant (if not universal) agreement to work. Look to Wikipedia talk pages to see what happens when people disagree. I think the focus on some huge, utopian and probably unachievable goal is likely to be nothing more than a distraction from what the next big thing will really be.

    And I’m not about to let you in on that secret. Not before the IPO anyway.

  33. Web 1.5?

    I just doubled it then for double the fun.

    If it hasn’t happened yet then it must come after 2.0, which is happening now. Thus, it must be “3.0” :)
    I didn’t know the IPO market in Australia was going strong! Take us there with you!

    The bet your company made on your product makes you financially biased against the Wikipedia 3.0 vision, as described here. :)

    Marc

  34. Marc,

    I harbor a serious doubt that people will embrace query logic. I agree that application of ontology will eventually lead to better resolution in database searches. But natural language is what Google gets from *most* people, who are and will remain disinclined to learn more “query logic”.

    More problematic, mere ontology will remain unable to resolve the infinite array of conceptual ambiguities and relationships that the human brain is patterned by experience to resolve almost effortlessly. So -short of a new heuristic algorithm- while it’d be great to see results *today*, I just don’t see ontology leading to such superior results that people will abandon Google. But I’d enjoy being wrong.

  35. Tony,

    If you call Google’s boolean search query a “natural language query” then I call my writing Shakespearean. Obviously, neither comes anywhere within 200,000 miles of its claim. But I’d like to think I’m much closer to mine than Google is to being a natural language query system. ;-)

    It’s not mere ontology. The inference rules can be intelligent, and that’s where ‘improved’ heuristics can be relied on to “kind of, sort of” mimic the human brain until we fully get there. We have to take those early awkward steps or the future will not arrive, not even in an unevenly distributed way. :)

    Marc

  36. I’m struggling to see how the likes of Google will fall from grace because of standard XML templating. There will still be information to be searched, or indeed machines to be interfaced with, but how different is this to how we work with Google today? Google can simply integrate web semantics into their search; if they do it better and first, they will continue to dominate the search market. Perhaps only a large organisation such as Google can succeed with this technology, since, as Mr Angry argues, you have to get everyone to work together with standards. Hell, could this be the way that Microsoft or AOL topple Google’s search dominance?

    It is, though, unlikely that Wikipedia or a similar voluntary organisation would produce a system that could topple Google.

  37. smiggs,

    It’s not about the next generation search engine (in this case the inference engine and the whole AI enablement in general) or the tools to produce the ontologies.

    The argument I’ve made here is that Wikipedia has the volunteer resources to produce the needed Semantic Web ontologies for the domains of knowledge that it currently covers, while Google does not have those volunteer resources, which will make it reliant on Wikipedia.

    Those ontologies together with all the information on the Web, can be accessed by Google and others but Wikipedia will be in charge of the ontologies for the large set of knowledge domains they currently cover, and that is where I see the power shift.

    Google and other companies do not have the resources in man power (i.e. the thousands of volunteers Wikipedia has) who would help create those ontologies for the large set of knowledge domains that Wikipedia covers. Wikipedia does, and is positioned to do that better and more effectively than anyone else. Its hard to see how Google would be able create the ontologies for all domains of human knowledge (which are continuously growing in size and number) given how much work that would require. Wikipedia can cover more ground faster with their massive, dedicated force of knowledgeable volunteers.

    I believe that the party that will control the creation of the ontologies (i.e. Wikipedia) for the largest number of domains of human knowledge, and not the organization that simply accesses those ontologies (i.e. Google), will have a competitive advantage.

    There are many knowledge domains that Wikipedia does not cover. Google will have the edge there, but only if the people and organizations that produce the information also produce the ontologies on their own, so that Google can access them from its future Semantic Web engine. My belief is that this will happen, but very slowly, and that Wikipedia can have the ontologies done for all the domains of knowledge it currently covers much faster, and then it would have leverage by virtue of being in charge of those ontologies (aka the basic layer for AI enablement).

    It still remains unclear, of course, whether the combination of Wikipedia and the Semantic Web heralds the beginning of the end for Google or the end of the beginning. As I said in the original part of the post, I believe it is the latter, and the question I pose in the title of this post, in this context, is no more than rhetorical. However, I could be wrong in my judgment, and Google could fall behind Wikipedia as the world’s ultimate answer machine.

    After all, Wikipedia makes “us” count. Google doesn’t. Wikipedia derives its power from “us.” Google derives its power from its technology and inflated stock price. Who would you count on to change the world?

    Marc

  38. Ultimately I don’t think Google will see Wikipedia as a threat as such. They are already a highly trusted source for Google. As more ontologies are created in fields other than those covered by Wikipedia, or indeed as more granular and detailed sources for Wikipedia itself, Wikipedia will become one of very many trusted sources, the data from which will be aggregated and delivered by Google. Surely Google actually welcomes more well-crafted, semantically correct sources than the grey goo of today’s generally poorly structured, non-semantic web sites?

    The real challenge is going to lie in being able to create a plural Internet where there are hundreds or thousands of trusted RDF aggregates or similar, where Wikipedia will be only one voice. There is also a presumption that we must somehow manage to make sense of the grey goo we have (and are still producing more of daily). At this point in time the water/grey goo is still pouring over the dam at a considerable rate. The worst possible scenario is that we must start again, learning from the mistakes of the pioneer days.

    The current google interface is far from a natural language processor, but who would bet against them getting that right if they were first given a higher quality input?

  39. I personally like having Google as a market leader. Yahoo has become a bunch of greedy thugs – their apps invariably have ads on the left, the right, above, and below, all blinking and moving, and $199 a year for a link in their now-useless directory has always been incredibly nasty – and MSN, well, greedy thugs does pretty much cover it! Google on the other hand has shared the wealth quite nicely with content creators via the AdSense program, which allows normal people to earn a good living as webmasters if they have a particular area of expertise and the ability to share it. Yes, it’s also inspired a number of sleazebags to have Made-For-Adsense sites, but that’s inevitable and those tend to get bounced sooner or later.

    Wikipedia.org is a threat to a lot of us who have worked hard to develop good web sites only to find that increasingly people are just going to wikipedia, which I guess is fair but often the depth and variety is greater out there in the wild. I’ll also point out that a single ontology directed source might stifle a lot of the independent voices who add variety as all traffic goes to them, if I’m understanding it right. I do understand that wikipedia.org is “open” but I’ve experienced openness in the Open Directory that ended up being authoritarian dictatorship when editors get out of hand, and don’t see wikipedia.org as being non-susceptible to that.

  40. “I would really love to see more competition”

    And open standards, data, source code, access….all the good stuff.

    http://www.techcrunch.com/2006/06/06/google-to-add-albums-to-picassa/#comment-66429

  41. I enjoyed your commentary on “The Semantic Web” and find your observations right on. It was the perfect follow-up to watching a “Royal Society” presentation by Professor Sir Tim Berners-Lee FRS (video on demand), “The future of the world wide web,” this past evening. Although somewhat dated (as am I), from 9/03, it helped put everything in perspective.

    http://www.royalsoc.ac.uk/portals/bernerslee/rnh.htm

    Thanks.

  42. I don’t think Google will allow themselves to become obsolete. Google just may get as many people to get the job done as Wikipedia has. It’s not impossible. We’ll just have to wait and see on that one.

  43. Nothing can kill google now :)

  44. Good Comment

  45. Google will have to compete with the likes of Yahoo and MSN new search technology.

    I’m thankful at least there is some competition left in the U.S. (oil, telecom, banking – almost no competition any more)

  46. I would have to agree that competition is getting scarcer here in the USA.

  47. So…I am trying to wrap my brain around this concept a bit (regardless of the threat to Google or whatever). The collective mind of this Semantic Web can, in theory, intuitively connect bits of information together due to their relational association beyond a “keyword” association (i.e. your Pizza/Italian example). However, how does it distinguish between good information and bad? How does it determine which parts of the sum of human knowledge to accept and which to reject? Who, or what as the case may be, will determine what information is irrelevant to me as the inquiring end user? And what of ranking or order – Google’s search results are driven by many factors…but they may not be the best results for what I am specifically looking for…even on the level of personal preference. Is this to say that ultimately some over-arching standard will emerge that all humanity will be forced (by lack of alternative information) to conform to? Hmmm…

  48. Question:
    How does it distinguish between good information and bad? How does it determine which parts of the sum of human knowledge to accept and which to reject?

    Reply:
    It wouldn’t have to distinguish between good and bad information (not to be confused with well-formed vs. badly formed) if it were to use a reliable source of information (with associated, reliable ontologies). That is, if the information or knowledge being sought can be derived from Wikipedia 3.0, then it assumes that the information is reliable.

    However, when it comes to connecting the dots to return information or deduce answers from the sea of information that lies beyond Wikipedia, your question becomes very relevant. How would it distinguish good information from bad so that it can produce good knowledge (aka comprehended information, aka new information produced through deductive reasoning based on existing information)?

    Question:
    Who, or what as the case may be, will determine what information is irrelevant to me as the inquiring end user?

    Reply:
    That is a good question, and one which would have to be answered by the researchers working on AI engines for Web 3.0.

    There will be assumptions made as to what you are inquiring about. Just as I had to make an assumption about what you really meant to ask me when I saw your question, AI engines would have to make assumptions, pretty much based on the same cognitive process humans use, which is the topic of a separate post but has been covered by many AI researchers.

    Question:
    Is this to say that ultimately some over-arching standard will emerge that all humanity will be forced (by lack of alternative information) to conform to?

    Reply:
    There is no need for one standard, except when it comes to the language the ontologies are written in (e.g. OWL, OWL-DL, OWL Full, etc.). Semantic Web researchers are trying to determine the best and most usable choice, taking into consideration human and machine performance in constructing those ontologies and, exclusively in the machine’s case, interpreting them.

    Two or more info agents working with the same domain-specific ontology but having different software (different AI engines) can collaborate with each other.

    The only standard required is that of the ontology language and associated production tools.
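    To make the shared-ontology point concrete, here is a minimal sketch in Python. The "food" taxonomy and class names are purely illustrative (they echo the Pizza/Italian example from the comments, not any real ontology). The idea is that meaning lives in the shared triples, so two independently written agents that implement the same standard subclass semantics must return the same answer:

```python
# Minimal sketch: a shared ontology as a set of (subject, predicate,
# object) triples, plus a transitive "is-a" check that any agent
# implementing the standard semantics would answer identically.
# All names are illustrative placeholders.

ontology = {
    ("Pizza", "subClassOf", "ItalianDish"),
    ("Margherita", "subClassOf", "Pizza"),
    ("ItalianDish", "subClassOf", "Food"),
}

def is_a(cls, ancestor, triples):
    """True if cls is (transitively) a subclass of ancestor."""
    if cls == ancestor:
        return True
    parents = {o for (s, p, o) in triples
               if s == cls and p == "subClassOf"}
    return any(is_a(parent, ancestor, triples) for parent in parents)

# Two independently built agents agree, because the ontology,
# not the engine implementation, carries the meaning:
print(is_a("Margherita", "Food", ontology))   # True
print(is_a("Margherita", "Sushi", ontology))  # False
```

    In a real deployment the triples would be expressed in OWL or RDF Schema rather than Python tuples, but the contract between agents is the same: agree on the ontology language, and the engines are free to differ.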

    Marc

    P.S. This set of responses was refined at 10:25pm EST, June 30, ’06.

  49. I’m in utter awe and there is no way I can digest all this information tonight, so I’ve saved it in its own “favorites” folder. The internet is changing and growing every day and it’s difficult to keep up. Thanks for the information.

  50. seems like some hardcore knowledge

  51. Trying to get machines to understand the English language? People have a tough enough time doing that in everyday life, with all the misunderstandings born of incorrect interpretations of the spoken or written word.

    In reference to the first basic question raised by readers: How would a machine distinguish good information from bad? Given that the concept of good vs bad is purely subjective, the answer is machines can’t make that determination because machines can’t make value judgements. Any choice made by a machine that would appear to be a value judgement is really that of the developing programmer.

  52. One big issue I have to wonder about is how to keep out the legions of spammers with their MFA (made for AdSense) sites. They have done an incredible job of staying one step ahead of everyone except perhaps Akismet, spamming Google quite effectively, lodging their turds in forums and blogs, and generally being quite ingenious in their ability to spread filth. Given that we cannot expect our government to crack down on their criminal activities – not necessarily the spamming but the crime that generates the funding for it – how can we insulate Wiki3/Web3 from all that rubbish?

    Another issue is the tendency I’ve noticed for any authority to become incredibly insular and snotty, no doubt due to the massive fight against spam. The Open Directory is famous for its arbitrary, permanent decisions and lack of any ability to take criticism; but Yahoo was hardly fun in its prime, and Google seems to be getting rather aloof as well. The v3 web/wiki may need to confront that head-on since it makes for bad decisions (since self-examination disappears in a mass of self-righteousness.)

  53. On AI and Natural Language Processing

    I believe that the first generation of AI that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines (employing both algorithmic and heuristic approaches) that will not attempt to perform natural language processing. However, they will still have the formal deductive reasoning capabilities described earlier in this article.
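    As a rough illustration of the kind of “relatively simple” engine meant here, the sketch below (with hypothetical facts and rules) performs forward-chaining deduction: it repeatedly fires if-then rules until no new facts appear, with no natural language processing involved:

```python
# Naive forward-chaining inference: derive new facts from rules
# until a fixed point is reached. Facts and rules are illustrative;
# a real engine would work over ontology-backed assertions.

facts = {"bird(tweety)"}
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)"}, "can_fly(tweety)"),
]

def forward_chain(facts, rules):
    """Apply every rule whose premises hold until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# The result includes the deduced fact "can_fly(tweety)"
```

    The point of the sketch is that deduction of this kind is purely mechanical: no understanding of English is required, only well-formed facts and rules.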

    On the Debate about the Nature and Definition of AI

    Please note that I could not get into moral, philosophical or religious debates such as what is AI vs human consciousness or what is evolution vs intelligent design or any other such debate that cannot be resolved by computable (read – provable) and decidable arguments in the context of formal theory.

    Non-formal and semi-formal (incomplete, evolving) theories can be found everywhere in every branch of science. For example, in the branch of physics, the so-called “String Theory” is not a formal theory: it has no theoretically provable arguments (or predictions.) Likewise, the field of AI is littered with non-formal theories that some folks in the field (but not all) may embrace as so-called ’seminal’ works even though those non-formal theories have not made any theoretically provable predictions (not to mention experimentally proven ones.) They are no more than a philosophy, an ideology or even a personal religion and they could not be debated with decidable, provable arguments.

    The embedding of AI into cyberspace will be done at first with relatively simple inference engines (that use algorithms and heuristics) that work collaboratively in P2P fashion and use standardized ontologies. The massively parallel interactions between the hundreds of millions of AI Agents that run within the millions of P2P AI Engines on users’ PCs will give rise to the very complex behavior that is the future global brain.

  54. […] This article explains the background for how the Wikipedia 3.0/Semantic Web vs Google meme was launched. […]

  55. […] So I read this blog entry on A List Apart a ways back, but never made an entry here. It’s titled web 3.0 by Jeffrey Zeldman. And then there’s this blog entry that came out today on the Evolving Trends blog called Wikipedia 3.0: The End of Google?. Jeffrey Zeldman makes the point that Web 2.0 is really about the collaboration and community. Not only for the end-user (i.e. flickr, ma.gnolia, etc…), but on the development side. It allows small teams to work more efficiently and to focus on things like usability. AJAX technologies (PHP, Ruby on Rails, XML, CSS, JavaScript, XHTML, and sometimes Microsoft widgets) allow applications to be elegant and simple. The end result is products that do what they do really well vs. an overload of features and complexity. Zeldman concludes by saying he’s going to let all the hype over Web 2.0 pass and get on to Web 3.0. […]

  56. […] Regarding the previous post about Google, there is also a thesis that predicts the “fall” of Google. It is based on the Semantic Web (Web 3.0) and its power to deduce (or induce) answers, instead of simply searching for keywords. The problem (apart from the technical obstacles) is that collecting and classifying (categorizing) that information is an effort that requires thousands of people working for a long time. The post Wikipedia 3.0: The End of Google proposes that Wikipedia’s immense database, together with its thousands of volunteer collaborators, is the answer to that problem, provided they are given the right tools. […]

  57. […] Evolving Trends says that a “Wikipedia 3.0″ could make Google obsolete. […]

  58. […] But there it is. That was then. Now, it seems, the rage is Web 3.0. It all started with this article here addressing the Semantic Web, the idea that a new organizational structure for the web ought to be based on concepts that can be interpreted. The idea is to help computers become learning machines, not just pattern matchers and calculators. […]

  59. […] Despite that I had painted a picture (not so foreign to many in concept) of a future ‘intelligent collective’ (aka Global Brain [a term which had been in use in somewhat similar context for years now]) in the articles on Wikipedia 3.0 and Web 3.0, I believe that the solution to Web 2.0 is not to make the crowd more intelligent so that ‘it’ (not the people) can control ‘itself’ (and ‘us’ in the process) but to allow us to retain control over it, using true and tried structures and processes. […]

  60. […] Marc introduces the concept of Wikipedia 3.0, heralding the end of Google […]

  61. […] It doesn’t matter who thought of it first. So it’s better to put these ideas out there in the open, be they good ideas like the Wikipedia 3.0, Web 3.0, ‘Unwisdom of Crowds’ or Google GoodSense or “potentially” bad ones like the Tagging People in the Real World or the e-Society ideas. […]

  62. […] web 3.0 and google. Is Web 3.0 the end of Google? … no; 5 reasons why not. Reason 4: Google can listen to your surroundings and within 5 seconds know what program you are watching on TV! […]

  63. […] Wikipedia 3.0: The End of Google? […]

  64. […] Since writing the article on Wikipedia 3.0: The End of Google? I’ve received over 65,000 page-views from people in almost every Internet-savvy population around the world, and all that traffic happened in less than two weeks, with 85% of it in the first 4 or 5 days. […]

  65. […] Wikipedia 3.0: The End of Google? […]

  66. Wikipedia 3.0: The End of Google?

    The author tells us about the “info agents” that will revolutionize the way information is searched for on the web, and the way the Wikipedia community could help accelerate the implementation of the “Ultimate Answer Machine.”

    Note: e…

  67. […] Evolving Trends » Wikipedia 3.0: The End of Google? Caught up with Web 2.0 yet? If not, just skip over and check out Web 3.0! Evolving Trends walks through a very intellectual analysis of the next big thing. (tags: blog future search SemanticWeb wiki Wikipedia) […]

  68. […] Wikipedia 3.0: The End of Google? […]

  69. People actually have to take the time to write if this is going to work. And you can look at all the wikis on the net that have failed because they didn’t reach a critical mass of volunteers. People have jobs and blogs to take care of; therefore, the for-profit model is going to be the way it has to work.

  70. “The emergence of a Wikipedia 3.0 (as in Web 3.0, aka Semantic Web) that is built on the Semantic Web model will herald the end of Google as the Ultimate Answer Machine. It will be replaced with “WikiMind” which will not be a mere search engine like Google is but a true Global Brain: a powerful pan-domain inference engine, with a vast set of ontologies (a la Wikipedia 3.0) covering all domains of human knowledge, that can reason and deduce answers instead of just throwing raw information at you using the outdated concept of a search engine.”

    This captures the emerging power of wikis that we are all moving to.

    Thanks!

  71. […] There is no Web 3.0 page on Wikipedia (in fact, a block has been set up against creating such a page), but the page on the Semantic Web may be of interest. I also plan to read the article Wikipedia 3.0: The End of Google? when I get some time. […]

  72. Update:
    Companies and researchers are developing tools and processes to let domain experts with no knowledge of ontology construction build a formal ontology in a manner that is transparent to them, i.e. without them even realizing that they’re building one. Such tools/processes are emerging from research organizations and Web 3.0 ventures.

    The article “Google Co-Op: The End of Wikipedia?” is linked from this article. It provides a plausible counter argument with respect to how the Semantic Web will emerge. But it’s only one potential future scenario among many even more likely future scenarios, some of which are already taking shape.
    Marc

  73. […] Go here for latest update on ontology creation […]

  74. […] Here is a direct excerpt from the translation of the original article. (I wasted a lot of time trying to understand it; does it show?) By Marc Fawzi of Evolving Trends […]

  75. Right now, a few months earlier than we intended, we have the beginning of such a semantic application in our modest martial arts wiki.

    The main intention is to discover causative and other relations between techniques in our field of expertise.

    We have only a few of these articles ready; here is one example:
    http://www.ninjutsu.co.il/wiki/index.php/Harai_goshi
    The process of describing ontological relations is painstakingly slow, involving many human hours.
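    One way such causative relations between techniques might be encoded (the technique names and link direction below are illustrative placeholders, not taken from the wiki) is as typed links whose transitive closure an engine can walk:

```python
# Sketch: "leads_to" relations between techniques, with a helper
# that collects every technique transitively reachable from a
# starting point. Names are illustrative placeholders only.

leads_to = {
    "harai_goshi": ["o_goshi"],
    "o_goshi": ["uki_goshi"],
    "uki_goshi": [],
}

def reachable(start, graph):
    """All techniques transitively reachable from `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(reachable("harai_goshi", leads_to))
```

    The painstaking human work is in asserting each individual link correctly; once asserted, queries like this come essentially for free.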

  76. Hi Yossi,

    Good stuff.

    The creation of domain-specific ontologies (including domain-specific ontologies that enable machine reasoning about a given subject) has to be a transparent process from the domain expert’s perspective. Whether that’s done through technology or well-designed processes depends on what needs to be accomplished.

    See latest update on Web 3.0 technologies: https://evolvingtrends.wordpress.com/2006/11/19/web-30-update/

    Thanks for your comment.

    Marc

  77. […] by Wikipedia’s founder), future semantic version of Wikipedia (aka Wikipedia 3.0), and Google’s Pagerank algorithm to shed some light on how to design a better semantic […]

  78. […] Interestingly, my friend Marc Fawzi described exactly this idea in a piece he posted on the subject last […]

  79. 20 years ago or so, there was great hype about Japan’s “5th generation of computers” project. The idea was to use AI tools (specifically, Prolog) to make computers understand people. The project failed with no result. Perhaps Web 3.0 will be more successful, and perhaps not.
    Maybe, for a start, create a standalone semantic operating system? Or will we have to access the semantic web with dumb windoze machines?

  80. There is at least one Semantic Desktop project out there already.

    Marc

  81. Alexei has touched on the subject of needing more powerful computers and operating systems to handle the semantic explosion of synonyms and language rules. The brain is still king. Until quantum computers.

  82. Much is being done to define and tackle the problems, so you should see some exciting paradigms emerge over the next few years.

    There are many ways to define the problem, so there are many ways to solve it.

    :]

  83. […] personas consideran que esta Web 3.0 será el fin de empresas como Google, pero otras  ya alucinan el surgimiento de una Inteligencia Artificial al estilo de Ghost in the […]

  84. […] « Evolving Trends Posted in Semantic web, Web 2.0 by Aman Shakya on the March 23rd, 2007 Wikipedia 3.0: The End of Google? « Evolving Trends The Semantic Web or Web 3.0 promises to “organize the world’s information” in a dramatically […]

  85. Google may reap most of the benefits by hosting Wikipedia.

  86. People are not as stupid as you may think.

    Any such decision involving collusion between a public-created knowledge base like Wikipedia and any company like Google who may try to control it will be challenged and opposed by the majority of people.

    Marc

  87. […] interesting article, written some time ago now, suggests that the Web 3.0 vision might be the way to end Google’s monopoly. It is touted as being a new way to organise […]

  88. […] can read the whole article here Wikipedia3.0 – The End of Google A more trimmed down version of the same article can be found here P2P ai will kill […]

  89. […] blog (https://evolvingtrends.wordpress.com/2006/06/26/wikipedia-30-the-end-of-google/) that was talking about Web. 3.0. The argument it was making about Wikipedia, ontological knowledge […]

  90. I agree with Marc that there is at least one Semantic Desktop project out there already.

  91. […] RuleML, Semantic Web, Web 3.0, oWL, ontology — evolvingtrends @ 5:28 am At the time the Wikipedia 3.0: The End of Google? article was written, I didn’t think it necessary to supply external references, especially […]

  92. […] not to forget. For that, a machine would need artificial intelligence. One could thus define: Web 3.0 = Semantic Web + artificial intelligence. Related posts on Blogpiloten.de on the topics: AI, künstliche Intelligenz, Semantic […]

  93. […] offer; see for example semanticweb.org. When it comes to searching for information, Wikipedia 3.0 might perhaps one day dethrone Google […]

  94. […] quote from an article (blog) from evolvingtrends has some interesting points. If (at last) we get to the point where web pages are […]

  95. I can’t read all the comments, so my question may have been asked … Is it ethical for anyone to control the product of volunteer efforts? Even now, is it ethical to profit by them? … Hmmm?

  96. John,

    That’s my whole gripe with Jimmy Wales piggybacking his VC-funded venture (Wikia) on top of his leveraged position at Wikipedia, even hosting semantic wiki technology that was initially considered for Wikipedia itself.

    This unfortunate mixing of profit and non-profit activities (which are in this case within the same industry) is problematic at best and highly unethical and immoral at worst.

    That’s not to mention the secret ban lists and corruption at Wikipedia (documented in my post on “People Hosted P2P Version of Wikipedia“)

    So people need to start building and hosting their own wikipedia where deletions and bans are not allowed and where good content rises to the top on its own (like through PageRank or some other equally sophisticated trust metric)
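    A trust metric of the PageRank family can be sketched in a few lines; the toy link graph and article names below are purely illustrative, not the actual Google algorithm or any real data:

```python
# Minimal sketch of PageRank-style scoring via power iteration,
# as one example of a content trust metric for a wiki. The link
# graph and article names are a toy example.

links = {
    "article_a": ["article_b", "article_c"],
    "article_b": ["article_c"],
    "article_c": ["article_a"],
}

def pagerank(graph, damping=0.85, iterations=50):
    """Iteratively redistribute rank along links until it stabilizes."""
    n = len(graph)
    ranks = {page: 1.0 / n for page in graph}
    for _ in range(iterations):
        new_ranks = {}
        for page in graph:
            # Sum of rank flowing in from every page that links here,
            # each contributor's rank split across its outgoing links.
            incoming = sum(ranks[p] / len(outs)
                           for p, outs in graph.items() if page in outs)
            new_ranks[page] = (1 - damping) / n + damping * incoming
        ranks = new_ranks
    return ranks

ranks = pagerank(links)
# article_c accumulates the most rank: both article_a and
# article_b link to it.
```

    The appeal for a people-hosted wiki is that rank emerges from the link structure the community itself creates, rather than from an editor’s deletion or ban decision.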

    Marc

Leave a comment