Why am I not surprised?
The idea is that the Semantic Web will allow people to run AI-enabled P2P search engines that will collectively be more powerful than Google can ever be. That would relegate Google to just another source of information, especially as Wikipedia [not Google] is positioned to lead the creation of domain-specific ontologies, which are the foundation for machine reasoning [about information] in the Semantic Web.
Additionally, we could see content producers (including bloggers) creating informal ontologies on top of the information they produce, using a standard language like RDF. This would have the same effect as far as P2P AI search engines are concerned, and would accelerate Google’s anticipated slide into the commodity layer (unless, of course, they develop something like GWorld).
In summary, any widely adopted Semantic Web standards would significantly lower the value of Google’s investment in the current, non-semantic Web by commoditizing “findability.” They would allow intelligent info agents to be built that collaborate with each other to find answers more effectively than the current version of Google, using “search by meaning” as opposed to “search by keyword,” and more cost-efficiently than any future AI-enabled version of Google, using disruptive P2P AI technology.
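To make the “search by meaning” vs. “search by keyword” distinction concrete, here is a minimal toy sketch (not any actual search engine’s implementation — all names and triples below are invented for illustration). It stores a few RDF-style subject–predicate–object triples and applies one simple inference step, following `subclass_of` links, so a query can match by meaning rather than by literal keyword:

```python
# Toy RDF-style triple store with one naive inference rule (subclass closure).
# Everything here is a made-up illustration of the general idea.

TRIPLES = {
    ("Tim Berners-Lee", "invented", "World Wide Web"),
    ("Tim Berners-Lee", "is_a", "Computer Scientist"),
    ("Computer Scientist", "subclass_of", "Scientist"),
}

def instances_of(cls, triples):
    """Return all subjects that are instances of `cls`,
    following subclass_of links transitively (simple inference)."""
    subclasses = {cls}
    changed = True
    while changed:
        changed = False
        for s, p, o in triples:
            if p == "subclass_of" and o in subclasses and s not in subclasses:
                subclasses.add(s)
                changed = True
    return {s for s, p, o in triples if p == "is_a" and o in subclasses}

# A keyword search for "Scientist" would miss Tim Berners-Lee, because the
# literal string "Scientist" never appears in a triple about him directly;
# inference over the ontology finds him anyway.
print(instances_of("Scientist", TRIPLES))  # {'Tim Berners-Lee'}
```

Real Semantic Web reasoning would use RDF/OWL and a proper inference engine, but the core point is the same: the ontology, not string matching, determines what a query returns.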
For more information, see the articles below.
- Wikipedia 3.0: The End of Google?
- All About Web 3.0
- P2P 3.0: The People’s Google
- Intelligence (Not Content) is King in Web 3.0
- Web 3.0 Blog Application
- Towards Intelligent Findability
- Why Net Neutrality is Good for Web 3.0
Semantic Web, Web standards, Trends, OWL, Google, inference engine, AI, Web 2.0, Web 3.0, Wikipedia, Wikipedia 3.0, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, P2P Semantic Web Inference Engine, semantic blog, intelligent findability, RDF