
Web 3.0: Basic Concepts

In Uncategorized on June 30, 2006 at 7:53 am


Notes

You may also wish to see Wikipedia 3.0: The End of Google?, the original ‘Web 3.0/Semantic Web’ article, and P2P 3.0: The People’s Google, a more extensive version of this article that discusses the implications of P2P Semantic Web engines for Google.

Semantic Web Developers:

Feb 5, ’07: The following reference should provide some context regarding the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0), but there are better, simpler ways of doing it.

  1. Description Logic Programs: Combining Logic Programs with Description Logic


Article

Semantic Web (aka Web 3.0): Basic Concepts

Basic Web 3.0 Concepts

Knowledge domains

A knowledge domain is something like Physics, Chemistry, Biology, Politics, the Web, Sociology, Psychology, History, etc. Each domain can contain many sub-domains, and each sub-domain can have sub-domains of its own, and so on.

Information vs Knowledge

To a machine, knowledge is comprehended information, i.e. new information produced through the application of deductive reasoning to existing information. To a machine, information is only data until it is reasoned about.

Ontologies

For each domain of human knowledge, an ontology must be constructed, partly by hand and partly with the aid of dialog-driven ontology construction tools.

Ontologies are neither knowledge nor information. They are meta-information: information about information. In the context of the Semantic Web, ontologies encode, using an ontology language, the relationships between the various terms within the information. Those relationships, which may be thought of as axioms (basic assumptions), together with the rules governing the inference process, both enable and constrain the interpretation (and well-formed use) of those terms by the Info Agents, allowing them to reason out new conclusions from existing information, i.e. to think. In other words, theorems (formal deductive propositions that are provable from the axioms and the rules of inference) may be generated by the software, thus allowing formal deductive reasoning at the machine level. And given that an ontology, as described here, is the statement of a logic theory, two or more independent Info Agents processing the same domain-specific ontology will be able to collaborate and deduce an answer to a query without being driven by the same software.
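
To make the distinction concrete, here is a minimal sketch in plain Python (the tiny domain, the triple format, and all names are illustrative, not any standard ontology language): the ontology holds only meta-information, i.e. axioms relating the terms; the facts are the information; and the deduced conclusion is a machine-level theorem.

```python
# A minimal sketch: ontology = meta-information (axioms about terms),
# facts = information (data about individuals). All names are illustrative.

# Ontology: relationships between terms (here, just subclass axioms).
ontology = {
    "subclass_of": [
        ("Dog", "Mammal"),      # every Dog is a Mammal
        ("Mammal", "Animal"),   # every Mammal is an Animal
    ],
}

# Information: raw data about individuals -- mere data until reasoned about.
facts = {("Rex", "is_a", "Dog")}

def is_a(individual, cls):
    """Prove 'individual is_a cls' by following the subclass axioms."""
    supers = dict(ontology["subclass_of"])  # Dog -> Mammal -> Animal
    current = next((c for s, p, c in facts
                    if s == individual and p == "is_a"), None)
    while current is not None:
        if current == cls:
            return True
        current = supers.get(current)
    return False

print(is_a("Rex", "Animal"))  # True: a theorem deduced from axioms + facts
```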

Inference Engines

In the context of Web 3.0, inference engines will combine the latest innovations from the artificial intelligence (AI) field with domain-specific ontologies (created as formal or informal ontologies by, say, Wikipedia, as well as others), domain inference rules, and query structures to enable deductive reasoning at the machine level.
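
As a hedged illustration of the core mechanism, here is a toy forward-chaining inference engine in plain Python. The triple and rule formats are invented for this sketch and do not correspond to any particular engine’s API, but the loop captures the essential idea: apply domain inference rules to existing facts until no new conclusions can be drawn.

```python
# A toy forward-chaining inference engine. Triple/rule formats are invented
# for this sketch, not any real engine's API.

facts = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
}

# Domain rule: capital_of(x, y) & located_in(y, z) => located_in(x, z)
# Variables are strings beginning with '?'.
rules = [
    ([("?x", "capital_of", "?y"), ("?y", "located_in", "?z")],
     ("?x", "located_in", "?z")),
]

def match(pattern, fact, bindings):
    """Unify one triple pattern with one ground fact; None on failure."""
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if b.get(p, f) != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def match_all(premises, facts, bindings):
    """Yield every variable binding that satisfies all premises."""
    if not premises:
        yield bindings
        return
    for fact in facts:
        b = match(premises[0], fact, bindings)
        if b is not None:
            yield from match_all(premises[1:], facts, b)

def forward_chain(facts, rules):
    """Apply the rules until no new fact is derived (deductive closure)."""
    while True:
        new = {tuple(b.get(t, t) for t in conclusion)
               for premises, conclusion in rules
               for b in match_all(premises, facts, {})} - facts
        if not new:
            return facts
        facts |= new

print(forward_chain(facts, rules))
# Includes the derived fact ('Paris', 'located_in', 'Europe').
```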

Info Agents

Info Agents are instances of an Inference Engine, each working with a domain-specific ontology. Two or more agents working with a shared ontology may collaborate to deduce answers to questions. Such agents may even be based on differently designed Inference Engines and still be able to collaborate.
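
The following sketch (plain Python again; the class, the single hard-wired rule, and the share protocol are all illustrative) shows the idea in miniature: two agents hold different facts but speak the same ontology’s vocabulary, and only by collaborating can either one answer the query.

```python
# An illustrative Info Agent: an inference engine instance plus its facts.

class InfoAgent:
    """One instance of an inference engine working with domain facts."""

    def __init__(self, name, facts):
        self.name = name
        self.facts = set(facts)  # (subject, predicate, object) triples

    def share(self, other):
        """Collaboration step: pool facts with another agent. The other
        agent could run a differently designed engine, as long as both
        speak the same ontology's vocabulary."""
        merged = self.facts | other.facts
        self.facts = other.facts = merged

    def deduce(self):
        """One domain rule, applied to closure:
        capital_of(x, y) & located_in(y, z) => located_in(x, z)"""
        changed = True
        while changed:
            changed = False
            caps = [(s, o) for s, p, o in self.facts if p == "capital_of"]
            locs = [(s, o) for s, p, o in self.facts if p == "located_in"]
            for x, y in caps:
                for y2, z in locs:
                    derived = (x, "located_in", z)
                    if y == y2 and derived not in self.facts:
                        self.facts.add(derived)
                        changed = True

    def answer(self, query):
        self.deduce()
        return query in self.facts

# Each agent alone lacks the facts to answer; together they deduce it.
a = InfoAgent("geo-1", {("Paris", "capital_of", "France")})
b = InfoAgent("geo-2", {("France", "located_in", "Europe")})

query = ("Paris", "located_in", "Europe")
print(a.answer(query))  # False: agent a cannot prove it alone
a.share(b)              # collaborate
print(a.answer(query))  # True: deduced from the pooled facts
```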

Proofs and Answers

The interesting thing about Info Agents, which I did not clarify in the original post, is that they will be capable not only of deducing answers from existing information (i.e. generating new information, and gaining knowledge in the process, for those agents with a learning function) but also of formally testing propositions (represented in some query logic) that are made directly, or implied, by the user.
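
A minimal sketch of such proposition testing, using the same toy triple format as above: the agent decides whether the user’s proposition holds and also records which rule firing justified it, i.e. a rudimentary proof. Everything here (rule name, bookkeeping) is illustrative.

```python
# A sketch of formally testing a user's proposition, with a proof trace.

facts = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
}
proof = {}  # derived fact -> (rule name, premises that justified it)

# Apply one domain rule and record why each new fact holds:
# capital_of(x, y) & located_in(y, z) => located_in(x, z)
for (x, p1, y) in list(facts):
    if p1 != "capital_of":
        continue
    for (y2, p2, z) in list(facts):
        if p2 == "located_in" and y2 == y:
            derived = (x, "located_in", z)
            if derived not in facts:
                facts.add(derived)
                proof[derived] = ("transitive-location",
                                  [(x, p1, y), (y2, p2, z)])

# The user's proposition, stated in the same toy query logic:
proposition = ("Paris", "located_in", "Europe")
if proposition in facts:
    print("proved:", proposition)
    print("because:", proof.get(proposition, "asserted as a base fact"))
else:
    print("not provable from the current facts:", proposition)
```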

“The Future Has Arrived But It’s Not Evenly Distributed”

Currently, Semantic Web (aka Web 3.0) researchers are working out the technology and human-resource issues, and people like Tim Berners-Lee, the father of the Web, are battling critics and enlightening minds about the coming human-machine revolution.

The Semantic Web (aka Web 3.0) has already arrived, and Inference Engines are already working with prototypical ontologies. But the effort is a massive one, which is why I was suggesting that its most likely enabler will be a social, collaborative movement such as Wikipedia, which has the human resources (in the form of thousands of knowledgeable volunteers) to help create the ontologies, most likely as informal ontologies based on semantic annotations. Combined with inference rules for each domain of knowledge and the query structures for the particular schema, those ontologies would enable deductive reasoning at the machine level.
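
As a rough sketch of how volunteers’ annotations could seed such informal ontologies, the snippet below parses Semantic MediaWiki-style [[property::value]] markup into subject-predicate-object triples. The page text is invented, and the regex is a simplification of the real markup rules.

```python
# A sketch: turn wiki volunteers' semantic annotations into triples.
# The page text is invented; the regex simplifies real SMW markup.

import re

page_title = "France"
page_text = """
France is a country in [[located in::Europe]].
Its capital is [[has capital::Paris]].
"""

# Each [[property::value]] annotation becomes (page, property, value).
ANNOTATION = re.compile(r"\[\[([^:\]]+)::([^\]]+)\]\]")

triples = [(page_title, prop.strip(), value.strip())
           for prop, value in ANNOTATION.findall(page_text)]

print(triples)
# [('France', 'located in', 'Europe'), ('France', 'has capital', 'Paris')]
```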

Addendum

On AI and Natural Language Processing

I believe that the first generation of AI that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines that will NOT attempt to perform natural language processing, where current approaches still face too many serious challenges. However, they will still have the formal deductive reasoning capabilities described earlier in this article, and users would interact with these systems through some query language.
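
To illustrate what ‘some query language’ might look like in contrast to natural language, here is a toy pattern-based query in plain Python; the syntax is invented for this sketch, while a real system would use a formal query language along the lines of SPARQL.

```python
# A toy structured query: instead of asking "Which cities are in Europe?"
# in natural language, the user writes a pattern. Syntax is invented here.

query = ("?city", "located_in", "Europe")

facts = {
    ("Paris", "located_in", "Europe"),
    ("Lyon", "located_in", "Europe"),
    ("Tokyo", "located_in", "Asia"),
}

def run_query(pattern, facts):
    """Return a variable binding for each fact matching the pattern."""
    results = []
    for fact in facts:
        bindings = {}
        for p, f in zip(pattern, fact):
            if p.startswith("?"):
                bindings[p] = f
            elif p != f:
                break
        else:
            results.append(bindings)
    return results

print(run_query(query, facts))
# [{'?city': 'Paris'}, {'?city': 'Lyon'}]  (order may vary; facts is a set)
```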

Related

  1. Wikipedia 3.0: The End of Google?
  2. P2P 3.0: The People’s Google
  3. All About Web 3.0
  4. Semantic MediaWiki

Tags:

Semantic Web, Web standards, Trends, OWL, innovation, Google, inference engine, AI, ontology, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, AI Engine, OWL-DL, AI Matrix, Semantic MediaWiki, P2P

  1. I find the use of the term AI with these technologies a bit premature. Sure, they might be intelligent agents, but I’m pretty sure that they won’t be AIs for the moment. Artificial Intelligence is still something that we will see in the future.

    I agree that with the distributed processes you talk about, there might be the potential for an emerging intelligence, but I’m not sure that will happen so soon.

    True AI or true artificial consciousness is still further away, though I do enjoy the analysis you have written.

    I do enjoy the forecasting of emergent technologies that will happen because of the web and Web 2.0. It has already fundamentally altered the way we live our lives. It will continue to do so.

    I agree though that we are heading towards a technological singularity within the next 50 years and major discoveries will be made in multiple fields, from biotechnology to artificial intelligence, to quantum mechanics.

    Once another space race starts, when corporations see some sort of benefit or profit from launching themselves into space, technology will really advance in rapid spurts.

  2. I should start by clarifying the term “AI.” I see “AI” as being any machine-executed process that tries to mimic the way we process information. This also happens to be the view of the majority of AI researchers.

    The massively parallel interactions within a very large distributed AI Matrix, made up of existing AI Engines (e.g. inference engines), all collaborating in P2P fashion and using standardized ontologies, will give rise to a complex behavior (which I’m calling the Global Brain).

    A large number of relatively simple AI engines working with standardized ontologies can be combined together (in P2P fashion) to produce a massively parallel, deductive reasoning machine.

    I believe that it is theoretically possible today to have a distributed AI matrix (or Global Brain), made of P2P AI Engines that are based on existing Semantic Web inference engines and that work with standardized ontologies, that would be far more powerful than Google. The catch is that the Semantic Web revolution does not yet have the popular grassroots support it deserves, nor the ontologies it needs to function. But this is where I see Wikipedia eventually filling the gap, with its large workforce of knowledgeable volunteers who would help develop the ontologies.

    Marc

    Updated 7:45AM EST

  3. seems like some hardcore knowledge

  4. I guess it’s the difference between current AI research and science fiction. I find that the term employed this way can be confusing to some people, though I do not disagree with your analysis.

  5. An inference engine (which may also be called an ‘expert system’ in some domains) is an AI application.

    The problem as you said is that some people think AI is like Mr Data (Star Trek NG) but it isn’t. Some washing machines have AI embedded in them! I mean it’s in a lot of products. Some Canon cameras follow the movement of the iris to determine where to focus. That to me is AI because it involves shape recognition… a cognitive process.

    BTW, Re: 1-liner comment from ‘unblock myspace’

    Am I getting a creative form of spam from MySpace-promoting spam bots? It’s hard to tell because although the statement seems relevant (on the surface) it was duplicated under the other thread. So I tend to think it’s spam, but we’ll see what happens. If you see any ads for Viagra on this blog then you’ll know what happened.
    Speaking of spam, some spam bots employ “AI” as in “trying to get around obstacles using pure heuristics.”

    Marc

  6. Addendum

    On AI and Natural Language Processing

    I believe that the first generation of AI that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines (employing both algorithmic and heuristic approaches) that will not attempt to perform natural language processing. However, they will still have the formal deductive reasoning capabilities described earlier in this article.

    On the Debate about the Nature and Definition of AI

    Please note that I cannot get into moral, philosophical, or religious debates, such as AI vs. human consciousness or evolution vs. intelligent design, or any other such debate that cannot be resolved by computable (read: provable) and decidable arguments in the context of formal theory.

    Non-formal and semi-formal (incomplete, evolving) theories can be found in every branch of science. For example, in physics, the so-called “String Theory” is not a formal theory: it has no theoretically provable arguments (or predictions). Likewise, the field of AI is littered with non-formal theories that some folks in the field (but not all) may embrace as so-called “seminal” works, even though those non-formal theories have not made any theoretically provable predictions (not to mention experimentally proven ones). They are no more than a philosophy, an ideology, or even a personal religion, and they cannot be debated with decidable, provable arguments.

    The embedding of AI into cyberspace will be done at first with relatively simple inference engines (that use algorithms and heuristics) that work collaboratively in P2P fashion and use standardized ontologies. The massively parallel interactions between the hundreds of millions of AI Agents that run within the millions of P2P AI Engines on users’ PCs will give rise to the very complex behavior that is the future global brain.

  7. Yes, expert systems are quite an interesting topic of discussion, especially since most people aren’t aware of them.

    Maybe it’s because I’ve been involved in science-fiction that the term AI gives such strong connotations. I am also a programmer though not in AI. Closest I got to that was LISP or some Math I did during my studies at the U.

  8. BTW, for those of us not so well versed in ontologies and AI research, here is a link to Wikipedia’s AI portal.
    http://en.wikipedia.org/wiki/Portal:Artificial_intelligence

    I have to say, Marc, this blog is one of the most interesting reads out there. The readership base is well deserved.

  9. Thank you.

    I believe there will be more ground breaking and boat rocking.

    It’s good to know that others care.

    I know people care when they leave comments that enlighten the discussion and help generate the ripples we need to keep this vision going.

    Cheers,

    Marc

  10. […] Web 3.0 vs Google | Wikipedia 3.0: The End of Google? […]

  11. Technology being one of my primary interests (the other’s philosophy), I think this discussion has removed a lot of the roadblocks in my mind. Though my comment might start a bit off-note, I hope it falls into perspective when read completely.
    I do not for a moment doubt that we (even I am part of the information contribution) will be able to bring about the “Global Mind” being discussed here. Though we are far from there, we are moving in the right direction.
    In my view, the ultimate destination of all these technologies is “controlled and moderated telepathy”: “controlled” by the sender, since you don’t want anybody prying into your mind, and “moderated” by some authorities, whoever they are, so that no unwarranted information gets propagated.
    Artificial Intelligence (itself in its nascent stages) would still face the simple barrier of language. A “global mind” created out of information contributed and meta-tagged for intelligence by millions of contributors across the world in English would still have problems answering questions in German or Japanese, unless it is meta-tagged with information in those languages as well.
    What we need to do going forward is to feed audio-visual information to this “global mind” in addition to text. That way an Englishman in London searching for a “holiday on a sunny beach with white sand by the blue sea” could get the same audio-visual content and have it delivered to his non-English-speaking girlfriend in China. In that way the language barrier could be weakened. We would agree that most of the information we seek is not only for our own personal consumption but also for sharing with one or many other recipients. AI would thus be an enabler taking the technology towards that ultimate destination, telepathy.
    Until then, let’s work towards Web 3.0, which would be one of the greatest things to happen to us in our lifetime.

  12. […] O’Reilly’s has been the future almost without ever being the present. And even without being able to define it precisely, people are already talking about Web 3.0. Dizzying speed. But in a way this makes us feel alive, don’t you think? WPvideo […]

  13. I completely agree that semantic networks, the ontology world, and intelligent agents are new tendencies in technological development, but perhaps everything depends on people, business, and society accepting this new form of communication.

