
Intelligence (Not Content) is King in Web 3.0

In Uncategorized on July 17, 2006 at 2:35 pm

Author: Marc Fawzi

Twitter: http://twitter.com/#!/marcfawzi

License: Attribution-NonCommercial-ShareAlike 3.0

Observations

  1. There’s an enormous amount of free content on the Web.
  2. Pirates will always find ways to share copyrighted content, i.e., get content for free.
  3. There’s an exponential growth in the amount of free, user-generated content.
  4. Net Neutrality (or the lack of a two-tier Internet) will only help ensure the continuance of this trend.
  5. Content is becoming so commoditized that accessing it costs no more than the monthly ISP fee.

Conclusions (or Hypotheses)

The next value paradigm in the content business is going to be about embedding “intelligent findability” into the content layer: by using a semantic CMS (like Semantic MediaWiki, which enables domain experts to build informal ontologies [or semantic annotations] on top of the information) and by adding inferencing capabilities to existing search engines. I know this represents less than the full vision for Web 3.0 as I’ve outlined it in the Wikipedia 3.0 and Web 3.0 articles, but it’s a quantum leap beyond the level of intelligence that exists today within the content layer. Also, a semantic CMS can be part of P2P Semantic Web Inference Engine applications that would push centralized search models like Google’s a step closer to being a “utility” like transport, unless Google builds its own AI, which would ultimately have to compete with P2P semantic search engines (see: P2P 3.0: The People’s Google and Get Your DBin).

In other words, “intelligent findability,” not content in itself, will be King in Web 3.0.
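
To make “intelligent findability” concrete, here is a minimal sketch in Python using the rdflib library. All of the facts, property names, and the example.org namespace are hypothetical, invented only to illustrate annotating content with meaning and then querying by meaning rather than by keyword:

    # A toy "intelligent findability" sketch. Requires: pip install rdflib
    from rdflib import Graph, Namespace, RDF

    EX = Namespace("http://example.org/")  # hypothetical namespace

    g = Graph()
    # Semantic annotations a domain expert might layer on top of plain
    # content, e.g. via a semantic CMS such as Semantic MediaWiki.
    g.add((EX.AspirinArticle, RDF.type, EX.Article))
    g.add((EX.AspirinArticle, EX.discussesDrug, EX.Aspirin))
    g.add((EX.Aspirin, EX.treats, EX.Headache))

    # "Search by meaning": find articles about anything that treats
    # headaches, even if the word "headache" never appears in the text.
    results = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?article WHERE {
            ?article ex:discussesDrug ?drug .
            ?drug ex:treats ex:Headache .
        }
    """)
    for row in results:
        print(row.article)  # -> http://example.org/AspirinArticle

A keyword engine would miss that article for a query like “what treats headaches”; an inference layer over the annotations finds it by following the treats relation.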

Related

  1. Towards Intelligent Findability
  2. Wikipedia 3.0: The End of Google?
  3. Web 3.0: Basic Concepts
  4. P2P 3.0: The People’s Google
  5. Why Net Neutrality is Good for Web 3.0
  6. Semantic MediaWiki
  7. Get Your DBin

Tags:

net neutrality, two-tier internet, content, Web 3.0, inference engine, semantic-web, artificial intelligence, ai

  1. You speak of “intelligent findability”. It seems to me that there is also a fidelity component. The masses will eventually cause the purest iteration to rise to the top. However, with enough money/power and an insidious desire, such a system could easily be poisoned. This is evidenced by systems like Digg and Wikipedia having to come up with creative solutions. Would the intelligence provide a framework to help an inventive soul effectively prevent poisoning?

  2. I don’t see how the ‘masses’ can cause the purest iteration to rise to the top. I can see how they can cause the average or lowest-common-denominator iteration to rise to the top (for a detailed discussion, see my post on the Unwisdom of Crowds).

    But to your point about “poisoning,” there are two things: the information and the ontologies (the semantic/foundational layer). The foundational layer can always be developed in-house. In other words, there is a separation between the “brain” or “intelligence” and the information. The information can always be poisoned regardless of whether we stick to Web 2.0 or move to the Semantic Web (Web 3.0), a move which is already happening.

    In other words, the poisoning of information will always happen, and you can’t bet on the masses at all times (and definitely not on all types of ‘masses’) to give you the average judgment (which would presumably be free of poisoning), since the masses can just as easily give you the lowest-common-denominator judgment (again, see my article on the Unwisdom of Crowds).

    Intelligent Findability will simply allow you to find information by comparing meaning rather than keywords. I wouldn’t be surprised if Google is already building it into their search engine, but they would need a set of formal ontologies covering most domains of human knowledge, and Wikipedia can get that more easily than Google because they already control the largest database of human knowledge and have domain experts among their volunteers who can produce the ontologies (or the foundational layer for intelligence).

    If the information source is poisoned then the reasoning will be wrong, but that’s what we have to deal with every day in real life. Information can and will always be susceptible to poisoning. The key is to use information sources you trust. Intelligent Findability will only make it easier for you to find what you’re looking for, since you can search by meaning rather than by keywords.

    Marc

  3. A Google executive recently challenged Tim Berners-Lee’s promotion of the Semantic Web based on the same argument Tony used, i.e., “poisoning” or “deception.”

    Tim’s response was that [quoting from a ZDNet article] “deception on the Internet is a problem, but he argued that part of the Semantic Web is about identifying the originator of information, and identifying why the information can be trusted, not just the content of the information itself.”

    That’s interesting. (For what identifying the originator might look like in practice, see the sketch after these comments.)

    Marc

  4. […] In response to: Evolving Trends: Intelligence Not Content Is King in Web 3.0 […]
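
A postscript on the exchange above: the “identifying the originator” point can be sketched with the same triple machinery as before. In this hypothetical Python/rdflib snippet, each source’s claims live in their own graph, and only statements whose originator clears a trust threshold are merged before any reasoning is done (all source names and trust scores are invented for illustration):

    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/")  # hypothetical namespace

    # One graph per information source, paired with a hypothetical
    # trust score for that source's originator.
    wikipedia = Graph()
    wikipedia.add((EX.Aspirin, EX.treats, EX.Headache))

    blog = Graph()
    blog.add((EX.Aspirin, EX.treats, EX.BrokenBones))  # a poisoned claim

    sources = {"wikipedia-feed": (wikipedia, 0.9),
               "anonymous-blog": (blog, 0.2)}

    # Merge only statements from originators above the threshold, so
    # poisoned sources never reach the inference layer.
    trusted = Graph()
    for name, (graph, score) in sources.items():
        if score >= 0.5:
            for triple in graph:
                trusted.add(triple)

    print(len(trusted))  # -> 1: only the trusted claim survives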
