
Archive for July, 2006

The Future of Governance

In Uncategorized on July 10, 2006 at 8:06 pm

Please see Unwisdom of Crowds for an introduction to this piece.

Future of Governance

My basic assumption is that the process of governing human societies in cyberspace will ultimately go back to the classical model we have today in the Western world. It may take 10, 20 or 50 years of experimentation, but I believe we will come full circle to what we have today.

I believe that the core governance process that is our democratic process (which is in essence the same basic idea as that invented by the Greeks, with several important innovations built on top of it) is immune to innovation in the short range. This belief applies to our core governance process now and at any given time, i.e. it will always be immune to innovation in the short range. Change in any process that is fundamental to our existence tends to happen only every so many thousand years. Our system today is not that different from the system the Greeks invented thousands of years ago.

Many would disagree, but in the short range, I don’t believe that we will be successful in changing the core process that is the current process we have today. If we were able to change the system of governance so easily, we would have witnessed a much more rapid rate of incremental change by now. It’s simply frozen where it’s at and will be so for many centuries, at the least.

Somewhat related:

  1. Unwisdom of Crowds

Tags:

Trends, wisdom of crowds, tagging, Startup, mass psychology, governance, cult psychology, Web 2.0, digg, censorship, democracy, P2P, P2P 2.0, social bookmarking, social networking, Web 2.5, hierarchy

Is Google a Monopoly? // Update: May 10, 2011 Re: Microsoft Instigated Antitrust Investigation of Google

In Uncategorized on July 10, 2006 at 6:13 am

Important Update:  Late 2011

Since the #occupyWallStreet protests, Google has completely buried this article (I believe due to its anti-corporatism verbiage), after it had been in the top 10 results (first page) for over five years.

They did this once before (a few years ago), when they completely buried it and then magically brought it back. We hope it will be uncensored once again. We also hope that Google is demoted to a back-end service for the centralized corporate web and that everyone moves to embrace the emerging P2P/Mesh internet.

Why do we give Google so much power over us?

What choices do we have?

Is this debate too academic/intellectual or is there a real issue here that we need to address?

All good questions that must be asked.

Author: Marc Fawzi

>>> Twitter: http://twitter.com/marcfawzi <<<

License: Attribution-NonCommercial-ShareAlike 3.0

I must not question corporate rule

May 10, 2011 Update:

Microsoft, an aggressive [former?] monopoly, has filed a monopoly complaint against Google. Say what?! One giant corporation fighting another giant corporation? What can small businesses and startups, who supply most jobs in this country, do about either of them?

I encourage you to read the rest of this to see why it’s important for software startups to have a better-than-good chance of competing against Google.

December 7, 2010 Update:

Google attempted to purchase Groupon, the online daily deal company, for a reported $6 billion. Some sources have indicated that the deal fell through due to concerns around regulatory interference and pending antitrust investigations (i.e. someone is watching.)

November 23, 2010 Update:

In response to Facebook’s highly targeted ad serving system, which threatens Google’s dominance of online advertising, Google is expected to launch the widely reported “Google Me” service, which may leverage Google’s other resources, like Google Apps, Google App Engine and Android, to build a powerful social networking and applications platform. That platform could, at least in theory, leave Facebook in the dust, or it could go nowhere, given Google’s history with social networking ventures. See this related post about the failure of the Facebook social networking model.

Sept. 12th, 2010 Update:

Google has moved its China operation from the mainland to Hong Kong, which is also under Chinese control. This does not mean that they’ve stopped censoring search results in China, and as far as this author is aware there has not been any public statement from Google to that effect since their move to Hong Kong.

Jan. 12th, 2010 Update:

Years after Google decided to censor their China search results (where they basically acted as an oppressive instrument of the Chinese government) they announced that they’ll un-censor their China search results or, if not possible, leave China. People at their China division have a different idea though. So we’ll wait and see.


Article

Given the growing feeling that Google holds too much power over the future of the Web, without any proof that they can use that power wisely, and with sufficient proof to the contrary [1], it’s easy to see why some of us are growing increasingly worried about Google’s continued drive to embed itself in all aspects of our lives.

In the software industry, economies of scale do not derive as much from production capacity as from the size of the installed user base, because software is made of electrical pulses (or bits) that can be replicated and downloaded by users at a relatively small cost to the producer. This means that the size of the installed user base replaces production capacity in classical economic terms.

So far Google has managed to build a dominant market share in search based mostly on the strength of its technology, not by leveraging an installed user base as Microsoft had done with desktop applications.

However, the situation for Google has changed now that it has a huge installed base on the Web (Google.com as the default search engine for Firefox and Google Chrome), mobile phones (Google Android) and desktop (Google Chrome and Chrome OS.) This rather huge installed base should allow Google to dominate almost every application category on every platform.

While building such an advantage is both natural and permitted by law (or the absence of it), it is unfair to Google’s smaller competitors. The key issue, aside from fairness, is that Google’s continued drive to dominate the search and advertising markets for Web, desktop and mobile platforms clearly threatens innovation by making it harder for more innovative, smaller companies to compete against it. It is akin to Whole Foods (or maybe Walmart, depending on your view of the quality of Google’s innovation) moving into a new neighborhood (or market) and wiping out a whole bunch of smaller, more consumer-friendly operators. It’s inevitable in some sense for the winner to take all, but it’s also very bad for society and our economy to have just one or two big players in any given industry, players which eventually become too big to fail and have to be bailed out as they age and start to fall apart. It’s simply not sane to avoid thinking about what will happen when our global online economy becomes dependent on just a handful of giant corporations (hint: certain disaster.)

Theoretically speaking, the patent system is designed to enable companies of all sizes to carve out new niches for themselves. However, obtaining patents can be a very costly and prolonged process, and small companies often get their inventions copied and co-opted by bigger players like Google, Microsoft, etc. In fact, in the Microsoft-dominated era, very few companies succeeded in suing them for patent infringement. I happen to know of one small software company and their CEO who succeeded in suing and then settling with Microsoft for millions. But that’s a rare exception to a common rule: the one with the deeper pockets always has the advantage in court (they can drag a lawsuit out for years and make it too costly for others to sue them.)

So for small companies competing against Google, it’s not any better or worse than it used to be under the Microsoft monopoly. But for us the users it’s much worse, because what is at stake now is much bigger. It’s no longer about our PCs and LANs; it’s about our global online economy, online businesses and online lives.

Google has accumulated not just a huge installed base, but too many deeply embedded strategic channels and hooks, and ignoring that would lead to Google eventually becoming “way too big to fail.”

Unchecked monopolies, even when “lawful,” create too much dependence on a single vendor, which reduces the number of choices we have as consumers and exposes our economy to the risk of failure in the long run.

Resiliency can only come from ‘millions of individuals and small producers cooperating and competing in a free market’ not from a few giant corporations that have cornered the market (see: The Shift From Top-Down To Bottom-Up Production.)

If the Internet has proved anything, it is that we, the users, can have everything we need without the massive, profit-driven (and often morally indifferent) corporations.

It’s time we took a serious look at the version of Capitalism that we have today, which is more like Corporatism than Capitalism, with giant corporations defining the law in their favor.

If we don’t, giant, morally indifferent corporations will continue to hold power over the rest of us.

As an example of the power Google holds, consider the case of site blocking, where Google has been forcing its own policy on site owners, without the site owners agreeing to the terms of that policy and without their having any way to quickly resolve (or avoid) Google’s site blocking process. [2]

In order for us to get out from under all such ‘lawful’ monopolies, we can either change the Law itself or lessen our dependence on those monopolies whether they’re lawful or not.

It’s probably much easier for us as individuals to start developing and/or supporting p2p alternatives that go around centralized infrastructure and services than to try and fix the law.

To that end, we should consider moving from centralized technologies like Google and Facebook to peer-to-peer technologies like the emerging p2p search engines, Skype (or its open source alternatives, given that it’s now owned by Microsoft), and the emerging p2p social networking apps. Doing so would reduce our dependence on capital-intensive, centralized infrastructure like Google’s and allow us to be not only the consumers but also the producers of the infrastructure (by having our PCs and maybe one day mobile phones collaborate with each other to provide all the infrastructure we need.)

We would also benefit from moving to wireless mesh networks as a replacement for centralized Internet Service Providers (ISPs) like Verizon, which was recently caught colluding with Google to kill ‘network neutrality’ and hand Google even more leverage. Such a transition to a truly decentralized, P2P Internet is not only possible, it has already begun in certain niches.

The Internet should be entirely decentralized (from both the power-structure and technological points of view) and it should be owned and operated by its users, not a few too-big-to-fail corporations.

That can be done by having each PC and mobile device act as a relay for the data in the network (the same way as with BitTorrent and p2p apps like Skype) and not just a consumer of the data.
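As a rough sketch of what ‘every device is a relay’ means, here is a minimal Python node that both consumes messages and forwards them to its peers. The message format and peer list are made up for illustration; this is not any particular protocol.

```python
# Minimal sketch of a peer that relays data it receives to other peers,
# so every participant is both a consumer and part of the infrastructure.
# Hypothetical message format and peer addresses; not a real protocol.
import asyncio
import json

PEERS = [("peer1.example", 9000), ("peer2.example", 9000)]  # assumed peer addresses
SEEN = set()  # message ids we have already relayed (prevents forwarding loops)

def consume(msg):
    print("received:", msg["payload"])   # use the data locally

async def relay(raw):
    for host, port in PEERS:
        try:
            _, writer = await asyncio.open_connection(host, port)
            writer.write(raw)
            await writer.drain()
            writer.close()
        except OSError:
            pass  # peer offline; a real mesh would retry or drop the peer

async def handle(reader, writer):
    raw = await reader.readline()
    msg = json.loads(raw)
    if msg["id"] not in SEEN:
        SEEN.add(msg["id"])
        consume(msg)           # ...and also forward it, BitTorrent-style
        await relay(raw)

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```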

The new version of the Internet protocol, IPv6, being deployed today in many places, will allow P2P networking to take place on a grander scale and in a more pervasive and direct manner. So it’s just a matter of time before Google and other big centralized players become a thing of the past, as long as innovative startups are funded and allowed to reach their full potential (as opposed to being plucked by Google or Facebook et al and co-opted). For that we need some changes in the attitudes of the founders and investors of those startups with respect to the immediate gratification of being sold to a big player vs. the long-term reward of seeing their vision come to life.

1. What leaps to mind as far as Google’s lack of wisdom is their cooperation with the Chinese government in oppressing the already-oppressed (see: Google Chinese censorship.) More recently, Google’s shareholders, on advice from Google’s Board of Directors, have voted against two proposals that would have compelled Google to change its human rights policies (for the better.)

2. Even more recently, Google, Mozilla, Apple, and others have implemented a feature in their respective browsers that detects and filters out malicious sites based on what ‘Google crawlers’ decide and what is reported to StopBadWare.org. The first part of the problem is that in both cases, whether malicious code was detected by Google crawlers or reported by some 3rd party to StopBadWare.org, Google is the main authority in deciding which site is malicious, for all browsers from Google, Mozilla and Apple (and possibly others.) This means that web site owners whose sites had been injected with malicious code by hackers are at the mercy of Google’s review process, which may not resolve (with the removal of the site from the list of malicious sites) for many hours or even days after the site owner has removed the malicious code. This holds the site owners hostage to Google. The second part of the problem is that the site owners do not have a choice as far as what browser their users use, and, therefore, Google’s site blocking policy is being forced on them, without their agreement. The problem in its two parts boils down to Google establishing the ‘law’ (site blocking in this case) as well as enforcing it. Google’s defense has been that StopBadWare.org is an independent authority, but they’re clearly not (i.e. Google is being misleading) since the site review process goes through Google itself and there is no way for legitimate site owners to manage the process.

Related

  1. Wikipedia 3.0: The End of Google?

P2P Related

  1. Towards a World-Wide Mesh
  2. P2P DNS For Firefox
  3. The People’s Google
  4. Using Google To Send Smoke Signals

Open Source Your Mind

In Uncategorized on July 9, 2006 at 3:03 pm

Any idea that you come up with that can bring a lot of power to someone and is realistic enough to attempt will inevitably get built by someone.

It doesn’t matter that you thought of it first. So it’s better to put your ideas out there in the open, be they good ideas like Wikipedia 3.0, P2P 3.0 (The People’s Google) and Google GoodSense, or “potentially” concern-causing ones like the Tagging People in the Real World and e-Society ideas.

In today’s world, if anyone can think of a powerful idea that is realistic enough to attempt then chances are someone is already working on it or someone will be working on it within months.

Therefore, it is wise to get both good and potentially concern-causing ideas out there and let people be aware of them so that the good ones like the vision for Wikipedia 3.0 and the debate about the ‘Unwisdom of Crowds‘ can be of benefit to all and so that potentially concern-causing ones like the Tagging People in the Real World and the e-Society ideas can be debated in the open.

It is in a way similar to one aspect of the patent system. If someone comes up with the cure to cancer or with an important new technology then we, as a society, would want them to describe how it’s made or how it works so we can be sure we have access to it. However, given the availability of blogs and the connectivity we have today, wise innovators, including those in the open source movement, are putting their ideas out there in the open so that society as a whole may learn about them, debate them, and decide whether to embrace them, fight them or do something in between (moderate their effect.)

For some, it can be a lot of fun, especially the unpredictability element.

So open source your blue sky vision and let the world hear about it.

And for the potentially concern-causing ideas, it’s better to bring them out in the open than to work on them (or risk others working on them) in the dark.

In other words, open source your mind.

Tags:
Trends, wisdom of crowds, tagging, Web 2.0, digg, censorship, democracy, P2P, P2P 2.0, e-society, unwisdom of crowds, Web 3.0, ai, P2P AI, Wikipedia 3.0, Wikipedia, Semantic Web, world hunger, Google AdSense, Open Source, open source your mind

Self-Aware e-Society

In Uncategorized on July 9, 2006 at 9:20 am

(this post was refreshed on Jul 16, ‘08.)

A Self-Aware Society

In this post we discuss the idea of a pattern-recognizing neural network that sits on top of a P2P network and learns to recognize and predict communication, social, cultural, political and transactional patterns [generated by the users] across the system. The idea is to enable the detection of the emergence of potentially “negative” patterns in the system (such as speculative market bubbles or the emergence of certain group behavior) thus allowing us to control or at least predict social, political and business trends.

The idea is to use the P2P clients in such an e-society as a way to pull into the neural network the social, political, and business (transactional) trends produced by the users across the network. In this scenario, the neural network, which would be separate from the actual P2P layer itself, would be trained to recognize certain patterns in the real-time data gathered from the P2P clients and alert us when certain patterns are detected.

Obviously, there are limits on the types of patterns that can be isolated and learned as well as limits on the accuracy of pattern recognition and trend prediction.

However, the potential for such a pattern-recognition layer would be immense (and scary.)
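To make the architecture a bit more concrete, here is a toy sketch of that pattern-recognition layer: feature vectors aggregated from the P2P clients feed a single trained “neuron” that raises an alert when bubble-like activity appears. The feature names, data and threshold are invented for illustration; a real system would use a full neural network, not this single-unit stand-in.

```python
# Toy sketch of the pattern-recognition layer: a tiny logistic "neuron" trained
# on features aggregated from P2P clients. All numbers are invented.
import numpy as np

# Each row: [trading-volume growth, price growth, new-user growth, message rate]
X = np.array([[0.1, 0.1, 0.0, 0.2],   # normal activity
              [0.2, 0.1, 0.1, 0.1],   # normal activity
              [2.5, 3.0, 1.8, 2.2],   # bubble-like burst
              [3.1, 2.7, 2.0, 2.9]])  # bubble-like burst
y = np.array([0, 0, 1, 1])            # 1 = "negative pattern" (e.g. speculative bubble)

w = np.zeros(X.shape[1]); b = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                  # plain gradient descent on logistic loss
    p = sigmoid(X @ w + b)
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

def alert(sample):
    """Score a live feature vector pulled from the P2P clients."""
    risk = sigmoid(sample @ w + b)
    if risk > 0.8:
        print("ALERT: emergent pattern detected (risk=%.2f)" % risk)

alert(np.array([2.8, 2.9, 1.5, 2.4]))  # bubble-like sample; should trigger the alert
```

The point of the sketch is only the division of labor: the P2P layer gathers the raw trends, and a separate learned layer watches them and alerts the (human) leaders.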

Self-Aware e-Society vs Prediction Markets

Prediction markets are mostly based on the wisdom of crowds. They are simulations in which people make individual judgments and their judgments are averaged to produce the prediction (or the crowd judgment). There are also types of prediction markets where people buy and sell, and the system makes the prediction (or crowd judgment) based on the buy-sell decisions, which represent judgments.

However, I am not aware of any prediction market that can recognize and predict emergence of patterns in people’s implied or explicit judgments as they relate to a given company stock, product, idea or person. These patterns are extracted from the users’ communication, social, cultural, game-playing and transactional data (including inferred data) which are captured from virtual stock markets, virtual auctions, chat rooms (where a hierarchy can exist: e.g. founder of room, operators, favored participants, participants, and unwanted participants), social applications and entertainment applications (including multi-player online games.)

People supply complex individual implicit or explicit judgments in true-to-life simulations that generate patterns of judgments across society or across groups within the society which can then be taught and recognized (for those patterns that relate to a phenomenon like speculative market bubbles, emergence of cults, etc) by the neural network monitoring this live e-society.

Governments and politicians will be able to use such a live (made of people), self-aware e-society to simulate the outcome of critical political decisions on society before they make those decisions in their own, real society.

This relates to governance in another way: the e-society, by being aware of negative patterns emerging within it, can flag and alert the leaders of the e-society so that they may try to steer society away from trouble.

I believe it is the next level in prediction markets. The key difference with respect to prediction markets is that a self-aware e-society will be able to capture, recognize and predict the emergence of behavioral patterns that happen within it, as opposed to simply predicting single-valued outcomes and ranges (without the ability to recognize and predict the patterns that could lead to those outcomes.) In other words, a self-aware e-society can predict the outcome of prediction markets running within it before the prediction markets can make that prediction. That means that (given the ability to pre-predict and thus potentially avoid bad outcomes) the prediction markets can be real markets and not just simulations. So it would seem that the e-society application described here could run on top of society itself (i.e. no need for simulation.)

In other words, a self-aware e-society would act as a predictive governance tool for society itself.

Conclusion

I realize that it does sound very futuristic, but the idea is ‘technically’ compatible with the democratic governance ideals I had proposed for Web 2.0. In other words, in the ideal usage scenario, it should not supplant them. It should help society by monitoring it for dangerous trends so that the problems that would normally happen could be defused.

Think it’s sci-fi? It can be put together with existing technologies and expertise.

 

The implications of this idea extend to areas such as national security, economic security, cultural phenomena, political science, mass psychology and sociology.

But is it good or bad?

Any idea that can deliver a lot of power to someone and is realistic enough to be attempted will inevitably be developed by someone somewhere. So it’s better to put these ideas (be they good, like the Wikipedia 3.0/Web 3.0 idea, or potentially concern-causing, like the Tagging idea or this idea) out there in the open and let people be aware of them and debate them.

Response to Readers’ Comments

Question:
Ian Delaney wrote: I wonder if machines are up to the job of identifying negative cults. After all, human judges seem to make a lot of very bad mistakes.

Response:
The leaders of society will still be the ones who would make the judgment. The machine is a predictive tool to help society manage emergent patterns. It is still the people who make the judgments, through their democratically elected leaders. The machine simply provides a cognitive layer below that.

Related

  1. Open Source Your Mind
  2. Tagging People in the Real World

Beats

  1. Soulenoid (Scream at the right time)

 

 

Tags:

Trends, wisdom of crowds, Startup, mass psychology, cult psychology, Web 2.0, democracy, P2P, P2P 2.0, social networking, Web 2.5, governance, Internet governance, pattern recognition, non-linear feedback loop, neural network, prediction markets, e-society, national security, economy, political science, cultural phenomenon, AOL, NSA, wiretapping, civil liberties

Unwisdom of Crowds

In Uncategorized on July 7, 2006 at 8:15 am

Author: Marc Fawzi

Twitter: http://twitter.com/#!/marcfawzi

License: Attribution-NonCommercial-ShareAlike 3.0

~~

A Crowd Has No Wisdom

Before we make this argument, let’s define the types of crowds.

{The designations of ‘condensed’ and ‘dispersed’ given below for crowds are relative to the ability of the members of the crowd to communicate with each other and affect each other’s judgment.

The word “crowd” is used here to mean a large group of people, not 5 or 10 people but thousands or millions of people.}

A dispersed crowd (without a formal hierarchy) will produce averaged judgment. For example, asking each of 200 people (not at the same time or place) how many jelly beans are in a jar would result in an averaged judgment, which would eliminate values that are too high or too low, resulting in an estimate of the number of jelly beans in the jar (which is a measurable value) that is close to the actual value. In this case the crowd is nothing more than a decent statistical calculator. It has not exhibited any more wisdom than the tool it is being used as.
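To see the ‘statistical calculator’ point in code, here is a tiny sketch that averages 200 independent guesses after trimming the extremes. The guesses are simulated, and the assumption that they scatter roughly evenly around the true count is exactly what makes the trick work.

```python
# Sketch of the "decent statistical calculator": average many independent guesses,
# dropping values that are too high or too low. Numbers are invented.
import random
from statistics import mean

random.seed(1)
actual = 742                                                # jelly beans in the jar
guesses = [random.gauss(actual, 150) for _ in range(200)]   # 200 independent guesses

def trimmed_mean(values, trim=0.1):
    """Drop the top and bottom `trim` fraction, then average the rest."""
    s = sorted(values)
    k = int(len(s) * trim)
    return mean(s[k:len(s) - k])

print("crowd estimate:", round(trimmed_mean(guesses)))  # lands close to 742
print("actual:", actual)
```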

A condensed crowd (without a formal hierarchy) may produce averaged or lowest-common-denominator judgment, depending on whether or not its judgment is rationally or psychologically driven. In case the judgment is about a measurable value it would most likely be rationally driven, and, thus, be an averaged judgment. In case the judgment is about a quality it would most likely be psychologically driven, and thus, be a lowest-common-denominator judgment. In the rational case, the assumption is that, even though the crowd’s members can communicate with and affect each other’s judgment, if each member is rational enough and the judgment to be made concerns a measurable value then the crowd will likely produce an averaged judgment (i.e. the average of independent judgments.) If, however, the crowd members can communicate and affect each other’s judgment and the judgment to be made is qualitative not quantitative, then the crowd’s judgment should tend toward the lowest common denominator.

A typical crowd is a mix of both the dispersed and condensed crowds. Thus, its range of judgment with respect to both measurable value and quality includes both averaged as well as lowest-common-denominator judgments.

The problem with averaged judgment when it’s applied to quality (rather than measurable value), which can happen in a typical crowd, is that you end up with a judgment of average quality, not the best judgment.

The problem with lowest-common-denominator judgment when it’s applied to quality is that it uses the primitive part of our psychology. In other words, expect exactly the opposite of wisdom.

So when it comes to quality, a typical crowd is going to be either a judge of average quality or an unwise judge. And nothing else.

Where does that leave the ‘Wisdom of Crowds’ movement? (in the garbage bin of history in my candid opinion.)

Toward a Democratic Society

A hierarchy that doesn’t listen to the crowd (or that forces and manipulates the crowd to listen to it) is a dictatorship (e.g. North Korea, Iran, the 3rd Reich, etc.)

However, a mixed ‘hierarchical + crowd’ system, which ideally allows the crowd to adjust the judgment (of the system), is a democracy.

Therefore, Web 2.0’s [un]wisdom-of-crowds model needs to be fixed by adding the concept of a non-arbitrary hierarchy that is by the crowd (or people) and for the crowd (or people.)

Below is one example, using ‘digg’ as the Web 2.0 application, that shows a prototypical transformation from Web 2.0 to Web 2.5 (or from “hunter gatherer” to “democratic society.”)

Electing Leaders in a Democracy: Building the System

In an application like digg (or the “digg killer” to be exact) writers, content producers, social figures, business figures, and others, who are higher in the food chain than the consumer, and who are collectively referred to herein as ‘taste makers’, should be allowed to start their own channel (or page) where they list links they think are cool. If enough people ‘bookmark’ a given page then that means that the taste-maker in question is worthy of being positioned into the system’s hierarchy at a higher level than that of the consumer. The taste-makers can then rally their followers (those who use them as taste-makers) to digg the links the taste maker has chosen to put on his/her page.

This is similar to parliamentary democracy where members of the parliament have to get enough votes on a given issue from their district in order to pass it into law.

The key here is that the ‘trusted’ taste-makers get to decide which links to promote for votes from their followers.

At the same time, people in the crowd should be able to vote the taste-makers in or out of the system’s hierarchical structure by bookmarking or un-bookmarking their page.

Anyone who has followers can become a taste-maker, but they would have to replace an existing taste-maker, as the system has a finite hierarchy with a finite number of taste-maker positions (e.g. in the thousands.) And once someone is elected as a taste-maker they would stay in the role for a certain period before they can be voted in or out of the position by their followers (assuming another contender has nominated himself/herself for the position.)

This is a very simple ‘hierarchical + crowd’ system that implements a very simple form of leader-follower democratic process.
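To make the mechanics concrete, here is a rough Python sketch of the seat/bookmark mechanism described above. The seat count, promotion threshold and names are made up, and it leaves out the fixed terms and head-to-head elections; it only shows how bookmarks move taste-makers in and out of a finite hierarchy.

```python
# Minimal sketch of the 'hierarchical + crowd' mechanism: a fixed number of
# taste-maker seats, filled and vacated by follower bookmarks.
# Thresholds and names are invented for illustration.
from dataclasses import dataclass, field

MAX_SEATS = 3          # the hierarchy has a finite number of taste-maker positions
PROMOTE_AT = 100       # bookmarks needed to be eligible for a seat

@dataclass
class Channel:
    owner: str
    bookmarks: set = field(default_factory=set)   # followers who bookmarked the page
    links: list = field(default_factory=list)     # links the taste-maker promotes

class System:
    def __init__(self):
        self.channels = {}        # owner -> Channel
        self.seats = []           # current taste-makers, by owner name

    def bookmark(self, user, owner):
        ch = self.channels.setdefault(owner, Channel(owner))
        ch.bookmarks.add(user)
        self._reseat()

    def unbookmark(self, user, owner):
        self.channels[owner].bookmarks.discard(user)
        self._reseat()

    def _reseat(self):
        # Rank channels by follower count; the top MAX_SEATS that clear the
        # threshold hold the taste-maker positions.
        ranked = sorted(self.channels.values(),
                        key=lambda c: len(c.bookmarks), reverse=True)
        self.seats = [c.owner for c in ranked
                      if len(c.bookmarks) >= PROMOTE_AT][:MAX_SEATS]

system = System()
for i in range(120):
    system.bookmark("user%d" % i, "alice")   # alice gains enough followers for a seat
print(system.seats)                          # -> ['alice']
```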

The perils of letting the crowd decide without giving them a democratic structure and process is to let lowest-common-denominator and averaged judgments become the norm.

Leaders and Crowds need to work together within a democratic structure and process to assure the best judgment possible.

BTW, this is not much different from the process whereby the crowd selects its taste-makers in the real world (e.g. radio DJs, wise men, etc.), except this provides a structure to formalize the process, which would be too costly and time-consuming in the real world. So maybe this would also apply to how society elects its taste-makers (outside of social bookmarking sites.)

The reason this system would kill digg is because it will have an aggregate quality of judgment so much better than digg’s.

Related

 

  1. Web 2.0: Back to The Hunter Gatherer Society
  2. The Future of Governance (outdated by next link)
  3. Toward a Natural World Order

 

Tags:

Trends, wisdom of crowds, tagging, Startup, mass psychology, Google, cult psychology, Web 2.0, digg, censorship, democracy, P2P, P2P 2.0, social bookmarking, social networking, Web 2.5

P2P AI Engines To Challenge Google in Web 3.0

In Uncategorized on July 6, 2006 at 9:57 am

This is a note (in case you missed it) about how, in Web 3.0 (aka the Semantic Web), P2P AI Engines running on users’ machines and working with standardized domain-specific ontologies will challenge Google’s dominance.

P2P AI Engines will challenge Google as well as any future AI-enabled version of Google.
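As a rough illustration of what ‘working with standardized domain-specific ontologies’ could mean in practice, here is a toy sketch of a local inference step over a handful of made-up triples. A real engine would use a standard stack (e.g. OWL-DL plus a proper inference engine); this only shows the kind of answer a local reasoner can derive without a central search index.

```python
# Toy sketch of a P2P AI engine's local reasoning step: facts expressed as
# triples plus a simple forward-chaining rule. The ontology and facts are
# invented for illustration.
facts = {
    ("aspirin", "is_a", "nsaid"),
    ("nsaid", "subclass_of", "anti_inflammatory_drug"),
    ("anti_inflammatory_drug", "subclass_of", "drug"),
}

def infer(facts):
    """Propagate is_a through subclass_of until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, r1, y) in list(derived):
            for (a, r2, b) in list(derived):
                if r1 == "is_a" and r2 == "subclass_of" and y == a:
                    new = (x, "is_a", b)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

closure = infer(facts)
print(("aspirin", "is_a", "drug") in closure)   # True: inferred, never stated directly
```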

Read more

Related

  1. The People’s Google
  2. Wikipedia 3.0: The End of Google?

Tags:

Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, inference engine, AI, ontology, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, AI Engine, Danny Hillis, William Gibson, Thinking Machines, cellular automata, OWL-DL, The Matrix, AI Matrix, Global Brain

Digg Killer

In Uncategorized on July 6, 2006 at 4:29 am

This article has been merged with:

Unwisdom of Crowds

Tags:
Trends, wisdom of crowds, tagging, Startup, mass psychology, Google, cult psychology, Web 2.0, digg, censorship, democracy

The Google Crowd (has hierarchy)

In Uncategorized on July 5, 2006 at 1:06 am

For the context of this article, please see:

Unwisdom of Crowds

Last updated: 12/07/2008

Please see Ian Delaney’s well-written set of counter arguments at TwoPointTouch and the discussion that emerged under his comments section.

My reply to Ian’s argument re: Google’s PageRank being an implementation of the ‘wisdom of crowds’ model is that Google does not let the crowd judge the worthiness of a given link. It lets the writers, bloggers like Ian, myself, e-zines, news publishers, organizations, etc., i.e. the tastemakers in society (or the producers), who are linked to by many others, judge what is good and what is not. This is distinctly different from letting those who simply consume make the judgment. In the food chain, the producer or tastemaker comes before the consumer. That represents a non-arbitrary hierarchy on the level of the society that does not exist within a crowd. Thus, on the level of the society, the Google model does not rely on the wisdom of the ‘crowd’ but the wisdom of tastemakers and producers.

One important thing to note about the preceding argument is that it’s not just any arbitrary producers that make up the ‘tastemakers’ layer (or crowd) within the hierarchy of society. The producers whose links to sites representing a given field (e.g. arts, music, science, etc.) get valued higher by Google are those producers who have many people linking to them (i.e. other producers), which, if you follow the chain of links, leads us eventually to the first producers that appeared on the Web to write about that field, who had the time and leverage to build credibility among other tastemakers. So it’s the early adopters (for each given field), who tend to be the real tastemakers and leaders, who are the highest-value producers, that determine who the high-value producers are. Having said that, high-value producers could appear out of nowhere. Such newcomers would get recognized as being high-value producers by receiving many incoming links from their peers.

Obviously, Google’s algorithm is more complex and robust than described above, but the purpose here is to show how Google’s PageRank is based on the averaged or lowest-common-denominator judgment of the tastemakers layer of society (which itself is a crowd) rather than the averaged or lowest-common-denominator judgment of an arbitrary crowd.
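For reference, here is the textbook power-iteration form of PageRank, which is the published idea the argument above leans on. This is not Google’s production algorithm, and the toy link graph below is invented; the point is only that a page’s score is fed by the scores of the pages linking to it, so producers linked to by other high-value producers end up highest.

```python
# Textbook power-iteration PageRank on an invented toy link graph.
import numpy as np

links = {                 # page -> pages it links to
    "early_adopter": ["newcomer"],
    "blogger_a": ["early_adopter", "newcomer"],
    "blogger_b": ["early_adopter"],
    "newcomer": ["early_adopter"],
}
pages = list(links)
n = len(pages)
idx = {p: i for i, p in enumerate(pages)}

# Column-stochastic link matrix: M[j, i] = 1/outdegree(i) if page i links to page j.
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[idx[dst], idx[src]] = 1.0 / len(outs)

d = 0.85                                  # damping factor
rank = np.full(n, 1.0 / n)
for _ in range(50):                       # power iteration
    rank = (1 - d) / n + d * (M @ rank)

for p, r in sorted(zip(pages, rank), key=lambda x: -x[1]):
    print("%-14s %.3f" % (p, r))          # early_adopter ends up ranked highest
```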

The wisdom of a crowd (or lack thereof), in the case of the tastemakers layer of society, is going to result in lowest-common-denominator judgment only if their individual judgments are lumped together (as digg does with the judgment of its users.)

In a mixed ‘hierarchical + crowd’ system the individual judgments of the taste-makers can be seen by members of the crowd. The lumping together of individual judgments is what creates a crowd.

Thus, in a mixed ‘hierarchical + crowd’ system the taste makers are bound to exist as both unwise crowds as well as wise individuals.

A crowd can never be as wise as its wisest member or as foolish as its most foolish member.

Related

  1. Unwisdom of Crowds
  2. For Great Justice, Take Off Every Digg
  3. Digg This! 55,500 hits in ~4 Days
  4. Web 2.0: Back to the Hunter Gatherer Society

Tags:
Trends, wisdom of crowds, tagging, Startup, mass psychology, Google, cult psychology, Web 2.0, digg, censorship

The Geek VC Fund Project: 7/02 Update

In Uncategorized on July 2, 2006 at 9:06 am

This post is an update to the original post about the Geek-Run, Geek-Funded Venture Capital Fund.

  1. The idea has gotten a fantastic reception.
  2. We’ve built a core team of experienced individuals that is working on the concept.
  3. We plan on gathering input from potential investors and entrepreneurs in the near future.
  4. We plan on announcing the location of our virtual collaboration space in the near future.
  5. If you’ve just joined us you may wish to add your feedback (see Comments)

More to come …

As always, feel free to contact me via email.

Tags:

Web 2.0, venture capital, VC, entrepreneur, funding, private equity, geek, seed funding, early stage, Startup

Digg This! 55,500 hits in ~4 Days

In Uncategorized on July 2, 2006 at 5:22 am

/*

(this post was last updated at 10:30am EST, July 3, ‘06, GMT +5)

This post is a follow up to the previous post For Great Justice, Take Off Every Digg

According to Alexa.com, the total penetration of the Wikipedia 3.0 article was ~2 million readers (who must have read it on other websites that copied the article)

*/

EDIT: I looked at the graph and did the math again, and as far as I can tell it’s “55,500 in ~4 days,” not “55,000 in 5 days.” So that’s 13,875 page views per day.

Stats (approx.) for the “Wikipedia 3.0: The End of Google?” and “For Great Justice, Take Off Every Digg” articles:

These are to the best of my memory from each of the first ~4 days as verified by the graph.

33,000 page views in day 1 (the first wave)

* day 1 is almost one and a half columns on the graph not one because I posted it at ~5:00am and the day (in WordPress time zone) ends at 8pm, so the first column is only ~ 15 hours.

9,500 page views in day 2

5,000 page views in day 3

8,000 page views in day 4 (the second wave)

Total: 55,500 in ~4 days which is 13,875 page views per day (not server hits) for ~4 days. Now on the 7th day the traffic is expected to be ~1000 page views, unless I get another small spike. That’s a pretty good double-dipping long tail. If you’ve done better with digg let me know how you did it! :)
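For anyone checking the math, the per-day figure is just the four daily counts above summed and averaged:

```python
# The "13,875 page views per day" figure is the four daily counts averaged.
daily_views = [33_000, 9_500, 5_000, 8_000]   # days 1-4 from the stats above
total = sum(daily_views)
print(total)                      # 55500
print(total / len(daily_views))   # 13875.0
```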

Experiment

This post is a follow-up to my previous article on digg, where I explained how I had experimented and succeeded in generating 45,000 visits to an article I wrote in the first 3 days of its release (40,000 of which came directly from digg.)

I had posted an article on digg about a bold but well-thought out vision of the future, involving Google and Wikipedia, with the sensational title of “Wikipedia 3.0: The End of Google?” (which may turn out after all to be a realistic proposition.)

Since my previous article on digg I’ve found out that digg did not ban my IP address. They had deleted my account due to multiple submissions. So I was able to get back with a new user account and try another experiment: I submitted “AI Matrix vs Google” and “Web 3.0 vs Google” as two separate links for one article (which has since been given the final title of “Web 3.0: Basic Concepts.”)

Results

Neither ‘sensational’ title worked.

Analysis

I tried to rationalize what happened …

I figured that the crowd wanted a showdown between two major cults (e.g. the Google fans and the Wikipedia fans) and not between Google and some hypothetical entity (e.g. AI Matrix or Web 3.0).

But then I thought about how Valleywag was able to cleverly piggyback on my “Wikipedia 3.0: The End of Google?” article (which had generated all the hype) with an article having the dual title of “Five Reasons Google Will Invent Real AI” on digg and “Five Reasons No One Will Replace Google” on Valleywag.

They used AI in the title and I did the same in the new experiment, so we should both get lots of diggs. They got about 1300 diggs. I got about 3. Why didn’t it work in my case?

The answer is that the crowd is not a logical animal. It’s a psychological animal. It does not make mental connections as we do as individuals (because a crowd is a randomized population that is made up of different people at different times) so it can’t react logically.

Analyzing it from the psychological frame, I concluded that it must have been the Wikipedia fans who “dugg” my original article. The Google fans did “digg” it but not in the same large percentage as the Wikipedia fans.

Valleywag gave the Google fans the relief they needed after my article with its own article in defense of Google. However, when I went at it again with “Matrix AI vs Google” and “Web 3.0 vs Google,” the error I made was in not knowing that the part of the crowd that “dugg” my original article were the Wikipedia fans, not the Google haters. In fact, Google haters are not very well represented on digg. In other words, I found out that “XYZ vs Google” will not work on digg unless XYZ has a large base of fans on digg.

Escape Velocity

The critical threshold in the digg traffic generation process is to get enough diggs quickly enough, after submitting the post, to get the post on digg’s popular page. Once the post is on digg’s popular page, both sides (those who like what your post is about and those who will hate you and want to kill you for writing it) will be affected by the psychological manipulation you design (aka the ‘wave.’) However, the majority of those who will “digg” it will be from the group that likes it. A lesser number of people will “digg” it from the group that hates it.

Double Dipping

I did have a strong second wave when I went out and explained how ridiculous the whole digg process is.

This is how the second wave was created:

I got lots of “diggs” from Wikipedia fans and traffic from both Google and Wikipedia fans for the original article.

Then I wrote a follow up on why “digg sucks” but only got 100 “diggs” for it (because all the digg fans on digg kept ‘burying’ it!) so I did not get much traffic to it from digg fans or digg haters (not that many of the latter on digg.)

The biggest traffic to it came from the bloggers and others who came to see what all the fuss was about as far as the original article. I had linked to the follow-up article (on why I thought digg sucked) from the original article (i.e. like chaining magnets), so when people came to see what the fuss was all about with respect to the original article they were also told to check out the “digg sucks” article for context.

That worked! The original and second waves, which both had a long tail (see below) generated a total of 55,500 hits in ~4 days. That’s 13,875 page views a day for the first ~4 days.

Long Tail vs Sting

I know that some very observant bloggers have said that digg can only produce a sharp, short lived pulse of traffic (or a sting), as opposed to a long tail or a double-dipping long tail, as in my case, but those observations are for posts that are not themselves memes. When you have a meme you get the long tail (or an exponential decay) and when you chain memes as I did (which I guess I could have done faster as the second wave would have been much bigger) then you get a double-dipping long tail as I’m having now.

Today (which is 7 days after the original experiment) the traffic is over 800 hits so far, still on the strength of the original wave and the second wave (note that the flat line I had before the spike represents levels of traffic between ~100 to ~800, so don’t be fooled by the flatness; it’s relative to the scale of the graph.)

In other words, traffic is still going strong from the strength of the long-tail waves generated from the original post and the follow up one.
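If you want to picture the “double-dipping long tail,” one way is to model it as two exponentially decaying waves added together, one per spike. The peak heights and decay rates below are invented and only roughly match the numbers above; the shape is the point.

```python
# Toy model of the "double-dipping long tail": two exponentially decaying waves
# (one per spike) summed day by day. Heights and half-lives are invented.
def wave(peak, start_day, half_life, day):
    """Traffic contributed by a wave that peaked at `peak` on `start_day`."""
    if day < start_day:
        return 0.0
    return peak * 0.5 ** ((day - start_day) / half_life)

for day in range(1, 8):
    traffic = wave(33_000, 1, 0.6, day) + wave(8_000, 4, 1.0, day)
    print("day %d: ~%d page views" % (day, traffic))
```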

[Graph: the double-dipping long-tail traffic curve]

Links

  1. Wikipedia 3.0: The End of Google?
  2. For Great Justice, Take Off Every Digg
  3. Unwisdom of Crowds
  4. Self-Aware e-Society

Tags:
Semantic Web, Web standards, Trends, wisdom of crowds, tagging, Startup, mass psychology, Google, cult psychology, inference, inference engine, AI, ontology, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, digg, censorship