
Archive for the ‘Uncategorized’ Category

Of Frameworks And Religion

In Uncategorized on August 7, 2014 at 12:42 pm

Frameworks and religion share these aspects:

  1. Religion/Frameworks are both illusory attempts to find simplicity and idealism in an inherently complex and imperfect environment.
  2. A rational examination of the origins of, and reasons for, religion/frameworks, as well as their benefits and disadvantages, is unlikely to change the mind of anyone who is afraid to examine these concepts objectively.
  3. Even some bright people may feel too frightened to face the challenges they have without the guidance of a framework/religion. Their upbringing has instilled in them the belief that it is safer not to subject the ideas that happen to be en vogue to close scrutiny. Furthermore, becoming an agnostic or a disbeliever can cut one off from the comfort and companionship of co-believers. This potentially damaging consequence of doubting a popular belief system is a strong deterrent to questioning deeply flawed concepts.
  4. People/Developers tend to associate in communities of other like-minded people. Believers typically restrict their social circle to other believers; they surround themselves with mirror images of themselves. So the believer in a religion/framework asks, “How can they not believe as I believe?” The believing community/cult usually provides a convenient answer to that question: the non-believers are ignorant, they don’t get it, and hanging around them might lead you astray. As a result, the believer in a religion/framework becomes paranoid and afraid of the non-believers, failing to understand that non-believers do not need to believe in anything; they rely on reason, logic and factual evidence. When it comes to their choice of association, believers see non-believers as undesirable. Thus, belief in a religion/framework maintains itself through self-affirmation, insulation and exclusion of others who don’t hold the same views/beliefs.
  5. Frameworks/religions divide us.

Having stated that, frameworks/religions can also help unify us. But unlike religion, frameworks, especially UI frameworks, tend to come and go at a relatively rapid pace, so developers are learning to look for deep principles beneath the surface, where the rate of change is much more bearable.

The meta argument is that it’s really hard to reduce complex ideas like religion and frameworks to either good or bad. So we have to be pragmatic in how we approach technology, focusing on the deeper principles at play rather than the particular framework.

 

Why Bitcoin is Volatile aka I Told You So.

In Uncategorized on December 1, 2013 at 4:52 pm

Bitcoin is based on the idea that money gets its intrinsic value from its scarcity, which is somewhat of a misconception if that’s the only thing you take into consideration. Ultimately, money gets its value from the flow of goods and services that use it (that is the “flow network” view of the value of currency.)

Selling people on the misconceived notion that Bitcoin has real value because of its scarcity seems to have worked so far, but when people wake up and ask what they can buy with their Bitcoins and find the answer to be very limited they’ll likely lose interest in holding on to those Bitcoins. It doesn’t matter how scarce those Bitcoins are: if they’re not widely accepted in trade, after the speculative bubble bursts, they won’t have much value.

The built-in scarcity makes it so that speculative action can lead to huge swings in currency value, i.e. massive instability over time. If supply always caught up to demand then speculation would be a rational process more or less based on real-world transaction volume as opposed to swinging between “OMG I gotta get some Bitcoins while there are some available at a still-favorable price” and “OMG Bitcoin adoption is slowing down and I gotta get rid of mine before I incur huge losses.” It’s very finicky.

In order to be less exposed to speculative bubbles, the intrinsic value of a currency must be decoupled from its supply/demand (in other words, supply must catch up to demand, and oversupply only happens if people stop using the currency, i.e. when you have a drop in volume of real world transactions) and the value must be based on the volume of real-world transactions.
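
To make the argument concrete, here is a toy sketch in JavaScript (all numbers are made up for illustration) contrasting a price that sits on a fixed supply with one where supply is allowed to track real transaction volume, the “flow network” view argued for above:

```js
// Toy model (illustrative only): how price responds to a demand shock when
// supply is fixed (Bitcoin-like scarcity) versus when supply expands with
// real usage, per the "flow network" view of currency value.

function price(demand, supply) {
  // Simplest possible market-clearing rule: price tracks demand over supply.
  return demand / supply;
}

const shocks = [100, 150, 400, 900, 300, 120]; // hypothetical demand over time

let fixedSupply = 100;   // scarcity baked in: supply never moves
let elasticSupply = 100; // supply allowed to follow transaction volume

for (const demand of shocks) {
  // In the elastic case, issuance follows volume with a lag, so speculative
  // spikes are absorbed instead of amplified.
  elasticSupply += 0.5 * (demand - elasticSupply);

  console.log(
    `demand=${demand}  fixed-supply price=${price(demand, fixedSupply).toFixed(2)}` +
    `  elastic-supply price=${price(demand, elasticSupply).toFixed(2)}`
  );
}
```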

We invented Fire. Next, we have to control it.

Update:

China bans its banks from transacting in Bitcoin, causing an initial 20% drop in the price of Bitcoin. Here is an interesting passage from the Bloomberg article that reflects many of the opinions I expressed above, just a few days before the events in China unfolded.

http://www.bloomberg.com/news/2013-12-05/china-s-pboc-bans-financial-companies-from-bitcoin-transactions.html

The People’s Bank of China said financial institutions and payment companies can’t give pricing in Bitcoin, buy and sell the virtual currency or insure Bitcoin-linked products, according to a statement on the central bank’s website.

PBOC, China Banking Regulatory Commission and other regulators have held discussions about drafting rules for trading platforms that facilitate the buying and selling of the virtual money, two people with direct knowledge of the matter said. They were not authorized to speak because the information is not public.

“We’re happy to see the government start regulating the Bitcoin exchanges,” Chief Executive Officer Bobby Lee of BTC China, the largest Bitcoin exchange in the country, said in a phone interview before the PBOC announcement. Regulations would be for “the good of the consumer,” he said. BTC is seeking recognition of the currency so it can be used to buy goods and services instead of being used for speculation, he said.

Update:

This post got plenty of traffic thanks to Hacker News, where someone pointed out that, a few years back, Satoshi Nakamoto, the elusive inventor of Bitcoin, had mentioned my work on P2P Social Currency as something that could ride atop the Bitcoin protocol. I have my doubts.

http://archive.is/5CbYM

https://www.bitcoin.com/satoshi-archive/emails/p2p-research/3/

On the Importance of Pre-Attentive Cues in Data Visualization

In Uncategorized on April 15, 2013 at 2:26 pm

The way data enters the mind, whether in a completely idle form or a highly energetic, animated form, influences how we relate to it. The initial moment of meeting between an object and an observer is the pre-attentive phase. The animal part of our brain looks for visual cues from the object to determine the initial visceral response to the stimulus.

One of the most crucial moments in our relating to an object is the moment before our rational mind is engaged. This is just as true in visualizing data as it is in our interaction with everything around us. However, for reasons to do with the limitations of the print medium, this crucial pre-attentive phase was, until recently, largely ignored in the science and practice of data visualization. The neglect of pre-attentive cues actually continued into the internet age, mainly due to the lack of refined tools for presenting data, but also because of the emphasis on the purely rational and practical in our culture, which is finally beginning to make room for the cognitive science of relating.

It’s not just how something works that determines our relationship to it but also how it enters our mind upon first meeting it: is it a rigid and fixed form, basically mechanical and non-fluid, or an interaction that carries as much pre-attentive information as when meeting someone for the first time?

In the print scenario of data visualization, we experience contact with a rigid form that we then have to analyze and understand from a static starting point. On the web, by contrast, the data can engage our mind in a pre-attentive dance with possibility, and the more graceful and well choreographed the dance is, the more animated we feel about the data. That is the true meaning of relating. We can think of data visualization as poetry in motion, with each word or piece of data entering our mind in harmony or artful juxtaposition with the others, and becoming meaningful in the pre-attentive phase of discovery prior to being rationally interacted with. It’s that first visceral reaction to the experience of seeing data that makes it memorable and sticky in our mind. In the end, it is paying roughly equal attention to the pre-attentive and cognitive phases of realization that will create a pleasant and effective experience for the user.
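
As a rough illustration of that pre-attentive “dance,” here is a minimal browser sketch (the data values, timings and styling are all arbitrary) that staggers the entry of data bars so motion and order register before conscious analysis kicks in:

```js
// Minimal browser sketch: stagger the entry of data bars so the eye gets
// motion and order cues before the rational mind engages.
// Data, container, colors and timings are purely illustrative.
const data = [12, 48, 27, 63, 35];

const container = document.createElement('div');
document.body.appendChild(container);

data.forEach((value, i) => {
  const bar = document.createElement('div');
  bar.style.cssText =
    'height:18px;margin:4px 0;background:#4a90d9;' +
    'width:0;opacity:0;transition:width 600ms ease-out, opacity 600ms';
  container.appendChild(bar);

  // Each bar enters slightly after the previous one (the "dance").
  setTimeout(() => {
    bar.style.width = value * 4 + 'px';
    bar.style.opacity = '1';
  }, 150 * (i + 1));
});
```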

I’ve taken this somewhat fanciful theory of engagement, and we’re building our data visualizations with a focus on both the irrational and the rational (the pre-attentive and analytical phases); the initial results have drawn strongly favorable reactions from our users. Below is a video we’ve made of the first attempt:

http://www.youtube.com/watch?v=Hcv5PhMLniE

JSON API for HTML view composition

In anti-templating, DOM, Javascript, purejs, templating, Uncategorized on February 3, 2012 at 6:53 pm

idom = idi.bidi.dom

What is it?

JSON API for templated HTML & SVG view compositing

idi.bidi.dom (idom) offers a new and different way of interacting with the DOM.

In abstract terms, idom takes the DOM and adds variables, scope, variable memoization, multiple inheritance and type polymorphism (with the Node Prototype as the user-defined type). In logical terms, it offers a flat JSON API for creating, populating, and de-populating predetermined DOM structures, with the ability to link, directly access, populate and de-populate other such DOM structures at any depth within them. It gives us a simpler alternative to the browser’s hierarchical DOM manipulation API while allowing us to reduce the amount of HTML as well as separate the HTML from the presentation logic.
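
To give a feel for the idea (and only the idea; this is not idom’s actual API, which is documented at the project links below), here is a toy sketch of populating a predeclared structure that contains $-prefixed variables through one flat call, instead of a chain of hierarchical DOM calls:

```js
// Hypothetical sketch only -- NOT idom's actual API (see the project links
// below for that). It just illustrates the general idea described above:
// a predeclared structure with variables, cloned and populated via one
// flat JSON-style call rather than createElement/appendChild chains.

// A toy "node prototype" with $-prefixed variables (illustrative syntax).
const personPrototype =
  '<div class="person"><span>$name</span> works as <span>$role</span></div>';

// One flat call: clone the prototype once per record and fill in the
// variables. Real idom adds linking, direct access, memoization, etc.
function populate(prototype, records) {
  return records
    .map(record => prototype.replace(/\$(\w+)/g, (_, key) => record[key] ?? ''))
    .join('\n');
}

const html = populate(personPrototype, [
  { name: 'Ada',  role: 'engineer' },
  { name: 'Alan', role: 'mathematician' }
]);

console.log(html);
// <div class="person"><span>Ada</span> works as <span>engineer</span></div>
// <div class="person"><span>Alan</span> works as <span>mathematician</span></div>
```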

The idi.bidi.dom project on github:

project page: http://idibidiart.github.com/idi.bidi.dom/

sources: https://github.com/idibidiart/idi.bidi.dom

For more Javascript stuff, follow my other blog (Tales From The JavaCrypt)

Enjoy.

Ideas For Entrepreneurs: 0001 – Wifi Spots Should Have A Favicon

In Uncategorized on July 8, 2011 at 2:12 pm

Author: Marc Fawzi

Twitter: http://twitter.com/marcfawzi

A “favicon” or “favorites icon” is a small image (usually a logo), provided by the site being displayed, that is shown by browsers.

I just signed on to a random wifi spot and thought it would be nice to have a favicon displayed next to the signal strength indicator. That image could be sent during the sign-on process; the wifi protocol does not have to be changed. In fact, the whole thing could be done by a separate app that runs on the local machine, or it could be built into the OS. If it’s a separate app, it could also be clickable and present all sorts of promotional offers (coupons for coffee?) and helpful hints about the place offering the wifi (like the location of the restrooms, the menu, etc.)
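
As a purely hypothetical sketch (no such extension exists today; every field name below is invented), the payload a hotspot could hand to such a companion app during sign-on might look like this:

```js
// Purely hypothetical payload -- nothing like this exists in the wifi
// sign-on flow; every field name here is invented for illustration.
// The idea: the sign-on step hands a small JSON blob to a companion app,
// which shows the icon next to the signal-strength indicator and surfaces
// the extras when clicked.

const hotspotInfo = {
  ssid: 'CornerCafe-Guest',
  favicon: 'https://example.com/cornercafe-32x32.png',
  offers: [
    { title: 'Free refill with any pastry', expires: '2011-07-31' }
  ],
  hints: {
    restrooms: 'back left, past the counter',
    menu: 'https://example.com/menu'
  }
};

// A companion app would fetch the icon and render the offers/hints on click.
console.log(`Show ${hotspotInfo.favicon} next to the ${hotspotInfo.ssid} signal bars`);
```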

Whoever builds this, I’d love to hear from you.

And I am sure there were some similar failed experiments in this area, so if you’re aware of any I’d love to hear about those, too.

The Packetization of Knowledge (Why Twitter Is Important)

In Uncategorized on May 18, 2011 at 2:53 pm

Author: Marc Fawzi

Twitter: http://twitter.com/marcfawzi

File Under: The Deeper Meaning of Twttr

The thesis that I’d like to articulate here is that knowledge packetization (a la Twitter), a term I came up with to describe the cramming of sense impressions and arbitrary logic into limited space, is leading to a perceptual compression of information that allows us to perceive more and more with fewer and fewer mental resources. That, in turn, leads to higher efficiencies in knowledge acquisition… and, very paradoxically at first glance, it may end up leaving more room in our heads for *feelings* to emerge (have a look at this.)

To put it out there, the big picture (beyond Twitter) is that we’re ultimately heading toward an all-knowing, all-seeing, and all-connected race. And Twitter and systems like it are helping start this trend.

If you are inclined to think/feel deeply about it, Twitter, IMO, represents a fundamental evolutionary paradigm that is here to stay and that has yet to be fully explored.

Related

  1. Deep Sharing In 140 Characters Or Less!
  2. The Failure of The Facebook Social Networking Model

The Failure of The Facebook Social Networking Model (Updated)

In Uncategorized on April 27, 2011 at 3:56 pm

Author: Marc Fawzi

Twitter: http://twitter.com/marcfawzi

Article

It sounds strange to have the word “failure” in the title when the company has over half a billion users and their IPO is probably valued in the hundreds of billions. However, the kind of failure we’re talking about is not on the surface.

In my casual experiment, which I conducted over a two-year period, I added a few “almost famous” people (like DJs, artists, mathematicians, scientists, etc), each of whom has over 1000 FaceFriends in various age ranges and demographics. I also added actual friends and relatives, some of whom have over 600 FaceFriends. I also watched interaction with celebrities on Facebook, including those with over 10 million fans. I noticed that, just as with myself and the average social network user, the percentage of people these highly popular people interact with on a daily basis is nowhere near 50% or even 30% of their total FaceFriends. I estimate this figure to be in the range of 5% to 15%, based on a combined sample of 200 people. If more than 20% of your FaceFriends actually interact with your status feed on a regular basis then you’re doing better than some celebrities.

There is nothing wrong with a platform that allows you to engage with 5%-15% of your social contacts on a regular basis, including your friends and family. If you have 100 friends, that’s 5-15 people you might not be interacting with regularly if it weren’t for Facebook.

The disturbing truth, however, is that unless you make your Wall invisible to the 85%-95% of your FaceFriends who do not bother to engage with you, you’re exposing yourself to either of these two socially undesirable scenarios:

  1. The majority of your FaceFriends are “lurkers” (i.e. they get to read what you have to say but have no desire or are too socially inhibited to engage with you) or
  2. The majority of your FaceFriends have decided to “hide” your status feed

Lurking is the epitome of being anti-social; most people agree that it’s very weird. However, many people will actually lurk if they think there is no way to tell, and most of them think so. (Note to lurkers: there is reason to believe that Facebook exposes lurkers by assigning higher precedence to them when showing the list of FaceFriends in the Friends column on the left side of the home screen. Facebook seems to show more frequently those who have clicked to view comments on your posts and those who have recently interacted with you, or whom you have recently interacted with in some way, e.g. they just added you as a friend or just sent you a message, etc.)

Having your status feed hidden by a FaceFriend is emotionally dishonest on their part. Why would anyone hide the status feed of someone they’ve added as a friend? If they were emotionally honest, they would unfriend the person rather than give them the impression of being one of their “friends” on Facebook while actually hiding their feed.

Another possible reason to explain “where did everyone go?” besides people being lurkers or hiding your status feed is that most of your FaceFriends may be ‘FaceDead’ (i.e. people who have accounts on Facebook but rarely use it.)

Making your Wall invisible to the 85-95% of your FaceFriends who don’t engage with you is similarly anti-social and potentially hurtful (to them.)

So what are people doing to get out of this strange situation?

I actually ended up removing the majority of my FaceFriends (the 85% who never interact *minus* family and current real life friends and a couple of worthy people I decided to exempt from the purge) and now I’m happier than I was.

(I also made sure that out of the remaining FaceFriends only those who have interacted with my status feed in the past year or so get to see my posts.)

This turns Facebook into a private chat space, more or less, which means that it has failed (at least for me and many others) in delivering on the promise implied in its design of allowing us to interact with a large number of people.

The design of the Tumblr and Twitter social “follower” model doesn’t promise that and it actually delivers on the promise of enabling people to have followers, but as it turns out that’s not what I’m looking for.

What I’m looking for is to engage with at least half of the people on my Facebook friends list on a pretty much regular basis (which is a reasonable expectation.) I figured that by adding a lot of people (with mostly similar backgrounds and tastes) to my Facebook friends list I would eventually be able to do just that. But that hasn’t worked for me and I have yet to see it work for anyone (including very popular/highly sociable people.) The 5%-15% limit seems to be a fundamental limit (or property) inherent in the design of Facebook’s social networking model, and to me that signifies the failure of the model.

So more experimentation is coming up, to try and figure out what kind of social networking model would enable people to interact with the majority of their friends (both real and online-only friends) on a regular basis. Something that requires people to interact with a good percentage of their contacts, rather than lurk, hide status feeds or ultimately fall into small groups.

I think such an optimistic model is not only possible but essential.

However, it’s obvious that we’ll have to think very differently about how social networking is done online.

Update:

To complicate things further, a new feature in Facebook makes it so that you will not see the status feed of people you haven’t interacted with in a while unless you change the default setting. In other words, they’ve taken the problem elaborated on in this article (the fact that 85-95% of our “FaceFriends” on Facebook rarely, if ever, interact with us), a general problem that affects everyone from extreme loners to celebrities with tens of thousands of fans, and dug in deeper, turning it into a feature that covers up the basic issue: Facebook has failed to enable people to engage with a larger audience.

This supports the well-hidden truth that Facebook is just a fancy chat room with enough smoke and mirrors to give us the impression (but not the reality) of having a large social circle.

Japan: what could we have done?

In Uncategorized on March 30, 2011 at 8:53 pm

Author: Marc Fawzi

Twitter: http://twitter.com/marcfawzi

What could we have done to prevent the terrible disaster in Japan?

One thing that came up as I pondered this was the idea of triggering a controlled and much smaller man-made earthquake (or a series of them) along the critical fault line area, just enough and just in the right spots to preempt the build up of a massive event.

It’s like weather engineering for agriculture applied to fault-line maintenance.

Here is where I got the idea from:

http://pre.aps.org/abstract/PRE/v81/i1/e015102

A Natural World Order (fka, Beyond ‘Democracy vs. Dictatorship’)

In Uncategorized on February 21, 2011 at 1:42 am

Author: Marc Fawzi

Twitter: http://twitter.com/marcfawzi

In a democracy, the majority decides, not the individual.

We have a democracy in the US.

But do I get to decide? Nope. Do you? Nope. Does any individual get to decide? Nope.

The “majority” get to decide.

If you are part of the majority on a given issue then you’re in luck. Otherwise, your voice will be ignored, even if it’s given room to be heard.

Democracy is a concept that was conceived of thousands of years ago. Technology has advanced, and we can now have a more sophisticated governance model than anything the ancients could think of.

For example, if I match my opinions to everyone out there using some “match making” app and find out that on the 100 most important issues to me there are 12 million people in the US who agree with me (not a majority) then I and those 12 million people can abide by a Constitution/Law that is in agreement with us, not a One Size Fits All one.

The Internet, coupled with automated people-matching technologies, gives us the option to join virtual societies, each with its own set of laws, and even to virtually immigrate from one society to another, no different than immigrating from the US to Canada or vice versa. The only universal law would be that in order to have your own laws you’d have to have a group of at least 7 million people (or 0.1% of the population of the planet) who agree to abide by those laws and nothing but those laws. This way the only governance challenge becomes getting and keeping that many people under one ideology or set of rules.
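
As a rough sketch of the “match making” idea (the issue names, stances and the agreement threshold below are all invented), the core computation is just scoring agreement across a shared list of issues:

```js
// Minimal sketch of the "match making" app described above: compare two
// people's stances on a shared list of issues and report the fraction they
// agree on. Issues, stances and the threshold are invented for illustration.

function agreement(a, b) {
  const issues = Object.keys(a).filter(issue => issue in b);
  const agreed = issues.filter(issue => a[issue] === b[issue]).length;
  return issues.length ? agreed / issues.length : 0;
}

const me      = { basicIncome: 'for', carbonTax: 'for',     termLimits: 'against' };
const someone = { basicIncome: 'for', carbonTax: 'against', termLimits: 'against' };

const score = agreement(me, someone);
console.log(`agree on ${(score * 100).toFixed(0)}% of shared issues`);

// A virtual society forms once enough people clear some agreement bar:
const likeMinded = score >= 0.9;
console.log(likeMinded ? 'same virtual society' : 'different societies');
```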

The only concern some may have with this idea is that there would be a lack of global world order. That is actually a very good thing, since a global world order would ensure hegemony, and hegemony is not a good thing. Instead, order would manifest on the local scale (each group of 7 million people or more would have its own world order tailored to its needs, hopes, and desires), so there would be no need for hegemony or a New World Order for that matter, just localized order, which is how order manifests in nature.

In a way, we can call such a system a Natural World Order.

Using Google To Send Smoke Signals

In Uncategorized on December 19, 2010 at 6:28 pm

Author: Marc Fawzi

Twitter: http://twitter.com/marcfawzi

The smoke signal was, as far as I’ve read, invented by Native American Indians.

The way it works, as far as I’ve gathered, is similar to a primitive blogging system, used mostly for publishing emergency or battle-related information.

The tribe would send smoke-encoded messages for *anyone* who understood the particular encoding system. The encoding technique was developed over time, in an emergent fashion, by several tribes unknown to each other who happened to recognize the patterns (that make up a language) in each other’s messages and produce their own messages in the same language, adopting, modifying or dropping whatever patterns they felt like while generally sticking to the most successful/survivable ones, thus evolving the language, its grammar, and the medium in a cooperative fashion, in the same way that all natural languages evolve.

So in my experiment, I would put certain messages in the titles of my most-Googled blog posts (the ones that appear in the top 9 links in the Google search result for a given search, e.g. “web 3.0 wikipedia” or “google monopoly” or “p2p dns” etc) which then become visible to my competitors for these search terms (as they are likely to be watching their ranking for these terms in Google search results.)

For instance, I could communicate with my Google search placement competitors by adding certain words to the titles of my most-Googled blog posts, like “haha I’m still ahead :)” or some message that can only be understood by those competitors, and maybe then we can strike up a conversation.

Like every other thinking person, I’m amazed at how we’ve managed to evolve so far in terms of technology while remaining as primitive inside as we’ve ever been since the dawn of what archeologists would agree was the first man or woman. We have yet to evolve into a race that focuses on cooperation and emotional and cultural connectedness. Instead we continue to be in survival-of-the-fittest mode, and Google only reinforces that with algorithms that pit content producers against each other in competition for the scarce attention span/bandwidth of consumers, placing those who have been blessed (like myself) with arbitrarily high search placement in many topics ahead of others who also have something very meaningful to say. I’m not complaining about it for the sake of complaining, but for the sake of our evolution. It’s absolutely insane to continue to base our readership economy or attention economy on mere keywords, linking, like buttons and artificial search engine optimizations: they all can be gamed, such that an article that barely touches the subject places higher in the search results than one that goes really deep and thus has far more meaning (in relation to the subject.)

Anybody with enough brains can game the system when the system is absolutely primitive, like Google search is today. Take the system to a deeper, more evolved level, where meaning, not keywords, determines the worth of a link (think: Semantic Rank), and we’ll all be better off.
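
Here is a toy sketch of what ranking by meaning rather than keyword overlap could look like. The concept vectors below are hand-made stand-ins; a real “Semantic Rank” would derive them from an ontology or a learned model, which is exactly the hard part.

```js
// Toy sketch of the "Semantic Rank" idea: score a document by how close its
// concepts are to the query's concepts, not by raw keyword overlap.
// The concept weights below are invented stand-ins for illustration.

function cosine(a, b) {
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  let dot = 0, na = 0, nb = 0;
  for (const k of keys) {
    const x = a[k] || 0, y = b[k] || 0;
    dot += x * y; na += x * x; nb += y * y;
  }
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

const query = { 'semantic web': 1, ontology: 0.8 };

const docs = [
  { url: '/keyword-stuffed', concepts: { 'semantic web': 0.2, seo: 1 } },
  { url: '/in-depth-essay',  concepts: { 'semantic web': 0.9, ontology: 0.7, rdf: 0.5 } }
];

docs
  .map(d => ({ url: d.url, rank: cosine(query, d.concepts) }))
  .sort((x, y) => y.rank - x.rank)
  .forEach(d => console.log(d.url, d.rank.toFixed(2))); // deep essay ranks first
```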

Obviously, detractors will come up with a thousand unfounded excuses that they strongly believe are the reasons why we can’t go there yet, but I dare any triple-PhD computer scientist to explain to me what we need to get there besides the sheer human will to cooperate on a mass scale.

Related:

  1. Wikipedia 3.0: The End of Google?
  2. Designing a Better Web 3.0 Search Engine
  3. Intelligence (Not Content) is King in Web 3.0
  4. Wikipedia 3.0: Three Years Later
  5. Self-Aware Text
  6. Semantic Blog

License: Attribution-NonCommercial-ShareAlike 3.0

The Shift From Top-Down To Bottom-Up Production

In Uncategorized on March 28, 2010 at 11:07 am

Author: Marc Fawzi

Twitter: http://twitter.com/marcfawzi

It’s granted that top-down basic research is not well funded in the US.

It’s granted that top US technical schools like MIT, CMU and others have been doing contract basic research that is funded by foreign companies (not sure what restrictions are in place if any) and students from foreign countries have been getting their PhDs in material science, nuclear physics, biotechnology, etc and moving back to their countries of origin (since their tuition is paid for by those countries’ governments.)

It’s also granted that China (and on a smaller scale India) are funding basic research within their own countries. India now has the world’s first supersonic guided missile. China has the world’s first “clean coal” technology. The latter comes as a surprise, since many millions have been spent by environmental groups here in the US telling us that there is no such thing as “clean coal.” Maybe those millions should have been spent inventing the technology, but conventional wisdom tells us that top-down commitment at the governmental level is needed to enable the kind of basic research that would lead to major breakthroughs.

However, the world is moving from a top-down, centralized approach to basic research to a bottom-up, peer-to-peer (p2p) based one, and the US has the right culture to lead the world when it comes to bottom-up innovation, even when it comes to basic research.

The ever accelerating evolution (or devolution) of our top-down capitalist society is paradoxically putting more and more power in the hands of individuals; and we can all attest to it given how easy it has become to build a software startup, and hardware startups will be just as easy when the open hardware and open manufacturing movements come out into the mainstream (see: http://freedomdefined.org/OSHW.)

Another example is the case of “DIY Biology,” where bio-hackers are using “bio bricks” and the latest DIY Bio technology to build new compounds and even new (genetically modified) living systems (for now bacteria and plants) for practical purposes.

So my view is that the US will lead the world in bottom-up basic research, given the recent trends in open software, open hardware, open biology and open manufacturing, which are most active among college students, engineers and scientists here in the US.

China is simply NOT setup to accommodate bottom-up innovation. And it’s bottom-up innovation that will change the world, for the better or worse. I tend to think for the better, despite the massive risks to the current order that come from giving individuals so much power, even power over life itself (sounds dramatic until you learn what college kids are doing with those DIY Bio kits.)

It’s true that individuals cannot set up a semiconductor foundry or a flat panel display factory, but there are emerging technologies (like cheap printable electronics and even cheap printable light-emitting elements) that will allow individuals to innovate in these areas, empowered by DIY tech (the key to bottom-up innovation.)

And let’s not forget desktop manufacturing, especially the new breed of machines that can produce intricate and precise structures from metals, ceramics, and plastics (see Useful Links.)

The people don’t only have the fire now, they know how to make the matches.

And there is no country in the world with a more innovative bottom-up spirit than the US.

So the shift of top-down innovation to China does not bother me because it’s about time the world woke up to the idea of DIY innovation, both basic and integrative.

FWIW, I’ve been thinking a lot about peer production systems and economies, and if anyone is interested I can dialog about the key trends driving the underground shift from top-down production systems to peer production (or “bottom up” production.)

Useful Links

  1. The BioBricks Foundation (at OpenWetWare) “Using BioBrick™ standard biological parts, a synthetic biologist or biological engineer can already, to some extent, program living organisms in the same way a computer scientist can program a computer.”  See this too: http://syntheticbiology.org/BioBricks.html
  2. Ponoko – “The world’s easiest making system” (updated Nov 20, 2010: new 3D printing materials are durable plastic, superfine plastic, rainbow ceramic, stainless steel and gold plate.)
  3. Shapeways – like Ponoko but seems to be mostly for small-sized objects
  4. RepRap – Homemade rapid prototyper (fantastic!)
  5. Open Source Hardware initiative

More links coming soon.

 

A Better Way To Price iPhone Apps (and mp3s)

In Uncategorized on December 18, 2009 at 6:30 pm

Author: Marc Fawzi

Twitter: http://twitter.com/#!/marcfawzi

~~

If the price of an app was demand-indexed, starting at some arbitrary price near $0 when the app is launched and then going up and down with demand, then that would have interesting consequences.

Obviously, how that is tuned varies from app to app. For example, for a given class of apps, the demand-indexed price may be e.g. $0.50 at 1,000 downloads a day and $0.99 at 10,000 downloads. Many factors would go into deciding that curve (1), but the point is that the price should change with the rate of demand just like the price of scarce goods. However, unlike scarce goods, where there is the concept of an optimal [market] price at which the most profit is generated, the demand-indexed price would be optimal within the entire range of ‘FREE to CHEAP.’

This way, if a developer has a great app with great potential they get the most adoption upfront, helped by the near-free price, and as the market for that app heats up they get to enjoy higher profits from a higher price.

I think Apple got the idea of $0.99 for music singles from the publishing business, where the price of, e.g., a music CD or a book is fixed and does not go up and down with demand.

The assumption is that a book or music CD can be replicated infinitely at a fixed cost per unit, so why slow down sales with a higher price if demand is shooting up? However, when we’re talking about an mp3 or an .app, the cost of replication is so negligible that pricing an app or mp3 at $0.01 produces a profit (after the initial sunk cost of development/creation, and assuming no recurring costs like cloud usage fees or bandwidth other than those paid for by Apple and factored into their model). So increasing the price from $0.10 to $0.25 will NOT slow down sales as demand rises, because people are willing to pay ANYTHING between FREE and CHEAP for something they think is good, and the perception of how good the app is increases with demand for that app (as that leads to more chatter among connected consumers and more hype in the press). So all one has to do is figure out what “CHEAP” is for the given class of app (via a user survey), then introduce the app at, e.g., $0.01 and change the price daily (or even in real time) while keeping it less than or equal to CHEAP.
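
A minimal sketch of the demand-indexed price described above (every constant is illustrative, fit only to the $0.50-at-1,000 and $0.99-at-10,000 example curve mentioned earlier):

```js
// Minimal sketch of a demand-indexed price: start near free, move with the
// download rate, never exceed the surveyed "CHEAP" ceiling for this class
// of app. All constants are illustrative.

const FLOOR = 0.01; // launch price
const CHEAP = 0.99; // ceiling from a user survey for this app class

function demandIndexedPrice(downloadsPerDay) {
  // Straight line in log10(downloads), fit so that ~1,000/day -> ~$0.50 and
  // ~10,000/day -> ~$0.99 (the example curve above), clamped to [FLOOR, CHEAP].
  const raw = 0.49 * Math.log10(Math.max(downloadsPerDay, 1)) - 0.97;
  return Math.min(CHEAP, Math.max(FLOOR, Number(raw.toFixed(2))));
}

[10, 100, 1000, 10000, 50000].forEach(d =>
  console.log(`${d} downloads/day -> $${demandIndexedPrice(d).toFixed(2)}`)
);
```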

There are a couple of key considerations to take into account when attempting this model. They have to do with the nature of demand in the long-tail market for content.

License: Attribution-NonCommercial-ShareAlike 3.0

Wikipedia 3.0 (Three Years Later)

In Uncategorized on August 11, 2009 at 4:51 pm

Author: Marc Fawzi

Twitter: http://twitter.com/#!/marcfawzi

License: Attribution-NonCommercial-ShareAlike 3.0

I’ve just received a couple of questions from a contributor to an IT publication who is writing about the state of the semantic web.

I’m taking the liberty of posting one of the questions I received along with my response.

<<

[The Semantic Web has seen] several years of development. For instance, there’s been steps to create Syntax (e.g., OWL & OWL 2, etc.) and other standards. Various companies and organizations have cropped up to begin the epic work of getting ‘the semantic web’ underway.

How would you characterize where we are now with the web. Is Web 3.0 mostly hype?
>>

RDF, which emerged out of the work being done on the semantic web, is now being used to structure data for better presentation in the browser; it’s being used by Google and Yahoo. So you can say that the semantic web is starting to bear some fruit. But unlike OWL-DL, RDF does not have the structure to implement a logic model for a given domain of knowledge, which is what machines need in order to reason about the information published under that domain. However, RDF and RDFa (and other variations) are perfect for structuring the information itself (as opposed to the logic model for the given domain of knowledge). So the next step will be to use RDF to structure information for machine processing, not just for browser presentation, combined with domain-specific inference engines (which, in this case, would combine logic programs and description logic for the various knowledge domains) to build a pan-domain, basic-AI-enabled “answer machine,” which is fundamental to any attempt at making machines ‘comprehend’ the information on the Web, per the full-blown semantic web vision.
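
As a toy illustration of that distinction (with an invented vocabulary), plain triples structure the facts, while a single domain rule, standing in for the inference-engine layer, derives an answer that was never stated directly:

```js
// Toy illustration: plain triples carry the structured facts (the RDF/RDFa
// role), while a domain-specific rule (a stand-in for the OWL-DL/inference
// layer) lets a machine derive something never stated directly.
// The vocabulary is invented for illustration.

const triples = [
  ['Aspirin',   'treats',   'Headache'],
  ['Headache',  'isKindOf', 'Pain'],
  ['Ibuprofen', 'treats',   'Inflammation']
];

// Domain rule: if X treats Y and Y isKindOf Z, then X relievesSomeKindOf Z.
function infer(facts) {
  const derived = [];
  for (const [x, p1, y] of facts) {
    if (p1 !== 'treats') continue;
    for (const [y2, p2, z] of facts) {
      if (p2 === 'isKindOf' && y2 === y) derived.push([x, 'relievesSomeKindOf', z]);
    }
  }
  return derived;
}

console.log(infer(triples)); // [ [ 'Aspirin', 'relievesSomeKindOf', 'Pain' ] ]
```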

The “hard” problem with the semantic web is not the natural language processing, since we don’t really need it right at the start: we can always structure the information in such a way that it can be processed by machines and then comprehended using the aforementioned pan-domain AI; or, in the case of search queries, we can come up with a query language with proper and consistent rules that is easy for the average educated person to use, such that the information/query is machine-processable and can be comprehended using domain-specific AI.

The “hard” problem is how can all the random people putting out the information on the Web agree to the same ontology per domain and same information structuring format when they do not have the training or knowledge to even understand the ontology and the information structuring format?

So both ontology creation/selection and information structuring have to be automated to remove incompatibilities/variances and human errors. But that’s not an easy task as far as the computer science involved is concerned.

However, instead of hoping to turn the whole web into a massive pan-domain knowledgebase, which would require that we conquer the aforementioned automation problem, we can base our semantic web model on expert-constructed domain-specific knowledgebases, which by definition include domain specific AI, and which have been in existence for some time now, providing a lot of value in specific domains.

The suggestion I put forward three years ago in Wikipedia 3.0 (which remains the most widely read article on the semantic web, with over 250,000 hits) was to take Wikipedia and its set of experts, estimated at 30,000 peers as of 2006, and get those 30,000 experts to help build the ontologies for all domains currently covered by Wikipedia, as well as properly format the information that is already on Wikipedia, so that a pan-domain knowledgebase, able to reason about information in all the domains of knowledge covered by Wikipedia, could be built on top of Wikipedia, resulting in the ultimate answer machine.

The Wikipedia 3.0 article and some of the links there describe how that can be done at a high level, along with some implementation ideas. There is nothing intractable, IMO, except for the leadership problem.

2010 Update:

It’s obviously possible to start small and augment the capabilities as we go, and maybe something like Twitter, where knowledge is shared in literally small packets, would be a good way to go, i.e. making tweets machine-comprehensible and letting some kind of intelligence emerge from that, rather than building a pan-domain answer machine, which is a much bigger task, IMO.

But that all depends on when Twitter decides to support Annotations. I hear it’s coming soon, and I can’t wait to see what can happen when tweets become ‘contextualizable’ and intelligence can emerge in a truly evolutionary process, through trial and error, collaboration and competition and (my favorite) unexpected and novel consequences of random events that end up enriching the process.
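
As a purely hypothetical sketch (Annotations never shipped in any final form, and the namespace/key/value shape and every field below are assumptions on my part), a machine-comprehensible tweet could look like this:

```js
// Hypothetical sketch of a machine-comprehensible tweet. Twitter's proposed
// Annotations feature never shipped in this form; the namespace/key/value
// shape and all fields below are assumptions for illustration only.

const tweet = {
  text: 'Wikipedia 3.0 could become the ultimate answer machine',
  annotations: [
    { namespace: 'concept', key: 'topic',     value: 'Semantic Web' },
    { namespace: 'concept', key: 'relatesTo', value: 'Wikipedia' },
    { namespace: 'claim',   key: 'predicts',  value: 'answer machine' }
  ]
};

// A downstream agent no longer needs NLP to index the claim:
for (const a of tweet.annotations) {
  console.log(`${a.namespace}:${a.key} -> ${a.value}`);
}
```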

Maybe that would turn Twitter into the next Google?

2011 Update:

There is news now that Wikipedia is pursuing the Wikipedia 3.0 vision I outlined in 2006. However, the corruption among the power-tripping, teenage-like administrators at Wikipedia (documented on Slashdot and in this article) has meant that the Wikipedia 3.0 article I wrote in 2006 is not welcome on Wikipedia itself (not even under Semantic Web, Web 3.0 or Wikipedia 3.0 — try adding it yourself and see!), even though it was the article that launched public interest in Wikipedia as a semantic knowledge base and a basis for Web 3.0.

Is that sad? It may be, but it only proves the need for a P2P version of Wikipedia!

P2P Energy Production (Smart Grid) and P2P Web

In Uncategorized on September 9, 2008 at 9:49 pm

Author: Marc Fawzi

Twitter: http://twitter.com/#!/marcfawzi

License: Attribution-NonCommercial-ShareAlike 3.0

~~

In the future, everyone will be an energy producer and consumer. Everyone will produce their own energy and either sell the surplus to others or buy extra wattage from others.

That’s part of the premise and promise of the “smart grid” aka “intelligent utility network” aka “Intergrid.”

See this: http://www.odemagazine.com/doc/56/talkin-bout-my-generation/2

So if everyone can be a producer and consumer of energy, then everyone can also be a producer (not just a consumer) of Web infrastructure, starting with people owning Mesh/802.11s-enabled wireless routers and going all the way to people owning and renting out P2P-enabled storage, processing power and connectivity.

Where do today’s dominant Web players fit in such a scenario (e.g. Google)?

Answer: nowhere, as far as I can see.

Google is the biggest private consumer of energy. It may also be the biggest producer of energy one day. But I’m betting that such a day won’t come; i.e., that we will move to a P2P (or edge-driven) consumer-producer model, or P2P Economy, and away from the network- or cloud-centric model.

Related

  1. Towards a World Wide Mesh (WWM)
  2. P2P Energy Economy

People-Hosted “P2P” Version of Wikipedia

In Uncategorized on July 23, 2008 at 1:29 pm

Author: Marc Fawzi

Twitter: http://twitter.com/#!/marcfawzi

License: Attribution-NonCommercial-ShareAlike 3.0

Wikipedia and Web 3.0

Problem Statement:

The New York Times’ article on Web 3.0 from last year, which is basically a rewording of the popular Evolving Trends Web 3.0 article that came out five (5) months before it, was accepted into the Wikipedia entry on Web 3.0, while a Wikipedia admin (or zealot) rejected the inclusion of the Evolving Trends article on the basis that it is a blog entry, i.e. insignificant, despite the fact that the Evolving Trends article has been read by 211,000 people (to date) and quoted by hundreds of people, which probably makes it the most-read blog article about the Semantic Web to date.

So it is disturbing that arbitrary rules, which are often applied arbitrarily, can be exploited to put the “privilege-to-dictate-what-qualifies-as-knowledge” ahead of the right of the public to complete, well-rounded and uncensored knowledge.

Why was a copy-cat article about Web 3.0 authored by the New York Times more significant than the original blog article, which preceded the New York Times article, and which was read and quoted by a very significant number of people…?

In other words, what makes the blog a lesser medium than a newspaper, especially one that has had several ethics breaches including plagiarism?

Let’s say that I had agreed to publish the Web 3.0 article in question in a well-respected academic journal that had contacted me (through a contributing editor at Stanford University); would that have made the ideas any more legitimate?

The real proof of quality comes from the relevance of the subject to the people. Today, two years after I blogged “Wikipedia 3.0: The End of Google?” (the first article to coin the term Web 3.0 in connection with the Semantic Web, AI Agents and Wikipedia), we can find many Semantic Web startups applying semantic technology to Wikipedia. Before the publication of the article, there was not one startup and no mention of Wikipedia in the context of the Semantic Web, although the Semantic Mediawiki guys were already working in that direction.

Is it by pure coincidence that, after the huge popularity of the Evolving Trends Web 3.0 article, we now have not one or two but several startups and groups working on applying semantic technology to Wikipedia, including PowerSet, a startup that was recently acquired by Microsoft, and, not surprisingly, Wikia, a commercial venture led by Jimmy Wales, Wikipedia’s founder? In addition, when the Evolving Trends Web 3.0 article was published it became the top blog post on the Google Finance front page (and stayed there for a few months) when people searched for GOOG (Google’s stock symbol.) So I’m sure Google’s investors and management did notice it, and it’s not hard at all to think that it also had _some_ influence on Google’s decision to build a Wikipedia competitor, although there is no way to prove it did.

All of this leaves me wondering why the Evolving Trends Web 3.0 article was removed from the Web 3.0 entry in Wikipedia. After all, it was the first article to coin, in a highly publicized manner, the term “Web 3.0” in conjunction with the Semantic Web and Wikipedia.

When the rules are arbitrary, and when they are applied arbitrarily, it’s impossible to tell the reason or the motive behind the reason.

The whole affair is not a single isolated case. It has happened and is happening regularly to many well-known bloggers and authors, yet it has its unique circumstances and unique flavor (or personal experience) in each case.

Thus, based on my experience and the experiences of many others, I feel that there is a real flaw in the governance model of Wikipedia. It needs to be fixed, or we risk being exposed more broadly to the tyranny that comes with the arbitrary dictation of the truth and the rewriting of history to fit the agenda of those with power and influence, who can rewrite history at will by dictating what gets written about any event. In my particular case, that event happens to be the highly publicized first coining of the term “Web 3.0” in conjunction with the Semantic Web and Wikipedia itself, which is nowhere to be seen on Wikipedia!

The Solution is P2P:

The best fix, IMO, is to replace Wikipedia with a distributed “P2P”-hosted encyclopedia that allows multiple versions of any given topic, from different authors, which would be rated by the users.

Eventually, or as the second logical step, we would need to apply a democratic model rather than rely on the unwisdom of the crowds. In other words, are the Wikipedia admins elected by the people? No, they’re not! So what must be done in the proposed “people-hosted” Wikipedia is to let the people (us, the users) elect representatives who would rate up or down the various versions of a given topic entry submitted by different authors (but who would not be able to delete or bury any of the versions.)
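
As a minimal sketch of the data shape implied here (the structure and all names are invented for illustration), each topic would hold multiple versions, ratings would be append-only and restricted to elected representatives, and nothing could be deleted or buried:

```js
// Minimal sketch of the proposal above: one topic, several competing
// versions from different authors, ratings cast only by elected
// representatives, and no operation that deletes or buries a version.
// Structure and names are invented for illustration.

const topic = {
  title: 'Web 3.0',
  versions: [
    { author: 'alice', host: 'peer://alice.node', body: '...', votes: [] },
    { author: 'bob',   host: 'peer://bob.node',   body: '...', votes: [] }
  ]
};

const electedReps = new Set(['rep-1', 'rep-2', 'rep-3']);

function rate(version, repId, score /* +1 or -1 */) {
  if (!electedReps.has(repId)) throw new Error('only elected reps may rate');
  version.votes.push({ repId, score }); // append-only: nothing is removed
}

function ranked(t) {
  const total = v => v.votes.reduce((sum, vote) => sum + vote.score, 0);
  return [...t.versions].sort((a, b) => total(b) - total(a));
}

rate(topic.versions[1], 'rep-1', +1);
console.log(ranked(topic).map(v => v.author)); // [ 'bob', 'alice' ]
```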

See the following for more on building a governance model:

  1. The Unwisdom of Crowds
  2. The Future of Governance

-*-

Wikia and Web 3.0

The hosting of the Semantic Mediawiki, i.e. the Web 3.0 version of Wikipedia’s platform, has been taken over by Wikia, a commercial venture founded by Wikipedia’s own founder Jimmy Wales. This opens up a huge conflict of interest: Wikipedia’s founder is running a commercial venture that takes creative improvements to Wikipedia’s platform, e.g. the Semantic Mediawiki, and hosts those improvements (with the potential to transfer them) on Wikia, his own for-profit venture. This shows poor judgment at best and an explicit conflict of interest at worst. And we’re talking about a key figure in Wikipedia’s governing body.

-*-

New York Times and Web 3.0

Here is the Evolving Trends article that was the first article to coin, in a very publicized manner, the term “Web 3.0” in the context of the Semantic Web, Wikipedia and AI agents:

https://evolvingtrends.wordpress.com/2006/06/26/wikipedia-30-the-end-of-google/

And here is the Web 3.0 article by the New York Times that came five (5) months after the above-mentioned article:

http://www.nytimes.com/2006/11/12/business/12web.html

Related

  1. Wikipedia 3.0: The End of Google?
  2. The Unwisdom of Crowds
  3. The Future of Governance