Friday, November 24, 2006

The Web 3.0 Manifesto - The Knowledge Doubling Curve

[This is part I in a multi-part series titled "The Web 3.0 Manifesto"]

PREFACE: I use the term Human Computing Layer, or HCL, in this article. What is the HCL? Us. It's the oldest computing system on the planet and has been here since we began as a species. Read my previous Web 3.0 article, which discusses Amazon's Mechanical Turk service, to understand how the HCL is becoming a functional and integral part of the Web, and for some ideas on how the integration of the HCL and the Web will take form.

The Knowledge Doubling Curve

To begin this multi-part series on Web 3.0, I want to talk about the phenomenon driving the breathtaking increase in the rate of technological innovation. Several decades ago I read about a study in which researchers set out to measure the time it took for the amount of knowledge in the world to double. They called it "The Knowledge Doubling Curve". (Note: if anyone knows where I can find this article, I would really like to know.) They came up with their own measuring system that, if I remember correctly, consisted of counting the total number of items in print at periodic intervals in recent history. They then graphed that figure over time to see how long it took for the amount of knowledge to double, and finally projected the graph into the future. By looking at the graph it became possible to see the rate at which knowledge was doubling over time. The article showed a graph like the one below:



[Figure 1: The Knowledge Doubling Curve]



I don't remember the exact dates or the exact quantities for the number of items in print at each point in time, so I can't label the X and Y axes of the graph properly. Roughly, the bend in the graph corresponds to a decade somewhere near or after the 1960s. However, despite the lack of exact figures, the radical conclusion represented by this graph still holds:

The amount of knowledge in the world once grew at a linear rate; it now grows at an exponential rate, so the time it takes for knowledge to double keeps shrinking.

As you can see, the graph is asymptotic; that is, the rate continually approaches infinity but never reaches it. This curve, of course, closely tracks the rate of technological progress in modern civilization. (Note: there are some who would say that the Singularity occurs somewhere near the top of the graph. For a fascinating look at how the rate of technological innovation tends to increase exponentially, read any of the latest books or lectures by renowned futurist Ray Kurzweil.)
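To make the claim concrete, here is one illustrative way to formalize it. This little model is my own sketch, not the original study's math:

```latex
% Illustrative model (my own sketch, not the original study's math):
% knowledge K(t) growing at an instantaneous rate r(t).
K(t) = K_0 \exp\left(\int_0^t r(s)\,ds\right),
\qquad
T_{\text{double}}(t) \approx \frac{\ln 2}{r(t)}.
% If r(t) is constant, knowledge doubles on a fixed schedule; if r(t)
% itself grows over time, the doubling time shrinks, which is exactly
% the bend and near-vertical climb in figure 1.
```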

The amount of technological progress made in the last hundred or so years far outstrips all that was made since modern man first appeared on this planet. It took centuries for the wheel, the spear, the bow and arrow, paper, and the like to become commonplace. But since Samuel Morse sent the first modern telegram in 1844, we've seen the light bulb, cars, the telephone, the atomic bomb, jets, radio, television, space travel, computers, nanotechnology, and the Internet. There has been such an astonishing increase in the speed of invention and innovation that the list I just gave you is woefully and radically incomplete.

What changed in the last 150 years or so? It is certainly not us. Being an American, I have read the writings of the forefathers of our country, who gave us one of the most eloquent and powerful documents of our time, The Declaration of Independence. They were brilliant men who would easily tower, intellectually, head and shoulders above most of our contemporary politicians. Therefore the change is not related to an evolution of our DNA or a giant jump in the intelligence of mankind.

The Knowledge Duplication Curve

To understand the answer, let me share an insight I had about The Knowledge Doubling Curve. Look at the curve I plotted in the figure below:



[Figure 2: The Knowledge Duplication Curve]



As you can see, this curve is a mirror image of The Knowledge Doubling Curve flipped vertically. It is not based on any study; it is an intuitive explanation of the mechanics behind The Knowledge Doubling Curve. It shows that as technology advanced, the amount of time humanity wasted creating duplicate solutions fell at a linear rate, and that after the bend, the rate of reduction became exponential.

When a caveman solved the problem of painting on a cave wall, his ability to transfer that solution to others was limited to a small geographical area around him. Paper was a giant leap forward because solutions could now be written down, copied, and spread to others far and wide. The printing press accelerated that spread by making the copying of written works, and therefore the solutions they contained, much faster. But in the last 150 years, the speed of distribution and replication of solutions has reached a breakneck pace never seen before in the history of civilization.

Even in my short lifetime, I have gone from having to visit the local library or track down and telephone an expert to find a solution, to being able to instantly download an entire software package that is a complete solution to a problem or need I have. For example, if you want your own discussion forum, all you have to do is download and spend a few minutes installing a free forum software package like phpBB.

Connectivity

Now, what is driving the rapid decrease in the Knowledge Duplication Curve? Connectivity. The more connected we are, the less time we waste duplicating solutions. Let's highlight particular members of the list of recent technological advancements I gave before: the modern telegraph, the telephone, radio, television, and now the Internet. Each of these was a quantum leap in connectivity, leading to a corresponding quantum decrease in the duplication of effort. I included radio and television because links do not have to be two-way; any efficient broadcast technology, even a one-way one, increases our connectivity.

Web Versioning Defined

Here is the definition for what constitutes a new version of the Web:

Any technological change that delivers a quantum leap in our ability to rapidly share solutions over the Web, by providing modular, reusable building blocks of functionality, constitutes a version change.
  • Web 1.0 - Connected computers together using a set of standardized protocols developed by Internet pioneers Vint Cerf and Bob Kahn.

  • Web 2.0 - Marked by the appearance of Web Services: modular solutions to complex problems, made available over the Web to external developers via an application programming interface (API).

  • Web 3.0 - The marriage of artificial intelligence and The Human Computing Layer (HCL), and their subsequent integration into the Web, making powerful pattern-recognition and problem-solving capabilities widely available to web surfers and developers alike.


Web 1.0 allowed us to share files, data, and software over the Internet.

Web 2.0 allowed us to share modular programming solutions to common problems, available via web API calls. This allowed, and still allows, outside developers to build software applications on top of these services without having to download or integrate foreign code libraries into their own software, greatly increasing the ease and pace of creating new software applications.
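To make that concrete, here is a minimal sketch of what consuming such a Web 2.0 service looks like from the developer's side. The endpoint and parameters are hypothetical stand-ins, not any particular vendor's API:

```python
# Minimal sketch of a Web 2.0-style service call. The endpoint and
# parameters below are hypothetical, not a real vendor's API.
import json
import urllib.parse
import urllib.request

def call_web_service(base_url, **params):
    """Call a REST-style endpoint and decode its JSON reply."""
    url = base_url + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as reply:
        return json.loads(reply.read().decode("utf-8"))

# The application reuses the remote solution instead of re-implementing
# it or integrating a foreign code library:
# result = call_web_service("https://api.example.com/spellcheck",
#                           text="teh quick brown fox")
```

The point is that the entire solution lives behind the API; the calling application stays small.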

Web 3.0 will allow us to share an entirely new class of solutions over the Web, used both by developers and directly by users (web surfers) to build larger, more complex applications. Most importantly, these shareable solutions, with the help of artificial intelligence and the integration of The Human Computing Layer, will allow us to cooperatively solve a class of problems normally reserved for specialized applications in areas such as complex pattern recognition and high-level semantic analysis.

Conclusion

I will close this article with a word of hope and anticipation for the future. If you take a high-energy beam of ordinary light and shine it at a thick piece of steel, you get a nice reflection. When you take that same light and align the photons so they move together in lockstep, they form a laser beam that can burn a hole through that same steel. I leave you with this exciting question: what happens when we, the most powerful computing beings on the planet, working together in superhuman harmony, turn our combined attention to the monumental problems that, to date, have evaded solution?

Coming soon…

In future articles in the Web 3.0 Manifesto series I will discuss further the shape and substance of Web 3.0, especially with regard to how artificial intelligence and The Human Computing Layer will cooperate and integrate with the Web. Thank you for reading this far and sharing some of your time with me.

For more thoughtful commentary on Web 2.0, Web 3.0 and the Semantic Web, I strongly recommend reading Dion Hinchcliffe's recent blog post "Going Beyond User Generated Software: Web 2.0 and the Pragmatic Semantic Web". Pay special attention to his comments regarding "recombinant, self-assembling software that exploits collective intelligence". He does point out that the companies he mentions as involved in this line of research are using good old Web 2.0 techniques, but I feel that this field of research will play a big part in shaping Web 3.0.


Saturday, November 18, 2006

Web 3.0 - You Ain't Seen Nothing Yet!

Web 3.0. The recently coined term has many in the blogosphere screaming "Stop the keyword hype!" and others waxing hopeful that the next wave of Internet progress is finally starting to percolate. Before the seed has even sprouted roots, bloggers are already asking, "But will it make any money?" Donna Bogatin asks this very question in her blog post "Will Web 3.0 Be In The Green?".

It would be easy to write this question off as one that is far too soon to ask. Yet it appears that this question, and a host of others, will be stuck with us for decades ahead, due to the irrational exuberance that poisoned the dot-com bubble. Hopefully, once the Web 3.0 bubble truly gets under way, and there definitely will be one, bloggers like her will remain the sober watchdogs that were missing from the tulipmania of the dot-com era. I fear that many of those now linkbaiting in their blogs with early cries of foul against Web 3.0 will rapidly change course once the rampant euphoria begins to flourish. They will do so to keep their linkbaiting going, and because the euphoria that accompanies Web 3.0 will make the current Web 2.0 mania seem harmless by comparison.

Why do I make such troubling assertions now, especially when you consider that I am one of those looking with great hope to Web 3.0? Although many are looking at Web 3.0 as the next extension of the social networking technologies pioneered in Web 2.0, and others are dismissing it as a marketing ploy to rekindle interest in the Semantic Web, I feel there is a stronger theme that will drive Web 3.0 to explosive levels.

First I need to make a crucial point about what I feel will be the primary driver of Web 3.0. I will return to my cautions on the upcoming hyper-euphoria that will accompany Web 3.0 in a few paragraphs. Please read on.

I feel that Web 3.0 will be characterized and fueled by the successful marriage of artificial intelligence and the Web. Artificial intelligence? Isn't that the Kool-Aid the Semantic Web community is drinking? Yes and no. The technologies considered pivotal to the Semantic Web are indeed considered by many to have their underpinnings in artificial intelligence. But most of the Semantic Web projects I've seen are focused squarely on the creation of, and communication between, intelligent agents that do the natural language and topical matching work transparently, behind the scenes, without requiring human intervention.

This approach may eventually be viable, but I feel it misses a key ingredient of Web 3.0 that will finally bring artificial intelligence to the forefront. Currently, the vast majority of artificial intelligence is embedded in niche areas of commerce, such as credit card fraud detection or the speech recognition application that converts your voice to text as you dictate a document. The reason, of course, is that we are still decades away from computers with the incredible and flexible pattern recognition capabilities of the human brain.

The reason Web 3.0 will lift artificial intelligence into the limelight is that it will fill in the technological gaps that currently hamper the key uses of artificial intelligence. It will do so by shunting the parts of a problem that require a human being out to human beings, with the help of the Web, in a manner that is transparent, massively parallel, and distributed.

Amazon has taken a unique and innovative step into this area with its Mechanical Turk web service. Yes, I know this is the second time I've written glowingly about Amazon in regards to Web 3.0, but as a web service junkie you have to love what they are doing. The Turk service allows developers to shunt the parts of their applications that require human intervention out to a paid group of participating workers, in a manner that mimics a standard web service call. This creates a standardized platform for utilizing human pattern recognition capacity in a modular manner. Google is experimenting with something similar in its Google Image Labeler game. From the game page:

"You'll be randomly paired with a partner who's online and using the feature. Over a 90-second period, you and your partner will be shown the same set of images and asked to provide as many labels as possible to describe each image you see."

The players have fun and Google gets thousands of images tagged with relevant text labels.
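Both services boil down to the same pattern: wrap human judgment behind an interface that looks like an ordinary service call. Here is a sketch of that pattern in code; every class and method name below is my own hypothetical mock, not Amazon's or Google's actual API:

```python
# Sketch of the Web 3.0 pattern: hand the human-judgment step to people,
# behind an interface that looks like an ordinary (slow) web service.
# Everything here is a hypothetical in-memory mock, not a real API.
import itertools
import time

class MockHumanComputeService:
    """In-memory stand-in for a human-computation web service."""
    _ids = itertools.count(1)

    def __init__(self):
        self.tasks = {}

    def submit_task(self, question, payload, reward_cents):
        task_id = next(self._ids)
        self.tasks[task_id] = (question, payload, reward_cents)
        return task_id  # in reality: an HTTP POST to the service

    def poll_result(self, task_id):
        # In reality a human worker answers; here we fake one instantly.
        return ["sunset", "beach"]  # canned "human" labels

def label_image(service, image_url):
    # To the calling application this is just another service call.
    task_id = service.submit_task(
        question="List labels describing this image",
        payload=image_url,
        reward_cents=2,
    )
    result = None
    while result is None:
        time.sleep(0.1)  # real human tasks take minutes or hours
        result = service.poll_result(task_id)
    return result

print(label_image(MockHumanComputeService(), "http://example.com/photo.jpg"))
```

The calling code never knows, or cares, that the "processor" answering the request was a person.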

Now let's take this bold new technology and extrapolate further. Suppose Second Life created games in which the players solved complex problems for fun, except that those problems were actually key commerce problems that needed to be solved?

For example, imagine a game where players compete to clothe a runway model that will be judged in a contest by other players. This game could very well be a job requisition submitted by a major fashion company that wants advanced market research on which clothes buyers will prefer. The virtual clothes in the game could be detailed in-game 3D objects that are exact duplicates of the fashion company's artwork for their clothing. The difference between this and someone simply holding a contest is the way it is structured: all the set-up, problem specification, and solution propagation aspects of the problem would be part of a standardized Web 3.0 service call instead of the ad hoc hand-crafting of a live virtual contest event.

This could be taken to an even more abstract level, where the problem has no direct mapping to a real-life business event like the fashion designer example, but instead requires a more subtle decision that needs human intervention. For example, a player in a game is presented with two different kinds of sounds coming from different directions. He or she is told to follow the sound that feels the most pleasing in order to find the treasure. This could actually be a sub-job submitted by an automotive company that is trying out different interior designs for a car. As each interior design is acoustically modeled, an MP3 file is generated using various environmental test sounds, which are then punched into the game. The game player is having fun chasing ambient sounds looking for treasure, but is actually telling the car manufacturer which interior acoustic space is more pleasing. Since there could potentially be thousands of players, the car manufacturer can have thousands of sound files analyzed in parallel, an immense time savings. In the end, the players have fun, the game company earns extra revenue for this service, and the automotive company saves money by avoiding designs that people won't like because they sound bad.

It's not hard to see that once this kind of service becomes popular, other additions to the typical service call would include the number of redundant tests to run for each case, plug-ins for collecting textual input or votes from the task assignee (the player, in the game examples), and so on.
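To ground that, here is one purely illustrative shape such a service call's task specification might take, using the acoustic example above; every field name is my own invention:

```python
# Purely illustrative task specification for a hypothetical Web 3.0
# human-computation service call; every field name is my own invention.
acoustic_test_task = {
    "job_id": "auto-interior-acoustics-007",
    "task_type": "preference_comparison",
    "stimuli": ["interior_a.mp3", "interior_b.mp3"],  # punched into the game
    "question": "Which sound feels more pleasing?",
    "redundancy": 1000,     # how many independent players test each case
    "plugins": {
        "collect_votes": True,       # tally which stimulus players chose
        "collect_free_text": False,  # no written feedback for this job
    },
}
```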

I will conclude this article with a warning about the coming hyper-euphoria. I saw first-hand how people lost their fiscal sanity during the first wave of artificial intelligence hype a few decades ago. Can you imagine how easily investors will be hypnotized by the spell of new technology offerings? Offerings with Star Trek-sounding buzzwords that will make some of the insane claims on the average dot-com prospectus seem tame by comparison. The raw fear of being left behind by technologies and services so futuristic that images of flying cars will fill people's heads will make wallets gush cash again and retirement plans evaporate.

This will only happen if we haven't learned our lesson from previous manias. In closing, I have this to say to the doubters and the pundits out there currently warming up to covering Web 3.0, whether for or against: stay sharp and focused. We'll need you.


Friday, November 17, 2006

Web 3.0 - Amazon Leading The Charge

In a November 14, 2006 post on the Amazon Web Services Blog, Jinesh describes the accomplishments of Enomaly, who installed a Windows Server 2003 instance on Amazon's Elastic Compute Cloud using QEMU, an open source processor emulator. As I mentioned in my last post, Amazon is positioning itself to be one of the critical components of Web 3.0. Web 3.0 applications will require computing power far beyond that needed to serve up static web pages or database lookups. To handle a blizzard of realtime user queries employing CPU-heavy technologies such as natural language parsing, adaptive data mining, and other Web 3.0 techniques, a cluster of high performance computers will be necessary.

Now, of course, many companies have such clusters available to them internally. However, for the Web 3.0 movement to explode, it will be necessary to make that kind of large-scale computing power available to the much larger community of small developers. With innovators like Amazon eliminating the need for an in-house computing cluster, a critical obstacle to widespread innovation is removed from the small developer's path. In the end, the dorm room and the garage can become viable playing fields for fresh new Web 3.0 solutions and innovation.

In the near future, the Internet giants will make web services available that can be used as the underpinnings for Web 3.0 applications. For example, companies like Amazon, Yahoo, Google, and IBM may soon offer natural language parsing as a web service, in the same way that Google currently offers machine translation. Once advanced technologies like these are turned into readily available web services, coupled with the widespread adoption of remote computing cluster resources like Amazon's Elastic Compute Cloud, the Web 3.0 movement can move from the pundit's pulpit into the life of the average Internet user. Enomaly's use of QEMU is further proof that Open Source software will be a powerful driving force in the creation of Web 3.0 applications.
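As a sketch of what such a service might look like to a dorm-room developer, here is a hypothetical natural-language-parsing call; the endpoint, parameters, and reply format are entirely my own invention, since no such public service exists as I write this:

```python
# Hypothetical sketch: natural language parsing as a rented web service.
# The endpoint, parameters, and reply format are invented for illustration.
import json
import urllib.parse
import urllib.request

def parse_sentence(sentence):
    """Ask a (hypothetical) remote cluster to parse a sentence."""
    query = urllib.parse.urlencode({"q": sentence, "format": "json"})
    url = "https://nlp.example.com/parse?" + query
    with urllib.request.urlopen(url) as reply:
        return json.loads(reply.read().decode("utf-8"))

# The grammar models and CPU-heavy work live on the provider's cluster;
# the small developer pays per call instead of buying hardware:
# tree = parse_sentence("Web 3.0 will lift AI into the limelight")
```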



Sunday, November 12, 2006

Web 3.0 - In The Beginning

This blog has now changed from its previous title and focus, Knowledge Management, to its new title: Web 3.0.

Web 3.0. Buzzword? A new term to launch the next wave of investor-financed startups that don't have a viable business plan? Or is it the "next big thing" we have all been waiting for? We barely had a chance to sit down, savor, and sift through the meaning and madness of Web 2.0, and now Web 3.0 is upon us.

Am I skeptical? You bet. But I'm also hopeful. I've been in the artificial intelligence area for decades now, and I have watched with great expectation many hopeful technologies that offered great promise but either died with a whimper or, if they succeeded, did so in a relatively tame incarnation (tame, yet important).

There is a difference this time. Several core technologies are coming on board to liberate problem solving from the secret dens of I.Q. warehouses like Google and IBM, and move it to the dorm rooms and garages of bright, undercapitalized inventors and entrepreneurs. Bold new web services like Amazon's S3 storage service and its Elastic Compute Cloud allow anyone access to a Leviathan network of computing power and storage. On the software side, the Open Source movement is stronger than ever, putting hugely powerful software packages, previously available only to well-financed companies, in the hands of budding young programmers worldwide.

I won't list the many other power shifts in knowledge computing, and the companies behind them, in this post. But I will be covering them in the posts to come. Stay tuned. This is either going to be a wild ride into a brighter future, or the makings of a hilarious episode of The Daily Show circa 2010.
