Thursday, October 19, 2006

PCs and Web 2.0, Part 1: What made PCs so successful

There is enough written about the success of the PC and the overall impact it has had over the decades: PCs are credited with increased productivity, economic gains, and the creation of the Information Technology business, so I don’t have to delve into that very much. What I am trying to do is look at PCs and their role in the changing landscape of services-based applications; call it Web 2.0, SOA, or some flavor of SaaS. I am using the term “PCs” very loosely here to refer to PC-type devices, the clients/desktops/notebooks that end users use.

PCs became successful because they were the ultimate “Multiple-Application” player. Before the PC, compute capability was something that people with halos around their heads worked on in the safety of cold, closed enclosures. PCs changed all that: suddenly everybody had a platform on top of which they could write real-world applications that solved real needs. That resulted in increased overall productivity, but it also gave us multiple booms, including the massive Y2K spending.


But since the real PC innovation of the early 80s, things have not changed much. In fact, if you look at the various writings on the history of the PC, almost all of them stop around 1985. We had the Linux revolution, but the fundamental physical form factor and hardware spec never changed.

The PC brought compute to the common man, and hence replicated (in small form) the compute characteristics of the standalone mainframe. Certainly it provided a steady platform for innovation to happen at the application level, tied to the operating system. The client-server world of application design was always tuned toward replicating portions of the computation and then doing some form of update between client and server to get applications to work together. But for all practical purposes these were individual units of compute. PCs were not inherently designed to work in a connected world; Windows for Workgroups (3.11) was almost an afterthought.

The PC of the 90s performed three important tasks. Its foremost role was that of executing applications: the process by which the computer grinds through bits of logical commands, interprets them, and produces intelligent output. This is what made Word run and Photoshop do its magic.

By its very nature the client PC also needed to support the role of an “interaction interface,” its second major role: the I/O interface that makes the computer understand humans and vice versa.

The third important role it plays is that of an information repository (mainly a file store today): the repository that stores your life at home and the vital files/data that make businesses work.

So the PC performed three basic functions: Application Execution, Interaction, and Storage.

Even the advent of networking did not change this much. Before networking, client-server compute meant you carried around physical copies of data; in the networked world, you could treat each client node as an extension of the server’s file system, and vice versa. (Even that was not seamless, though… topic for some other day.)

The advent of the web browser and the pervasiveness of the Internet (now called Web 1.0) did not change that much either. All it did was allow distributed client nodes to point and click at stuff on the other end: now you could uniquely address files published anywhere on the World Wide Web and bring them to your machine for viewing or manipulation. The user experience was still very limited, constrained by the requirement that files now had to work in a uniform way across multiple platforms and operating systems. So the browser took the easy route: target the least common denominator. AJAX changes that a bit, and is finally bringing client-server compute to the browser.
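To make that concrete, here is a minimal sketch of the AJAX pattern (the URL and element id are invented for illustration): instead of fetching a whole new page, a bit of script asks the server for just the data it needs and patches the page in place.

    // Minimal AJAX sketch; "/quotes/latest" and the "quotes" id are hypothetical.
    // Instead of a full page reload, ask the server for a fragment of data
    // and update only the affected part of the page.
    function refreshQuotes() {
      var xhr = new XMLHttpRequest();           // older IE needs ActiveXObject instead
      xhr.open("GET", "/quotes/latest", true);  // asynchronous request
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          document.getElementById("quotes").innerHTML = xhr.responseText;
        }
      };
      xhr.send(null);
    }

That round trip behind the page is what lets a browser application behave like a client-server application rather than a page viewer.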

One of the biggest things the Internet and the popularity of HTML/script showed was the real potential of separating the user interface (HTML) from code (JavaScript). Certainly the critics would say HTML was not the first, but the point is that HTML brought it to the masses (technology that is not for the masses is useless; I have seen enough of that in my days at IBM Watson labs). Style sheets and the DOM finally drove the point home that separating the UI from the underlying logic has huge potential, not only to address transformation but also to scale and to support personalization. This is the groundwork that started us toward loosely bound UIs; finally, mashups are ready to take over where the promise of composite applications left off.
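As a toy illustration of that layering (file names, ids, and data are all invented for the example): the markup declares what is on the page, the style sheet says how it looks, and the script attaches behavior through the DOM, so any one layer can change without touching the other two.

    <!-- page.html: structure only -->
    <link rel="stylesheet" href="page.css">
    <ul id="news"></ul>
    <script src="page.js"></script>

    /* page.css: presentation only */
    #news li { font-family: sans-serif; color: #333; }

    // page.js: behavior only, wired to the markup through the DOM
    var items = ["item one", "item two"];   // stand-in data
    var list = document.getElementById("news");
    for (var i = 0; i < items.length; i++) {
      var li = document.createElement("li");
      li.appendChild(document.createTextNode(items[i]));
      list.appendChild(li);
    }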

What Web 2.0 drove home very clearly (and what other industry trends like XUL, Laszlo, JSF, and ASP+ were already doing) was this notion of separating UI from code. Coding the UI in a declarative form that can be processed independently of the application logic has become the cornerstone of UI rendering, including Flex and XAML. And with pervasive connectivity and the increasing adoption of broadband, the consumer now has a thicker/faster pipe coming down the last mile. We now have the perfect ingredients to finally start separating Application, Storage, and Interaction.
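Here is a rough sketch of the declarative idea, in the spirit of XUL or Flex’s MXML but with all names invented for this example: the UI is described as plain data, and a small interpreter turns that description into live widgets, while the application logic lives in a separate table of handlers.

    // A toy declarative renderer (all names invented for illustration).
    var form = {                        // the UI described as plain data
      kind: "panel",
      children: [
        { kind: "label",  text: "Name:" },
        { kind: "button", text: "Save", onClick: "save" }
      ]
    };

    var handlers = { save: function () { alert("saved"); } };  // app logic, kept apart

    function render(node, parent) {     // walk the description, build real widgets
      var el = document.createElement(node.kind === "button" ? "button" : "div");
      if (node.text) el.appendChild(document.createTextNode(node.text));
      if (node.onClick) el.onclick = handlers[node.onClick];
      parent.appendChild(el);
      var kids = node.children || [];
      for (var i = 0; i < kids.length; i++) render(kids[i], el);
    }

    render(form, document.body);

Because the description is just data, the same UI could be re-rendered for a different device or restyled without touching the handlers.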

Suddenly you are in a situation where application execution can be separated from interaction, and storage can be distributed. The PC no longer has to play the roles of data repository and application execution engine; browsers have taken it upon themselves to fulfill the role of a platform- and OS-agnostic interaction engine. And there is anecdotal evidence that we are slowly moving in that direction. For example, BW reports that in the last year the two niches in which PC software has done very well (other than the OS) are “security” (things you buy to make sure you have a working PC the next time you want to do something useful) and PC games. (One exception is TurboTax, which shows people still worry about where their sensitive data resides. But that could be a very US/Western phenomenon.)

With pervasive connectivity, and programming models that make it easy to separate application, data storage, and interaction, it makes a lot of sense to pull some of the computation back into the cloud: you get a better view of the application, and hence can build faster, more adaptive applications.

Accessing “Applications Anywhere, Anytime” will continue. This mandates that applications scale and adjust to all device forms and modes of interaction. It also means that an application cannot assume its interaction type; Google Earth, where users have come up with totally new ways to interact with it, is a great showcase of this capability. Certainly we will see richer interactions as more and more content is created and the edge nodes start getting sucked into the cloud.

Also, separating out data so that it can sit in the cloud opens up all sorts of ways in which the data can be used (maybe that is what Tim means when he says “Data is the next Intel Inside”). Interaction is the glue that connects the user to the machine and will always live at the edge, and there sits the biggest opportunity for an edge compute device, or the “new PC.”

All is not lost for the old PC; more on that next week…

Sony Mylo ready to rock

http://gigaom.com/2006/10/13/mylo-t-mobile/

.. Sony announced a deal with T-Mobile that gives Mylo-users a year of free access to T-Mobile’s WiFi hotspots ...


This fixes a major hole in the Mylo strategy, and now I think the Mylo is ready to rock. It already has deals going with application vendors, including Google Talk.