Hello Jon: while savoring one of the best bowls of
marinara spaghetti that I have ever eaten, I washed it down, so to speak, with
the chance discovery of your column on "The State of Rich Web Apps"
[IW 11.08.04]. After reading that fine article, I suggest that the
architecture of Rich Internet Applications should grow out of existing web
paradigms, but not be stifled by them.
Your ode to the URI is well taken. The URI, simple though it be, rivals the invention of the wheel within our IT world.
I am not bothered by client/server labels, or by rich-client versus server-centric architectural claims and slurs. These terms are getting tired and rapidly losing meaning. I do believe, however, that your article hints at the shape of things to come for RIA/generic agent/user agent architecture.
Agreed, RIAs are protocol-oriented, meaning they speak application-level protocols (HTTP and the like) that ride on top of the usual TCP/IP data transports: TCP, UDP, IP, etc. This feature is actually the next step forward, not an undesirable weirdness.
And so it raises the question: what is the proper role of the user agent (to alter terminology somewhat)? As you note, this involves a shifting set of design tradeoffs between performance, bandwidth, and convenience of programming in the most general sense of the word, including "content creation": ASCII terminals (I truly wish that "terminal" had not already been bound to obsolete technology), X Windows, PC fat clients, HTML thin clients, Flash, RealPlayer, etc.
But first, the issues on the table also raise the question: what is the role of the network resources/services, taken collectively? I would answer in a somewhat simple-minded way: the role of the "network" [services] is to take instructions from the user (or, more generally, the "edge node"), confer among themselves as needed in order to do something useful, and return whatever results may be appropriate, while keeping whatever appropriate side effects remain within the network (various forms of persistence, shared communication, and computation). In the old days, Sun called this view "the network is the computer." Some of us were and are thick enough to take it literally.
Thus the network resources should speak pure machine-to-machine dialogues, including the network side of the user-agent-to-network-resource communication, a.k.a. the application protocol. People have no business inside networks. So, to borrow SGML and XML terminology, networks should speak pure content within themselves.
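To make the "pure content" idea concrete, here is a minimal sketch - the order format is invented for illustration. The network service emits structured data only; no fonts, colors, or layout travel over the wire, and the user agent extracts what it needs:

```python
import xml.etree.ElementTree as ET

# A hypothetical "pure content" message as a network service might emit it:
# structure and data only, nothing about presentation.
content = """<order id="42">
  <item sku="A-17" qty="3"/>
  <total currency="USD">29.85</total>
</order>"""

root = ET.fromstring(content)
# The user agent, not the server, decides how (or whether) to render this.
print(root.find("total").text)
```

How this `<total>` ends up on a screen - bold red text, a spoken sentence, a cell in a grid - is entirely the edge node's affair.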
It is most evil to let servers generate user presentations directly. The devil is in the misuse of the MVC paradigm. Model-View-Controller made sense on the micro scale for localized GUIs: it contained the indeterminism of user dialogues while allowing some separation of concerns between presentation (view) and model (content). The bastardization of these MVC OO concepts into JSP and ASP pages has set the Internet back more than any other single misstep - which is interesting, considering that these are widely considered to be the pinnacle of Internet evolution. Well, at one time so was Tyrannosaurus Rex.
Back to our main theme: we can now also see what the user agents must do at the edge nodes. User agents take or create pure network content and translate it into forms suitable for interaction with human physiology, psychology, and cognitive abilities. To boil it down further, a well-behaved, network-oriented user agent is a glorified bidirectional style sheet for the underlying application protocol(s).
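A sketch of the "bidirectional style sheet" idea - the stanza shape and field names here are hypothetical, not from any real protocol. The outbound direction turns protocol content into a human-readable view; the inbound direction turns human input back into protocol content:

```python
import xml.etree.ElementTree as ET

def render(stanza: str) -> str:
    """Outbound direction: protocol content -> human-readable view."""
    msg = ET.fromstring(stanza)
    return f"{msg.findtext('from')}: {msg.findtext('body')}"

def compose(author: str, text: str) -> str:
    """Inbound direction: human input -> protocol content."""
    msg = ET.Element("message")
    ET.SubElement(msg, "from").text = author
    ET.SubElement(msg, "body").text = text
    return ET.tostring(msg, encoding="unicode")

stanza = compose("jon", "hello")
print(render(stanza))  # jon: hello
```

Nothing in the protocol itself dictates the `"name: text"` rendering; a different agent could render the same stanza as speech or as a table row.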
A surprising but significant corollary is that we don't need the actual view or model to get started - we just need the application protocol that both ends must be able to speak. True, the protocol has to support the functionality required for the interactions between the user agent on the edge node and the model on the server(s). But how to render that protocol is entirely the user agent's business, and how to implement the model is strictly the servers' business.
Among other things, this allows the deployment architecture to mix and match different user agent views with different implementations of the model - consider POP3 clients and servers. Thus the application protocol represents the constancy (between revisions!) while both the model and the user agent are the points of variability! Given the history of the Internet this should not come as a surprise, but to the majority of software practitioners I rather think it will come as a great shock. This insight even implies that APIs are far less fundamentally important than protocols, which of course makes sense in highly distributed systems. Consider that in 1993 the API-oriented CORBA grabbed the lion's share of hype, but the protocol-oriented HTML/XML overwhelmingly came to surpass it in importance (as did MIME data types, on a quieter scale). The Java "APIs are everything" aficionados may get some heartburn from this reality.
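A toy sketch of the POP3 point, using an invented, much-simplified command set loosely modeled on POP3. The protocol is the constant; the server's internal model and the client's rendering each vary freely, and any client that speaks the commands works against any conforming server:

```python
class ToyMaildrop:
    """One possible server-side model; its internals are its own business."""
    def __init__(self, messages):
        self.messages = list(messages)

    def handle(self, command: str) -> str:
        # The only shared contract: a tiny, invented command vocabulary.
        verb, _, arg = command.partition(" ")
        if verb == "STAT":
            return f"+OK {len(self.messages)}"
        if verb == "RETR":
            return "+OK " + self.messages[int(arg) - 1]
        return "-ERR unknown command"

# Two different "user agents" rendering the same protocol replies differently.
def terse_client(server):
    return server.handle("STAT")

def chatty_client(server):
    count = int(server.handle("STAT").split()[1])
    first = server.handle("RETR 1")[4:]  # strip the "+OK " status prefix
    return f"You have {count} message(s); first: {first}"

drop = ToyMaildrop(["hello from marinara-land"])
print(terse_client(drop))
print(chatty_client(drop))
```

Swap in a server backed by a database instead of a list, or a graphical client instead of these text ones, and nothing else needs to change - only the protocol held still.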
Now comes the next step: how do we fruitfully build these well-defined user agents that translate application-level protocols between humans and networks?
One thing we can note off the bat: view presentation - corresponding to traditional page control and page flow - in these user agents naturally tracks the application protocol. Thus, instead of control flow being determined in an intermediate view generator such as a JSP or ASP server, the user agent directly follows the data flow of the protocol itself. In terms of computer architecture, the user agent uses a true dataflow architecture, whereby data flow equals control flow - or, to put it another way, control flow and data flow are inextricably bound together. What a huge simplification.
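A minimal sketch of that dataflow style, with invented stanza types: the agent keeps no page-flow graph at all - the next "page" is simply whatever the arriving data says it is:

```python
# Dispatch table: each protocol stanza type maps directly to a view action.
# The stanza types and renderings are hypothetical.
handlers = {
    "question": lambda payload: f"render a form asking: {payload}",
    "result":   lambda payload: f"render an answer page: {payload}",
    "error":    lambda payload: f"render a warning: {payload}",
}

def agent_step(stanza_type: str, payload: str) -> str:
    # No intermediate page-flow controller: control flow IS the data flow.
    return handlers.get(stanza_type, lambda p: f"ignore: {p}")(payload)

print(agent_step("question", "favorite pasta?"))
print(agent_step("result", "marinara"))
```

Compare this with a JSP/ASP-style design, where a server-side controller would have to anticipate and encode every question-then-answer page sequence in advance.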
The agent's dataflow architecture also answers the question: where did the controller from the MVC pattern go? The answer: into the application protocol controllers on either end. A separate controller by itself was simply an artifact of low-level, tightly coupled programming - pure overhead, and not desirable in itself. I personally think that many systems should be designed around internal protocols even at a low level, but that is another issue for now.
Next, how do we find these user agents, and how do we compose them together?
The vast majority of mundane UI primitives (i.e. the human side of the user agent) can be handled with standard web technology, pulling apart or composing XML-based application protocols using "document models" or "stanzas" à la Jabber, as the case may be. Basic screen real estate, dialogues, etc. can be handled with these standard web presentation and interaction tools - all of which understand URIs. Also, each of these technologies can be loaded in pieces and merged with others on the fly. The user agent's internal pieces can, in principle, originate all over the network, subject to security principles.
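For instance, pulling apart a Jabber-style presence stanza with ordinary XML tooling and mapping each child element onto a mundane UI primitive might look like this (the element-to-widget table is hypothetical):

```python
import xml.etree.ElementTree as ET

# A presence stanza in the general shape Jabber uses; contents invented here.
stanza = """<presence from="alice@example.org">
  <show>away</show>
  <status>out for spaghetti</status>
</presence>"""

# Hypothetical mapping from stanza elements to standard UI primitives.
widget_for = {"show": "status-icon", "status": "tooltip"}

root = ET.fromstring(stanza)
for child in root:
    print(f"{widget_for.get(child.tag, 'label')}: {child.text}")
```

The stanza stays pure content; the widget table is purely the agent's affair, and a different agent could map the very same elements onto sounds or console text.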
In this scheme, when standard web technologies are not enough, they can be augmented with content handlers implemented as plug-ins. J2ME also takes this content-handler approach. So the standard web technologies represent the constancy within the user agent, and the content handlers that display weird data types represent the variability. I would tend to restrict user-agent-side Java, if used at all, to implementing content handling, and leave the standard web technologies largely in charge of the UI. But to its credit, Java does know how to use URIs, including how to compose itself by downloading.
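The content-handler split can be sketched as a registry keyed by media type - the standard path is the constant, the registered plug-ins the variability. The media type and handler below are made up for illustration:

```python
handlers = {}

def register(media_type):
    """Decorator that installs a plug-in content handler for one media type."""
    def wrap(fn):
        handlers[media_type] = fn
        return fn
    return wrap

@register("application/x-weird-chart")
def chart_handler(data: bytes) -> str:
    # A stand-in for a plug-in that knows how to draw some weird data type.
    return f"custom chart widget ({len(data)} bytes)"

def display(media_type: str, data: bytes) -> str:
    handler = handlers.get(media_type)
    if handler:
        return handler(data)      # variability: the plug-in path
    return data.decode("utf-8")   # constancy: the standard path

print(display("text/plain", b"hello"))
print(display("application/x-weird-chart", b"\x00\x01\x02"))
```

The standard path never changes as new handlers arrive; a handler could itself be fetched from a URI, subject to the security restrictions noted above.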
This scheme also works well with capability-based security controlling the protocol on the model end as well as the generation of the user agent (although that does require restricting how widely over the Internet the agent may roam to compose itself).
I am curious whether these ideas strike a chord, as we are at the start of working out the full architecture.