I just finished my presentation at Google I/O. It was a real blast, and I met a lot of really cool, smart people.
To celebrate the occasion, I am releasing an alpha of my jQuery-API-clone-in-GWT today. Visit gwtquery.com to download the source. It's a partial implementation at the moment, with no deployable build or build script; expect that to change in the near future. I welcome patches/contributions to flesh out the API and add the missing functionality.
-Ray
Thursday, May 29, 2008
GwtQuery 0.1 alpha released
Posted by Timepedia at 7:48 PM 4 comments
Monday, May 12, 2008
Decentralizing the Web
I have tried to keep this blog free of politics and opinion and centered on implementation and algorithms, but lately certain things have bugged me, so for the first time I'm climbing aboard the soapbox. These opinions are not those of my company.
Isn't the web fundamentally decentralized?
The old days
I've been on the internet/usenet/netnews since the mid-80s. In the early days, my communications mostly occurred in news readers, email clients, Unix talk clients, and later MUDs and IRC. In those days, the net was, ironically, more federated and decentralized. Servers and clients ran on mostly heterogeneous flavors of Unix, which practically demanded that most communication server/client code be distributed as source. Servers were typically run from college campuses on modest machines, so the architectures tended to be federated. The lack of a single, uber GUI client tended to drive people to invent specialized protocols for each application, and the poor portability of clients, coupled with eager college students, tended to produce multiple server/client implementations. There was no single news server, no single email host, no single chat server. Granted, big sites did exist (UUNET), but no one server held a virtual monopoly as the sole provider of channel services.
Maybe it's nostalgia, but in the pre-web days, in the spirit of the early IETF, it seems to me that engineers more commonly architected for the federated case, architectures were more open to inspection, and people cooperated openly.
The web
The browser ushered in a whole new way of designing applications, with some interesting repercussions. In the pre-web days, if I wanted to create a new messaging platform, I'd have to write a protocol spec, write the server and client, and get tons of people to port it and run servers. Something like RSS wasn't as important either, because the protocols already defined machine-readable ways to syndicate content.
The browser changed all that. Since the client portion was now handled by a third party and ported to every piece of hardware known to man, and since it provided a very flexible way of displaying custom user interfaces, focus shifted to building big centralized servers, which blossomed when coupled with declining hardware costs and increased network bandwidth.
With the web, new "channels" sprang up everywhere: AOL/Yahoo/MSN/QQ/ICQ/etc., and even today there's Twitter, there's your Facebook wall and inbox, and tons more. What's disturbing is that these communication channels are often proprietary and have a single point of failure. If Facebook or Twitter goes down, you're screwed. If my SMTP/IMAP server goes down, I'm screwed, but 50 million other people aren't.
Oh, this is about Twitter crashing again
Yes and no. I was irked into posting this while reading a bunch of blogs whose authors simultaneously defend Twitter's downtime, insist that it is virtually national infrastructure in importance, yet assert that it should not be decentralized.
HUH?
First of all, I think it is disgraceful that, in 2008, instant messaging is still balkanized. The IETF standardized XMPP a long time ago, and implementations have been proving their worth ever since. Yet have AOL, MSFT, Y!, et al adopted it? No, because they don't want to relinquish control. Remember all that talk of a truce in the late 90s? It never panned out.
Blogging and RSS are decentralized; there is no single, universal host of blogs. Imagine if you could not read or write blogs without having a blogger account! So why should microblogging be centralized? The arguments as to why you cannot federate Twitter are pretty weak, IMHO. Communication channels are like roads: they are infrastructure, and they fundamentally affect the information economy. It doesn't make sense to me that such fundamental services should sit behind a single point of failure, nor be closed in implementation. Besides, federating Twitter would probably produce new business opportunities for aggregators.
Not just about Twitter
If social networks are as important as everyone makes them out to be, why should I have to log into MySpace or Facebook to access them? Why must MySpace or Facebook applications only run on those sites? Shouldn't I be able to access my social network anywhere, on any site, in any application, web or otherwise? On the desktop, I can access my Address Book or corporate LDAP server in many applications, but I can't do that on my own site, without becoming a Facebook application.
Isn't it time to federate social networks?
The OpenID, OAuth, OpenSocial, and Data Portability initiatives go a long way toward the architecture needed to do this. FOAF/XFN provide possible ways to discover the social graph, but pieces of the puzzle are still missing before the user experience can approach that of centralized social networks, or before developers get easy access to building applications against federated data. Google has been doing a good job thus far, but perhaps the IETF and W3C should create working groups to study the issue as well.
Peer to Peer Social Networking
One interesting possibility is using something like Gears to store and replicate your social network. That is, take my social network offline. When I add you as a friend, I could store that information locally on my machine, as well as optionally broadcasting it to several public services which also record the information. Third party websites could embed social applications which use Javascript snippets to ask my permission to run queries against my Gears database, or use OAuth to ask for the data from public services I replicated to. This could be more than a simple contacts list, since authenticated peer-to-peer exchanges (you share your offline social network with me), or social graph crawl/share services, would enable more client side aggregation.
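The flow described above can be made concrete with a small sketch. All names here are hypothetical; in a real Gears deployment the store would be the Gears SQLite database (created via `google.gears.factory.create('beta.database')`), but a plain in-memory object stands in so the permission-gated query pattern is visible end to end.

```javascript
// Schematic sketch (names hypothetical): a locally replicated social
// graph that a third-party page may query only after the user grants
// permission to its origin. A plain object stands in for the Gears
// database so the flow runs anywhere.

function SocialStore() {
  this.friends = [];   // the locally replicated social graph
  this.grants = {};    // origins the user has approved
}

SocialStore.prototype.addFriend = function (id, name) {
  this.friends.push({ id: id, name: name });
};

// The embedding site calls this; the user-consent prompt is stubbed
// out as a boolean so the example stays self-contained.
SocialStore.prototype.requestAccess = function (origin, userApproves) {
  if (userApproves) this.grants[origin] = true;
  return !!this.grants[origin];
};

// Queries from unapproved origins are refused outright.
SocialStore.prototype.query = function (origin, predicate) {
  if (!this.grants[origin]) {
    throw new Error('origin not authorized: ' + origin);
  }
  return this.friends.filter(predicate);
};

// Example: a widget on example.org asks to read the local graph.
var store = new SocialStore();
store.addFriend('alice', 'Alice');
store.addFriend('bob', 'Bob');
store.requestAccess('http://example.org', true); // user clicks "Allow"
var hits = store.query('http://example.org', function (f) {
  return f.name.indexOf('A') === 0;
});
console.log(hits.length); // 1
```

The key design point is that the data never leaves the user's machine unless the user explicitly approves the asking origin, which is exactly the property a centralized social network cannot offer.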
Social networks typically provide enhanced services like profile pages, server-side 'shared' data storage for applications, and activity streams. Designing a federated system for these is an interesting exercise. Federated publish/subscribe message queues are not without precedent, nor is distributed storage. I'm not saying it's a solved problem, but shouldn't we at least try?
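To illustrate why federated activity streams are plausible, here is a minimal sketch of cross-host publish/subscribe. Everything here is hypothetical and simplified: each "host" is an object, and delivery is a direct method call standing in for an HTTP POST between servers, but the essential property holds: no single server needs to see the whole graph.

```javascript
// Schematic sketch (all names hypothetical): federated publish/subscribe
// for activity streams. Each publisher's home host keeps its own
// subscriber list; a subscriber on another host is just another entry,
// addressed by host + user.

function Host(name) {
  this.name = name;
  this.inboxes = {};      // user -> received activity items
  this.subscribers = {};  // publisher -> [{host, user}]
}

Host.prototype.subscribe = function (publisher, subHost, subUser) {
  (this.subscribers[publisher] = this.subscribers[publisher] || [])
    .push({ host: subHost, user: subUser });
};

// Publishing fans the item out to every subscriber's home host,
// local or remote; "delivery" here is a direct call standing in
// for a server-to-server HTTP POST.
Host.prototype.publish = function (publisher, item) {
  var subs = this.subscribers[publisher] || [];
  for (var i = 0; i < subs.length; i++) {
    subs[i].host.deliver(subs[i].user, publisher + '@' + this.name, item);
  }
};

Host.prototype.deliver = function (user, from, item) {
  (this.inboxes[user] = this.inboxes[user] || [])
    .push({ from: from, item: item });
};

// Two independent hosts; bob@hostB follows alice@hostA.
var hostA = new Host('hostA.example');
var hostB = new Host('hostB.example');
hostA.subscribe('alice', hostB, 'bob');
hostA.publish('alice', 'is trying out federated microblogging');
console.log(hostB.inboxes.bob.length); // 1
```

This is the same shape as federated email or news: each host is authoritative for its own users, and the "social graph" is just the union of per-host subscription lists.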
In any case, I'd still like to partially solve the problem by taking my social network offline, and enabling social applications which need to probe my graph to do so.
I'm tired of having to log into FB every day, and of importing my friends into each new site I sign up with, just as surely as I am tired of creating a new username/password account on every site just to log in.
Let's return to the good old days: get away from single points of failure and cathedrals, and make the social web truly a fundamental piece of internet architecture, like TCP/IP, DNS, HTTP, and SMTP, and less about Bubble 2.0 startups.
-Ray
(It's late; I'll probably regret this rant when I read it tomorrow.) I now return you to your regularly scheduled engineering-related posts.
Posted by Timepedia at 12:50 AM 2 comments
Wednesday, May 7, 2008
Google I/O and GWT Extreme!
I've been invited to do a session at the upcoming Google I/O conference. The initial idea was to do a presentation on Syndroid, but I felt that was too specific and not generally useful to the audience. What would be useful is to cover general-purpose techniques in GWT 1.5 for bridging Javascript execution environments, as well as high-performance tricks. Thus was born: GWT Extreme!
For those attending Google I/O who want to see this session, I recommend first attending Bruce Johnson's presentation on Deferred Binding and Bob Vawter's presentation on Linkers. In this session, I will go over the following techniques:
- GQuery and compiling CSS selectors at compile time
- The Android Native Interface, packaging GWT applications as Android apps and automatically making calls to native Android Java from within Javascript.
- The ActionScript Native Interface, compiling GWT code to SWF, generating RPC stubs, and making calls between browser GWT and Flash GWT code
- Writing Gears workers in GWT
- If there's time, extreme graphics performance with GWT
As the session is only one hour long, I will not go in-depth with source examples as I did in my Deferred Binding presentation; rather, I will describe the overall architectures and the steps needed to achieve them in general terms. I may follow up later with more in-depth blog posts containing source code.
Finally, I am planning to release GQuery (or GWTQuery) at Google I/O; there will be a usable beta-quality build on the project's code hosting site.
-Ray
Posted by Timepedia at 12:31 PM 2 comments