Tuesday, January 29, 2008

Project Syndroid: Synthesis of GWT and Android for Platform-Independent Gadgets

Over the course of the last few months, I've steadily documented my work on producing portable, 'cloud-safe' code that runs anywhere, both in the browser and in native environments. Now I'd like to take it a step further.

Project Syndroid


My vision is to produce a high-level Java Gadget API that allows authoring both OpenSocial- and Gears-enabled gadgets that can run in a variety of containers and platforms, the biggest differentiator being the ability to deploy to Java-based phones like Android. The presentation below goes into a little more detail about what I'm proposing and why.



I Need the A-Team (cue music)


Project Syndroid is my submission to the Android Developer Challenge. I can't do it alone; I don't have enough time to dedicate to doing all the platforms. However, I do have a handle on doing the GWT version. What I would like to do is find two other expert developers, one in Android, and one in building Gadgets/Widgets for Sidebar/Dashboard/Google Desktop/Konfabulator, and together, as the syndroid-team, we will split any prize money if we win anything in the developer challenge. I'll take on the task of hashing out the GWT plumbing, while the other two developers work on the Android and widget-container code, plus the packaging/build/deploy tools to build all versions.

I have set up a group, syndroid@googlegroups.com, for those who would like to discuss the project/idea further, or become one of the core syndroid-team members responsible for major development work.

-Ray

Chronoscope Demo in Flash + WHATWG Canvas on IE

Since its inception, I've been talking about the design goal of Chronoscope as a scalable visualization platform that runs in any environment. However, until recently, one big hole in that vision remained: Internet Explorer. IE does not support the <CANVAS> element; it does, however, support a retained-mode/scenegraph-style markup language called VML. Many attempts have been made to emulate the canvas using VML, but they all leave something to be desired, especially when it comes to performance.

This leads naturally to thoughts of using Flash. It just so happens I am mostly finished with a Canvas implementation for IE using Flash, and here is a demo of Chronoscope using Flash. Oh, it runs on Internet Explorer now! (Edit: Flash Version 9 plugin required)


Nobody has ever done this before

"That's why it's going to work"

To be fair, many, many people have tried this before. Paul Colton, for example, with AFLAX, and numerous others have floated the idea or prototyped it. When I started, I did not want to waste time duplicating effort, so I searched for any complete WHATWG Canvas emulations I could find for Flash, but found none. And to be sure, there were problems with many of the prototypes that turned up, such as trying to map JS calls to CanvasRenderingContext2D directly to Flash calls via Flash's ExternalInterface, which, needless to say, would be incredibly slow.

I did find one interesting library that would turn out to help me a lot: AS Canvas by MixMedia, an ActionScript implementation of most of the WHATWG Canvas API, not for the browser, but for Flash developers. I am neither a Flash developer nor an expert, and while I was vaguely familiar with the MovieClip API, the puzzle remained of how to turn what is a scenegraph-style API into an immediate-mode one. AS Canvas achieves this by flushing each stroke()/fill() call into a BitmapData object and then clearing the previous drawing commands. That was the inspiration I needed.

Buckle your seatbelt, Dorothy, 'cause CANVAS is going bye-bye.


Well, not exactly. Rather, Flash will become an option for rendering in Chronoscope.

Chronoscope's CANVAS API was designed to help accelerate performance in drawing when individual drawing commands have a high overhead (such as making RPC calls to a Flash VM or doing lots of DOM operations). The way it achieves this is by super-setting the WHATWG Canvas API with several additional features:

  • All drawing happens between beginFrame() and endFrame() calls
  • Multiple layers can be created within a Canvas and composited
  • OpenGL-style display lists which record a sequence of commands and play them back
  • Text rendering and specialized text-layers
  • Rotated text
  • Fast Clear
The beginFrame()/endFrame() abstraction allows the canvas to defer execution and buffer up multiple commands into a single batch. For Flash, this allows an entire frame of drawing commands to be sent to Actionscript for rasterization in a single call. For DOM-oriented interfaces (SVG/VML/Silverlight/etc), it would allow using innerHTML techniques over DOM operations to render the frame.
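
As a minimal sketch of that deferred-execution idea, here is a command buffer in JavaScript. The class and method names are illustrative only, not Chronoscope's actual API:

```javascript
// Illustrative sketch of a deferred-execution canvas: drawing calls are
// buffered between beginFrame() and endFrame(), then flushed as one batch.
// (Hypothetical names, not Chronoscope's real classes.)
function BatchCanvas(flush) {
  this.buffer = [];
  this.flush = flush; // receives the whole frame's commands in one call
}
BatchCanvas.prototype.beginFrame = function () {
  this.buffer.length = 0; // drop any stale commands from the last frame
};
BatchCanvas.prototype.lineTo = function (x, y) {
  this.buffer.push('lineTo', x, y); // record instead of drawing immediately
};
BatchCanvas.prototype.endFrame = function () {
  this.flush(this.buffer); // one expensive cross-boundary call per frame
};

// Usage: 1000 lineTo calls cost one flush, not 1000 plugin round-trips.
var batches = 0;
var canvas = new BatchCanvas(function (cmds) { batches++; });
canvas.beginFrame();
for (var i = 0; i < 1000; i++) canvas.lineTo(i, i * 2);
canvas.endFrame();
// batches is 1
```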

The layers API yields the possibility of accelerated hit-detection, since layer creation is cheap, while allowing efficient compositing operations that are non-destructive, which allows fast scrolling and the ability to only update layers which change. It also makes the Java2D version work better.

The Display List abstraction allows one to record a bunch of API calls and play them back over and over, yielding a more compact buffer of commands, as well as the potential to accelerate parsing of the commands and cache the results of some drawing operations. For example, a display list with 100 drawing operations could be executed 100 times for a total of 10,000 drawing operations, while the command buffer in the Canvas only stores and transmits a total of 200 commands.
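
A toy display list makes that arithmetic concrete (again, hypothetical names, not Chronoscope's API):

```javascript
// Toy display list: record a command sequence once, replay it many times
// without re-transmitting the commands across the canvas boundary.
function DisplayList() {
  this.commands = [];
}
DisplayList.prototype.record = function (cmd) {
  this.commands.push(cmd);
};
DisplayList.prototype.play = function (executed) {
  // Replaying executes every recorded command, but the caller only ever
  // sent one short "play" reference, not the commands themselves.
  for (var i = 0; i < this.commands.length; i++) {
    executed.push(this.commands[i]);
  }
};

var list = new DisplayList();
for (var i = 0; i < 100; i++) list.record('drawOp' + i); // 100 recorded ops

var executed = [];
for (var n = 0; n < 100; n++) list.play(executed); // replayed 100 times
// executed.length is 10000, yet only 100 record calls plus 100 play
// references ever crossed the command buffer.
```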

Text rendering is my biggest complaint about WHATWG Canvas, so it was a no-brainer to include it. Finally, Fast Clear is useful for erasing an entire canvas and/or layer when there is a cheaper way of doing it than, for example, clearRect(0,0,width,height).




Unfortunately, no one can be told what VML is, you have to experience it for yourself


And once you have, you'll wish you hadn't. Flash/Actionscript3 with MXMLC however turned out to be a pleasure (mostly).

Here's how the Flash Canvas works: Canvas calls are converted into tokenized commands and pushed onto a JavaScript array; for example, lineTo(10,20) becomes array.push('l', 10, 20). beginFrame() clears this array, and endFrame() uses Array.join() to send the entire stream to the Flash plugin, which has exported an interface. The Flash code parses this array and translates the Canvas API calls into semantically equivalent Flash operations.
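
A rough sketch of that tokenize/join/parse scheme, shown entirely in JavaScript for illustration (the real parser lives in ActionScript, and the separator and token names beyond the 'l' example above are guesses, not the actual wire format):

```javascript
// JS side: drawing calls append tokens instead of drawing.
var commands = [];
function moveTo(x, y) { commands.push('m', x, y); }
function lineTo(x, y) { commands.push('l', x, y); }

function endFrame() {
  // One string crosses the JS<->Flash boundary (via ExternalInterface),
  // instead of one round-trip per drawing call.
  return commands.join(',');
}

// Flash side (sketched here in JS): walk the token stream and translate
// each token back into a concrete drawing operation.
function parse(stream) {
  var t = stream.split(','), ops = [];
  for (var i = 0; i < t.length;) {
    switch (t[i]) {
      case 'm': ops.push(['moveTo', +t[i + 1], +t[i + 2]]); i += 3; break;
      case 'l': ops.push(['lineTo', +t[i + 1], +t[i + 2]]); i += 3; break;
      default: throw new Error('unknown token: ' + t[i]);
    }
  }
  return ops;
}

moveTo(0, 0);
lineTo(10, 20);
var ops = parse(endFrame()); // [['moveTo', 0, 0], ['lineTo', 10, 20]]
```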

Needless to say, getting all of the WHATWG Canvas semantics correct is tricky. For example, 'globalCompositeOperation' is hard to implement because Flash lacks Porter-Duff compositing modes. Path drawing and filling, especially with curves, has subtle differences. And drawing images from the browser requires a lot of bookkeeping.

My current implementation is about 95% complete; I have a few Porter-Duff modes and CanvasPattern left to implement. After completion, I'm looking forward to exporting a pure-JS (non-GWT) version of this that can replace excanvas/iecanvas for high-performance (and correct) WHATWG rendering in IE6+.

Overall, I'm happy: performance seems adequate for my uses. Again, here's a demo of Chronoscope using Flash (Flash Version 9 required).

Timepedia now has a graphics/charting platform that runs in the browser as JavaScript (WHATWG Canvas and Flash), as an applet or desktop Java (Java2D version), on the server (Java2D), and in mobile environments (Android, later J2ME). As a future exploration, I'm looking at modifying the GWT compiler to produce ActionScript and translate the entire Chronoscope codebase into an SWF.

-Ray

Monday, January 14, 2008

Chronoscope in Flash soon

Just a quick Chronoscope status update. I've been working the last few days on a Canvas implementation based on an embedded Flash component. The primary goal of this is to support Internet Explorer, which lacks browser canvas support, but it can be used on any browser. Initial performance is on par with or superior to native Canvas (and much, much better than SVG/VML). I'll be releasing it in the next 1 or 2 weeks as I clean up some glitches in the rendering.

IE Canvas rendering finally solved (and performant!)
-Ray

Friday, January 4, 2008

Google Gears Image Manipulation API not ambitious enough

Google Gears is a very subversive and disruptive technology IMHO, and I mean that in a good way. Google has the muscle to extend native browser functionality in a cross-browser way by sneaking extensions into Gears. Of course, Adobe and Microsoft can do this as well (Flash and Silverlight), but the difference is that Google is offering AJAX-level building-block extensions to browser functionality, not the alternative environment-within-the-browser-environment that Flash, Silverlight, and Java applets yielded.

Gears is steadily building momentum, and I'm sure many of Google's properties will soon support offline or enhanced modes using it, which will tend to make it a 'must-have' plugin, hopefully achieving 80-90% penetration in the future. So, before the vast majority of people start using the plugin, let's try to be as ambitious with the extension functionality as we can, so when the rush-to-install happens, people will be getting a version with very rich functionality as the base. We want to avoid the need to check Gears version all over the place and force people to upgrade plugins continually ("what, oh, your Gears version 1.21 doesn't have the image.shear() function, you need Gears 1.25 for that...")

Case in point, the proposed Gears Image Manipulation API. Granted, it's just starting, but I'd like to offer some upfront suggestions before this thing gets finalized.

Don't duplicate Canvas, extend it


The proposed API adds resize() and crop() operations to an image object, as well as the ability to turn images back into blobs. The resize() and crop() operations can be done today with the JS Canvas, though only WHATWG Canvas allows you to turn an image back into data. I don't think this API goes anywhere near far enough to justify its existence.
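
For reference, crop and resize together are a single call in standard WHATWG Canvas, using drawImage()'s nine-argument form (source rectangle, destination rectangle). In a browser the context would come from a real canvas; the stub context in this sketch just records the call so the example is self-contained:

```javascript
// Crop + resize via the standard 9-argument drawImage form:
// drawImage(img, sx, sy, sw, sh, dx, dy, dw, dh) copies the source
// rectangle (sx,sy,sw,sh) into the destination rectangle (dx,dy,dw,dh),
// scaling as needed.
function cropAndResize(ctx, img, crop, outW, outH) {
  ctx.drawImage(img,
      crop.x, crop.y, crop.w, crop.h, // source rect: the crop
      0, 0, outW, outH);              // dest rect: the resize
}

// Stub standing in for canvas.getContext('2d') outside a browser.
var calls = [];
var ctx = { drawImage: function () { calls.push([].slice.call(arguments)); } };

// Crop a 100x100 region starting at (10,10), scaled down to 50x50.
cropAndResize(ctx, 'someImage', { x: 10, y: 10, w: 100, h: 100 }, 50, 50);
```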

Also, the biggest pain with using Canvas today is that every browser but IE supports it, so why not implement a cross-browser, offscreen Gears WHATWG Canvas API to start with?

But don't stop there. WHATWG Canvas lacks text rendering, and image rendering that obeys affine transforms, two of the big complaints against the existing Canvas. The Web 2.0 world will love anyone who can get such an extended cross-browser canvas widely deployed.

So start with an off-screen WHATWG Canvas API, add text rendering (at least 90-degree rotated text would be nice), plus a drawImageWithTransform() that obeys transforms.

Resize, Flip, and Crop aren't enough


Anyone looking to build a client-side photo-manipulation library will want more than just image scaling, cropping, composing, and flipping. They need the ability to run convolution kernels, lookup tables, and rescale/colorspace transforms as well. The most common operations people want to run on photographs, like contrast/brightness enhancement, sharpen/unsharpen, and conversion to black-and-white or sepia, use these.
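
As a toy illustration of why kernels matter, here is a minimal 3x3 convolution over a grayscale pixel array. This is a simplification (edge pixels skipped, single channel, no clamping), and a real Gears implementation would of course be native code, but sharpen, blur, and edge detection all reduce to exactly this loop with different kernel values:

```javascript
// Minimal 3x3 convolution on a single-channel image stored row-major in
// a flat array. Border pixels are left untouched for brevity; a real
// implementation would clamp results and handle RGBA channels.
function convolve3x3(pixels, w, h, kernel) {
  var out = pixels.slice();
  for (var y = 1; y < h - 1; y++) {
    for (var x = 1; x < w - 1; x++) {
      var sum = 0;
      for (var ky = -1; ky <= 1; ky++) {
        for (var kx = -1; kx <= 1; kx++) {
          sum += pixels[(y + ky) * w + (x + kx)] *
                 kernel[(ky + 1) * 3 + (kx + 1)];
        }
      }
      out[y * w + x] = sum;
    }
  }
  return out;
}

// The identity kernel leaves the image unchanged; a sharpen kernel would
// be e.g. [0,-1,0, -1,5,-1, 0,-1,0] in the same format.
var identity = [0, 0, 0, 0, 1, 0, 0, 0, 0];
var img = [1, 2, 3, 4, 5, 6, 7, 8, 9]; // a 3x3 test image
var same = convolve3x3(img, 3, 3, identity); // [1,2,3,4,5,6,7,8,9]
```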

Don't forget RAW and EXIF/image metadata


In addition, the ability to open RAW files, manipulate exposure compensation, and extract image metadata would be a huge boon. An offline Gears photo album would be much cooler if EXIF info could be extracted as images are imported.

My own wish list for Gears Image API

  • Implements WHATWG CanvasRenderingContext2D as Base
  • Adds text rendering, with at least 90-degree rotation support
  • Image composition that obeys affine transforms (not just rotate)
  • Convolution and Lookup operators (NxN square kernels, N at least up to 5)
  • Support opening RAW images
  • Support read (write would be good too!) access to image metadata


Objections?


The first objection that will be raised is bloat in the plugin. Leaving aside the fact that Flash delivered outstanding capability in a slim plugin, the proposed libgd implementation already has many of the core functions needed to satisfy the richer API proposed here, and I'm sure it would not be hard for Google engineers to implement the rest. Convolves and lookups aren't rocket science.

A likely second objection and real hiccup will be achieving cross-platform, antialiased, internationalized text rendering with rotations. I don't have an answer for this one, only that users want it. Almost everyone I've talked to who does client-side rendering wants this.

A third issue is simply complexity and time to market. Resize, Flip, Rotate, and Crop are trivial to implement as simple libgd glue code, whereas full support of WHATWG semantics would require significantly more time, although some of this might be mitigated by borrowing WebKit's or Gecko's implementation and hacking it into Gears.

Fourth, dealing with large photographs, especially RAWs will bring memory issues to bear (unless a smart tile cache oriented system is used), and could be a point of denial-of-service against the Gears plugin if care isn't taken.

Regardless of these objections, I'd still urge Google to go for it. Make client-side image processing and visualization a major part of the next Gears API. Please!

-Ray

Wednesday, January 2, 2008

Hardcore GWT Hands on Training?

It's the new year, and I've been mulling an idea recently that I'd like to get some feedback on from the community.

The GWT Conference was a blast, and it appears my Deferred Binding presentation was well received. Since then, a number of people have encouraged me to offer some sort of training or mentoring for GWT, either in classrooms, or in the form of corporate on-site training. I hesitate to get too distracted from working on Chronoscope and Timepedia, but after thinking about it some more, I do think it might be worth pursuing, perhaps one 2-day course per month.

What I don't want to do is some token GWT course that glosses over the core of GWT and shows one how to create some cute widgets and stuff them into a web page, without understanding what's going on under the hood. Rather, I'd like to teach, on Day 1, GWT from the ground up exploring in detail what is happening: Java vs JS Output, Deferred Binding, JSNI, Widget attachment, Widget event processing, RPC/Serialization vs On-the-wire representation, the Bootstrap/Selection process, Hosted Mode, debugging, testing, etc.

On Day 2, I'd then delve into more idiomatic patterns: creating custom widgets, creating your own modules/libraries, localization, image bundles, integrating with back-end frameworks, RPC and RPC customization, optimizing for size and speed, etc.

I'd require people to write code in short labs during the class, give personal help with troubleshooting and setting up Eclipse/IDEA/NetBeans environments, and ask that those who have completed the labs successfully help those having trouble, because I believe a good way to learn something is in-class participation and trying to help someone else. Who knows, it might be a fun, campfire-type atmosphere!

I'd risk losing a lot of newbies doing this, and also risk not getting any advanced users who are self-taught, but hopefully there are a lot of people in the middle with some GWT experience who would like a more in-depth, hands-on tutorial. Perhaps I'd do a more marketing-oriented/newbie introductory course later.

Ultimately, I'd hope that people come away from the classes not only with enough knowledge to write code covering their own use cases, but also with enough knowledge to fix bugs and contribute to GWT itself. The GWT compiler and core libraries should no longer be 'magic' to those taking the course.

Obviously, the course wouldn't be free, especially for offsite training where I'd have to rent the event space, and also given the opportunity cost for me, but I would offer discounts if space is donated, charge less for the first 'beta' class, and give some steep student discounts, especially if students arrive early and help me with class setup.

I'm working out the course material at the moment, and pricing out some meeting spaces. I haven't decided on a per-student price yet, but feel free to email me suggestions, course ideas, and most importantly, whether or not you'd be interested in attending, or your company would. Private replies can be sent to cromwellian / gmail.com

Oh, this would be for the SF Bay Area, hopefully BART/Caltrain accessible.

-Ray