Monthly Archives: October

Running CPU Intensive JavaScript Computations in a Web Browser

The pattern discussed below is a well-known pattern that has been in use for about ten years. The goal of this article is to present it in a new light and, most importantly, to discuss ways of reducing its overhead.

The biggest deterrent to running CPU-intensive computations in a web browser is the fact that the entire browser user interface is frozen while a JavaScript thread is running. This means that under no circumstances should a script take more than about 300 msec to complete. Breaking this rule inevitably leads to a bad user experience.

Furthermore, in web browsers, JavaScript threads have a limited amount of time to complete: there can be either a static time limit (the case of Mozilla-based browsers) or some other limit, such as a maximum number of elementary operations (the case of Internet Explorer). If a script takes too long to complete, the user is presented with a dialog asking whether that script should be terminated.

Google Gears provides the ability to run CPU intensive JavaScript code without the two aforementioned limitations. However, you cannot usually rely on the presence of Gears (in the future, I would like to see a solution like the Gears WorkerPool API as part of the standard browser API).

Fortunately, the setTimeout method of the global object allows us to execute code on a delay, giving the browser a chance to handle events and update the user interface, even if the timeout value passed to setTimeout is 0. This allows us to cut a long running process into smaller units of work, and chain them according to the following pattern:

function doSomething (callbackFn [, additional arguments]) {
    // Initialize a few things here...
    (function () {
        // Do a little bit of work here...
        if (termination condition) {
            // We are done
            callbackFn();
        } else {
            // Process next chunk
            setTimeout(arguments.callee, 0);
        }
    })();
}
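To make the pattern concrete, here is a hypothetical example that sums a large array in chunks. The function name, the chunk size, and the use of a named function expression (instead of arguments.callee) are illustrative choices, not part of the original pattern:

```javascript
// Hypothetical concrete use of the pattern: summing a large array in
// chunks so the browser stays responsive. CHUNK_SIZE is a tuning value.
function sumArray(arr, callbackFn) {
    var CHUNK_SIZE = 10000; // elements processed per cycle
    var total = 0;
    var index = 0;
    // A named function expression avoids arguments.callee
    (function worker() {
        // Do a little bit of work: process one chunk synchronously
        var end = Math.min(index + CHUNK_SIZE, arr.length);
        for (; index < end; index++) {
            total += arr[index];
        }
        if (index >= arr.length) {
            // We are done
            callbackFn(total);
        } else {
            // Yield to the browser, then process the next chunk
            setTimeout(worker, 0);
        }
    })();
}
```

Note that if the input fits in a single chunk, the callback fires synchronously; otherwise it fires after one or more timeouts.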

This pattern can also be slightly modified to accept a progress callback instead of a completion callback. This is especially useful when using a progress bar:

function doSomething (progressFn [, additional arguments]) {
    // Initialize a few things here...
    (function () {
        // Do a little bit of work here...
        if (continuation condition) {
            // Inform the application of the progress
            progressFn(value, total);
            // Process next chunk
            setTimeout(arguments.callee, 0);
        }
    })();
}
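As a sketch of the progress variant, the following hypothetical function transforms an array in chunks and reports progress after each chunk. The names and the deliberately tiny chunk size are assumptions for illustration:

```javascript
// Hypothetical use of the progress variant: doubling each item of an
// array in chunks while reporting progress after every chunk.
function processItems(items, progressFn) {
    var CHUNK_SIZE = 2; // small on purpose, to produce several progress reports
    var index = 0;
    (function worker() {
        // Do a little bit of work: process one chunk synchronously
        var end = Math.min(index + CHUNK_SIZE, items.length);
        for (; index < end; index++) {
            items[index] = items[index] * 2; // the "work": double each item
        }
        if (index < items.length) {
            // Inform the application of the progress
            progressFn(index, items.length);
            // Process next chunk
            setTimeout(worker, 0);
        }
    })();
}
```

A progress bar would typically set its width to done / total inside progressFn.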

This example demonstrates the sorting of a fairly large array using this pattern.

Notes:

  1. This pattern has a lot of overhead, i.e., the total amount of time required to complete a task can be far greater than the time it would take to run the same task uninterrupted.
  2. The shorter each cycle, the more reactive the user interface, but also the greater the overhead, and therefore the greater the overall time required to complete the task.
  3. If you can be sure that each iteration of your algorithm is of very short duration — say 10 msec — you may want to group several iterations within a single cycle to reduce the overhead. The decision whether to start the next cycle or continue with more iterations can be made based on how long the current cycle has been running. This example demonstrates this technique. Although it uses the same sorting algorithm as the example above, notice how much faster it is, while still keeping the user interface perfectly reactive.
  4. Never pass a string to setTimeout! If you do, the browser needs to do an implicit eval every time the code is executed, which adds an incredible amount of completely unnecessary overhead.
  5. If you manipulate global data, make sure that access to that data is synchronized since it could also be modified by another JavaScript thread running between two cycles of your task.
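The time-based grouping described in note 3 can be sketched as follows. The function names and the 50 msec budget are assumptions for illustration:

```javascript
// Sketch of note 3: run as many iterations as fit within a fixed time
// budget per cycle before yielding back to the browser.
function runBatched(iterate, isDone, callbackFn) {
    var BUDGET_MSEC = 50; // illustrative per-cycle time budget
    (function worker() {
        var start = new Date().getTime();
        // Keep iterating until we are done or this cycle's budget is spent
        while (!isDone() && new Date().getTime() - start < BUDGET_MSEC) {
            iterate();
        }
        if (isDone()) {
            // We are done
            callbackFn();
        } else {
            // Yield to the browser, then start the next cycle
            setTimeout(worker, 0);
        }
    })();
}
```

Because each cycle packs many iterations, far fewer timeouts are scheduled, which is exactly where the overhead reduction comes from.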

Finally, consider running this kind of task on the server (though you'll have to deal with serialization/deserialization and network latency, especially if the data set is large). Having to run CPU-intensive computations on the client might be a sign of a deeper, more serious architectural problem with your application.

The Birth Of Web 3.0

Is Web 3.0 yet another buzz word, or is it a real turnaround in our industry?

Web 1.0 was the good old web of the 1990s. In those times, all client-side changes were the result of a server round-trip. The Internet was ramping up in popularity.

Web 2.0 has been a little more than just a technological evolution. The staple of Web 2.0 has been the emergence of social media (Internet users creating most of the content), powered by mature technologies (DHTML, Ajax) on somewhat stable web browsers.

Web 3.0 is not a revolution either. It is yet another technological evolution destined to provide users with an even better experience, both online and offline. Web 3.0 will lead to the blurring of that artificial wall between the web browser and the desktop, providing a full — but secure — integration with devices and services exposed by the operating system.

Web 3.0 is just starting. Look around you and you'll see that Web 3.0 technologies are slowly cropping up everywhere on the web. Google Gears, one of the first Web 3.0 technologies, allows you to build web applications that can work offline. Thanks to Google Gears, applications such as Remember The Milk, an online to-do list and task management system, can now work offline. The Adobe Flash player already allows application developers limited access to the webcam and the microphone. Soon, we'll also be able to drag and drop files from the desktop to a web browser (see this Java Upload Applet for an example using Java technology).

Another aspect of Web 3.0 is the use of stunning graphics, smooth animations, high-definition audio and video, 3D, and more, all of it inside a web browser!

At first, Web 3.0 features will be available via plugins (Google Gears, Java, Flash, Silverlight, ActiveX, Firefox extensions, etc.). But slowly, we may start seeing browser vendors integrate these features into their browsers, followed by some level of standardization. The HTML 5 Working Draft seems to be going in the right direction.

These are exciting times for web front-end engineers! The risk of fragmentation, inevitable with such ground-breaking technologies, will hopefully be mitigated in the short term by the use of JavaScript toolkits. The Dojo Toolkit, for example, has already started making Web 3.0 features available (see dojo.gfx and the Dojo Offline Toolkit). Hopefully, all the other major frameworks will follow suit so we can all start building cool new applications that wow our users!


The New Yahoo! Search Has Finally Arrived!

Yahoo! launched a new version of its search engine today. Until now, I was a Google user simply because Google’s results were a little bit more relevant, and also because it seemed a bit faster. However, the new Yahoo! Search has won me over. Here’s why:

First of all, the Yahoo! search page has been simplified to the extreme, which makes it load extremely fast. Second, the search page now has an auto-complete feature, similar to Google Suggest. I had been waiting for this feature for a long time! Finally, Yahoo! has made huge improvements to the search results page, embedding rich media within search results, and adding an assistant to help you refine your search, and even explore related areas that you may not even have been aware existed! This is simply brilliant! Not only does searching with Yahoo! quickly and efficiently lead you to what you were looking for, but it has also become a fun learning experience! Give it a try, and like me, you'll quickly adopt it!

Below is a screenshot of a search for Nelly Furtado:

Adobe MAX

I am currently attending the Adobe MAX conference in Chicago, IL. Yesterday's keynote was a great showcase of what Adobe's latest technologies are about to bring to the web and to the desktop. Here are a few pictures of the keynote (hover over the images to get a short description).

Kevin Lynch's keynote

An AIR application running Google Analytics

The winners of the AIR challenge


Adobe Developer Connection announcement

Heidi Williams talking about the new features in Flex 3

The Flash Player team talking about the new features in Flash Player 10


I have been involved with web development since, and Ajax since (before it was even coined “Ajax”). I have looked at Adobe’s technologies for a while, and have finally come around. I have to admit that Flash Player 9, ActionScript 3, Flex 2 (Flex 3 coming very soon) and Flex Builder 2 (soon Flex Builder 3) make for a very solid development platform for creating rich Internet applications. Add AIR (Adobe’s Integrated Runtime) to the mix and you have a great platform for developing cross-platform, web-enabled applications.

My only problem with Adobe’s technologies is that they are proprietary. However, this only highlights the failure of the W3C and other standards bodies to push the web forward. And with an incredible 90% market penetration (according to Adobe), isn’t Flash a de facto standard anyway?

Update: After attending the day 2 keynote, I keep thinking that while so many web developers are still trying to figure out how to make rounded corners work on all browsers, Adobe is really pushing the envelope with truly ground-breaking technologies. What a contrast!