Comet Daily: Information about Comet techniques

NASA and Lightstreamer for Project Morpheus Wed, 28 Sep 2011 17:06:03 +0000 Alessandro Alinone

NASA’s Undergraduate Student Research Program recently published an interesting paper by Matthew Noyes and Robert Hirsh: Rendering Flight Telemetry in Platform-Independent Three-Dimensional Touch-Interactive Simulations.

The paper describes an innovative project where Lightstreamer is used for pushing spacecraft telemetry data to tablet devices, which render a 3D representation of the vehicle in real-time.

The authors describe the rationale for the project:

In NASA’s quest to lay the foundation for next-generation space missions, this involves developing new propulsion systems and rocket designs, as well as flight software guidance algorithms to operate these vehicles. Both products require operational verification on live test vehicles. As VTB flight frequency increases, telemetry analysis requires greater dedicated field test time. Due to smartphone portability, a new, intuitive telemetry display for the device provides instant datastream access.

Then, they introduce the iMorpheus application:

Designated iMorpheus, this application will be the primary education asset for NASA’s Project Morpheus. It will allow users to view test flight simulations using live data, browse and play recorded data files, and control the virtual simulation model for self-piloted flights around a virtual JSC.

The UNITY 3D engine is used for rendering the 3D models in real-time, based on live data pushed by Lightstreamer servers.

We should add just a couple of clarifications from Lightstreamer’s perspective.

At the end of page 5, the paper says:

The Morpheus vehicle provides updates at 10 Hz; while Lightstreamer’s maximum update frequency is 1 Hz; to compensate for this latency, the server delivers telemetry values as JSON strings, containing an Arraylist of 10 states. The iMorpheus application parses this object and updates vehicle state over the course of a second. iMorpheus vehicle state will have a minimum latency of 1 second from data delivery; there will be additional lag derived from linearly interpolating the spacecraft along designated waypoints.

Our clarification regards the fact that Lightstreamer does support 10 Hz updates and greater. The maximum frequency is imposed only by the software edition or by a development choice. In other words, when low-latency and high-frequency telemetry data is needed, Lightstreamer is able to deliver it in real-time with no added latency, apart from the Internet link latency.

Still on page 5, the paper says:

Lightstreamer provides clients for many different software environments, including .NET, Java, and HTML/Javascript. Due to the clients’ reliance on libraries unsupported by UNITY, a custom socket interface was designed for iMorpheus until Lightstreamer releases a proper UNITY client.

I confirm that the Lightstreamer engineering team has been working on a specific client library for UNITY, via a port of the Lightstreamer .NET Client library. It is still in beta stage and we are looking for testers. If you are interested in using UNITY together with Lightstreamer, please email and we will send the library file to you for testing!

Thanks to the authors of the paper and to the Project Morpheus team for their excellent work.

Lightstreamer 4.0 Released Wed, 06 Jul 2011 14:43:59 +0000 Alessandro Alinone

The new major release of Lightstreamer, codenamed Duomo, is now generally available. The package includes version 4.0 of Lightstreamer Server, and updated versions for all the client SDKs, plus brand new client SDKs for mobile platforms.

This release contains many improvements, in terms of both performance and features. The Lightstreamer Server is now twice as fast, easier to manage, and even more reliable than before.

Read more about the new release on the Lightstreamer blog >>

The support for mobile platforms, in the form of both browser-based apps and native apps, is now complete and pervasive.

Let’s see some cool details about each of those technologies.

Mobile Browsers

The new HTML Client API for Lightstreamer, based on HTML and JavaScript, is compatible with most mobile browsers. The big news is that real streaming is now supported on the Android browser. As far as we know, Lightstreamer is currently the only solution that is able to bypass the buffering mechanisms of Android browser and deliver a true-streaming experience. This was achieved via sophisticated compression mechanisms implemented for this purpose. The same streaming capability has been added even to Adobe Flex for Android. Check out our online HTML and FLEX demos with your Android browser to see streaming in action!

All browsers, both desktop and mobile, can now benefit from a brand new Stream-Sense algorithm. There are cases where some combination of antivirus software and proxy servers blocks any form of streaming. The unique Stream-Sense algorithm from Lightstreamer automatically detects these situations and falls back to Smart-Polling mode, which provides a user experience that in most cases is identical to real Streaming mode. The new Stream-Sense algorithm of Lightstreamer Duomo is super-fast and much more lightweight than before.

Android Apps

The new Android Client API for Lightstreamer is based on Java for Android and enables full streaming capabilities within any native Android application. You just need to integrate the Lightstreamer client library in your Android app and all the complexity of real-time communication with Lightstreamer Server is managed transparently by the library. A simple demo app with full source code is provided. It is the famous “Lightstreamer Stock-List Demo”, where some simulated stock quotes are subscribed to and pushed in real-time from the server. Of course this is just a coding example and the same principles can be used in other application domains, like chat systems, telemetry, auctions, collaboration systems, etc. You can install the demo from the Android Market and check it out.

iOS Apps

The new iOS Client API for Lightstreamer is based on Objective-C and extends the real-time streaming features of Lightstreamer to iPhone, iPad, iPod, and any future devices based on iOS. Just include the provided library in your iOS project and forget the complexity of bidirectional real-time communication with the server. A simple demo app with full source code is provided. As explained in the Android section above, this is the “Lightstreamer Stock-List Demo”, which shows how to subscribe to some information items and receive fast real-time updates on them. You can install the demo from the iTunes Store and check it out.

BlackBerry Apps

The new BlackBerry Client API for Lightstreamer is based on Java ME and adds streaming capabilities to any RIM BlackBerry device. The provided lib can be included in your app and it will take care of managing the interaction with the server. A simple demo app with full source code is provided (including the “Lightstreamer Stock-List Demo”, see above, plus a simple messaging demo). You can download the installation file directly.

Windows Phone Apps

The new Windows Phone Client API for Lightstreamer is based on Silverlight for Windows Phone and adds streaming capabilities to any Windows Phone 7 application. Just include the provided library in your app and enter the world of real-time data push. A simple demo app with full source code is provided (the “Lightstreamer Stock-List Demo”, see above). You can download the demo app from the Windows Phone Marketplace.

Java ME Apps

Some older devices still need traditional Java ME MIDlets (especially in the Symbian world). With Lightstreamer, you can add push functionality even to these older apps. The Java ME Client API for Lightstreamer is still provided and maintained. Even in this case, you can check out the simple demo app (the “Lightstreamer Stock-List Demo”, plus a simple messaging demo) by downloading the installation file.

Regardless of your application development needs, Lightstreamer provides a streaming solution for you.

10 Years of Push Technology, Comet, and WebSockets Wed, 06 Jul 2011 14:43:52 +0000 Alessandro Alinone More than ten years have passed since the creation of Lightstreamer. Now that Lightstreamer 4.0 is generally available, it is a good moment to look back at what happened in the history of Push/Comet and to share a short analysis of the current trends from Lightstreamer’s perspective.

1996-2000: The first wave of Push Technology (Webcasting)

At that time, Push Technology mainly referred to techniques also known as webcasting, narrowcasting, or channeling. A channel was related to some category of information, and once the user registered with one or more channels, she would automatically receive the information, which was displayed by dedicated client software (thick applications, browser plug-ins, or special screen savers). In 1996, PointCast, the first push system based on channels, was created. Soon after, over thirty players entered this market, including Microsoft and Netscape. Push Technology was expected to become a killer application, but this forecast did not come true, and in 2000 that kind of Push Technology was finally dead. The main reason is that the pushed information was very coarse-grained and not real-time. Users had little use for the large amounts of information downloaded to their computers every morning, most of which they would never read. Someone compared the first wave of Push Technology to having giant heaps of newspapers dumped on your doorstep every morning…

2000 onwards: The second wave of Push Technology (Comet)

Starting in 2000, the success of online securities trading systems created a new need for real-time data being pushed to the user’s browser. The information had to be very fine-grained (at the level of a single field being updated) and very real-time (the lower the latency between the data generation and its delivery, the better the quality of the trading platform). The first players in this arena were Caplin and Lightstreamer, together with Pushlets and KnowNow. In these eleven years, Push Technology has kept evolving in many ways: by adding new features to the available solutions; by improving reliability and performance; by seeing many new entrants coming into this market niche; by proposing new standards.

The success of this wave keeps increasing, with more and more production systems benefiting from real-time data delivery. Most financial trading platforms currently employ some form of data push. Online auctions, betting, and gaming systems are moving in the same direction. And we are seeing several new projects in both the military and aerospace domains leveraging this wave of Push Technology.

The very dynamic nature of this set of technologies has made it very difficult to agree even on a common umbrella term. Below is a list of the many terms that are in some way related to Push Technology (I am fully aware of the many, more or less subtle, differences among these terms):

  • Push Technology
  • Data Streaming
  • Data Push
  • Streaming AJAX
  • Reverse AJAX
  • AJAX Push
  • Comet
  • HTTP Streaming
  • HTTP Long Polling
  • Real-Time Web
  • Last Mile Messaging
  • Internet Messaging
  • Bi-directional HTTP
  • Full-Duplex Web Communication
  • WebSocket

How about Comet? When Alex Russell coined the Comet term in 2006, it seemed to be a very good umbrella word and we all started to adopt it. But then, partly due to marketing-originated differentiation needs, and partly due to actual technological differences, other terms kept emerging or being reused. In particular, starting from 2007, the interest began to focus on bi-directional communication, leading to a new wave of Push Technology (see next section). Notice that the list above tries to cover both these waves, as the boundaries between the two are very blurry and arbitrary.

The push channel (from server to client) can be implemented via several different techniques, such as:

  • Polling
  • Long Polling
  • Frame Streaming
  • Iframe Streaming
  • Flash Streaming
  • XHR Streaming
  • Server-Sent Events

Several protocols have been created in the past few years employing one or more of the techniques above. Some of them derive from open-source initiatives (e.g. Bayeux and BOSH) or from standard specifications (HTML5 Server-Sent Events), others (like Lightstreamer’s protocol) are proprietary. Even though Lightstreamer’s network protocol is publicly documented, we prefer to provide high-level APIs for each supported client-side technology, including mobile applications. This way, we can keep full control over the heuristic mechanisms needed to choose the best delivery technique to use, including low-latency fall-back processes.
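
As a concrete illustration of the simplest of those techniques, here is a hedged sketch of a long-polling client loop in plain Java. The endpoint URL and the line-per-event response format are hypothetical; real Comet servers wrap this pattern in their own protocols, as noted above.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Minimal long-polling loop: each GET is held open by the server until an event
// is available (or a timeout expires), then the client immediately issues a new GET.
public class LongPollClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/events");        // hypothetical Comet endpoint
        while (true) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setReadTimeout(60_000);                        // a bit longer than the server hold time
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null)
                    System.out.println("event: " + line);       // deliver the pushed update
            } catch (java.io.IOException e) {
                Thread.sleep(1000);                             // timeout or transient failure: back off briefly, then re-poll
            } finally {
                conn.disconnect();
            }
        }
    }
}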

2007 onwards: The third wave of Push Technology (WebSocket)

Bi-directional communication means not only pushing real-time messages from the server to the client, but also the reverse. This may seem awkward, as sending messages from the client to the server at the client’s discretion has always been the normal behavior of HTTP! In reality, the aspect that is considered new and important has been the ability to send such reverse messages with low latency and high frequency. In most cases HTTP can stream messages with low latency and high frequency from the server to the client, but it always requires a full round-trip to send messages from the client to the server. This is a constraint imposed by how browsers and proxies are implemented. And this is one of the reasons that led to the WebSocket specification, aimed at enabling full-duplex communication over a single TCP connection (the other main reason being to prevent intermediaries from buffering the streaming content, as it happens in some cases with HTTP).

At the time of writing, the WebSocket specification is still in draft status (the latest version is draft-ietf-hybi-thewebsocketprotocol-09) and its adoption by “infrastructure” vendors (including browsers, proxies, packet inspectors, antivirus software, etc.) is still extremely immature and fragmented.

Without WebSocket, the reverse push channel (from client to server) can be implemented on top of a second HTTP connection. The major problem here is that the application code has no control over the binding between HTTP requests and HTTP connections, which are handled by a pool manager that is part of the browser. This means that the browser might decide to send two messages originating from the application code on top of two different physical TCP connections, with the risk of altering the message order. This issue, together with the low-latency and high-frequency requirements mentioned above, created the need for a sophisticated reverse channel implementation within the application layer, to guarantee message ordering, low latency, and high frequency.

For example, Lightstreamer implements all of the mechanisms needed to give the developer a high-level and robust abstraction over the HTTP reverse channel. In particular, Lightstreamer automatically manages message numbering and re-ordering, transparently batches messages to minimize the round trips, and implements guaranteed delivery, by means of acknowledgements and automatic retransmissions.
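
To make the pattern concrete, here is a hedged sketch of such a reverse channel; this is not Lightstreamer’s actual implementation, only an illustration of the general idea: messages receive a sequence number, pending messages are batched into a single request body, and anything not yet acknowledged stays queued for retransmission. All names and the "seq|payload" framing are illustrative.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a client-side reverse channel: sequence numbers let the server restore
// ordering, batching reduces round trips, and unacknowledged messages are retransmitted.
public class ReverseChannel {
    private final AtomicLong nextSeq = new AtomicLong(1);
    private final ConcurrentSkipListMap<Long, String> pending = new ConcurrentSkipListMap<>();

    /** Queue a message; it stays pending until the server acknowledges its sequence number. */
    public long send(String payload) {
        long seq = nextSeq.getAndIncrement();
        pending.put(seq, payload);
        return seq;
    }

    /** Build one batched HTTP body containing every pending message, oldest first. */
    public String nextBatch() {
        List<String> lines = new ArrayList<>();
        pending.forEach((seq, msg) -> lines.add(seq + "|" + msg));  // "seq|payload" framing is illustrative
        return String.join("\n", lines);
    }

    /** Called when the server acknowledges everything up to and including ackedSeq. */
    public void onAck(long ackedSeq) {
        pending.headMap(ackedSeq, true).clear();   // anything still in the map will be resent
    }
}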

WebSocket simplifies the implementation of both communication channels, but WebSocket is only a transport layer, over which application-level protocols need to be implemented. For example, most Push Technology solutions are based on publish-and-subscribe paradigms. WebSocket alone does not offer any pub/sub facility, which must be implemented on top of it. Another example: WebSocket alone does not dictate any technique for throttling the data flow or filtering/conflating the data based on bandwidth constraints or on Internet congestion. Again, this is something left to a higher-level implementation. For these reasons, the eleven years of experience gained with second-wave Push Technology will be invaluable for the success of the third wave. Furthermore, the transition between the two waves requires some years of co-existence, until WebSocket is: a) specified in final form; b) fully deployed across all infrastructure components, which implies the abandonment of all the older browsers (for instance, at the time of writing, Microsoft’s Internet Explorer requires a separate download to support the current draft of WebSockets on both IE9 and the IE10 beta).
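
As an illustration of the pub/sub point, here is a hedged sketch of a tiny server-side dispatcher layered on top of a raw text-frame transport. The WsSession interface and the "subscribe"/"publish" envelope format are made up for this example; they are not part of any real WebSocket API.

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

// Minimal pub/sub layered over a raw WebSocket transport: the transport only moves
// text frames; channels, subscriptions and fan-out are application-level concerns.
public class PubSubDispatcher {

    /** Stand-in for the server's WebSocket session object (API-specific in real code). */
    public interface WsSession {
        void sendText(String frame);
    }

    private final Map<String, Set<WsSession>> subscribers = new ConcurrentHashMap<>();

    /** Handle one incoming text frame: "subscribe <channel>" or "publish <channel> <data>". */
    public void onFrame(WsSession from, String frame) {
        String[] parts = frame.split(" ", 3);
        if (parts.length >= 2 && parts[0].equals("subscribe")) {
            subscribers.computeIfAbsent(parts[1], c -> new CopyOnWriteArraySet<>()).add(from);
        } else if (parts.length == 3 && parts[0].equals("publish")) {
            for (WsSession s : subscribers.getOrDefault(parts[1], Set.of()))
                s.sendText(parts[1] + " " + parts[2]);   // fan the message out to every subscriber
        }
    }
}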

As for Lightstreamer, we have been delaying the roll-out of WebSocket, waiting for more stable specifications and better browser support. Now, we believe that the time has finally come to inject all the experience gained in the past years into the third wave of Push Technology. Our role will be to guarantee a smooth and safe transition between the waves, taking care of all the issues above.


Our perspective on the history of the technologies aimed at enabling the Real-Time Web (known under many different names and with different meanings, such as Push Technology, Comet, or WebSocket) should help with the new third wave, which is centered around bi-directional communication in general and the WebSocket standard in particular. I maintain that, to be successful, WebSocket needs to leverage the second wave of technology, for two reasons: the need, for several more years, to fall back to high-quality Comet when required, and the need to build on top of WebSocket the application-level protocols and network management optimizations that have already been implemented and deployed in solid second-wave solutions.

Scaling Server-Side Event Driven Applications Mon, 11 Apr 2011 21:09:22 +0000 DylanSchiemann Simone Bordet of CometD recently talked about CometD and WebSocket web applications at the CodeMotion conference in Rome. Slides are available on SlideShare.

The presentation provides a great introduction to Comet, WebSockets, and the recently released CometD 2.1.1.

CometD Annotations Thu, 07 Apr 2011 07:07:23 +0000 GregWilkins CometD 2.1 now supports annotations to define CometD services and clients. Annotations greatly reduce the boilerplate code required to write a CometD service and also link well with new CometD 2.x features such as channel initializers and Authorizers, so that all the code for a service can be grouped in one POJO class rather than spread over several derived entities. The annotations are some CometD-specific ones, plus some standard Spring annotations.


This blog looks at the annotated ChatService example bundled with the 2.1.0 CometD release.

Creating a Service

A POJO (Plain Old Java Object) can be turned into a CometD service by the addition of the @Service class annotation:

package org.cometd.examples;

@Service("chat")
public class ChatService { ... }

The service name passed to the annotation is used in the service’s session ID, to assist with debugging.

The annotated version of the CometdServlet then needs to be used and told which classes it should instantiate as services and scan for annotations. This is done with a comma-separated list of class names in the “services” init-parameter in the web.xml (or similar), as follows:
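
The original snippet was lost in the feed; the servlet declaration would look roughly like the following. The servlet class name shown here is recalled from the CometD 2.x distribution and should be verified against its documentation.

<servlet>
    <servlet-name>cometd</servlet-name>
    <!-- Assumed class name for the annotation-aware servlet; check the CometD 2.x docs -->
    <servlet-class>org.cometd.java.annotation.AnnotationCometdServlet</servlet-class>
    <init-param>
        <param-name>services</param-name>
        <param-value>org.cometd.examples.ChatService</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>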


Configuring a Channel

A service will frequently need to create, configure, and listen or subscribe to a channel. This can now be done atomically in CometD 2.x, so that messages will not be received before the channel is fully created and configured. For example, the chat service configures one absolute channel and two wildcard channels using @Configure annotations:

@Configure ({"/chat/**","/members/**"})
protected void configureChatStarStar(ConfigurableServerChannel channel)
    DataFilterMessageListener noMarkup = 
      new DataFilterMessageListener(_bayeux, new NoMarkupFilter(),
      new BadWordFilter());

@Configure (”/service/members”)
protected void configureMembers(ConfigurableServerChannel channel)

The @Configure annotation is roughly equivalent to calling the BayeuxServer#createIfAbsent method, with the annotated method used as the Initializer; the annotated method must take a ConfigurableServerChannel as an argument. The @Configure annotation can also take two boolean arguments, errorIfExists and configureIfExists, to determine how to handle the channel if it already exists.
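
For comparison, the non-annotated equivalent looks roughly like the following sketch; the method names are recalled from the CometD 2.x API and should be double-checked against its Javadoc.

import org.cometd.bayeux.server.BayeuxServer;
import org.cometd.bayeux.server.ConfigurableServerChannel;

// Roughly what @Configure does under the covers: create the channel if it is missing and
// configure it atomically, so no message can be received before configuration completes.
public class ChannelSetup {
    public static void configureMembersChannel(BayeuxServer bayeuxServer) {
        bayeuxServer.createIfAbsent("/service/members", new ConfigurableServerChannel.Initializer() {
            public void configureChannel(ConfigurableServerChannel channel) {
                channel.setPersistent(true);   // the same code that would live in the annotated method
            }
        });
    }
}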

The configuration methods for the chat service use the new Authorizer mechanism to define fine-grained authorization of which clients can publish or subscribe to a channel. This is similar to the existing SecurityPolicy mechanism, but without the need for a centralized policy instance. An operation on a channel is permitted if it is granted by at least one Authorizer and denied by none, giving black/white list style semantics.
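
As a hedged illustration of those semantics (the Authorizer signatures are recalled from CometD 2.x and should be verified against the Javadoc), here is an Authorizer that grants publishes only to sessions carrying a hypothetical "moderator" attribute and ignores everything else, so that other Authorizers can still decide.

import org.cometd.bayeux.ChannelId;
import org.cometd.bayeux.server.Authorizer;
import org.cometd.bayeux.server.ServerMessage;
import org.cometd.bayeux.server.ServerSession;

// Grant-if-moderator, ignore-otherwise: an operation succeeds only if at least one
// Authorizer grants it and none denies it, so "ignore" leaves the decision to others.
public class ModeratorPublishAuthorizer implements Authorizer {
    public Result authorize(Operation operation, ChannelId channel,
                            ServerSession session, ServerMessage message) {
        // The "moderator" session attribute is a hypothetical application-level flag
        if (operation == Operation.PUBLISH && Boolean.TRUE.equals(session.getAttribute("moderator")))
            return Result.grant();
        return Result.ignore();   // neither grant nor deny; another Authorizer may still grant
    }
}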

The configuration of the chat wildcard channels installs DataFilterMessageListeners for all /chat/** and all /members/** channels. These filters ensure that no markup or bad words are published to these channels. To construct the listener, an instance of the BayeuxServer needs to be passed to the constructor (used only for logging in this case). A service may obtain a reference to the BayeuxServer using the @Inject annotation:

@Inject
private BayeuxServer _bayeux;

Adding a ChannelListener

A method of a service may be registered as a listener of a channel with the @Listener annotation:

@Listener("/service/members")
public void handleMembership(ServerSession client, ServerMessage message) { /* ... */ }

The @Listener annotation may also be passed the boolean argument receiveOwnPublishes, to control whether messages published by the service session are filtered out. Note that a Listener is different from a subscription, in that the service does not subscribe to the channel, so it will not trigger any subscription listeners nor be counted as a subscriber. There is also a @Subscription annotation available, but it is not used by the ChatService (it is typically more applicable to client-side CometD annotations).


Annotations can also be used on the client side, if the Java BayeuxClient is used, either for service testing or for the creation of a rich non-browser client UI:

    @Session
    private ClientSession session;
    @PostConstruct
    private void init() { }
    @PreDestroy
    private void destroy() { }
    @Listener("/meta/connect")                  // channel names here are illustrative
    public void handleMetaMessage(Message connect) { }
    @Subscription("/foo")
    public void handleFoo(Message message) { }

Note the use of @Session to inject the session used by the service and @PostConstruct and @PreDestroy for lifecycle events. These annotations are also available on the server side. On the client-side, the annotations are activated by an explicit call to an annotation processor:

ClientAnnotationProcessor processor =
    new ClientAnnotationProcessor(bayeuxClient);
MyClient mc = new MyClient();
processor.process(mc);   // processes MyClient's annotations and wires it to the client session


Annotations have made CometD services much simpler to create and much easier to understand. Normally I’m not a big fan of annotations, as they frequently put too much configuration into the “code”, but in this case they are a perfect match for the semantics needed. In the future, we’ll also look at making JAXB annotations work simply with the JSON mechanisms of CometD.

Is WebSocket Chat Easier? Wed, 06 Apr 2011 22:10:59 +0000 GregWilkins A year ago I wrote an article asking Is WebSocket Chat Simple?, where I highlighted the deficiencies of this much-touted protocol for implementing simple Comet applications like chat. After a year of intense debate there have been many changes, and there are new drafts of both the WebSocket protocol and the WebSocket API. Thus I thought it worthwhile to update my article with comments to see how things have improved (or not) in the last year.

The text in italics is my wishful thinking from a year ago.

The text in bold italics contains my updated comments.

Is WebSocket Chat Simple (take II)?

The WebSocket protocol has been touted as a great leap forward for bidirectional web applications like chat, promising a new era of simple Comet applications. Unfortunately there is no such thing as a silver bullet and this blog will walk through a simple chat room to see where WebSocket does and does not help with Comet applications. In a WebSocket world, there is even more need for frameworks like cometD.

Simple Chat

Chat is the "helloworld" application of web-2.0 and a simple WebSocket chat room is included with the jetty-7 which now supports WebSockets. The source of the simple chat can be seen in svn for the client-side and server-side.

The key part of the client-side is to establish a WebSocket connection:

join: function(name) {
   var self = this;
   var location = document.location.toString().replace('http:', 'ws:');
   this._ws = new WebSocket(location);
   this._ws.onmessage = function(m) { self._onmessage(m); };   // handlers for onopen/onclose omitted here
},

It is then possible for the client to send a chat message to the server:

_send: function(user, message) {
   if (this._ws)
      this._ws.send(user + ':' + message);   // the "user:message" framing is illustrative
},

and to receive a chat message from the server and to display it:

_onmessage: function(m) {
   if (m.data) {                                   // m.data is assumed to be "user:message"
       var c = m.data.indexOf(':');
       var chat = $('chat');
       var spanFrom = document.createElement('span');
       spanFrom.textContent = m.data.substring(0, c) + ': ';
       var spanText = document.createElement('span');
       spanText.textContent = m.data.substring(c + 1);
       var lineBreak = document.createElement('br');
       chat.appendChild(spanFrom); chat.appendChild(spanText); chat.appendChild(lineBreak);
       chat.scrollTop = chat.scrollHeight - chat.clientHeight;   // keep the view scrolled to the bottom
   }
},

For the server-side, we simply accept incoming connections as members:

public void onConnect(Connection connection)
{ _connection = connection; _members.add(this); }   // _connection and _members are fields of ChatWebSocket

and then for all messages received, we send them to all members:

public void onMessage(byte frame, String data) {
   for (ChatWebSocket member : _members) {                     // broadcast to every member of the room
      try { member._connection.sendMessage(data); }
      catch (IOException e) { /* member has probably disconnected */ }
   }
}

So we are done, right? We have a working chat room - let’s deploy it and we’ll be the next Google GChat!! Unfortunately, reality is not that simple and this chat room is a long way short of the kinds of functionality that you expect from a chat room - even a simple one.

Not So Simple Chat

On Close?

With a chat room, the standard use-case is that once you establish your presence in the room, it remains until you explicitly leave the room. In the context of webchat, that means that you can send and receive chat messages until you close the browser or navigate away from the page. Unfortunately the simple chat example does not implement this semantic, because the WebSocket protocol allows for an idle timeout of the connection. So if nothing is said in the chat room for a short while, then the WebSocket connection will be closed, either by the client, the server, or even an intermediary. The application will be notified of this event by the onClose method being called.

So how should the chat room handle onClose? The obvious thing to do is for the client to simply call join again and open a new connection back to the server:

_onclose: function() {
   this.join(this._username);   // naively reopen a new connection (the _username field is illustrative)
},

This indeed maintains the user’s presence in the chat room, but it is far from an ideal solution, since every few idle minutes the user will leave the room and rejoin. For the short period between connections, they will miss any messages sent and will not be able to send any chat messages.


In order to maintain presence, the chat application can send keep-alive messages on the WebSocket to prevent it from being closed due to an idle timeout. However, the application has no idea at all about what the idle timeouts are, so it will have to pick some arbitrary frequent period (e.g. 30s) to send keep-alives and hope that is less than any idle timeout on the path (more or less as long-polling does now).
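
A minimal sketch of that guess-a-period approach, assuming a stand-in Connection interface rather than any real WebSocket wrapper API:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Send an application-level keep-alive at a fixed, guessed interval (e.g. 30s),
// hoping it is shorter than any idle timeout along the path.
public class KeepAlive {
    public interface Connection { void send(String text); }   // stand-in for the real socket wrapper

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start(Connection connection) {
        scheduler.scheduleAtFixedRate(
            () -> connection.send("{\"type\":\"keepalive\"}"),  // message format is illustrative
            30, 30, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}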

Ideally a future version of WebSocket will support timeout discovery, so it can either tell the application the period for keep-alive messages or it could even send the keep-alives on behalf of the application.

The latest drafts of the WebSocket protocol do include control packets for ping and pong, which can effectively be used as messages to keep alive a connection. Unfortunately this mechanism is not actually usable because: a) there is no JavaScript API to send pings; b) there is no API to communicate to the infrastructure if the application wants the connection kept alive or not; c) the protocol does not require that pings are sent; d) neither the WebSocket infrastructure nor the application knows the frequency at which pings would need to be sent to keep alive the intermediaries and other end of the connection. There is a draft proposal to declare timeouts in headers, but it remains to be seen if that gathers any traction.

Unfortunately keep-alives don’t avoid the need for onClose to initiate new WebSockets, because the internet is not a perfect place and especially with wifi and mobile clients, sometimes connections just drop. It is a standard part of HTTP that if a connection closes while being used, the GET requests are retried on new connections, so users are mostly insulated from transient connection failures. A WebSocket chat room needs to work with the same assumption and even with keep-alives, it needs to be prepared to reopen a connection when onClose is called.


With keep-alives, the WebSocket chat connection should mostly be a long-lived entity, with only the occasional reconnect due to transient network problems or server restarts. Occasional loss of presence might not be seen as a problem, unless you’re the dude who just typed a long chat message on the tiny keyboard of your vodafone360 app, or instead of chat you are playing a game and you don’t want to abandon it due to transient network issues. So for any reasonable level of quality of service, the application is going to need to "pave over" any small gaps in connectivity by providing some kind of message queue in both client and server. If a message is sent during the period of time that there is no WebSocket connection, it needs to be queued until such time as the new connection is established.


Unfortunately, some failures are not transient and sometimes a new connection will not be established. We can’t allow queues to grow forever and pretend that a user is present long after their connection is gone. Thus both ends of the chat application will also need timeouts and the user will not be seen to have left the chat room until they have no connection for the period of the timeout or until an explicit leaving message is received.
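
A minimal sketch of the client half of such a queue, assuming a stand-in Connection interface: messages sent while disconnected are buffered and flushed, in order, once a new connection is established; the server side would pair this with the presence timeout described above.

import java.util.ArrayDeque;
import java.util.Deque;

// Pave over short connectivity gaps: while there is no WebSocket connection, outgoing
// chat messages are buffered; when a new connection is established they are flushed in order.
public class BufferedSender {
    public interface Connection { void send(String text); }   // stand-in for the socket wrapper

    private final Deque<String> queue = new ArrayDeque<>();
    private Connection connection;                             // null while disconnected

    public synchronized void send(String message) {
        if (connection != null) connection.send(message);
        else queue.addLast(message);                           // hold it until we reconnect
    }

    public synchronized void onReconnect(Connection newConnection) {
        connection = newConnection;
        while (!queue.isEmpty()) connection.send(queue.pollFirst());   // flush in original order
    }

    public synchronized void onClose() {
        connection = null;                                     // start buffering again
    }
}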

Ideally a future version of WebSocket will support an orderly close message, so the application can distinguish between a network failure (and keep the user’s presence for a time) and an orderly close as the user leaves the page (and remove the user’s presence).

Both the protocol and the API have been updated with the ability to distinguish an orderly close from a failed close. The WebSocket API now has a CloseEvent that is passed to the onclose method; it contains the close code and reason string sent with an orderly close, which will allow simpler handling in the endpoints and avoid pointless client retries.

Message Retries

Even with message queues, there is a race condition that makes it difficult to completely close the gaps between connections. If the onClose method is called very soon after a message is sent, then the application has no way to know if that close event happened before or after the message was delivered. If quality of service is important, then the application currently has no option but to have some kind of per message or periodic acknowledgment of message delivery.

Ideally a future version of WebSocket will support orderly close, so that delivery can be known for non-failed connections and a complication of acknowledgements can be avoided unless the highest quality of service is required.

Orderly close is now supported (see above.)


With onClose handling, keep-alives, message queues, timeouts and retries, we finally will have a chat room that can maintain a user’s presence while they remain on the web page. But unfortunately the chat room is still not complete, because it needs to handle errors and non-transient failures. Some of the circumstances that need to be avoided include:

  • If the chat server is shut down, the client application is notified of this simply by a call to onClose rather than an onOpen call. In this case, onClose should not just reopen the connection, or a 100% CPU busy loop will result. Instead the chat application has to infer that there was a connection problem and at least pause a short while before trying again - potentially with a retry backoff algorithm to reduce the retry rate over time (a minimal backoff sketch follows this list).

    Ideally a future version of WebSocket will allow more access to connection errors, as the handling of no-route-to-host may be entirely different to handling of a 401 unauthorized response from the server.

    The WebSocket protocol is now fully HTTP compliant before the 101 response of the upgrade handshake, so responses like 401 can legally be sent. Also, the WebSocket API now has an onerror callback, but unfortunately it is not yet clear under what circumstances it is called, nor is there any indication that information like a 401 response or a 302 redirect would be available to the application.

  • If the user types a large chat message, then the WebSocket frame sent may exceed some resource level on the client, server or intermediary. Currently the WebSocket response to such resource issues is to simply close the connection. Unfortunately for the chat application, this may look like a transient network failure (coming after a successful onOpen call), so it may just reopen the connection and naively retry sending the message, which will again exceed the max message size and we can lather, rinse and repeat! Again it is important that any automatic retries performed by the application will be limited by a backoff timeout and/or max retries.

    Ideally a future version of WebSocket will be able to send an error status as something distinct from a network failure or idle timeout, so the application will know not to retry errors.

    While there is no general error control frame, there is now a reason code defined in the orderly close, so that for any error serious enough to force the connection to be closed, the following can be communicated: 1000 - normal closure; 1001 - shutdown or navigate away; 1002 - protocol error; 1003 - data type cannot be handled; 1004 - message is too large. These are a great improvement, but it would be better if such errors could be sent in control frames, so that the connection does not need to be sacrificed in order to reject one large message or an unknown data type.
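
Here is the minimal retry backoff sketch mentioned in the first point above; the connection attempt and the failure/success callbacks are stand-ins, not part of the WebSocket API.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Exponential backoff for reconnect attempts: double the delay after each failure
// (up to a cap) so a dead server is not hammered in a tight loop; reset on success.
public class ReconnectPolicy {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private long delayMillis = 1000;               // first retry after 1 second
    private static final long MAX_DELAY = 60_000;  // never wait more than a minute

    /** Schedule the next connection attempt; call onFailure()/onSuccess() from that attempt. */
    public void scheduleReconnect(Runnable connectAttempt) {
        scheduler.schedule(connectAttempt, delayMillis, TimeUnit.MILLISECONDS);
    }

    public void onFailure() {
        delayMillis = Math.min(delayMillis * 2, MAX_DELAY);   // back off further next time
    }

    public void onSuccess() {
        delayMillis = 1000;                                   // healthy again: reset the backoff
    }
}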

Does it have to be so hard?

The above scenario is not the only way that a robust chat room could be developed. With some compromises on quality of service and some good user interface design, it would certainly be possible to build a chat room with less complex usage of a WebSocket. However, the design decisions represented by the above scenario are not unreasonable even for chat, and they are certainly applicable to applications needing a better QoS than most chat rooms.

What this blog illustrates is that there is no silver bullet and that WebSocket will not solve many of the complexities that need to be addressed when developing robust Comet web applications. Hopefully some features such as keep-alives, timeout negotiation, orderly close, and error notification can be built into a future version of WebSocket, but it is not the role of WebSocket to provide the more advanced handling of queues, timeouts, reconnections, retries, and backoffs. If you wish to have a high quality of service, then either your application or the framework that it uses will need to deal with these features.

cometD with WebSocket

cometD version 2 will soon be released with support for WebSocket as an alternative transport to the currently supported JSON long-polling and JSONP callback-polling. cometD supports all the features discussed in this blog and makes them available transparently to browsers with or without WebSocket support. We are hopeful that WebSocket usage will be able to give us even better throughput and latency for CometD than the already impressive results achieved with long-polling.

CometD 2 has been released and we now have even more impressive results. WebSocket support is built into both Jetty and CometD, but uptake has been somewhat hampered by the multiple versions of the protocol in the wild and by patchy/changing browser support.

Programming to a framework like CometD remains the easiest way to build a Comet application, while retaining portability across “old” techniques like long polling and emerging technologies like WebSockets.

Comet for HbbTV-Compliant Browsers Tue, 05 Apr 2011 13:15:16 +0000 MihaiRotaru Hybrid Broadcast Broadband TV (HbbTV) is a new industry standard that delivers both broadcast TV and Web content to TV viewers.

Migratory Push Server has added support for Opera and ANT Galio web browsers, both used by major HbbTV device manufacturers. A demo is available.

I estimate HbbTV could rapidly become a hot market for Comet/HTML5 technology in general and for Migratory Push Server in particular, especially due to Migratory’s very high vertical scalability – able to send real-time data to 1 million TV viewers from a small server machine (benchmarks).

An Update on WebSockets Tue, 22 Mar 2011 20:31:51 +0000 DylanSchiemann CNet recently had an update on the work being done to get WebSockets back on track. This is a good, optimistic update on the current state of WebSocket support.

IE9 shipped without WebSocket support, though Microsoft does have an IE WebSockets prototype that’s available for testing. They’ve now stated publicly that they are waiting for the protocol to mature a bit, as well as implementations, before making this part of an official IE release.

The OpenCoweb Project Mon, 07 Mar 2011 15:09:31 +0000 DylanSchiemann The OpenCoweb Project is a new Dojo Foundation project focused on the development of the Open Cooperative Web Framework.

From the Dojo Foundation blog post:

This new JavaScript framework builds upon CometD to enable “cooperative web applications” featuring concurrent, real-time interactions among remote users and external data sources. The framework handles remote notification of user changes, the resolution of conflicting changes, and convergence of application state using an operational transformation algorithm.

The technology in OpenCoweb can be applied to a variety of solution areas such as:

  • E-Learning or Distance Learning
  • Call Center Support
  • Financial Analyst Briefing
  • Healthcare / Telemedicine
  • Online collaborative authoring and editing
  • One-on-One (Manager/Employee) Reviews

For more information, visit the OpenCoweb project web page. For details about the framework, see the documentation and source code.

Of particular note to Comet developers, the OpenCoweb project currently delivers two cooperative web server implementations. One extends the CometD Java Server and the other is a Python server based on Facebook’s Tornado server. Both servers provide a cooperative web application container that supports coweb sessions over the Bayeux protocol. Bayeux serves as the wire format for communication between session participants and the server. Both server implementations support service bots that run within the server process and offer extension points for external bots to participate in sessions.

Game Closure: Real-time JavaScript Game Engine Thu, 17 Feb 2011 22:57:07 +0000 DylanSchiemann Game Closure is a new real-time JavaScript game engine created by Michael Carter (Comet Daily contributor, early WebSocket proponent, and creator of Orbited and Hookbox), Martin Hunt (Meebo and Hookbox), and Tom Fairfield. The game engine targets traditional and mobile platforms.

Here’s a short video demonstrating it in action:

Read more at the Game Closure TechCrunch story.