Comet Message Rates

by Greg Wilkins, October 30th, 2008

Chat rooms have been cited as the “Hello World” application of Ajax Comet because chat is something that everybody can understand, and it is a good exemplar of the technology. In my own work, I have frequently used the cometD chat demo as the basis of benchmarking and scalability tests.

However, the scalability issues presented by a chat application represent only one data point in the range of possible load profiles that a Comet infrastructure may be asked to support. In this article I examine the challenges of scaling Comet across a spectrum of applications, from instant messaging (where most events are sent to only one or two users) to stock tickers (where a change in Google’s stock price may be sent to almost every user), and how these challenges have been met with Jetty and cometD. While some of the discussion applies specifically to Java Comet servers, much of it is equally applicable to all Comet servers.

Much of the discussion around scaling Comet applications has focused on the resource footprint of the long-held requests (long polling) that are at the core of the Comet technique. If you are to scale your application to thousands of simultaneous users, then it is vitally important that the resources allocated per user are kept to a minimum. For example, if normal servlet practices were applied to a Comet server, then each user connection could be allocated:

Resource                         Size (KB)
Request buffer                       8
Response buffer                     16
PrintWriter buffer                   1
UTF-8 stream encoder buffer          8
Thread stack                       128
Other overhead                      36
TOTAL                              197

With Jetty’s asynchronous servlet mechanisms, a thread is not allocated per Comet request, so the large stack frame is not allocated per user, saving 128KB. In addition, cometD uses a long-polling transport which, unlike streaming transports, does not need to access the response PrintWriter until there is an event to deliver. Jetty does not allocate streams and buffers until they are requested, so in combination with cometD the response, PrintWriter and character encoder buffers are not allocated per request, saving a further 25KB per user. Moreover, with Jetty’s adaptive buffering, only a small request buffer needs to be allocated initially, with a larger buffer used only if needed, saving another 4KB per user.

So Jetty plus cometD reduces per-user memory utilization by 157KB, from 197KB down to just 40KB. This allows a five-fold increase in the number of users, as the same memory previously needed for 2,000 users (roughly 385MB) can now serve 10,000 users!
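
The arithmetic behind those numbers is simple enough to sketch in a few lines of Java; the figures below are the example buffer sizes from the table above, not measured values:

    // Rough sanity check of the per-user memory arithmetic above.
    // All sizes are the example figures from the table, not measurements.
    public class MemoryBudget
    {
        public static void main(String[] args)
        {
            int conventionalKB = 8 + 16 + 1 + 8 + 128 + 36;  // 197KB per user
            int cometdKB = conventionalKB - 128 - 25 - 4;    // 40KB per user
            long heapKB = 2000L * conventionalKB;            // ~385MB example budget

            System.out.printf("conventional servlet: %d users%n", heapKB / conventionalKB); // 2000
            System.out.printf("jetty + cometD:       %d users%n", heapKB / cometdKB);       // ~9850
        }
    }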

But maintaining connected Comet users is only one part of the scalability equation. It is no good holding onto 10,000+ user connections if you cannot service the response rate that your application generates. For example, with instant messaging each message is typically delivered to a single user, while with chat rooms each message is delivered to every user in the room. If there is an average of ten users per room, then chat will generate ten times the response rate of IM for a given message rate. Applications like stock tickers can have an even more extreme scaling factor, as typically a few stocks (e.g. GOOG, MSFT) will appear in a large percentage of user portfolios. A change in Google’s stock price may need to be delivered to 80% of the 10,000 connected users; if the price changes once per second, it will saturate the response capacity of most servers!

It is vitally important for any Comet application to reasonably estimate the response rate, either by educated guesswork or by measuring existing traffic. The following calculations show how some estimates of the message rates can be derived.

Instant Messaging

Users per server:    10,000
Active:              6.25%   (talking 30 minutes per 8-hour work day)
Active users:        625
Per-user msg rate:   0.2/s   (1 message every 5 seconds while active)
Msg rate:            125/s   (almost an idle message load!)

With such a low message rate, Instant Messaging will mostly be limited by memory per user; however, there will also be extra events to send to inform users of the arrival and departure of friends.
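
A minimal sketch of that estimate, using only the assumptions from the table above:

    // Instant messaging estimate: messages per second generated by active users.
    public class ImRate
    {
        public static void main(String[] args)
        {
            int users = 10000;
            double activeFraction = 0.0625;   // talking 30 minutes per 8-hour day
            double msgsPerActiveUser = 0.2;   // 1 message every 5 seconds

            double activeUsers = users * activeFraction;        // 625
            double msgRate = activeUsers * msgsPerActiveUser;   // 125/s
            System.out.printf("IM message rate: %.0f/s%n", msgRate);
        }
    }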

Chat Rooms

Users per server:     10,000
Users per room:       10
Active:               6.25%    (talking 30 minutes per 8-hour work day)
Talking users:        625
Per-user talk rate:   0.2/s    (1 message every 5 seconds while active)
Msg rate:             1,250/s  (a moderately busy message rate)

The message rate for chat can thus be significantly higher than for IM, and scalability may be limited either by memory or by response rate.
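
The same estimate with the room fan-out applied; again the figures are just the assumptions from the table:

    // Chat room estimate: the same talk rate as IM, but each message fans out
    // to every user in the room.
    public class ChatRate
    {
        public static void main(String[] args)
        {
            double talkingUsers = 10000 * 0.0625;   // 625 users talking
            double talkRate = talkingUsers * 0.2;   // 125 messages/s sent
            int usersPerRoom = 10;                  // average room size

            double deliveryRate = talkRate * usersPerRoom;   // 1250 deliveries/s
            System.out.printf("chat delivery rate: %.0f/s%n", deliveryRate);
        }
    }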

Stock Ticker

Users per server:                 10,000
Prices per second:                1,000
Prices watched by 50% of users:   0.2%      (a guess!)
Msg rate:                         10,000/s  (0.2% of prices can saturate the server!)
Portfolio update period:          2.5s      (average time between updates sent to a user)
Msg rate:                         4,000/s   (a high but achievable message rate)

The stock ticker example shows that the message rate can be very challenging. A few popular prices that change frequently can rapidly saturate a server’s ability to send messages. Thus a stock ticker cannot be treated as a pure message-passing application, since it is impossible to pass every price change on to all interested subscribers. To handle stock ticker message loads, the message stream must be processed so that prices can be merged and batched while still achieving the end result: a frequently updating portfolio.
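
The saturation and batching figures above can be reproduced with the same kind of sketch; the 0.2% and 2.5s values are the guesses from the table, not measurements:

    // Stock ticker estimate: a few hot prices fan out to half the users, which
    // saturates the server unless portfolio updates are merged and batched.
    public class TickerRate
    {
        public static void main(String[] args)
        {
            int users = 10000;
            int pricesPerSecond = 1000;
            double hotPriceFraction = 0.002;   // 0.2% of prices (a guess)
            double watchers = users * 0.5;     // hot prices watched by 50% of users

            // Naive delivery: every hot price change goes straight to every watcher.
            double naiveRate = pricesPerSecond * hotPriceFraction * watchers;   // 10000/s
            System.out.printf("naive delivery rate:   %.0f/s%n", naiveRate);

            // Batched delivery: each user's portfolio updated at most every 2.5s.
            double updatePeriod = 2.5;
            double batchedRate = users / updatePeriod;                          // 4000/s
            System.out.printf("batched delivery rate: %.0f/s%n", batchedRate);
        }
    }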

Jetty plus cometD has several mechanisms and optimizations available that can be used to support the stock ticker message load.

The cometD/Bayeux protocol allows the server to specify a minimum interval between long polls. This can be a fixed interval, or it can be adjusted according to server load or for individual clients. If, for example, an interval of 2500ms is set, then a client will wait 2.5s after receiving a price update before sending a new long poll asking for more prices. Prices that change during that time can be merged and batched and returned with the next long poll. The interval allows a maximum message rate to be enforced for the server and avoids overload during periods such as stock market meltdowns. The cost is a slight increase in average latency, but this is actually beneficial to the user interface, as prices that change too frequently cannot be read by a user anyway.
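
As a rough illustration of how such an interval might be chosen: the "reconnect" and "interval" fields below are Bayeux advice fields, but the load threshold and the helper method are purely hypothetical, not the cometD configuration API:

    // Illustrative only: build a Bayeux-style advice map whose interval is
    // chosen from the current message rate. The threshold and this helper
    // are hypothetical; only the advice field names come from Bayeux.
    import java.util.HashMap;
    import java.util.Map;

    public class LongPollAdvice
    {
        static Map<String, Object> adviceFor(double messagesPerSecond)
        {
            Map<String, Object> advice = new HashMap<String, Object>();
            advice.put("reconnect", "retry");
            // Under light load let clients long-poll again immediately; under
            // heavy load make them wait 2.5s so prices can be merged and batched.
            advice.put("interval", messagesPerSecond < 2000.0 ? 0 : 2500);
            return advice;
        }

        public static void main(String[] args)
        {
            System.out.println(adviceFor(500));    // light load: interval 0
            System.out.println(adviceFor(5000));   // heavy load: interval 2500
        }
    }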

The cometD server API for Java supports listeners and message queue manipulation so that server-side logic can be applied to merge, batch or replace messages. The MessageListener can be used to act on messages as they are added to a client’s queue, the QueueListener can be used to act when a message queue exceeds a configured size, and the DeliverListener can be used to process a message queue immediately before delivery. In addition, DataFilters can be applied to specific channels to throttle messages for a particular price. These facilities give a stock ticker application fine-grained control over how prices are throttled, merged and/or batched at either the server or the client level.
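
The merging logic that such a listener might apply can be sketched in isolation; the Map-based message and the plain List queue below are stand-ins for the real message types, not the cometD interfaces themselves:

    // Illustrative merging logic: collapse queued price messages so that only
    // the latest price per symbol is delivered. The message and queue types
    // here are stand-ins, not the actual cometD types.
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class PriceMerger
    {
        static List<Map<String, Object>> merge(List<Map<String, Object>> queue)
        {
            // Keep only the last message seen for each symbol, preserving the
            // order in which symbols first appeared in the queue.
            Map<String, Map<String, Object>> latest = new LinkedHashMap<String, Map<String, Object>>();
            for (Map<String, Object> msg : queue)
                latest.put((String) msg.get("symbol"), msg);
            return new ArrayList<Map<String, Object>>(latest.values());
        }

        static Map<String, Object> price(String symbol, double value)
        {
            Map<String, Object> msg = new LinkedHashMap<String, Object>();
            msg.put("symbol", symbol);
            msg.put("price", value);
            return msg;
        }

        public static void main(String[] args)
        {
            List<Map<String, Object>> queue = new ArrayList<Map<String, Object>>();
            queue.add(price("GOOG", 100.0));
            queue.add(price("MSFT", 20.0));
            queue.add(price("GOOG", 101.0));
            System.out.println(merge(queue));   // two messages: latest GOOG and MSFT
        }
    }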

Another Jetty optimization addresses the case where a large number of clients are waiting for the same event. If a price change is delivered to many users, cometD would normally format that price into JSON once, but for each user that JSON would be wrapped in a bayeux message and then framed as an HTTP response. Jetty has a cometD optimization that detects such a common message, formats a reusable HTTP response into an NIO direct buffer, and flushes it asynchronously to all users. Unfortunately this optimization does not apply when two or more prices are queued for a user, but it significantly reduces the work needed for many prices, and tests show that for applicable messages the bottleneck moves to the network rather than the cometD software.
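
The idea behind the optimization can be sketched roughly as follows: format the common message once and write the same bytes to every waiting connection. This is only a conceptual illustration (it ignores partial writes and asynchronous flushing), not Jetty’s actual connector code:

    // Conceptual sketch only: encode one HTTP response for a common message and
    // write the same buffer to every waiting connection. Jetty's real
    // optimization lives in its NIO connectors and handles partial writes.
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.util.List;

    public class SharedResponse
    {
        static void broadcast(String json, List<SocketChannel> clients) throws IOException
        {
            byte[] body = json.getBytes("UTF-8");
            byte[] head = ("HTTP/1.1 200 OK\r\n"
                    + "Content-Type: application/json;charset=UTF-8\r\n"
                    + "Content-Length: " + body.length + "\r\n\r\n").getBytes("ISO-8859-1");

            // Format the response once into a single buffer.
            ByteBuffer shared = ByteBuffer.allocate(head.length + body.length);
            shared.put(head).put(body);
            shared.flip();

            // Each client writes from its own duplicate (own position/limit),
            // but the underlying bytes are shared and never re-encoded.
            for (SocketChannel client : clients)
                client.write(shared.duplicate());
        }
    }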

The rough analysis in this article shows that scaling cometD applications requires careful consideration of both memory footprint and message rates. Chat, while a good average Comet application, is by no means indicative of all the load profiles that Comet is being asked to support. More challenging still, many Comet applications are being asked to handle a mix of traffic with profiles similar to IM, chat and stock tickers all at once. This is another good illustration of the long road from prototype to production for Comet applications, which was the subject of my 2008 Ajax Experience presentation.
