Comet Gazing: Scaling

by Comet Daily | December 10th, 2007

Comet Gazing is a new feature on Comet Daily. We pose a question to our Contributors, and post the answers.

This time, we’ve asked the question: “How does your preferred Comet server handle the issue of scaling to a large number of connections?”

Alessandro Alinone
Lightstreamer

Lightstreamer adopts a staged event-driven architecture. It uses a thread pool of fixed size to handle an arbitrary number of TCP connections. We transitioned from the classical one-thread-per-connection architecture to the new one at an early stage of product development, in order to satisfy a set of scalability requirements imposed by a customer. With this architecture, CPU utilization depends on the inbound and outbound event throughput rather than on the number of concurrent connections.
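To make the idea concrete, here is a minimal sketch in Java (not Lightstreamer's actual code, and the names are illustrative) of a staged event queue drained by a fixed pool of workers: the number of threads, and therefore the CPU cost, tracks event throughput rather than connection count.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of one stage in a staged event-driven design: inbound events
// from any number of connections land on a shared queue and are drained
// by a fixed number of worker threads.
public class EventStage {
    private final BlockingQueue<Runnable> events = new LinkedBlockingQueue<Runnable>();

    public EventStage(int workers) { // pool size is fixed up front
        for (int i = 0; i < workers; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            events.take().run(); // block until an event arrives
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }, "stage-worker-" + i).start();
        }
    }

    public void submit(Runnable event) {
        events.add(event); // called by the I/O layer for each inbound event
    }
}
```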

Michael Carter
Orbited

Orbited was built with two primary concerns: scalability and usability. In the interest of high concurrency, Orbited is built around libevent, which wraps high-performance asynchronous network I/O interfaces such as epoll on Linux and kqueue on OS X. These edge-triggered poll/select replacements scale with the number of active events rather than the number of open sockets, meaning there is negligible overhead for each additional idle connection. The bottleneck is therefore only the total bandwidth usage. This is ideal for most Comet applications, which will typically have many users connected but only a few of them receiving data at any given time. Gmail's chat feature is a prime example of this use case: many users are logged in at a time, but only a few of them are actually chatting.
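As an illustration, here is a rough Java analogue of such an event loop. Orbited itself sits on libevent, but Java's Selector is backed by epoll on Linux and shows the same property: idle connections cost nothing until the kernel reports activity on them.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

// One thread multiplexes every connection; the cost of select() scales
// with active events, not with the number of open sockets.
public class CometLoop {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(8000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select(); // blocks until the kernel reports activity
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    }
                } else if (key.isReadable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(4096);
                    if (ch.read(buf) < 0) {
                        key.cancel();
                        ch.close(); // client went away
                    }
                    // otherwise: parse the request and hold the connection open
                }
            }
        }
    }
}
```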

High concurrency on a single node is just the beginning, though. Orbited is built such that each daemon instance acts as a node in a distributed hash table (DHT). Each Comet connection can be viewed as an entry in this hash table. In its simplest form, the hashing algorithm is based on some unique aspect of the connection, such as the session identifier or user name. The benefit is that the application layer always knows where any given connection should be, so sending a message to any user requires no extra CPU or network overhead: the lookup is simply O(1).
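A hypothetical sketch of that O(1) routing (the names here are illustrative, not Orbited's API): every layer that knows the key computes the same owner node, so no lookup traffic is needed.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative only: hash a stable connection key (e.g. the user name)
// to pick the node that owns the connection.
public class ConnectionRouter {
    private final List<String> nodes; // e.g. "comet1:8000", "comet2:8000", ...

    public ConnectionRouter(List<String> nodes) {
        this.nodes = nodes;
    }

    // Constant-time: any node, or the application layer itself, can
    // compute the owner directly from the key.
    public String nodeFor(String userName) {
        int bucket = (userName.hashCode() & 0x7fffffff) % nodes.size();
        return nodes.get(bucket);
    }

    public static void main(String[] args) {
        ConnectionRouter router =
            new ConnectionRouter(Arrays.asList("comet1:8000", "comet2:8000"));
        System.out.println(router.nodeFor("alice")); // always the same node
    }
}
```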

This scheme is optimal for an application that needs to send specific messages to specific users, such as an instant messaging platform. But many other applications can benefit greatly from a built-in publish/subscribe mechanism. There is exciting academic work on peer-to-peer publish/subscribe systems built on DHT network overlays such as Chord or Pastry; a prime example of this type of network is Scribe. Combining this research with the basic Orbited node yields a highly fault-tolerant system that will always deliver an event message to all members of a group with the lowest network I/O possible. These systems perform an order of magnitude or two better than IRC or XMPP (Jabber) networks, and surpass most enterprise solutions. What's more, this type of network topology still scales well for peer-to-peer messaging, guaranteeing that a message to any given user will be delivered in O(log n) network hops within the cluster, where n is the size of the cluster.

There are an incredible number of applications that will never need to scale past a few hundred users. To cater to this class of application, Orbited provides a straightforward programmer interface that can easily be implemented in any language. But it's not always clear whether a given application will need to scale from a hundred users to a hundred thousand in a matter of weeks. That's why Orbited provides the security of sure-fire scalability paths. The Internet is an unforgiving platform: by definition, an application that can't scale will never be popular. That's why I use Orbited.

Roberto Saccon
ErlyComet

ErlyComet is written in Erlang, a functional language built on a virtual machine with lightweight microthreads. Every connection is represented by a microthread, which in Erlang is simply called a "process". Processes can easily communicate with each other via message passing, even when they are located on different servers.

ErlyComet is still at an early stage of development and no load testing has been performed yet, but I do not expect any issues or bottlenecks related to scaling, because Erlang makes it easy to share state and distribute load across several servers. That does not mean life will be trouble-free from now on: Erlang has not yet seen much use for web development in general, so for everything apart from scaling, the wheel needs to be partially reinvented.

Martin Tyler
Caplin

Caplin Liberator is a very high-performance server. A single machine can handle 30,000 concurrent streaming connections. The limit on the number of messages being sent to clients depends on the hardware used. Recent tests on a quad-core Opteron machine showed a sweet spot of 10,000 clients each receiving 100 messages per second (an aggregate of one million messages per second), with each message consisting of five short fields of data, typical for financial applications.

Performance has always been a primary concern when developing Liberator, and regular benchmarking has meant we could keep on top of any performance degradation introduced by new features.

The application is based on an event loop, or rather an event loop per thread. The event loop implementations are configurable, with poll, select, epoll (Linux) and devpoll (Solaris) all available. In reality, though, no one changes the defaults; the alternatives are really just evolutions of the product. The current setup uses epoll to service client connections and standard poll to service backend data feeds. The threading is relatively fixed and is generally configured based on the number of CPUs in the system.
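Schematically, the loop-per-thread arrangement looks something like the following Java sketch (illustrative only, not Liberator's code), with the thread count fixed from the CPU count:

```java
import java.io.IOException;
import java.nio.channels.Selector;

// One event loop per thread, threads sized to the number of CPUs.
public class LoopPerThread {
    public static void main(String[] args) throws IOException {
        int cpus = Runtime.getRuntime().availableProcessors();
        for (int i = 0; i < cpus; i++) {
            final Selector selector = Selector.open();
            new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            selector.select(); // each loop services its own connections
                            // ... handle ready keys for this loop's clients ...
                            selector.selectedKeys().clear();
                        }
                    } catch (IOException e) {
                        throw new RuntimeException(e);
                    }
                }
            }, "event-loop-" + i).start();
        }
    }
}
```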

Some of the obvious things can be configured, like data batching, which can improve performance for a slight trade-off in latency. Conflation is also configurable: for fast-moving data, only the latest value is sent when a newer update replaces out-of-date information.
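As a sketch of how conflation works (again not Liberator's code, just the general technique): pending updates are keyed by field, so a fast-moving field overwrites its own stale value and only the latest one goes out on the next flush, which also provides batching for free.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Generic conflating queue: an update for a field replaces any pending
// out-of-date value for that field, so only the latest value is sent.
public class ConflatingQueue {
    private final Map<String, String> pending = new LinkedHashMap<String, String>();

    public synchronized void update(String field, String value) {
        pending.put(field, value); // overwrites any stale value for this field
    }

    public synchronized Map<String, String> drain() {
        Map<String, String> batch = new LinkedHashMap<String, String>(pending);
        pending.clear();
        return batch; // one batched write per flush interval
    }
}
```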

Other general principles of software engineering are followed to make sure internal algorithms aren't doing more work than they need to.

Even with a very terse protocol, the biggest limitation is often bandwidth. This is why the design of the protocol is another area that can be critical to some projects.

Joe Walker
DWR

DWR has a tricky job because it sits on top of many different types of web servers, and the Java servlet spec seems to go out of its way to make Comet hard. So for servers where we're just following the servlet spec (and thus can't drop threads while we're waiting, etc.), we've got this fancy way of gradually degrading Comet to polling, and then making the polling slower and slower to avoid thread starvation. We're gradually adding support for asynchronous features on top of the servlet spec in web servers. Jetty has been there for a while, and we've recently added support for Grizzly too. DWR relies on the server it sits in for many of the scaling decisions (like when to reject a connection), so to get the full picture, you'd need to look at a pairing of DWR + some web server.
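Schematically, the degradation strategy might look like the following; the names and thresholds are illustrative, not DWR's actual internals.

```java
// Sketch of graceful degradation: long-poll while capacity remains,
// then fall back to polling, stretching the poll interval as the
// number of waiting threads grows. MAX_COMET_THREADS is an assumed
// threshold, not a real DWR setting.
public class PollBackoff {
    private static final int MAX_COMET_THREADS = 100;

    public static long nextPollDelayMillis(int threadsWaiting) {
        if (threadsWaiting < MAX_COMET_THREADS) {
            return 0; // capacity available: hold the connection open (long-poll)
        }
        // degrade to polling, more slowly as pressure increases
        int overload = threadsWaiting - MAX_COMET_THREADS;
        return Math.min(60000, 1000L + overload * 500L);
    }
}
```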

Greg Wilkins
Cometd + Jetty

My Comet server of choice is a Cometd Bayeux server running on Jetty. The approach to large numbers of connections is twofold: first, Jetty focuses on a small footprint so that the incremental cost per connection is small, and tens of thousands of connections may thus be terminated in a single Jetty server; second, Jetty has asynchronous features that allow a reasonably sized thread pool to service those connections efficiently. The asynchronous features help a little when reading and writing requests and responses, but where they help most is in the ability to suspend a dispatched request with minimal resources allocated. Thus, when a Comet long-poll request needs to wait for an event, it can be suspended cheaply and resumed once a message is available to deliver to the client.
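As a simplified sketch of that pattern using the Jetty 6-era Continuation API (the real Cometd servlet is more involved, and pollEventFor is a hypothetical event source): suspend() parks the request without holding a thread, and resume() causes Jetty to re-dispatch it when a message arrives.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;
import org.mortbay.util.ajax.Continuation;
import org.mortbay.util.ajax.ContinuationSupport;

public class LongPollServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        Object event = pollEventFor(request); // hypothetical event lookup
        if (event == null) {
            Continuation c = ContinuationSupport.getContinuation(request, null);
            // On the first pass this throws Jetty's internal RetryRequest,
            // freeing the thread; the request is re-dispatched on resume()
            // or timeout, and suspend() then returns normally.
            c.suspend(30000);
            event = pollEventFor(request);
        }
        response.getWriter().write(String.valueOf(event));
    }

    private Object pollEventFor(HttpServletRequest request) {
        return null; // look up queued messages for this client (illustrative)
    }
}
```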

The number of connections is still the sizing limit, but more from an operating system and JVM point of view (i.e. how many file descriptors can exist and how quickly they can be polled/selected for events). CPU is also somewhat of a factor when delivering many messages to many clients; however, this has recently been improved by work on the JSON libraries that allows better caching and sharing of serialized objects.

Kris Zyp
Tomcat

For developing Comet capabilities, my server of choice is Tomcat. Tomcat provides a Comet-specific interface for servlets to implement, which lets a servlet receive individual fine-grained events about the progress of a request and handle them without finishing the response and closing the connection. Responses can thus be delayed and connections left open so that messages can be sent asynchronously to clients without requiring a thread for each open connection. Since this interface is just an extension of the basic servlet, it allows existing application logic to incorporate Comet capabilities in the same application server.
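A minimal sketch of that interface, based on Tomcat 6's CometProcessor: the container delivers BEGIN/READ/END/ERROR events, and the servlet decides when to close the response.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import org.apache.catalina.CometEvent;
import org.apache.catalina.CometProcessor;

public class CometChatServlet extends HttpServlet implements CometProcessor {
    public void event(CometEvent event) throws IOException, ServletException {
        switch (event.getEventType()) {
            case BEGIN:
                // leave the response open; remember it so messages can be
                // pushed to this client later without a dedicated thread
                event.setTimeout(30 * 1000);
                break;
            case READ:
                // drain whatever input is available without blocking
                break;
            case END:
            case ERROR:
                event.close(); // connection finished or failed
                break;
        }
    }
}
```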

Perhaps the biggest scalability challenge for Tomcat Comet is horizontal scaling. Some Comet servers, like Orbited, are built specifically to run clustered across multiple servers, but implementing a publish/subscribe Comet mechanism on Tomcat that scales efficiently across multiple machines would require significant effort to route and distribute messages effectively.
