The Evolution of Comet at Caplin

by Martin Tyler, November 30th, 2007

Caplin has been developing Comet and push technology in financial markets for 10 years. This article chronicles the evolution of Caplin’s technology from 1997 to the present day.

In the beginning…

Caplin Systems, formerly Citynet, started life in the financial data business, providing software and systems that let banks and data vendors send their real-time financial data to each other over leased lines. In 1997, drawing on this knowledge of real-time financial data, we worked on a proof of concept to put some of this information on the Internet, or rather to create the software that would let the owners of the data do so.

The first push

This first cut at real-time data on the Internet was based around a Windows C++ server and a Java applet, and displayed 80×25 pages of text (the most common format for financial data back then, and still in use today). The server would connect to a data feed, receive streams of data, and in turn stream the updates out to subscribed clients. The communication with the client was a direct socket connection, which of course is of little use in the majority of the corporate world, where firewalls and proxy servers are commonplace.

The first step on the road to what would now be called Comet came in 1998, when we implemented HTTP tunnelling using a Java servlet engine sitting in front of the C++ server. This meant the C++ server itself did not need to change, since it simply accepted direct socket connections from the servlet. The HTTP tunnelling used full streaming, and a lot of time was spent debugging it; in those days HTTP proxy servers were relatively immature products and many of them did not like streaming data. This led us to also develop a polling solution as a fallback mechanism.

The basic principle of the HTTP tunnelling is similar to a number of other solutions these days. The client makes an HTTP request to the server, establishing a session, and the response is kept open, allowing the server to send messages to the client as it pleases. When the client wants to send a message to the server, it makes a new HTTP request that includes a session identifier, allowing the server to match the request up with the client's session. This allowed for asynchronous two-way messaging.
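
Expressed as a minimal sketch in TypeScript (using today's fetch API rather than the Java applet we actually used, and with illustrative endpoint paths and a made-up session header, not our wire protocol), the pattern looks something like this:

    // Downstream: one long-lived GET whose response is held open; the server
    // writes a message per chunk (real message framing omitted for brevity).
    async function openDownstream(
      baseUrl: string,
      onMessage: (msg: string) => void,
    ): Promise<string> {
      const response = await fetch(`${baseUrl}/stream`);
      // Assumed convention: the server returns a session id in a header.
      const sessionId = response.headers.get("x-session-id") ?? "";
      const reader = response.body!.getReader();
      const decoder = new TextDecoder();
      (async () => {
        for (;;) {
          const { value, done } = await reader.read();
          if (done) break;                                    // server closed the stream
          onMessage(decoder.decode(value, { stream: true })); // push each chunk to the app
        }
      })();
      return sessionId;
    }

    // Upstream: client-to-server messages travel as ordinary short-lived POSTs;
    // the session id lets the server match them to the open downstream response.
    async function sendUpstream(baseUrl: string, sessionId: string, message: string): Promise<void> {
      await fetch(`${baseUrl}/send`, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ session: sessionId, message }),
      });
    }

    // Usage:
    // const session = await openDownstream("https://example.com", msg => console.log(msg));
    // await sendUpstream("https://example.com", session, "SUBSCRIBE /FX/GBPUSD");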

First dip into DHTML

The client-side technology improved around this time too, and we started playing around with DHTML/JavaScript, using the LiveConnect bridge between a Java applet, now used purely for communication, and the HTML page. This formed the basis of our client-side technology for a few years: the Java applet provided a good communication channel to the server, and DHTML gave a flexible environment for creating the display. We offered developers two levels of integration. First, a JavaScript API for subscribing to and receiving data, much like many Comet-style JavaScript APIs these days. Second, a simple markup scheme that allowed fairly non-technical web developers to tag HTML with a few extra attributes and transform a static HTML page into a live page with streaming real-time data flashing away.
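
To give a feel for the two levels, here is a hedged sketch in TypeScript; the names (CaplinConnection, subscribe, the data-subject/data-field attributes) are hypothetical, for illustration only, and are not the actual API or markup of the period:

    type RecordUpdate = Record<string, string>;

    interface Subscription {
      unsubscribe(): void;
    }

    // Level 1: a programmatic API, much like other Comet-style JavaScript APIs.
    class CaplinConnection {
      private listeners = new Map<string, Set<(fields: RecordUpdate) => void>>();

      subscribe(subject: string, onUpdate: (fields: RecordUpdate) => void): Subscription {
        if (!this.listeners.has(subject)) this.listeners.set(subject, new Set());
        this.listeners.get(subject)!.add(onUpdate);
        // ...a real client would also tell the server it wants updates for `subject`...
        return { unsubscribe: () => this.listeners.get(subject)?.delete(onUpdate) };
      }

      // Called by the transport layer whenever an update arrives from the server.
      dispatch(subject: string, fields: RecordUpdate): void {
        this.listeners.get(subject)?.forEach(cb => cb(fields));
      }
    }

    // Level 2: a simple markup layer. A page author writes, for example,
    //   <span data-subject="/FX/GBPUSD" data-field="bid">--</span>
    // and this scanner wires every such element to a subscription.
    function bindLivePage(connection: CaplinConnection): void {
      document.querySelectorAll<HTMLElement>("[data-subject][data-field]").forEach(el => {
        const subject = el.dataset.subject!;
        const field = el.dataset.field!;
        connection.subscribe(subject, fields => {
          if (field in fields) el.textContent = fields[field];   // live cell update
        });
      });
    }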

Performance

As customers started wanting to use this technology for more serious projects, the issue of scalability reared its ugly head. The servlet was replaced with our own Java web server that had the servlet's functionality built in. This gave us some benefit, as it was better optimised for the job. However, it still required a thread per connection, and in those days JVMs coped badly with more than a few hundred threads, so this became a major limitation of our product.

In 2000 we developed Caplin Liberator. This was a ground-up rewrite of our server technology, written in C on Linux and Solaris. It incorporated the basic functionality of the old C++ server and the Java tunnelling server into a single process, and also included web server functionality to handle the tunnelling and serve up the Java applet and JavaScript libraries. Liberator is multi-threaded, but not on a thread-per-connection basis, so we could now handle far higher numbers of concurrent clients: in benchmark tests on a single server, up to 30,000 concurrent clients have been tested with low update rates, and up to 10,000 with very high update rates. I will probably talk about performance and scalability in more depth in another article. Liberator also has a library for back-end communication, whereas the previous server had a handful of data feeds coded directly into it.
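
To illustrate why moving away from a thread per connection matters, here is a minimal event-driven sketch in Node.js/TypeScript terms. It only illustrates the model: every open Comet connection is just an entry in a data structure serviced by one event loop. Liberator itself is a C server, not Node, and this is not its code.

    import { createServer, ServerResponse } from "node:http";

    const clients = new Set<ServerResponse>();   // every open streaming response

    const server = createServer((req, res) => {
      // Each new connection costs an entry in a set, not a dedicated thread.
      res.writeHead(200, { "content-type": "text/plain", "cache-control": "no-cache" });
      clients.add(res);
      req.on("close", () => clients.delete(res));
    });

    // Simulated data feed: broadcast an update to every connected client.
    setInterval(() => {
      const update = `GBPUSD bid=1.97 ask=1.98 time=${Date.now()}\n`;
      clients.forEach(res => res.write(update));
    }, 1000);

    server.listen(8080);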

Java to JavaScript

We also did a lot of work around the turn of the millennium on very rich DHTML front ends, which mimicked financial terminals and combined different data displays within the browser screen. Browser rendering became a limiting factor for some displays, and we resorted to using Java applets for displays that needed large grids, and also for charting. So we had an applet for communication, DHTML gluing it all together and providing the interface, and more applets for some of the data display.

Then came the JVM wars. We battled for a while with the many versions of the Sun JVM, where LiveConnect was broken, fixed and broken again on a monthly basis. We had lists of known working versions, but our customers had other applications that required different JVM versions because of different bugs; it was a fun time! So we dusted off some proof-of-concept work we had previously done on a JavaScript-only implementation of the client, which became the basis for our standard client-side offering today.

Rich and interactive

In 2005 we embarked on a project with a very large bank to develop a new browser-based trading application. By this time Ajax was in full swing and everyone could see the possibilities of browser-based applications, with new browser versions making more things possible and more and more people developing rich, interactive browser applications. The core Caplin technology lends itself perfectly to this requirement, and a very powerful modular browser front end allows users to manage their real-time data and trading in a highly configurable way.

Caplin Trader

Real-time data, especially the fast-moving data seen in financial applications, can stretch browsers to their limits. Caplin has spent a lot of time developing its JavaScript components to handle fast-updating data. With the current popularity of Comet software, it’s important that the existing crop of Ajax libraries does not impose limits in this area. As Comet libraries mature I’m sure there will be increased integration with Ajax widget libraries, and these issues will be ironed out. This is another area I will write about in the future.
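
One common technique in this area (a hedged illustration of the general idea, not necessarily what Caplin's components do) is to conflate updates: keep only the latest value per subject and repaint on a fixed timer rather than on every message.

    type RecordUpdate = Record<string, string>;

    class ConflatingRenderer {
      private pending = new Map<string, RecordUpdate>();

      constructor(
        private render: (subject: string, fields: RecordUpdate) => void,
        intervalMs = 100,                            // repaint at most ten times a second
      ) {
        setInterval(() => this.flush(), intervalMs);
      }

      // Called for every incoming update; later field values overwrite earlier ones.
      onUpdate(subject: string, fields: RecordUpdate): void {
        this.pending.set(subject, { ...this.pending.get(subject), ...fields });
      }

      private flush(): void {
        this.pending.forEach((fields, subject) => this.render(subject, fields));
        this.pending.clear();
      }
    }

    // Usage (updateGridRow is a placeholder for the application's own DOM update):
    // const renderer = new ConflatingRenderer((subject, fields) => updateGridRow(subject, fields));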

2 Responses to “The Evolution of Comet at Caplin”

  1. Braydon Fuller Says:

    Good article. I wasn’t aware of the history. Thanks.

  2. Comet Daily » Blog Archive » Comet Gazing: Memory Usage Says:

    [...] Caplin Liberator is a custom built server written in C and uses asynchronous IO based on poll and epoll. Some Comet servers based on existing web/application servers have an overhead per connection because they were originally designed for transient connections, rather than the long lived connections a Comet server has to deal with. [...]

