[activemq-user] 2 way asynchronous messaging

[activemq-user] 2 way asynchronous messaging

Sanjiv Jivan
On theserverside.com <http://theserverside.com>, James said

"The Ajax client can have a blocking request which can timeout eventually if
there are no messages available - or return a message as soon as it becomes
available.

So the client side is effectively blocked reading from a socket - which is
exactly how JMS clients work too.

i.e. the effect is exactly the same; the Ajax client behaves pretty much
like a JMS client, reading messages asynchronously as they arrive from a
socket.

The server side (Jetty + ActiveMQ) does a good job of sleeping the servlet
until a message becomes available to maximise the use of resources (sockets,
buffers, threads etc).

So while in principle on paper Ajax requires polling; in practice with HTTP
keep-alive and HTTP pipelining, the effect is pretty much exactly the same
as using a JMS client."

This technique is basically similar to pushlets, right? Many web servers (e.g.
WebLogic) do not like long-lived threads and kill them once they exceed a
certain timeout period. Also, if the web application is serving several
hundred clients, there would be the possibility of resource/socket exhaustion
on the web server, since this technique has so many Ajax clients keeping
socket connections open with the server. What are your thoughts on this? This
solution doesn't seem scalable. Have you run any benchmarks with a large
number of clients?

You draw an analogy with JMS clients. In your opinion, how does messaging
middleware compare to web servers with respect to scaling to a large number
of clients?

Thanks,
Sanjiv

Re: [activemq-user] 2 way asynchronous messaging

James Strachan-2
On 22 Nov 2005, at 01:39, Sanjiv Jivan wrote:

> On theserverside.com <http://theserverside.com>, James said
>
> "The Ajax client can have a blocking request which can timeout
> eventually if there are no messages available - or return a message as
> soon as it becomes available.
>
> So the client side is effectively blocked reading from a socket - which
> is exactly how JMS clients work too.
>
> i.e. the effect is exactly the same; the Ajax client behaves pretty
> much like a JMS client, reading messages asynchronously as they arrive
> from a socket.
>
> The server side (Jetty + ActiveMQ) does a good job of sleeping the
> servlet until a message becomes available to maximise the use of
> resources (sockets, buffers, threads etc).
>
> So while in principle on paper Ajax requires polling; in practice with
> HTTP keep-alive and HTTP pipelining, the effect is pretty much exactly
> the same as using a JMS client."
>
> This technique is basically similar to pushlets, right?

Not really. Pushlets use a constant stream of JavaScript commands,
whereas Ajax uses individual HTTP requests over an HTTP keep-alive
socket. Some of the differences are explained here...
http://activemq.org/Ajax
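
To make the Ajax side concrete, here is a minimal sketch of the
long-poll idea as a servlet blocking on a JMS consumer. This is
illustrative only, not the actual ActiveMQ Ajax servlet: the class name,
the 30s timeout and the consumer setup are assumptions.

import java.io.IOException;
import javax.jms.*;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class LongPollServlet extends HttpServlet {
    // Assume the consumer is created from a broker connection in init().
    private MessageConsumer consumer;

    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        try {
            // Block for up to 30s; returns null if no message arrives,
            // otherwise returns as soon as one becomes available.
            Message msg = consumer.receive(30 * 1000);
            if (msg instanceof TextMessage) {
                res.getWriter().write(((TextMessage) msg).getText());
            } else {
                // Timed out: 204 tells the client to simply poll again
                // over the same keep-alive socket.
                res.setStatus(HttpServletResponse.SC_NO_CONTENT);
            }
        } catch (JMSException e) {
            throw new ServletException(e);
        }
    }
}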


> Many web servers (e.g. WebLogic) do not like long-lived threads and
> kill them once they exceed a certain timeout period.

Note that the threads are used by the servlet engine as HTTP requests
come in, so they are all part of the servlet engine's pool.


> Also, if the web application is serving several hundred clients, there
> would be the possibility of resource/socket exhaustion on the web
> server, since this technique has so many Ajax clients keeping socket
> connections open with the server.

Agreed - but then servlet engines have to deal with that issue anyway
(DoS attacks and the like). Pretty much all servlet engines have a limit
on the maximum number of concurrent requests so that they can fail
gracefully and return the right HTTP codes if they are too busy to
service a request.
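
To make that fail-gracefully behaviour concrete, here is a hedged sketch
of the idea - real servlet engines enforce the limit internally via
their bounded thread pools, so the filter below, its name and the limit
of 500 are purely illustrative:

import java.io.IOException;
import java.util.concurrent.Semaphore;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

// Illustrative only: cap concurrent requests and fail gracefully with
// 503 Service Unavailable when the server is too busy, roughly what
// servlet engines do internally with a bounded thread pool.
public class BusyLimitFilter implements Filter {
    private final Semaphore slots = new Semaphore(500); // assumed limit

    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res,
                         FilterChain chain)
            throws IOException, ServletException {
        if (!slots.tryAcquire()) {
            // Too busy: return the right HTTP code rather than queueing
            // the request forever.
            ((HttpServletResponse) res)
                .sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            return;
        }
        try {
            chain.doFilter(req, res);
        } finally {
            slots.release();
        }
    }
}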


> What are your thoughts on this?

Use a good servlet engine with a load balancer in front - then it's just
a matter of figuring out how many boxes you need for your user
population.


> This solution doesn't seem scalable.

It depends on your definition of scalable :). It certainly scales
linearly - as Google have shown.

Each JVM has a certain number of threads & sockets it can deal with
(usually sockets are the limiting factor; most Unix machines can have
their kernel recompiled to get larger numbers of file descriptors per
process). Then you can run more processes per box - but you are going to
have some fixed number of clients per box. Then things scale linearly
with the number of concurrent users you support - just add boxes. It's
worth mentioning that the latest Solaris machines claim to handle over
10,000 sockets per process, so depending on how many concurrent users
your site supports you may need 1-500 machines (e.g. a million
concurrent users at 10,000 sockets per process works out to around 100
processes), which is nothing compared to a typical Google data centre :)

The number of threads is an issue as well, which is why I'd recommend an
NIO-based servlet engine that is capable of sleeping the servlet thread
- like the Jetty & ActiveMQ combo I mentioned (ActiveMQ 4.x also heavily
reuses thread pools to minimise thread usage) - which means that a
relatively small pool of threads can handle many thousands of concurrent
clients, assuming your machine can handle that many sockets :)
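
Here is a rough sketch of that sleeping-servlet pattern using the early
Jetty 6 continuation API (org.mortbay.util.ajax.*). The exact signatures
changed between Jetty versions, and the single-waiter bookkeeping below
is simplified, so treat this as the shape of the pattern rather than a
recipe:

import java.io.IOException;
import java.util.LinkedList;
import javax.servlet.http.*;
import org.mortbay.util.ajax.Continuation;
import org.mortbay.util.ajax.ContinuationSupport;

public class SleepingServlet extends HttpServlet {
    private final LinkedList<String> messages = new LinkedList<String>();
    private Continuation waiting; // at most one parked client, for brevity

    // A broker-side message listener would call this on arrival.
    public synchronized void deliver(String text) {
        messages.add(text);
        if (waiting != null) waiting.resume(); // replay the parked request
    }

    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        synchronized (this) {
            if (messages.isEmpty()) {
                Continuation c = ContinuationSupport.getContinuation(req, this);
                waiting = c;
                // Parks the request and hands the thread back to the pool;
                // Jetty retries the request when resume() is called or the
                // 30s timeout fires (on the timed-out retry this returns
                // immediately instead of suspending again).
                c.suspend(30 * 1000);
            }
            String msg = messages.poll();
            if (msg != null) res.getWriter().write(msg);
            else res.setStatus(HttpServletResponse.SC_NO_CONTENT); // timed out
        }
    }
}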


> Have you run any benchmarks with a large number of clients?

We're about to do so with a customer.


> You draw an analogy with JMS clients. In your opinion, how does
> messaging middleware compare to web servers with respect to scaling to
> a large number of clients?

The Jetty & ActiveMQ combination for Ajax (when the client is using HTTP
keep-alive) is surprisingly similar to a typical JMS client - it tends
to use one socket to communicate with the 'message broker' and the
client side is blocked reading from the socket. The server side can then
use either BIO or NIO depending on requirements, OS and number of
clients. On Linux, for example, BIO is fine for up to 1,000 or so
clients on a modern distro on Intel with hyperthreading, where threads &
context switches are cheap - on Solaris you tend to want NIO for large
numbers of clients.
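
For comparison, here is what the JMS side of that analogy looks like,
using the standard JMS 1.1 API and ActiveMQ's connection factory (the
broker URL and queue name are illustrative):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BlockingConsumer {
    public static void main(String[] args) throws JMSException {
        // One socket to the broker, exactly like the keep-alive Ajax case.
        ConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session =
            connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
            session.createConsumer(session.createQueue("EXAMPLE.QUEUE"));

        // The client thread blocks here until a message arrives - the
        // same effect as the Ajax client's blocking long-poll request.
        Message msg = consumer.receive();
        if (msg instanceof TextMessage) {
            System.out.println(((TextMessage) msg).getText());
        }
        connection.close();
    }
}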

Just out of interest, what kind of client load are you thinking of?

James
-------
http://radio.weblogs.com/0112098/