JMS to STOMP transformation causes throughput drop in STOMP consumers

xabhi
Hi,
I am trying to benchmark throughput for my Node.js (STOMP) consumer. The producer is a Java application sending JMS text and map messages.

With text messages, the Node.js consumer handles 10K msgs/sec without any pending messages.

But when I send map messages and the Node.js consumer subscribes with the header 'transformation: jms-map-json', throughput drops to 0.5K msgs/sec.

I cannot figure out where this bottleneck is coming from. The broker has messages in the pending queue, and I see unacknowledged messages in JConsole.

What surprises me is that if the broker has pending messages in the queue, why is it not sending them to the consumer?
Why is the consumer not sending acknowledgements at the same rate in both the TEXT and MAP message cases? Why is it slower in the MAP message case?

Why can the Node consumer consume text messages faster than map messages, if ultimately both are sent in text form from ActiveMQ?
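For what it's worth, one visible difference on the consumer side is that a transformed map body has to be JSON-parsed on every message, while a text body can be used as-is. Here is only a sketch of that per-message cost (the JSON shape is illustrative; the actual jms-map-json layout depends on the broker's XStream/Jettison output), and it likely does not account for the whole drop, since the broker also pays a serialization cost per map message:

```javascript
// Sketch of per-message work on the consumer side. The map body below is
// illustrative -- the real jms-map-json layout depends on the broker's
// XStream/Jettison output.
const textBody = 'tick ACME 10.5';
const mapBody = '{"symbol": "ACME", "price": 10.5}';

function handleText(body) {
  return body;              // a delivered text body can be used as-is
}

function handleMap(body) {
  return JSON.parse(body);  // every transformed message pays a parse
}

console.log(handleText(textBody));
console.log(handleMap(mapBody).price);
```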

Does anyone from the ActiveMQ dev team know about this behavior? Any help will be appreciated.

Thanks,
Abhishek

Re: JMS to STOMP transformation causes throughput drop in STOMP consumers

xabhi
Hi,

I am seeing the same behavior with a Python STOMP consumer as well - throughput for TEXT messages is 10K/s, and for MAP messages with transformation it is 200 msgs/s.

Is there a way to improve this for JMS map messages? Some broker setting? Maybe plugging in a different serialization library instead of XStream?
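One workaround worth measuring (my assumption, not something confirmed in this thread) is to skip the broker-side transformation entirely: serialize the map to JSON in the producer and send it as a plain text message, so the broker forwards the body untouched and the consumer parses the same JSON it would have received from jms-map-json. The producer here is Java, but the idea is language-independent; a minimal sketch in JavaScript (encodeMap/decodeMap are illustrative names, not a library API):

```javascript
// Sketch: the producer serializes the map once, sends it as a TEXT message,
// and the consumer parses it -- no per-message XStream work on the broker.
// encodeMap/decodeMap are illustrative names, not a library API.
function encodeMap(map) {
  return JSON.stringify(map);   // producer side, before sending
}

function decodeMap(body) {
  return JSON.parse(body);      // consumer side, on receipt
}

const wire = encodeMap({ symbol: 'ACME', price: 10.5 });
const received = decodeMap(wire);
console.log(received.symbol, received.price);
```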

Thanks,
Abhishek

Re: JMS to STOMP transformation causes throughput drop in STOMP consumers

Tim Bain
Which process is the one spinning the CPU: the broker, or your client?

If it's the broker, you're in luck: the ActiveMQ broker is a Java process,
which means that all of the standard Java profiling tools (which will tell
you where a Java application is spending its time) are at your disposal and
can tell you why serialization is slow.  JVisualVM's profiler would
probably give you a pretty good starting point for answering your question,
and it ships with the JDK, so you don't even have to install any
special tools.

Tim

On Wed, Apr 6, 2016 at 2:17 AM, xabhi <[hidden email]> wrote:

> Is there a way to improve this for JMS map messages? Some broker setting?
> Maybe plugging in a different serialization library instead of XStream?