JMS to STOMP transformation causes throughput drop in STOMP consumers
I am trying to benchmark throughput for my Node.js (STOMP) consumer. The producer is a Java application sending JMS text and map messages.
With text messages, the Node.js consumer handles 10K msgs/sec without any pending messages.
But when I send map messages and the Node.js consumer subscribes with the header 'transformation: jms-map-json', throughput drops to 0.5K msgs/sec.
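For reference, the subscription looks roughly like this. This is a sketch, not my exact benchmark code: the `stompit` client, queue name, and broker address are assumptions, and any STOMP client that lets you set custom SUBSCRIBE headers behaves the same way.

```javascript
// The 'transformation' SUBSCRIBE header asks ActiveMQ to serialize each
// JMS MapMessage body to JSON text before it sends the STOMP MESSAGE frame.
function subscribeHeaders(queue) {
  return {
    destination: queue,
    ack: 'client-individual',       // acknowledge messages one at a time
    transformation: 'jms-map-json'  // broker-side MapMessage -> JSON text
  };
}

// Usage with the stompit client (illustrative, not executed here):
//   const stompit = require('stompit');
//   stompit.connect({ host: 'localhost', port: 61613 }, (err, client) => {
//     client.subscribe(subscribeHeaders('/queue/bench'), (err, message) => {
//       message.readString('utf-8', (err, body) => {
//         const map = JSON.parse(body); // body arrives as JSON text
//         client.ack(message);          // ack after processing
//       });
//     });
//   });
```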
I cannot work out where this bottleneck comes from. The broker has messages in its pending queue, and I see unacknowledged messages in JConsole.
What surprises me is: if the broker has pending messages in the queue, why is it not sending them to the consumer?
Why does the consumer not send acknowledgements at the same rate in the text and map message cases? Why is it slower with map messages?
Why can the Node.js consumer consume text messages faster than map messages if ultimately both are sent in TEXT format from ActiveMQ?
Does anyone on the ActiveMQ dev team know about this behavior? Any help will be appreciated.
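In case it helps anyone reproducing this: a minimal consumer-side rate counter like the one below (names are illustrative, not from my actual benchmark) is enough to compare the receive rate in the two cases, independently of what the broker reports.

```javascript
// Minimal throughput counter: call tick() once per received message,
// read rate() to get messages per second since the counter was created.
// The injectable clock is only there to make the helper easy to test.
function makeRateCounter(now = () => Date.now()) {
  const start = now();
  let count = 0;
  return {
    tick() { count += 1; },
    rate() {
      const elapsedSec = (now() - start) / 1000;
      return elapsedSec > 0 ? count / elapsedSec : 0;
    }
  };
}
```

Calling `counter.tick()` inside the message callback and logging `counter.rate()` every few seconds makes the 10K vs 0.5K msgs/sec gap visible on the consumer side directly.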
Re: JMS to STOMP transformation causes throughput drop in STOMP consumers
Which process is the one spinning the CPU: the broker, or your client?
If it's the broker, you're in luck: the ActiveMQ broker is a Java process,
which means that all of the standard Java profiling tools (which will tell
you where a Java application is spending its time) are at your disposal and
can tell you why serialization is slow. JVisualVM's profiler would
probably give you a pretty good starting point for answering your question,
and it's deployed as part of the JDK, so you don't even have to install any