Network Connector too slow when receive high rate persistent message


Network Connector too slow when receive high rate persistent message

francong2000
I have set up an ActiveMQ 5.14.3 cluster (2 brokers) connected by a network connector.  I tried to send 4,000 persistent messages to a queue at a rate of 300 msg/sec.  The producer can send at that rate (300 msg/sec) when using "useAsyncSend=true"; however, the consumer can only receive messages at about 40 msg/sec.  What do I need to tune, and which configuration do I need to change?

Remark: results of additional tests
             With only 1 broker, both producer and consumer reach 300 msg/sec (persistent).
             With 2 brokers and non-persistent messages, both producer and consumer reach 300 msg/sec.

Configuration of Broker 2
================
        <networkConnectors>
                <networkConnector  name="LinkToBroker1"  duplex="true" networkTTL="3" uri="static:(tcp://POC1:61616)?wireFormat.maxInactivityDuration=0" prefetchSize="10000" userName="system" password="manager" >
                        <dynamicallyIncludedDestinations>
                        </dynamicallyIncludedDestinations>
                </networkConnector>
        </networkConnectors>
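
(For reference, a minimal sketch of what the producer side described above could look like, with useAsyncSend enabled and persistent delivery. The credentials, queue name, and payload are placeholders rather than values taken from the actual client code.)

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistentProducer {
    public static void main(String[] args) throws Exception {
        // Broker URL taken from the networkConnector above; credentials and queue name are placeholders.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://POC1:61616");
        factory.setUseAsyncSend(true);   // same effect as jms.useAsyncSend=true on the broker URL

        Connection connection = factory.createConnection("system", "manager");
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("TEST.QUEUE");
        MessageProducer producer = session.createProducer(queue);
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);   // persistent messages

        // Send the 4,000 messages; throttling to 300 msg/sec is omitted for brevity.
        for (int i = 0; i < 4000; i++) {
            producer.send(session.createTextMessage("message " + i));
        }

        session.close();
        connection.close();
    }
}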

Re: Network Connector too slow when receive high rate persistent message

Tim Bain
Sounds like the persistence store might be your bottleneck; how do you have
it configured?


Re: Network Connector too slow when receive high rate persistent message

francong2000
In reply to this post by francong2000
I am using the default KahaDB store, running on an HDD under Linux.

        <persistenceAdapter>
                <kahaDB directory="${activemq.data}/kahadb"/>
        </persistenceAdapter>

Re: Network Connector too slow when receive high rate persistent message

Adam Whitney
I'm having a similar issue with 5.13.3. We have 3 brokers configured as a grid network (each connected to the other 2). For the clients, we are using JmsTemplate to produce messages from 2 hosts, and DefaultMessageListenerContainer to consume from another 2 hosts. Each consumer client is a single Tomcat instance with 50 consumer threads on a single connection to the broker. This means that one of the 3 brokers is always left with no "real" consumer clients, only "network" consumers.

When we send 570 messages per second, we see messages start to queue up on one of the brokers. Invariably, the broker whose queue builds up is the one that has at least one producer and no "real" client consumers (i.e. only "network" consumers).

If we stop 2 of the brokers and just have a single broker with 2 producers and 2 consumers, we don't see any queueing on that broker; even at 570 tps the single broker and "real" client consumers can keep up with the messages. This seems to indicate that it is the network consumers that are not able to keep up.

FWIW, my brokers are using the "pure Java" version of LevelDB:
    <levelDB directory="${activemq.data}/leveldb"/>

And here's our network connector config:
            <networkConnector name="mqpQueueConnector"
                              uri="static:(tcp://AMQHOST_01:61616,tcp://AMQHOST_02:61616,tcp://AMQHOST_03:61616)?maxReconnectDelay=5000&amp;useExponentialBackOff=false"
                              messageTTL="-1" conduitSubscriptions="false" duplex="false" consumerPriorityBase="0" decreaseNetworkConsumerPriority="false">

Re: Network Connector too slow when receive high rate persistent message

Tim Bain
Do either of your broker logs show Producer Flow Control kicking in?

And for both of you, is performance normal if you have just a single broker
(or all clients on the same broker, which is the same thing)?


Re: Network Connector too slow when receive high rate persistent message

Tim Bain
Also, what is network latency like between brokers and from client to
broker?


Re: Network Connector too slow when receive high rate persistent message

Adam Whitney
Tim,

We just ran another test with all 3 brokers, 2 producer hosts, and 4 consumer hosts, and made sure that every broker had at least one consumer host directly connected. We saw the queues on broker1 build up immediately, even during the "ramp up" portion of the test. We're going to try the same test again, but with conduitSubscriptions="true" and decreaseNetworkConsumerPriority="true" (both are currently false).

What should I look for in the logs to see if flow control is kicking in? I don't think it is, because the producers keep on sending; it's the queues on the brokers that are filling up, but nowhere near the limit (we allow 100 GB for persistent messages). Producers send persistent messages inside transactions, and consumers also use transactions. And we have useAsyncSend=true.

System usage is configured as follows:

            <systemUsage>
                <memoryUsage>
                    <memoryUsage percentOfJvmHeap="70" />
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="100 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="50 gb"/>
                </tempUsage>
            </systemUsage>

ping statistics from broker1 to broker2:

16 packets transmitted, 16 received, 0% packet loss, time 15413ms
rtt min/avg/max/mdev = 0.119/0.175/0.232/0.026 ms

from broker1 to broker3:

16 packets transmitted, 16 received, 0% packet loss, time 15260ms
rtt min/avg/max/mdev = 0.136/0.194/0.271/0.046 ms

from broker2 to broker3:

15 packets transmitted, 15 received, 0% packet loss, time 14507ms
rtt min/avg/max/mdev = 0.174/0.213/0.261/0.029 ms

from broker1 to producer1:

15 packets transmitted, 15 received, 0% packet loss, time 14381ms
rtt min/avg/max/mdev = 0.727/0.819/0.907/0.052 ms

from broker1 to producer2:

15 packets transmitted, 15 received, 0% packet loss, time 14403ms
rtt min/avg/max/mdev = 0.829/0.914/0.987/0.048 ms

from broker1 to consumer1:

15 packets transmitted, 15 received, 0% packet loss, time 14354ms
rtt min/avg/max/mdev = 0.758/0.882/0.963/0.051 ms

from broker1 to consumer2:

15 packets transmitted, 15 received, 0% packet loss, time 14802ms
rtt min/avg/max/mdev = 0.807/0.861/0.916/0.042 ms
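
(As a side note, one way to check whether any of the limits above are actually being approached is to read the broker MBean over JMX; when producer flow control does engage, the broker log normally records that the usage limit was reached on the destination. A rough sketch, assuming remote JMX is enabled and activemq-broker is on the classpath; the host, port, and broker name are placeholders.)

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.broker.jmx.BrokerViewMBean;

public class UsageCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder JMX host/port and broker name; substitute the real values.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker1:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            ObjectName brokerName = new ObjectName("org.apache.activemq:type=Broker,brokerName=localhost");
            BrokerViewMBean broker = JMX.newMBeanProxy(conn, brokerName, BrokerViewMBean.class);
            // Values near 100% of the configured limits are what would trigger producer flow control.
            System.out.println("Memory usage: " + broker.getMemoryPercentUsage() + "%");
            System.out.println("Store usage:  " + broker.getStorePercentUsage() + "%");
            System.out.println("Temp usage:   " + broker.getTempPercentUsage() + "%");
        } finally {
            connector.close();
        }
    }
}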

Re: Network Connector too slow when receive high rate persistent message

Adam Whitney
Oops, sorry about the last post: the test with the 3 brokers, 2 producers, and 4 consumers had a bad configuration on the consumer side (something outside the scope of ActiveMQ). We're running that same test again (with conduitSubscriptions="false" and decreaseNetworkConsumerPriority="false") and I'll post the results back here once it's done.

Re: Network Connector too slow when receive high rate persistent message

Adam Whitney
OK, we re-ran the test with 3 brokers, 2 producer hosts, and 4 consumer hosts (each consumer host has 50 consumers on a single connection), this time with the proper configs on the consumer side. The system behaved a little better, but still started queueing on the broker at about 500 tps. Since the system doesn't queue on the broker at all with a single broker, this is more evidence that the network consumers can't handle the load.

We'll try again with conduitSubscriptions="true" and decreaseNetworkConsumerPriority="true" and I'll post back here when it's done.

Re: Network Connector too slow when receive high rate persistent message

Tim Bain
Although I strongly recommend you (and everyone) use both of those settings
unless there's a clear reason not to (and I didn't even consider the
possibility of you not having them on), I expect that conduitSubscriptions
will make the lion's share of the difference. Without it, brokers must pass
many copies of each message (and must write each one to the persistence
store, which is the bottleneck), so it's no surprise that performance is
significantly worse than expected.

Let us know what you see with both those settings on.
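
(For reference, the same two settings applied programmatically, in case anyone runs the bridge from an embedded broker rather than from activemq.xml; a rough sketch, with the peer URI, broker name, and transport address as placeholders.)

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class EmbeddedBrokerWithBridge {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("broker2");
        broker.addConnector("tcp://0.0.0.0:61616");

        // Placeholder peer URI; point this at the other broker.
        NetworkConnector bridge = broker.addNetworkConnector("static:(tcp://POC1:61616)");
        bridge.setName("LinkToBroker1");
        bridge.setDuplex(true);
        bridge.setNetworkTTL(3);
        // Collapse the many remote consumers into a single network subscription,
        // and prefer local consumers over the network bridge when both exist.
        bridge.setConduitSubscriptions(true);
        bridge.setDecreaseNetworkConsumerPriority(true);

        broker.start();
        broker.waitUntilStarted();
    }
}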


Re: Network Connector too slow when receive high rate persistent message

Adam Whitney
We ran a couple tests with conduitSubscriptions="true" and
decreaseNetworkConsumerPriority="true".

In the first test we had 3 brokers, 2 producers, and 4 consumer hosts, and made sure that each broker had at least 1 consumer host connected, and everything kept up just fine.

We ran again with 3 brokers, 2 producers, and 2 consumer hosts (so 1 broker only had network consumers attached) ... and still everything kept up fine.

Thanks for your help :)

Re: Network Connector too slow when receive high rate persistent message

Tim Bain
That's great news, and though I was happy to help, you solved your own
problem.

francong2000, whose original thread we've hijacked, do the two settings
Adam identified also fix your issue, or do we need to keep digging into
your problem?

Tim


Re: Network Connector too slow when receive high rate persistent message

francong2000
I tried your suggestion as shown below; however, performance is still slow.  Fortunately, I found other ways to improve it: set journalDiskSyncStrategy="periodic" and clear the <ActiveMQ>/data directory before starting ActiveMQ.  Performance can then reach 1,000 msg/sec with a 2 KB message size.  A second option is to use an SSD with AIO, in which case journalDiskSyncStrategy="always" can be kept.

Your suggestion
        <networkConnectors>
                <networkConnector  name="LinkToBroker1"  duplex="true" networkTTL="3" uri="static:(tcp://POC1:61616)?wireFormat.maxInactivityDuration=0" prefetchSize="10000" userName="system" password="manager" decreaseNetworkConsumerPriority="true" conduitSubscriptions="true" >
                        <dynamicallyIncludedDestinations>
                        </dynamicallyIncludedDestinations>
                </networkConnector>
        </networkConnectors>


My suggestion
        <persistenceAdapter>
                <kahaDB directory="${activemq.data}/kahadb" journalDiskSyncStrategy="periodic" journalMaxFileLength="100mb" />
        </persistenceAdapter>