Network bridge throughput capped at default Socket buffer size


Network bridge throughput capped at default Socket buffer size

Leung Wang Hei
Hi all,

There seems to be an invisible barrier on the socket buffer for the ActiveMQ network bridge.  We expected that increasing the TCP socket buffer size would give higher throughput, but it does not.  Here are the test details:

- 2 brokers (A, B) bridged together over a WLAN with 140ms network latency
- A single duplex network connector set up at broker B, statically including one topic
- 10 producers, each sending 10K messages; all are AMQObjectMessage
- Socket buffer size set as a URL argument on the network connector at broker B and on the transport connector at broker A
- Wireshark used to capture link traffic

The Wireshark capture shows that throughput is always capped at around 3.74 Mbit/sec, the same maximum as with the default 64K socket buffer.  The config details are attached below.

I don't expect this is a bug in ActiveMQ; am I missing something?  Any advice would be greatly appreciated.
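For reference, a minimal sketch of what one of the test producers might look like (the broker URL, topic name and payload size below are assumptions based on the description above, not the actual test code); running ten instances of this would match the described load:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TestProducer {
    public static void main(String[] args) throws Exception {
        // Assumed broker URL; the real test presumably connects to broker B.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://brokerB:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createTopic("test"));

        byte[] payload = new byte[1024];            // assumed message size
        for (int i = 0; i < 10000; i++) {           // 10K messages per producer
            ObjectMessage message = session.createObjectMessage(payload);
            producer.send(message);
        }
        connection.close();
    }
}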


Broker A
<transportConnectors>
             <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?transport.socketBufferSize=10485760"/>
             <transportConnector name="openwirelog" uri="tcp://0.0.0.0:61617"/>
             <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
         </transportConnectors>

Broker B
 <destinationPolicy>
             <policyMap>
                 <policyEntries>

                     <policyEntry topic=">" producerFlowControl="false" advisoryForDiscardingMessages="true" advisoryForSlowConsumers="true" >
                         <pendingSubscriberPolicy>
                             <vmCursor />
                         </pendingSubscriberPolicy>
                     </policyEntry>
                </policyEntries>
             </policyMap>
         </destinationPolicy>


<networkConnector name="nc1-hk" uri="static://(tcp://brokerA:61616?socketBufferSize=10485760)" duplex="true" networkTTL="2">
             <staticallyIncludedDestinations>
                 <topic physicalName="test"/>
             </staticallyIncludedDestinations>
</networkConnector>


Linux traffic control
tc qdisc add dev ens32 root handle 1: htb default 12
tc class add dev ens32 parent 1: classid 1:1 htb rate 20Mbit ceil 20Mbit
tc qdisc add dev ens32 parent 1:1 handle 20: netem latency 140ms
tc filter add dev ens32 protocol ip parent 1:0 prio 1 u32 match ip dst brokerB_Ip flowid 1:1


Best regards,
Leung Wang Hei

Re: Network bridge throughput capped at default Socket buffer size

ceposta
What is the traffic across the WAN for an app other than ActiveMQ? Or what
is the speed of the connection when done NOT over the WAN?

WANs prioritize traffic; I wonder if you're hitting a bottleneck in the WAN?


--
Christian Posta
twitter: @christianposta
http://www.christianposta.com/blog
http://fabric8.io

Re: Network bridge throughput capped at default Socket buffer size

Peter Hicks-2
In reply to this post by Leung Wang Hei
Hello

It's not a bug in ActiveMQ, it's the result of the bandwidth-delay product: multiply the bandwidth of your link by the round-trip time to get the amount of data that has to be in flight to keep the link busy.

See http://en.wikipedia.org/wiki/TCP_tuning for more details - you need
to increase the TCP window size at both broker A and broker B to
something larger so you can have more data "on the wire".


Peter
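A quick worked check with the numbers from this thread: a 64 KB window drained once per 140 ms round trip caps a single TCP session at roughly 3.74 Mbit/s, which matches the observed ceiling, while keeping the 20 Mbit/s shaped link full at 140 ms needs about 350 KB in flight. A minimal sketch of the arithmetic:

public class BdpCheck {
    public static void main(String[] args) {
        double rttSeconds = 0.140;           // 140 ms round-trip time
        double windowBytes = 64 * 1024;      // default 64K socket buffer

        // Maximum throughput of one TCP session limited to window/RTT.
        double maxBitsPerSec = windowBytes * 8 / rttSeconds;
        System.out.printf("64K window at 140ms RTT -> %.2f Mbit/s%n",
                maxBitsPerSec / 1e6);        // ~3.74 Mbit/s

        // Window needed to keep a 20 Mbit/s link full at 140 ms RTT.
        double linkBitsPerSec = 20e6;
        double neededWindowBytes = linkBitsPerSec * rttSeconds / 8;
        System.out.printf("window needed for 20 Mbit/s -> %.0f KB%n",
                neededWindowBytes / 1024);   // ~342 KB
    }
}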


Re: Network bridge throughput capped at default Socket buffer size

Tim Bain
Peter, I'm pretty sure that's why he's trying to adjust the socket buffer
size, and he's saying that the changes he's making aren't having the
desired effect.

Leung, I have a vague memory of having to prefix the URI option (I'm pretty sure the prefix was "transport.", as shown in the example at the bottom of http://activemq.apache.org/tcp-transport-reference.html).  Give it a try and see if that changes the behavior you're seeing.

Tim

Re: Network bridge throughput capped at default Socket buffer size

Leung Wang Hei
In reply to this post by ceposta
Testing with iperf3 shows that a max of 20 Mbit/sec is achievable between the test boxes.

I wonder if some parameter in the two brokers has been missed.

Re: Network bridge throughput capped at default Socket buffer size

Leung Wang Hei
In reply to this post by Tim Bain
Tim,

I have used "transport.socketBufferSize=x" in transport connector broker A and only "?socketBufferSize=x" in broker B network connector.  When x=-1, warning is raised in MQ log:

[WARN ] org.apache.activemq.network.DiscoveryNetworkConnector - Could not start network bridge between: vm://activemq.auhkmq01?async=false&network=true and: tcp://brokerA:61616?socketBufferSize=-1 due to: java.lang.IllegalArgumentException: invalid receive size

If I prefix the broker B option with "transport.", the parameter is considered invalid by MQ:

[WARN ] org.apache.activemq.network.DiscoveryNetworkConnector - Could not connect to remote URI: tcp://brokerA:61616?transport.socketBufferSize=-1: Invalid connect parameters: {transport.socketBufferSize=-1}

It looks like my initial config is correct.

Re: Network bridge throughput capped at default Socket buffer size

Tim Bain
You're right, I didn't catch that in your original message, sorry.

What did you find when you investigated my suggestions about the TCP
congestion window and your OS's max socket buffer size setting?

Also, have you confirmed that a non-ActiveMQ TCP socket connection can get
better throughput?  Do a sanity check and make sure this isn't your WAN
throttling you before you sink too much time into tweaking ActiveMQ.

Tim

Re: Network bridge throughput capped at default Socket buffer size

Leung Wang Hei
Hi Tim,

Here is the OS config:
Broker A
$ cat /proc/sys/net/ipv4/tcp_rmem
4096    87380   16777216
$ cat /proc/sys/net/ipv4/tcp_wmem
4096    87380   16777216

Broker B
$ cat /proc/sys/net/ipv4/tcp_rmem
4096    87380   16777216
$ cat /proc/sys/net/ipv4/tcp_wmem
4096    87380   16777216

As noted in my second-to-last comment, iperf3 bandwidth testing shows a max bandwidth of 20 Mbit/sec.  This matches the expectation given the configured traffic control.
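One thing these sysctls do not show: tcp_rmem/tcp_wmem bound the kernel's autotuned buffers, while a buffer requested explicitly via setsockopt (which is what socketBufferSize does) is, as far as I know, clamped by net.core.rmem_max / net.core.wmem_max instead. A minimal, hypothetical sketch to see what buffer size a JVM on these hosts is actually granted when it asks for the 10 MB used in the connector URIs:

import java.net.Socket;

public class BufferCheck {
    public static void main(String[] args) throws Exception {
        int requested = 10 * 1024 * 1024;   // 10 MB, as in the connector URIs

        try (Socket socket = new Socket()) {
            // Request the buffers before connecting, as the TCP transport would.
            socket.setReceiveBufferSize(requested);
            socket.setSendBufferSize(requested);

            // The kernel may silently clamp the request; compare these values
            // against the 10 MB asked for (Linux typically reports roughly
            // double the effective request, so a much smaller number here
            // suggests a clamp).
            System.out.println("receive buffer granted: " + socket.getReceiveBufferSize());
            System.out.println("send buffer granted:    " + socket.getSendBufferSize());
        }
    }
}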

Re: Network bridge throughput capped at default Socket buffer size

Tim Bain
OK, and what are you seeing happen with the TCP congestion window of the
broker-to-broker connection?  Is it opening fully?

Re: Network bridge throughput capped at default Socket buffer size

Peter Hicks-2
In reply to this post by Leung Wang Hei


On 22/05/15 07:11, Leung Wang Hei wrote:
> As noted in my second-to-last comment, iperf3 bandwidth testing shows a max
> bandwidth of 20 Mbit/sec.  This matches the expectation given the configured
> traffic control.
Just want to check - is this the maximum bandwidth over a single TCP
session, or are you using multiple TCP sessions, or even UDP?


Peter


Re: Network bridge throughput capped at default Socket buffer size

Leung Wang Hei

With a 1M socket buffer, TCP throughput is still capped at 3.75 Mbit/sec.
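For what it's worth, the arithmetic suggests the larger buffer is not taking effect end to end: a 1 MB window actually in force at 140 ms RTT would allow roughly 60 Mbit/s, well above the 20 Mbit/s shaping ceiling, while the observed 3.75 Mbit/s at 140 ms corresponds to only about 64 KB in flight, i.e. the default. A small sketch of that calculation, using the numbers reported in this thread:

public class EffectiveWindow {
    public static void main(String[] args) {
        double rttSeconds = 0.140;

        // Throughput a 1 MB window would allow if it were really in effect.
        double oneMbWindowBits = 1024 * 1024 * 8;
        System.out.printf("1 MB window -> %.1f Mbit/s%n",
                oneMbWindowBits / rttSeconds / 1e6);          // ~59.9 Mbit/s

        // Window size implied by the observed 3.75 Mbit/s cap.
        double observedBitsPerSec = 3.75e6;
        System.out.printf("implied window -> %.0f KB%n",
                observedBitsPerSec * rttSeconds / 8 / 1024);  // ~64 KB
    }
}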

Re: Network bridge throughput capped at default Socket buffer size

Leung Wang Hei
I am using a single networkConnector between the brokers, so I suppose that is one TCP session?