Three node Artemis cluster

Three node Artemis cluster

schalmers
I'm having some issues with my 3 node Artemis cluster (v2.6.0).

Here is the snippet from my broker.xml on node1:

      <connectors>
         <connector name="netty-connector">tcp://0.0.0.0:61616</connector>
         <connector name="cluster-connector1">tcp://10.0.201.97:61616</connector>
         <connector name="cluster-connector2">tcp://10.0.202.250:61616</connector>
      </connectors>

      <acceptors>
         <acceptor name="netty-acceptor">tcp://0.0.0.0:61616</acceptor>
      </acceptors>

      <cluster-user>someuser</cluster-user>
      <cluster-password>somepass</cluster-password>

      <cluster-connections>
         <cluster-connection name="artemis-cluster">
            <connector-ref>netty-connector</connector-ref>
            <retry-interval>1000</retry-interval>
            <use-duplicate-detection>true</use-duplicate-detection>
            <message-load-balancing>STRICT</message-load-balancing>
            <max-hops>1</max-hops>
            <static-connectors>
                <connector-ref>cluster-connector1</connector-ref>
                <connector-ref>cluster-connector2</connector-ref>
            </static-connectors>
         </cluster-connection>
      </cluster-connections>

      <address-settings>
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>

The broker.xml is the same on node2 and node3 of the cluster, except that I change the cluster-connector URLs in the connectors and static-connectors sections to point at the other two nodes: node1/node3 when I'm on node2, and node1/node2 when I'm on node3.

When I start the broker on each node, the cluster starts up fine, and when I log into the web console on each node I can see that there are 3 nodes in the cluster.

However, if I create an address and a queue on node1, that address and queue are not created on either of the other two nodes. Is that working as expected, or is something not working correctly?

My assumption from reading the documentation is that all queues should be available across all 3 nodes of the cluster.

Re: Three node Artemis cluster

jbertram
This looks wrong to me:

  <connector name="netty-connector">tcp://0.0.0.0:61616</connector>

This is the connector referenced in your "artemis-cluster" cluster-connection, which means you're telling other nodes in the cluster that in order to connect to this node they need to use 0.0.0.0:61616. Of course, a remote node using 0.0.0.0:61616 will connect to itself rather than to the node broadcasting that information. Connectors should almost always use a *real* IP address or hostname. Try changing that and see how it goes.
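For example, on node1 the advertised connector would use the node's own routable address. (The 10.0.200.50 address below is a placeholder I've made up for illustration; substitute node1's real IP or hostname.)

```xml
<connectors>
   <!-- 10.0.200.50 is a hypothetical address for node1; use the host's real IP or hostname -->
   <connector name="netty-connector">tcp://10.0.200.50:61616</connector>
   <connector name="cluster-connector1">tcp://10.0.201.97:61616</connector>
   <connector name="cluster-connector2">tcp://10.0.202.250:61616</connector>
</connectors>
```

The acceptor can keep listening on 0.0.0.0; it's only the connector, which is broadcast to other cluster members, that must be reachable from the remote nodes.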


Justin

On Mon, Jun 11, 2018 at 1:24 AM, schalmers <
[hidden email]> wrote:

> I'm having some issues with my 3 node Artemis cluster (v2.6.0).
>
> Here is the snippet from my broker.xml on node1:
>
>       <ha-policy>
>         <live-only/>
>       </ha-policy>
>

Re: Three node Artemis cluster

schalmers
Changes made:

- Installed and created new brokers using Artemis v2.6.1 from https://github.com/apache/activemq-artemis/archive/2.6.1.tar.gz, then ran mvn package and used the zip from apache-distribution/target
- Modified broker.xml to fix the cluster connector (example broker.xml attached below as well)
- Changed <message-load-balancing> to ON_DEMAND
- Added <redistribution-delay>0</redistribution-delay> to <address-setting match="#">
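With the redistribution-delay added, the relevant address-setting block looks roughly like this (a sketch combining it with the settings from my original post):

```xml
<address-settings>
   <address-setting match="#">
      <dead-letter-address>DLQ</dead-letter-address>
      <expiry-address>ExpiryQueue</expiry-address>
      <redelivery-delay>0</redelivery-delay>
      <!-- added: redistribute queued messages immediately to nodes that have consumers -->
      <redistribution-delay>0</redistribution-delay>
      <max-size-bytes>-1</max-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
      <auto-create-queues>true</auto-create-queues>
      <auto-create-addresses>true</auto-create-addresses>
   </address-setting>
</address-settings>
```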

Observations:

- Started all brokers; the cluster came up fine with 3 nodes
- All brokers report the following error, but only while the cluster is running; if I run a broker in isolation, it doesn't appear: ERROR [org.apache.activemq.artemis.core.server] AMQ224088: Timeout (10 seconds) while handshaking has occurred.
- Connected to node3 and created 4 queues, then sent 199 messages to the update-terrain-leaf-bricks queue and 1 message to the update-terrain-request queue (node3.png screenshot attached).
- On node1, only one of the 4 queues from node3 was created: update-terrain-response (node1.png screenshot attached).
- On node2, no queues were created (node2.png screenshot attached).
- No messages are available on any node apart from node3, where they were originally sent.

Broker.xml:
broker.xml <http://activemq.2283324.n4.nabble.com/file/t378947/broker.xml>

Node3 screenshot:
node3.PNG <http://activemq.2283324.n4.nabble.com/file/t378947/node3.PNG>

Node2 screenshot:
node2.PNG <http://activemq.2283324.n4.nabble.com/file/t378947/node2.PNG>

Node1 screenshot:
node1.PNG <http://activemq.2283324.n4.nabble.com/file/t378947/node1.PNG>



--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html

Re: Three node Artemis cluster

jbertram
What happens if you actually connect a consumer to node 1 or 2 and try to
consume those messages?  I'm not 100% certain what should be visible on the
management console of a node in the cluster when a queue is created on a
different node in the cluster, but I do know with your configuration
messages should be redistributed.  I would focus on tests that are actually
*functional* vs. just looking at the management console as we discussed in
IRC last night.  Of course, I'm assuming here that you actually want to
consume those messages and not just look at the console.

Regarding the AMQ224088 error message: do you have any additional details you could share? When is that message logged relative to the brokers starting and the producer connecting? Are any other clients on the network attempting to connect to those ports on the broker? That message is typically only logged when something connects to the port where the broker is listening but doesn't actually complete a protocol handshake. Security tools that scan ports can sometimes trigger it. You might want to work with your networking team to trace the incoming connections to the port where the broker is listening.


Justin

On Mon, Jun 11, 2018 at 11:12 PM, schalmers <
[hidden email]> wrote:


Re: Three node Artemis cluster

schalmers
As per our IRC conversation, I've raised a JIRA:
https://issues.apache.org/jira/projects/ARTEMIS/issues/ARTEMIS-1928?filter=allopenissues

Regarding the timeout errors: they are caused by the AWS ELB connecting to the nodes to check that they are alive, so please disregard them.


